GB2351426A - Method and apparatus for the generation of computer graphic representations of individuals - Google Patents


Info

Publication number
GB2351426A
GB2351426A (application GB9914823A)
Authority
GB
United Kingdom
Prior art keywords
individual
data
model
computer
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9914823A
Other versions
GB9914823D0 (en)
Inventor
Stephen James Crampton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB9914823A priority Critical patent/GB2351426A/en
Publication of GB9914823D0 publication Critical patent/GB9914823D0/en
Priority to PCT/GB2000/002458 priority patent/WO2001001354A1/en
Priority to JP2001506503A priority patent/JP2003503776A/en
Priority to AU55535/00A priority patent/AU5553500A/en
Priority to EP00940624A priority patent/EP1194899A1/en
Publication of GB2351426A publication Critical patent/GB2351426A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/57 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player
    • A63F2300/572 Communication between players during game play of non game information, e.g. e-mail, chat, file transfer, streaming of audio and streaming of video

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

Apparatus for generating computer models of individuals is provided, comprising a booth (1) that is connected to a server (2) via the Internet (3). Image data of an individual is captured using the booth (1) and a computer model corresponding to the individual is then generated by comparing the captured image data with a stored generic model. Data representative of a generated model is then transmitted to the server (2), where it is stored. Stored data can then be retrieved via the Internet using a personal computer (4) having application software stored therein. The application software on the personal computer (4) can then utilise the data to create graphic representations of an individual in any of a number of poses.

Description

METHOD AND APPARATUS FOR THE GENERATION OF COMPUTER GRAPHIC REPRESENTATIONS OF INDIVIDUALS

The present invention concerns methods and apparatus for generating computer graphical representations of individuals. In particular, the present invention concerns the generation of texture rendered wire mesh computer models of individuals which can be used to generate representations of an individual in any of a number of different poses.
Computer software applications frequently require users to have representations of themselves shown on a screen. The animation of a computer graphic representation is then used to illustrate the actions of that individual. At present such graphical representations are fixed by the application in use and all individuals either use the same representation or one selected from a very limited range of possible representations. However, the representations available often have little resemblance to the individuals that they are intended to represent.
It is possible to generate accurate three-dimensional models of individuals in a single pose using scanning apparatus such as the PERSONA scanner manufactured by 3D Scanners Limited and the whole body scanner developed by Cyberware Lab Inc.
However, the three-dimensional computer models generated by such scanning apparatus are not particularly suitable for creating representations of an individual in any other pose. Since the scanning apparatus is only arranged to obtain data indicative of the surface of an individual in a single pose, no data is obtained about the internal structure of an individual. It is therefore not possible to generate representations of an individual in another pose directly from such data.
The present invention aims to provide means by which computer models of individuals can be generated which may be used to generate computer graphical representations of individuals in different poses.
Embodiments of the present invention provide means by which animated sequences of computer graphical images can be generated which are indicative of the movement of an individual between a number of different poses.
Embodiments of the present invention also enable computer graphical representations of individuals within application software to more closely resemble the individual users than is possible at present.
In accordance with one aspect of the present invention there is provided an apparatus for generating computer models of individuals for generating representations of those individuals in any of a number of different poses comprising:
means for generating a plurality of images of a model of a person in a plurality of poses in accordance with animation instructions;
scanning means for obtaining scan data of an individual, representative of the external appearance of said individual in a pose;
determination means for determining the pose adopted by an individual scanned by said scanning means; and
model generation means for generating a model on the basis of said comparison.
Embodiments of the present invention will now be described with reference to the accompanying drawings, in which:
Figure 1 is a block diagram of apparatus for creating and animating three-dimensional models representing individuals in accordance with a first embodiment of the present invention;
Figure 2 is a perspective view of the exterior of a booth of Figure 1;
Figure 3 is a view of the booth of Figure 2 as seen from the direction indicated by arrow A in Figure 2;
Figures 4, 5, 6 and 7 are schematic diagrams illustrating a user adopting four poses within the light box of a booth;
Figure 8 is a cross-sectional view of the mounting of the digital cameras;
Figure 9 is a plan view of the booth of Figure 2;
Figure 10 is a cross-section of the light box of the booth along the line C-C' shown in Figure 9;
Figure 11 is a diagrammatic representation of the view of the light box of the booth as seen from the perspective of the digital cameras;
Figure 12 is a view of the interior of the booth as seen from the perspective of an individual standing in the light box of the booth;
Figure 13 is a block diagram of the control system of the booth of Figure 2;
Figure 14 is a block diagram of the contents of the memory of the control system of Figure 13;
Figure 15 is a flow diagram of the processing of the self-test program;
Figure 16 is a flow diagram illustrating the steps involved in the generation of a computer graphical representation of an individual in accordance with the first embodiment of the present invention;
Figure 17 is a schematic diagram of a card generated by the booth of Figure 2;
Figure 18 is a flow diagram of the steps involved in obtaining image data for generating a computer graphical representation using the booth of Figure 2;
Figure 19 is a graph illustrating the timing of the flash and the opening and closing of the shutters of the digital cameras of the booth of Figure 2;
Figure 20 is an illustration of an example of image data captured by a camera of the booth using the flash;
Figure 21 is an illustration of an example of image data captured by a camera of the booth without using the flash;
Figure 22 is a flow diagram illustrating an overview of the steps involved in generating an avatar using image data captured by the cameras of the booth;
Figure 23 is a flow diagram illustrating the steps involved in the alignment of images captured by the digital cameras of the booth;
Figures 24A and 24B are a flow diagram illustrating the steps of determining a mapping function between a generic avatar geometry and a calculated geometry of an individual;
Figures 25A, 25B and 25C are diagrammatic illustrations used to explain the removal of extraneous rear foot data from the outline of an individual in profile;
Figure 26 is a representation of an outline generated from a silhouette corresponding to the example of Figure 20, on which a number of landmark points are indicated;
Figure 27 is a flow diagram illustrating the processing for identifying facial features from image data;
Figure 28 is a flow diagram illustrating the iterative process for improving the accuracy of initial facial feature estimates;
Figure 29 is an illustration of areas of images used to identify the positions of facial feature points;
Figures 30, 31 and 32 are illustrations of examples of screen displays used for the confirmation and editing of facial features;
Figures 33 and 34 are an illustrative example of head tilt correction;
Figure 35 is a block diagram of the generic model avatar program stored in memory;
Figure 36 is a representation of the data structure for a generic polygon wire mesh for a generic model avatar;
Figure 37 is an illustrative representation of a polygonal wire mesh of a generic model avatar;
Figure 38 is a pair of illustrations showing the deformation of a part of a wire mesh to account for the stretching of skin about a joint;
Figure 39 is a diagram illustrating the data structure of an avatar transmitted from the booth of Figure 2 to a server;
Figure 40 is a block diagram of the structure of a data storage system of the server of Figure 1;
Figure 41 is a block diagram of the memory of a user station of Figure 1 having animation software stored therein;
Figure 42 is a flow diagram illustrating the steps involved in the generation of an animated sequence of computer graphical representations of an individual on a personal computer using data generated by a booth which has been stored on a server;
Figure 43 is a block diagram of a second embodiment of the present invention;
Figure 44 is a block diagram of a third embodiment of the present invention;
Figure 45 is a block diagram of a fourth embodiment of the present invention;
Figure 46 is a plan view of a booth adapted for the generation of an avatar representative of an individual in a wheelchair;
Figure 47 is a representation of a generic wire mesh model for an avatar of an individual in a wheelchair;
Figure 48 is a cross-section of a booth of a further embodiment of the present invention;
Figure 49 is a diagrammatic representation of the interior of a light box in accordance with the booth of Figure 48;
Figure 50 is a cross-section of a booth of a further embodiment of the present invention; and
Figure 51 is a flow diagram illustrating the processing of data in the booth of Figure 50.
Figure 1 is a block diagram of apparatus for creating and animating three-dimensional models of individuals to generate sequences of images representing individuals in motion in accordance with a first embodiment of the present invention.
In this embodiment, the apparatus comprises a plurality of booths 1 which are connected to a server 2 via the Internet 3. The server is also connected to a plurality of personal computers 4 via the Internet 3.
As will be described in detail below, in accordance with the present invention image data of the external appearance of individuals is captured using the booths 1. The captured image data is then processed to generate three-dimensional model representations of those individuals which can be used to generate images representative of the individuals in various different poses or stances. This three-dimensional model, which can be used to generate images of a person in a number of different poses (hereinafter referred to as 'an avatar'), is then transmitted from the booth 1 to the server 2 via the Internet 3 and is then stored in the server 2.
When a user then wishes to use an avatar, the data stored in the server representative of an avatar is downloaded from the server 2 via the Internet 3 into a personal computer 4 having application software stored therein (not shown). The application software then causes the generation of a series of animation instructions which are utilised together with the avatar data to cause the generation of representations of an individual in a plurality of poses or stances. By displaying consecutive sequences of images of an individual in a number of different poses the impression of an animated computer model of an individual is then created.
Figure 2 is a schematic diagram of the external appearance of a booth 1. The booth 1 comprises an exterior wall 10 that defines the perimeter of the booth 1. On top of the exterior wall 10 is a roof 12 enclosing the top of the booth 1. The exterior wall 10 and the roof 12 are made from a light rigid fire resistant material such as pressed aluminium, fibreglass or MDF.
In this embodiment of the present invention the exterior wall 10 and roof 12 are arranged to enclose an area of approximately 2.9 m long by 1.7 m wide by 2.5 m high.
In part of the exterior wall 10 in a central section of one of the longer sides of the booth there is provided a doorway 14 which allows access by a user into the interior of the booth 1. The doorway 14 is slightly raised from the ground and a ramp 16 is provided which allows access to the doorway 14. The doorway 14 is covered by a curtain 18 that is arranged to minimise the amount of light entering the interior of the booth 1 via the doorway 14.
Mounted on the exterior wall 10 are a pair of external display screens 20 and pairs of external speakers 22. The external display screen 20 and the speakers 22 are used to attract users into the booth 1, provide instructions on how to use the booth, and also to display avatars generated using the booth 1, as will be explained in detail later.
Figure 3 is a diagrammatic view of the booth 1 seen from the direction indicated by arrow A in Figure 2. The booth 1 comprises three sections 30, 32, 34. The first section 30, at one end of the booth, houses a pair of digital cameras 36, 38, such as a Fuji DS330 or a Kodak DCS560, a control system 39 and an arrangement of mirrors 40,42. The digital cameras 36, 38 are arranged with their optical axis 43 directed towards the arrangement of mirrors 40, 42 so that the digital cameras 36, 38 obtain substantially identical images of the interior of the booth 1 remote from the digital cameras 36,38, with the optical axis 43 aligned with the centre of the booth 1, as will be described in detail below. In the far section of the booth 1 remote from the digital cameras 36,38 there is provided a light box 44 for lighting from above, below and behind an individual 46 of whom image data is to be obtained using the digital cameras 36,38. The central section 32 of the booth 1 between the first section 30 and the far section 34 provides a floor space which enables a user to access the light box 44 after entering the booth via the ramp 16 and doorway 14. The central section 32 of the booth 1 also acts to separate the digital cameras 36,38 from the light box 44 so that the digital cameras 36,38 can obtain image data of the entirety of an individual 46 standing in the light box 44 using a standard lens, avoiding the significant optical distortion of an image which would result from the use of a wide angle lens.
The control system 39 of the booth 1 is arranged to direct an individual 46 who enters the booth, by means of oral and visual instructions, to adopt four predefined poses standing in an identified position within the light box 44 at the far end 34 of the booth. When an individual has adopted a required pose within the light box 44 the control system 39 then causes the digital cameras 36, 38 to obtain images of the individual 46 in that pose. The control system 39 then uses the obtained image data for the different poses as the basis for generating an avatar for that individual, as will be described in detail later.
Figures 4, 5, 6 and 7 illustrate an individual posing within the light box 44 in the four required poses in accordance with this embodiment of the present invention.
In the first pose, as is shown in Figure 4, an individual 46 is required to pose with his feet apart facing the digital cameras 36,38, arms outstretched with the backs of his hands facing the camera and the fingers spread out.
In the second pose as is shown in Figure 5 the individual 46 is required to pose facing the side of the light box with his feet apart and his arms against his sides with the palms of the hands turned inwards.
In the third pose, as is shown in Figure 6, the user is required to pose facing away from the camera with his feet apart, arms outstretched with the palms of his hands facing the camera and the fingers spread out.
In the fourth and final pose, as is shown in Figure 7, the user 46 is required to pose with his feet apart facing the opposite wall to the wall faced in Figure 5, with his arms against his sides and the palms of the hands against his legs.
As will be described in detail later the user 46 is instructed to adopt these specific poses at identified positions within the light box 44 by means of oral and visual instructions and the use of indicator lights in order that image data captured by the digital cameras 36, 38 can be processed by the control system to generate an avatar.
Figure 8 is a cross-sectional view of the mounting of the digital cameras 36,38 and the arrangement of mirrors 40,42. The digital cameras 36,38 and the arrangement of mirrors 40,42 are mounted within a light proof box 47 open at the end closest to the light box 44.
The open end of the light proof box 47 is covered by a glass plate 48 treated with an anti-reflection coating so as to render the glass non-reflective on the inside of the box 47.
The light proof box 47 acts to prevent the digital cameras 36,38 from obtaining image data from anything other than the view through the glass plate 48. By being within an enclosed box 47,48 the digital cameras 36,38 and mirror arrangement 40,42 are prevented from being contaminated by dust from outside the box 47,48.
The digital cameras 36,38 are mounted on opposite walls of the light proof box 47 adjacent to the glass plate 48, held in place by adjustable fixings 49 which hold the digital cameras 36,38 in place and enable the orientation of the digital cameras to be adjusted until they are both orientated with their optical axes directed towards the middle of the box 47,48. The mirror arrangement 40,42, comprising a front silvered mirror 40 and a partially front silvered mirror 42 such as a half front silvered mirror, is mounted in the middle of the box 47,48 with the front silvered mirror 40 arranged at an angle of 45° to the optical axis of the first digital camera 36 and the partially front silvered mirror 42 arranged between the front silvered mirror 40 and the glass plate 48 at an angle of 45° to the optical axis of the second digital camera 38. By providing a front silvered mirror 40 and a partially front silvered mirror 42, the images which the digital cameras 36,38 obtain are obtained by reflection only, without any of the double reflections that would be obtained using a rear silvered mirror, so that the distortion of the image is minimised.
The arrangement of mirrors is such that light passing from the glass plate 48 is reflected from the partially silvered mirror 42 onto the second digital camera 38, and light which is not reflected by the partially silvered mirror 42 passes through the partially silvered mirror 42 and on to the front silvered mirror 40, by which it is reflected on to the first digital camera 36.
In this way, by providing a mirror arrangement 40,42, the digital cameras 36,38 are presented with a substantially identical view of the light coming in through the glass plate 48. The mirror arrangement 40,42 within the box 47 also acts to maximise the effective optical distance of the cameras 36,38 from the light box 44 in the far section 34 of the booth, thus reducing the required size of the central section 32 of the booth.
Figure 9 is a plan view of the interior of a booth 1. The digital cameras 36, 38 and the control system 39 of the booth are provided behind a door 50 that is attached by a hinge 52 which enables the door 50 to be opened to allow access to the digital cameras 36, 38 and the control system 39. A window 54 is provided in the portion of the door 50 that is in front of the mirror arrangement 40,42 so that the cameras' view of the far end 34 of the booth 1 remote from the mirrors 40,42 is not obscured. The digital cameras 36,38 and mirror arrangement 40,42 are arranged so as to aim the cameras to view the light box 44 along the optical axis 43 passing through the middle of the booth 1.
Four flash lights 56 are provided inside the booth 1 in the first section 30 of the booth 1, one pair of the flash lights being provided adjacent to the exterior wall 10 next to the doorway 14, with another pair being provided adjacent to the exterior wall 10 opposite to the doorway 14. Both pairs of flash lights 56 are positioned behind frosted panels 57 to diffuse the light generated by the flash lights. The flash lights 56 are arranged in these positions either side of the cameras 36,38 so that when they are operated the flash lights 56 uniformly illuminate the interior of the booth 1 and in particular the front of the light box 44. In order to ensure that the light from the flash lights 56 is as uniform as possible, the interior of the booth 1 and the curtain 18 are darkly coloured so as to reduce the amount of light which is reflected from the interior of the booth 1 and the curtain 18.
The light box 44 at the far end 34 of the booth remote from the cameras 36,38 is defined by a floor 60, a curved wall 62 and roof (not shown in Figure 8).
Provided behind the wall 62, beneath the floor 60 and within the roof are a number of fluorescent lights 70 arranged to illuminate the wall 62, floor 60 and roof of the light box. The wall 62 of the light box 44 comprises a frosted translucent flexible material such as a 6 mm panel of opaline. The plurality of fluorescent lights 70 are placed 75 mm behind the wall 62 and arranged so as to illuminate the wall 62 with a uniform light. The translucent wall 62 and the fluorescent lights 70 are such as to ensure that the uniform light within the light box 44 is between about 200 and 400 lux. A forward portion 72 of the wall 62, closest to the cameras, is covered by a black material to prevent the forward portion of the light box from being illuminated by the fluorescent lights 70 behind the wall 62.
By providing a light box 44 having a curved back wall 62 the problems of uniformly illuminating the corners of a square box are thereby avoided and the surface area which is required to be lit is minimised thus reducing the required number of fluorescent lights 70. The number and power of the lights 70 is also minimised by covering the interior of the exterior wall 10, behind the wall 62 with a reflective material 73 to direct light from the fluorescent lights 70 into the light box 44.
Provided within the wall 62 adjacent to the blacked out portions and in the middle of the wall 62 directly opposite to the cameras 36,38 are three strips 74 of LEDs for directing an individual to adopt a correct pose as will be described in detail later.
Provided on the surface of the floor 60 are a first 76 and a second 78 pair of foot position indicators. The foot position indicators 76,78 comprise markers and foot lights to highlight the position in which an individual's feet should be placed when adopting a requested pose in the light box 44.
The first pair of foot position indicators 76 are provided towards the front of the light box 44, either side of a line passing through the middle of the booth 1 and at a distance of about 460 mm apart. The first pair of foot position indicators 76 are provided at this position so that when an individual places his feet on the indicators the forward portion of the individual closest to the camera protrudes in front of the blacked out portion 72 of the wall 62 and so is not illuminated by the lights 70. The first pair of foot position indicators are provided either side of the centre of the booth 1 so that an image of the individual obtained by the cameras 36,38 is centred on the individual. By providing foot indicators 76 about 460 mm apart, a means is provided to instruct a user to adopt a pose as illustrated in Figures 4 and 6, in which the user's legs are sufficiently separated so that a defined crotch is apparent in an image of the user taken by the cameras 36,38.
The second pair of foot indicators 78 are provided towards the front of the light box 44, along the line passing through the centre of the booth 1, with one foot indicator closer to the cameras 36,38 and the other about 400 mm further away from the cameras 36,38. In this way, when an individual places his feet on the indicators 78, the forward portion of the individual closer to the cameras 36,38 protrudes in front of the blacked out portion 72 of the wall 62 and is not illuminated by the lights 70, and an image taken of the individual by the cameras 36,38 is centred on the individual. By ensuring that the pair of foot indicators 78 are about 400 mm apart, a means is provided to ensure that in the images of an individual standing on the foot indicators 78 taken by the cameras 36,38 the individual's foot further from the cameras 36,38 appears separated from the individual's foot closer to the camera.
Figure 10 is a schematic diagram of a cross section of the light box 44 as taken along the line C - C' in Figure 9. As is shown in Figure 10 the floor 60 of the light box 44 comprises a first layer 80 and a second layer 82. Beneath the floor 60 there are provided a number of fluorescent lights 70 for illuminating the surface of the floor 60. The first and second layers 80, 82 are supported above the fluorescent lights 70 by a number of spacers 84, made from a transparent material.
It is necessary that the floor 60 is sufficiently strong so as not to deform when an individual stands on the floor 60, and it is also required to be uniformly illuminated by the fluorescent lights 70. In this embodiment this is achieved by having a first layer 80 that is made from a strong clear material that will not deform under the weight of an individual and a second layer 82 comprising a diffusion layer for diffusing the light from the fluorescent lights 70. A suitable material for the first layer 80 would be an 8 mm thick sheet of polycarbonate plastic. A suitable material for the second layer 82 would be a 4 mm thick layer of opaline. By providing a clear first layer the maximum amount of diffused light from the fluorescent lights 70 beneath the second layer 82 is able to reach the surface of the floor 60 of the light box 44. By providing a strong first layer 80 above the diffusing second layer, the choice of material for the second layer is increased as it is no longer necessary to provide a strong material for diffusing the light from the fluorescent lights 70. By providing spacers 84 made from a transparent material, substantially uniform illumination of the floor 60 by the fluorescent lights 70 can be achieved.
In order to minimise the reflections of the feet of a user standing within the booth on the surface of the floor 60, it is necessary to render the upper surface of the floor non-reflective. In this embodiment this is achieved by treating the upper surface of the first layer 80 with a frosting process across its entire surface except where the foot indicators 76 and 78 are provided.
Provided on the surface of the second layer 82 of the floor 60 are transfers 86 marking the outline of feet to indicate where a user's feet are to be placed. As these transfers 86 are provided on the second layer 82 they are protected from wear by the first layer 80.
Inside the outline made by the transfers 86, holes are provided in the second layer 82 through which indicator lights 88 protrude. In this way, by providing transfers indicating where a user should place their feet and by providing indicator lights which can be lit to indicate which of the pairs of foot indicators 76,78 should be used, a user can be instructed exactly where to place their feet when an image is to be obtained by the digital cameras 36,38, as will be described later. By leaving the surface of the first layer 80 clear at the positions of the foot indicators 76,78, the transfers 86 and lights 88 remain clearly visible to users, whilst being protected against damage by the first layer 80.
The roof of the light box 88 comprises a further light diffusing layer 90, such as a 4 mm thick opaline sheet, behind which are provided further fluorescent lights 70 for illuminating the light box 44. The spaces beneath the floor 60, behind the back wall 62 and within the roof 88 are arranged so as to communicate with each other to form a single chamber. An air duct 92 is provided at the rear of the booth to introduce air into the space beneath the floor 60 of the light box 44, and a number of fans 94 are provided in the upper surface of the roof to extract air from behind the light diffusing layer 90. The effect of heating by the lights 70 within the light box 44 is then minimised by drawing air from outside the booth via the duct 92, through beneath the floor 60, up behind the back wall 62, into the roof cavity and out of the booth by the fans 94.
Figure 11 is a schematic diagram of the view of the light box 44 as seen by the digital cameras 36,38 in the absence of an individual standing within the light box 44. From the viewpoint of the digital cameras 36, 38 the rear wall 62 provides a uniformly bright backdrop for obtaining image data of an individual standing on the foot position indicators 76,78. The front portion 72 of the back wall 62, at the front of the light box closest to the cameras, appears dark as no light from the fluorescent lights 70 passes through the blacked out portions 72 of the wall 62. Adjacent to the blacked out portions 72 of the back wall 62 are strips 74 of LEDs for informing a user where to look when posing. A further strip 74 of LEDs is provided in the centre of the back wall 62.
Also provided on the surface of the back wall 62 are four alignment markers 95-98 placed on the wall 62 in known positions relative to the foot positioning markers 76,78 in a plane level to where an individual will stand.
By providing alignment markers 95-98 at known positions in the same plane level with where an individual will stand, scaling information for an image of an individual standing on the foot markers 76,78 is provided, since the ratio of the apparent height of an individual in an image to the individual's actual height will be proportional to the ratio of the apparent distances between the representations of the alignment markers 95-98 to the known actual distances. This scaling information is then used in generating an avatar for that individual, as described later.
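As a rough sketch of this scaling relationship, the following Python fragment estimates an individual's real height from the pixel height of their image and the known separation of two alignment markers lying in the individual's plane. The function and argument names are illustrative only; the patent does not give a formula or an implementation.

```python
import math


def estimate_height_mm(pixel_height, marker_px_a, marker_px_b, marker_separation_mm):
    """Estimate real height from pixel height, using two alignment markers whose
    real-world separation is known and which lie in the same plane as the subject
    (names and units are illustrative assumptions)."""
    # Apparent distance between the marker representations in the image (pixels).
    marker_px_dist = math.dist(marker_px_a, marker_px_b)
    # Millimetres per pixel in the plane level with the individual.
    mm_per_pixel = marker_separation_mm / marker_px_dist
    return pixel_height * mm_per_pixel
```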
When the booth 1 is initially set up, images of the interior of the booth are obtained using the digital cameras 36,38. In order to ensure that the images obtained by the digital cameras 36,38 are as far as possible aligned with each other, the images of the booth 1 are then used to determine how the adjustable fixings 49 on which the cameras 36,38 are mounted should be altered to change the orientation of the cameras until the images obtained by the two cameras 36,38 are as far as possible aligned.
Figure 12 is a view of the interior of the booth as seen from an individual standing within the light box 44 looking towards the digital cameras 36,38. As seen from this view, the interior of the booth is flanked at either side by the two pairs of flash lights 56 for illuminating the interior of the booth 1. In the centre of the booth between the flash lights 56 there is the window 54 through which the digital cameras 36,38 obtain their image. To the left of the window 54 there are mounted a pair of speakers 100, a touch screen display 102, a credit card reader 104, a bank note reader 106 and a card printer 108. The speakers 100 and touch screen display 102 are arranged to instruct a user orally and visually on how to use the booth. The credit card reader 104 and the bank note reader 106 are arranged to receive payment for use of the booth. The card printer 108 is arranged to print out cards displaying a password for retrieval of a generated avatar, as will be described in detail later.
To the right of the window 54 as seen from the light box 44 there is an alcove 110 having a hook 112 for hanging up a coat and a shelf 114 for placing objects which have been brought into the booth. By providing an alcove 110 in a portion of the booth which is not visible to the digital cameras 36, 38 through the window 54, a means is provided for storing excess clothing, bags and the like which may be brought into the booth 1 by an individual prior to posing, so that they can pose in the light box 44 unencumbered, without these items affecting the images obtained by the digital cameras 36,38 and without these items blocking the cameras' field of view.
Figure 13 is a block diagram of the control system 39 of the booth 1. The control system 39 comprises a computer 120 having a memory 125. The computer is connected to the Internet and the telephone network (not shown in Figure 13) via an interface 126. The computer 120 is also connected to the card reader 104, the bank note reader 106, the touch screen display 102, the internal speakers 100, the external screen 20, the external speakers 22, the fluorescent lights 70, the fans 94, the first and second digital cameras 36,38, the foot light indicators 88, the strips of LEDs 74 and the card printer 108. The computer 120 is also indirectly connected to the flash lights 56 via the first digital camera 36.
The computer 120 is arranged to receive signals from the card reader 104 and the bank note reader 106 indicating whether payment has been made using either the card reader 104 or the bank note reader 106, and to receive instructions input using the touch screen display 102. The computer 120 processes the received signals in accordance with programs in the memory 125 and then instructs a user on how to use the booth and the poses to be adopted via the internal speaker 100, the touch screen display 102, and by illuminating the appropriate foot lights 88 and wall lights 74, as will be explained in detail below.
The computer 120 is also arranged to coordinate the switching on of the fluorescent lights 70 and fans 94 when the booth 1 is initially turned on, and also to coordinate the obtaining of image data by the cameras 36,38 and the subsequent processing of that image data, as will also be explained in detail below.
Figure 14 is a block diagram of the content of the memory 125 of the computer system 120. Stored within the memory 125 of the computer system are a user instruction program 130, which is a program arranged to cause images to be displayed on the touch screen display 102 and the external screen 20 and sound to be transmitted through the internal and external speakers 100,22; a booth control program 132, which is a program for coordinating the output of the touch screen display 102, the internal speaker 100, the wall lights 74 and foot lights 88 to ensure that a user adopts the correct pose within the light box 44 prior to activating the digital cameras 36, 38 and the flash 56 to obtain image data; an avatar construction program 134, which is a program for generating an avatar of an individual from received image data; a generic avatar program 135, which is a program for generating a three-dimensional polygonal wire mesh model of a computer representation of a generic person which can be used to generate representations of that generic person in different stances in accordance with animation instructions; a self-test program 136, which is a program for switching on the fluorescent lights 70 and fans 94 when the booth is initially activated and for testing whether the booth is operating properly; and an animation program 137 for adapting and displaying an animation sequence using a newly created avatar. A portion 138 of the memory 125 is also left available for the storage of data.
The processing of the computer system 120 and the steps involved in the generation of an avatar using the booth 1 will now be described with reference to Figures 15 to 36.
Figure 15 is a flow diagram of the processing of the computer 120 in accordance with the self-test program 136 stored in memory 125. When the booth is initially switched on this causes (s1) the computer 120 to turn on the lights 70 and the fans 94. The computer 120 then performs a self-test routine (s2) to determine whether the computer 120 is working properly. If (s3) an error is detected an error message (s4) is displayed on the screens 20, 102 and the system halts.
If no error is detected the computer 120 then tests (s5) the Internet connection via the interface 126. If (s6) an error is detected a warning (s4) is displayed on the screens 20,102, and the system halts.
If the testing of the Internet connection via the interface 126 is successful, the computer 120 then tests whether the digital cameras 36,38, the credit card reader 104 and the bank note reader 106 are all performing correctly. If (s8) any errors are detected a warning message is displayed (s9) on the screens 20,102 and also sent to the central server 2 via the interface 126 and the Internet 3 so that a distributor of booths can be informed of the failure of the apparatus. The self-test program 136 then halts.
When the self-test program 136 has completed its processing, the booth control program 132 is then invoked.
The processing of the booth control program 132 will now be described with reference to Figure 16, which is a flow diagram of the processing of the booth control program 132. When the booth control program 132 is initially invoked this causes (s10) the booth control program 132 to invoke the user instructions program 130 to activate the external screens 20 and external speakers 22 to broadcast an external presentation to attract users to the booth. The display on the external screen 20 could for example show different uses of generated avatars or could demonstrate particular avatars which have been generated using that booth. The external screens 20 could also show users instructions on how to use the booth. The display on the external screens 20 and the sound broadcast through the external speakers 22 then continue for as long as the booth 1 remains switched on.
After the external display has been initiated, the booth control program 132 then invokes the user instructions program 130 to cause (s11) instructions to a user to be displayed on the internal screen 102 and to be transmitted via the internal speaker 100, instructing a user to remove loose outer clothing and make a payment using either the card reader 104 or the bank note reader 106.
The computer system 120 then determines (s12) whether any payment has been made using the bank note reader 106 or whether the computer system 120 has received authorization for a credit card transaction from the telephone network. Until sufficient payment has been made or authorization has been received, the booth control program 132 invokes the user instructions program 130 to again display (s11) and transmit instructions for a user to make a payment.
If the computer 120 determines that sufficient payment has been made or a credit card transaction has been authorised, the computer 120 then causes the screen 102 to display an image of a keyboard and instructs the user via the speaker 100 to input their name. When a user has entered their name using the touch screen 102 and the displayed keyboard, the computer 120 then generates an avatar identification number and a password, which are then passed to the printer, which prints (s13) out a card having the user's name, the generated user ID and the password printed thereon. At the same time the booth control program 132 causes the user instruction program to display on the screen 102 and transmit through the speaker 100 instructions to a user to wait for the card to be printed and to store the card so that the user can retrieve a generated avatar from the server 2 at a later stage.
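The patent does not say how the avatar identification number and password are produced; the fragment below is only a hypothetical illustration of generating such credentials, with arbitrary lengths and character sets chosen for the example.

```python
import secrets
import string


def make_avatar_credentials(id_length=8, password_length=6):
    """Generate a hypothetical avatar identification number and password of the
    kind printed on the card; the formats and lengths are illustrative guesses."""
    avatar_id = "".join(secrets.choice(string.digits) for _ in range(id_length))
    alphabet = string.ascii_uppercase + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(password_length))
    return avatar_id, password
```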
Figure 17 is a schematic diagram of an exemplary 29 card 140 printed by the printer 108. The card 140 comprises a paper substrate on which are printed the user's name 141, the generated avatar identification number 142 and the generated password 143 and the website address 144 of the server 2. The card provides a physical reminder for the user that has an avatar to download from the server 2. The card 140 also provides a means by which a user is provided with the information required to download a generated avatar from the server 1 af ter it has been created by the booth 1 as will be described later.
After a card has been printed, the booth control program 132 causes (s14) image data of a user to be obtained in four poses, as will be described in detail below with reference to Figures 18 to 21.
After image data has been obtained, the booth control program 132 then invokes the avatar construction program 134 to generate (s15) an avatar using the obtained image data, as will be described in detail with reference to Figures 22 to 37. When an avatar has been generated from the image data the booth control program 132 invokes the animation program 137, which causes an animation sequence of images utilising the newly created avatar to be displayed (s16) on the screens 20,102. The booth control program 132 then causes the data representative of the avatar to be compressed (s17) and transmitted to the server 2 via the Internet 3 and the interface 126, for later retrieval as will be described below. The booth control program 132 then finishes.
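The overall capture-to-upload sequence (s14 to s17) might be orchestrated along the following lines. This is a hedged sketch only: the booth object and all of its method names are assumptions introduced purely for illustration and do not appear in the patent.

```python
def run_avatar_session(booth):
    """Hypothetical orchestration of steps s14 to s17; every method called on
    the 'booth' object is an illustrative assumption."""
    images = booth.capture_pose_images(num_poses=4)  # s14: flash + non-flash pair per pose
    avatar = booth.construct_avatar(images)          # s15: build the textured wire mesh model
    booth.play_preview_animation(avatar)             # s16: animated preview on the booth screens
    payload = booth.compress(avatar)                 # s17: compress the avatar data
    booth.upload_to_server(payload)                  # transmit to the server via the Internet
    return avatar
```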
Figure 18 is a flow diagram illustrating in detail the processing of the booth control program 132 in order to obtain image data of a user.
When a card (s13) has been printed, the booth control program 132 then invokes the user instructions program 130 to display on the internal screen 102 and transmit via the internal speakers 100 instructions on how to use the booth (s20).
In this embodiment of the present invention a three-dimensional computer model of an individual is obtained by processing image data of an individual from four different perspectives, as illustrated in Figures 4 to 7. The instructions for using the booth for this embodiment would therefore comprise instructions on the four poses to adopt whilst image data is captured using the digital cameras 36 and 38. Thus, for example, in this embodiment the display on the internal screen 102 would show an individual standing in the position illustrated in Figure 4 whilst instructions are presented to the user via the internal speaker 100 that in the first position he is to face the camera with his feet on the lit foot lights 76, legs and back straight, arms straight out, backs of the hands to the camera and fingers stretched out as shown.
The display on the screen 102 would then change to show the individual 46 on the screen in the position of Figure 5, and the audio track transmitted through the internal speaker 100 will inform a user that he would then have to turn right, face the light 74 on the side wall looking at the lit light, with feet on the lit footlights 78, standing straight with arms straight on legs as shown.
The display shown on the internal screen 102 would then change to show an individual 46 in the position shown in Figure 6, with instructions being transmitted via the internal speaker 100 to inform the user that he will need to turn right, face the rear wall looking at the lit LED 74, feet on the lit footlights 76, legs and back straight, arms straight out with palms to the camera and fingers stretched out as shown.
The user instruction program 130 then causes an image of an individual 46 adopting the position shown in Figure 7 to be displayed on the internal screen 102, with instructions being transmitted via the internal speaker for the individual to turn right, face the lit LED 74 looking at the lit LED, feet on the lit footlights 78, standing straight with arms straight and hands on legs as shown.
The booth control program 132 then causes the touch screen display 102 to display the options of reviewing the instructions or beginning the avatar generation process, and causes the internal speaker 100 to instruct a user either to choose to review the instructions or to start the avatar construction process.
If (s21) it is detected that a user indicates by touching the touch screen 102 that they wish to view the instructions once again, the instructions on how to use the booth are repeated by being displayed on the screen 102 and transmitted through the internal speaker 100 (s20).
If it is determined that the user indicates by touching the screen that he is now ready to start the avatar construction process, the booth control program 132 then causes (s22) the lights 88 associated with the first pair 76 of foot light indicators to be illuminated. The booth control program 132 then invokes the user instruction program 130 to cause an image of an individual adopting the pose of Figure 4 to be displayed on the internal screen 102 whilst instructions are transmitted via the internal speaker 100 to the user to adopt a pose facing the camera with their feet on the lit foot lights, legs and back straight, arms straight out, backs of the hands to the camera and fingers stretched out as shown (s23). The booth control program 132 then causes (s24) images of the user standing in the required position to be taken using the digital cameras 36,38.
Figure 19 is a graph illustrating the relative timings of the activation of the flash 56, the first digital camera 36 and the second digital camera 38.
Initially the booth control program 132 instructs the first digital camera 36 to take a picture with the flash lights 56 being activated. This causes the flash 56 to be activated and the shutter of the first digital camera 36 to be opened for 1/90th of a second. Very shortly after being activated the flash ceases to illuminate the interior of the booth 1. After approximately one hundredth of a second the booth control program 132 instructs the second digital camera 38 to take a picture. This causes the shutter of the second digital camera 38 to be opened. Shortly thereafter the shutter of the first digital camera 36 is closed. After a further 1/90th of a second the shutter of the second digital camera 38 is then closed.
In this way, within less than 2/90ths of a second, two images of an individual standing in the light box of the booth are obtained. The image obtained by the first digital camera 36 is of the booth illuminated with the flash 56, and the image of the second digital camera 38 is obtained after a short delay, being an image of the interior of the booth after the booth ceases to be illuminated by the flash 56. Since the computer 120 does not control the opening and closing of the shutters of the digital cameras 36,38 and the activation of the flash 56 directly, the exact timing of the operations of opening and closing the shutters and activating the flash is not determined by the computer 120 alone but rather is a combination of the timing of the instructions sent by the computer 120, the time taken to send signals to the digital cameras 36,38, the time taken to process those signals within the digital cameras 36,38 and the time taken for the digital camera 36 to activate the flash 56. Since the processing of the computer 120 and the digital cameras 36,38 may not be completely synchronised, the fact that the activation of the shutters and the flash 56 is only indirectly controlled by the computer causes the timings of the cameras 36,38 and flash to vary. In this embodiment of the present invention, by ensuring that the signal sent to the second digital camera 38 is delayed by at least one hundredth of a second after the signal activating the first digital camera 36 has been sent, it is possible to ensure that by the time the shutter of the second digital camera 38 is opening the interior of the booth is no longer effectively illuminated by the flash lights 56.
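The staggered triggering of the two cameras described above could be approximated as in the sketch below. The camera API used here is hypothetical; only the 1/90 s exposure and the roughly one-hundredth-of-a-second delay are taken from the description.

```python
import time

SHUTTER_OPEN_S = 1.0 / 90.0     # each exposure lasts about 1/90th of a second
TRIGGER_DELAY_S = 1.0 / 100.0   # delay before triggering the second camera


def capture_flash_and_silhouette(flash_camera, silhouette_camera):
    """Trigger the first camera with the flash, wait until the flash output has
    died away, then trigger the second camera (hypothetical camera objects)."""
    flash_camera.trigger(flash=True)        # flash-lit image of the subject's front surface
    time.sleep(TRIGGER_DELAY_S)             # flash has effectively ceased by now
    silhouette_camera.trigger(flash=False)  # subject silhouetted against the lit back wall
    time.sleep(SHUTTER_OPEN_S)              # allow the second exposure to complete
```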
Figure 20 is a schematic diagram illustrating an example of an image of an individual 145 standing in the booth in the position of Figure 4, taken by the first digital camera 36 with the flash 56 being activated.
This is an image in which the forward surface of the individual 145 closest to the camera 36 is illuminated by the flash 56, and it provides data about the appearance of the forward surface of the individual 145. The background of the image is a representation of the uniformly lit back wall 62 against which representations of the four alignment markers 95-98 are apparent.
Figure 21 is a schematic diagram illustrating an example of an image of the user 145 of Figure 20 taken shortly thereafter by the second digital camera 38. The image of Figure 21 therefore corresponds to the interior of the booth one hundredth of a second after the image of Figure 20. In Figure 21, since the flash 56 is no longer illuminating the interior of the booth, the surface of the individual closest to the camera 38 is not illuminated and an image is obtained of the user 145 silhouetted against the uniformly lit back wall 62, against which representations of the four alignment markers 95-98 are apparent. As has been previously stated, the illumination of the forward surface of an individual 145 standing in the light box 44 is avoided by the presence of the blacked out portions 72 of the wall 62 and by arranging the foot indicators 76 in a position so that the lit area of the back wall 62 only extends to about half way along the user's feet.
After images have been taken of the user in the first position, the booth control program 132 then determines (s25) whether the images which have just been captured correspond to the final pose for generating an avatar. If it is determined that the image which has just been taken is the final image, the booth control program 132 then invokes the avatar construction program 134, as will be described later.
If the images which have just been captured do not correspond to the final pose of the set of four poses, the booth control program 132 then (s26) switches off the lights 88 associated with the current set of foot indicators 76, illuminates the lights 88 associated with the other set of foot indicators 78 and illuminates an LED slightly above eye level of the user to the right of where the user is currently standing. The booth control program 132 then instructs (s27) the user via the speaker 100 to turn right, look at the lit LED and stand in the pose that is also illustrated on the screen 102. The booth control program 132 then causes another pair of images (s24) to be captured with and without the flash using the digital cameras 36,38, before once again checking (s25) whether the obtained images correspond to the final pose.
In this way a user is instructed to adopt four specific poses within the light box by means of oral and visual instructions transmitted using the internal screen 102 and the internal speakers 100. The use of indicator lights to indicate the position where a user is to place his feet and also to indicate where the user should face further constrains the poses which are adopted by a user.
Thus the booth control program 132 enables a set of eight images to be obtained, with images of an individual being taken with and without the flash in each of four orthogonal poses. These images are then passed to the avatar construction program 134 for the construction of an avatar, as will now be described.
Figure 22 is a flow diagram of the processing of image data by the avatar construction program after it has been obtained by the digital cameras 36, 38. Initially each of the images of a user in the light box 44 obtained by the second camera 38 is subjected to a filtering process (s30) to obtain a black and white image in which black pixels correspond to pixels in the original image having less than a certain threshold luminance. In this way an image comprising a black silhouette of an individual against a uniform white background is obtained, the white background having imposed thereon black representations of the alignment markers 95-98. The silhouette images are then aligned (s31) with the corresponding flash image of an individual in the same pose obtained using the first digital camera 36.
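A minimal sketch of the luminance-threshold filtering at step s30 might look like the following, assuming the image is held as an H x W x 3 NumPy array; the threshold value and the luminance weights are illustrative choices rather than values given in the patent.

```python
import numpy as np


def silhouette_from_image(rgb_image: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Return a binary silhouette: True where the pixel is darker than the
    luminance threshold (the subject), False for the bright back-lit wall."""
    # Approximate luminance from the RGB channels (ITU-R BT.601 weights, an assumption).
    luminance = (0.299 * rgb_image[..., 0]
                 + 0.587 * rgb_image[..., 1]
                 + 0.114 * rgb_image[..., 2])
    return luminance < threshold
```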
The image data obtained by the first 36 and second 38 digital cameras are required to be aligned because, although the arrangement of mirrors 40,42 is intended to present the digital cameras 36,38 with identical views of the interior of the booth, due to manufacturing tolerances this usually will not quite be the case. This alignment process will now be described in detail with reference to Figure 23.
As a first step in aligning the images obtained from the two cameras 36, 38, the computer 120 initially identifies (s40) in the two images the coordinates of the representations of the alignment markers 95-98 appearing in both of the images. The computer 120 then calculates (s41) the translation required to displace the silhouette image obtained by filtering the non-flash image taken by the second camera 38 to align the representation of the bottom left alignment marker 95 appearing in the silhouette with the representation of the bottom left alignment marker 95 in the corresponding image taken by the first digital camera 36. The calculated displacement is then applied to the silhouette image so that the representations of the bottom left marker 95 are aligned.
The computer 120 then calculates (s42) the rotation and scaling distortion required to align the representation of the top right alignment marker 98 in the translated silhouette image with the representation of the top right marker 98 in the corresponding image taken by the first digital camera. The calculated rotation and scaling operation is then applied to the filtered non-flash image that has been aligned with the bottom left marker 95 of the flash image obtained by the first camera 36.
The computer 120 then calculates (s43) how the rotated, scaled silhouette image is required to be distorted so that the representations of the alignment markers in the top left 97 and bottom right 96 corners of the image are aligned with the representations of the corresponding alignment markers appearing in the corresponding flash image obtained by the first digital camera 36. The computer 120 then applies this calculated distortion to the rotated, scaled image so that an aligned silhouette image is obtained in which all four representations of the alignment markers 95-98 correspond in position to the positions of the four alignment markers 95-98 in the flash image of the individual in the same pose obtained by the first digital camera when the flash is activated. The aligned silhouette image is then (s44) stored in the data storage portion 138 of the memory 125 of the computer 120.
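One way to realise the translation, rotation and scaling of steps s41 and s42 is to derive a 2-D similarity transform from the two corner markers, as sketched below; the final four-marker distortion of step s43 would need an additional warp (for example a perspective transform) and is not shown. All names are illustrative, and this is only one possible implementation of the steps described.

```python
import numpy as np


def similarity_from_two_markers(src_a, src_b, dst_a, dst_b):
    """Compute the 2-D translation, rotation and uniform scale that map the
    bottom-left and top-right marker positions in the silhouette image (src)
    onto their positions in the flash image (dst)."""
    src_a, src_b = np.asarray(src_a, float), np.asarray(src_b, float)
    dst_a, dst_b = np.asarray(dst_a, float), np.asarray(dst_b, float)
    v_src, v_dst = src_b - src_a, dst_b - dst_a
    scale = np.linalg.norm(v_dst) / np.linalg.norm(v_src)
    angle = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    c, s = np.cos(angle), np.sin(angle)
    rot = scale * np.array([[c, -s], [s, c]])       # combined rotation and scaling
    translation = dst_a - rot @ src_a               # anchors the bottom-left marker
    return rot, translation                          # p_aligned = rot @ p + translation
```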
After the images obtained by the second digital camera 38 have been filtered and aligned (s30, s31) to generate a set of silhouettes, each aligned with the corresponding image of the user in the same pose obtained by the first digital camera 36, the aligned silhouettes are then used by the avatar construction program 134 to calculate (s32) the geometry of a wire mesh model approximating the three dimensional shape of the individual of whom images have been obtained.
The processing of the avatar construction program 134 to determine the geometry of a wire mesh model of an avatar of an individual will now be explained in detail with reference to Figures 24A and 24B, which are a flow diagram illustrating the processing of the computer.
After a set of aligned silhouette images has been obtained the computer 120 then determines (s50), for each of the silhouettes, which pixels in the silhouette images correspond to the perimeter of the silhouette of the user against the uniformly back lit back wall 62.
The computer 120 then determines whether the outline which has been obtained from a silhouette image is a single continuous loop, which is known as a water tight outline. If (s51) it is determined that the outline identified by the computer 120 is not a water tight outline this indicates that the data capture process has failed or that in the pose adopted by the user a portion of the user's body was obscuring another part of his body.
For example this may occur when a user's hands are held insufficiently far away from the user's body when the user adopts the pose of Figure 4 or 6 or if their legs are insufficiently spread apart. If the outline obtained from the image data is not water tight the avatar construction program 134 is unable to generate an avatar using that image and invokes the booth control program 132 to request that a user re-poses (s52).
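Assuming the silhouette is available as a binary array, the water tight test amounts to checking that its boundary forms exactly one closed loop. The sketch below approximates this check using scikit-image's contour finder; the function name, the noise threshold and the choice of library are assumptions rather than details of the described apparatus.

```python
import numpy as np
from skimage import measure

def is_water_tight(silhouette, min_length=50):
    """Return True when the silhouette boundary is a single closed loop.

    silhouette -- 2-D boolean array, True inside the user's silhouette
    min_length -- contours shorter than this are treated as noise
    """
    contours = [c for c in measure.find_contours(silhouette.astype(float), 0.5)
                if len(c) >= min_length]
    if len(contours) != 1:
        return False   # several separate loops: a limb has split the outline
    contour = contours[0]
    # find_contours returns closed loops with identical first and last points.
    return bool(np.allclose(contour[0], contour[-1]))
```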
After the computer has determined (s51) that the outlines generated from the silhouette images are all water tight outlines the computer 120 then determines (s53) whether it is processing the images corresponding to the first or third poses in a set of four poses. If it is determined that the computer is processing images corresponding to the second or fourth poses in a set of four poses, being the poses where a user is side on to the camera, the computer 120 then processes (s54) the outline obtained from the silhouette to remove the additional rear foot that appears in the silhouette as will now be explained with reference to Figures 25A, 25B and 25C.
Figure 25A is an illustration of an example of a portion of the silhouette image obtained from a non-flash photo taken by the second digital camera 38 of an individual in the pose of Figure 5. In this position the user is sideways on to the camera with their feet pointing to the left hand side of the image. As shown in Figure 25A when a non-flash image is taken of a user in this position the user's feet appear silhouetted against the background of the image with the user's foot further away from the camera 150 appearing to protrude from the leg of the user a small distance D above the user's other foot 152.
When an outline is obtained from such an image the outline for this portion of the image corresponds to the outline shown in Figure 25B. This outline is unsatisfactory for the generation of an avatar as the user's leg appears to have a bulge 154 as a result of the silhouette of the foot further away from the camera.
However, since this bulge 154 appears at a known position within the image, it is possible to determine the portion of the outline obtained from an image which corresponds to the foot of a user further away from the camera. An estimated outline in the absence of the bulge 154 can then be calculated from the remainder of the outline for the leg.
In this embodiment, the bulge 154 is identified within the outline by determining the coordinates of the pixel corresponding to the turning point at the top of the user's front foot and the coordinates of the pixel corresponding to the turning point at the top of the user's rear foot. These coordinates can easily be identified from the outline by differentiating the outline image. When these turning points have been identified the computer 120 then deletes the portion of the outline between these two points and replaces it with a straight line between the two identified coordinates.
Figure 25C is an illustration of the outline of Figure 25B amended so that the bulge 154 corresponding to the user's other foot has been replaced by an estimated outline of the leg of the user close to the camera 156. This amended outline is then used by the computer 120 for the generation of an avatar.
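A minimal sketch of the rear-foot correction (s54) follows, assuming the outline is held as an ordered array of points and that a slice covering the feet region of the outline is already known from the fixed foot positions. Locating the turning points from sign changes in the vertical direction of travel stands in for the differentiation of the outline described above; the function and parameter names are illustrative only.

```python
import numpy as np

def remove_rear_foot(outline, region):
    """Delete the bulge caused by the rear foot and bridge it with a line.

    outline -- (N, 2) array of (x, y) outline points in traversal order
    region  -- slice with explicit start and stop selecting the part of the
               outline around the feet, assumed known in advance
    """
    seg = outline[region]
    # Turning points: where the vertical direction of travel reverses sign.
    dy = np.diff(seg[:, 1])
    turns = np.where(np.sign(dy[:-1]) != np.sign(dy[1:]))[0] + 1
    if len(turns) < 2:
        return outline                       # no bulge detected; keep as-is
    a, b = turns[0], turns[-1]               # top of front foot, top of rear foot
    # Replace everything between the two turning points with a straight line.
    bridge = np.linspace(seg[a], seg[b], b - a + 1)
    seg = np.vstack([seg[:a], bridge, seg[b + 1:]])
    return np.concatenate([outline[:region.start], seg, outline[region.stop:]])
```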
After a water tight outline (s50-s53) has been obtained and, if required, the outline has been modified to remove the outline of the user's back foot (s54) for each of the four silhouette images, outlines corresponding to the outlines of an individual in the four orthogonal poses for which image data is available will be stored in memory. The computer 120 then (s55) processes these outlines to identify a number of land mark points on the outline as will now be described.
For the outlines of a user in the poses of Figures 4 and 6 the land mark points comprise points on the outline which are easily identifiable and from which the orientation of a user's limbs in the corresponding images can be estimated.
From the outlines of a user in the poses of Figures 5 and 7 the landmark points correspond to points on the outline which can be identified and which can be used to make an initial estimation of the position of facial features of an individual.
Figure 26 is an illustration of the outline corresponding to Figure 21 with a number of land mark points indicated. In this example the land mark points identified by the computer are the highest point on the outline 160, the extreme left most point 162, the extreme right most point 164, the left most point having the lowest coordinates 166 and the right most point having the lowest coordinates 168. These points correspond to the top of the user's head 160, the tips of the user's hands 162, 164 and the tips of the user's feet 166, 168. The computer 120 also calculates the coordinates of the points on the outline between these landmark points having greatest curvature. These correspond to the left 170 and right 172 hand sides of the user's neck, the left 174 and right 176 armpits of the user and the centre 178 of the user's crotch. Similar land mark points are also calculated for the outline of the image of the user with his back to the camera.
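The extreme-point landmarks can be read directly off the outline array, as sketched below. The split of the outline into left and right halves for locating the feet, and all function names, are assumptions; the curvature-based points (neck 170, 172, armpits 174, 176 and crotch 178) would be found between these extremes, for example as local maxima of the discrete curvature, and are not shown.

```python
import numpy as np

def outline_landmarks(outline):
    """Pick out the extreme points of a front-facing outline.

    outline -- (N, 2) array of (x, y) points, with y increasing downwards
    Returns a dict mapping landmark name to (x, y).
    """
    x, y = outline[:, 0], outline[:, 1]
    landmarks = {
        "head_top":   tuple(outline[np.argmin(y)]),   # highest point, 160
        "left_hand":  tuple(outline[np.argmin(x)]),   # extreme leftmost point, 162
        "right_hand": tuple(outline[np.argmax(x)]),   # extreme rightmost point, 164
    }
    # The feet: the lowest points in the left and right halves of the outline.
    mid = x.mean()
    left, right = outline[x < mid], outline[x >= mid]
    landmarks["left_foot"] = tuple(left[np.argmax(left[:, 1])])      # 166
    landmarks["right_foot"] = tuple(right[np.argmax(right[:, 1])])   # 168
    return landmarks
```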
When the computer 120 has calculated the coordinates of the landmark points for the silhouettes of the user standing in the positions of Figure 4 and Figure 6 the computer 120 then compares the coordinates of the top of the user's head 160 and the centre 178 of the user's crotch. If the height of the user's crotch is significantly less than 51.9% of the user's total height this is taken to indicate that the silhouette is of a user wearing a skirt, since on average the height of an individual's hip bone is 51.9% of the height of the individual. In this case the apparent position of the user's crotch 178 on the user's outline is indicative of the hem of the skirt rather than of a position on the user's body. It is therefore unsuitable for estimating the actual posture adopted by the user in the image. The crotch position 178 is also unsuitable for determining the mapping of the outline of a generic avatar.
The computer therefore records that the outline is indicative of a user wearing a skirt and estimates the true position of a user's crotch to be 51.9% of a user's height and this estimated crotch position is then used to determine the actual posture adopted by a user as will be described in detail below.
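The skirt test reduces to a comparison of the detected crotch height against 51.9% of the detected total height. A small sketch follows; the 5% tolerance used to decide "significantly less", the use of a floor line as the height reference and the function name are assumptions.

```python
HIP_HEIGHT_RATIO = 0.519   # average hip height as a fraction of total height

def detect_skirt(head_top_y, crotch_y, floor_y):
    """Decide whether the detected 'crotch' point is really a skirt hem.

    Image y coordinates increase downwards, so the floor has the largest y.
    Returns (is_skirt, corrected_crotch_y).
    """
    total_height = floor_y - head_top_y
    crotch_height = floor_y - crotch_y
    if crotch_height < HIP_HEIGHT_RATIO * total_height * 0.95:
        # Much lower than the average hip line: assume a skirt and fall back
        # on the statistical estimate of the true crotch height.
        return True, floor_y - HIP_HEIGHT_RATIO * total_height
    return False, crotch_y
```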
From the outlines of the silhouettes of a user facing the side walls, landmark points comprising turning points on those outlines identifying the tip of the user's nose and the user's chin are determined. These landmark points correspond to the extreme leftmost points in the top sixth of the outline of the user facing the side wall in the position of Figure 5 and the extreme rightmost points in the top sixth of the outline of the user facing the side wall in the position of Figure 7.
After the landmark points on the outline images have been determined the computer 120 then calculates the scale factor (s56). This involves determining from the outline of the individual in the pose of Figure 4 the height of the user. This is achieved by comparing the coordinates of the land mark point 160 corresponding to the top of the user's head and the coordinates of the representations in the image of the alignment markers 95-98. Since the alignment markers 95-98 are placed on the back wall 62 in known positions relative to the foot position indicators 76, 78 and a known distance apart, the coordinates of the representations of the alignment markers within the image can be used to determine the scale for the image. Once the scale of an image has been determined the coordinates of the highest point 160 of the user's head can be used to calculate the user's actual height. The calculated height of the user is then stored in the data storage portion 138 of the memory 125 for use in the generation of an avatar as will be described later. The calculated height of the individual determined from the first image is also used by the booth control program 132 to determine which of the LEDs in the strips of LEDs 74 is to be illuminated to cause a user to look slightly above eye level in the later poses.
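The scale calculation (s56) follows directly from the known marker spacing. A sketch is given below, in which the use of a floor-line y coordinate as the zero of height, and the function and parameter names, are assumptions.

```python
def image_scale_and_height(marker_px, marker_spacing_m, head_top_px, floor_px):
    """Derive the metres-per-pixel scale from the alignment markers and use it
    to convert the head-top pixel coordinate into a real height.

    marker_px        -- vertical pixel distance between two alignment markers
    marker_spacing_m -- their known physical separation on the back wall
    head_top_px      -- y coordinate of landmark 160 (top of the head)
    floor_px         -- y coordinate of the floor line in the image
    """
    metres_per_pixel = marker_spacing_m / marker_px
    user_height = (floor_px - head_top_px) * metres_per_pixel
    return metres_per_pixel, user_height
```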
After the scale factor has been determined and stored in memory 125 the computer 120 then utilises the landmark points 160-178 calculated for the images of the user in the poses of Figures 4 and 6 to determine (s57) the actual poses adopted by the user in those images. From the calculated positions of the left hand side of the user's neck 170, the tip of the user's left hand 162 and the user's left armpit 174, the orientation of the user's left arm relative to his body can be estimated.
Similarly the orientation of the user's left leg can be estimated from the relative positions of the user's left armpit 174, the tip of the user's left foot 166 and the position of the user's crotch 178. The orientation of the user's right arm can be estimated using the coordinates of the right hand side of the user's neck 172, the tip of the user's right hand 164 and the user's right armpit 176. The orientation of the user's right leg can be estimated using the coordinates of the user's right armpit 176, the tip of the user's right foot 168 and the user's crotch 178.
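Each of these orientation estimates can be expressed as the angle of the line joining two landmarks. The small sketch below shows one way of doing so; the sign convention and the function name are assumptions.

```python
import numpy as np

def limb_angle(joint_xy, tip_xy):
    """Angle of a limb in the image plane, measured from the downward vertical.

    joint_xy -- (x, y) of the proximal landmark (e.g. armpit 174 for the arm)
    tip_xy   -- (x, y) of the distal landmark (e.g. hand tip 162)
    y increases downwards, so a limb hanging straight down gives 0 radians.
    """
    dx = tip_xy[0] - joint_xy[0]
    dy = tip_xy[1] - joint_xy[1]
    return np.arctan2(dx, dy)

# e.g. left arm orientation from the armpit 174 and the hand tip 162:
# left_arm = limb_angle(landmarks["left_armpit"], landmarks["left_hand"])
```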
Since the user will have been instructed to adopt a specific pose via the speaker 100 and the internal screen 120, these estimates of the exact positioning of the user's limbs provide sufficient data to determine, within tolerable error bounds, the posture adopted by the user in the images, provided that the user has correctly followed the instructions given to him. Data indicative of the orientation of the user's limbs in the images corresponding to the poses of Figures 4 and 6 is then stored in the data storage portion 138 of the memory 125 of the computer 120.
After the actual posture (s57) of the user in the four images has been determined the computer 120 then identifies (s58) a number of facial features.
Figure 27 is a flow diagram of the processing of the computer 120 to identify the facial features in the image of a user facing the camera. Initially (s70) the computer sets the initial estimate for the y coordinate (height) of a point corresponding to the user's nose to be equal to that of the point identified as the user's nose in one of the profile outlines. The x coordinate (left-right) of the initial estimate of the user's nose is set to correspond to the centre of the image and the coordinates of the left and right eyes are calculated as being in fixed positions relative to this initial estimate for the position of the user's nose.
The computer 120 then (s71) determines, for the portion of one of the images of the user in profile taken by the first camera 36 corresponding to the portion between the topmost part of the outline and the points identified as the neck of the user, the rate of change of luminance for each of the pixels in that portion of the image. The computer 120 then selects as an initial estimate of the position of the user's mouth the point within the outline of the image having the greatest change in luminance. This initial estimate should correspond to the point of contrast between the user's skin and the line of the user's mouth and lips. An initial estimate for the coordinates of the centre of the mouth is then (s72) set as being in the centre of the first image of the user, taken by the first camera 36 with the user facing the camera, with a y coordinate corresponding to the estimated position of the edge of the user's mouth from the image in profile.
The computer 120 then uses these initial estimates of the points corresponding to the eyes, nose and centre of the user's mouth in the image and iteratively processes them (s73) to obtain more accurate estimates of the actual positions of these facial features.
Figure 28 is a flow diagram of the iterative processing of the initial estimations of the facial features of a user appearing in the image obtained by the first camera 36, of the user in the position of Figure 4, to obtain more accurate estimations of these positions.
When initial estimates of the positions of the user's eyes, nose and mouth have been calculated the computer first (s80) calculates the rate of change of luminance for all of the pixels corresponding to the portion of the image, obtained by the first camera 36 of a user in the position of Figure 4, corresponding to the user's face. The computer then determines which of the pixels of the portion of the image corresponding to the user's face have a rate of change of luminance above a set threshold value. The coordinates of these points are then stored in memory 125.
The computer then for each of the facial feature points selects a rectangle of pixels centred on the estimated position for that feature point and calculates the distribution of pixels within that rectangle corresponding to points in the image having a rate of change of luminance above the set threshold.
Figure 29 is a schematic illustration of rectangles for testing for the position of eyes, nose and mouth.
As is shown in Figure 29, the rectangles that are used for testing for the position of the user's eyes 180, 182 are sized so as to correspond to approximately one sixth of the width of the user's head as it appears in the image. The rectangle for identifying the position of the user's nose 184 is approximately half the height of the rectangles used for the identification of the user's eyes and corresponds in width to about one third of the width of the image of the user's head, and the rectangle 186 for determining the position of the user's mouth is about one quarter of the height of the rectangles used to identify the positions of the user's eyes 180, 182 and about half the width of the image of the user's head.
For the facial feature for which a position is to be estimated the computer calculates the centre of gravity of the distribution of points having a rate of change of luminance above the set threshold within the rectangle for that feature and then determines (s82) whether the determined centre of gravity corresponds to the current estimate of the point corresponding to that facial feature. If the centre of gravity and the current estimate do not correspond, the determined centre of gravity is then set (s83) to be the new estimate for the position of that facial feature and the centre of gravity for a rectangle centred upon this new estimate is then (s81) calculated. In this way the estimates of the positions of the facial features are adjusted to improve the estimate of the position of each facial feature.
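The iteration of steps s81 to s83 is, in effect, a repeated centre-of-gravity calculation over the high-gradient pixels inside the test rectangle. A sketch is given below; the precomputed edge mask, the convergence tolerance and the iteration cap are assumptions rather than stated details.

```python
import numpy as np

def refine_feature(edge_mask, estimate, half_size, max_iterations=20):
    """Move a facial-feature estimate to the centre of gravity of the
    high-gradient pixels inside a rectangle centred on it (cf. s81-s83).

    edge_mask -- 2-D boolean array, True where the luminance gradient exceeds
                 the chosen threshold
    estimate  -- initial (x, y) estimate of the feature position
    half_size -- (half_width, half_height) of the test rectangle
    """
    x, y = estimate
    hw, hh = half_size
    for _ in range(max_iterations):
        window = edge_mask[int(y - hh):int(y + hh), int(x - hw):int(x + hw)]
        ys, xs = np.nonzero(window)
        if len(xs) == 0:
            break                                  # no edge pixels: keep the estimate
        cx = xs.mean() + int(x - hw)
        cy = ys.mean() + int(y - hh)
        if abs(cx - x) < 0.5 and abs(cy - y) < 0.5:
            break                                  # centre of gravity has converged
        x, y = cx, cy
    return x, y
```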
When the estimates of the positions of the facial features have been corrected the computer then (s84) checks whether the currently estimated positions for the eyes, nose and the centre of the mouth are in positions which are likely to correspond to the positions of a pair of eyes, a nose and a mouth in an image of a face. If the estimated facial positions are found to be at the expected positions for the features corresponding to the eyes, nose and mouth of a face, the estimates are considered satisfactory and the computer 120 then goes on to use those current estimates to identify a point corresponding to the tip of the nose and the points corresponding to the edges of the mouth as will be described below. If, however, the estimated positions clearly do not correspond to correct positions, because the proportions of the estimated positions clearly do not correspond to those expected for a face, the iteration process (s81-s83) is repeated either with alternative initial estimates of the positions of facial features being used as a seed to the process or by using a different threshold to establish those points of change of luminance which are to be considered in the iteration process (s85).
Thus for example where the iterations for the positions of the user's eyes result in two estimates corresponding to the same position, this would indicate that the initial estimates for the user's eyes converge to a single eye. An estimated position for the other eye, either to the left or the right of the identified eye, could then be used as a seed to identify where the actual other eye is located. Similarly if the estimated position for the user's mouth corresponds to the estimated position for the user's nose this indicates that the initial estimates for these facial features converge to a single point. On the basis of the ratio of the positions for the user's eyes, nose and mouth, corrected initial seed positions for the user's nose and mouth could then be used as a seed.
When the facial features have been more accurately identified (s73) the computer then identifies the point within a rectangle centred on the estimate of the user's nose (s74) having the greatest luminance.
This point is taken to correspond to the image of the tip of the user's nose.
The computer then (s75) uses the estimated position of the centre of the user's mouth to calculate estimates of the edges of the user's mouth. This is determined by processing image data corresponding to a rectangle centred on the initial estimate of the centre of the user's mouth to determine the luminance of the pixels corresponding to that portion of the image. The coordinates of a horizontal line having the least luminance within that rectangle are taken to be the line of the user's mouth and the edges of the mouth are estimated to be the points on that line where the luminance changes the most, which should correspond to the positions of the edges of the user's lips.
The computer then finally (s76) estimates a position corresponding to the user's chin, being a point in line with the tip of the user's nose the same distance below the line of the user's mouth as the tip of the nose is above the user's mouth.
The avatar construction program 134 then requests (s59) that a user confirm the estimated positions of facial features of the individual appearing in the images of the individual taken in the poses of Figures 4, 5 and 7. This is achieved by the avatar construction program causing the portion of the image of the user in the position of Figure 4 taken by the first camera 36 corresponding to the head of the user to be displayed on the touch screen 102. Superimposed on this image are six crosses located at estimated positions for the eyes, nose, sides of the mouth and chin of the individual shown in the image.
The avatar construction program 134 then instructs a user, via the speaker 100 and the touch screen 102, to select the superimposed crosses by touching them on the touch screen 102 and to drag them to the correct positions for the eyes, nose, mouth and chin respectively. When a user touches the touch screen this causes a cross to be selected. By then dragging their finger across the screen, the user causes the cross to change its position on the screen.
Figure 30 is an illustration of the display on the touch screen 102 of an image of a user after the crosses have been located in their proper positions. The crosses comprise a cross on the left eye 200, a cross on the right eye 201, a cross on the tip of the nose 202, a cross on the left hand side of the mouth 203, a cross on the right hand side of the mouth 204 and a cross on the tip of the chin 205. When the user has correctly located all six of the crosses he then confirms they are in the correct positions. The avatar construction program 134 then displays on the screen 102 the portion of the image of the person in the pose of Figure 5 corresponding to the head of the person, with two crosses superimposed in the positions for the nose and the tip of the chin calculated from the outline of the silhouette corresponding to that pose, and instructs the user to correct these estimated positions in a manner similar to that for the first image.
Figure 31 is a diagrammatic illustration of the image shown on the display 102 for identifying the positions of the nose and the chin in an image of the head in profile of a user in the pose of Figure 5. In this example the display comprises an image of the head of a user 208 and a pair of crosses, one cross 210 located on the user's nose and another cross 212 located on the user's chin. The user is then instructed to correct the positions of the crosses if required and then confirm when they are in the correct position.
When the user confirms that the crosses in this image are in the correct position the avatar construction program 134 then causes to be displayed on the touch screen 102 the portion of the image taken by the first camera 36 of the individual in the position of Figure 7 corresponding to the head of the individual, with two crosses superimposed on the image located at the estimated positions of the nose and chin in the image.
Figure 32 is an illustration of an example of an image of the head 213 in profile of an individual in the pose of Figure 7 having superimposed thereon a cross on the nose 214 and a cross superimposed on the tip of the chin 215. The user is then invited to confirm or correct the position of these crosses as has previously been described in relation to the image of the head of the individual in the position of Figure 5. After the user has confirmed the position of all of the crosses and corrected them if required the coordinates of the features identified by the crosses are stored in the data storage portion 138 of the memory 125. After the positions of the facial features (s59) have been confirmed by a user the avatar construction program 134 then amends (s60) the outlines and the images corresponding to the user in the positions of Figures 5 and 7 to account for any discrepancy between the angle at which the user has held his head in those images and the angle at which the user has held his head in the image taken of the user in the position of Figure 4. From the feature points confirmed by the user corresponding to the tip of the user's nose in the images of the user's head as seen from the front and in profile, the y coordinate (height) of the tip of the user's nose can be determined. As this point is meant to correspond to a single point on the user's body, if the y coordinate of this point is not identical between the three images this indicates that in the different poses of Figures 4, 5 and 7 the user has held his head at a slightly different angle. If this is the case the computer then calculates the difference between the y coordinate of the user's nose in profile and the y coordinate of the user's nose as seen in the image of the user taken whilst the user stands in the position of Figure 4.
Figure 33 is an example of an illustration of a user's head. In this example it is assumed that the user's nose 220 is held a certain distance above the position of the nose in the corresponding image of the user in the position of Figure 4. The image data and the outline corresponding to that image of the user in profile are then corrected by applying a rotation to all of the pixels corresponding to the head of the user about a rotation point 224 estimated to be at the centre of the user's neck.
This rotation approximately corresponds to the change in the image resulting from a change in the angle at which the head is held.
Figure 34 is an illustration of the example of Figure 33 after the orientation of the head has been amended. In this example the head has been rotated clockwise. This causes the height of the pixel corresponding to the tip of the user's nose 220 to change. In doing so a portion 226 of the image of the user is revealed which is not apparent in the original image data. Colour image data for this portion of the user is then estimated by colouring this portion of the user in the manner in which this area was coloured in the earlier image, before the image had been transformed to correct for the head tilt.
The computer 120 then uses the generic model avatar program 135 to calculate (s61) the outlines of an image of the generic avatar in four orthogonal poses, comprising the poses of Figures 4 to 7 which a user adopts when image data is captured. In order to maximise the correlation between the poses of a user and the poses of the avatar, the actual posture data for the user in the images of the user in the poses of Figures 4 and 6 is used as the basis for generating outlines of an avatar in those poses as will now be described.
Figure 35 is a block diagram of the generic model avatar program 135 stored in memory 125. The generic model avatar comprises a generic wire mesh model 270, a set of generic avatar joint angles 280 and a movement deformation program 290. The generic wire mesh model 270 comprises data defining a polygonal wire mesh representation of a generic individual in a single pose.
The generic avatar joint angles 280 are a set of data defining the orientation of the limbs of a model skeleton corresponding to the generic wire mesh model 270, and the movement deformation program 290 is a program defining how the wire mesh model 270 should be varied to account for changes in the orientation of the joint angles 280 as will be described in greater detail later.
Figure 36 is a diagram illustrating the data structure of data stored defining the generic wire mesh model 270 of a standard avatar 135. The generic wire mesh model 270 comprises data defining the position, connectivity and associated body segment of a number of points corresponding to vertices on the surface of a polygonal wire mesh model of a generic individual.
Stored within the memory 125 for each of the vertices of the polygonal wire mesh model is a vertex number 292, a vertex topology 294, a vertex geometry 296 and a list of associated body segments 298. The vertex number 292 uniquely identifies each vertex. The vertex topology 294 is a list of vertices to which the vertex identified by the vertex number 292 is connected within the wire mesh model 270. The vertex geometry 296 is a set of x, y and z coordinates identifying the initial position of the vertex identified by the vertex number 292 in the wire mesh model in an initial position. Each vertex also has associated with it a list of one or two associated body segments 298. The association of vertices with a body segment enables the generic avatar to be used to generate images of the avatar in different poses as will be described in detail later.
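The per-vertex record of Figure 36 can be pictured as the following data structure; the field and type names are illustrative only, and the example values are invented.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Vertex:
    """One entry of the generic wire mesh model 270 (cf. Figure 36)."""
    number: int                                  # vertex number 292, unique identifier
    topology: List[int]                          # vertex topology 294: connected vertex numbers
    geometry: Tuple[float, float, float]         # vertex geometry 296: x, y, z in the initial pose
    body_segments: List[str]                     # associated body segments 298, one or two entries

# Example entry (values illustrative only):
# Vertex(number=1024, topology=[1023, 1025, 1107],
#        geometry=(0.21, 1.34, 0.05), body_segments=["lower_arm", "hand"])
```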
Figure 37 is an illustration of the wire mesh model 270 of a generic avatar in an initial position. In this position the vertices of the wire mesh model 300 appear as a representation of a three-dimensional model in which the x, y and z coordinates correspond to the x, y and z coordinates of the vertex geometry 296. Each of the vertices of the wire mesh model is connected to other vertices in accordance with the vertex topology 294 for that vertex. A portion of the wire mesh 302 between the legs of the model 300 comprises a number of polygons representative of a skirt. This portion of the wire mesh 302 is provided by including in the vertex topology 294 of the generic model avatar a number of connections between vertices in the front of the left leg and vertices in the front of the right leg of the model, and between vertices in the back of the left leg and vertices in the back of the right leg. When calculating a colour rendering function for generating an avatar of an individual, the rendering for this portion 302 of the wire mesh 300 is varied depending on whether or not it is determined that the user is wearing a skirt as will be described below.
Images corresponding to the generic model avatar in different poses can then be created by inputting data corresponding to a set of generic avatar joint angles 280. This data 280 is then processed by the movement deformation program 290 to determine initially how varying the joint angles in the manner described would position the body segments corresponding to those joint angles and then how this would affect the positioning of the individual vertices associated with each body segment. The movement deformation program 290, when calculating a modified geometry, also ensures that the joint angles requested remain within maximum and minimum limits to mimic the body's normal limited flexibility. The movement deformation program 290 may also be arranged to ensure that the deformed geometry is constrained so that the surface of the deformed polygonal mesh does not intersect itself, since this would correspond to an individual passing one part of his body through another. By limiting the effect of changing a joint to a transformation of only those vertices associated with some of the body parts, the input of joint angles can be used to generate a wire mesh model of the generic avatar in any pose. Thus for example where the angle of a joint corresponding to a shoulder is varied this would be processed to impose a translation on all of the vertices corresponding to points associated with the body segments corresponding to the upper arm, the lower arm and the hand. In contrast where a joint angle corresponding to the orientation of the elbow is changed this would only affect the position of vertices associated with the lower arm and hand. By associating each of the vertices with one or more body segments the deformation required to account for a change in the position of the model is simplified, since the actual geometry for the modified wire mesh needs only to be calculated in terms of how the body segments associated with a vertex are affected by changes in the joint angles and how a vertex is affected by a change of a joint angle (if at all). The overall effect of a variation in joint angles is then the sum of how an associated body segment is affected by the change in joint angles and how the individual vertices on a body segment are affected by the change in joint angle.
For the majority of vertices, the transformation to account for a change in orientation of a joint angle is solely determined by the translation and rotation of an associated body segment. However, for points located around the joint centres the change in orientation of one limb relative to another will affect their position relative to the remaining vertices associated with a body segment, to account for the stretching of the skin.
The processing of the data for the wire mesh model to account for these changes will now be described with reference to Figure 38.
Figure 38 is a representation of a portion of the wire mesh model for a generic avatar corresponding to the upper arm 308, elbow 310 and forearm 312 in two positions. When the joint angle corresponding to the elbow is changed this causes points in the lower forearm 312 to all be translated in the same way. A change in the angle of the elbow has no effect on the positioning of vertices corresponding to the upper portion 308 of the upper arm. Between the upper portion of the upper arm and the forearm 312 are a set of vertices 310 whose relative positions vary with the variation of the joint angle corresponding to the elbow. By having the movement deformation program cause the relative positions of these points to vary in accordance with the joint angle, in addition to any translation which is imposed on all of the vertices associated with the forearm, hand and wrist, the wire mesh model 270 is made to appear to stretch its skin about the elbow joint. Thus the combination of the generic wire mesh model 270 and the movement deformation program 290 enables the computer 120 to generate wire mesh model representations of the generic model avatar in any position as is detailed by a set of joint angles 280. The processing of the computer 120 to generate outlines of the generic avatar corresponding to the outlines obtained for a user will now be described.
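A much simplified, planar sketch of how a change of elbow angle might be propagated to the vertices is given below: vertices associated only with the forearm and hand follow the rotation completely, vertices of the upper arm are unaffected, and the vertices 310 around the joint are blended between the two to give the skin-stretching effect. The two-dimensional formulation and the linear blending are simplifying assumptions, not the stated method of the movement deformation program 290.

```python
import numpy as np

def deform_elbow(vertices, weights, elbow_centre, angle_change):
    """Propagate a change of elbow angle to the wire mesh vertices.

    vertices     -- (N, 2) array of vertex positions (a planar stand-in for 3-D)
    weights      -- per-vertex blend weight: 1.0 for vertices associated only
                    with the forearm and hand, 0.0 for the upper arm, and
                    intermediate values for the vertices 310 around the joint
    elbow_centre -- (x, y) position of the elbow joint centre
    angle_change -- change in the elbow joint angle, in radians
    """
    c, s = np.cos(angle_change), np.sin(angle_change)
    rot = np.array([[c, -s], [s, c]])
    centre = np.asarray(elbow_centre, dtype=float)
    rotated = (vertices - centre) @ rot.T + centre
    w = np.asarray(weights, dtype=float)[:, None]
    # Blend between the undisturbed and fully rotated positions per vertex.
    return (1.0 - w) * vertices + w * rotated
```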
In order for the computer to calculate outlines corresponding to the user in the poses of Figures 4 to 7 the computer takes the calculated orientations of the actual positions of the user's limbs and uses these to determine joint angles, which are then used by the movement deformation program to construct a wire mesh model corresponding to the generic avatar in the positions corresponding to the positions adopted by the user in the images. Since the position of the camera relative to the foot marks is fixed, from the deformed vertex geometry corresponding to a wire mesh model of an avatar in the positions of the user, the points on the calculated surface of the wire mesh model corresponding to points on the outlines as seen from the cameras can then be determined.
When four outlines have been calculated for the avatar in the four poses for which image data has been obtained, the avatar outlines are then (s62) scaled in accordance with the scaling factor stored in memory 125.
The computer 120 then determines (s63) a mapping function between the vertices of the wire mesh model of the avatar 300 corresponding to the eyes, nose, sides of the mouth and chin, as they appear in an image of the avatar so positioned and scaled, and the identified positions of the corresponding facial features appearing in the image data. This mapping for these facial features is then stored in memory 125.
The computer then compares the outlines for the avatar model with the outlines generated from the image data of a user to calculate (s63) the distortion of the outlines required so that the outline of the avatar would correspond with the outline for the image data.
The comparisons of the facial feature points and of the points on the polygonal wire mesh model 270 of the avatar which correlate to points on the outlines of the avatar are then used (s64) as the basis for generating a mapping for all of the vertices of the polygonal wire mesh for the avatar model 270, by calculating the required distortion for the remaining vertices by interpolating intermediate distortions from the distortions identified for the facial features and from the orthogonal outlines.
Where it has been determined that the outlines of an individual correspond to an individual in a skirt the distortion for vertices in the legs of the model is calculated dependent upon whether the vertices correspond to points above or below the detected crotch height. For vertices in the legs below the detected crotch height the distortion is calculated from the outline data in the usual way. The vertices in the legs above the detected crotch height are hidden from view by the skirt and hence the outline data is unsuitable for calculating an appropriate distortion for these vertices. The distortion for these vertices is therefore interpolated from the highest point below the detected crotch height and the calculated actual position of the crotch based upon the detected height of the individual.
The function for distorting the polygonal mesh of the model avatar 270 is then used to generate a set of data corresponding to the vertex geometry of a wire mesh model for an avatar corresponding to the individual by applying the calculated distortion to the vertex geometry 296 of each of the vertices of the generic wire mesh model 270.
In this way a wire mesh model of the individual is obtained, since the movement deformation program 290 is arranged to apply a further deformation to a set of vertices corresponding to the vertices of a wire mesh model. This program 290 can now be used to generate representations of the wire mesh model corresponding to the individual for which image data has been obtained in any pose in accordance with joint angle data.
Colour rendering techniques are then used to determine (s33) a colour texture render function to colour the surface of an avatar having this revised geometry using the image data of the individual captured using the first digital camera 36 as will now be described.
The texture rendering function for colouring the surfaces of a polygonal wire mesh representing an individual in this embodiment comprises a texture map and a set of texture coordinates, the texture map representing the surface texture of the model and the texture coordinates being a mapping of the projections of the vertices of the wire mesh model in a predetermined pose onto the texture map. The texture map and texture coordinates can then be used to texture render the model of the individual in any pose, since the relative positions of the vertices in a calculated model of the individual in a pose define a distortion function for distorting a corresponding portion of the texture map identified by the texture coordinates for those vertices.
The texture map is calculated by comparing the images obtained of an individual using the first digital camera 36 with calculated representations of the avatar corresponding to that individual in the positions adopted in each of the four images.
The computer then calculates the orientation of the normal to the surface of each of the polygons which has not already been recorded as being rendered as transparent. This normal is then compared with the estimated directions from which image data has been captured and a texture rendering is calculated for colouring the surface of that polygon from the images which most closely correspond to the direction from which image data is available. Thus the polygons corresponding to the front surface of an avatar will have normals orientated in one direction and are coloured using texture data obtained from the image of the user obtained by the first camera 36 in the position of Figure 4. For polygons with normals oriented in the opposite direction, a texture rendering is calculated from the image of the user in the position of Figure 6. For polygons having normals to their surfaces orientated at angles in between these directions a blend of colour texture is calculated for the surface of the polygon from the pair of images for the avatar corresponding to the images which are closest to the orientation of the polygon surface. Thus for example portions of the avatar oriented forward and to the left are coloured using a blend of colours determined from the images of the user in the positions of Figures 4 and 7. By varying the ratio in which the colours for an individual polygon are blended from the two images in accordance with the angle of the normal to the surface of that polygon, a gradual blending from the image data obtained from one pose to another is obtained. Once a texture rendering function for the entire surface of each of the polygons has been calculated this is then stored in the memory 125.
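One way of expressing this blending rule is sketched below, assuming viewing directions for the four poses and weighting the two nearest images by the cosine between the polygon normal and each viewing direction; the coordinate convention, the cosine weighting and the function names are assumptions.

```python
import numpy as np

# Assumed viewing directions for the four poses (unit vectors in the model's
# x-z plane); only the front/back/left/right distinction matters here.
VIEW_DIRECTIONS = {
    "front": np.array([0.0, 0.0, 1.0]),
    "back":  np.array([0.0, 0.0, -1.0]),
    "left":  np.array([-1.0, 0.0, 0.0]),
    "right": np.array([1.0, 0.0, 0.0]),
}

def blend_weights_for_polygon(normal):
    """Weights for blending a polygon's colour from the pose images.

    The two images whose viewing directions are closest to the polygon normal
    receive weights proportional to that alignment, giving a gradual blend
    from one pose's image data to the next.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    scores = {pose: max(0.0, float(np.dot(n, d))) for pose, d in VIEW_DIRECTIONS.items()}
    top_two = sorted(scores, key=scores.get, reverse=True)[:2]
    total = sum(scores[p] for p in top_two) or 1.0
    return {p: scores[p] / total for p in top_two}
```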
In order to generate a representation of a skirt if required, the texture rendering of the polygons representative of a skirt 302 is determined by whether the detected crotch position is indicative of a crotch or of a skirt hem. If it is determined from the detected crotch height that no skirt is being worn these polygons are rendered as being transparent. If the detected crotch height is indicative of the hem of a skirt the skirt polygons are rendered using image data in the ordinary way, with the portions of the polygons representative of a skirt outside of the outline for the individual being rendered as transparent.
When a texture map for texture rendering the surfaces of the polygonal mesh has been determined the texture coordinates for the texture rendering function are calculated so as to correspond to the projections of the vertices of the wire mesh so that the images of the model in the poses from which data has been captured are as far as possible the same as the image data.
The animation program 137 is then invoked and causes a sequence of images representative of the newly generated avatar to be displayed on external screens.
The animation program 137 comprises a list of joint angle data and an animation engine which processes the joint angle data to generate images of the newly generated avatar of the individual in a number of different stances in the same way as the movement deformation program 290.
The animation engine additionally processes the texture coordinates and texture map to cause the newly generated avatar to be coloured in accordance with the calculated texture rendering function. In this way an individual is presented with an animated avatar of him or herself as soon as the avatar generation process is complete.
At this stage the computer system 120 has stored in its memory a user's name, a generated avatar identification number, a password, an avatar geometry and data representative of a texture rendering function comprising a texture map and texture coordinates for colouring an avatar.
Prior to sending the data to the server the texture map is compressed to reduce the amount of data which is sent over the Internet as will now be explained.
The texture map for an avatar calculated by the computer system 120 is based upon the image data obtained by the first camera 36. By having a texture map corresponding only to data from the images obtained by the first camera 36 indicative of the texture of an individual, and not sending irrelevant data about the background appearing in the image, the size of the texture rendering function is reduced relative to the size of the images from the camera 36. However, the size of the texture map can be compressed still further by compressing individual parts of the data to a lesser or greater extent in dependence upon the relative importance of the correct textural rendering of individual parts of an avatar's body. In this respect it has been determined that the most important areas of an avatar to be correctly coloured are the hands and the face. In particular the areas of the face around the mouth and eyes are sensitive to any loss of information during compression, as any changes in these areas of the avatar result in noticeable apparent differences in the appearance of the avatar. Thus in this embodiment of the present invention, whereas for the majority of the surface of the avatar the texture rendering function for transmission is compressed to about 1/50th of the amount of data of the original texture rendering function, data for the face of the avatar is compressed only to about 1/4 of its original size, with areas around the eyes and mouth not being compressed at all.
After the texture map of an avatar has been compressed, data representative of the avatar is transmitted from the booth 1 to the server 2 via the Internet 3. Figure 39 is a diagram illustrating the data transmitted from a booth 1 to the server 2. The data comprises an avatar number 310 being the generated avatar identification number, data representative of a user's name 311 being the data input by the user, a password 312, vertex geometry 313 comprising the relative positions of a predetermined number of points on the surface of an individual, and texture rendering function data comprising texture coordinates 314 and a compressed texture map 315, the texture coordinates 314 corresponding to the projections of the points identified by the vertex geometry 313 onto the compressed texture map 315.
This data comprises all the data that is necessary to identify and then generate a computer model of an individual, provided that the manner in which the vertices identified by the vertex geometry 313 are connected is known. Since, however, the manner in which the points identified by the vertex geometry 313 are connected corresponds to the vertex topology of the generic model avatar, the manner in which the points of the vertex geometry 313 are connected is known in advance and hence may be distributed separately from the data specific to an individual avatar.
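The record transmitted to the server (Figure 39) might therefore be pictured as follows; the class and field names are illustrative, and, as just noted, the vertex topology is omitted because it matches the generic model and can be distributed separately.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AvatarUpload:
    """Data sent from the booth 1 to the server 2 (cf. Figure 39)."""
    avatar_number: int                                    # 310
    user_name: str                                        # 311
    password: str                                         # 312
    vertex_geometry: List[Tuple[float, float, float]]     # 313
    texture_coordinates: List[Tuple[float, float]]        # 314
    compressed_texture_map: bytes                         # 315, e.g. compressed image data
```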
Figure 40 is a block diagram of the data storage system of a server 2. The data storage system of a server 2 has stored therein a plurality of sets of data comprising identification data 320, geometry data 321 and texture rendering data 323 for a plurality of avatars.
The identification data comprises data representative of the name, identification number and password for an avatar generated from the booth 1. The geometry data 321 and texture rendering data 322 comprise data representative of a geometry and a corresponding set of texture coordinates and a texture map for generating an avatar. Each of the sets of identification data 320 is associated with one set of geometry data 321 and texture rendering function data 323 representative of the avatar identified by the avatar identification.
The data storage system of the server 2 also has stored therein a processing program 324 arranged to process signals received from the Internet and to transmit data to a user station on the basis of the processing of those signals. By providing means by which users can identify and obtain data from a server, a means is provided by which users can download onto their own computers avatars which have been generated using the booth 1. By associating identification data 320 with a geometry and a texture map and a set of texture coordinates for an avatar, a means is provided to limit access to avatars to individuals who are in possession of the identification data and thus the use of an avatar can be restricted to appropriate individuals.
Figure 41 is a block diagram of the content of the memory of a user station 4 having stored therein an animation program. The memory has stored therein a vertex topology 325 corresponding to the vertex topology 294 for the generic wire mesh model 270 stored in the booth. The memory also has stored therein a geometry transformation program 326, an animation driver 327, a data input/output program 328 and a control program 329.
The geometry transformation program 326 comprises a program for calculating the transformations of points identified by geometry data to generate geometry data for an avatar in any of a number of poses. The geometry transformation program 326 is therefore similar to the movement deformation program 290 stored in the memory 125 of the computer system 120 of the booth 1. The animation driver 327 comprises means for generating a series of animation instructions which are used by the geometry transformation program 326 to generate geometry data for an avatar in a pose identified by the animation instructions. The animation driver 327 is arranged to generate an animated sequence of images on the basis of the transformed geometry, the vertex topology 325 and the texture coordinates and texture data received. The animation driver 327 also includes means for generating backgrounds and representations of other objects with which an animated avatar may be shown to interact.
The data input/output program 328 is arranged to coordinate the receipt of data via a keyboard and a modem connected to the Internet and is also arranged to transmit data from the user station 4 via the Internet. The control program 329 is arranged to coordinate the interactions between the data input/output program 328 and the animation driver 327.
Figure 42 is a flow diagram illustrating the steps taken by a user who wishes to download a previously generated avatar that is stored on a server 2 into his computer 4, the computer 4 having software stored within its memory which will utilise the geometry, texture map and texture coordinates for an avatar of an individual. Initially the user (s90) inputs the avatar identification number and the password printed on a card 140 printed from a booth 1 into the computer 4. The computer then transmits the password and avatar identification number to the server 2 via the Internet together with a request to download the data representing the geometry and colour rendering functions for the user stored on the server 2.
When the server 2 receives the request to download the geometry and rendering functions it checks that the password and avatar identification number received correspond to a transformation and rendering function stored within the server and then causes the computer 4 to download (s91) the data representative of the geometry and rendering functions for that user into its memory. By making the downloading of data representative of a geometry and a rendering function conditional upon the input of a correct avatar number and password, access to the avatar of an individual is restricted to individuals in possession of this information.
When the application software within the computer 4 is then invoked to generate representations of an individual using an avatar representing the individual, the avatar geometry is used to generate a wire mesh model of the individual which is then coloured using the texture map and texture coordinates (s92).
The application software then generates a series of graphical representations indicative of the movement of the individual for whom the image data has been obtained by generating computer representations (s93) of the avatar in a variety of different stances, wherein in each case the representation of the avatar is distorted in accordance with a set of joint angles which are utilised by an animation generation program to distort the avatar in a similar manner to that which has previously been described. In this way the avatar used by an individual on their computer can accurately represent an individual for whom image data has been obtained using the booth 1.
Figure 43 is a block diagram of a second embodiment of the present invention. In the first embodiment of the present invention a self contained booth is provided in which a screen for displaying instructions to a user before posing, a touch screen display 102 for inputting data about the user prior to posing in the booth and a screen 102 for displaying the generated avatar are all provided within a single apparatus. In this embodiment of the present invention apparatus will be described for generating and animating an avatar which is arranged to enable a greater throughput of people using the apparatus than is possible with a single booth. In this way queuing is reduced.
In accordance with this embodiment of the present invention, the apparatus comprises a display screen 330, a keyboard 331, a printer 332, a posing booth 333, a touch screen display 334 and an avatar animation system 335.
The keyboard 331, the printer 332, the touch screen display 334 and the avatar animation system 335 are all connected to the posing booth 333. The posing booth 333 is also connected to the Internet (not shown).
In accordance with this embodiment of the present invention the apparatus for generating an avatar is distributed over a number of distinct areas. In use, a user first views instructions on the display screen 330.
The user then moves to the keyboard 331 where they input data for identifying an avatar. Once data has been input using the keyboard 331 this causes the printer 332 to print out a card having identification data printed on it which is retrieved by the user so that they can retrieve the data from a server (not shown) later.
The user then moves to the posing booth 333. The posing booth comprises a booth that is similar to the booth 1 of the first embodiment except that an additional exit doorway is provided opposite to the first doorway.
A user enters the booth 333 via the first doorway and then image data of an individual is obtained using the posing booth 333 in the same way as has previously been described. The user then exits the booth 333 via the other doorway. The facial features of a user who has posed in the booth 333 are then displayed on the touch screen 334 where they can be corrected by the user or another individual. When the facial features have been confirmed the posing booth 333 then generates an avatar in the same way as has previously been described in relation to the first embodiment. The user then moves to the avatar animation system to view an animation using the generated avatar. The generated avatar is also transmitted to a server (not shown) via the Internet.
In this embodiment the throughput of people through the posing booth 333 is maximised since the steps of viewing instructions, inputting identification data, receiving a printed card, confirming the facial features of an image and viewing an animation take place outside of the booth 333, and these activities can therefore take place simultaneously for different users. The throughput of people using the apparatus of this embodiment is dependent upon the time taken to view instructions, enter personal data, receive a card, capture data within the booth, edit the facial features, generate the avatar and then view an animation. The speed with which people can be processed is dependent upon the slowest of these steps. In this embodiment of the present invention the most time consuming step is normally the time taken posing within the posing booth 333. The time taken posing is significantly increased due to the need to provide instructions on how to pose and the need to re-pose if the instructions are incorrectly understood. The throughput of people through the booth 333 can therefore be significantly increased by providing a manual override within the booth and having an assistant check that instructions are properly followed before manually initiating the taking of images using the digital cameras 36, 38. In this way the time allowed for individuals to adopt the correct pose can be reduced and the number of re-poses required is significantly reduced. The throughput through the booth 333 can also be increased by requiring users to make a payment prior to using the apparatus rather than making a payment within the booth 333.
Figure 44 is a block diagram of a third embodiment of the present invention. In this embodiment of the present invention a number of generated avatars are used and combined in a single animation sequence. The apparatus comprises a plurality of booths 1 similar to the booths of the first embodiment. The booths are all connected to an animation engine 350. The animation engine 350 has a memory which has stored therein a set of animation instructions 352, a set of content instructions 353 and a portion 354 available for the receipt of avatar data. The animation engine 350 is connected to a digital projector 355 for displaying a generated animation on a screen 356. The booths 1 are also all connected to the Internet (not shown) for transmitting avatar data in the manner described in relation to the first embodiment.
In this embodiment of the present invention users use the booths 1 to have avatar data representative of those users generated and sent to the animation engine 350. The avatar data is also sent to a server (not shown) via the Internet in the same manner as has been described in relation to the first embodiment. The animation engine 350 is arranged to store in memory 354 a plurality of sets of avatar data, for example 15 to 25 sets of avatar data. The animation engine 350 then utilises the animation instructions 352, the content data 353 and the avatar data stored in memory 354 to generate an animation. The animation is then displayed on the screen 356 using the digital projector 355.
In this embodiment of the present invention the animation instructions 352 comprise a set of data indicative of the relative positioning of the vertices of a generic avatar within an animation. Thus for example for each frame within an animation, data is stored of the position of each of the vertices of a generic avatar. The animation engine 350 is arranged to calculate, on the basis of a comparison between the avatar data for an individual stored in memory 354 and the generic avatar, a set of displacement vectors for displacing the vertices of a model of a generic avatar so as to distort the generic avatar in a predetermined stance to correspond to the vertex geometry of a generated avatar. These vector distortions are then applied to the animation data 352 to calculate the positions of the vertices of the generic model distorted to correspond to the model of the avatar stored in memory 354. The animation instructions 352, having been transformed by the vector distortions corresponding to the difference between the positions of corresponding points on the surface of the generic avatar and a generated avatar, are then used to generate representations of the generated avatar which are texture rendered using the texture render data for the generated avatar.
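A sketch of this displacement-vector scheme is given below: the difference between the generated avatar's rest geometry and the generic rest geometry is computed once and simply added to every pre-computed animation frame. The array shapes and the function name are assumptions.

```python
import numpy as np

def animate_with_displacement(frames, generic_rest, avatar_rest):
    """Apply a per-vertex displacement to pre-computed animation frames.

    frames       -- (F, N, 3) vertex positions of the generic avatar, one set per frame
    generic_rest -- (N, 3) vertex geometry of the generic avatar in the standard stance
    avatar_rest  -- (N, 3) vertex geometry of a generated avatar in the same stance
    The same displacement vectors are added to every frame, so no per-frame
    deformation has to be recomputed and many avatars can be animated in real time.
    """
    displacement = np.asarray(avatar_rest) - np.asarray(generic_rest)
    return np.asarray(frames) + displacement[None, :, :]
```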
In this way, since the animation engine 350 merely applies a vector transformation to the data 352, the rate at which animations can be generated is significantly increased. Thus the animation engine 350 can be arranged to generate animations in real time involving the plurality of avatars interacting, set against a background defined by the content data 353. Thus individuals who have had their avatars generated using the booth 1 can be shown interacting on the screen 356. The present embodiment is therefore particularly suitable for use within a cinema or location based entertainment.
Figure 45 is a block diagram of a fourth embodiment of the present invention. In this embodiment of the present invention a plurality of booths 1 are connected via a communications network 300 to a computer animation unit 363. Connected to the computer animation unit 363 are a plurality of control consoles 365, each of the control consoles 365 being connected to a display 370.
The computer animation unit 363 in this embodiment of the present invention is arranged to receive avatar geometries and texture rendering functions calculated from image data scanned in using the booths 1 in an identical manner to that described in the first embodiment. The avatar geometries and rendering functions are then used by the computer animation unit 363 to generate sequences of computer graphic representations using the avatar geometries and texture rendering functions in accordance with animation instructions generated from instructions received via the control consoles 365. The generated sequences of computer graphic representations of the individuals are displayed on the displays 370. Thus in this example of the present invention the apparatus could be used as the basis of a location based entertainment in which users generate an avatar using a booth 1 and then control the actions of their avatar using the control console 365, with the results being shown as a sequence of computer animated representations on the display 370 connected with their console 365.
Although in relation to the first embodiment a booth has been described in which images of a user are obtained in four orthogonal positions as shown in Figures 4 to 7, it will be appreciated that it may not be possible for some individuals, and particularly those who use wheelchairs, to adopt the required poses. A further embodiment of the present invention will now be described in which the booth of the first embodiment is adapted for use by wheelchair-bound individuals.
Figure 46 is a plan view of an amended booth adapted for use by both able bodied users and users who are confined to a wheelchair. The plan view of Figure 46 is identical to the plan view of Figure 9 except that an additional area of floor space 400 is provided in front of the under lit floor space 60 of the light box. The additional floor space 400 is also under lit in the same manner in which the floor 60 is under lit. The combined floor 400, 60 provides an under lit floor against which an image of a user in a wheelchair can be obtained. In addition to the additional floor space 400, four strips of LEDs 402,404,406,408 are also provided in a square arrangement in the centre of the combined floor space 400,60. The LEDs 402-408 provide indicators in a similar manner to the foot indicators to show the way in which the wheels of a wheelchair are to be oriented when image data is to be obtained of a user in a wheelchair. The booth is further modified in that the light proof box 47 containing the digital cameras 36,38 is arranged to be rotated about a pivot point to redirect the view of the cameras to account for the difference in height of a user standing and a user sitting in a wheelchair. The memory of the booth also has additionally stored therein a generic wire mesh model for an individual sitting in a wheelchair and the user instructions program is modified to instruct wheelchair users to adopt required poses.
In accordance with this embodiment of the present invention when an individual first enters the booth he is given the option of either obtaining an able bodied avatar or a wheelchair avatar. If the user selects the able bodied avatar the processing of the booth is identical to that which has been previously described.
If the user selects a wheelchair avatar this causes the light proof box 47 containing the cameras to be reoriented and the lights associated with the additional portion of the floor 400 to be illuminated. The instructions to a user to pose in the light box are then displayed, instructing a user in a wheelchair to orient the wheels of the chair with the LEDs 402-408 rather than adopting a pose by placing their feet on the foot lights. When images of the user are taken and processed the obtained outlines are then compared with a generic wire mesh model for an individual sitting in a wheelchair.
Figure 47 is an illustrative representation of a wire mesh model of an individual sitting in a wheelchair. This generic wire mesh model for an individual sitting in a wheelchair is then modified in the manner which has been described in relation to the first embodiment. In this way avatars can be generated of individuals who are unable to adopt specified poses within the light box.
Figure 48 is a cross-section through a booth in accordance with another embodiment of the present invention. In this embodiment of the present invention, in contrast to the first embodiment in which image data is obtained using a pair of cameras 36,38, two pairs of cameras are used to obtain image data. A booth in accordance with this embodiment of the present invention has an additional set of cameras 500,502 and an additional arrangement of mirrors 504,506, with the first set of cameras 36,38 being provided below the second set of cameras 500,502. Apart from the repositioning of the first set of cameras 36,38 and the mirror arrangement 40,42, the booth is substantially identical to that which has been described in relation to the first embodiment of the present invention.
By providing two sets of cameras one above another it is no longer necessary for the digital cameras 36,38 to obtain image data of the entire height of an individual 46, since image data for the top of the individual can be obtained using the second set of cameras 500,502. The need to distance the light box 44 from the cameras 36,38 is reduced and the size of the central section 32 of the booth 1 can be reduced accordingly.
If more than one set of cameras is used to obtain image data of an individual standing in the booth, it is however necessary to provide a way in which images corresponding to the top half of the individual standing in the light box can be matched with an image for the bottom half of the individual. Figure 49 illustrates the interior of a light box in accordance with this embodiment of the present invention. In addition to the alignment patches 95-98 in the corners of the booth, a further set of six alignment patches 510-515 is provided in the middle of the booth. By providing these additional alignment patches the cameras 36,38,500,502 can be arranged to view the top patches 97, 98, 510-512 or the bottom patches 513-515, 95, 96 respectively. The points in the images corresponding to the central patches obtained by the different sets of cameras can then be identified and provide a way in which image data from the two different cameras can be aligned so that they may be combined into a single image. The provision of at least three reference points which appear in both images provides sufficient information to align the images, accounting for differences in relative position, orientation and scale.
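As a purely illustrative sketch (not taken from the specification), the correspondence between the patch centres seen by the two camera sets could be expressed as a least-squares 2D similarity transform; the function name and the use of the Umeyama closed-form solution are assumptions.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Estimate scale s, rotation R and translation t so that dst ~= s * R @ src + t,
    given at least three corresponding 2D points (e.g. alignment patch centres seen
    by the upper and lower camera pairs)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)                   # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_dst - scale * R @ mu_src
    return scale, R, t
```

Applying the estimated transform to one image's coordinates brings the shared patches into registration so the upper and lower views can be composited into a single image.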
A further embodiment of the present invention will now be described in which an avatar of an individual in the absence of clothes is generated using data about an individual's weight. Figure 50 is a cross-sectional view of a booth in accordance with this embodiment of the present invention. The booth is identical to the booth of Figure 3 except that in addition the light box has provided within its floor a weighing apparatus 550 for determining the weight of an individual standing in the booth. In addition the avatar construction program 134 is arranged to utilise this weight data to generate an avatar as will be described in detail below. The remaining portions of the booth are identical to the booth of the first embodiment and their description will not be repeated here. The processing of a booth in accordance with this embodiment will now be described with reference to Figure 51.
In accordance with this embodiment of the present invention the user selects whether they wish to generate a male or female avatar (s101). A user then poses within the light box 44 and the weighing apparatus 550 weighs (s102) the individual whilst the digital cameras 36,38 obtain image data. The avatar construction program then processes (s103) the image data obtained of an individual posing in the light box 44 in the manner which has previously been described in order to generate an avatar. The avatar construction program 134 then determines the joint positions (s104) corresponding to the joints of the generated avatar and stores these in memory.
The avatar construction program 134 then calculates an expected volume (s105) of an avatar based on the weight data obtained using the weighing means 550 and stored data of the average density of people.
A reference male or female model avatar is then scaled (s106) so that the joint positions of the reference avatar correspond to the calculated joint positions for the model avatar of the individual. The volume of the scaled reference avatar is then compared with the volume calculated from the weight data. The cross-sections of the scaled avatar's trunk, legs and arms are then increased or decreased proportionately so that the avatar occupies the volume calculated from the weight data.
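By way of illustration only (not part of the specification), the expected volume and the proportional cross-section adjustment described above might be computed as follows; the density figure, the function name and the simple proportionality rule are assumptions.

```python
# Illustrative sketch only: an assumed average body density, roughly that of water.
AVERAGE_BODY_DENSITY_KG_M3 = 1010.0

def cross_section_scale(weight_kg, scaled_reference_volume_m3):
    """Return the factor by which the trunk, leg and arm cross-sections of the
    joint-scaled reference avatar would be enlarged or reduced so that the avatar
    occupies the volume expected for an individual of the measured weight."""
    expected_volume_m3 = weight_kg / AVERAGE_BODY_DENSITY_KG_M3
    return expected_volume_m3 / scaled_reference_volume_m3
```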
The scaled trunk, legs and arms are then combined with the representations for the head, hands and feet (s107) of the avatar of the individual, and a texture rendering function for the trunk, legs and arms of the combined avatar is then determined on the basis of a comparison with a stored texture rendering function for the reference avatar and the colour of the individual's skin as is apparent on the user's face.
In this way, by obtaining weight data of an individual, an avatar representative of the user in the absence of any clothing can be generated. Computer representations of clothing can then be added to the naked avatar to vary the appearance of the avatar and to enable the clothing of the avatar to be properly animated and exchanged for computer generated clothing from a library of predefined computer generated clothing. Thus the present embodiment of the invention enables avatars representative of individuals in the absence of their clothes to be generated without requiring users to remove their clothes during the scanning process.
Although the previous embodiments of the present invention have been described in which the booth 1 has been used to obtain image data using two digital cameras 36,38, it will be appreciated that a video camera could be used to obtain the images with the images being subsequently digitised. It will also be appreciated that instead of a pair of cameras taking the two images a single digital camera could be used, with the digital camera taking two images one after another. Although in the embodiments previously described four pairs of images of an individual in four different poses are obtained using the booth 1, it will be appreciated that by providing more cameras within the booth image data for a user from different perspectives could be obtained from a single pose.
It will also be appreciated that more data about how an avatar representing an individual could be animated could be obtained by taking images of an individual in further poses. For example, by obtaining images of an individual with their arms or legs bent it would be possible to establish from those images a better approximation of where the joints of an individual's avatar should be placed. Alternatively images of an individual could be obtained whilst that individual is made to walk or run on a moving platform to obtain data on exactly how that individual walks or runs, which could be used as a basis for generating an animated sequence of representations of that individual with an avatar that mimicked the detected motion.
Although in the described embodiments apparatus is described which aligns images of a booth taken with different cameras every time a pair of images of the booth are taken, it will be appreciated that assessments of the alignment of images between the two cameras could be made only periodically, for example whenever the booth 1 is switched on or serviced.
Although in the previous embodiments the capture of data within the booth 1 has been described in terms of taking photos of an individual against a uniformly back lit wall 62 that is back lit by a number of fluorescent lights 70, it will be appreciated that other means could be used to illuminate a back drop. Other means of illuminating the back drop could include using 'electric paper' to generate a uniformly bright background against which an image of a user is obtained. Alternatively the back wall could be covered by phosphorescent paint which would effectively illuminate the user after the activation of the flash lights 56. A further alternative means of providing back illumination would be to provide a back drop comprising a retro reflective material and to illuminate the back drop using a light source having a known wavelength which could then be detected in order to calculate the portions of an image which did not correspond to part of the user.
Although in the previous embodiments a booth 1 has been described in which two images of a user are obtained in each of the poses adopted, it will be appreciated that apparatus could be provided in which only a single image is obtained of the user in each pose, with a silhouette of the user being calculated from that single image. Methods of generating silhouettes from a single image could involve providing a patterned background which can be detected by the computer to determine where the back wall 62 is visible in an image, or alternatively chroma key techniques could be used to calculate those portions of an image which correspond to a user and those portions which do not.
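Purely as an illustration of the chroma key alternative mentioned above (not part of the specification), background pixels could be classified by their colour distance from the backdrop; the key colour, tolerance and function name are assumptions.

```python
import numpy as np

def chroma_key_silhouette(image_rgb, key_colour=(0, 177, 64), tolerance=60.0):
    """Return a boolean mask that is True where a pixel is judged to belong to the
    user, i.e. where its colour differs from the backdrop key colour by more than
    the given tolerance.  image_rgb is an (H, W, 3) array."""
    diff = np.linalg.norm(image_rgb.astype(float) - np.asarray(key_colour, float), axis=-1)
    return diff > tolerance
```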
Although in the previous embodiments the booth 1 has been described in terms of apparatus which generates the transformation function transforming a single generic model of an avatar from outline data calculated from a silhouette, it will be appreciated that any form of scanning means for scanning in data could be used. For example laser stripe apparatus could be used to generate a three-dimensional computer model of an individual which could be used as the basis for calculating a transformation of the geometry of a wire mesh for a generic avatar into an avatar representing an individual.
Although in previous embodiments a booth 1 has been described for obtaining image data representative of an individual, it will be appreciated that image data could be obtained in other ways. For example images of an individual taken in different poses could be scanned in from photographs of the individual in the different poses and in this way the present invention could be implemented without using a dedicated booth to capture image data.
Furthermore, although in the previous embodiments a transformation of a single stored generic avatar geometry has been described, it will be appreciated that a number of different generic avatars could be stored and a determination could be made, prior to the calculation of a transformation from a stored avatar geometry, as to which of the stored avatar geometries initially most closely resembles the calculated geometry for a user. The geometry most closely resembling a user could then be used as the basis for being transformed to correspond to the calculated geometry. Thus for example different generic avatars could be stored representing adults, children, men and women, or various initial body shapes, which could then form the basis for generating avatars representing specific individuals. The selection of which generic avatar is used as a model could alternatively be made by the user.
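As a minimal sketch of one possible selection rule (not drawn from the specification), the stored generic geometry nearest to the calculated geometry could be chosen by a sum-of-squared-distances comparison over corresponding vertices; the function name and the distance measure are assumptions.

```python
import numpy as np

def select_generic_avatar(calculated_vertices, generic_avatars):
    """Pick the stored generic avatar whose rest-stance vertices are closest to the
    geometry calculated for the user.  generic_avatars maps a label (e.g. 'adult male')
    to a (V, 3) vertex array sharing the ordering of calculated_vertices."""
    def distance(item):
        _, vertices = item
        return float(np.sum((vertices - calculated_vertices) ** 2))
    label, vertices = min(generic_avatars.items(), key=distance)
    return label, vertices
```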
The generated model avatars could also have varying levels of detail for use in different applications.
Thus for example avatars having a polygonal mesh of 2,600 polygons could be generated for use in some software, with a polygonal mesh of 10,000 or 40,000 polygons being used for other applications. The model avatar data for a lower resolution of polygons could be a subset of a model of higher resolution. Thus where a model avatar is transmitted across a network a progressive mesh download could be used, in which representations of avatars are increased in detail as more data becomes available.
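The following sketch (illustrative only, not from the specification) shows the subset idea: if the face list is ordered so that each prefix forms a valid coarser mesh, a client can render progressively finer avatars as data arrives; the ordering assumption and the names used are hypothetical.

```python
def progressive_levels(faces, levels=(2600, 10000, 40000)):
    """Yield successively larger prefixes of an ordered face list, so that a coarse
    2,600-polygon avatar can be displayed first and refined to 10,000 and then
    40,000 polygons as more of the mesh data is downloaded."""
    for polygon_count in levels:
        yield faces[:polygon_count]
```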
Although in previous embodiments foot positioning indicators 76,78 have been described in terms of transfers and lights on the floor 60 of a booth, other foot positioning indicators could be used; for example a raised portion of the floor could indicate where a user is meant to place their feet. Other ways in which a user could be instructed to adopt a particular position would include spot lights illuminating a portion of the user, where the user is instructed to adopt a stance in which specific portions of their body are illuminated by the spot lights. Thus for example a light could illuminate the position where the tip of a user's nose is expected to be placed in order to ensure that a user adopted a stance in which their head was in a particular position.
In addition to obtaining data representative of a user's external appearance, other data could also be obtained using a booth 1. Thus for example a sound recording of a user speaking could be obtained within the booth in order to generate oral representations of the user speaking which could be combined with the avatar generated representing the user's external appearance.
Although in previous embodiments the transmission of data representative of a computer model of an individual has been described in terms of the transfer of signals via computer networks, it will be appreciated that other means could be used to transmit data. Thus for example a booth could be provided in which data is recorded on a computer disc, such as a CD ROM, magnetic disc or magneto-optical disc, or on a computer chip, which is output to the user who takes the disc, CD ROM or computer chip to their computer to make use of the recorded model avatar data.
Although reference has been made to the generation of printed cards containing identification data for retrieval of an avatar from a central storage system, it will be appreciated that other means of recording this information could be delivered to a user. Thus for example swipe cards could be generated by a booth, the swipe cards having recorded thereon data for retrieving an avatar at a later stage. Thus for example where avatars are used within location based entertainments, a swipe card could provide means by which a user causes avatar data which has been generated in one part of the location based entertainment to be downloaded for use in another attraction.
Although in the previous embodiments the application of avatar data has been described in terms of using the avatar data for the generation of animations, it will be appreciated that the avatar data representing a computer model of an individual could be used in numerous other ways. For example, rather than using the model for generating sequences of images of an individual in different poses, the avatar data could be used to generate representations of an individual in a single pose. Thus for example application software for generating images of individuals in single poses on the basis of avatar data could be provided which is arranged to receive the avatar data and provide a user interface for enabling a user to select the pose in which an avatar appears.
Other applications which might use the avatar data generated in accordance with this invention could also include communication software in which model representations of individuals are combined with data input by a user which is transmitted to other users. Thus for example an individual's avatar could be transmitted to another user's computer, where it is combined with verbal data input by the individual and transmitted to that other user's computer, to effect a form of virtual conferencing.
Other forms of application for the avatar data generated using the present invention could also involve application software in which the avatar data itself is edited; thus for example the texture map, texture coordinates and geometry could be edited to change the appearance of an avatar, either by changing the appearance of the clothes of an avatar or their hairstyle, or by in some way editing the bodily appearance of an avatar.
Although reference has been made to having application software stored within the personal computer of a user, it will be appreciated that such application software could initially be stored within a server and downloaded together with the avatar representation of an individual. It will also be appreciated that payment for an avatar could be made at this stage or alternatively the generation of avatars could be entirely free from the requirement of making any payment at all.

Claims (96)

1. An apparatus for generating computer models of individuals for generating graphical representations of individuals in different poses comprising:
storage means for storing a computer model of a generic person; means for generating representations of a computer model of a person in poses in accordance with pose instructions; data input means for obtaining data of an individual, representative of the external appearance of said individual in a pose; determination means for determining the pose adopted by an individual in the data obtained by said data input means; comparison means for comparing said obtained data and data generated by said means for generating representation of said model of a person in said pose determined by said determination means, and model generation means for generating a computer model of said individual for generating computer graphical representations of said individual in different poses, wherein said model generation means is arranged to generate said model in accordance with said comparison by said comparison means.
2. Apparatus in accordance with claim 1 wherein said data input means comprises laser stripe scanning apparatus to identify a plurality of points on the surface of an individual in said pose.
3. Apparatus in accordance with claim 1, wherein said data input means comprises means for obtaining data representative of an outline of said individual in said pose.
4. Apparatus in accordance with claim 1, wherein said data input means comprises at least one camera for obtaining image data of an individual in said pose.
5. Apparatus in accordance with claim 4, wherein said at least one camera comprises a digital camera.
6. Apparatus in accordance with any of claims 4 or 5 wherein said data input means further comprises means for processing image data of an individual in a pose to obtain an outline of said individual in said pose.
7. Apparatus in accordance with claim 6, wherein said means for obtaining outline data comprises means providing a predefined background wherein said data input means is arranged to obtain an image of an individual in said pose against said predefined background wherein said means for processing said image data is arranged to identify portions of said image data corresponding to said background and processing said image data to identify the outline of an individual in said image.
8. Apparatus in accordance with claim 7, wherein said processing means is arranged to process said image data to identify portions of said image data corresponding to background by performing a thresholding operation.
9. Apparatus in accordance with claim 8, wherein said means providing a predefined background comprises an illuminated background wherein said processing means is arranged to perform a thresholding operation on the basis of the luminance of portions of an image of an individual in a pose.
10. Apparatus in accordance with claim 9, wherein said means providing a predefined background comprises a light box comprising a floor, back wall and roof, said floor, back wall and roof comprising a translucent material, and means for illuminating said floor, back wall and roof from beneath, behind and above respectively.
11. Apparatus in accordance with claim 8, wherein said background comprises a background of uniform pattern or colour wherein said thresholding operation is arranged to identify within said images of said person in said pose, portions of said image corresponding to said pattern or colour.
12. Apparatus in accordance with any preceding claim wherein said determination means comprises at least two foot marks for indicating where an individual should place their feet when adopting said pose.
13. Apparatus in accordance with any preceding claim, wherein said determination means comprises indicator means for indicating a position at which a user should look when adopting said pose.
14. Apparatus in accordance with any preceding claim wherein said determination means comprise means for instructing a user to adopt a predefined pose.
15. Apparatus in accordance with claim 14, wherein said instruction means comprise display means for displaying an illustration of said pose to be adopted by said individual.
16. Apparatus in accordance with claim 14 or claim 15, wherein said instruction means comprise speakers for broadcasting oral instructions to an individual to adopt a specific pose.
17. Apparatus in accordance with any preceding claim, wherein said determination means further comprises means for calculating the pose adopted by an individual from said data representative of the external appearance of said individual in a pose.
18. Apparatus in accordance with claim 17, wherein said pose calculation means is arranged to identify a plurality of points on the surface of an individual wherein said pose adopted by an individual is determined from the relative orientation of said identified points.
19. Apparatus in accordance with claim 18, wherein said plurality of points identified from said data comprise any of the top of the user's head, either side of a user's neck, the tips of the user's hands, the tips of the user's feet, the user's armpits and the user's crotch.
20. Apparatus in accordance with any preceding claim, wherein said determination means for determining the pose adopted by an individual comprises input means for inputting data indicative of the stance of said individual in said pose.
21. Apparatus in accordance with any preceding claim wherein said comparison means comprises scale identification means wherein said scale identification means is arranged to determine from said data obtained by said scanning means scale data indicative of the height of said individual in said pose wherein said comparison means is arranged to compare said obtained data and data generated by said means for generating representation of said model of a person scaled in accordance with said scale data.
22. Apparatus in accordance with any preceding claim, wherein said comparison means is arranged to compare data representative of points on the surface of said model of a person in said pose determined by said determination means and data representative of points on the surface of said individual in said pose obtained by said data input means.
23. Apparatus in accordance with any preceding claim wherein said comparison means further comprises feature identification means for identifying from said data obtained by said data input means portions of said data corresponding to body parts of an individual wherein said comparison means is arranged to compare the relative positions of said body parts of an individual relative to the representations of said body parts in said model of said person in said pose.
24. Apparatus in accordance with claim 23, wherein said body parts comprise any of the eyes, nose, ears or mouth of an individual.
25. Apparatus in accordance with claim 23 or claim 24, wherein said feature detection means is arranged to determine the position of body parts based upon a determination of the luminance or rate of change of luminance of portions of images of said individual in said pose.
26. Apparatus in accordance with any preceding claim wherein said storage means for storing a computer model of a generic person is arranged to store geometry data representative of the relative positioning of a predefined number of points on the surface of a computer wire mesh model of a generic person and data defining a wire mesh topology comprising data representative of the connection of said predetermined number of points on the surface of said generic model of a person connected to others of said predetermined number of points, wherein said means for generating representations is arranged to generate calculated geometry data for said model of said person in a pose in accordance with animation instructions on the basis of said stored geometry data.
27. Apparatus in accordance with claim 26, wherein said comparison means is arranged to calculate the relative position of points on the surface of an individual identified by obtained data relative to corresponding points on the surface of a computer representation of a generic person in said pose determined by said determination means, wherein said model generation means is arranged to generate a computer model of an individual comprising geometry data representative of the surface of said individual in a predetermined stance, comprising the relative positioning of a predetermined number of points on the surface of a model of said individual in said predetermined stance being points representative of a wire mesh model of said individual connected to other points on the surface of said model in accordance with said topology data stored in said storage means defining topology data for said computer model of a generic person.
28. Apparatus in accordance with claim 26 or 27 when dependent upon claim 23, 24 or 25 wherein said model generation means for generating a computer model is arranged to generate geometry data for points on the surface of said individual identified by said feature identification means corresponding to points identifying said features in said geometry data of said model of a generic person stored in said storage means.
29. Apparatus in accordance with any of claims 27 to 29 wherein said model generation means is arranged to generate geometry data comprising points representative of a predefined number of points on the surface of a model of an individual in a predetermined stance based upon a comparison between points identified by said scanning means and said determination means corresponding to points on the surface of a model of a generic person and interpolation of points on the surface of said model of said individual which are not identified as corresponding to points identified by said scanning means and said determination means.
30. Apparatus in accordance with any preceding claim when directly or indirectly dependent upon claim 4 wherein said model generation means is arranged to generate a texture rendering function for texture rendering polygons of a wire mesh computer model of an individual in a predetermined stance by processing image data obtained by said at least one camera of an individual in said pose and comparing said image data with a representation of said computer model of said individual in said pose.
31. Apparatus in accordance with any preceding claim wherein said data input means is arranged to obtain data of an individual representative of the external appearance of said individual in a plurality of poses, said apparatus further comprising means for processing said data to calculate composite data representative of said individual in a single pose.
32. Apparatus in accordance with claim 31 wherein said scan data processing means is arranged to generate said composite data on the basis of said determination by said determination means of the pose adopted by an individual wherein said processing means is arranged to adjust said data for an individual so that data obtained by a plurality of poses corresponds to data representative of a single pose.
33. Apparatus in accordance with claim 32, wherein said scan data processing means is arranged to adjust scan data identified by said determination means as corresponding to the same points on the surface of an individual in data representative of said individual in different poses so that said points of said individual in said different poses correspond to the same points on the surface of a model of said individual.
34. Apparatus in accordance with claim 33, wherein said data processing means is arranged to adjust data representative of an individual in a pose to remove data representative of parts of the surface of an individual in a pose which corresponds to parts of the surface of an individual which is represented in another set of data obtained of said individual in another pose.
35. Apparatus for generating computer animations of an individual representative of the movement of said individual comprising:
apparatus for generating computer models of individuals in accordance with any preceding claim, means for storing data representative of a sequence of animation instructions and means for displaying representations of said generated computer model of an individual in poses in accordance with said animation instructions using said generated model.
36. Apparatus in accordance with any preceding claim further comprising: means for inputting model identification data; and means for transmitting said model of said individual and said model identification data to a server.
37. Apparatus in accordance with claim 36, further comprising a printer for printing a hard copy information carrier having recorded thereon identification data for identifying a computer model transmitted to a server.
38. Apparatus in accordance with any preceding claim, further comprising recording means for recording on an information carrier data representative of said computer model generated by said model generation means.
39. A method for generating computer models of individuals for generating graphical representations of individuals in different poses comprising the steps of:
storing a computer model of a generic person in a predefined pose; scanning an individual to obtain data representative of the external appearance of an individual in a pose; determining the pose adopted by an individual scanned in said scanning step; generating a computer representation of said generic person in said pose determined in said determination step; comparing said data representative of the external appearance of an individual in said pose with data generated of said stored generic model of a person in said pose determined in said determination step; and generating a computer model of said individual on the basis of said comparison.
40. A method in accordance with claim 39 wherein said scanning step comprises scanning an individual using laser stripe scanning apparatus to identify a plurality of points on the surface of an individual in said pose.
41. A method in accordance with claim 39, wherein said scanning step comprises obtaining data representative of an outline of said individual in said pose.
42. A method in accordance with claim 39, wherein said scanning step comprises obtaining image data using a camera.
43. A method in accordance with claim 42, wherein said scanning step further comprises processing image data of an individual in a pose to obtain an outline of said individual in said pose.
44. A method in accordance with claim 43, wherein said processing step comprises processing said image data to identify portions of said image data corresponding to background by performing a thresholding operation.
45. A method in accordance with any of claims 39 to 44, wherein said determination step comprises instructing a user to adopt a predefined pose.
46. A method in accordance with claim 45, wherein said instruction comprises displaying an illustration of said pose to be adopted by said individual.
47. A method in accordance with claim 45, wherein said instructions comprise broadcasting oral instructions to an individual to adopt a specific pose.
48. A method in accordance with any of claims 39 to 47, wherein said determination step further comprises calculating the pose adopted by an individual scanned by said scanning means from said data representative of the external appearance of said individual in a pose.
49. A method in accordance with claim 48, wherein said calculation comprises identifying a plurality of points on the surface of an individual wherein said pose adopted by an individual is determined from the relative orientation of said identified points.
50. A method in accordance with claim 49, wherein said plurality of points identified from said scan data comprise any of the top of the user's head either side of a user's neck the tips of the user's hands, the tips of the user's feet, the user's armpits and the user's crotch.
51. A method in accordance with any of claims 39 to 50, wherein said determination means for determining the pose adopted by an individual scanned by said scanning means comprises input means for inputting data indicative of the stance of said individual in said pose.
52. A method in accordance with any of claims 39 to 51, wherein said comparison step comprises identifying a scale of an image and comparing said scan data and data generated representative of said model of a person scaled in accordance with said scale.
53. A method in accordance with any of claims 39 to 52, wherein said comparison step further comprises the steps of identifying from said data obtained by said scanning means portions of said data corresponding to body parts of an individual; and comparing the relative positions of said body parts of an individual relative to the representations of said body parts in said model of said person in said pose.
54. A method in accordance with claim 53, wherein said body parts comprise any of the eyes, nose, ears or mouth of an individual.
55. A method in accordance with claim 53 or claim 54, wherein identification of the position of body parts from image data is based upon a determination of the luminance or rate of change of luminance of portions of images of said individual in said pose.
56. A method in accordance with any of claims 39 to 55, wherein said comparison step comprises calculating the relative position of points on the surface of an individual identified by said scan data and said determination step, relative to corresponding points on the surface of a computer representation of a generic person in said pose and generating a computer model of an individual comprising geometry data representative of the surface of said individual in a predetermined stance, comprising the relative positioning of a predetermined number of points on the surface of a model of said individual in said predetermined stance being points representative of a wire mesh model of said individual connected to other points on the surface of said model in accordance with a predetermined topology.
57. A method in accordance with any of claims 39 to 56, further comprising generating a texture rendering function for texture rendering polygons of a wire mesh computer model of an individual in a predetermined stance by processing image data of said individual in at least one pose and comparing said image data with a representation of said computer model of said individual in said at least one pose.
58. A method in accordance with any of claims 39 to 56, comprising scanning an individual in a plurality of poses wherein said generation of said computer model is based upon data obtained in said plurality of scans.
59. A method in accordance with any of claims 39 to 58, further comprising:
means for inputting model identification data; and means for transmitting said model of said individual and said model identification data to a server.
60. A method in accordance with any of claims 39 to 59, further comprising the step of recording on an information carrier data representative of said computer model generated by said model generation means.
61. A method for generating computer animations of an individual representative of the movement of said individual comprising:
the steps of generating computer models of individuals in accordance with any of claims 39 to 60, storing data representative of a sequence of animation instructions and displaying representations of said generated computer model of an individual in poses in accordance with said animation instructions generated by said animation means.
62. A method for generating computer models of individuals for generating graphical representations of individuals in different poses comprising the steps of: storing a computer model of a generic person; inputting data representative of the external appearance of an individual in a pose; inputting data indicative of said pose; generating a computer representation of said generic person in said pose determined in said determination step; comparing said input data representative of the external appearance of said individual with data generated of said generic person in said pose adopted by said individual as defined by said input data; and generating a computer model of said individual on the basis of said comparison.
63. An information carrier having recorded thereon data for generating a computer model of the external appearance of an individual in a plurality of poses, said data comprising: geometry data representative of the relative positioning of a predefined number of points on the surface of said individual in a predetermined stance; and data representative of a texture rendering function for texture rendering polygons of a wire mesh computer model of said individual in said predetermined stance, said wire mesh computer model comprising the connection of said points on the surface of said individual defined by said geometry data connected to other of said points defined by said geometry data, in accordance with a predefined set of connection data.
64. A computer apparatus having stored therein data for generating computer models of the external appearance of a plurality of individuals in a plurality of poses, comprising:
means for storing identification data for a plurality of computer models of the external appearance of a plurality of individuals in a predefined stance; means for storing geometry data representative of the relative positioning of a predefined number of points on the surface of each of said plurality of individuals in said predetermined stance; and means for storing data representative of a plurality of texture rendering functions for texture rendering polygons of wire mesh computer models of each of said plurality of individuals in said predetermined stance, said wire mesh computer models comprising the connection of said points on the surface of a said individual defined by said geometry data connected to other of said points defined by said geometry data, wherein said computer apparatus is arranged to associate said identification data for a model of an individual with geometry data and a texture rendering function for said model.
65. An apparatus in accordance with claim 64, further comprising:
means for receiving data, means for determining whether said data corresponds to any of said plurality of identification data, and means for transmitting geometry data and texture rendering functions associated with received data corresponding to identification data.
66. A process of generating a computer model of an individual for generating computer graphical representations of an individual in a plurality of poses comprising the steps of:
providing a computer network comprising:
a first computer apparatus in accordance with any of claims 64 to 65; and a second computer apparatus; providing an information carrier having recorded thereon identification data identifying data stored on said first computer apparatus, for generating a computer model representative of the external appearance of an individual in any of a plurality of poses; inputting said identification data from said information carrier into said second computer apparatus; transmitting said identification data from said second computer apparatus to said first computer apparatus; and transmitting said data for generating a computer model representative of the external appearance of an individual in any of a plurality of poses, identified by said identification data, from said first computer to said second computer.
67. A computer apparatus for outputting computer graphical representations of a computer model of an individual in poses in accordance with animation instructions, said apparatus comprising:
means defining a wire mesh topology comprising data representative of the connection of a predetermined number of points on the surface of a generic model of a person connected to other of said predetermined number of points on the surface of a generic model of a person; means for receiving geometry data representative of the relative positioning of said predetermined number of points on the surface of an individual in a predetermined stance; means for receiving data representative of a texture rendering function for texture rendering polygons of a wire mesh computer model of said individual in said predetermined stance, said wire mesh computer model comprising the connection of said points on the surface of said individual defined by geometry data connected to other of said points defined by said geometry data in accordance with said defined wire mesh topology; and means for outputting computer graphical representations of said individual in stances in accordance with animation instructions, comprising:
means for calculating a transformation function transforming received geometry data representative of the relative positioning of said predefined number of points on the surface of said model of said individual in said predetermined stance into geometry data representative of corresponding points on the surface of said model of an individual in a stance defined by animation instructions; means for calculating a texture rendering function for said model of said individual in said stance defined by animation instructions based upon said calculated transformation and received data representative of a texture rendering function for texturing a wire mesh model of said individual in said predetermined stance; and output means for outputting a computer graphical representation of said model of said individual on the basis of said calculated texture rendering function and said received geometry data transformed in accordance with said calculated transformation.
68. A computer apparatus in accordance with claim 67, further comprising means for inputting data and means for transmitting data to request the transmission of geometry data and data representative of a texture rendering function.
69. An apparatus for generating computer models of individuals in accordance with any of claims 1 to 38, further comprising weighing means for obtaining weight data of an individual; wherein said generation means is arranged to generate a computer model of said individual on the basis of a comparison of the volume of a model of a generic person scaled so as to occupy a volume corresponding to the expected volume of a model of an individual having said weight.
70. Apparatus in accordance with claim 69, wherein said model generation means is arranged to combine portions of a model generated on the basis of comparison of data representative of the external appearance of an individual and data generated of said model of a person in said pose determined by said determination means, and a model of a generic person scaled to have a representative volume representative of the volume of a model of an individual having the weight of said individual from whom weight data has been obtained.
71. Apparatus in accordance with claim 70, wherein said model generation means is arranged to generate a texture rendering function for rendering the colour on a model of an individual wherein said texture rendering function is generated on the basis of a stored texture rendering function for a generic individual and image data obtained for an image of an individual's face.
72. Apparatus for generating computer models of people, comprising:
at least one booth adapted for receiving a person; means for obtaining an image of a person in the booth; means for creating a three-dimensional computer model from said image; payment means associated with said booth; and means for making said three-dimensional computer model available in a predetermined way upon a payment being made utilising said payment means.
73. Apparatus for generating computer models of people, comprising: means for obtaining an image of a person and creating a three-dimensional computer model from said image; payment means associated with said obtaining means; and means for making said three-dimensional computer model available in a predetermined way upon a payment being made utilising said payment means.
74. Apparatus for generating computer models of people, comprising:
means for obtaining an image of a person and creating a three-dimensional animated computer model from said image; payment means associated with said obtaining means; and means for making said three-dimensional animated computer model available in a predetermined way upon a payment being made utilising said payment means.
75. A booth for recording image data for generating computer models of people comprising: wall means defining a zone for receiving an individual; lighting apparatus arranged to illuminate the surface of an individual within said zone; image recording apparatus arranged to obtain image data representative of the external appearance of an individual; and an audio visual instruction display for instructing an individual to adopt one or more specific poses within said booth.
76. Apparatus for generating computer models of people comprising: a booth incorporating apparatus for recording an image of the external appearance of an individual; a dispenser for dispensing a carrier having a password recorded thereon; and means for storing a computer model generated on the basis of said image data, wherein said storage means is arranged to output said model on receipt of data representative of said password.
77. An apparatus for generating images of individuals comprising: a booth for receiving an individual; apparatus for recording image data of the external appearance of an individual in a first number of poses within said booth; and means for deriving from said image data images representative of said individual in any of a second number of poses, said second number being greater than said first number.
78. An apparatus in accordance with claim 77, further comprising means for generating an animation sequence representative of the motion of an individual, said animation sequence comprising a sequence of contiguous images of said individual in a plurality of poses.
79. A system for generating animated computer models of individuals comprising: at least one booth for recording image data of individuals; means for generating from said image data computer models of said individuals; and a plurality of terminals having stored therein animation means arranged to receive said computer models wherein said animation means are arranged to generate animation sequences of images of said individuals in any of a plurality of poses using said generated computer models.
80. An apparatus for obtaining image data for generating computer models of individuals comprising:
a booth for receiving an individual; image recording apparatus for obtaining image data representative of the external appearance of an individual within said booth; lighting means for lighting an individual within said booth; and indicators for indicating to an individual a pose to adopt whilst being illuminated by said lighting means.
81. A booth for generating computer models of people comprising: image recording apparatus for recording image data representative of the external appearance of an individual standing within said booth; first and second lighting means for illuminating an individual standing in said booth, wherein said first lighting means is arranged to illuminate the interior of said booth from a first direction and said second lighting means is arranged to illuminate said interior of said booth from a direction different to said first direction wherein said image recording apparatus is arranged to obtain image data of an individual within said booth both with and without said second lighting means being activated.
82. A booth for generating computer models of people comprising: image recording means for obtaining image data representative of the external appearance of an individual; and a light box for illuminating an individual in a pose, said light box comprising a curved wall remote from said recording means, and illumination means for illuminating said curved wall.
83. A booth for generating computer models of people comprising: image recording apparatus for obtaining image data representative of the external appearance of an individual; pose markers for indicating to an individual a pose to adopt when image data is obtained of said individual by said recording apparatus; and model generation means, said model generation means having installed therein a generic model of a person; data relating to said pose markers; and processing means for generating a computer model of an individual based upon said generic model, said data relating to pose markers and image data of an individual obtained by said image recording apparatus.
84. An apparatus for generating data for generating computer models of individuals comprising:
a booth for receiving an individual; means for obtaining image data representative of the external appearance of an individual within said booth; and means for editing said image data to generate data suitable for use for creating a model of said individual.
85. A method of generating a computer model of an individual comprising:
obtaining image data of an individual lit from behind; obtaining an image of an individual lit from in front and behind; and processing said obtained image data together with a computer model of a generic individual to obtain a computer model of said individual.
86. Apparatus for generating a computer model of an individual comprising:
storage means for storing a computer model of an individual; image capture means for obtaining image data of an individual lit from behind; image capture means for obtaining image data of an individual lit from in front and behind; and processing means for processing obtained image data together with said computer model to obtain a computer model of said individual.
87. A method of generating a computer model of an individual comprising the steps of: paying for the generation of said model; capturing image data representative of an individual; dispensing a password; generating a model of said individual on the basis of said image data; and transferring data representative of said computer model to a computer apparatus on the basis of receipt of said password.
88. An apparatus for generating computer models of people comprising:
audio visual instruction means for instructing an individual to adopt a plurality of predefined poses; image capture apparatus for capturing image data representative of the external appearance of an individual in a pose; activation means for activating said image capture apparatus to capture images of individuals after they have been instructed to adopt a predefined pose by said audio visual instruction means; and computer model generation means for generating a computer model of said individual on the basis of said captured image data.
89. A process of generating a computer model of an individual comprising: instructing an individual to adopt a plurality of predefined poses; capturing image data of said individual after they have been instructed to adopt each of said plurality of poses; dispensing a password; generating a computer model of said individual on the basis of said image data of said individual in said plurality of poses; and transferring data representative of said computer model to a computer apparatus on the receipt of said password.
90. Apparatus for generating computer models of individuals, said apparatus comprising:
a booth for receiving an individual; lighting means for illuminating an individual within said booth in different illumination conditions; and image capture apparatus for capturing image data representative of the appearance of an individual, said image capture apparatus arranged to obtain image data representative of an individual within said booth; and activation means for activating said image capture apparatus to obtain image data of an individual within said booth under said different illumination conditions; and means for generating a computer model of an individual on the basis of said image data.
91. A method of generating a computer model of an individual comprising the steps of: lighting an individual from behind; activating a flash and camera to obtain image data of said individual illuminated from in front and behind; obtaining image data representative of an individual illuminated only from behind; and processing said image data to generate a computer model of said individual.
92. An apparatus for generating computer models of individuals said apparatus comprising:
a booth for receiving an individual; apparatus for obtaining image data of an individual within said booth in four orthogonal poses; means for obtaining outlines of individuals from said image data; means for processing said outlines and a stored generic model of an individual to generate a computer model of said individual; and means for texture rendering said generated computer model using said image data.
93. A booth in accordance with claim 92, further comprising means for identifying portions of an outline indicative of points on the surface of an individual which are not contiguous with each other; and processing means for processing an outline to replace portions of an outline corresponding to non-contiguous portions of the surface of an individual with an estimate of an outline corresponding to contiguous points on the surface of an individual.
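Claim 93 replaces outline segments that join non-contiguous surface points (for example where the outline crosses the gap between an arm and the torso) with an estimate spanning contiguous points. A minimal sketch of such a repair is given below, assuming the outline is supplied as an ordered list of 2-D points together with a parallel list of flags marking the suspect samples; how those flags are derived is outside the sketch, and all names are illustrative rather than taken from the patent.

```python
def repair_outline(points, non_contiguous):
    """points: list of (x, y) outline samples in order around the body.
    non_contiguous: list of booleans, True where the sample is judged to lie on
    a portion of the outline joining non-contiguous surface points.
    Flagged runs are replaced by linear interpolation between the last trusted
    sample before the run and the first trusted sample after it."""
    repaired = list(points)
    i, n = 0, len(points)
    while i < n:
        if non_contiguous[i]:
            start = i - 1                 # last trusted sample before the run
            j = i
            while j < n and non_contiguous[j]:
                j += 1                    # first trusted sample after the run
            if start >= 0 and j < n:
                x0, y0 = points[start]
                x1, y1 = points[j]
                run = j - start
                for k in range(i, j):
                    t = (k - start) / run
                    repaired[k] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            i = j
        else:
            i += 1
    return repaired

# Illustrative usage: the two middle samples dip away from a straight edge.
pts = [(0, 0), (1, 0), (2, -3), (3, -3), (4, 0), (5, 0)]
flags = [False, False, True, True, False, False]
print(repair_outline(pts, flags))   # flagged samples pulled back onto the edge
```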
94. An apparatus for generating computer models of individuals comprising: a touch screen, said touch screen being arranged to display portions of images of individuals, and indicator marks for indicating points on said images corresponding to facial features of said individual; means for adjusting the position of said indicators on said display; and means for generating a computer model of an individual on the basis of the positions of said indicators relative to said image.
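Claim 94 drives model generation from the adjusted indicator positions relative to the displayed image. One plausible first step, sketched below, is converting each indicator's screen position into normalised image coordinates so that eye, nose and mouth positions can be compared against the corresponding features of the generic model; the coordinate conventions and feature names here are assumptions for illustration, not taken from the patent.

```python
def indicators_to_image_coords(indicators, image_origin, image_size):
    """indicators: dict mapping a feature name (e.g. 'left_eye') to its (x, y)
    position on the touch screen.
    image_origin: (x, y) of the displayed image's top-left corner on screen.
    image_size: (width, height) of the displayed image in screen pixels.
    Returns the same features in normalised [0, 1] image coordinates."""
    ox, oy = image_origin
    w, h = image_size
    return {name: ((x - ox) / w, (y - oy) / h)
            for name, (x, y) in indicators.items()}

# Illustrative usage: image shown at (100, 50), 400x300 pixels on screen.
marks = {"left_eye": (220, 140), "right_eye": (300, 142), "mouth": (260, 230)}
print(indicators_to_image_coords(marks, (100, 50), (400, 300)))
```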
95. Apparatus for generating computer models of individuals comprising: means for obtaining image data representative of an individual seated in a wheelchair; and means for generating a computer model of an individual seated in a wheelchair on the basis of said image data.
96. Apparatus for generating texture rendered computer models of individuals comprising: means for storing a computer model of a generic individual; means for obtaining outline data from images of an individual in a pose lit from behind; means for obtaining image data of an individual lit from in front; means for processing said outline data and said computer model to generate a computer model of said individual; and means for texture rendering said computer model using said image data to generate a texture rendered computer model of said individual.
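Claim 96 combines outline-driven model fitting with texture rendering from the front-lit images. The fragment below sketches only the texture step, under a deliberately simplified assumption: a fitted model whose vertices already carry 2-D projections into the front-lit photograph, so each vertex can sample the nearest pixel colour. Both the data layout and this projection convention are assumptions made for illustration; the patent does not prescribe them.

```python
import numpy as np

def texture_render(vertices_2d, front_lit_image):
    """vertices_2d: (N, 2) array of vertex positions projected into the
    front-lit image, in pixel coordinates (x = column, y = row).
    front_lit_image: (H, W, 3) array of colour data.
    Returns an (N, 3) array of per-vertex colours sampled from the photograph."""
    h, w, _ = front_lit_image.shape
    cols = np.clip(np.round(vertices_2d[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(vertices_2d[:, 1]).astype(int), 0, h - 1)
    return front_lit_image[rows, cols]

# Illustrative usage with a tiny synthetic photograph and three vertices.
photo = np.zeros((4, 4, 3), dtype=np.uint8)
photo[1, 2] = (200, 150, 120)          # a "skin-coloured" pixel
verts = np.array([[2.0, 1.0], [0.2, 0.1], [3.9, 3.9]])
print(texture_render(verts, photo))
```

A fuller pipeline would interpolate between pixels and blend the four orthogonal views recited in claim 92, but per-vertex nearest-pixel sampling is enough to show how the front-lit image data feeds the texture rendering step.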
GB9914823A 1999-06-24 1999-06-24 Method and apparatus for the generation of computer graphic representations of individuals Withdrawn GB2351426A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB9914823A GB2351426A (en) 1999-06-24 1999-06-24 Method and apparatus for the generation of computer graphic representations of individuals
PCT/GB2000/002458 WO2001001354A1 (en) 1999-06-24 2000-06-26 Method and apparatus for the generation of computer graphic representations of individuals
JP2001506503A JP2003503776A (en) 1999-06-24 2000-06-26 Method and apparatus for generating a computer image representation of a person
AU55535/00A AU5553500A (en) 1999-06-24 2000-06-26 Method and apparatus for the generation of computer graphic representations of individuals
EP00940624A EP1194899A1 (en) 1999-06-24 2000-06-26 Method and apparatus for the generation of computer graphic representations of individuals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9914823A GB2351426A (en) 1999-06-24 1999-06-24 Method and apparatus for the generation of computer graphic representations of individuals

Publications (2)

Publication Number Publication Date
GB9914823D0 GB9914823D0 (en) 1999-08-25
GB2351426A true GB2351426A (en) 2000-12-27

Family

ID=10856018

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9914823A Withdrawn GB2351426A (en) 1999-06-24 1999-06-24 Method and apparatus for the generation of computer graphic representations of individuals

Country Status (5)

Country Link
EP (1) EP1194899A1 (en)
JP (1) JP2003503776A (en)
AU (1) AU5553500A (en)
GB (1) GB2351426A (en)
WO (1) WO2001001354A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2874724A1 (en) * 2004-12-03 2006-03-03 France Telecom Three dimensional avatar e.g. humanoid, temporal animation process for e.g. game, involves interpolating intermediate posture of avatar between initial posture and final posture taking into account of difference relative to initial posture
WO2010071980A1 (en) * 2008-12-28 2010-07-01 Nortel Networks Limited Method and apparatus for enhancing control of an avatar in a three dimensional computer-generated virtual environment
CN104813340A (en) * 2012-09-05 2015-07-29 体通有限公司 System and method for deriving accurate body size measures from a sequence of 2d images

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030009664A1 (en) * 2001-06-28 2003-01-09 Eastman Kodak Company Method and player for authenticating playback of animated content
AU2003201032A1 (en) * 2002-01-07 2003-07-24 Stephen James Crampton Method and apparatus for an avatar user interface system
FR2837593B1 (en) * 2002-03-22 2004-05-28 Kenneth Kuk Kei Wang METHOD AND DEVICE FOR VIEWING, ARCHIVING AND TRANSMISSION ON A NETWORK OF COMPUTERS OF A CLOTHING MODEL
NZ534677A (en) * 2004-08-18 2007-10-26 Hgm Design Ltd Model generation and distribution system
US7310999B2 (en) * 2005-09-16 2007-12-25 Greg Miller Body volume measurement apparatus and method of measuring the body volume of a person
WO2013066601A1 (en) 2011-10-17 2013-05-10 Kimmel Zebadiah M Method and apparatus for monitoring individuals while protecting their privacy
US9974466B2 (en) 2011-10-17 2018-05-22 Atlas5D, Inc. Method and apparatus for detecting change in health status
US9341464B2 (en) 2011-10-17 2016-05-17 Atlas5D, Inc. Method and apparatus for sizing and fitting an individual for apparel, accessories, or prosthetics
JP5948092B2 (en) * 2012-03-05 2016-07-06 東芝テック株式会社 Try-on device and try-on program
US9600993B2 (en) 2014-01-27 2017-03-21 Atlas5D, Inc. Method and system for behavior detection
US10013756B2 (en) 2015-03-13 2018-07-03 Atlas5D, Inc. Methods and systems for measuring use of an assistive device for ambulation
US11017901B2 (en) 2016-08-02 2021-05-25 Atlas5D, Inc. Systems and methods to identify persons and/or identify and quantify pain, fatigue, mood, and intent with protection of privacy
KR102157246B1 (en) * 2018-09-19 2020-10-23 재단법인 실감교류인체감응솔루션연구단 Method for modelling virtual hand on real hand and apparatus therefor
DE102021204611A1 (en) 2021-05-06 2022-11-10 Continental Automotive Technologies GmbH Computer-implemented method for generating training data for use in the field of vehicle occupant observation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998028908A1 (en) * 1996-12-24 1998-07-02 Stephen James Crampton Avatar kiosk
WO1998035320A1 (en) * 1997-02-07 1998-08-13 Peppers Ghost Productions Limited Animation system and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS56115904A (en) * 1980-02-19 1981-09-11 Unitika Ltd Automatic measuring method for size of human body and device therefor
US5495576A (en) * 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
US5530652A (en) * 1993-08-11 1996-06-25 Levi Strauss & Co. Automatic garment inspection and measurement system
EP0664526A3 (en) * 1994-01-19 1995-12-27 Eastman Kodak Co Method and apparatus for three-dimensional personalized video games using 3-D models and depth measuring apparatus.
US5850222A (en) * 1995-09-13 1998-12-15 Pixel Dust, Inc. Method and system for displaying a graphic image of a person modeling a garment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998028908A1 (en) * 1996-12-24 1998-07-02 Stephen James Crampton Avatar kiosk
WO1998035320A1 (en) * 1997-02-07 1998-08-13 Peppers Ghost Productions Limited Animation system and method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2874724A1 (en) * 2004-12-03 2006-03-03 France Telecom Three dimensional avatar e.g. humanoid, temporal animation process for e.g. game, involves interpolating intermediate posture of avatar between initial posture and final posture taking into account of difference relative to initial posture
WO2010071980A1 (en) * 2008-12-28 2010-07-01 Nortel Networks Limited Method and apparatus for enhancing control of an avatar in a three dimensional computer-generated virtual environment
US8232989B2 (en) 2008-12-28 2012-07-31 Avaya Inc. Method and apparatus for enhancing control of an avatar in a three dimensional computer-generated virtual environment
CN104813340A (en) * 2012-09-05 2015-07-29 体通有限公司 System and method for deriving accurate body size measures from a sequence of 2d images
EP2893479A4 (en) * 2012-09-05 2017-02-01 Body Pass Ltd. System and method for deriving accurate body size measures from a sequence of 2d images
US9727787B2 (en) 2012-09-05 2017-08-08 Sizer Technologies Ltd System and method for deriving accurate body size measures from a sequence of 2D images
CN104813340B (en) * 2012-09-05 2018-02-23 体通有限公司 The system and method that accurate body sizes measurement is exported from 2D image sequences

Also Published As

Publication number Publication date
WO2001001354A1 (en) 2001-01-04
EP1194899A1 (en) 2002-04-10
JP2003503776A (en) 2003-01-28
AU5553500A (en) 2001-01-31
GB9914823D0 (en) 1999-08-25

Similar Documents

Publication Publication Date Title
US7184047B1 (en) Method and apparatus for the generation of computer graphic representations of individuals
GB2351426A (en) Method and apparatus for the generation of computer graphic representations of individuals
US10684467B2 (en) Image processing for head mounted display devices
EP2478695B1 (en) System and method for image processing and generating a body model
US7805017B1 (en) Digitally-generated lighting for video conferencing applications
CN109288333B (en) Apparatus, system and method for capturing and displaying appearance
JP4932951B2 (en) Facial image processing method and system
JP4865093B2 (en) Method and system for animating facial features and method and system for facial expression transformation
CN106340064B (en) A kind of mixed reality sand table device and method
CN105556508B (en) The devices, systems, and methods of virtual mirror
CN106133796A (en) For representing the method and system of virtual objects in the view of true environment
JP2010507854A (en) Method and apparatus for virtual simulation of video image sequence
CN109618089A (en) Intelligentized shooting controller, Management Controller and image pickup method
GB2323733A (en) Virtual studio projection system
JP2024512672A (en) Surface texturing from multiple cameras
JPH11316853A (en) Device and method for producing virtual reality feeling
US9058605B2 (en) Systems and methods for simulating accessory display on a subject
CN112954206B (en) Virtual makeup display method, system, device and storage medium thereof
Moubaraki et al. Realistic 3d facial animation in virtual space teleconferencing
Wu et al. A structured light-based system for human heads
Ahmed High quality dynamic reflectance and surface reconstruction from video
Jaynes et al. ProCams 2006

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)