EP1082706A1 - 3d image processing method and apparatus - Google Patents
3d image processing method and apparatus
Info
- Publication number
- EP1082706A1 (application EP99955353A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- character
- representation
- dimensional
- generic
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000003672 processing method Methods 0.000 title description 3
- 238000000034 method Methods 0.000 claims abstract description 35
- 230000001419 dependent effect Effects 0.000 claims description 3
- 241001465754 Metazoa Species 0.000 claims description 2
- 238000004891 communication Methods 0.000 claims description 2
- 230000001815 facial effect Effects 0.000 description 10
- 238000012545 processing Methods 0.000 description 10
- 230000008569 process Effects 0.000 description 5
- 230000002596 correlated effect Effects 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 239000000654 additive Substances 0.000 description 2
- 230000000996 additive effect Effects 0.000 description 2
- 238000013179 statistical model Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 235000019642 color hue Nutrition 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 230000000875 corresponding effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
- 238000009333 weeding Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/194—Transmission of image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5546—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
- A63F2300/5553—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/69—Involving elements of the real world in the game world, e.g. measurement in live races, real video
- A63F2300/695—Imported photos, e.g. of the player
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the present invention relates to a method and apparatus for processing facial images, particularly but not exclusively for use in digital animation eg computer games.
- Photogrammetric techniques are known for converting two or more overlapping 2D images acquired from different viewpoints into a common 3D representation and in principle such techniques can be applied to the human face to generate a 3D representation which can be animated using known digital techniques.
- Suitable algorithms for correlating image regions of corresponding images are already known, eg Gruen's algorithm (see Gruen, A W, "Adaptive least squares correlation: a powerful image matching technique", S Afr J of Photogrammetry, Remote Sensing and Cartography, Vol 14 No 3 (1985), and Gruen, A W and Baltsavias, E P, "High precision image matching for digital terrain model generation", Int Arch Photogrammetry, Vol 25 No 3 (1986), p254), and particularly the "region-growing" modification thereto which is described in Otto and Chau, "Region-growing algorithm for matching terrain images", Image and Vision Computing, Vol 7 No 2, May 1989, p83.
- Gruen's algorithm is an adaptive least squares correlation algorithm in which two image patches of typically 15 x 15 to 30 x 30 pixels are correlated, ie selected from larger left and right images in such a manner as to give the most consistent match between patches. This is done by allowing an affine geometric distortion between coordinates in the images (ie stretching or compression in which originally parallel lines remain parallel in the transformation) and an additive radiometric distortion between the grey levels of the pixels in the image patches, generating an over-constrained set of linear equations representing the discrepancies between the correlated pixels, and finding a least squares solution which minimises the discrepancies.
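By way of illustration, the following is a minimal numpy sketch of one such linearised least-squares iteration, solving jointly for the six affine parameters and the two radiometric parameters. The function names, patch half-width and parameter ordering are illustrative assumptions, not details taken from the patent or from Gruen's paper.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear sample of a greyscale image at float coordinates (assumed interior)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p = img[y0:y0 + 2, x0:x0 + 2].astype(float)
    return (p[0, 0] * (1 - dx) * (1 - dy) + p[0, 1] * dx * (1 - dy) +
            p[1, 0] * (1 - dx) * dy + p[1, 1] * dx * dy)

def gruen_iteration(left, right, centre_l, centre_r, params, half=10):
    """One linearised least-squares update of the affine (a..f) and radiometric
    (r0, r1) parameters mapping the left patch onto the right image.
    params = [a, b, c, d, e, f, r0, r1]; an identity start is [1,0,0, 0,1,0, 0,1]."""
    a, b, c, d, e, f, r0, r1 = params
    rows, rhs = [], []
    for y in range(-half, half + 1):
        for x in range(-half, half + 1):
            gl = bilinear(left, centre_l[0] + x, centre_l[1] + y)
            xr = centre_r[0] + a * x + b * y + c        # affine-mapped position
            yr = centre_r[1] + d * x + e * y + f        # in the right image
            gr = bilinear(right, xr, yr)
            gx = (bilinear(right, xr + 1, yr) - bilinear(right, xr - 1, yr)) / 2.0
            gy = (bilinear(right, xr, yr + 1) - bilinear(right, xr, yr - 1)) / 2.0
            # Partial derivatives of r0 + r1*R(affine(x, y)) w.r.t. the 8 parameters
            rows.append([r1 * gx * x, r1 * gx * y, r1 * gx,
                         r1 * gy * x, r1 * gy * y, r1 * gy, 1.0, gr])
            rhs.append(gl - (r0 + r1 * gr))             # grey-level discrepancy
    A, l = np.asarray(rows), np.asarray(rhs)
    dp, *_ = np.linalg.lstsq(A, l, rcond=None)          # least-squares correction
    return np.asarray(params, float) + dp
```

In practice the update is repeated until the corrections become negligibly small, which is the iterative convergence behaviour referred to in the next paragraph.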
- the Gruen algorithm is essentially an iterative algorithm and requires a reasonable approximation for the correlation to be fed in before it will converge to the correct solution.
- the Otto and Chau region-growing algorithm begins with an approximate match between a point in one image and a point in the other, utilises Gruen's algorithm to produce a more accurate match and to generate the geometric and radiometric distortion parameters, and uses the distortion parameters to predict approximate matches for points in the neighbourhood of the initial matching point.
- the neighbouring points are selected by choosing the four adjacent points on a grid having a grid spacing of eg 5 or 10 pixels, in order to avoid running the correlation at every single pixel.
- if a candidate matched point moves by more than a certain amount (eg 3 pixels) per iteration then it is not a valid matched point and should be rejected (a sketch of this region-growing loop is given below).
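A best-first sketch of that region-growing strategy follows; it reuses the hypothetical `gruen_iteration()` from the earlier listing, and the grid spacing, rejection threshold and crude priority score are illustrative assumptions rather than the published algorithm's exact bookkeeping.

```python
import heapq
from itertools import count
import numpy as np

def region_grow(left, right, seed_l, seed_r, grid=5, max_shift=3.0,
                iters=5, margin=16):
    """Grow matches outwards from an approximate seed match: refine each
    candidate with Gruen iterations, reject it if it drifts too far per
    iteration, then predict matches for the four grid neighbours from the
    recovered affine distortion parameters."""
    h, w = left.shape
    identity = np.array([1.0, 0, 0, 0, 1.0, 0, 0, 1.0])
    tie = count()                                        # heap tie-breaker
    heap = [(0.0, next(tie), tuple(seed_l), tuple(seed_r), identity)]
    matched = {}
    while heap:
        score, _, pl, pr, params = heapq.heappop(heap)
        if pl in matched:
            continue
        ok = True
        for _ in range(iters):
            # gruen_iteration() is the sketch from the previous listing
            new_params = gruen_iteration(left, right, pl, pr, params)
            shift = np.hypot(new_params[2] - params[2], new_params[5] - params[5])
            if shift > max_shift:                        # moved too far: not a valid match
                ok = False
                break
            params = new_params
        if not ok:
            continue
        matched[pl] = (pr[0] + params[2], pr[1] + params[5])
        a, b, c, d, e, f, _, _ = params
        for dx, dy in ((grid, 0), (-grid, 0), (0, grid), (0, -grid)):
            nl = (pl[0] + dx, pl[1] + dy)
            if nl in matched or not (margin <= nl[0] < w - margin
                                     and margin <= nl[1] < h - margin):
                continue
            # Predict the neighbour's approximate match from the local distortion
            nr = (pr[0] + a * dx + b * dy + c, pr[1] + d * dx + e * dy + f)
            heapq.heappush(heap, (score + 1.0, next(tie), nl, nr, params))
    return matched
```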
- One object of the present invention is to overcome or alleviate such disadvantages.
- the present invention provides a method of providing a three-dimensional representation of an object wherein two or more two-dimensional images of the object are photogrammetrically processed to generate an incomplete three-dimensional representation thereof and the incomplete three-dimensional representation is combined with a generic representation of such objects to provide the three-dimensional representation.
- the object is a human or animal body or a part thereof.
- the three-dimensional representation derived from the combination with the generic representation is provided in the format of an animatable character.
- the resulting three-dimensional representation is converted to the file format of a computer game character and loaded into the computer game.
- the invention provides a method of personalising a computer game character wherein at least one image of a player of the game is digitally processed at a location remote from the player's computer, converted to an animatable character file and loaded onto the player's computer.
- the image can be processed on an Internet server computer and downloaded over the Internet.
- a fully automated system whereby users of Quake, Doom, Descent and other popular games can be provided with a custom game character with their own face (preferably a 3D face) inserted into the character. This would enable them to use a visualisation of themselves in a game.
- This service could be provided via the Internet with little or no human intervention by the operator of the server.
- a low-resolution model is required for gaming, as the game will have to support the manipulation of the character in the gaming environment, in real time, on a variety of PCs.
- the games user would be required to take a set of images of himself/herself using a digital camera or scanned photographs, under specified guidelines.
- the server will then schedule an image processing job to perform the following tasks:
  - Determine 3D facial geometry from the supplied image files
- the completed character would be sent as an attachment to the specified email address, and a micro transaction performed to bill the user's credit card.
- Figure 1 is a schematic flow diagram of an image processing method in accordance with one aspect of the invention.
- Figure 2A is a schematic plan view showing one camera arrangement for acquiring the images utilised in the method of Figure 1;
- Figure 2B is a schematic plan view of another camera arrangement for acquiring the images utilised in the method of Figure 1;
- Figure 2C is a schematic plan view of yet another camera arrangement for acquiring the images utilised in the method of Figure 1, and
- Figure 3 is a schematic representation of an Internet-based arrangement for providing an animated games character by a method in accordance with the second aspect of the invention.
- left and right images I1 and I2 are acquired eg by a digital camera and processed by standard photogrammetric techniques to provide an incomplete 3D representation 100 of the game player's head.
- the determination of the game player's facial geometry can involve Gruen-type area matching, facial feature correlation, and facial feature recognition via a statistical model of the human face.
- Gruen-type area matching suffers from the problem of having no projected texture, and is thus highly sensitive to the texture in the face of the subject, the ambient lighting conditions, and the differences in colour hues and intensities between the images. It is also sensitive to the absence of a camera model or known optical geometry for the captured images.
- Facial feature correlation suffers from the problem that any facial feature that is incorrectly detected will cause a very poor model to be generated. Facial feature recognition via a statistical model prevents gross inaccuracies from occurring and should lead to a more robust solution. It is possible that part of the image submission process could involve the user specifying certain key points on the images.
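One simple way to realise such a statistical constraint is a PCA shape prior over 2D landmark positions: the detected features are projected onto the model's principal modes and the coefficients are clamped to a plausible range, so a single mis-detected feature cannot drag the whole face out of shape. The sketch below assumes the model (mean shape, orthonormal basis, per-mode standard deviations) has been trained offline; none of these names come from the patent.

```python
import numpy as np

def constrain_landmarks(detected, mean_shape, basis, stddev, clamp=3.0):
    """Regularise detected facial landmarks with a PCA shape model.

    detected:   (n, 2) landmark positions from the feature detector
    mean_shape: (2n,) mean landmark vector of the training faces
    basis:      (2n, k) orthonormal principal modes
    stddev:     (k,) standard deviation of each mode over the training set
    """
    x = np.asarray(detected, float).ravel() - mean_shape   # deviation from the mean face
    coeffs = basis.T @ x                                    # project onto the modes
    coeffs = np.clip(coeffs, -clamp * stddev, clamp * stddev)
    return (mean_shape + basis @ coeffs).reshape(-1, 2)     # plausible reconstruction
```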
- a 3D representation of a generic head 200 is provided. Given geometric information derived from the preceding stage, the generic head model can be distorted to fit the subject's roughly calculated geometry. This head could be in one of two forms: a NURBS (Non-Uniform Rational B-Spline) model or a polygon model.
- the NURBS model has the advantage of being easily deformable to the subject's geometry, but suffers from the drawbacks of higher processing overhead and of having to be converted to polygons for subsequent processing stages.
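A minimal sketch of one way the distortion step could be carried out for the polygon form is shown below: a similarity (Procrustes) alignment of the generic head onto the photogrammetrically measured facial landmarks, followed by an inverse-distance-weighted warp that spreads the residual landmark offsets over the mesh. The patent does not prescribe this particular fitting scheme; the function and argument names are illustrative.

```python
import numpy as np

def fit_generic_head(generic_vertices, generic_landmarks, measured_landmarks):
    """Deform a generic polygonal head to roughly fit the subject's measured geometry."""
    G = np.asarray(generic_landmarks, float)    # (n, 3) landmarks on the generic head
    M = np.asarray(measured_landmarks, float)   # (n, 3) measured 3D facial points
    V = np.asarray(generic_vertices, float)     # (m, 3) all generic head vertices

    # 1. Similarity transform (rotation R, scale s, translation t) taking G onto M
    gc, mc = G.mean(0), M.mean(0)
    G0, M0 = G - gc, M - mc
    U, S, Vt = np.linalg.svd(G0.T @ M0)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    s = S.sum() / (G0 ** 2).sum()
    t = mc - s * gc @ R.T
    aligned = s * V @ R.T + t
    aligned_lm = s * G @ R.T + t

    # 2. Residual warp: spread what the rigid fit could not explain over the mesh
    offsets = M - aligned_lm
    d = np.linalg.norm(aligned[:, None, :] - aligned_lm[None, :, :], axis=2)
    w = 1.0 / (d + 1e-6) ** 2                   # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return aligned + w @ offsets
```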
- a texture map is derived (400) from the 3D head (100) and attached to the representation resulting from step 300 (step 500) (ie used to render the modified generic head) and the resulting realistic character representation is then integrated with or attached to the body of the games character (step 600).
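The texturing step can be pictured as projecting the fitted head back through a camera model to obtain per-vertex texture coordinates and colours. The sketch below assumes a simple pinhole camera aligned with the front view and ignores occlusion and blending between views; `focal` and `centre` stand in for intrinsics that would in practice come from the photogrammetric stage.

```python
import numpy as np

def project_texture(vertices, image, focal, centre):
    """Per-vertex UV coordinates and colours by pinhole projection of the head
    into the (front) photograph. The camera is assumed to look down +z."""
    V = np.asarray(vertices, float)                       # (m, 3) fitted head vertices
    u = focal * V[:, 0] / V[:, 2] + centre[0]             # perspective projection
    v = focal * V[:, 1] / V[:, 2] + centre[1]
    h, w = image.shape[:2]
    uv = np.stack([u / w, 1.0 - v / h], axis=1)           # normalised texture coordinates
    ui = np.clip(np.round(u).astype(int), 0, w - 1)
    vi = np.clip(np.round(v).astype(int), 0, h - 1)
    return uv, image[vi, ui]                              # nearest-pixel colour per vertex
```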
- If necessary the resulting model is converted to polygon form (step 700).
- the number of polygons may have to be reduced (step 800).
- the completed model may be reduced to quite a low polygon count, possibly 100 or so, in order to produce a relatively small model to transmit and use within the game.
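A deliberately naive illustration of the reduction step is given below: repeatedly collapse the shortest edge to its midpoint and discard the triangles that degenerate, until the face count reaches the target. A production pipeline would more likely use a quadric-error-metric decimator so that the facial features survive the reduction; this sketch only shows the kind of operation involved.

```python
import numpy as np

def decimate(vertices, faces, target_faces=100):
    """Crude polygon reduction by shortest-edge collapse (illustrative only)."""
    V = [np.asarray(v, float) for v in vertices]
    F = [tuple(f) for f in faces]
    while len(F) > target_faces:
        # Shortest edge still used by some face
        edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in F for i in range(3)}
        a, b = min(edges, key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
        V[a] = (V[a] + V[b]) / 2.0                  # merge b into a at the midpoint
        remapped = []
        for f in F:
            g = tuple(a if i == b else i for i in f)
            if len(set(g)) == 3:                    # drop triangles collapsed to an edge
                remapped.append(g)
        F = remapped
    return np.array(V), np.array(F)                 # unreferenced vertices are left in place
```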
- the polygon-reduced representation is converted to a games file format which can be handled by the game (step 900).
- This last step may require liaison and co-operation with the games manufacturers, or it is conceivable that this task could be performed completely independently.
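The game-specific character formats mentioned here are proprietary and not described in the patent, so the sketch below writes the reduced, textured mesh to Wavefront OBJ merely as a neutral stand-in for this conversion step.

```python
def write_obj(path, vertices, faces, uvs=None):
    """Write a triangle mesh (0-based face indices) as Wavefront OBJ."""
    with open(path, "w") as out:
        for v in vertices:
            out.write(f"v {v[0]} {v[1]} {v[2]}\n")
        if uvs is not None:
            for uv in uvs:
                out.write(f"vt {uv[0]} {uv[1]}\n")
        for f in faces:
            if uvs is not None:                     # OBJ indices are 1-based
                out.write("f " + " ".join(f"{i + 1}/{i + 1}" for i in f) + "\n")
            else:
                out.write("f " + " ".join(str(i + 1) for i in f) + "\n")
```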
- Referring to Figures 2A, 2B and 2C, each of these Figures shows a different camera arrangement, which could be provided as a fixed stereoscopic camera arrangement in a dedicated booth (in, say, a gaming arcade) or could be set up by the games player.
- a camera C acquires an image from one viewpoint and the same or a different camera C acquires an overlapping image from a different viewpoint.
- the fields of view V must overlap in the region of the face of the subject 1.
- the cameras are diagonally disposed at right angles
- the cameras are parallel
- the cameras are orthogonal, so that one camera has a front view and the other camera has a profile view of the subject 1.
- the arrangement of Figure 2C is particularly preferred because the front view and profile are acquired independently.
- the front image and profile image can be analysed to determine the size and location of features and the resulting data can be used to select one of a range of generic heads or to adjust variable parameters of the generic head as shown, prior to step 300.
- the exact camera locations and orientations can be determined and the remaining points correlated relatively easily to enable a 3D representation of the subject 1 to be generated, essentially by projecting ray lines from pairs of correlated points by virtual projectors having the same location, orientation and optical parameters as the cameras.
- a known algorithm eg the Gruen algorithm
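Once a pair of image points has been correlated and the camera poses are known, the corresponding surface point follows from intersecting the two viewing rays. Real rays rarely cross exactly, so a common choice, sketched below, is the midpoint of their closest approach; the argument names are illustrative, and back-projecting pixels into ray directions with the recovered camera parameters is assumed to have been done already.

```python
import numpy as np

def triangulate(centre_a, dir_a, centre_b, dir_b):
    """Midpoint of closest approach of two viewing rays, each defined by a
    camera centre and the back-projected direction of a matched image point."""
    ca = np.asarray(centre_a, float)
    cb = np.asarray(centre_b, float)
    da = np.asarray(dir_a, float)
    db = np.asarray(dir_b, float)
    da, db = da / np.linalg.norm(da), db / np.linalg.norm(db)
    w0 = ca - cb
    a, b, c = da @ da, da @ db, db @ db
    d, e = da @ w0, db @ w0
    denom = a * c - b * b                     # ~0 only if the rays are parallel
    s = (b * e - c * d) / denom               # parameter along ray A
    t = (a * e - b * d) / denom               # parameter along ray B
    return (ca + s * da + cb + t * db) / 2.0
```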
- the above correlation process between the generic image G and the image I of the character 1 provided by digital camera C can be performed by a server computer S on the Internet and the 2D images acquired by the camera C can either be posted by the games player (eg as photographic prints) or uploaded (eg as email attachments) onto the server from the user's computer PC via a communications link CL provided by the Internet.
- server computer S has stored on its hard disc HD:
- the user's computer PC would have stored on its hard disc HD one or more games programs, Internet access software, graphics software for handling the images provided by camera C and a conventional operating system, eg Windows 95® or
- Both computer PC and computer S are provided with a standard microprocessor μP, eg an Intel Pentium® processor, as well as RAM and ROM and appropriate input/output circuitry I/O connected to standard modems M or other communication devices.
- the WWW submission form F would be based on a Java Applet to allow the validation of the quality of the submitted images, and the selection of body types. It is likely that the server operator would want to test images for their size, resolution, and possibly their contrast ratio, before accepting them for processing. If this can be done by an applet before accepting any credit card transaction, then it will help to reduce bad conversions. By weeding out potential failures at an early stage, this will reduce wasted processing time, and will reduce customer frustration by not having to wait a few hours to find out that the images were not of sufficient quality to produce a character.
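The patent describes these pre-acceptance checks as a Java applet; purely to illustrate the kind of tests involved, here is a Python/Pillow sketch that enforces minimum pixel dimensions and a crude contrast ratio before a submission would be accepted. The thresholds and the function name are illustrative assumptions.

```python
from PIL import Image

def validate_submission(path, min_width=320, min_height=240, min_contrast=0.3):
    """Return (accepted, reason) for one uploaded image."""
    im = Image.open(path)
    w, h = im.size
    if w < min_width or h < min_height:
        return False, f"image too small ({w}x{h})"
    lo, hi = im.convert("L").getextrema()      # darkest and brightest grey level
    contrast = (hi - lo) / 255.0               # crude stand-in for a contrast ratio
    if contrast < min_contrast:
        return False, f"contrast too low ({contrast:.2f})"
    return True, "ok"
```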
- the operator can request other information from the user for his own uses, namely:
- the invention can also be implemented in a purpose-built games booth at which the images II and 12 are acquired, and the processing can be carried out either locally in the booth or remotely eg in a server computer linked to a number of such booths in a network.
- more than two cameras could be used to acquire the 3D surface of the character.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9811695 | 1998-06-01 | ||
GBGB9811695.7A GB9811695D0 (en) | 1998-06-01 | 1998-06-01 | Facial image processing method and apparatus |
PCT/GB1999/001744 WO1999063490A1 (en) | 1998-06-01 | 1999-06-01 | 3d image processing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1082706A1 (en) | 2001-03-14 |
Family
ID=10832987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP99955353A Withdrawn EP1082706A1 (en) | 1998-06-01 | 1999-06-01 | 3d image processing method and apparatus |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1082706A1 |
JP (1) | JP2002517840A |
KR (1) | KR20010074504A |
AU (1) | AU4275899A |
GB (2) | GB9811695D0 |
WO (1) | WO1999063490A1 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2342026B (en) * | 1998-09-22 | 2003-06-11 | Luvvy Ltd | Graphics and image processing system |
US7359748B1 (en) | 2000-07-26 | 2008-04-15 | Rhett Drugge | Apparatus for total immersion photography |
JP4677661B2 (ja) * | 2000-10-16 | 2011-04-27 | ソニー株式会社 | Moving image ordering system and method |
US6980333B2 (en) * | 2001-04-11 | 2005-12-27 | Eastman Kodak Company | Personalized motion imaging system |
US7257236B2 (en) | 2002-05-22 | 2007-08-14 | A4Vision | Methods and systems for detecting and recognizing objects in a controlled wide area |
US7174033B2 (en) | 2002-05-22 | 2007-02-06 | A4Vision | Methods and systems for detecting and recognizing an object based on 3D image data |
JP4521012B2 (ja) * | 2007-04-13 | 2010-08-11 | 株式会社ソフイア | Game machine |
JP5627860B2 (ja) | 2009-04-27 | 2014-11-19 | 三菱電機株式会社 | Stereoscopic video distribution system, stereoscopic video distribution method, stereoscopic video distribution device, stereoscopic video viewing system, stereoscopic video viewing method, and stereoscopic video viewing device |
KR101050364B1 (ko) * | 2009-09-30 | 2011-07-20 | 주식회사 한울네오텍 | Photographic apparatus providing a three-dimensional object and provision method therefor |
JP2013535726A (ja) * | 2010-07-23 | 2013-09-12 | アルカテル−ルーセント | Method for visualising a user of a virtual environment |
JP5603452B1 (ja) | 2013-04-11 | 2014-10-08 | 株式会社スクウェア・エニックス | Video game processing device and video game processing program |
JP6219791B2 (ja) * | 2014-08-21 | 2017-10-25 | 株式会社スクウェア・エニックス | Video game processing device and video game processing program |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9013983D0 (en) * | 1990-06-22 | 1990-08-15 | Nat Res Dev | Automatic carcass grading apparatus and method |
US5550960A (en) * | 1993-08-02 | 1996-08-27 | Sun Microsystems, Inc. | Method and apparatus for performing dynamic texture mapping for complex surfaces |
EP0664527A1 (en) * | 1993-12-30 | 1995-07-26 | Eastman Kodak Company | Method and apparatus for standardizing facial images for personalized video entertainment |
GB9610212D0 (en) * | 1996-05-16 | 1996-07-24 | Cyberglass Limited | Method and apparatus for generating moving characters |
DE19626096C1 (de) * | 1996-06-28 | 1997-06-19 | Siemens Nixdorf Inf Syst | Method for three-dimensional image display on a large-format projection surface by means of a laser projector |
US6016148A (en) * | 1997-06-06 | 2000-01-18 | Digital Equipment Corporation | Automated mapping of facial images to animation wireframes topologies |
IL121178A (en) * | 1997-06-27 | 2003-11-23 | Nds Ltd | Interactive game system |
-
1998
- 1998-06-01 GB GBGB9811695.7A patent/GB9811695D0/en not_active Ceased
-
1999
- 1999-06-01 KR KR1020007013639A patent/KR20010074504A/ko not_active Application Discontinuation
- 1999-06-01 EP EP99955353A patent/EP1082706A1/en not_active Withdrawn
- 1999-06-01 GB GB9912707A patent/GB2350511B/en not_active Expired - Fee Related
- 1999-06-01 AU AU42758/99A patent/AU4275899A/en not_active Abandoned
- 1999-06-01 JP JP2000552633A patent/JP2002517840A/ja not_active Withdrawn
- 1999-06-01 WO PCT/GB1999/001744 patent/WO1999063490A1/en not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See references of WO9963490A1 * |
Also Published As
Publication number | Publication date |
---|---|
GB2350511A (en) | 2000-11-29 |
AU4275899A (en) | 1999-12-20 |
GB2350511B (en) | 2003-11-19 |
GB9811695D0 (en) | 1998-07-29 |
KR20010074504A (ko) | 2001-08-04 |
GB9912707D0 (en) | 1999-08-04 |
JP2002517840A (ja) | 2002-06-18 |
WO1999063490A1 (en) | 1999-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107484428B (zh) | Method for displaying an object | |
US6219444B1 (en) | Synthesizing virtual two dimensional images of three dimensional space from a collection of real two dimensional images | |
Bernardini et al. | Building a digital model of Michelangelo's Florentine Pieta | |
Pollefeys et al. | From images to 3D models | |
US7224357B2 (en) | Three-dimensional modeling based on photographic images | |
US20200057831A1 (en) | Real-time generation of synthetic data from multi-shot structured light sensors for three-dimensional object pose estimation | |
CN109410256A (zh) | Automatic high-precision registration method for point clouds and imagery based on mutual information | |
US11182945B2 (en) | Automatically generating an animatable object from various types of user input | |
US20040095385A1 (en) | System and method for embodying virtual reality | |
WO2019035155A1 (ja) | Image processing system, image processing method, and program | |
KR20130138247A (ko) | Rapid 3D modelling | |
EP1082706A1 (en) | 3d image processing method and apparatus | |
CN101996416A (zh) | 3D face capture method and device | |
CN107622526A (zh) | Method for three-dimensional scanning and modelling based on a mobile phone facial recognition component | |
Jaw et al. | Registration of ground‐based LiDAR point clouds by means of 3D line features | |
US7280685B2 (en) | Object segmentation from images acquired by handheld cameras | |
Wang et al. | Dynamic human body reconstruction and motion tracking with low-cost depth cameras | |
US11645800B2 (en) | Advanced systems and methods for automatically generating an animatable object from various types of user input | |
Furferi et al. | A RGB-D based instant body-scanning solution for compact box installation | |
US11080920B2 (en) | Method of displaying an object | |
Frisky et al. | Acquisition Evaluation on Outdoor Scanning for Archaeological Artifact Digitalization. | |
Gallardo et al. | Using Shading and a 3D Template to Reconstruct Complex Surface Deformations. | |
JP2001012922A (ja) | Three-dimensional data processing device | |
EP3779878A1 (en) | Method and device for combining a texture with an artificial object | |
JP2002135807A (ja) | Calibration method and device for three-dimensional input | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20001130 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): DE GB SE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Withdrawal date: 20020411 |