GB2518673A - A method using 3D geometry data for virtual reality presentation and control in 3D space - Google Patents

A method using 3D geometry data for virtual reality presentation and control in 3D space


Publication number
GB2518673A
GB2518673A GB1317245.7A GB201317245A GB2518673A GB 2518673 A GB2518673 A GB 2518673A GB 201317245 A GB201317245 A GB 201317245A GB 2518673 A GB2518673 A GB 2518673A
Authority
GB
United Kingdom
Prior art keywords
photo
mesh
image file
matching
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1317245.7A
Other versions
GB201317245D0 (en)
Inventor
Douglas Wei-Ming Wang
Peng-Cheng Lai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ortery Technologies Inc Taiwan
Original Assignee
Ortery Technologies Inc Taiwan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ortery Technologies Inc Taiwan filed Critical Ortery Technologies Inc Taiwan
Priority to GB1317245.7A priority Critical patent/GB2518673A/en
Publication of GB201317245D0 publication Critical patent/GB201317245D0/en
Publication of GB2518673A publication Critical patent/GB2518673A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method of matching a collection of two dimensional photographs (144, figure 3) with a three dimensional mesh 226 using matrix transformations with six degrees of freedom, in which the 3D geometry parameters are compiled with the parameters of the 2D images. The method is used in virtual reality and augmented reality applications. The collection of 2D photos may comprise photographic images of an object taken from different viewing angles. The matching process may comprise manually aligning the 2D images with the 3D model, and this may include manually adjusting an axis 236 to control angle or adjusting the scale of the mesh. The method may comprise automatically matching the pictures with the 3D representation, possibly using some manually aligned images and knowledge of the angles at which the images were taken. The method may match stereographic images with the 3D mesh.

Description

A METHOD USING 3D GEOMETRY DATA FOR VIRTUAL REALITY IMAGE PRESENTATION AND CONTROL IN 3D SPACE
FIELD OF THE INVENTION
[0001] This invention relates generally to the field of 3D photographic presentations. The techniques of virtual reality are used to show high quality photo images. It also takes advantage of 3D modeling technologies to provide geometry data for physical measurement or control, and can be used in augmented reality applications. It can also extend to stereoscopic display systems for real time applications.
DESCRIPTION OF THE RELATED ART
[0002] Virtual reality uses a set of photo images to show a solid object viewed from different view angles. It offers high quality photo images for presentation applications.
However, with a limited number of photo frames, the viewing angles are limited to a discrete number of photo-taking positions, which results in non-smooth animations. The photo images also contain no geometry data. They cannot be aligned precisely in presentation, and cannot be used in any physically related applications, whether for measurement or for control.
[0003] 3D modeling is another approach to presenting a solid object. It has geometry information and can be used for physical applications including augmented reality. However, to obtain precise geometry data and to present it with texture mapping techniques for a good quality presentation, it is very expensive to capture the geometry data and to store the large amount of texture images. It is also difficult to do photo-realistic rendering in real time on low performance personal computing devices.
[0004] There is a need to produce high image quality, photo-realistic virtual reality presentation for commercial applications, and there is a need to include the geometry information for physical augmented reality applications, especially for the desktop personal computers or mobile devices like tablet PCs and smart phones. To provide both the high quality viewing experience and the physical information, combining the merits of two different approaches of virtual reality and 3D modeling is a way to offer economic solutions and meet the quality requirement with the available computing devices. This invention achieves these goals and can be implemented with existing computing devices and mechanical systems.
SUMMARY OF THE INVENTION
[0005] In accordance with one aspect of the invention, a method of combining a set of photo frames with a set of geometry information is described, along with a systematic way of presenting the 2D photos in a 3D space in a computing device's viewing window. The mathematical relationship among the image frame related parameters and the solid object's viewing transformation in the 3D presentation space is described.
[0006] In accordance with another aspect of the invention, a system consisting of a computer-controlled mechanical system to capture the photo images automatically at different view angles is described. A 3D geometry data scanning subsystem based on varieties of optical scanning hardware, or photo-taking cameras for extracting 3D geometry data by silhouette or reference mats or stripes, is described.
[0007] In accordance with another aspect of the invention, a software system consisting of a workstation, a storage system, a remote server and the client viewing device to implement the invention is described. A software program to compose the 2D photo frames with the scanned 3D geometry data to produce a set of controlling parameters manually or automatically is described. A software program to load the image and geometry data and do the viewing, measurement and control of the photo realistic solid object is described.
[0008] In accordance with another aspect of the invention, an extension of the hardware and software system to implement the stereoscopic display and control function is described.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The present invention is better understood from the following description in conjunction with the accompanying drawing figures, although the detailed description may cover a more abstract system not limited by the visual figures.
[0010] FIG. 1: Virtual Reality Image Presentation in 3D Space with 3D Modeling Data. The relationship among the true object, the view window, the high resolution image and the 3D mesh, and the viewer.
[0011] FIG. 2: The Implementation for a 3D Virtual Reality System. The mechanical image and 3D data capturing system, the composing computer, the data and program server, and the client viewing devices.
[0012] FIG. 3: Block Diagram of Data Capturing, Composing and Viewing System. The process of capturing the data, the data to be saved, the composing program and the viewing program.
[0013] FIG. 4: Photo Image Capturing System. The mechanical system of photo capturing and the workflow of image files produced.
[0014] FIG. 5: 3D Modeling Data Capturing by Photo Camera or 3D Scanner. The mechanical system of photo cameras or 3D scanner and the workflow of 3D geometry data produced.
[0015] FIG. 6: Embedding 3D Data System Diagram. The frame-by-frame embedding of 3D geometry data and the 2D photos to assign the six degrees of freedom variables to the images; the required reference frames to do the automatic process.
[0016] FIG. 7: Adjusting Frame Parameters by Scaling, Translating and Rotating. The user interface to adjust the six variables or their corresponding data for each of the frames (three major adjusting procedures to be implemented).
[0017] FIG. 8: Automatic Parameters Generating for All Frames. The user interface to adjust the six variables or their corresponding data for each of the frames (three major adjusting procedures to be implemented).
[0018] FIG. 9: File System for Imaging, 3D Data and Profiling, and Flow Chart of Viewing Program. The data file generated and the corresponding imaging and geometry data files; the viewing program flow chart to show the image and data loading.
[0019] FIG. 10: Viewing Program with 3D Presentation and Control. The viewing program functions and controls for the end user; the data resource structure for high resolution presentation and morphing techniques for smooth operation.
[0020] FIG. 11: Extension to the Stereoscopic System. The same system is used to take a dual set of the photo images with frames compliant to the specifications for stereoscopic display and control.
DETAILED DESCRIPTION OF THE INVENTION
[0021] Reference will now be made in detail to specific embodiments of the present invention. Examples of these embodiments are illustrated in the accompanying drawings.
While the invention will be described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to these embodiments. In fact, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well-known process operations are not described in detail in order not to obscure the present invention. Besides, in all of the following embodiments, the same or similar components illustrated in different embodiments refer to the same symbols.
[0022] With reference to FIG. 1, a method 100 is illustrated for a 2D photo image 108 with projected mapping on the 2D view window 106.
[0023] Match the 2D photo image 108 with the 3D mesh 110 by using a matrix transformation with six degrees of freedom for a solid object 102. Herein, the solid object is exemplarily illustrated as a mug, but may instead be any other solid object, such as a shoe, a light bulb and so on, in other non-illustrated embodiments. The geometry parameters generated from a 3D scanner can construct the 3D mesh 110.
[0024] The viewer 104 views and controls the images interactively. 2D photos can be zoomed with scale s, panned by the screen coordinates (x, y) and rotated by the ω angle, plus the (θ, φ) angles represented by a set of frames in each of the column and row positions.
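The mapping from a viewer's requested (θ, φ) orientation to one of the stored photo frames can be sketched as follows. This is a minimal Python illustration, assuming an evenly spaced capture grid (the 8 horizontal stops and 5 camera positions described later); the patent does not prescribe a particular lookup scheme.

```python
def nearest_frame(theta, phi, n_cols=8, n_rows=5, phi_min=-90.0, phi_max=90.0):
    """Map a requested view angle (theta, phi), in degrees, to the
    (column, row) index of the nearest captured photo frame.

    The 8x5 grid and the phi range are illustrative assumptions:
    n_cols horizontal stops spaced evenly over 360 degrees and n_rows
    vertical stops spaced evenly from phi_min to phi_max.
    """
    # Wrap theta into [0, 360) and snap to the nearest horizontal stop.
    col = int(round((theta % 360.0) / (360.0 / n_cols))) % n_cols
    # Snap phi to the nearest vertical stop, clamped to the grid.
    step = (phi_max - phi_min) / (n_rows - 1)
    row = min(n_rows - 1, max(0, int(round((phi - phi_min) / step))))
    return col, row
```

For example, a request for (44°, 0°) would fall on the 45° column at the equatorial row of the hypothetical grid.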
[0025] With reference to FIG. 2, an implementation 120 consists of a Computer System 126 for mechanical control, image processing and data composition. A Photo Capture System 121 consists of a controlled rotating platform 122 and a multi-arm 124 with cameras 123 moving in the φ direction, with lens zoom and tilting controlled, to take photos at different (θ, φ) positions of the solid object 102.
[0026] A 3D scanner subsystem 128 (hardware or software enhanced) is included for capturing the 3D geometry data, which can be constructed into a 3D mesh 110 (shown in FIG. 1). The scanner subsystem 128 can be replaced by the cameras 123 if photogrammetry with the silhouette of the 2D photo image 108 (shown in FIG. 1) is used for 3D modeling.
[0027] The Computer system 126 composes the 2D photo image 108 and the 3D mesh 110 and sends them through the Internet network 130 to a remote server and network storage system 134 linked to the Internet network 130.
[0028] An Internet connected client device 132, such as a PC, a tablet PC, a smart phone and so on, with viewing and control software is used to view and control the 2D photo image 108 and the 3D mesh 110 interactively.
[0029] With reference to FIG. 3, a block diagram 140 shows how the data are captured, processed and stored, and then consumed by the viewer at the client side.
[0030] In the block 142, 2D photo images are captured frame by frame at each of the view positions, and they are preprocessed to optionally remove the image background, or compressed to JPEG format with hierarchical pixel resolution and transparency information, and then saved in a 2D photo image file as shown in the block 144.
[0031] In the block 146, 3D geometry data are scanned at different view positions by, for example but not limited to, a 3D modeling data scan. After a filtering process to obtain reliable data, they are composed into a single set of mesh points in a global coordinate system, such as a solid object file as shown in the block 148, also known as the 3D mesh.
[0032] In the block 150, a composing system will process the 2D photo image file and the solid object file, so as to compile the 3D geometry parameters of the 3D mesh with the corresponding 2D photo image parameters of the 2D photo images in the 2D image file for high image quality, photo-realistic virtual reality presentation and physical augmented reality applications; the matching of the 2D photo image file with the 3D mesh can then be achieved. The composed results are saved in a file structure, such as an application and data folder as shown in the block 152, to save the photo images at different resolution levels, the solid object file and a profile storing the corresponding parameters with, for example but not limited to, an XML file structure.
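The profile in the block 152 could, for instance, be serialized as follows. This is a minimal Python sketch; the `<profile>`/`<frame>` element and attribute names are assumptions, since the text only says the parameters are saved with an XML file structure.

```python
import xml.etree.ElementTree as ET

def profile_xml(frames):
    """Serialize per-frame matching parameters to an XML profile string.

    frames: dict mapping (i, j) -> (x, y, z, theta, phi, omega).
    The element and attribute names are illustrative assumptions; the
    patent only states that the profile uses an XML file structure.
    """
    root = ET.Element("profile")
    for (i, j), (x, y, z, theta, phi, omega) in sorted(frames.items()):
        # One <frame> element per photo frame, carrying its six
        # degrees-of-freedom parameters as attributes.
        ET.SubElement(root, "frame", {
            "i": str(i), "j": str(j),
            "x": str(x), "y": str(y), "z": str(z),
            "theta": str(theta), "phi": str(phi), "omega": str(omega),
        })
    return ET.tostring(root, encoding="unicode")
```

The viewing program in the block 154 would parse the same structure back with `ET.fromstring` before presenting the frames.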
[0033] A viewing program as shown in the block 154 runs at a client device for decoding matching parameters and presenting the photo image in high quality interactively with the end user, and can additionally provide control and measurement of a 3D mesh for specific applications like augmented reality.
[0034] With reference to FIG. 4, a 2D photo capturing system 160 will take photo images of a solid object located in a computer controlled rotating mechanics 162.
[0035] The solid object will be viewed from different view angles, with at least one camera moving horizontally and vertically around the solid object with a fixed rotation axis. In the present embodiment, photos of the solid object are exemplarily taken at the highest possible resolution by 5 different photo cameras with different view angles, for example the bottom side, the lower right side, the right side, the upper right side and the top side, and at 8 different horizontal orientations relative to the solid object via the rotation of the computer controlled rotating mechanics 162, for example 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, so as to form 40 different image files; the image files are then saved frame by frame with a specific naming convention 164. However, in other non-illustrated embodiments, it is also possible to take fewer or more photos of the solid object.
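The naming convention 164 for the 40 frames might be enumerated like this. A hypothetical Python sketch: the `Frame{i},{j}.jpg` pattern is modeled on the Framei,j.jpg convention mentioned with FIG. 6, not taken verbatim from the specification.

```python
def frame_filenames(n_cols=8, n_rows=5, pattern="Frame{i},{j}.jpg"):
    """Enumerate frame file names for an n_cols x n_rows capture grid.

    Column index i is the horizontal stop (8 platform rotations) and
    row index j is the camera position (5 view angles). The exact
    pattern string is an illustrative assumption.
    """
    return [pattern.format(i=i, j=j)
            for j in range(1, n_rows + 1)
            for i in range(1, n_cols + 1)]
```

With the defaults this yields the 40 file names of the embodiment, one per (orientation, camera) pair.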
[0036] Note that the image files may be pre-processed to remove the unwanted background images, add the transparency information, or convert into hierarchical lower resolutions, and saved under a single root directory for the future composing and viewing process.
[0037] With reference to FIG. 5, a 3D geometry data capturing system 180 is used to obtain the geometry data of the solid object. It could be physically an independent system, or a subsystem of the photo capturing system as described in FIG. 4.
[0038] The 3D geometry data capturing system 180 will use a certain wavelength of visible optics camera, laser beam, or invisible infrared and reflection capturing system, getting the depth data 182 of each part of the object geometry, or simply taking the silhouette of the 2D photo image 108.
[0039] The 3D geometry data will be processed by measurement, with unreliable noise data removed first, such as by a computing routine to filter unreliable data 184, and then the statistically more accurate data are computed as the final node positions in the 3D global coordinate system, such as by a routine to statistically compute geometry data 186.
[0040] The geometry data 186 will be compared and merged into a global data set 188, and the accumulated result saved in a standard solid object file 190.
[0041] With repeated measurements and data computations from many key positions to get all the necessary geometry data and parameters for the solid object, a final 3D mesh 192 can be constructed from a plurality of 3D geometry parameters.
[0042] With reference to FIG. 6, a parameter matching system 200 will be generated for matching the 2D photos with the 3D geometry data.
[0043] As the photo images will be saved in each of the photo frames 202 at each of the view angles, we have to match the 3D geometry parameters of the 3D mesh 204 with the corresponding 2D photo image parameters of the 2D photo images/frames 202, so they will be seen in the same presentation space.
[0044] As we know, any solid object can be represented by six degrees of freedom. We can choose a reference point in the 3D space (x, y, z) and the orientation angles of the object (θ, φ, ω) to represent the correlated relation between a photo image and the 3D geometry data.
[0045] Therefore, for each of the photo frames 202, we need to assign a set of the six parameters and tie them together for future presentation and control functions. In the present embodiment, for example but not limited to, the photo frames 202 can be named Framei,j.jpg and composed of M columns and N rows, and the reference point 206 thereof can be denoted as (xi,j, yi,j, zi,j). As a result, the six parameters of the 3D geometry data can be denoted as (x0,0, y0,0, z0,0, θ0,0, φ0,0, ω0,0), while the six parameters of the photo frames 202 can be denoted as (xi,j, yi,j, zi,j, θi,j, φi,j, ωi,j), wherein i = 1, 2, ..., M, and j = 1, 2, ..., N.
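The per-frame parameter set can be represented by a small record type. A Python sketch under the assumption that frames are keyed by their (column, row) indices, with (0, 0) reserved for the 3D mesh's own parameters as in the notation above.

```python
from dataclasses import dataclass

@dataclass
class FrameParams:
    """Six degrees of freedom tied to one photo frame: a 3D reference
    point (x, y, z) plus orientation angles (theta, phi, omega)."""
    x: float
    y: float
    z: float
    theta: float
    phi: float
    omega: float

# One record per frame, keyed by (i, j); the (0, 0) entry holds the
# 3D mesh's own parameters, mirroring the (x0,0, ..., omega0,0) notation.
frame_table = {(0, 0): FrameParams(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)}
```

The composing program of FIG. 7 would fill this table, one entry per i = 1..M, j = 1..N, before it is serialized into the profile.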
[0046] With reference to FIG. 7, a parameters matching software program 220 can be used to match these parameters with each of the photo frames.
[0047] The matching software program 220 has functions 222 to load the original 2D photo images and the 3D geometry parameters of the 3D mesh 226, and to save the composed data.
[0048] The matching software program 220 is designed to interact with the user by showing both the photo image 224 in any one of the 2D image frame as shown in the Frame Selection 230 and the 3D mesh 226.
[0049] Since the mouse cursor on a computer screen can move with only two degrees of freedom, the user does the parameter matching manually in stages. The user can control the solid object body axis 236 by moving the tip of the axis to control the values of θ and/or φ, and then by rotating the solid object body axis 236 to control the value of ω.
[0050] The reference point 234 can then be panned on the screen to control the values of x and y, and the mouse wheel used to control the size of the 3D mesh, which is equivalent to the scale of the object and hence the projected z location. It should be noted that, in this embodiment, all six parameters (x, y, z, θ, φ, ω) are adjusted to manually match the 2D image frame 224 with the 3D mesh 226. However, in other embodiments not illustrated herein, it is certainly possible not to adjust all six parameters if unnecessary.
[0051] In addition, the auto computing matching process 228 for helping to match the parameters is also provided, which can match the parameters programmatically for a single frame or for multiple frames, and will be described with FIG. 8.
[0052] Please note that the manual matching processes 232 can further be replaced by direct computation using the auto computing matching processes 228 while doing the capturing process altogether. The automatic matching of a 2D photo image file with a 3D mesh programmatically matches the parameters of the 2D photo images with the 3D geometry parameters of the 3D mesh, provided a 3D geometry scan mechanism can supply the parameter relations between the 2D photo images and the 3D mesh.
[0053] With reference to FIG. 8, a computation scheme 240 is developed to generate the parameters matching for all viewing angles at each of the photo frame automatically.
[0054] By applying quaternion technology, any 3D vector v, representing the reference point or the body axis, can be rotated around a rotating unit axis n by a rotating angle θ to get the new vector r in the 3D space.
[0055] Therefore we can use any two frames in the same row with known rotating angles, using their parameters to calculate the rotating unit axis n. Once it is known, the reference point and the body axis in each of the other frames in the same row 242 can be calculated, therefore automatically matching the parameters.
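The quaternion rotation in paragraph [0054] can be sketched as follows: rotating a vector v about a unit axis n by an angle θ via r = q v q*, expanded into the standard cross-product form. This is a generic illustration of the technique, not code from the patent.

```python
import math

def quat_rotate(v, axis, angle):
    """Rotate 3D vector v about the unit axis 'axis' by 'angle' radians
    using r = q v q*, with q = (cos(a/2), sin(a/2) * axis)."""
    half = angle / 2.0
    w = math.cos(half)                      # scalar part of q
    s = math.sin(half)
    q = (s * axis[0], s * axis[1], s * axis[2])  # vector part of q

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    # Expanded form of q v q*: t = 2 (q x v); r = v + w t + (q x t)
    t = tuple(2.0 * c for c in cross(q, v))
    ct = cross(q, t)
    return tuple(v[k] + w * t[k] + ct[k] for k in range(3))
```

Rotating (1, 0, 0) about the z axis by 90° yields (0, 1, 0), matching the row-wise frame computation described above.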
[0056] The same computation can be done in the vertical direction for the image frames in a single column 254 but different rows 252. Repeating the same process, all the frames can be calculated.
[0057] Theoretically, we need only three manually matched frames to calculate the rotating unit axes in the horizontal and vertical directions, which tremendously saves the manpower needed to find the matching parameters. However, in a practical implementation, the rotating trajectory of the camera may not lie on a perfect circular path, and the tilting angle and zoom lens may project the photo images in a non-linear way, so more manually matched frames, for example 5 or 7, may be required to get more reliable data. A visual adjustment to review the matching computation is also offered for fine adjustment.
[0058] With reference to FIG. 9, a file system at the Internet server 260 is constructed to provide the end user a viewing mechanism to see high resolution photo images plus the 3D geometry data at his client device.
[0059] All the viewer programs, image data in real time and in high resolution, the geometry data, accessory data and the presentation profile are saved under a root directory 262 to ensure there is no cross domain access problem.
[0060] The viewer program, accessed by the end user, will load all the necessary program routines, named Viewer herein, as shown in the block 264, and then get the real time image and geometry data of the 3D mesh automatically, as shown in the block 266. Next, as shown in the block 268, the interactive operation for viewing the high resolution image and the 3D mesh is available, so as to get high resolution images as shown in the block 269. Additionally, the functional operation as shown in the block 270 will further be available, depending on the augmented reality applications, for necessary 3D measurement as shown in the block 272 or 3D control functions as shown in the block 274.
[0061] The program can be implemented on a client device with a 3D operating environment like OpenGL or WebGL, or any other 3D environment.
[0062] With reference to FIG. 10, a client side viewing program 280 is developed to implement the functions described in FIG. 9.
[0063] The viewing program 280 can be a WebGL-enabled browser-based HTML5 viewing program for the Windows platform for the Computer system 126 (shown in FIG. 2), such as a desktop computer, a mobile device or any device capable of showing the operation window 282, or a native program on an OpenGL ES enabled mobile device.
[0064] The program has operational buttons 286 to perform the zoom, pan and rotate functions for viewing the photo image interactively. It has a slider controller to view either the photo images in high quality, or the wire frame of the 3D model, or even both of them with different transparency.
[0065] To show the smoothness of the 2D photo images in the 3D space, it can also perform angular morphing of the 2D photo image(s) 284 by varying an angle 0 < Δθ < θincrement and/or an angle 0 < Δφ < φincrement.
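The angular morphing could be driven by a blend weight like the following. A linear cross-fade is an assumption for illustration; the patent names the morphing but does not specify the blend function.

```python
def morph_weight(delta_theta, theta_increment):
    """Blend weight for cross-fading two neighbouring photo frames
    while the view rotates through 0 < delta_theta < theta_increment.
    A simple linear ramp, clamped to [0, 1]."""
    return max(0.0, min(1.0, delta_theta / theta_increment))

def morph_pixel(value_a, value_b, weight):
    """Linearly blend one pixel value from frame A toward frame B."""
    return (1.0 - weight) * value_a + weight * value_b
```

Halfway between two frames (Δθ = θincrement / 2) the weight is 0.5, so both frames contribute equally to the displayed image.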
[0066] Depending on the application, it also provides functioning buttons 288 to perform the measurement and application control, and any other functions required.
[0067] With reference to FIG. 11, the system can also be extended to a stereoscopic system 300 to view the object with a more realistic feeling, due to the depth perception of the human eyes.
[0068] The viewing windows will be two separate ones for the left stereogram 306 and the right stereogram 308, providing images for the left eye 302 and the right eye 304, respectively.
[0069] The two sets of the images and the matching parameters are taken considering the view angle difference for the same object 310. There will be independent sets for the left one 312 and the right one 314. In the present embodiment, for example but not limited to, the left one 312 and the right one 314 can be respectively named FrameLi,j.jpg and FrameRi,j.jpg, and the reference points 316 and 318 thereof can be respectively denoted as (xi,j, yi,j, zi,j)L and (xi,j, yi,j, zi,j)R. As a result, the six parameters of the 3D geometry data relative to the left one 312 and the right one 314 can be respectively denoted as (x0,0, y0,0, z0,0, θ0,0, φ0,0, ω0,0)L and (x0,0, y0,0, z0,0, θ0,0, φ0,0, ω0,0)R, while the six parameters of the left one 312 and the right one 314 are respectively denoted as (xi,j, yi,j, zi,j, θi,j, φi,j, ωi,j)L and (xi,j, yi,j, zi,j, θi,j, φi,j, ωi,j)R, wherein i = 1, 2, ..., M, and j = 1, 2, ..., N.
[0070] However, it is also possible to use a single set of the 2D photos, with different column indices for the same row of images. This will not be very accurate in the viewing angle and distance simulation, but will offer sufficient depth feeling for average viewers.
[0071] The view windows can be applied to TV's, movie screens, or even new wearable gadgets with view glasses.
[0072] Although specific embodiments of the present invention have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.

Claims (10)

  1. A method of matching a 2D photo image file with a 3D mesh by using matrix transformation with six degrees of freedom for a solid object, wherein 3D geometry parameters of the 3D mesh are compiled with 2D photo image parameters of the 2D image file for high image quality, photo-realistic virtual reality presentation and physical augmented reality applications.
  2. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 1, wherein the 2D photo images in the 2D photo image file are optionally processed by at least one of the following: removing image backgrounds thereof; compressing to JPEG format with hierarchical pixel resolution and transparency information; and saving in the 2D photo image file.
  3. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 1, wherein the 3D geometry parameters can be generated from a certain wavelength of visible optics camera, laser beam, or invisible infrared and reflection capturing system, by getting the silhouette of the 2D photo images in the 2D photo image file, or the depth data of each part of the object geometry of the solid object.
  4. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 1, wherein the matrix transformation with six degrees of freedom for the solid object is done either manually or automatically.
  5. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 4, wherein the manual matching comprises at least one of the following steps: manually moving the tip of the axis for controlling the value of θ; manually moving the tip of the axis for controlling the value of φ; manually rotating a body axis of the solid object for controlling the value of ω; and scaling a size of the 3D mesh for matching with the 2D photo image parameters being selected from the 2D photo image file until the whole set of the 2D photo image file has been used up, wherein no less than 3 of the 2D photo images in the 2D photo image file are selected manually from the body axis in horizontal and vertical directions with calculation assisted.
  6. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 4, wherein the automatic matching programmatically matches the 2D photo image parameters with the 3D geometry parameters while a 3D geometry scan mechanism provides parameter relations between the 2D photo images in the 2D photo image file and the 3D mesh.
  7. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 1, wherein the high image quality, photo-realistic virtual reality presentation is a file system at the Internet server constructed to provide an end user a viewing mechanism to see high resolution photo images and the 3D mesh, and the physical augmented reality applications are for 3D measurement or 3D control functions.
  8. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 7, wherein the high resolution photo images and the 3D mesh are viewed together with different transparency.
  9. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 7, wherein the high image quality, photo-realistic virtual reality presentation further extends to a stereoscopic system with viewing windows of a left stereogram and a right stereogram for the left eye and the right eye, respectively.
  10. The method of matching a 2D photo image file with a 3D mesh as claimed in claim 7, wherein the high image quality, photo-realistic virtual reality presentation further extends to show a smoothness of the 2D photo images in the 2D photo image file in a 3D space through angular morphing of the 2D photo images in at least one of a θ direction and a φ direction.
GB1317245.7A 2013-09-30 2013-09-30 A method using 3D geometry data for virtual reality presentation and control in 3D space Withdrawn GB2518673A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1317245.7A GB2518673A (en) 2013-09-30 2013-09-30 A method using 3D geometry data for virtual reality presentation and control in 3D space


Publications (2)

Publication Number Publication Date
GB201317245D0 GB201317245D0 (en) 2013-11-13
GB2518673A true GB2518673A (en) 2015-04-01

Family

ID=49585030

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1317245.7A Withdrawn GB2518673A (en) 2013-09-30 2013-09-30 A method using 3D geometry data for virtual reality presentation and control in 3D space

Country Status (1)

Country Link
GB (1) GB2518673A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006105625A1 (en) * 2005-04-08 2006-10-12 K.U. Leuven Research & Development Method and system for pre-operative prediction
WO2008060289A1 (en) * 2006-11-17 2008-05-22 Thomson Licensing System and method for model fitting and registration of objects for 2d-to-3d conversion
US20090141966A1 (en) * 2007-11-30 2009-06-04 Microsoft Corporation Interactive geo-positioning of imagery
US20090245691A1 (en) * 2008-03-31 2009-10-01 University Of Southern California Estimating pose of photographic images in 3d earth model using human assistance
US20120120199A1 (en) * 2009-07-29 2012-05-17 Metaio Gmbh Method for determining the pose of a camera with respect to at least one real object
EP2602588A1 (en) * 2011-12-06 2013-06-12 Hexagon Technology Center GmbH Position and Orientation Determination in 6-DOF


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944422A (en) * 2017-12-08 2018-04-20 业成科技(成都)有限公司 Three-dimensional image pickup device, three-dimensional camera shooting method and face identification method
CN107944422B (en) * 2017-12-08 2020-05-12 业成科技(成都)有限公司 Three-dimensional camera device, three-dimensional camera method and face recognition method
CN109147627A (en) * 2018-10-31 2019-01-04 天津天创数字科技有限公司 AR explanation method for a digital museum

Also Published As

Publication number Publication date
GB201317245D0 (en) 2013-11-13

Similar Documents

Publication Publication Date Title
US9286718B2 (en) Method using 3D geometry data for virtual reality image presentation and control in 3D space
US11869205B1 (en) Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US11106275B2 (en) Virtual 3D methods, systems and software
US10096157B2 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US10127722B2 (en) Mobile capture visualization incorporating three-dimensional and two-dimensional imagery
US10430994B1 (en) Techniques for determining a three-dimensional textured representation of a surface of an object from a set of images with varying formats
US9288476B2 (en) System and method for real-time depth modification of stereo images of a virtual reality environment
TW200825984A (en) Modeling and texturing digital surface models in a mapping application
EP3295372A1 (en) Facial signature methods, systems and software
US9754398B1 (en) Animation curve reduction for mobile application user interface objects
WO2014121108A1 (en) Methods for converting two-dimensional images into three-dimensional images
KR101588935B1 (en) A method using 3d geometry data for virtual reality image presentation and control in 3d space
US9025007B1 (en) Configuring stereo cameras
CN115529835A (en) Neural blending for novel view synthesis
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
GB2518673A (en) A method using 3D geometry data for virtual reality presentation and control in 3D space
TWI603288B (en) Method using 3d geometry data for virtual reality image presentation and control in 3d space
CN116664770A (en) Image processing method, storage medium and system for shooting entity
JP2018116421A (en) Image processing device and image processing method
US20230122149A1 (en) Asymmetric communication system with viewer position indications
JP5878511B2 (en) Method of using 3D geometric data for representation and control of virtual reality image in 3D space
Radkowski et al. Enhanced natural visual perception for augmented reality-workstations by simulation of perspective
Ban et al. Pixel of matter: new ways of seeing with an active volumetric filmmaking system
Krasil’nikov et al. Method of converting a 2D image into a stereoscopic 3D image
CN104574497B (en) 2015-04-29 2017-11-21 Method of matching a 2D photographic image file with a 3D mesh

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)