WO2018106198A1 - Viewing three-dimensional models through mobile-assisted virtual reality (vr) glasses - Google Patents

Viewing three-dimensional models through mobile-assisted virtual reality (vr) glasses

Info

Publication number
WO2018106198A1
WO2018106198A1 (PCT/TR2016/050534)
Authority
WO
WIPO (PCT)
Prior art keywords
model
software
textures
models
proceed
Prior art date
Application number
PCT/TR2016/050534
Other languages
French (fr)
Inventor
Onur DURSUN
Christopher FERRARIS
Stylianos PETRAKOS
John Ferraris
Duhan ÖLMEZ
Mustafa AZADEN
Original Assignee
Yasar Universitesi
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yasar Universitesi filed Critical Yasar Universitesi
Publication of WO2018106198A1 publication Critical patent/WO2018106198A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This invention relates to an optimization method for viewing three-dimensional models through mobile-assisted virtual reality (VR) glasses using smart phone enhancements; and the innovative process that makes it possible to explore said three-dimensional model interactively by means of this method.

Description

VIEWING THREE-DIMENSIONAL MODELS THROUGH MOBILE-ASSISTED
VIRTUAL REALITY (VR) GLASSES
Description
Technical Field: The present invention relates to a method for applying virtual reality in which a draft or finished project is presented to persons in a 3D space and/or used by designers during the design process.
In particular, the invention relates to an optimization method for viewing three-dimensional models through mobile-assisted virtual reality (VR) glasses using smart phone enhancements, and to the innovative process that makes it possible to explore said three-dimensional model interactively by means of this method.
Background Art
There are two different approaches in today's systems: 1. Computer Systems: Models are developed at a high level of realism, converted into usable media by software, and then used in combination with sophisticated computers and virtual reality headsets. People who are not technically equipped as described above cannot use such systems, and it takes much time and labour to develop the software. 2. Mobile Systems: In these systems, where static photographs and videos are used instead of models, no spatial perception can be evoked and no interactive experience can be presented; the user can merely view a 360-degree scene from a single point. Because the models cannot be used directly, rendering is required, and therefore the system cannot be employed as a design tool, as images are yielded as the output.
Proposed System: The deficiencies of the existing systems are that either the models offer high graphical fidelity but must be viewed on computers, or images and videos are used instead of models at the cost of spatial perception and interactivity. The object of the invention is to create a process, in combination with an optimization system, which enables the models to be viewed by means of mobile-assisted virtual reality headsets using telephone processors. In order to understand the optimization process, it is first important to understand how the models are created and what types of packages are available in the models.
A model is a unit made up of triangle geometries in virtual space, known as polygons, a specific visual texture, and texture assistant files customized in different layers behind the texture. These texture layers allow different complex texture creation systems such as UVW, reflection, bump, diffuse and transparency mapping.
Model: The elements referred to in this document as models are created by combining triangular elements as a result of virtually modeling 3D objects that we see in the real world.
Texture: Data files that are used to cover the outer surfaces of the models and that carry visually perceptible properties such as material data and colour codes. A texture is usually in image form and can contain different types of information in layers. Briefly, the most common texture background layers can be described as follows: UVW Layer: This layer may be the same size as the original texture image or have a different size, and contains information about how many indentations and protrusions the texture has, in what form, and in what locations of the texture. The ripple layers used to create the material ensure that the material looks more realistic; in real-time or pre-computed images, they define the shadow cast on the material. Reflection Layer: This layer may be the same size as the original texture image or at a different scale. It defines how much the material will reflect when applied to the texture or, in the case of variable reflection, in which parts of the material it will reflect more.
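Purely as an illustration, a minimal sketch in Python of how such a layered texture might be represented as data; the class and field names are hypothetical and are not part of the invention, and the layer set simply mirrors the list above.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple, List

@dataclass
class TextureLayer:
    """A single texture background layer (e.g. uvw, reflection, bump, diffuse, opacity)."""
    name: str                                # layer type
    image_path: str                          # image file holding the layer data
    size: Optional[Tuple[int, int]] = None   # may match the base texture size or differ
    strength: float = 1.0                    # how strongly the layer affects the material

@dataclass
class Texture:
    """Base colour image plus the assistant layers customized behind it."""
    base_image_path: str
    layers: List[TextureLayer] = field(default_factory=list)

    def layer(self, name: str) -> Optional[TextureLayer]:
        # Return the first layer of the requested type, if present.
        return next((l for l in self.layers if l.name == name), None)

# Example: a brick material with bump and reflection information.
brick = Texture(
    base_image_path="brick_diffuse.png",
    layers=[
        TextureLayer("bump", "brick_bump.png", strength=0.8),
        TextureLayer("reflection", "brick_reflection.png", strength=0.2),
    ],
)
```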
Bump Layer: This layer may be the same size as the original texture image or have a different size. It allows repeating texture elements in the model to have deeper indentations and protrusions, compared to the ripple layer, that partly appear and disappear.
Diffuse Layer: This layer may be the same size as the original texture image or have a different size. It defines in what parts of the material the light will be absorbed more by the material. Opacity Layer: This layer may be the same size as the original texture image or at a different scale. It defines the transparency of the material, i.e. what parts of the material will be transparent and what level of transparency will be applied.
Triangle Number: As mentioned before, models are composed of triangular geometries combined at different angles and sizes. The triangle number is vital for the invention. In today's systems, because of the triangle number, models are displayed either by computer-aided systems or by mobile devices, which accommodate not models but images. The triangle number allows the product to be viewed on a desired display by varying the number of images per second, assigning the workload to the processor or video card. Visualization Methods: The visualization process, which is called background rendering, is the computation system that allows the delivery of the finished model to the end user. Today, in the modeling and marketing industries, the model can be presented to the user in three different ways.
Static Visual Rendering: This refers to a 2D visual production system based on stage, camera and lighting adjustments followed by a computerized calculation through a modeling software. Although these products are static, they do not cause any extra workload during presentation, as they are computed and rendered in advance.
Real-Time Rendering: This refers to a rendering system where models created by means of a modeling software, together with the lighting settings, are rendered in real time while the user navigates through the model, instead of creating static images, without any preliminary computation and rendering. As each frame displayed at every moment is rendered individually, there is a heavy workload on the processor and, if present, the video card of the device. Therefore, if a system for real-time rendering of high graphical fidelity models and media is intended, powerful computers are required. Real-Time Rendering with Pre-Computed Texture: This process is more complicated than the first two. Briefly, all models and textures are computed/rendered with a high triangle number, layered textures and lighting at a reality level of graphical fidelity. Then, the computed textures are visualized on each triangle or group of triangles. Subsequently, the triangle numbers of the models are reduced, and the textures are substituted with the texture visuals already calculated and pasted onto the models. Thus, while the devices allow the user to experience the model and the environment in real time, the focus is on representing the triangles in the model rather than making heavy computations such as those for texture and lighting. Ultimately, the number of frames displayed per second is greatly increased.
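The following is a minimal sketch of the idea behind pre-computed textures, assuming the images are held as NumPy arrays: lighting computed offline is baked into the base texture so the device only has to display the result. The function and variable names are illustrative and do not describe the invention's own tooling.

```python
import numpy as np

def bake_lighting(base_texture: np.ndarray, lightmap: np.ndarray) -> np.ndarray:
    """Multiply a base colour texture (H x W x 3, values 0-255) by a pre-computed
    lighting map (H x W, values 0.0-1.0) and return the baked texture.

    At run time the baked texture is simply pasted onto the simplified model,
    so no lighting computation is needed on the mobile processor."""
    if base_texture.shape[:2] != lightmap.shape:
        raise ValueError("texture and lightmap resolutions must match")
    baked = base_texture.astype(np.float32) * lightmap[..., np.newaxis]
    return np.clip(baked, 0, 255).astype(np.uint8)

# Example: a 256x256 texture lit by a soft gradient computed offline.
texture = np.full((256, 256, 3), 200, dtype=np.uint8)
lightmap = np.linspace(0.3, 1.0, 256, dtype=np.float32)[np.newaxis, :].repeat(256, axis=0)
baked = bake_lighting(texture, lightmap)
```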
The current situation of VR systems used in the existing technology, the implementation steps, and the advantages and disadvantages offered by the prior art are detailed below.
1. In-House Systems
1.1. In-House Development of Computer and Computer-Aided VR Systems
Current Situation: This system has recently been introduced in some of the high- budget projects abroad. However, they are project-specific, and do not have a proprietary system, but are rather developed in-house through the cooperation of specific teams.
Implementation Steps: Developing in-house high-resolution models, textures and lighting, and producing applications capable of displaying models with high levels of graphical fidelity for the user to experience, running on computers with a sophisticated hardware level costing around USD 6,000 as a minimum, and on computer-integrated virtual reality headsets costing around USD 1,000 as a minimum.
Pros and Cons: Models with a high level of graphical fidelity can be produced and experienced, and there is no need to optimize the models; however, the cost is high, the results cannot be shared with customers, customers are unable to experience them through their own means, qualified in-house staff are needed to handle the system, and production times are long.
1.2. In-House Development of Static Photographs with Mobile Virtual Reality Systems
Current Situation: Today, no design/construction company has such a team. However, some software attempts have been launched to ensure that people with ordinary skill in normal visualization can create 360-degree visuals for virtual reality. Implementation Steps: Thanks to their internal staff adequately skilled in visualization, companies can both visualize a design and produce 360-degree photographs through the use of third-party software. After procuring software capable of creating images, such photographs can be prepared by experts and delivered to customers.
Pros and Cons: Visuals can be produced internally without the need for service from an external provider, and customers can view the images by their own means with cost-effective instruments; however, the reality effect cannot be fully realized due to the software used, companies and customers prefer traditional 2D visuals rather than using this system, the mathematical algorithms employed by the software to create such visuals reduce the spatial experience during the rendering process, in-house staff are needed to handle this job, spaces cannot be experienced because the photographs created are static, viewpoints can be modified only within the photograph, all essential scenes must be designed and modeled at all camera angles while traditional rendering systems only require labour for the camera viewpoint, interface interactions or data transfer are not possible, and the workload is heavy for low-budget traditional projects.
2. Virtual Reality System Services Available as a Service Outsourced from a Specialized Provider
2.1. Service of Producing VR-Assisted 360-Degree Photographs through Visualization Companies
Current Situation: Today, some companies provide standard visualization services. Although the VR system has not yet become common, some companies have launched efforts in this area.
Implementation Steps: Submitting the project to a visualization company, creation of high-resolution models, textures and lighting by a visualization team, consultation between the company and the visualization team for approximately 1 month to decide on revisions, textures and camera locations, creation of 360-degree static photographs by the visualization team, and delivering the final visual to the customer through a virtual reality platform.
Pros and Cons: The photographs created can be shared, and photographs with a high level of reality perception can be produced; however, a very high cost and a long time are required to procure the service, the model cannot be experienced interactively, static photographs are limited in the level of spatial experience they can communicate, the creation of visuals requires a significant amount of time, the entire process has to be rolled back in case of a revision in the project, the design phase has to be complete, the system cannot be used as a design tool, and an interface cannot be created as the output is in the form of a photograph.
2.2. Visualization Companies' Services for Creating Computer and Computer-Assisted VR Systems
Current Situation: Although not yet introduced in our country, some companies abroad have already begun to design these systems for high-budget projects.
Implementation Steps: The project developed is entrusted to qualified visualization companies, high-resolution models, textures and lighting are developed by the expert team, model and texture revisions are applied in consultation between the project owner and the expert team, models are combined using third-party software (e.g. Unity, Unreal Engine) and the application is developed, the project owner applies interface and data revisions to the software, and the expert team finalizes the software and delivers it to the client. Pros and Cons: The virtual reality experience can be offered at a high level of graphical fidelity, and an interactive interface and navigation functions are available; however, the service can be procured only at a very high cost, any revision in the design model alters the entire model and software, only the project sales offices can use it, it can be presented only through computers with sophisticated hardware costing around USD 6,000 as a minimum and computer-integrated virtual reality glasses costing around USD 1,000 as a minimum at project sales offices, the project cannot be shared with the client on any platform, communication with remote clients is not possible, the library created is project-specific and cannot be transferred between projects, the design phase has to be complete, and use as a design tool is not possible.
2.3. Software Integration by a Second Company of the Visual Developed by the Client
Current Situation: Today, this system is typically built as a virtual tour, with the user expected to experience the spaces through a web browser.
Implementation Steps: As described in article A2, 360-degree static photographs are created in the design office and the images are submitted to a software developer; if the intent is to develop a virtual tour of the entire space, many images must be created to allow a variety of camera points; the service provider transforms the images supplied by the designer into a package using a software product or customized utility, and the final product is delivered to the client through a store or web site.
Pros and Cons: Images can be created internally by the company, software development does not require much time and can be done on a lower budget, the result can be shared with clients, and images with a high level of graphical fidelity can be produced; however, the creation of images requires many camera locations, the quantity of images required increases the already long rendering times, an interactive interface cannot be created, revision times may be lengthy due to the involvement of a second company, spatial perception cannot be fully experienced due to the static nature of the photographs, spatial depth cannot be experienced in scene transitions, and the software has to be developed as a project-specific product, so a library of assets cannot be built for sharing with other projects.
Briefly, while VR is not a novel technology, it has gained great momentum driven by recent advancements in computer hardware as well as the introduction and development of relatively cost-effective VR head-mounted displays (HMD) (e.g. HTC Vive, Oculus Rift). Despite the progress so far, VR solutions available in the market today have some problems.
Current problems in the prior art can be described as follows:
Cost Problem: The high cost of computers (workstations) equipped with hardware capable of supporting VR solutions, and of HMD technologies, is one of the critical obstacles preventing the extension of the market and the growth of the user portfolio. In the current situation, a user intending to obtain a high-quality VR experience has to spend approximately USD 20,000 or more in Turkey.
Portability Problem: The hardware that supports VR technologies is not only expensive, but also spatially constrained (static). Currently, an average user who wants to experience VR needs to go to a VR developer's/service provider's office and use the equipment there.
Accessibility-Shareability Problem: File sizes of current VR solutions are high. This is a major obstacle to accessibility and shareability.
Object of the Invention
In order to eliminate the known disadvantages of the prior art, one object of the invention is to provide a system that, thanks to its specific optimization method, allows 3D models to be viewed through mobile-assisted virtual reality headsets using smart phones instead of expensive, dedicated VR equipment, by dramatically reducing the triangle number. Another object of the invention is to propose an innovative process for experiencing three-dimensional models interactively with mobile-assisted virtual reality headsets using smart phones.
Another object of the invention is to reduce high costs of around USD 20,000 down to the level of USD 100, based on the use and accessibility of smart phones and mobile-assisted virtual reality headsets.
Another object of the invention is to provide a practical solution to end users by dramatically reducing, thanks to its innovative process, the time required to create images from three-dimensional models in a labour-intensive manner. Another object of the invention is to make portable, through the use of smart phones and mobile-assisted virtual reality headsets, the virtual reality solutions otherwise developed on spatially fixed workstations with powerful hardware.
Another object of the invention is to reduce, using the original optimization method developed, three-dimensional models down to sizes that better support accessibility and shareability for end users.
Description of Drawings:
Figure 1 - Step 1, process step, building the model
Figure 2 - Step 2, process step, optimizing the model
Figure 3 - Step 3, process step, integrating the model with software
Figure 4 - Step 4, process step, creating the application file
Description of References
Ref. No. Description
1 Initial step,
10 Build the model,
11 Build the 3D model through any second-party software,
12 Export the model in .obj or .fbx format,
13 Import the data in .obj or .fbx format into the other second-party application to be optimized,
14 Check the imported data for any failure in triangle geometries,
- if yes, return to step 12,
- if no, proceed with step 21,
20 Optimize the model,
21 Start optimization,
211 Display variable distance detail,
212 Reduce the number of model surfaces,
213 Fit the pre-computed texture,
214 Manually reduce the triangle number of the models,
215 Automatic reduction thanks to third-party software and utilities,
22 Check whether the model's number of triangle surfaces is optimum,
- if no, return to step 21,
- if yes, proceed with step 23,
23 Query if textures will be pre-computed,
- if no, proceed with step 31,
- if yes, proceed with step 24,
24 Assign textures and materials where necessary layers are placed on the surfaces of models developed,
25 Make essential computations with textures, and develop textures with high level of reality,
26 Check computation results of textures for any problem,
- if yes, return to step 24,
- if no, proceed with step 27,
27 Replace initial textures with pre-computed textures with high level of reality,
30 Integrate the model with software,
31 Start integration,
32 Transfer the models and textures to a second-party game engine/software,
33 Check for any surface problem once the model has been imported,
- if yes, return to step 22,
- if no, proceed with step 34,
34 Fit the model into the scene, complete essential lighting and scaling,
35 Fit and reproduce pre-built objects in scene set-up,
351 Images and 3D objects in the pre-built library,
36 Check whether the model and 3D visual elements appear properly in the scene,
- if no, return to step 32,
- if yes, proceed with step 37,
37 Add pre-built codes and software packages to the project scene,
371 Camera orientation package, camera movement package, interaction package, user interface package,
38 Check whether packages function properly in integration with the model,
- if no, return to step 37,
- if yes, proceed with step 41,
40 Build the application file,
41 Build the application version and transfer to the mobile device,
42 Check all data,
- Image per second (model building (10) and model optimization (20)),
- Surface checks in the model (model optimization (20)),
- Depth, material radiation and reflection quantity in textures (model optimization (20)),
- Functional condition of interaction packages (model integration with software (30)),
- Check whether the whole package and steps function properly,
- If no, repeat the relevant steps,
- if yes, proceed with the last step,
5 Last step
Y Yes
N No
Detailed Description of the Invention
The invention relates to an optimization method for viewing three-dimensional models through mobile-assisted virtual reality (VR) headsets using smart phone enhancements, and to a process that makes it possible to explore said three-dimensional model interactively by means of this method. The process of the invention is detailed in Figures 1, 2, 3 and 4. Accordingly, the process steps and the procedures in each step are as follows:
1. Building the Model (10): This is the process of developing any 3D model by means of a second-party software (e.g. 3D Studio Max, Autodesk Maya, Rhinoceros, Archicad, Revit, Blender, ZBrush, SketchUp, etc.) (11). The purpose and method of forming the models at this stage are not of the essence. In general, models can be created for architectural or industrial designs, or for mechanical or other purposes. The core requirement is that the model has to be exportable in '.fbx' or '.obj' format (12) (Figure 1).
These models, which are generally not produced for virtual reality, either function vectorially or have a high triangle number. Therefore, such models cannot be displayed directly on mobile-assisted virtual reality headsets due to the limited processing capacities of today's mobile devices.
After the models are generated in any second-party software (11), they are transferred to another software for optimization (13). Here, models should be exported (12) as '.fbx' or '.obj' files, which represent surface and model transfer extensions. At this point, any three-dimensional model, whether vectorial or not, is transformed into a "mesh" model composed of transferable triangles. This allows the transfer of the triangles required by the optimization process (20) between the programs. This step can be omitted on some projects. Rather than pre-building and optimizing the models, they can be designed as "low-poly" versions, namely models with a low triangle number, in triangle-based 3D design tools. Alternatively, if the model's number of triangle surfaces is already low, only the detail displaying, checking (14) and geometry correction procedures can be carried out without any further processing before proceeding with the software integration step (step 3).
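As an illustration of such a transferable triangle mesh, here is a minimal sketch in Python of writing and reading triangle-only data in the Wavefront .obj form; it handles only vertices and triangular faces and is not the second-party software referred to above.

```python
def write_obj(path, vertices, triangles):
    """Write a triangle mesh in the minimal Wavefront .obj form:
    'v x y z' lines for vertices, 'f i j k' lines (1-based) for triangles."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in triangles:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

def read_obj(path):
    """Read vertices and triangular faces back; textures, normals and faces
    with more than three corners are ignored (kept simple on purpose)."""
    vertices, triangles = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append(tuple(float(v) for v in parts[1:4]))
            elif parts[0] == "f" and len(parts) == 4:
                # Face indices may appear as 'i', 'i/t' or 'i/t/n'.
                triangles.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return vertices, triangles

# Round trip of a single triangle.
write_obj("tri.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
verts, tris = read_obj("tri.obj")
```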
2. Optimizing the Model (20): The optimization process is the generic name for the triangle reduction process (214) applied to the models through any second-party software that serves as a triangle-based 3D design tool (e.g. Autodesk Maya, Autodesk 3d Studio Max, SketchUp). Reducing the number of triangular geometries (214) reduces the number of surfaces to be processed by the processor in real-time rendering, so the images per second can be increased on the device display, hence ensuring smooth display on mobile-assisted virtual reality devices (Figure 2). The triangle number can be reduced in three different ways:
Reducing the Number of the Models' Own Surfaces (212): This procedure can be completed in two different ways. The first is the manual method (214), in which the user merges the triangle edges of the imported model into larger triangles and optimizes the model to the extent desired.
The other is the automatic method (215), in which optimization is conducted by means of a software product or third-party program. However, the automated commands and software used today may lead to undesired results in the models: while the number of triangles is reduced, unintended geometry losses may occur, rendering the 3D model/environment unusable. Therefore, the optimization software developed with this invention has an algorithm accommodating a detail variable that will not cause any geometry loss.
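For illustration only, a generic vertex-clustering sketch in Python of automatic triangle reduction driven by a detail variable; this is not the invention's optimization algorithm, which is not reproduced here, but it shows how a detail parameter can trade triangle count against geometric fidelity while dropping only triangles that have genuinely collapsed.

```python
import numpy as np

def cluster_decimate(vertices, triangles, detail=0.05):
    """Generic vertex-clustering decimation: snap vertices to a grid whose cell
    size is the 'detail' variable, merge vertices that fall into the same cell,
    and drop triangles that collapse (fewer than three distinct vertices).
    Smaller 'detail' keeps more geometry; larger 'detail' removes more triangles."""
    vertices = np.asarray(vertices, dtype=np.float64)
    cells = np.floor(vertices / detail).astype(np.int64)

    # Map each occupied cell to one representative vertex (the cell average).
    cell_to_new = {}
    new_vertices = []                 # list of [running_sum, count]
    vertex_map = np.empty(len(vertices), dtype=np.int64)
    for i, cell in enumerate(map(tuple, cells)):
        if cell not in cell_to_new:
            cell_to_new[cell] = len(new_vertices)
            new_vertices.append([vertices[i].copy(), 1])
        else:
            entry = new_vertices[cell_to_new[cell]]
            entry[0] = entry[0] + vertices[i]
            entry[1] += 1
        vertex_map[i] = cell_to_new[cell]

    new_vertices = [tuple(v / n) for v, n in new_vertices]

    # Keep only triangles that still have three distinct corners after merging.
    new_triangles = []
    for a, b, c in triangles:
        ta, tb, tc = vertex_map[a], vertex_map[b], vertex_map[c]
        if len({ta, tb, tc}) == 3:
            new_triangles.append((int(ta), int(tb), int(tc)))
    return new_vertices, new_triangles
```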
Level of Detail (LOD) (211): This system allows models to appear differently from different distances. As the camera moves away from the object, fewer triangle surfaces or a geometrically less complex model is displayed, aiming to reduce the workload imposed on the device processor by remote objects that the user does not need to view in detail at all distances between user and model. For example, while an object has 1,000 triangles at a distance of 1 meter, it has 500 triangles at a distance of 5 meters, 200 triangles at a distance of 25 meters, and only 20 triangles at a more remote distance.
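A minimal sketch, using the example distances and triangle counts above, of how a level-of-detail budget and mesh could be selected at run time; the file names and thresholds are illustrative only.

```python
def lod_triangle_budget(distance_m: float) -> int:
    """Pick a triangle budget for an object from its distance to the camera,
    using the example thresholds given above (1 m, 5 m, 25 m, farther)."""
    if distance_m <= 1.0:
        return 1000
    if distance_m <= 5.0:
        return 500
    if distance_m <= 25.0:
        return 200
    return 20

def select_lod(lods: dict, distance_m: float):
    """'lods' maps a triangle count to a pre-built mesh for that level of detail;
    return the most detailed mesh whose count does not exceed the budget."""
    budget = lod_triangle_budget(distance_m)
    eligible = [count for count in lods if count <= budget]
    return lods[max(eligible)] if eligible else lods[min(lods)]

# Example: three pre-built versions of the same object.
lods = {1000: "chair_high.obj", 500: "chair_mid.obj", 20: "chair_low.obj"}
print(select_lod(lods, 3.0))   # chair_mid.obj (budget 500 at 3 m)
print(select_lod(lods, 40.0))  # chair_low.obj (budget 20 beyond 25 m)
```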
Segmentation method: In this system, the whole model is divided into smaller pieces not perceived by the user, roughly reducing the number of triangles. The models are segmented into cubes, so all surfaces to be visible within the camera angle are displayed, and invisible parts are not displayed, hence minimum workload is guaranteed on the device processor.
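By way of illustration, a rough Python sketch of the segmentation idea: triangles are grouped into cubic cells, and only cells falling within the camera's field of view are sent for display. The cell test here is a deliberately simple cone check, not the invention's implementation.

```python
import math

def segment_into_cubes(triangle_centres, cube_size):
    """Assign each triangle to a cubic cell of the given size, keyed by the
    integer cell coordinates of its centre point."""
    cells = {}
    for idx, (x, y, z) in enumerate(triangle_centres):
        key = (int(math.floor(x / cube_size)),
               int(math.floor(y / cube_size)),
               int(math.floor(z / cube_size)))
        cells.setdefault(key, []).append(idx)
    return cells

def visible_cells(cells, cube_size, cam_pos, cam_dir, fov_deg=90.0):
    """Very rough visibility test: a cell is kept if the direction from the
    camera to the cell centre lies within the camera's field of view
    (cam_dir is assumed to be a unit vector). Triangles in the remaining
    cells are never sent to the processor."""
    half_fov = math.radians(fov_deg) / 2.0
    keep = []
    for key, tri_ids in cells.items():
        centre = [(k + 0.5) * cube_size for k in key]
        to_cell = [c - p for c, p in zip(centre, cam_pos)]
        norm = math.sqrt(sum(v * v for v in to_cell)) or 1e-9
        cos_angle = sum(a * b for a, b in zip(to_cell, cam_dir)) / norm
        if cos_angle >= math.cos(half_fov):
            keep.extend(tri_ids)
    return keep
```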
3. Software Integration (30): The operating systems employed for virtual reality systems require different types of coding and software methods. Using systems in which different code and software packages can generally be employed, such as game production software (e.g. Unity, Unreal Engine, etc.), the models built and optimized are combined with interactive model parts and software (Figure 3). Pre-built model parts with pre-defined interactivity and detail level can be added to the architectural and environmental models (32) transferred to the game engine software, and these can also be reproduced or modified. This means that the whole system can also be used as a design tool, since models can be transferred to the system while the design is still in progress, instead of only being formed into a final product.
What is important in integrating software packages is which packages and functions are developed into software and added to the model in package form to provide an interactive experience. The term "package" will hereinafter be used to refer to a series of software products that can be integrated into the models with a drag-and-drop action. The packages produced in the invention and used in the system are as follows:
Camera Orientation Package (371): Essentially, a virtual reality system creates a realistic perception by changing the direction of a camera in a virtual environment in the same direction in which the user turns his or her head. The head motion sensed via the motion sensors in the device is transferred to the virtual environment.
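A minimal sketch, assuming the device reports yaw and pitch angles, of how head orientation can be mapped to the virtual camera's forward direction; the axis convention and function name are illustrative, not the package itself.

```python
import math

def forward_vector(yaw_deg: float, pitch_deg: float):
    """Convert the head orientation reported by the device's motion sensors
    (yaw = rotation around the vertical axis, pitch = looking up/down)
    into the camera's forward direction in the virtual environment."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x
            math.sin(pitch),                   # y (up)
            math.cos(pitch) * math.cos(yaw))   # z

# Looking straight ahead (yaw 0, pitch 0) gives the +z axis;
# turning the head 90 degrees to the right gives approximately the +x axis.
print(forward_vector(0, 0))    # (0.0, 0.0, 1.0)
print(forward_vector(90, 0))   # approximately (1.0, 0.0, 0.0)
```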
Camera Motion Pack: In a virtual reality system, this allows the user's view in the virtual environment to be changed/advanced via a controller, so that the user can navigate through the model/environment. Using the forward, backward, right and left commands on the controller, the user can move in any direction and to any extent within the model. The directions change as the head moves, with the direction of the camera viewpoint being "forward".
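A minimal sketch of controller-driven movement in which "forward" follows the camera viewpoint projected onto the ground plane, as described above; the step size and coordinate convention are assumptions, not part of the invention's package.

```python
def move(position, forward, command, step=0.1):
    """Move the user inside the model: 'forward'/'backward' follow the camera
    viewpoint direction (projected onto the ground plane), 'left'/'right' are
    perpendicular to it, as described for the camera motion pack above."""
    fx, _, fz = forward
    norm = (fx * fx + fz * fz) ** 0.5 or 1e-9
    fx, fz = fx / norm, fz / norm            # ground-plane forward
    rx, rz = fz, -fx                         # ground-plane sideways direction
    dx, dz = {
        "forward":  ( fx,  fz),
        "backward": (-fx, -fz),
        "right":    ( rx,  rz),
        "left":     (-rx, -rz),
    }[command]
    x, y, z = position
    return (x + dx * step, y, z + dz * step)

# Walking forward while facing +z, then stepping sideways.
pos = move((0.0, 1.7, 0.0), (0.0, 0.0, 1.0), "forward")
pos = move(pos, (0.0, 0.0, 1.0), "right")
```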
Interaction Package: This is used especially for architectural models and demonstrations, where an animation/demonstration is activated when the user looks at an interactive object or enters an interactive space (e.g. an automatic door opening when a person approaches, an item in a store coming forward when looked at, etc.).
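For illustration, a simple gaze test in Python: an object is considered looked at when it lies within a narrow cone around the view direction and within a short distance, at which point its animation could be activated. The object structure and thresholds are hypothetical, not the invention's interaction package.

```python
import math

def gazed_object(cam_pos, cam_dir, objects, max_distance=5.0, cone_deg=5.0):
    """Return the interactive object the user is currently looking at, if any:
    the object centre must lie within a narrow cone around the view direction
    (cam_dir is assumed to be a unit vector) and within 'max_distance' metres."""
    best, best_dist = None, max_distance
    cos_cone = math.cos(math.radians(cone_deg))
    for obj in objects:
        to_obj = [o - p for o, p in zip(obj["position"], cam_pos)]
        dist = math.sqrt(sum(v * v for v in to_obj)) or 1e-9
        if dist <= best_dist and sum(a * b for a, b in zip(to_obj, cam_dir)) / dist >= cos_cone:
            best, best_dist = obj, dist
    return best

# Example: trigger the door animation when the user looks at the door straight ahead.
objects = [{"name": "automatic_door", "position": (0.0, 1.7, 3.0), "on_gaze": "open_door"}]
target = gazed_object((0.0, 1.7, 0.0), (0.0, 0.0, 1.0), objects)
if target:
    print("activate:", target["on_gaze"])  # activate: open_door
```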
User Interface Package: This is a system superimposed on the user's camera image, which can optionally be hidden or switched to a different interface. It is an in-app platform package through which information can be transferred, optional menus can be created, and settings can be made and adjusted.
4. Building the Application File (40): In order to be viewed on mobile devices, the applications developed and run on the computer operating system are exported in such a way that they can run on mobile operating systems (41). The APK file (application file) downloaded to the phone is opened by the Android or iOS operating system. The phone is fitted into a mobile virtual reality headset and made ready for use (Figure 4).
The process steps of the system according to the invention are as follows: Initial step (1),
- Build the model (10),
- Build the 3D model through any second-party software (11),
- Export the model in .obj or .fbx format (12),
- Import the data in .obj or .fbx format into the other second-party application to be optimized (13),
- Check the imported data for any failure in triangle geometries (14),
- if yes (Y), return to step 12,
- if no (N), proceed with step 21,
Optimize the model (20),
- Start optimization (21),
- Display variable distance detail (211),
- Reduce the number of model surfaces (212),
- Fit the pre-computed texture (213),
- Manually reduce the triangle number of the models (214),
- Automatic reduction thanks to third-party software and utilities (215),
- Check whether the model's number of triangle surfaces is optimum (22),
- if no (N), return to step 21,
- if yes (Y), proceed with step 23,
- Query if textures will be pre-computed (23),
- if no (N), proceed with step 31,
- if yes (Y), proceed with step 24,
- Assign textures and materials where necessary layers are placed on the surfaces of models developed (24),
- Make essential computations with textures, and develop textures with high level of reality (25),
- Check computation results of textures for any problem (26),
- if yes (Y), return to step 24,
- if no (N), proceed with step 27,
- Replace initial textures with pre-computed textures with high level of reality (27),
Integrate the model with software (30),
- Start integration (31),
- Transfer the models and textures to a second-party game engine/software (32),
- Check for any surface problem once the model has been imported (33),
- if yes (Y), return to step 22,
- if no (N), proceed with step 34,
- Fit the model into the scene, complete essential lighting and scaling (34),
- Fit and reproduce pre-built objects in scene set-up (35),
- Images and 3D objects in the pre-built library (351),
- Check whether the model and 3D visual elements appear properly in the scene (36),
- if no (N), return to step 32,
- if yes (Y), proceed with step 37,
- Add pre-built codes and software packages to the project scene (37),
- Camera orientation package, camera movement package, interaction package, user interface package (371),
- Check whether packages function properly in integration with the model (38),
- if no (N), return to step 37,
- if yes (Y), proceed with step 41,
Build the application file (40),
- Build the application version and transfer to the mobile device (41),
- Check all data (42),
- Image per second (model building (10) and model optimization (20)),
- Surface checks in the model (model optimization (20)),
- Depth, material radiation and reflection quantity in textures (model optimization (20)),
- Functional condition of interaction packages (Model integration with software (30)),
- Check whether the whole package and steps function properly,
- If no, repeat the relevant steps,
- if yes, proceed with the last step,
Last step (5).

Claims

1. This invention is an optimization method for viewing three-dimensional models through mobile-assisted virtual reality (VR) headsets using smart phone enhancements, containing the processes of developing any 3D model by means of a second-party software (11), transferring them to another software for optimization (13), checking the imported data for any failure in triangle geometries (14), and building the model (10), characterized in that it contains the processes of optimizing the model (20), integrating the model with the software (30), and creating the application file (40).
2. A method according to Claim 1 , characterized in that the model developed is exported in '.fbx' or '.obj' format.
3. A method according to Claim 1, wherein said model optimization procedure (20) involves the following process steps: display variable distance detail (211), reduce the number of model surfaces (212), fit the pre-computed texture (213), manually reduce the triangle number of the models (214), automatic reduction thanks to third-party software and utilities (215), check whether the model's number of triangle surfaces is optimum (22), if no (n), return to step 21, if yes (y), proceed with step 23, query if textures will be pre-computed (23), if no (n), proceed with step 31, if yes (y), proceed with step 24, assign textures and materials where necessary layers are placed on the surfaces of models developed (24), make essential computations with textures, and develop textures with high level of reality (25), check computation results of textures for any problem (26), if yes (y), return to step 24, if no (n), proceed with step 27, and replace initial textures with pre-computed textures with high level of reality (27).
4. A method according to Claim 1, wherein said model-software integration procedure (30) involves the following steps: start integration (31), transfer the models and textures to a second-party game engine/software (32), check for any surface problem once the model has been imported (33), if yes (y), return to step 22, if no (n), proceed with step 34, and fit the model into the scene, complete essential lighting and scaling (34), fit and reproduce pre-built objects in scene set-up (35), images and 3D objects in the pre-built library (351), check whether the model and 3D visual elements appear properly in the scene (36), if no (n), return to step 32, and if yes (y), proceed with step 37, add pre-built codes and software packages to the project scene (37), camera orientation package, camera movement package, interaction package, user interface package (371), check whether packages function properly in integration with the model (38), if no (n), return to step 37, and if yes (y), proceed with step 41.
5. A method according to Claim 1, wherein said procedure of creating the application file (40) involves the following process steps: build the application version and transfer to the mobile device (41), check all data (42), image per second (model building (10) and model optimization (20)), surface checks in the model (model optimization (20)), depth, material radiation and reflection quantity in textures (model optimization (20)), functional condition of interaction packages (model integration with software (30)), check whether the whole package and steps function properly, if no, repeat the relevant steps, and if yes, proceed with the last step.
PCT/TR2016/050534 2016-12-10 2016-12-24 Viewing three-dimensional models through mobile-assisted virtual reality (vr) glasses WO2018106198A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR201618285 2016-12-10
TR2016/18285 2016-12-10

Publications (1)

Publication Number Publication Date
WO2018106198A1 true WO2018106198A1 (en) 2018-06-14

Family

ID=57915048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2016/050534 WO2018106198A1 (en) 2016-12-10 2016-12-24 Viewing three-dimensional models through mobile-assisted virtual reality (vr) glasses

Country Status (1)

Country Link
WO (1) WO2018106198A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117533A (en) * 2018-07-27 2019-01-01 上海宝冶集团有限公司 Electronic workshop fire-fighting method based on BIM combination VR
CN109671161A (en) * 2018-11-06 2019-04-23 天津大学 Immersion terra cotta warriors and horses burning makes process virtual experiencing system
CN109920044A (en) * 2019-02-27 2019-06-21 浙江科澜信息技术有限公司 A kind of three-dimensional scene construction method, device, equipment and medium
CN110136269A (en) * 2019-05-09 2019-08-16 安徽工程大学 Drop test visualization virtual reality system and method based on normal self-correction
CN110503719A (en) * 2019-08-21 2019-11-26 山西新华电脑职业培训学校 A kind of VR game design method
CN111243063A (en) * 2020-01-12 2020-06-05 杭州电子科技大学 Secret propaganda education training system based on virtual reality and implementation method thereof
CN114390268A (en) * 2021-12-31 2022-04-22 中南建筑设计院股份有限公司 Method for making virtual reality panoramic video based on Rhino and Enscape
US11475652B2 (en) 2020-06-30 2022-10-18 Samsung Electronics Co., Ltd. Automatic representation toggling based on depth camera field of view
US12026901B2 (en) 2020-07-01 2024-07-02 Samsung Electronics Co., Ltd. Efficient encoding of depth data across devices

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016050283A1 (en) * 2014-09-30 2016-04-07 Telefonaktiebolaget L M Ericsson (Publ) Reduced bit rate immersive video
GB2534538A (en) * 2014-11-14 2016-08-03 Visr Vr Ltd Virtual reality headset

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016050283A1 (en) * 2014-09-30 2016-04-07 Telefonaktiebolaget L M Ericsson (Publ) Reduced bit rate immersive video
GB2534538A (en) * 2014-11-14 2016-08-03 Visr Vr Ltd Virtual reality headset

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SILVIO H RIZZI ET AL: "Automating the Extraction of 3D Models from Medical Images for Virtual Reality and Haptic Simulations", AUTOMATION SCIENCE AND ENGINEERING, 2007. CASE 2007. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PI, 22 September 2007 (2007-09-22) - 25 September 2007 (2007-09-25), pages 152 - 157, XP031141559, ISBN: 978-1-4244-1153-5 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117533A (en) * 2018-07-27 2019-01-01 上海宝冶集团有限公司 Electronic workshop fire-fighting method based on BIM combination VR
CN109671161A (en) * 2018-11-06 2019-04-23 天津大学 Immersion terra cotta warriors and horses burning makes process virtual experiencing system
CN109920044A (en) * 2019-02-27 2019-06-21 浙江科澜信息技术有限公司 A kind of three-dimensional scene construction method, device, equipment and medium
CN110136269A (en) * 2019-05-09 2019-08-16 安徽工程大学 Drop test visualization virtual reality system and method based on normal self-correction
CN110136269B (en) * 2019-05-09 2022-09-23 安徽工程大学 Fall test visual virtual reality system based on normal self-correction
CN110503719A (en) * 2019-08-21 2019-11-26 山西新华电脑职业培训学校 A kind of VR game design method
CN111243063A (en) * 2020-01-12 2020-06-05 杭州电子科技大学 Secret propaganda education training system based on virtual reality and implementation method thereof
CN111243063B (en) * 2020-01-12 2023-11-07 杭州电子科技大学 Secret propaganda education training system based on virtual reality and implementation method thereof
US11475652B2 (en) 2020-06-30 2022-10-18 Samsung Electronics Co., Ltd. Automatic representation toggling based on depth camera field of view
US12026901B2 (en) 2020-07-01 2024-07-02 Samsung Electronics Co., Ltd. Efficient encoding of depth data across devices
CN114390268A (en) * 2021-12-31 2022-04-22 中南建筑设计院股份有限公司 Method for making virtual reality panoramic video based on Rhino and Enscape
CN114390268B (en) * 2021-12-31 2023-08-11 中南建筑设计院股份有限公司 Virtual reality panoramic video manufacturing method based on Rhino and Enscape

Similar Documents

Publication Publication Date Title
WO2018106198A1 (en) Viewing three-dimensional models through mobile-assisted virtual reality (vr) glasses
US11094140B2 (en) Systems and methods for generating and intelligently distributing forms of extended reality content
Ijiri et al. Seamless integration of initial sketching and subsequent detail editing in flower modeling
WO2021021742A1 (en) Rapid design and visualization of three-dimensional designs with multi-user input
WO2022159283A1 (en) Generating augmented reality prerenderings using template images
Li et al. [Retracted] Multivisual Animation Character 3D Model Design Method Based on VR Technology
Saldana An integrated approach to the procedural modeling of ancient cities and buildings
RU2656584C1 (en) System of designing objects in virtual reality environment in real time
EP3776185B1 (en) Optimizing viewing assets
Ratican et al. A proposed meta-reality immersive development pipeline: Generative ai models and extended reality (xr) content for the metaverse
WO2017123163A1 (en) Improvements in or relating to the generation of three dimensional geometries of an object
US10460497B1 (en) Generating content using a virtual environment
Cabral et al. An experience using X3D for virtual cultural heritage
KR20090000729A (en) System and method for web based cyber model house
Dias et al. VIARmodes: visualization and iiteraction in mmersive virtual reality for architectural design process
Choi et al. ONESVIEW: an integrated system for one-stop virtual design review
CN116843816B (en) Three-dimensional graphic rendering display method and device for product display
Papaefthymiou et al. Gamified AR/VR character rendering and animation-enabling technologies
Syahputra et al. Virtual application of Darul Arif palace from Serdang sultanate using virtual reality
Gebert et al. Meta-model for VR-based design reviews
Shen et al. Collaborative design in 3D space
EP2717226A1 (en) Method for generating personalized product views
US20220254098A1 (en) Transparent, semi-transparent, and opaque dynamic 3d objects in design software
Jarvis et al. Evolution of VR Software and Hardware for Explosion and Fire Safety Assessment and Training
Kirn Building Information Modeling and Virtual Reality: Editing of IFC Elements in Virtual Reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16831774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16831774

Country of ref document: EP

Kind code of ref document: A1