AU2021240312A1 - Technology configured to enable shared experiences whereby multiple users engage with 2D and/or 3D architectural environments whilst in a common physical location

Technology configured to enable shared experiences whereby multiple users engage with 2D and/or 3D architectural environments whilst in a common physical location

Info

Publication number
AU2021240312A1
Authority
AU
Australia
Prior art keywords
module
model
devices
artefacts
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2021240312A
Inventor
Michael Shaw
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Envision Vr Pty Ltd
Original Assignee
Envision Vr Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Envision Vr Pty Ltd filed Critical Envision Vr Pty Ltd
Priority to AU2021240312A priority Critical patent/AU2021240312A1/en
Priority to AU2022241539A priority patent/AU2022241539A1/en
Priority to US17/936,985 priority patent/US20230104636A1/en
Publication of AU2021240312A1 publication Critical patent/AU2021240312A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/16Real estate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Abstract

The present invention relates, in various embodiments, to technology configured to facilitate client device engagement with computerised models. Some embodiments relate to technology configured to enable shared experiences whereby multiple users engage with 2D and/or 3D architectural environments whilst in a common physical location. A particular focus of the technology is enabling users of mobile and/or VR devices to perform virtual property inspections, including in a collaborative engagement scenario. While some embodiments will be described herein with particular reference to those applications, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.

Description

[FIG. 1 (sheet 1/2): 3D Content Engagement System 100, showing base model ingestion module 110, base model processing module 111, optimised model processing module 112, optimised model storage module 114, shared experience management module 115, optimised model delivery module 116, client device interaction modules 117, model alternative asset control module 118, audio data transport module 119, model generation systems 120, example guide devices 103 (e.g. mobile/VR), and example client devices 101 (mobile) and 102 (VR).]
TECHNOLOGY CONFIGURED TO ENABLE SHARED EXPERIENCES WHEREBY MULTIPLE USERS ENGAGE WITH 2D AND/OR 3D ARCHITECTURAL ENVIRONMENTS WHILST IN A COMMON PHYSICAL LOCATION
FIELD OF THE INVENTION
The present invention relates, in various embodiments, to technology configured to facilitate client device engagement with computerised models. Some embodiments relate to technology configured to enable shared experiences whereby multiple users engage with 2D and/or 3D architectural environments whilst in a common physical location. A particular focus of the technology is enabling users of mobile and/or VR devices to perform virtual property inspections, including in a collaborative engagement scenario. While some embodiments will be described herein with particular reference to those applications, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
BACKGROUND
[0001] Any discussion of the background art throughout the specification should in no way be considered as an admission that such art is widely known or forms part of common general knowledge in the field.
[0002] Property inspections have conventionally been performed via a physical "in person" approach. For example, people would gather at a property during a designated time for the purpose of inspection, during which time a real estate representative would usually be present.
[0003] In recent years, technology has enhanced the ability to perform inspections in a virtual context. For example, 360° cameras have allowed for virtual walkthroughs and the like. There remain various technical limitations and challenges associated with virtual inspections.
SUMMARY OF THE INVENTION
[0004] It is an object of the present invention to overcome or ameliorate at least one of the disadvantages of the prior art, or to provide a useful alternative.
[0005] Example embodiments are described below in the section entitled "claims".
[0006] Reference throughout this specification to "one embodiment", "some embodiments" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment", "in some embodiments" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.
[0007] As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
[0008] In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
[0009] As used herein, the term "exemplary" is used in the sense of providing examples, as opposed to indicating quality. That is, an "exemplary embodiment" is an embodiment provided as an example, as opposed to necessarily being an embodiment of exemplary quality.
[0010] The description below refers to "systems" and "modules". The term "module" refers to a software component that is logically separable (a computer program), or a hardware component. The module of the embodiment refers to not only a module in the computer program but also a module in a hardware configuration. The discussion of the embodiment also serves as the discussion of computer programs for causing the modules to function (including a program that causes a computer to execute each step, a program that causes the computer to function as means, and a program that causes the computer to implement each function), and as the discussion of a system and a method. For convenience of explanation, the phrases "stores information," "causes information to be stored," and other phrases equivalent thereto are used. If the embodiment is a computer program, these phrases are intended to express "causes a memory device to store information" or "controls a memory device to cause the memory device to store information." The modules may correspond to the functions in a one-to-one correspondence. In a software implementation, one module may form one program or multiple modules may form one program. One module may form multiple programs. Multiple modules may be executed by a single computer. A single module may be executed by multiple computers in a distributed environment or a parallel environment. One module may include another module. In the discussion that follows, the term "connection" refers to not only a physical connection but also a logical connection (such as an exchange of data, instructions, and data reference relationship). The term "predetermined" means that something is decided in advance of a process of interest. The term "predetermined" is thus intended to refer to something that is decided in advance of a process of interest in the embodiment. Even after a process in the embodiment has started, the term "predetermined" refers to something that is decided in advance of a process of interest depending on a condition or a status of the embodiment at the present point of time or depending on a condition or status heretofore continuing down to the present point of time. If "predetermined values" are plural, the predetermined values may be different from each other, or two or more of the predetermined values (including all the values) may be equal to each other. A statement that "if A, B is to be performed" is intended to mean "that it is determined whether something is A, and that if something is determined as A, an action B is to be carried out". The statement becomes meaningless if the determination as to whether something is A is not performed.
[0011] The term "system" refers to an arrangement where multiple computers, hardware configurations, and devices are interconnected via a communication network (including a one-to-one communication connection). The term "system", and the term "device", also refer to an arrangement that includes a single computer, a hardware configuration, and a device. The system does not include a social system that is a social "arrangement" formulated by humans.
[0012] At each process performed by a module, or at one of the processes performed by a module, information as a process target is read from a memory device, the information is then processed, and the process results are written onto the memory device. A description related to the reading of the information from the memory device prior to the process and the writing of the processed information onto the memory device subsequent to the process may be omitted as appropriate. The memory devices may include a hard disk, a random-access memory (RAM), an external storage medium, a memory device connected via a communication network, and a register within a CPU (Central Processing Unit).
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
[0014] FIG. 1 illustrates a system according to one embodiment.
[0015] FIG. 2 illustrates a method according to one embodiment.
DETAILED DESCRIPTION
[0016] The present invention relates, in various embodiments, to technology configured to facilitate client device engagement with three-dimensional architectural models. A particular focus of the technology is enabling users of mobile and/or VR devices to perform virtual property inspections, including in a collaborative engagement scenario. While some embodiments will be described herein with particular reference to those applications, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
Example System for Enabling Shared Experience Delivery
[0017] One embodiment provides a system which is configured to enable client device engagement with three-dimensional content, thereby to enable virtual property inspections. The system may include one or more server devices, which interact with a plurality of client devices. The client devices preferably include smartphone/tablet devices, and virtual reality devices (for example headsets).
[0018] In examples below, the three-dimensional content is described by reference to models which represent physical spaces, for example virtual spaces which represent real estate settings (such as houses, apartments, office space, industrial space, commercial space, and the like), but the technology may be applied to other settings also. For the examples described below, 3D architectural models are first generated, using one or more of the numerous platforms available for such purposes. These are then ingested and normalised into a standardised form which is optimised for rendering via the Unity engine (or a similar technology), such that the models can be experienced via consumer-level client devices (such as smartphones and VR headsets).
[0019] In the example of FIG. 1, a 3D Content Engagement System 100 is a server system which is configured to enable client device engagement with three-dimensional content, thereby to enable virtual property inspections. System 100 is configured to receive and process requests from a plurality of client devices. Example client devices are illustrated: a mobile device 101 and a VR device 102.
[0020] The requests received from client devices 101 and 102 include the following:
(i) A request to access a specified three-dimensional model, in which case the server system is configured to enable downloading to the relevant client device of computer executable code representative of the model, such that the model is renderable at the client device. For example, each client device executes a respective instance of a software application which is configured to enable selection of a model (e.g. by the name of a location), downloading of 3D data, and rendering of that data as a 3D virtual environment (for example using a Unity engine or the like). For the present purposes, that software application is referred to as a Virtual Inspection App, or VIA.
(ii) A request to operate as a guide device, in which case the server system is configured to enable hosting by the client device of a collaborative viewing event. In the illustrated embodiment, in the event that a client device 101 or 102 provides such a request, that device becomes a guide device 103. In some cases "ordinary" users of the VIA are prevented from providing requests to operate as guide devices, which may be achieved via various means.
(iii) A request to operate as a participant device, in which case the server system is configured to enable participation by the client device in the collaborative viewing event.
[0021] These and other requests are received by system 100 via client device interaction modules 117. In the present example, models downloaded to and rendered by client devices are also referred to as "optimised" models.
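Purely as an illustration of how these three request types might be routed server-side, the following Python sketch models the client device interaction modules; the class names, session structure and method signatures are assumptions made for this sketch, not the patented implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Session:
    """A collaborative viewing event: one guide, any number of participants."""
    model_id: str
    guide_device_id: str | None = None
    participant_ids: set[str] = field(default_factory=set)


class ClientInteractionModule:
    def __init__(self) -> None:
        self.sessions: dict[str, Session] = {}

    def handle_model_request(self, device_id: str, model_id: str) -> bytes:
        """(i) Return optimised model data for rendering at the client device."""
        return self._load_optimised_model(model_id)

    def handle_guide_request(self, device_id: str, model_id: str) -> Session:
        """(ii) Promote a permitted device to guide, hosting a viewing event."""
        session = self.sessions.setdefault(model_id, Session(model_id))
        session.guide_device_id = device_id
        return session

    def handle_participant_request(self, device_id: str, model_id: str) -> Session:
        """(iii) Join a device to a collaborative viewing event as a participant."""
        session = self.sessions.setdefault(model_id, Session(model_id))
        session.participant_ids.add(device_id)
        return session

    def _load_optimised_model(self, model_id: str) -> bytes:
        # Stand-in for the optimised model delivery path; a real system would
        # stream pre-processed model assets from the storage module.
        return b""
```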
[0022] In the present embodiment a user interacts with the VIA on their respective client device 101/102, selects a model (i.e. property/location) via interaction with client device interaction module 117, and is delivered data representative of the relevant 3D model assets via an optimised model delivery module 116 (regardless of whether the device has guide or participant status). In the case of a participant device, a user is able to join a scheduled shared experience (this may be scheduled by a central scheduling system, and/or a shared experience may be joined for a given model by a participant any time that model is rendered, even if there are no guide users present).
[0023] System 100 includes an engagement management module which is configured to facilitate concurrent multi-device engagement with a common three-dimensional model during a collaborative viewing event. In FIG. 1, this is illustrated as a Shared Experience Management Module 115. A "collaborative viewing event" is defined as a period of time during which at least one guide device and at least one participant device concurrently render the same model (from a common physical location - an "onsite virtual inspection", or from geographically distinct locations - a "remote virtual inspection").
[0024] Shared experience management module 115 is configured to provide at least two modes of engagement, including:
• A first mode of engagement, described herein as a "self-guided mode". Each participant device selecting the "self-guided mode" engagement is enabled to control navigation in a three-dimensional environment rendered from downloaded data representative of the common three-dimensional model. That is, for example, using VR hardware they are able to move around and inspect a virtual space defined by the three-dimensional environment. In a practical use case, this allows users to move around an inspection area at their own discretion.
• A second mode of engagement, described herein as an "agent-guided mode". Each participant device selecting the second mode of engagement is enabled to experience the three-dimensional environment based on navigation instructions inputted by a guide device hosting the collaborative viewing event. In a practical use case, this allows an agent or the like to show users around an inspection area, without the users needing to input navigation instructions.
[0025] The provision of both modes opens up virtual inspections to a wider range of users, including both users with familiarity/competency in navigating virtual spaces, and users without such familiarity/competency.
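By way of a hedged sketch, the mode distinction above can be expressed as a per-frame pose-source decision on each participant device; the EngagementMode enum and resolve_camera_pose function are illustrative names only, assumed for this example.

```python
from enum import Enum, auto


class EngagementMode(Enum):
    SELF_GUIDED = auto()   # participant controls their own navigation
    AGENT_GUIDED = auto()  # navigation follows the guide device's instructions


def resolve_camera_pose(mode, local_input_pose, guide_pose):
    """Choose the pose used to render the participant's view this frame.

    local_input_pose: pose derived from the participant's own controls/headset.
    guide_pose: latest navigation state broadcast by the guide device.
    """
    if mode is EngagementMode.SELF_GUIDED:
        return local_input_pose
    return guide_pose  # AGENT_GUIDED: view is driven by the guide's navigation
```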
[0026] In the illustrated embodiment, system 100 includes an audio data transport module 119 which is configured to enable audible communications during a collaborative viewing event between the host device and the (or each) participant device. This is in some embodiments one-way communication (guide to participant), and in other embodiments two-way communication. In some embodiments there is a control such that only one participant device is able to engage in audio communication with the guide device at any given time. It will be appreciated that in some cases the audio data transport module is a plugin provided via a 3D rendering platform, for example Unity.
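The single-speaker control mentioned above can be sketched as a simple "floor" token held by at most one participant at a time. This AudioFloorControl class is an assumed illustration of that control, not the module's actual design (which, as noted, may be a plugin of a rendering platform such as Unity).

```python
import threading


class AudioFloorControl:
    """Grants the audio floor to at most one participant device at a time."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._speaker: str | None = None

    def request_floor(self, participant_id: str) -> bool:
        """Grant the floor if it is free, or already held by this participant."""
        with self._lock:
            if self._speaker is None:
                self._speaker = participant_id
                return True
            return self._speaker == participant_id

    def release_floor(self, participant_id: str) -> None:
        """Release the floor so another participant may speak to the guide."""
        with self._lock:
            if self._speaker == participant_id:
                self._speaker = None
```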
[0027] As noted above, the three-dimensional content is described by reference to models which represent physical spaces, for example virtual spaces which represent real estate settings (such as houses, apartments, office space, industrial space, commercial space, and the like), but the technology may be applied to other settings also. For the examples described below, 3D architectural models are first generated, using one or more of the numerous platforms available for such purposes. These are then ingested and normalised into a standardised form which is optimised for rendering via a real-time rendering engine (for example the Unity engine, the Unreal engine, or a similar technology), such that the models can be experienced via consumer-level client devices (such as smartphones and VR headsets).
[0028] In the context of FIG. 1, architectural models are generated via a plurality of model generation systems 120. For example, these may include computers executing software packages such as 3DSMAX, MAYA or VRAY. This results in generation of project files for "base models". These "base models" typically require a significant degree of processing power to view, and as such are not suitable for consumption via smartphones and the like. System 100 is configured to convert these base models into "optimised models" which are optimised for delivery over a conventional Internet (including mobile Internet) connection, and for rendering at client devices such as client devices 101 and 102. For example, in the present embodiments, the Unity engine is used.
[0029] System 100 includes a base model ingestion module 110, which is configured to receive project files for a base model from one of systems 120. A base model processing module 111 receives the project files and performs a series of steps to convert the base model files, which may be in a variety of formats, into a predefined format with set attributes optimised for rendering via the Unity engine at mobile devices.
[0030] The base model processing module 111 exports a set of project files for an optimised model. This is optionally subjected to further processing via module 112 (for example in response to feedback from a commercial client), and when finalised is added to an optimised model storage module 114. Data in module 114 is then, in response to instructions triggered by modules 117, delivered to client devices via an optimised model delivery module 116. Manual and automated processes may be applied to correct errors that may occur when rendering, including but not limited to: creating new vertex arrays from existing geometry; automatically detecting and correcting vertex normal errors; packing mesh UVW maps into atlases with optimal texel density; and rendering image data directly onto the model.
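As an example of one such automated correction, a sketch of area-weighted vertex normal recomputation follows, using NumPy. This is an assumed approach to "detecting and correcting vertex normal errors", given for illustration; it is not a disclosure of the actual optimisation pipeline.

```python
import numpy as np


def recompute_vertex_normals(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.

    Returns unit vertex normals computed from triangle geometry, replacing any
    corrupt or missing normals in the ingested base model.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Cross product of two edges gives an area-weighted face normal.
    face_normals = np.cross(v1 - v0, v2 - v0)
    normals = np.zeros_like(vertices)
    for i in range(3):  # accumulate each face's normal onto its three vertices
        np.add.at(normals, faces[:, i], face_normals)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-12)  # normalise, avoiding divide-by-zero
```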
[0031] In the present embodiment, each optimised model is associated with two or more sets of model assets. Each set of model assets includes a respective collection of three dimensional objects renderable at predefined locations in the three-dimensional environment rendered from downloaded model data. For example, this may include furniture and the like. Each device is configured to enable swapping between viewing of the two or more sets of model assets, thereby to enable viewing of a space with different styling. In this embodiment, system 100 includes a model alternate asset control module 118, which is configured to enable, during a collaborative viewing event, a control instruction provided via the host device to trigger swapping between viewing of the two or more sets of model assets at one or more participant devices. This allows the host to control the styling of a space during collaborative viewing, which may be practically relevant in terms of performing a guided inspection.
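The host-triggered swap might be carried by a small control message, as in this assumed sketch; the message shape and the renderer's set_active_asset_set call are illustrative inventions for the example, not part of the specification.

```python
import json


def make_asset_swap_message(model_id: str, asset_set_id: str) -> str:
    """Serialise a host-issued instruction selecting one of the model's asset sets."""
    return json.dumps({"type": "asset_swap", "model": model_id, "asset_set": asset_set_id})


def apply_asset_swap(message: str, renderer) -> None:
    """On a participant device: switch the rendered styling to the requested set."""
    payload = json.loads(message)
    if payload.get("type") == "asset_swap":
        renderer.set_active_asset_set(payload["asset_set"])  # assumed renderer API
```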
Shared Virtual Experience Management in Common Physical Location
[0032] As noted above, collaborative viewing events provided by system 100 may include inspections performed by multiple parties at a common physical location - an "onsite virtual inspection", or from geographically distinct locations - a "remote virtual inspection".
[0033] In the context of onsite virtual inspections, there is a technical/practical problem which arises where multiple persons use VR headsets to navigate a common space at the same time, given the risk that they collide. A technical solution exists, but this requires use of hardware which implements antilatency technology, adding to costs of implementation. Described below is an alternate technical solution, which can be implemented using conventional VR headsets.
[0034] In overview, in some embodiments shared experience management module 115 is configured to enable a plurality of client devices, in the form of Virtual Reality headset devices such as device 102, to concurrently experience a common virtual environment whilst physically located in a physical environment. Each device performs a process thereby to determine a current position and orientation of the device based on comparison of artefacts (for example landmarks identified by AI processing) in image data collected by the device and artefacts in the model data, and the shared experience management module is configured to receive current position and orientation data for a first one of the devices and to provide to one or more other client devices data thereby to enable rendering of an avatar representing current location of a user of the first one of the devices.
[0035] FIG. 2 illustrates a method 200 according to one embodiment, being a method for enabling virtual reality engagement in a physical space via a device having a display screen and a camera module (for example a conventional VR headset, such as an Oculus Quest or HTC Vive).
[0036] Functional block 201 represents a process including accessing data representative of a three-dimensional model. For example, using the context of FIG. 1 as an example, a user executes the VIA, and interacts with a selection interface to select a particular location for viewing (this may alternately be automatically selected). Model data is then either downloaded from server system 100, or located in local memory.
[0037] Functional block 202 represents rendering the three-dimensional model via the display screen, thereby to enable a user to navigate a rendered three-dimensional virtual environment (e.g. using the VIA).
[0038] Functional block 203 represents a process including determining that the user is part of an onsite shared experience. In practice, an onsite shared experience is where: (i) a user is located in a particular physical space; (ii) the rendered three-dimensional environment is representative of that particular physical space; and (iii) there are or may be one or more further client devices also in that particular physical space rendering the same or a corresponding virtual three-dimensional environment. From a computing perspective a process may be configured to determine whether the user is part of an onsite shared experience based on any one or more of the following techniques (an illustrative sketch of the geolocation check follows the list):
• Specific manual configuration by the user or an administrator to place the client device in a specified onsite shared experience.
• Geolocation methods, for example determining that a current physical location is within a threshold range of a defined model location.
• Recognition of physical artefacts via image processing of data collected by a VR headset camera (as discussed further below).
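For the geolocation technique above, a minimal sketch using the haversine great-circle distance is given below; the threshold value and function name are assumptions for illustration.

```python
import math


def is_onsite(device_lat, device_lon, model_lat, model_lon, threshold_m=75.0):
    """True if the device is within threshold_m metres of the defined model location."""
    r = 6_371_000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(device_lat), math.radians(model_lat)
    dphi = math.radians(model_lat - device_lat)
    dlmb = math.radians(model_lon - device_lon)
    # Haversine formula for the great-circle distance between two coordinates.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= threshold_m
```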
[0039] Functional block 204 represents a process including processing data received via the camera module, thereby to identify one or more artefacts. These artefacts may include, for example, landmarks which are identifiable via AI-based image processing. Image processes may include edge detection, but in a preferred embodiment AI image recognition is used. As will be appreciated by those skilled in the art, AI technologies perform processes which include assessing pixel data to predict what object those pixels might represent, and such processing prefers artefacts that are likely to be recognisable from many angles and lighting settings, for example static objects with strong contrast and silhouettes. As such, preferred artefacts include door frames, room corners, a tree, and the like.
[0040] Functional block 205 represents a process including registering one or more of the identified artefacts against virtual artefacts in the three-dimensional environment, thereby to define a position and orientation of the virtual environment relative to those artefacts, and also thereby to determine a current position and orientation of the device in a known physical environment. This is preferably achieved by having pre-determined virtual anchor points that correspond to specific or non-specific landmarks, such that when a landmark is identified the virtual environment is anchored to that location.
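A much-simplified registration sketch follows: given two physical landmarks recognised by the camera and their matching pre-determined virtual anchor points on the floor plane, it computes the yaw and translation that anchor the virtual environment to the physical space. A real system would use more correspondences and full 3D poses; the two-point, level-floor alignment here is assumed purely for brevity.

```python
import math


def anchor_transform(phys_a, phys_b, virt_a, virt_b):
    """Each argument is an (x, z) floor-plane position; returns (yaw, tx, tz)
    mapping virtual coordinates onto physical coordinates."""
    yaw_phys = math.atan2(phys_b[1] - phys_a[1], phys_b[0] - phys_a[0])
    yaw_virt = math.atan2(virt_b[1] - virt_a[1], virt_b[0] - virt_a[0])
    yaw = yaw_phys - yaw_virt  # rotation aligning the virtual axis with the physical
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate the first virtual anchor, then translate it onto its physical match.
    rx = c * virt_a[0] - s * virt_a[1]
    rz = s * virt_a[0] + c * virt_a[1]
    return yaw, phys_a[0] - rx, phys_a[1] - rz
```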
[0041] In a preferred embodiment, the first device to load the virtual environment is used thereby to define the position and orientation of the virtual environment relative to identified physical artefacts. Devices in the same physical location which subsequently load the model receive data representative of the position and orientation of the virtual environment relative to identified physical artefacts, and based on that data are able to correctly align rendering of the virtual environment based on their own detection of the physical artefacts. As such, all of the devices render the virtual environment in a common position and orientation relative to the physical environment.
[0042] Functional block 206 represents a process including uploading the data representative of current position and orientation of the device to a server, such that the server is configured to provide one or more further devices in the same physical space with data representative of the position and orientation of the device or a user of the device. The device performing method 200 also receives this data regarding other devices. This enables rendering in the virtual environment of virtual avatars representative of each user, thereby to facilitate collision avoidance.
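A minimal sketch of this pose-sharing step: each device uploads its pose, and a server-side relay returns the poses of all other co-located devices so that their avatars can be rendered. The Pose fields and PoseRelay class are assumptions made for this example.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    device_id: str
    position: tuple[float, float, float]  # metres, in the shared reference frame
    yaw: float                            # radians about the vertical axis


class PoseRelay:
    """Server-side store that fans poses out to co-located devices."""

    def __init__(self) -> None:
        self._poses: dict[str, Pose] = {}

    def upload(self, pose: Pose) -> list[Pose]:
        """Record this device's pose; return every other device's pose,
        for rendering their avatars and facilitating collision avoidance."""
        self._poses[pose.device_id] = pose
        return [p for d, p in self._poses.items() if d != pose.device_id]
```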
[0043] It will be appreciated that current position and orientation are tracked over time. This may include any one or more of the following techniques (a simplified fusion sketch follows the list):
• Ongoing processing of image data to identify artefacts, and registration of those artefacts.
• Recording of position and orientation between registration events using an IMU and/or other sensors in the VR headset.
• Tracking of registered objects via image processing techniques.
• Approximated tracking based on the device's movement.
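A simplified sketch of how these techniques might be combined: dead-reckon with IMU-derived motion between artefact registrations, and let each registration event override accumulated drift. Real headset tracking is considerably more sophisticated; this shows only the correction structure, with all names assumed.

```python
class FusedTracker:
    """Tracks pose between artefact-registration events."""

    def __init__(self, position, yaw):
        self.position = list(position)  # (x, y, z) in the shared reference frame
        self.yaw = yaw                  # radians about the vertical axis

    def on_imu_step(self, velocity, yaw_rate, dt):
        """Dead-reckon between registrations using IMU-derived motion."""
        self.position = [p + v * dt for p, v in zip(self.position, velocity)]
        self.yaw += yaw_rate * dt

    def on_registration(self, position, yaw):
        """An artefact registration event overrides accumulated drift."""
        self.position, self.yaw = list(position), yaw
```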
[0044] This allows for the conduct of a multi-user onsite virtual inspection, overcoming technical problems associated with collision management. In a practical sense, this is a particularly valuable outcome. For example, it allows a real estate agent to show a physical space to a group of people, and then guide them through the space via VR technology, allowing visualisation of alternate styling options (e.g. paint, furniture, and the like), doing so in a collaborative shared environment.
[0045] In a further embodiment, a similar co-location technique is implemented thereby to enable shared experiences using augmented reality technology. For example, a plurality of users in a common physical space wear augmented reality glasses, and these glasses use a physical artefact detection process thereby to provide an objective common reference frame. This is used to enable rendering of AR content in a common position and orientation relative to the physical space, substantially in the same manner as described above (although with AR rather than VR). In one such embodiment the AR content may be 2D content projected on a planar floor (for example a 2D floorplan). This allows multiple AR users to have a shared experience with a common AR projected floorplan.
[0046] In such embodiments, a model is in the form of a two-dimensional model including a floorplan (although a three-dimensional floorplan or other model may be used), and various users' devices are augmented reality devices configured to display the floorplan in a common location and orientation relative to a physical space, such that the device and the one or more further devices display the two-dimensional model in a substantially aligned manner.
[0047] As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
[0048] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[0049] A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
[0050] Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
[0051] Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages, a scripting language such as Perl, VBS or similar languages, and/or functional languages such as Lisp and ML and logic-oriented languages such as Prolog. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
[0052] Aspects of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0053] These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
[0054] The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0055] The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0056] The computer program product may comprise all the respective features enabling the implementation of the methodology described herein, and which, when loaded in a computer system, is able to carry out the methods. Computer program, software program, program, or software, in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
[0057] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0058] The corresponding structures, materials, acts, and equivalents of all means or step plus function elements, if any, in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
[0059] Various aspects of the present disclosure may be embodied as a program, software, or computer instructions embodied in a computer or machine usable or readable medium, which causes the computer or machine to perform the steps of the method when executed on the computer, processor, and/or machine. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform various functionalities and methods described in the present disclosure is also provided.
[0060] The system and method of the present disclosure may be implemented and run on a general-purpose computer or special-purpose computer system. The terms "computer system" and "computer network" as may be used in the present application may include a variety of combinations of fixed and/or portable computer hardware, software, peripherals, and storage devices. The computer system may include a plurality of individual components that are networked or otherwise linked to perform collaboratively, or may include one or more stand-alone components. The hardware and software components of the computer system of the present application may include and may be included within fixed and portable devices such as desktop, laptop, and/or server. A module may be a component of a device, software, program, or system that implements some "functionality", which can be embodied as software, hardware, firmware, electronic circuitry, or etc.
[0061] Although specific embodiments of the present invention have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.
[0062] It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, FIG., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.
[0063] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
[0064] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
[0065] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
[0066] Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Coupled" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.

Claims (4)

1. A method for enabling virtual reality or augmented reality engagement in a physical space via a device having a display and a camera module, the method including:
accessing data representative of a two or three-dimensional model;
rendering the model via the display, thereby to enable a user to navigate a rendered virtual environment;
processing data received via the camera module, thereby to identify one or more artefacts in the physical space;
registering one or more of the identified artefacts against virtual artefacts in the virtual environment, thereby to determine a current position and orientation of the device in the physical environment; and
uploading the data representative of current position and orientation of the device to a server, such that the server is configured to provide one or more further devices in the same physical space with a rendering of the same model in the same position and orientation relative to the physical space.
2. A method according to claim 1 wherein the model is a two-dimensional model including a floorplan, and wherein the device and further devices are augmented reality devices configured to display the floorplan in a common location and orientation relative to a physical space, such that the device and the one or more further devices display the two-dimensional model in a substantially aligned manner.
3. A method for enabling virtual reality engagement in a physical space via a device having a display screen and a camera module, the method including:
accessing data representative of a three-dimensional model;
rendering the three-dimensional model via the display screen, thereby to enable a user to navigate a rendered three-dimensional virtual environment;
processing data received via the camera module, thereby to identify one or more artefacts in the physical space;
registering one or more of the identified artefacts against virtual artefacts in the three-dimensional environment, thereby to determine a current position and orientation of the device in the physical environment; and
uploading the data representative of current position and orientation of the device to a server, such that the server is configured to provide one or more further devices in the same physical space with data representative of the position and orientation of the device or a user of the device.
4. A method according to claim 3 wherein the server provides data to the device and to the further devices such that each device is able to identify a current location of each other device.
AU2021240312A 2021-10-01 2021-10-01 Technology configured to enable shared experiences whereby multiple users engage with 2d and/or 3d architectural environments whilst in a common physical location Pending AU2021240312A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2021240312A AU2021240312A1 (en) 2021-10-01 2021-10-01 Technology configured to enable shared experiences whereby multiple users engage with 2d and/or 3d architectural environments whilst in a common physical location
AU2022241539A AU2022241539A1 (en) 2021-10-01 2022-09-29 Technology configured to facilitate client device engagement with three-dimensional architectural models
US17/936,985 US20230104636A1 (en) 2021-10-01 2022-09-30 Technology configured to facilitate client device engagement with three-dimensional architectural models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021240312A AU2021240312A1 (en) 2021-10-01 2021-10-01 Technology configured to enable shared experiences whereby multiple users engage with 2d and/or 3d architectural environments whilst in a common physical location

Related Child Applications (1)

Application Number Title Priority Date Filing Date
AU2022241539A Division AU2022241539A1 (en) 2021-10-01 2022-09-29 Technology configured to facilitate client device engagement with three-dimensional architectural models

Publications (1)

Publication Number Publication Date
AU2021240312A1 true AU2021240312A1 (en) 2023-04-20

Family

ID=85983127

Family Applications (2)

Application Number Title Priority Date Filing Date
AU2021240312A Pending AU2021240312A1 (en) 2021-10-01 2021-10-01 Technology configured to enable shared experiences whereby multiple users engage with 2d and/or 3d architectural environments whilst in a common physical location
AU2022241539A Pending AU2022241539A1 (en) 2021-10-01 2022-09-29 Technology configured to facilitate client device engagement with three-dimensional architectural models

Family Applications After (1)

Application Number Title Priority Date Filing Date
AU2022241539A Pending AU2022241539A1 (en) 2021-10-01 2022-09-29 Technology configured to facilitate client device engagement with three-dimensional architectural models

Country Status (1)

Country Link
AU (2) AU2021240312A1 (en)

Also Published As

Publication number Publication date
AU2022241539A1 (en) 2023-04-20
