GB2591103A - Rendering of spatial environments - Google Patents

Rendering of spatial environments

Info

Publication number
GB2591103A
Authority
GB
United Kingdom
Prior art keywords
rendering
virtual
computer system
environment
virtual object
Prior art date
Legal status
Pending
Application number
GB2000562.5A
Other versions
GB202000562D0 (en)
Inventor
Pena-Rios Anasol
Leon-Garza Hugo
Hagras Hani
Conway Anthony
Owusu Gilbert
Current Assignee
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date
Filing date
Publication date
Application filed by British Telecommunications PLC filed Critical British Telecommunications PLC
Priority to GB2000562.5A
Publication of GB202000562D0
Publication of GB2591103A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/003: Navigation within 3D models or images
    • G06T 19/006: Mixed reality
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A digital representation of a spatial environment is navigated by a user in which a portion of the digital representation is rendered by a rendering computer system 202 for visualisation by the user. The method comprises accessing a digital model 208 of the environment including a specification 210 of each of a plurality of virtual objects 212 for rendering in the environment, each specification identifying at least a virtual object and a location of the virtual object in the environment. Each virtual object is defined by a virtual object data structure storing information for rendering a visualisation of the virtual object. A current location 204 of a user in the virtual environment is determined and a subset of virtual objects is selected for rendering by the rendering computer system for visualisation by the user. The subset of virtual objects is selected based on a measure of a performance 206 of the rendering computer system. The environment may be a real-world or virtual-world space and the rendering system may be part of an augmented or virtual reality system. The virtual objects may be ranked according to their characteristics and selected when meeting a threshold ranking.

Description

Rendering of Spatial Environments
The present invention relates to performance improvements for virtual and augmented reality applications.
The provision of, and navigation through, digital representations of spatial environments are increasingly common with growing industrial application. For example, virtual reality (VR) and augmented reality (AR) applications provide for the rendering of digital representations of spatial environments, whether a real-world space (augmented reality), a virtual-world space (virtual reality) or, indeed, some combination of the two. Such environments find application in maintenance tasks where existing real-world items are overlaid with virtual content such as virtual objects, resources, media and the like. Such virtual content can be for passive consumption by a user, such as by reading, hearing or watching, and/or for active engagement by the user, such as operating, handling, moving and the like.
Devices involved in the rendering of audio/visual representations of such digital representations of spatial environments can vary in nature and capability. Such devices can include smartphones and other small, pervasive devices. Additionally or alternatively, such devices can include dedicated apparatus such as computer systems with VR or AR headsets. Other devices are also known, with a wide range of resources and capabilities such as memory, processing and network connectivity including bandwidth. In some cases, rendering or display devices are low-resource devices having constrained memory, processor and/or network bandwidth capabilities.
Accordingly, it is beneficial to provide for improved configuration and arrangement of content for rendering as part of a digital representation of a spatial environment for a user.
According to a first aspect of the present invention, there is provided a computer implemented method of navigating a digital representation of a spatial environment by a user in which a portion of the digital representation is rendered by a rendering computer system for visualisation by the user, the method comprising: accessing a digital model of the environment including a specification of each of a plurality of virtual objects for rendering in the environment, each specification identifying at least a virtual object and a location of the virtual object in the environment, wherein each virtual object is defined by a virtual object data structure storing information for rendering a visualisation of a virtual object; determining a current location in the virtual environment for a user navigating the environment; selecting a subset of virtual objects for rendering by the rendering computer system for visualisation by the user, wherein the subset of virtual objects is selected based on a measure of a performance of the rendering computer system.
Preferably, the spatial environment is one of: a real-world space and the rendering computer system is part of an augmented reality system; and a virtual-world space and the rendering computer system is part of a virtual reality system.
Preferably, selection of the subset of virtual objects includes the steps of: identifying virtual objects within a predetermined proximity to the current location in the spatial environment; ranking the identified virtual objects according to characteristics of the virtual objects; and selecting virtual objects meeting a threshold ranking as the subset of virtual objects, the threshold ranking being determined based on the measure of performance.
Preferably, the characteristics of a virtual object includes one or more of: a measure of a complexity of the virtual object; a count of a number of vertices of the object; a location of the object; a proximity of the object to the current location; and a size of the object as a measure of an amount of data required to store the object.
Preferably, the measure of performance of the rendering computer system includes one or more of: a frame rate of the rendering computer system rendering the digital representation and virtual objects; a data transfer rate for communicating data to the rendering computer system; an available memory of the rendering computer system; and a processor capability of the rendering computer system.
Preferably, the threshold ranking is determined based on a size of the subset of virtual objects for rendering, the size being determined based on the measure of performance of the rendering computer system, the method further comprising: applying at least one fuzzy logic classifier to each identified virtual object; and accessing a set of rules for each classified virtual object to determine if the identified virtual object is to be rendered for the determined subset size.
Preferably, the virtual object data structure for a virtual object further includes information defining functional characteristics of the virtual object; and wherein a rendered virtual object is interactive for the user based on the functional characteristics.
According to a second aspect of the present invention, there is provided a computer system including a processor and memory storing computer program code for performing the steps of the method set out above.
According to a third aspect of the present invention, there is provided a computer program element comprising computer program code to, when loaded into a computer system and executed thereon, cause the computer to perform the steps of the method set out above.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which: Figure 1 is a block diagram of a computer system suitable for the operation of embodiments of the present invention; Figure 2 is a component diagram of an arrangement for navigating a digital representation of a spatial environment by a user according to an embodiment of the present invention; and Figure 3 is a flowchart of a method for navigating a digital representation of a spatial environment by a user according to an embodiment of the present invention.
Figure 1 is a block diagram of a computer system suitable for the operation of embodiments of the present invention. A central processor unit (CPU) 102 is communicatively connected to a storage 104 and an input/output (I/O) interface 106 via a data bus 108. The storage 104 can be any read/write storage device such as a random-access memory (RAM) or a non-volatile storage device. An example of a non-volatile storage device includes a disk or tape storage device. The I/O interface 106 is an interface to devices for the input or output of data, or for both input and output of data. Examples of I/O devices connectable to I/O interface 106 include a keyboard, a mouse, a display (such as a monitor) and a network connection.
A digital representation of a spatial environment is provided according to which the environment is modelled using, for example, virtual objects such as Building Information Model (BIM) objects as are known in the art. Such virtual objects can be provided as digital representations of objects and/or classes of object including a visualisation suitable for rendering for a user. Thus, virtual objects can include a parametric digital representation of an entity such as an item, article, facility or building, and can include both visualisation information and optionally behavioural information indicating a manner or nature of behaviour of objects when rendered. For example, virtual objects can include, inter alia: media content; data structures having a visual representation for rendering; and data for display. In a simple example, a virtual object can be a simple image, piece of text or the like. Other objects can include three-dimensional models of objects having edges, vertices and relationships therebetween. More complex objects can be functional, interactive or dynamic. For example, objects with behaviours can include behaviour information defining, inter alia: how an object reacts when interacted with; how objects interact between themselves; physical characteristics of an object that are not necessarily part of its visualisation such as mass, friction, rigidity and the like; functions of an object such as behaviours, adaptations, processing or other functions; and other behaviours as will be apparent to those skilled in the art. Virtual objects are stored as data structures and thus have a size as an amount of data required to store and transfer the object.
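By way of illustration only, the following Python sketch shows one possible shape for such a virtual object data structure, combining visualisation information (vertices and edges) with optional behavioural information and physical characteristics. The class name VirtualObject and every field name are assumptions made for this sketch; they are not drawn from any BIM schema and are not prescribed by the embodiments described.
```python
# Illustrative sketch only: a possible virtual object data structure with
# visualisation, behavioural and physical information. All names are assumed.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class VirtualObject:
    object_id: str
    # Visualisation information: a simple mesh as vertices and edges.
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)
    # Behavioural information: named behaviours mapped to callables that act
    # on the object when it is interacted with.
    behaviours: Dict[str, Callable[["VirtualObject"], None]] = field(default_factory=dict)
    # Physical characteristics that are not part of the visualisation.
    mass_kg: float = 0.0
    rigidity: float = 1.0

    @property
    def vertex_count(self) -> int:
        return len(self.vertices)

    @property
    def size_bytes(self) -> int:
        # Rough proxy for the amount of data required to store the object:
        # three 8-byte floats per vertex plus two 4-byte indices per edge.
        return 24 * len(self.vertices) + 8 * len(self.edges)
```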
Figure 2 is a component diagram of an arrangement for navigating a digital representation of a spatial environment by a user according to an embodiment of the present invention. A rendering computer system 202 is provided as a hardware, software, firmware or combination component such as a physical or virtualised computer system for rendering a digital representation of a spatial environment for visualisation by a user. Such rendered environments can additionally be provided for interaction by the user, and the rendering system 202 can be a resource-constrained device such as a portable, handheld or low-resource pervasive device. Alternatively, the rendering system can be a dedicated VR or AR system including a headset or the like. Alternative suitable rendering systems, including systems with a range of capabilities between these examples, can also be used.
The rendering system 202 renders a visualisation of a portion of the environment for the user based on a digital representation of the environment. The digital representation is provided by way of a digital model 208 of the environment, such as one or more data structures specifying the environment including the virtual objects included therein. Each virtual object has an associated location and visualisation information, for example. The digital model 208 includes a specification 210 of virtual objects 212 therein, each virtual object 212 being stored as one or more data structures such as BIM objects. In one embodiment, the virtual objects 212 are stored in a data store, such as network-connected storage or a cloud store, for access by the rendering system 202 for rendering a portion of the digital representation of the spatial environment based on the digital model 208.
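Continuing the same illustrative sketch, the digital model 208 with its specification 210 of located objects might be held in structures like those below. The names DigitalModel, ObjectSpecification and objects_near, and the simple Euclidean proximity test, are assumptions for this example rather than features recited in the embodiments.
```python
# Illustrative sketch only: a digital model holding object specifications
# (identifier plus location) and the virtual object data structures themselves.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Location = Tuple[float, float, float]


@dataclass
class ObjectSpecification:
    object_id: str      # identifies a virtual object, e.g. a key into a cloud store
    location: Location  # location of the object in the spatial environment


@dataclass
class DigitalModel:
    specifications: List[ObjectSpecification]
    # Virtual object data structures (see the VirtualObject sketch above),
    # for example fetched on demand from network-connected storage.
    objects: Dict[str, "VirtualObject"]

    def objects_near(self, location: Location, radius: float) -> List[ObjectSpecification]:
        """Return the specifications within a given proximity to a location."""
        def dist(a: Location, b: Location) -> float:
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return [s for s in self.specifications if dist(s.location, location) <= radius]
```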
The rendered environment generated by the rendering system 202 is accessed by a user such as through a smartphone or VR or AR headset. The user navigates the rendered environment such as by controlling a movement through the environment. Such movement can correspond to movement in the physical environment, such as may be the case in an AR system. Thus, the user has a location in the environment at a point in time represented in or for the rendering system 202 as a current location 204. The current location 204 thus constitutes the location of the user in the spatial environment as rendered.
The rendering system 202 further has associated performance characteristics based on performance measurements 206. The performance measurements 206 serve to indicate a current state of performance of the rendering system 202 during the rendering of the environment and can include, for example, inter alia: a frame rate of the rendering computer system 202 rendering the digital representation and virtual objects; a data transfer rate for communicating data to the rendering computer system 202, such as virtual object data 212 from a cloud storage; an available memory or other storage of the rendering computer system; and a processor capability of the rendering computer system, such as a processor utilisation, speed, throttling or other capability or characteristic.
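Such measurements could, for instance, be gathered into a simple structure and collapsed into a single score, as in the hedged sketch below. The field names, the reference values (60 fps, 100 Mbps, 2048 MB) and the equal-weight averaging are assumptions for illustration only; the embodiments do not prescribe any particular scoring.
```python
# Illustrative sketch only: performance measurements 206 for the rendering
# system and a crude score in [0, 1] derived from them. Reference values assumed.
from dataclasses import dataclass


@dataclass
class PerformanceMeasurements:
    frame_rate_fps: float         # frame rate while rendering the representation
    transfer_rate_mbps: float     # data transfer rate to the rendering system
    available_memory_mb: float    # free memory on the rendering system
    processor_utilisation: float  # 0.0 (idle) to 1.0 (fully utilised)


def performance_score(m: PerformanceMeasurements) -> float:
    """Collapse the measurements into a single score in [0, 1]."""
    frame = min(m.frame_rate_fps / 60.0, 1.0)
    transfer = min(m.transfer_rate_mbps / 100.0, 1.0)
    memory = min(m.available_memory_mb / 2048.0, 1.0)
    cpu = 1.0 - m.processor_utilisation
    return (frame + transfer + memory + cpu) / 4.0
```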
In use, the rendering system 202 is operable to render a portion of the digital representation of the spatial environment according to the data model, including a subset of the virtual objects 212 in the rendered portion. The subset of virtual objects 212 is selected by an object selector 200 component, provided as a hardware, software, firmware or combination component in communication with the rendering system 202 and operable to access the digital model 208 so as to be informed of the object specification 210 and the virtual object information 212. In particular, the object selector 200 receives, accesses or otherwise determines the user's current location 204 in a currently rendered portion of the environment and selects a subset of virtual objects 212 for rendering by the rendering system 202 based on one or more performance measures 206 of the rendering system. In this way, the performance of the rendering system 202 directly influences the selected subset of virtual objects 212 for inclusion in the rendered environment by the rendering system 202. Thus, resource constraints of the rendering system 202, such as resource constraints arising due to an overly large number of virtual objects 212, a high complexity of virtual objects 212, or any other cause of reduced performance by the rendering system 202, can cause the object selector 200 to rationalise the subset of objects 212 for rendering by the rendering system 202.
In one embodiment, the selection of the subset of virtual objects 212 is based on identifying objects 212 within a predetermined proximity to the current location 204 of the user in the spatial environment. For example, objects 212 within the predetermined proximity can be ranked according to characteristics of the objects 212 such as, inter alia: a measure of a complexity of virtual objects; a count of a number of vertices of objects; a location of objects in the spatial environment; a proximity of objects to the current location 204; and a size of objects as a measure of an amount of data required to store and/or communicate the object. Thus, once ranked, the subset of virtual objects for rendering can be selected by the object selector 200 based on, for example, a threshold ranking determined based on the measure of performance. For example, a threshold corresponding to a relatively high performance measure 206 can lead to a selection of a larger subset of objects and/or a subset of objects having a larger size and/or complexity. In contrast, a threshold corresponding to a relatively low performance measure 206 can lead to a selection of a smaller subset of objects and/or a subset of objects having a smaller size and/or complexity.
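One way the object selector 200 could realise such a selection is sketched below, building on the earlier illustrative classes: objects within a radius of the current location 204 are ranked by a cost heuristic combining proximity, vertex count and data size, and the performance score determines how far down the ranked list the selection extends. The heuristic weights, the default radius and the mapping from score to subset size are assumptions, not features disclosed in the embodiments.
```python
# Illustrative sketch only: rank nearby objects by an assumed cost heuristic
# and keep a performance-dependent fraction of the ranked list.
from typing import List


def rank_key(spec, obj, current_location) -> float:
    """Lower is cheaper to render: combines proximity, vertex count and size."""
    d = sum((a - b) ** 2 for a, b in zip(spec.location, current_location)) ** 0.5
    return d + 0.001 * obj.vertex_count + 1e-6 * obj.size_bytes


def select_subset(model, current_location, performance, radius: float = 10.0) -> List[str]:
    nearby = model.objects_near(current_location, radius)
    ranked = sorted(nearby, key=lambda s: rank_key(s, model.objects[s.object_id], current_location))
    # Higher performance score -> a larger portion of the ranked list is rendered.
    score = performance_score(performance)  # from the earlier sketch
    keep = max(1, round(score * len(ranked))) if ranked else 0
    return [s.object_id for s in ranked[:keep]]
```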
In one embodiment, the threshold ranking is determined based on a predetermined size of the subset of virtual objects for rendering, the size being determined based on the measure of performance 206. For example, the subset can have a size according to a predefined enumeration of sizes such as small, medium and large sized subsets (each including an appropriate number of virtual objects 212 according to the particular arrangement). Thus, the predetermined threshold ranking can correspond to one of the predetermined sizes of the subset such that objects 212 are selected from the ranked list based on the predetermined size. For example, more objects can be selected where a "large" size is used. The selection of the predetermined size for the rendering system 202 can be based on a fuzzy logic classification of each virtual object identified to be within the predetermined proximity to the current location 204 of the user. The fuzzy logic classifier is applied to each identified virtual object to classify the object. Subsequently, a set of rules can be accessed for each classified virtual object to determine if the identified virtual object is to be rendered for the determined subset size. Thus, objects classified as "large", for example, can be excluded from subset size definitions of "small".
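The fuzzy classification step might look like the self-contained sketch below, in which an object receives fuzzy memberships in small, medium and large complexity sets over its vertex count, and a rule table decides whether an object of the dominant class is rendered for a given subset size. The membership functions, the 1,000 and 10,000 vertex breakpoints and the rules themselves are illustrative assumptions rather than the classifier or rule set of the embodiments.
```python
# Illustrative sketch only: fuzzy complexity classification of a virtual object
# and rules deciding whether it is rendered for a given subset size.
def fuzzy_complexity(vertex_count: int) -> dict:
    """Simple triangular-style memberships over vertex count (assumed shapes)."""
    small = max(0.0, 1.0 - vertex_count / 1000.0)
    large = min(1.0, max(0.0, (vertex_count - 1000.0) / 9000.0))
    medium = max(0.0, 1.0 - small - large)
    return {"small": small, "medium": medium, "large": large}


# Rules: for each subset size, the complexity classes that may be rendered.
RULES = {
    "small":  {"small"},
    "medium": {"small", "medium"},
    "large":  {"small", "medium", "large"},
}


def should_render(vertex_count: int, subset_size: str) -> bool:
    memberships = fuzzy_complexity(vertex_count)
    dominant = max(memberships, key=memberships.get)  # defuzzify by maximum membership
    return dominant in RULES[subset_size]


# A 12,000-vertex object is classified "large": it is excluded from a "small"
# subset but included when the rendering system supports a "large" subset.
assert should_render(12_000, "small") is False
assert should_render(12_000, "large") is True
```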
Figure 3 is a flowchart of a method for navigating a digital representation of a spatial environment by a user according to an embodiment of the present invention. Initially, at step 302, the method accesses a digital model of the spatial environment including a specification 210 of virtual objects 212. At step 304, the method determines a current location of the user in the rendered environment and at step 306 the method selects a subset of virtual objects 212 for rendering by the rendering system 202 based on a measure of performance of the rendering system 202.
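Tying the earlier sketches together, a hypothetical end-to-end run of the method of Figure 3 could proceed as below; every object name, coordinate and measurement value is invented for the example and carries no significance beyond illustration.
```python
# Illustrative sketch only: access the digital model (step 302), determine the
# user's current location (step 304) and select a subset of virtual objects
# based on the measured performance of the rendering system (step 306).
model = DigitalModel(
    specifications=[
        ObjectSpecification("valve", (1.0, 0.0, 0.0)),
        ObjectSpecification("pump", (2.0, 1.0, 0.0)),
        ObjectSpecification("turbine", (50.0, 0.0, 0.0)),  # outside the proximity radius
    ],
    objects={
        "valve": VirtualObject("valve", vertices=[(0.0, 0.0, 0.0)] * 200),
        "pump": VirtualObject("pump", vertices=[(0.0, 0.0, 0.0)] * 5000),
        "turbine": VirtualObject("turbine", vertices=[(0.0, 0.0, 0.0)] * 20000),
    },
)                                                              # step 302
current_location = (0.0, 0.0, 0.0)                             # step 304
performance = PerformanceMeasurements(30.0, 20.0, 512.0, 0.5)  # measured at runtime
subset = select_subset(model, current_location, performance)   # step 306
print(subset)  # ['valve'] with these illustrative values
```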
Insofar as embodiments of the invention described are implementable, at least in part, using a software-controlled programmable processing device, such as a microprocessor, digital signal processor or other processing device, data processing apparatus or system, it will be appreciated that a computer program for configuring a programmable device, apparatus or system to implement the foregoing described methods is envisaged as an aspect of the present invention. The computer program may be embodied as source code or undergo compilation for implementation on a processing device, apparatus or system or may be embodied as object code, for example.
Suitably, the computer program is stored on a carrier medium in machine or device readable form, for example in solid-state memory, magnetic memory such as disk or tape, optically or magneto-optically readable memory such as compact disk or digital versatile disk etc., and the processing device utilises the program or a part thereof to configure it for operation. The computer program may be supplied from a remote source embodied in a communications medium such as an electronic signal, radio frequency carrier wave or optical carrier wave. Such carrier media are also envisaged as aspects of the present invention.
It will be understood by those skilled in the art that, although the present invention has been described in relation to the above described example embodiments, the invention is not limited thereto and that there are many possible variations and modifications which fall within the scope of the invention.
The scope of the present invention includes any novel features or combination of features disclosed herein. The applicant hereby gives notice that new claims may be formulated to such features or combination of features during prosecution of this application or of any such further applications derived therefrom. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims.

Claims (9)

  1. A computer implemented method of navigating a digital representation of a spatial environment by a user in which a portion of the digital representation is rendered by a rendering computer system for visualisation by the user, the method comprising: accessing a digital model of the environment including a specification of each of a plurality of virtual objects for rendering in the environment, each specification identifying at least a virtual object and a location of the virtual object in the environment, wherein each virtual object is defined by a virtual object data structure storing information for rendering a visualisation of a virtual object; determining a current location in the virtual environment for a user navigating the environment; selecting a subset of virtual objects for rendering by the rendering computer system for visualisation by the user, wherein the subset of virtual objects is selected based on a measure of a performance of the rendering computer system.
  2. The method of claim 1 wherein the spatial environment is one of: a real-world space and the rendering computer system is part of an augmented reality system; and a virtual-world space and the rendering computer system is part of a virtual reality system.
  3. The method of any preceding claim wherein selection of the subset of virtual objects includes the steps of: identifying virtual objects within a predetermined proximity to the current location in the spatial environment; ranking the identified virtual objects according to characteristics of the virtual objects; and selecting virtual objects meeting a threshold ranking as the subset of virtual objects, the threshold ranking being determined based on the measure of performance.
  4. The method of claim 3 wherein the characteristics of a virtual object includes one or more of: a measure of a complexity of the virtual object; a count of a number of vertices of the object; a location of the object; a proximity of the object to the current location; and a size of the object as a measure of an amount of data required to store the object.
  5. The method of any preceding claim wherein the measure of performance of the rendering computer system includes one or more of: a frame rate of the rendering computer system rendering the digital representation and virtual objects; a data transfer rate for communicating data to the rendering computer system; an available memory of the rendering computer system; and a processor capability of the rendering computer system.
  6. The method of any of claims 3 to 5 wherein the threshold ranking is determined based on a size of the subset of virtual objects for rendering, the size being determined based on the measure of performance of the rendering computer system, the method further comprising: applying at least one fuzzy logic classifier to each identified virtual object; and accessing a set of rules for each classified virtual object to determine if the identified virtual object is to be rendered for the determined subset size.
  7. The method of any preceding claim wherein the virtual object data structure for a virtual object further includes information defining functional characteristics of the virtual object; and wherein a rendered virtual object is interactive for the user based on the functional characteristics.
  8. A computer system including a processor and memory storing computer program code for performing the steps of the method of any preceding claim.
  9. A computer program element comprising computer program code to, when loaded into a computer system and executed thereon, cause the computer to perform the steps of a method as claimed in any of claims 1 to 7.
GB2000562.5A 2020-01-15 2020-01-15 Rendering of spatial environments Pending GB2591103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2000562.5A GB2591103A (en) 2020-01-15 2020-01-15 Rendering of spatial environments

Publications (2)

Publication Number Publication Date
GB202000562D0 GB202000562D0 (en) 2020-02-26
GB2591103A 2021-07-21

Family

ID=69626270

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2000562.5A Pending GB2591103A (en) 2020-01-15 2020-01-15 Rendering of spatial environments

Country Status (1)

Country Link
GB (1) GB2591103A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6232974B1 (en) * 1997-07-30 2001-05-15 Microsoft Corporation Decision-theoretic regulation for allocating computational resources among components of multimedia content to improve fidelity
US6933946B1 (en) * 2003-05-07 2005-08-23 At&T Corp. Method for out-of core rendering of large 3D models
EP3115970A1 (en) * 2015-07-07 2017-01-11 The Boeing Company Product visualization system
US20170330371A1 (en) * 2014-12-23 2017-11-16 Intel Corporation Facilitating culling of composite objects in graphics processing units when such objects produce no visible change in graphics images
EP3388929A1 (en) * 2017-04-14 2018-10-17 Facebook, Inc. Discovering augmented reality elements in a camera viewfinder display
