WO2022061296A1 - Remote collaboration platform for interactions in virtual environments


Info

Publication number
WO2022061296A1
Authority
WO
WIPO (PCT)
Application number
PCT/US2021/051330
Other languages
French (fr)
Inventor
M. Luisa G. CALDAS
Original Assignee
The Regents Of The University Of California
Application filed by The Regents of the University of California
Publication of WO2022061296A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/024 Multi-user, collaborative environment
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2004 Aligning objects, relative positioning of parts

Definitions

  • FIG. 23 shows an entire building model that results from a modeling tool such as that discussed above. Users can create real-time plans and sections of the building while their avatars are inside it. This may happen in the x, y, and z directions, using sliders to move the locations of the sections. Sliding the y-plane slider may result in a picture as shown in FIG. 24; sliding the x-plane slider may result in a picture as shown in FIG. 25.
  • Menu items available on the secondary menu, such as a Tab menu, may allow for interactive top-view plans of the building, as shown in FIG. 26. The view may be changed as desired, such as to add the roof (not shown).
  • FIG. 27 shows a 2D rendering of a view from a student’s desk, as an example.
  • The system may also provide a user interface to allow the user to customize audio zones.
  • Audio zones allow different users to chat across different rooms without audio interference from outside the zone. These are shown by the boxes in FIG. 28; a sketch of this gating logic follows this list.
  • The system can provide advanced functionalities customized for each client organization.
  • Proposed functionalities include lecture classes/seminars, virtual exhibitions and lectures, organization of informal gatherings and formal events, creativity platforms, a library, archives and collections, and faculty offices, labs, and special locations.
  • Lecture classes/seminars are included in the directory and listed on the walls of the lobby, including the same information as the studio courses such as course number, instructor, link to online materials, possibly using an out-of-the-box application, and posting of materials in students’ desks, if pertinent for the course format.
  • Virtual, interactive exhibitions can easily be organized using the existing Virtual Studio functionalities. Drawings, photos, videos and models can be displayed on designated areas of the building, and users can interact with them as described above. However, these exhibition spaces also offer transformational value. Exhibits cannot usually be touched in actual exhibitions; entering a 3D model is a transformational activity, impossible in real life. Communication with other visitors is also not usual, with the experience being more individualized than collective.
  • Synchronous lectures can be organized using the embedded video functionalities, or out-of-the-box technologies such as Google Slides. Asynchronous lectures are based on a menu of past lectures available. Synchronous and asynchronous public feedback is possible using existing functionalities.
  • Virtual buildings such as Virtual Bauer Wurster will include links to the webpages of their libraries, in this case the CED library and the Environmental Design Archives. There may also be a function to allow students to ‘publish’ their essays to the library.
  • The building may include faculty offices, labs, and special locations.
  • Virtual Bauer Wurster can include links to 360° footage of CED labs, like the XR Lab, PrintFarm and CBE facilities, faculty offices, possibly on a volunteer basis, and other special locations, like particular rooms or spaces.
  • The Virtual Studio platform may have additional features added to virtual spaces such as Virtual Bauer Wurster. For example, gamification may be added to Virtual Bauer Wurster, with games such as building blocks, virtual plants to care for, virtual scavenger hunts, etc.
  • The system may be implemented on one or more computing devices, discussed variously above, connected together as shown in FIG. 29. A user interacts with the system through a user’s computing device 102, having a user interface, one or more processors such as 104, a memory 106, a display device 110, and a network connection to the system that produces the system models of physical spaces discussed above.
  • The user uploads the information needed to build an avatar, and the system has one or more rendering processors to execute code to represent the user avatar as image data within the image data that forms a visual model of a space.
  • One or more processors render the 3D data as 2D data on the user computing device 102.
  • The image data being rendered includes a visual model of the space that has at least one link to an interactive object, such as the project models or an out-of-the-box, third-party application such as 120.
  • Interactions with these objects may include picking up and returning models, writing or enlarging notes, etc.
  • The system may include one or more processors to execute code to cause the processors to allow the user to log into the system, typically through a web browser or other typical network interface.
  • The one or more processors may reside in one or more servers.
  • The one or more servers may execute code to cause the servers to allow the user to access the library 114, an authorization service 112, and a scheduling tool 116.
  • The system also has a storage 118, represented here as a single storage, though it will more than likely be distributed across multiple servers and multiple devices across the system.
  • Such a system provides for remote interactions, and the remote interactions can become more spontaneous and more ‘real,’ allowing users to interact and work together in ways that previously had been limited to being physically present with each other, as well as providing interactions that would not be possible in a physical environment.
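By way of illustration only, the audio-zone gating mentioned in the list above can reduce to a simple zone-membership test. The following is a minimal sketch, assuming axis-aligned box zones like those pictured in FIG. 28; all names are hypothetical and not part of the disclosed platform:

```python
from dataclasses import dataclass

@dataclass
class AudioZone:
    """An axis-aligned box on the floor plan, like the boxes in FIG. 28."""
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def can_hear(zones, speaker, listener) -> bool:
    """Voice chat passes only between users standing in the same zone, so
    conversations in one room do not interfere with another."""
    for zone in zones:
        if zone.contains(*speaker) and zone.contains(*listener):
            return True
    return False

zones = [AudioZone("studio", 0, 10, 0, 10), AudioZone("lobby", 20, 40, 0, 15)]
print(can_hear(zones, (2, 3), (8, 9)))   # True: both users in the studio zone
print(can_hear(zones, (2, 3), (25, 5)))  # False: different zones, no bleed
```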


Abstract

A system includes a network connection to a user computing device and at least one processor to: produce image data of at least one of two or more three-dimensional models of a physical space, wherein the two or more models are nested models within the physical space; provide user input regions within the at least one three-dimensional model; and activate at least one interactive object or external link based upon a user input from the user computing device in one of the user input regions.

Description

REMOTE COLLABORATION PLATFORM FOR INTERACTIONS IN VIRTUAL ENVIRONMENTS
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to and the benefit of US Provisional Application No. 63/080,863, filed September 21, 2020, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] This disclosure relates to remote collaboration platforms, more particularly to virtual environments.
BACKGROUND
[0003] Innovative models in support of remote collaboration and distance learning/work are in high demand. Globalization and the transition to a digital culture have significantly increased collaboration among geographically distant groups. However, over the last decades, in-person contact has still been considered fundamental for certain modes of interaction and collaboration. The ongoing health pandemic has imposed remote work and learning as the new operational mode for large sectors of society. It is predicted that many remote work/learning collaborative models now being developed will stay in place after the current public health crisis has been overcome. This change will also have a positive impact on climate change, resulting in less need for travel, both for daily commutes and long-distance traveling, in the wake of the cultural and organizational shifts currently taking place.
[0004] Several challenges have been identified as a consequence of remote working and learning. These include loss of community and cohort building, loss of informal interactions, and loss of lateral learning from coworkers and peers. Informal interactions involve the spontaneity of finding and connecting with individuals or small groups identified as available to interact.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIGs. 1A-1C show a virtual building in a geographic location.
[0006] FIG. 2 shows an embodiment of a building location.
[0007] FIG. 3 shows an embodiment of a lobby having interactive objects such as a directory and a video wall.
[0008] FIGs. 4-5 show embodiments of directories resulting from selecting a floor from an interactive directory.
[0009] FIG. 6 shows an embodiment of avatars with an embodiment of a lobby.
[0010] FIG. 7 shows an embodiment of a video wall.
[0011] FIG. 8 shows a scene of avatars watching a video on a video wall.
[0012] FIG. 9 shows an embodiment of an exhibition space in a lobby.
[0013] FIG. 10 shows embodiments of a user interface that allows a user to create avatars.
[0014] FIG. 11 shows an embodiment of a building floorplan that shows who is in the building and interfaces to allow a current user to interact with others in the building.
[0015] FIG. 12 shows an embodiment of a studio space.
[0016] FIG. 13 shows an embodiment of a studio space with avatars.
[0017] FIG. 14 shows a view of a desk and a menu associated with the desk.
[0018] FIG. 15 shows a view of a user operating on a ghost mode.
[0019] FIG. 16 shows a view of an embodiment of a desk in a studio space.
[0020] FIG. 17 shows a view of an embodiment of an interactive model.
[0021] FIGs. 18-20 show embodiments of scenes of an interactive model with avatars.
[0022] FIG. 21 shows an embodiment of a model menu.
[0023] FIG. 22 shows an embodiment of a view of a model in a solar study.
[0024] FIGs. 23-26 show views of an embodiment of a clipping plane used to segment a model for interaction.
[0025] FIG. 27 shows an embodiment of a two-dimensional visualization of two-dimensional media from inside a three-dimensional model.
[0026] FIG. 28 shows examples of audio zones within a building.
[0027] FIG. 29 shows a system diagram of an embodiment of a collaboration platform.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0028] The embodiments here address the challenges of the loss of community, cohort building, and informal interactions. They address these challenges by creating a three-dimensional environment that recreates or reinvents the actual physical workspace of the organization/institution, representing users in space through avatars that interact with each other, providing an increased sense of embodiment to the experience, and creating a unique and innovative multimodal communications environment. The embodiments also simultaneously act as a gateway access point for multiple platforms and websites that users currently adopt in their daily work or communications environment.
[0029] The embodiments here and all of their variations will be referred to as Virtual Studio. Virtual Studio can apply to remote collaboration, training, learning, event organization, and remote work in general. The embodiments are unique in that the components can be replicated for many different domains with minimal alterations. Virtual Studio provides a fully three-dimensional environment, implemented in 2D mode for general access through a common computer across all operating systems. A full VR (virtual reality) immersive version is available for users who have VR headsets.
[0030] The discussion below focuses on a particular implementation of the embodiments, based upon a virtual building. The building represented here exists in reality on the University of California-Berkeley’s campus, Bauer Wurster Hall. However, this provides only an example of the capabilities and capacities of the system and methods used here. No limitation to any particular building or other component of a virtual environment is intended, nor should any be implied.
[0031] Virtual Bauer Wurster, as an implementation of the embodiments here, provides an interactive, informal, collaborative learning platform tailored to the needs of the specific community that uses Bauer Wurster Hall, the College of Environmental Design at UC Berkeley. It provides a context in which one can see how a familiar environment can allow the users to experience a better remote learning and working situation. One should note that any building of any configuration could be used; this particular building merely serves as an example for ease of discussion.
[0032] The embodiments offer a wide range of possibilities not currently available on other platforms. These possibilities include users having the capability to post their work in both 2D and 3D formats and interact with each other through their avatars. Students can walk around the building together, discuss work on display, and navigate inside each other’s 3D architecture models. Communication modes include voice, synchronous and asynchronous messaging, video calls, sticky notes, laser pointers, annotations, and drawings. The embodiments additionally provide a centralized gateway to out-of-the-box technologies commonly used by this community, such as Zoom®, Slack®, Miro®, and 3D viewers as examples. New links can be easily provided. Again, while the current example relies upon the Virtual Bauer Wurster model, it can apply to other departments, other schools, and other institutions and organizations.
[0033] The embodiments, such as Virtual Bauer Wurster, can be used in both substitutional and transformational ways. Substitutional ways replace some of the in-person experiences lost in remote working/distance learning situations. These include interfaces for those who are working from home to be able to post their work and discuss it with colleagues. The platform allows for informal interactions. Avatars signal that individuals are “in” the building, meaning that they are available for connecting, meeting, and so on. The location indicates the nature of potential interaction as it would in the building. This allows for the openness of discovery that is needed for cohort building. The nature of the interaction is informal, not curated, and one-on-one, for those who are actively looking to engage each other. Events, exhibitions, and other virtual gatherings can be hosted to support formal and informal gathering.
[0034] In the course of a normal semester, students engage in both formal and informal processes of learning with their peers. In the exchanges, common questions, shared objectives, and collective ways of observing emerge that are then elaborated upon both collaboratively and individually. Virtual Bauer Wurster provides a platform to restore these crucial interactions.
[0035] Virtual Bauer Wurster also provides transformational uses in which users can add new modes of interaction and communication only possible in virtual environments. Virtual Bauer Wurster increases the space of the building for such things as designs, exhibitions, and pin-ups. It provides limitless space for students to share access to their work and allows the College to store a virtual depository of the work produced throughout time. Exhibitions can be staged that can be virtually shared as well as archived.
[0036] Virtual Bauer Wurster will also provide new recruiting opportunities for students who cannot come in person. Apart from giving tours to prospective students, it will easily allow friends and family to visit Virtual Bauer Wurster Hall. In a future in which the campus may need to open and close, where students are in-person and remote, the virtual setting of Bauer Wurster Hall supports the practices shared in the spaces of the building. Whether remote or in person, users participate in and contribute to the culture.
[0037] Due to the highly adaptive and modular nature of the platform, and the embodiments’ link to out-of-the-box technologies that different departments, institutions and organizations already use for remote collaboration, there is high potential for application to others of these departments, institutions and organizations.
[0038] Given the recent move towards immersive technologies such as virtual reality, augmented reality and mixed reality, the embodiments of Virtual Studio/Virtual Bauer Wurster may act as a prototype for a future remote learning/working environment that is totally immersive, both for educational and corporate environments. As used here, the term “augmented reality” means an environment in which computer generated images are superimposed on a user’s actual view. The term “mixed reality” means a computer generated environment in which virtual elements and physical elements are combined. The term “virtual reality” usually means the computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors. For purposes of this discussion, the system will be referred to as a virtual reality system and may include augmented, mixed, and what would be considered true virtual reality.
[0039] A Virtual Studio is composed of a series of virtual models, models of physical volumes, nested at different scales, including Building, Floors, Studios, Desks, Project Models and Avatars as examples. “Nested at different scales” means that at least some of the different models reside inside the other models. For example, using the examples above, the Desk and Avatar models reside on a Floor, Floors reside within Buildings, and Buildings reside within City Models. Other models may reside in the Building and/or Floor models including Studios. Studios may nest at the same level within the Floors, or within a Building. These are just examples and in no way intended to limit the scope of the claims.
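Purely by way of illustration, the nesting described in paragraph [0039] can be modeled as a simple containment hierarchy. The following is a minimal sketch; the class and field names are assumptions for illustration, not part of the disclosed platform:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A virtual model that may contain other models nested inside it."""
    name: str
    children: list["Model"] = field(default_factory=list)

    def add(self, child: "Model") -> "Model":
        """Nest a child model inside this one and return it."""
        self.children.append(child)
        return child

    def walk(self, depth: int = 0):
        """Yield every model in the hierarchy with its nesting depth."""
        yield self, depth
        for child in self.children:
            yield from child.walk(depth + 1)

# Build the example nesting from the text: desks and avatars reside on a
# floor, floors within a building, and the building within a city model.
city = Model("City")
building = city.add(Model("Building"))
floor = building.add(Model("Floor"))
studio = floor.add(Model("Studio"))
studio.add(Model("Desk"))
floor.add(Model("Avatar"))

for model, depth in city.walk():
    print("  " * depth + model.name)
```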
[0040] Building Models provide the access gate to the Virtual Studio experience. Buildings, represented by Building Models, can represent their physical counterparts, such as educational, commercial, office, retail, event space, etc., to create substitutional experiences and enhance physical embodiment and community building, such as shown in FIGs. 1A-1C. The Bauer Wurster building 10 shown in FIGs. 1A-1C is a virtual counterpart to the actual building, and these figures show the virtual building 10 being located in the same geographic location.
[0041] Alternatively, buildings can represent idealized physical spaces in lieu of those that house the institution/organization, with the goals of testing new organizational and spatial formats, enhancing aesthetics, or increasing currently available physical space, among others.
[0042] The Building Model is the main virtual gateway, shown as the building 10 entrance in FIG. 2, guiding users to specific points of the experience, incorporating virtual maps and directories, shown in FIG. 3, and including virtual exhibition spaces to present the institution to visitors and community members.
[0043] The Building Model is also the location where several gathering spaces are located, to promote informal interactions, community building and lateral learning. FIG. 3 shows an embodiment of a directory taking the form of floating system models such as 11. These particular system models represent floors which in turn contain studios. The directory scene also includes exhibition halls such as 16 and a video wall 14. Video walls may appear in several places in the virtual environment, allowing students and other users to upload videos for other users to view.
[0044] FIGs. 4 and 5 demonstrate the interactive nature of the directory, as well as the levels of display resolution being rendered on a user’s device. When a user selects a floor 12 in the directory 11 of FIG. 3, a list of courses 17 appears. When a user selects a particular course 19 from the list, a directory for that course appears. While this example lies in the educational space, one can easily see that this may apply to other buildings as well. The floor directory could show different companies on a floor, or different floors of the same company. The course directory could become a list of contacts for each company, or a list of employees who are present in the virtual space at that time.
[0045] FIG. 6 shows avatars such as 20 and 22 interacting with each other and the building directory. If a user selects the video wall, a video launches for the user to watch in the lobby with the other users represented by the avatars. FIG. 7 shows an example of a video wall showing a video. FIG. 8 shows the avatars watching a video together. Returning to FIG. 3, one of the users could select an exhibition hall, which the other avatars can then visit. The exhibition hall can display an instantly curated collection using a link to upload images, as shown in FIG. 9.
[0046] The avatars are the means through which the users interact with the virtual environment. Each Avatar Model provides the virtual representation of each user, such as workers, students, and visitors, the agents that circulate in the 3D model and communicate with each other, engage in discussions related to published content, and promote social and intellectual interaction.
[0047] To start using Virtual Studio/Virtual Bauer Wurster, users must create their own avatar. The user logs into the website of the platform. The platform system may comprise multiple processors operating in multiple different physical locations and computing devices, such as servers, etc. The system receives user input from the user’s computing device and produces three-dimensional models with which the user can interact through their avatar. The three-dimensional models will include the user’s avatar in the model rendering. The models will also have user input regions, as will be discussed in more detail below, that allow the user to select interactive objects, such as models, notes, whiteboards, and access out-of-the-box, third-party applications.
[0048] FIG. 10 shows two different types of avatars that the user could select and customize to develop their avatars. The users will also customize their laser pointer that they can use to point out things and interact with other users, as will be discussed in more detail later. Once the user has created their avatar, the user can cause the avatar to move around, either by using the user’s mouse, or keys such as the arrow keys, or the WASD keys. In addition, keys may be assigned to allow the avatar to speak, open a secondary menu, open or close a chat box, enter ghost mode, or open a circle menu, all of which will be described in more detail further.
[0049] When in the studio, once the avatar enters, a pause may occur as the various components download. In order to preserve bandwidth, the models and components only appear when the user avatars are standing close to the desks. Interactive objects may be rendered on the desks in highlight, indicating that they are ‘live’ and interactive. Similarly, some set icons indicate certain tasks, such as a cloud to show that the user can upload content. Sticky notes on the desks, for example, are interactive and users can leave notes by clicking on the sticky note icon on a desk. These guidelines will become clearer as the discussion progresses.
[0050] Users can add a short bio to their avatars to let others know where their main areas of interest/activity lie. They can also attach keywords to their profile so that others can look for community members interested in similar topics.
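As an illustration of the proximity rule described in paragraph [0049], desk contents might be fetched and released with a simple distance check. A minimal sketch follows; the threshold, coordinate scheme, and all names are assumptions:

```python
import math

NEAR_DISTANCE = 5.0  # assumed threshold, in scene units

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Planar distance between two points on the floor."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_desk_loading(avatar_pos, desks, loaded):
    """Fetch and render a desk's contents only while an avatar is close,
    releasing them again when the avatar walks away, to save bandwidth."""
    for desk_id, desk_pos in desks.items():
        near = distance(avatar_pos, desk_pos) <= NEAR_DISTANCE
        if near and desk_id not in loaded:
            loaded.add(desk_id)      # download models, images, sticky notes
        elif not near and desk_id in loaded:
            loaded.discard(desk_id)  # unload to free memory and bandwidth

desks = {"desk-50": (0.0, 0.0), "desk-51": (40.0, 12.0)}
loaded: set[str] = set()
update_desk_loading((1.0, 2.0), desks, loaded)
print(loaded)  # only the nearby desk has been loaded
```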
[0051] Avatars can have emojis and/or images attached to them that may signal that they are interested in chatting with others, such as a smiley face, or that they may need help with some topic, like an exclamation mark. Special users, like course GSIs (Graduate Student Instructors), may have a signal saying ‘Hi, I’m here’ to let students know that they are available to answer questions, for example.
[0052] As mentioned above, the system may have secondary menus activatable by a preset key. In one embodiment, the preset key may be the Tab key, so these menus here may be referred to as Tab menus. This only represents an example and is in no way intended to limit the scope of the claims. FIG. 11 shows an example of a secondary menu that shows a floor plan of the building 30. Below the floor plan, a list shows who is in the building, and a list of options such as chat 32 that allow the users to chat with other users, and a list of announcements.
[0053] FIG. 12 shows a view of a studio with individual desks, mentioned above. As an avatar approaches a desk or a set of desks, the image will populate with the objects on that desk. FIG. 13 shows an avatar 52 approaching a set of desks, such as 50. Each desk may have a set of interactive objects as shown in FIG. 14. The desk 50 has several interactive objects such as a phone 62, a pad 64, a project model 66, and an image 68. If the user viewing the desk activates the secondary menu, a menu such as the circle menu appears. The circle menu in this example has several icons that change the view. For example, if the user selects icon
70 that has an image of a ghost, the user’s avatar moves into ‘ghost mode.’
[0054] In ghost mode, 3D avatars can fly over the building and cross walls and other obstacles, instead of moving only by walking on the floor and climbing stairs. FIG. 15 shows a ‘flying’ avatar 72. Ghost mode facilitates navigation of early stage and incomplete models, and may include new views and perspectives of any model, including a bird’s-eye view.
[0055] FIG. 16 shows a view returning to a desk view showing the interactive model 66. Users can leave sticky notes by clicking on the sticky note 74 to allow the user to leave a note for the desk owner. As shown in FIG. 17, the desk includes a cloud icon 76. This allows many functionalities of the desk to become active. These functionalities may include uploading 3D models for others to view, enter, and navigate. Two-dimensional images such as 68 from FIG. 14 may also be present.
[0056] Users may upload their own work, download their own work, and download other people’s work. When the users log in, they can see what other users are working on by looking at other users’ desks. In more complex embodiments, users can upload their own work, such as 3D models and OBJ files, download their own work, and display it on the desk or pinup board. They can also see what other users are working on with real-time updates and multiplayer modes. The system may present the user with a desk menu to upload 3D models and 2D images, including titles and captions. The users may select the various third-party applications that go with each of the interactive objects on the desk.
[0057] When the system receives an upload, it receives a link to the uploaded content and links the user’s id in the system with the link to the content in a database. On download, the user’s click on the download button for their own content causes the system to fetch the user id from the database and get the link to the content. When the user clicks on another user’s content from that user’s desk, the link found relates to the user id associated with the desk, and then the content is downloaded.
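A minimal sketch of the linking logic in paragraph [0057] follows, with an in-memory dictionary standing in for the database; all identifiers and the example link are hypothetical:

```python
# Dictionaries stand in for the database described above.
content_links: dict[str, str] = {}  # user id -> link to that user's content
desk_owners: dict[str, str] = {}    # desk id -> user id of the desk owner

def record_upload(user_id: str, desk_id: str, link: str) -> None:
    """On upload, associate the user's id with the link to the content."""
    content_links[user_id] = link
    desk_owners[desk_id] = user_id

def download_own(user_id: str) -> str:
    """Download of the user's own content: fetch the link stored under
    the user's id."""
    return content_links[user_id]

def download_from_desk(desk_id: str) -> str:
    """Download of another user's content: the link found relates to the
    user id associated with the desk."""
    return content_links[desk_owners[desk_id]]

record_upload("user-42", "desk-50", "https://example.com/models/tower.obj")
print(download_from_desk("desk-50"))
```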
[0058] For a higher resolution view of 2D content, or to view related content that is not currently posted in Virtual Bauer Wurster, users can access postings in any other platforms/out-of-the-box technologies. Out-of-the-box technologies open in a web browser window inside Virtual Bauer Wurster, to prevent the need to navigate to a different window on the computer. These applications may include a collaborative whiteboard environment such as Miro®, or canvas applications such as Padlet®, which allow fast and easy posting of shared content. The link may be triggered by clicking on a pile of paper 68 on each user’s desk, as shown in FIG. 15. Other out-of-the-box, or third-party, applications may also be accessible. When the user clicks or otherwise selects the region in the image where those assets exist, the system responds by launching a new window within the virtual environment with access to that application.
[0059] Interaction with 3D project models may occur in different levels of resolution. The term “project model” means any work product of the desk owner or other members of the environment, to differentiate it from system models, discussed above. While the project models used here represent architectural models, they could also comprise other work products, such as 3D renderings of circuit boards, art objects, mechanical components, construction components, just as examples.
[0060] Project models may be uploaded to the user’s desk in a low-resolution format, to avoid overloading the overall model size. Detailed project models can require large amounts of real-time data transfer. When a user wants to inspect/experience a project model 66 they see on a desk, they right click on it. A medium-resolution format of the project model appears in a new scene, where the view of the Floor model has now been transformed into a background image. This transformation from the actual 3D space of the building to a screenshot of the same space, taken from the user’s perspective to keep the visual illusion of spatial permanence, frees up crucial memory and data bandwidth to allow for the uploading and subsequent navigation of the 3D model.
[0061] When a user wants to inspect/experience a project model inside, they left click on it. This will trigger a new scene, where the high resolution project model will be placed on the ground, and the user avatars can walk around and inside it, individually or in groups as shown. When a user enters the project model, this may trigger a new scene where only the space where the user enters will be displayed and rendered in detailed high-resolution format. Due to the large data formats, the detailed high-resolution format is only made possible because each section of the project model is rendered in different scenes that are uploaded sequentially each time the user crosses a spatial threshold, called a portal. The project model owners are responsible for sectioning the model and identifying/rendering the areas that they consider worthwhile. Not all areas of the 3D project model building/airplane/car/industrial design, etc., may be worth visiting in high resolution.
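For illustration, the portal mechanism of paragraph [0061] can be sketched as a lookup from portal to scene section, loaded on first crossing. The scene paths and names here are hypothetical:

```python
# Each section of the project model is a separate scene, keyed by the
# portal (spatial threshold) that leads into it. Paths are placeholders.
portal_scenes = {
    "lobby->atrium": "scenes/atrium_high_res",
    "atrium->gallery": "scenes/gallery_high_res",
}
loaded_scenes: set[str] = set()

def load_scene(scene: str) -> None:
    """Stand-in for the engine's scene loader."""
    print(f"loading {scene}")

def on_portal_crossed(portal_id: str) -> None:
    """Load the next high-resolution section only when the avatar crosses
    the portal leading into it, so sections upload sequentially."""
    scene = portal_scenes.get(portal_id)
    if scene is not None and scene not in loaded_scenes:
        loaded_scenes.add(scene)
        load_scene(scene)

on_portal_crossed("lobby->atrium")  # avatar walks through the first portal
on_portal_crossed("lobby->atrium")  # already loaded; nothing happens
```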
[0062] When a user’s avatar selects the project model 66, the avatar 80 can enter and navigate the model as shown in FIG. 18, in this example with another avatar 82. Avatar 80 has laser pointer 84 and avatar 82 has laser pointer 86. These laser pointers may be of different colors and thicknesses, but are shown as black here for ease of viewing.
[0063] FIG. 19 shows a building scale view showing the avatar 80 in the doorway. FIG. 20 shows the two avatars 80 and 82 on a staircase in an inner view of the building formed in the model. In order for avatars to have such interactions, the user first needs to upload the 3D model.
[0064] The system provides users access to a 3D editing tool that allows users to import their 3D project models, such as those created using a 3D tool such as Rhino®, Autocad® or SketchUp®. For the embodiments involving architectural models, the tool allows the user to import their project models into a virtual reality scene and customize it with different furniture, landscapes, lighting, textures, and skies. The user can then save their scene and upload it directly to Virtual Bauer Wurster, where guests can enter the scene. By allowing users to customize and upload their own scenes, the system saves hundreds of hours of work manually integrating a user’s model and scene, if used, into the virtual reality development tool. Some embodiments may use a tool such as Unity®.
[0065] The editor allows users without experience in coding or virtual reality design tools to easily build out a scene with their model, in this case architectural project models. It bridges the gap between developing models in software such as Rhino® and bringing the model to life in a 3D modeling tool such as Unity®. The editor’s user interface is designed to be simple to understand, consisting of a dropdown menu of functions, a hierarchy of objects in the scene, and an inspector for modifying values and selecting scenery. It takes only a couple of minutes for users to familiarize themselves with the software and finish creating their scene.
[0066] To import models (.obj files), the system uses the Runtime OBJ Importer asset, which allows .obj files to be imported during runtime. The importer loads approximately 750,000 triangles in ten seconds. To store data between runs, the system saves project files to the user’s local computer via the virtual reality tool’s persistent data path. Project files include the project model file (model.obj) and, if the project model is an architectural model, the scene’s information (SceneInformation.json). The scene information JSON records the position, rotation, scale, tiling, and texture of all objects in the scene.

[0067] In one embodiment, the system uploads the project files to a server in a web services system, allowing the system to reconstruct the entire scene. In addition, the system may utilize shaders to enhance the program. For example, the system may include an outline shader to highlight objects in the scene when selected, allowing users to see through other objects and quickly differentiate the layers of the model. The system may also include a cross-section shader that lets users move a clipping plane that only renders pixels below it, offering a new X-ray perspective of the model. Sun path calculations may be added to the editor, allowing users to input the latitude and longitude of their building location and the time of year (day/hour), making the editor display the solar patterns on the building at that particular location and time. This is discussed in more detail with regard to FIG. 21.

[0068] The editor may be equipped with a dropdown menu featuring four main categories of options: File, Edit, Assets, and Render. The File dropdown allows users to view instructions in an embedded Google Chrome window, import model files, create/save/load projects, and exit the application. The Edit dropdown allows users to modify the transform of objects in the scene and reset objects such as lighting and the floor to their default locations. The Assets dropdown allows users to pick and place landscaping, entourage, and furniture objects in the scene. The Render dropdown allows users to change the texture of objects and also change the skybox. The user can assign materials to the various portions of the model, selected from a provided collection; alternatively, the user may upload and save custom materials.
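Paragraph [0066] states that SceneInformation.json records the position, rotation, scale, tiling, and texture of every object in the scene. A round-trip sketch of such a file follows; the exact schema and field names are assumptions, since the application’s real format is not reproduced here.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class SceneObject:
        name: str
        position: tuple   # (x, y, z)
        rotation: tuple   # Euler angles in degrees
        scale: tuple      # (x, y, z)
        tiling: tuple     # texture repeat (u, v)
        texture: str      # material chosen from the provided collection

    def save_scene(objects, path="SceneInformation.json"):
        with open(path, "w") as fh:
            json.dump([asdict(o) for o in objects], fh, indent=2)

    def load_scene(path="SceneInformation.json"):
        with open(path) as fh:
            return [SceneObject(**entry) for entry in json.load(fh)]

    save_scene([SceneObject("desk_01", (1.0, 0.0, 2.5), (0, 90, 0),
                            (1, 1, 1), (1, 1), "walnut")])
    print(load_scene()[0].texture)  # walnut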
[0069] FIG. 21 shows a secondary, circle menu with several selections the user can make to interact further with the building in the model. As discussed above, the user can select the menu item that looks like a sun to launch a solar study. FIG. 22 shows an example of one such study. The user interface allows the viewing user to manipulate the latitude, longitude, day and time, and time zone using sliders, as shown. This allows the viewing user to watch the sun move through the year interactively.
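The solar study of FIGs. 21-22 requires the sun’s apparent position as a function of latitude, day of year, and solar time. The sketch below uses a standard textbook approximation (Cooper’s declination formula and the hour-angle relation); the actual editor may use a more precise ephemeris, and the function name is illustrative.

    import math

    def sun_position(latitude_deg, day_of_year, solar_hour):
        """Approximate solar altitude and azimuth in degrees."""
        phi = math.radians(latitude_deg)
        # Cooper's formula: declination = 23.45 deg * sin(360 * (284 + n) / 365)
        delta = math.radians(23.45) * math.sin(
            math.radians(360 * (284 + day_of_year) / 365))
        h = math.radians(15 * (solar_hour - 12))   # 15 degrees per hour from noon
        sin_alt = (math.sin(phi) * math.sin(delta)
                   + math.cos(phi) * math.cos(delta) * math.cos(h))
        altitude = math.asin(sin_alt)
        # Azimuth measured clockwise from north.
        cos_az = ((math.sin(delta) - math.sin(phi) * sin_alt)
                  / (math.cos(phi) * math.cos(altitude)))
        azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
        if h > 0:                                  # afternoon: sun moves west
            azimuth = 360 - azimuth
        return math.degrees(altitude), azimuth

    # Berkeley (37.87 N) on the summer solstice at solar noon:
    print(sun_position(37.87, 172, 12.0))  # roughly (75.6, 180.0)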
[0070] FIG. 23 shows an entire building model produced by a modeling tool such as that discussed above. Users can create real-time plans and sections of the building while their avatars are inside it. This may happen in the x, y, and z directions, using the sliders to move the locations of the sections. Sliding the y-plane slider may produce a picture as shown in FIG. 24; sliding the x-plane slider may produce a picture as shown in FIG. 25.
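Conceptually, these section sliders discard geometry on one side of an axis-aligned plane, the per-pixel analogue of which is the cross-section shader of paragraph [0067]. A coarse CPU-side sketch, reusing the vertices/faces layout from the .obj reader above, is:

    def clip_by_plane(vertices, faces, axis, offset):
        """Keep only triangles entirely below an axis-aligned clipping plane.
        axis: 0 = x, 1 = y, 2 = z; offset: the slider position on that axis.
        A real shader clips per pixel; culling whole triangles is a coarse
        approximation used here only for illustration."""
        return [tri for tri in faces
                if all(vertices[i][axis] <= offset for i in tri)]

    verts = [(0, 0, 0), (0, 3, 0), (1, 0, 0), (1, 3, 0)]
    tris = [(0, 2, 1), (1, 2, 3)]
    print(clip_by_plane(verts, tris, axis=1, offset=3.0))  # both kept
    print(clip_by_plane(verts, tris, axis=1, offset=1.5))  # [] -- both cross the cut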
[0071] Other menu items may be available on the secondary menu; a Tab menu, for example, may allow for interactive top-view plans of the building, as shown in FIG. 26. The view may be changed as desired, such as to add the roof (not shown).
[0072] Returning to the circle menu shown in FIG. 21, the user can also make other selections, such as 2D mode. This allows the visualization of 2D media while inside the 3D model. FIG. 27 shows a 2D rendering of a view from a student’s desk as an example.
[0073] In addition to providing ways to customize views, the system may provide a user interface that allows the user to customize audio zones. Audio zones allow different users to chat in different rooms without audio interference from outside the zone. These zones are shown by the boxes in FIG. 28.
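One simple way to realize such audio zones is an axis-aligned bounding-box membership test, in which two users hear each other only when they stand in the same box. This is an assumption about the mechanism, offered for illustration; an implementation might instead rely on engine-level audio sources and occlusion.

    from dataclasses import dataclass

    @dataclass
    class AudioZone:
        name: str
        minimum: tuple  # one corner of the box (x, y, z)
        maximum: tuple  # the opposite corner

        def contains(self, point):
            return all(lo <= c <= hi
                       for lo, c, hi in zip(self.minimum, point, self.maximum))

    def can_chat(pos_a, pos_b, zones):
        """Audible only when both users stand inside the same zone."""
        return any(z.contains(pos_a) and z.contains(pos_b) for z in zones)

    zones = [AudioZone("crit_room", (0, 0, 0), (5, 3, 5)),
             AudioZone("studio_a", (6, 0, 0), (12, 3, 5))]
    print(can_chat((1, 1, 1), (4, 1, 2), zones))  # True  -- same room
    print(can_chat((1, 1, 1), (8, 1, 2), zones))  # False -- different zones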
[0074] In addition to the overall general features discussed above, the system can provide advanced functionalities customized for each client organization. For example, in Virtual Bauer Wurster, an academic hall, the proposed functionalities include lecture classes/seminars; virtual exhibitions and lectures; organization of informal gatherings and formal events; creativity platforms; the library, archives, and collections; and faculty offices, labs, and special locations.
[0075] Lecture classes/seminars are included in the directory and listed on the walls of the lobby, with the same information as the studio courses, such as course number, instructor, and a link to online materials, possibly using an out-of-the-box application, and with posting of materials at students’ desks, if pertinent to the course format.
[0076] Virtual, interactive exhibitions can easily be organized using the existing Virtual Studio functionalities. Drawings, photos, videos, and models can be displayed in designated areas of the building, and users can interact with them as described above. These exhibition spaces also offer transformational value: in physical exhibitions, exhibits usually cannot be touched, whereas here entering a 3D model is possible, a transformational activity impossible in real life. Likewise, communication with other visitors is unusual in physical exhibitions, where the experience is more individualized than collective.
[0077] Synchronous lectures can be organized using the embedded video functionalities or out-of-the-box technologies such as Google Slides. Asynchronous lectures are based on a menu of available past lectures. Synchronous and asynchronous public feedback is possible using existing functionalities.
[0078] Virtual buildings such as Virtual Bauer Wurster will include links to the webpages of their libraries, in this case the CED Library and the Environmental Design Archives. There may also be a function to allow students to ‘publish’ their essays to the library. In addition to the library, the building may include faculty offices, labs, and special locations. Virtual Bauer Wurster can include links to 360° footage of CED labs, like the XR Lab, PrintFarm, and CBE facilities; faculty offices, possibly on a volunteer basis; and other special locations, like particular rooms or spaces.
[0079] The Virtual Studio platform may have additional features added to virtual spaces such as Virtual Bauer Wurster. For example, gamification may be added to Virtual Bauer Wurster, with games such as building blocks, virtual plants to care for, virtual scavenger hunts, etc.

[0080] The system may be implemented on one or more computing devices discussed variously above, connected together as shown in FIG. 29. A user interacts with the system through a user computing device 102, having a user interface, one or more processors such as 104, a memory 106, a display device 110, and a network connection to the system that produces the system models of physical spaces discussed above. The user uploads the information needed to build an avatar, and the system has one or more rendering processors that execute code to represent the user avatar as image data within the image data that forms a visual model of a space. One or more processors render the 3D data as 2D data on the user computing device 102. The image data being rendered includes a visual model of the space having at least one link to an interactive object, such as the project models or an out-of-the-box, third-party application such as 120. Interactions with objects may include picking up and returning models, writing or enlarging notes, etc.
[0081] In the example shown here, the system may include one or more processors that execute code to allow the user to log into the system, typically through a web browser or other typical network interface. The one or more processors may reside in one or more servers. The one or more servers may execute code that allows the user to access the library 114, an authorization service 112, and a scheduling tool 116. The system also has a storage 118, represented here as a single store, though it will more than likely be distributed across multiple servers and multiple devices in the system.
[0082] In this manner, the system provides for remote interactions, and those remote interactions can become more spontaneous and more ‘real,’ allowing users to interact and work together in ways that previously had been limited to being physically present with each other, as well as providing interactions that would not be possible in a physical environment.

[0083] It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the embodiments.

Claims

WHAT IS CLAIMED IS:
1. A system, comprising:
   a network connection to a user computing device, the user computing device having a display and a user interface; and
   one or more servers having one or more processors configured to execute instructions that cause the one or more processors to:
      prompt the user to enter user credentials through a user interface on the user computing device;
      authorize the user to interact with a virtual space comprising at least two three-dimensional models of a physical space;
      produce image data of at least one of the at least two three-dimensional models of the physical space as two-dimensional image data on a display of the user computing device, the at least two three-dimensional models being nested models within the physical space, the image data corresponding to a location represented by a particular model;
      provide user input regions within the image data; and
      render an image of at least one interactive object and render an altered view of the interactive object when the interactive object is selected as indicated by a signal from the user computing device, wherein a resolution of the altered view is higher than a resolution of the image.
2. The system of claim 1, wherein the at least two three-dimensional models of the physical space includes at least one three-dimensional model that is nested within another of the at least two three-dimensional models.
3. The system of claim 2, wherein the at least two three-dimensional models include at least two of a building model, a floor model, a desk model, a studio model, a project model, and an avatar model.
4. The system of claim 1, further comprising links to at least one third-party application embedded in the two-dimensional image data, wherein the links cause the processor to connect to the at least one third-party application across a network.
5. The system of claim 1, wherein the two-dimensional image data includes regions of video data.
6. The system of claim 1, wherein the one or more processors provide a user interface to a three-dimensional editor to allow users to edit, segment and import project models as the interactive objects.
7. A method of providing a virtual reality environment, comprising:
   generating two or more three-dimensional models of a physical space, the two or more three-dimensional models being nested models within the physical space;
   rendering, across a network, two-dimensional image data of the two or more three-dimensional models of the physical space on a display of a user computing device;
   inserting user input regions within the image data;
   embedding an interactive object within the image data; and
   altering the two-dimensional image data when a signal selecting the interactive object is received from the user computing device.
8. The method as claimed in claim 7, wherein generating two or more three-dimensional models of the physical space comprises generating two or more three-dimensional models of an actual physical space.
9. The method as claimed in claim 7, wherein generating two or more three-dimensional models of the physical space comprises generating two or more three-dimensional models of a virtual physical space.
10. The method as claimed in claim 7, wherein altering the two-dimensional image data comprises rendering two-dimensional image data of the interactive object at a higher resolution than a resolution of the two-dimensional image data and the interactive object comprises a project model.
11. The method as claimed in claim 10, further comprising rendering segments of the project model based upon signals from the user computing device selecting portions of the project model to view.
12. The method as claimed in claim 7, further comprising providing the user with access to a tool to allow the user to:
   section the project model in a three-dimensional design tool;
   customize the project model; and
   upload the project model and customizations to the system.
13. The method as claimed in claim 12, further comprising importing object files of the project model.
14. The method as claimed in claim 13, further comprising storing the object files on the user computing device as needed to conserve bandwidth.
15. The method as claimed in claim 7, wherein the interactive object comprises a third- party application and altering the two-dimensional image data comprises opening a window within the two-dimensional image data to allow the user to connect to the third-party application.
16. The method as claimed in claim 15, wherein the third-party application comprises a real-time messaging system, and the system renders an interface to the real-time messaging system that shows one or more users interacting with the project model.
17. The method as claimed in claim 16, wherein the system rendering the interface includes rendering a laser pointer associated with each user.
18. The method as claimed in claim 7, wherein rendering, across a network, two-dimensional image data of the two or more three-dimensional models of the physical space on a display of the user computing device comprises rendering a desk model.
PCT/US2021/051330 2020-09-21 2021-09-21 Remote collaboration platform for interactions in virtual environments WO2022061296A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063080863P 2020-09-21 2020-09-21
US63/080,863 2020-09-21

Publications (1)

Publication Number Publication Date
WO2022061296A1 true WO2022061296A1 (en) 2022-03-24

Family

ID=80775670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/051330 WO2022061296A1 (en) 2020-09-21 2021-09-21 Remote collaboration platform for interactions in virtual environments

Country Status (1)

Country Link
WO (1) WO2022061296A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110029897A1 (en) * 2009-07-31 2011-02-03 Siemens Corporation Virtual World Building Operations Center
US20180144487A1 (en) * 2011-06-29 2018-05-24 Matterport, Inc. Building a three-dimensional composite scene
US20140002444A1 (en) * 2012-06-29 2014-01-02 Darren Bennett Configuring an interaction zone within an augmented reality environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
POWELL THOMAS M., LEWIS CRAIG V. W., CURCHITSER ENRIQUE N., HAIDVOGEL DALE B., HERMANN ALBERT J., DOBBINS ELIZABETH L.: "Results from a three-dimensional, nested biological-physical model of the California Current System and comparisons with statistics from satellite imagery", JOURNAL OF GEOPHYSICAL RESEARCH, AMERICAN GEOPHYSICAL UNION, US, vol. 111, no. C7, 1 January 2006 (2006-01-01), US , XP055921283, ISSN: 0148-0227, DOI: 10.1029/2004JC002506 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21870418; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21870418; Country of ref document: EP; Kind code of ref document: A1)