WO2023014618A1 - Object placement for electronic devices - Google Patents

Object placement for electronic devices

Info

Publication number
WO2023014618A1
Authority
WO
WIPO (PCT)
Prior art keywords
anchor
virtual content
location
placement
system process
Prior art date
Application number
PCT/US2022/038952
Other languages
English (en)
Inventor
Michael E. BUERLI
Pavel V. DUDRENOV
Original Assignee
Apple Inc.
Priority date
Filing date
Publication date
Priority claimed from US17/827,651 (US20230040610A1)
Application filed by Apple Inc. filed Critical Apple Inc.
Priority to EP22758334.1A (EP4363950A1)
Priority to CN202280054769.4A (CN117795461A)
Publication of WO2023014618A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present description relates generally to extended reality environments including, for example, object placement for electronic devices.
  • Extended reality technology aims to bridge a gap between virtual environments and a physical environment by providing an enhanced physical environment that is extended with electronic information.
  • the electronic information appears to be part of the physical environment as perceived by a user.
  • it can be challenging to determine where in the physical environment to place the electronic information.
  • FIG. 1 illustrates an example system architecture including various electronic devices that may implement the subject system in accordance with one or more implementations.
  • FIG. 2 illustrates an example computing device that may implement aspects of the subject technology.
  • FIG. 3 illustrates an example operating system (OS) service of an electronic device in accordance with one or more implementations.
  • FIG. 4 illustrates the example OS service of FIG. 3 generating an anchor responsive to an implicit anchor request in accordance with one or more implementations.
  • FIG. 5 illustrates the example OS service of FIG. 3 updating an anchor responsive to user input in accordance with one or more implementations.
  • FIG. 6 illustrates an example of virtual content anchored to an explicit anchor in accordance with aspects of the subject technology.
  • FIG. 7 illustrates an example of virtual content anchored to placement locations associated with an implicit anchor in accordance with aspects of the subject technology.
  • FIG. 9 illustrates an example of virtual content being moved in an XR environment in accordance with aspects of the subject technology.
  • FIG. 10 illustrates another example of virtual content being moved in an XR environment in accordance with aspects of the subject technology.
  • FIG. 11 illustrates an example of updates to anchors for virtual content using a placement system of an electronic device in accordance with aspects of the subject technology.
  • FIG. 12 illustrates an example of a portion of a user-interface window in an XR environment being extracted for standalone display in the XR environment in accordance with aspects of the subject technology.
  • FIG. 13 illustrates a flow diagram of an example process for object placement according to aspects of the subject technology.
  • a physical environment refers to a physical world that people can sense and/or interact with without aid of electronic devices.
  • the physical environment may include physical features such as a physical surface or a physical object.
  • the physical environment corresponds to a physical park that includes physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment such as through sight, touch, hearing, taste, and smell.
  • an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic device.
  • the XR environment may include augmented reality (AR) content, mixed reality (MR) content, virtual reality (VR) content, and/or the like.
  • With an XR system, a subset of a person’s physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner that comports with at least one law of physics.
  • the XR system may detect head movement and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment.
  • the head mountable system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment.
  • a head mountable system may have a transparent or translucent display.
  • the transparent or translucent display may have a medium through which light representative of images is directed to a person’s eyes.
  • the display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies.
  • the medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof.
  • the transparent or translucent display may be configured to become opaque selectively.
  • Projection-based systems may employ retinal projection technology that projects graphical images onto a person’s retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
  • Implementations of the subject technology described herein provide for coordinated and intuitive placement of virtual objects in an extended reality environment provided by an electronic device.
  • Virtual content displayed in an extended reality environment can be anchored to an anchor location in a physical environment of the device, so that the displayed virtual content remains stationary relative to the physical environment as the user moves the device and/or looks around the XR environment.
  • the anchor location can be requested by an application running at a device from a system process of the device.
  • the requested anchor location is specific or explicit (e.g., an anchor corresponding to a compact physical object such as a user’s hand, or an explicit location such as at the center of the user’s field of view) and can be immediately provided by the system process.
  • the requested anchor location can be an implicit anchor such as a region (e.g., a horizontal plane or a vertical plane) within which various placement locations are possible.
  • placement context information can be obtained by a system process of a device, and used to determine where, in the region, to anchor and display the virtual content.
  • a placement system of a device provides additional placement input to an anchoring system of the device.
  • the placement system may facilitate coordinated and intuitive placement of virtual objects relative to each other and relative to physical objects in the extended reality environment.
  • the placement context information can also help facilitate intuitive and natural adjustments of the positions of displayed virtual objects as another virtual object is moved around or among the displayed virtual objects.
  • FIG. 1 illustrates an example system architecture 100 including various electronic devices that may implement the subject system in accordance with one or more implementations. Not all of the depicted components may be used in all implementations, however, and one or more implementations may include additional or different components than those shown in the figure. Variations in the arrangement and type of the components may be made without departing from the spirit or scope of the claims as set forth herein. Additional components, different components, or fewer components may be provided.
  • the system architecture 100 includes an electronic device 105, an electronic device 110, an electronic device 115, and a server 120.
  • the system architecture 100 is illustrated in FIG. 1 as including the electronic device 105, the electronic device 110, the electronic device 115, and the server 120; however, the system architecture 100 may include any number of electronic devices and any number of servers or a data center including multiple servers.
  • the electronic device 105 may be a smart phone, a tablet device, or a wearable device such as a head mountable portable system, that includes a display system capable of presenting a visualization of an extended reality environment to a user 101.
  • the electronic device 105 may be powered with a battery and/or any other power supply.
  • the display system of the electronic device 105 provides a stereoscopic presentation of the extended reality environment, enabling a three-dimensional visual display of a rendering of a particular scene, to the user.
  • the user may use an electronic device 104, such as a tablet, watch, mobile device, and the like.
  • the electronic device 105 may include one or more cameras such as camera(s) 150 (e.g., visible light cameras, infrared cameras, etc.). Further, the electronic device 105 may include various sensors 152 including, but not limited to, cameras, image sensors, touch sensors, microphones, inertial measurement units (IMU), heart rate sensors, temperature sensors, Lidar sensors, radar sensors, sonar sensors, GPS sensors, Wi-Fi sensors, near-field communications sensors, etc. Moreover, the electronic device 105 may include hardware elements that can receive user input such as hardware buttons or switches. User input detected by such sensors and/or hardware elements corresponds to various input modalities for interacting with virtual content displayed within a given extended reality environment.
  • such input modalities may include, but are not limited to, facial tracking, eye tracking (e.g., gaze direction), hand tracking, gesture tracking, biometric readings (e.g., heart rate, pulse, pupil dilation, breath, temperature, electroencephalogram, olfactory), recognizing speech or audio (e.g., particular hotwords), and activating buttons or switches, etc.
  • the electronic device 105 may also detect and/or classify physical objects in the physical environment of the electronic device 105.
  • the electronic device 105 may be communicatively coupled to a base device such as the electronic device 110 and/or the electronic device 115.
  • a base device may, in general, include more computing resources and/or available power in comparison with the electronic device 105.
  • the electronic device 105 may operate in various modes. For instance, the electronic device 105 can operate in a standalone mode independent of any base device. When the electronic device 105 operates in the standalone mode, the number of input modalities may be constrained by power limitations of the electronic device 105 such as available battery power of the device. In response to power limitations, the electronic device 105 may deactivate certain sensors within the device itself to preserve battery power.
  • the electronic device 105 may also operate in a wireless tethered mode (e.g., connected via a wireless connection with a base device), working in conjunction with a given base device.
  • the electronic device 105 may also work in a connected mode where the electronic device 105 is physically connected to a base device (e.g., via a cable or some other physical connector) and may utilize power resources provided by the base device (e.g., where the base device is charging the electronic device 105 while physically connected).
  • When the electronic device 105 operates in the wireless tethered mode or the connected mode, at least a portion of processing user inputs and/or rendering the extended reality environment may be offloaded to the base device, thereby reducing processing burdens on the electronic device 105.
  • the electronic device 105 works in conjunction with the electronic device 110 or the electronic device 115 to generate an extended reality environment including physical and/or virtual objects that enables different forms of interaction (e.g., visual, auditory, and/or physical or tactile interaction) between the user and the extended reality environment in a real-time manner.
  • the electronic device 105 provides a rendering of a scene corresponding to the extended reality environment that can be perceived by the user and interacted with in a real-time manner.
  • the electronic device 105 may provide sound, and/or haptic or tactile feedback to the user.
  • the content of a given rendered scene may be dependent on available processing capability, network availability and capacity, available battery power, and current system workload.
  • the electronic device 105 may also detect events that have occurred within the scene of the extended reality environment. Examples of such events include detecting a presence of a particular person, entity, or object in the scene. Detected physical objects may be classified by electronic device 105, electronic device 110, and/or electronic device 115 and the location, position, size, dimensions, shape, and/or other characteristics of the physical objects can be used to provide physical anchor objects for an XR application generating virtual content, such as a UI of an application, for display within the XR environment.
  • the electronic device 110 and/or the electronic device 115 can also generate such extended reality environments either working in conjunction with the electronic device 105 or independently of the electronic device 105.
  • the network 106 may communicatively (directly or indirectly) couple, for example, the electronic device 105, the electronic device 110 and/or the electronic device 115 with the server 120 and/or one or more electronic devices of one or more other users.
  • the network 106 may be an interconnected network of devices that may include, or may be communicatively coupled to, the Internet.
  • the electronic device 110 may include a touchscreen and may be, for example, a smartphone that includes a touchscreen, a portable computing device such as a laptop computer that includes a touchscreen, a peripheral device that includes a touchscreen (e.g., a digital camera, headphones), a tablet device that includes a touchscreen, a wearable device that includes a touchscreen such as a watch, a band, and the like, any other appropriate device that includes, for example, a touchscreen, or any electronic device with a touchpad.
  • the electronic device 110 may not include a touchscreen but may support touchscreen-like gestures, such as in an extended reality environment.
  • the electronic device 110 may include a touchpad. In FIG. 1, the electronic device 110 is depicted as a mobile smartphone device with a touchscreen.
  • the electronic device 110, the electronic device 104, and/or the electronic device 105 may be, and/or may include all or part of, the electronic system discussed below with respect to FIG. 14.
  • the electronic device 110 may be another device such as an Internet Protocol (IP) camera, a tablet, or a peripheral device such as an electronic stylus, etc.
  • the electronic device 115 may be, for example, a desktop computer, a portable computing device such as a laptop computer, a smartphone, a peripheral device (e.g., a digital camera, headphones), a tablet device, a wearable device such as a watch, a band, and the like.
  • the electronic device 115 is depicted as a desktop computer.
  • the electronic device 115 may be, and/or may include all or part of, the electronic system discussed below with respect to FIG. 14.
  • the server 120 may form all or part of a network of computers or a group of servers 130, such as in a cloud computing or data center implementation.
  • the server 120 stores data and software, and includes specific hardware (e.g., processors, graphics processors and other specialized or custom processors) for rendering and generating content such as graphics, images, video, audio and multi-media files for extended reality environments.
  • the server 120 may function as a cloud storage server that stores any of the aforementioned extended reality content generated by the above-discussed devices and/or the server 120.
  • FIG. 2 illustrates an example architecture that may be implemented by the electronic device 105 in accordance with one or more implementations of the subject technology.
  • portions of the architecture of FIG. 2 are described as being implemented by the electronic device 105 of FIG. 1, such as by a processor and/or memory of the electronic device; however, appropriate portions of the architecture may be implemented by any other electronic device, including the electronic device 110, electronic device 115, and/or server 120.
  • Not all of the depicted components may be used in all implementations, however, and one or more implementations may include additional or different components than those shown in the figure. Variations in the arrangement and type of the components may be made without departing from the spirit or scope of the claims as set forth herein. Additional components, different components, or fewer components may be provided.
  • Various portions of the architecture of FIG. 2 can be implemented in software or hardware, including by one or more processors and a memory device containing instructions which, when executed by the processor, cause the processor to perform the operations described herein.
  • the trapezoidal boxes may indicate that the sensors 152, the camera(s) 150 and the display 225 may be hardware components
  • the rectangular boxes may indicate that the OS service 200, the application 202, the rendering engine 223, and the compositing engine 227 may be implemented in software, including by one or more processors and a memory device containing instructions, which when executed by the processor cause the processor to perform the operations described herein.
  • an application such as application 202 provides application data to a rendering engine 223 for rendering of the application data, such as a UI of the application or other virtual content.
  • the application data may include application-generated content (e.g., windows, buttons, tools, etc.) and/or user-generated content (e.g., text, images, etc.), and information for rendering the content in the UI.
  • rendering engine 223 renders the UI for display by a display such as display 225 of the electronic device 105.
  • additional information may be provided for display of the UI of the application 202, such as in a two-dimensional or three-dimensional (e.g., XR) scene.
  • sensors 152 provide environment information (e.g., depth information from one or more depth sensors, motion information from one or more motion sensors, and/or user information) to an OS service 200 (e.g., an XR service that may be provided by an operating system of the electronic device 105).
  • Camera(s) 150 may also provide images of a physical environment and/or one or more portions of the user (e.g., the user’s eyes, hands, face, etc.) to OS service 200.
  • OS service 200 may generate scene information, such as a three-dimensional map, of some or all of the physical environment of electronic device 105 using the environment information (e.g., the depth information and/or the images) from sensors 152 and camera(s) 150.
  • the anchor request may include a request for an implicit anchor (e.g., a general physical object such as a horizontal planar surface (e.g., a surface of a floor or a surface of a tabletop) or a vertical planar surface (e.g., a surface of a wall)).
  • the anchor request may include a request for an explicit anchor, such as a specific physical object (e.g., a hand of a user or another participant, a face of a user or another participant, a head of a user or another participant, or a body of a user or another participant), a user-defined location (e.g., defined by a gesture to or at the location by the user), or a camera-centric location (e.g., a location at the center of the user’s field of view).
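  • As an illustration of the distinction above, the following Swift sketch models the two kinds of anchor requests as separate enumerations. The type and case names are assumptions made for this example only and do not correspond to any actual system API.

```swift
import simd

// Illustrative sketch only: type and case names are assumptions, not a real API.

// Explicit anchors resolve to a specific location that the system process
// can provide immediately (e.g., a tracked hand or the center of the view).
enum ExplicitAnchor {
    case hand, face, head, body
    case userDefined(simd_float4x4)  // e.g., a location identified by a user gesture
    case cameraCentric               // e.g., the center of the user's field of view
}

// Implicit anchors name a region; the exact placement location within the
// region is left to the system's placement process.
enum ImplicitAnchor {
    case horizontalPlane, verticalPlane, floor, wall
}

enum AnchorRequest {
    case explicit(ExplicitAnchor)
    case implicit(ImplicitAnchor)
}
```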
  • Application 202 may include code that, when executed by one or more processors of electronic device 105, generates application data, for display of a UI or other virtual content of the application on, near, attached to, or otherwise associated with an anchor location corresponding to the anchor identified by the identifier provided from OS service 200.
  • the application data can be provided to the OS service 200 and/or the rendering engine 223, as illustrated in FIG. 2.
  • scene information can also be provided to rendering engine 223.
  • the scene information can include or be based on, as examples, environment information such as a depth map of the physical environment, and/or object information for detected objects in the physical environment.
  • Rendering engine 223 can then render the application data from application 202 for display by display 225 of electronic device 105 at an anchor location corresponding to the anchor generated responsive to the anchor request.
  • the UI or other virtual content of application 202 is rendered for display at the appropriate location on the display 225, to appear in association with the anchor provided by OS service 200.
  • electronic device 105 can also include a compositing engine 227 that composites video images of the physical environment, based on images from camera(s) 150, for display together with the rendered UI or other virtual content from rendering engine 223.
  • compositing engine 227 may be provided in an electronic device 105 that includes an opaque display, to provide pass-through video to the display.
  • compositing engine 227 may be omitted or unused in some circumstances, or may be incorporated in rendering engine 223.
  • FIG. 2 illustrates a rendering engine 223 that is separate from OS service 200, it should be appreciated that OS service 200 and rendering engine 223 may form a common service and/or that rendering operations for rendering content for display can be performed by the OS service 200.
  • FIG. 2 illustrates a rendering engine 223 that is separate from application 202, it should be appreciated that, in some implementations, application 202 may render content for display by display 225 without using a separate rendering engine.
  • Electronic device 105 may allow application 202 to request and obtain anchor information from OS service 200 (e.g., via an application programming interface (API) or via a Serial Peripheral Interface (SPI)) as illustrated in FIG. 2, which can facilitate efficient development, implementation, and/or run-time execution of application 202 (e.g., since each application 202 does not have to perform its own object detection, scene mapping, anchoring, tracking, etc.) as well as intuitive placement and/or behavior of virtual objects in the XR environment.
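  • A minimal sketch of the request/response round trip described above, reusing the AnchorRequest type from the earlier sketch: the application asks a system service for an anchor and receives back an identifier and bounds it can draw within. The protocol, type, and property names here are hypothetical.

```swift
import Foundation
import simd

// Hypothetical shape for the placement information returned to an application.
struct PlacementInfo {
    let anchorIdentifier: UUID   // identifier the application uses to refer to the anchor
    let bounds: simd_float3      // extents around the anchor within which content may be displayed
}

// A sketch of the kind of interface an OS service might expose (e.g., via an API or SPI).
protocol AnchorProviding {
    func requestAnchor(_ request: AnchorRequest) async throws -> PlacementInfo
}

// Application-side usage, assuming `service` is supplied by the system process.
func placeUI(using service: AnchorProviding) async throws {
    let placement = try await service.requestAnchor(.implicit(.verticalPlane))
    // Render the application's UI attached to placement.anchorIdentifier,
    // sized to fit within placement.bounds.
    print("anchor \(placement.anchorIdentifier), bounds \(placement.bounds)")
}
```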
  • the application 202 may request an explicit anchor or an implicit anchor from the OS service 200.
  • FIGS. 3 and 4 illustrate example operations of the OS service 200 that may be performed for generating an explicit anchor and an implicit anchor, respectively.
  • the OS service 200 may include one or more host systems 300, a placement system 302, and an anchoring system 304.
  • the host systems 300 receive an explicit anchor request (e.g., from the application 202).
  • the host systems 300 may also receive environment information and/or user information (e.g., from camera(s) 150 and/or sensors 152).
  • the environment information may include a map (e.g., a depth map) of a physical environment of the electronic device 105 or another representation of the physical environment of the electronic device 105 and/or the locations, sizes, and/or other features of one or more objects in the physical environment.
  • the user information may include a location of the user in the physical environment (e.g., a location relative to a particular object or location in the physical environment, such as an initial location of the user or the device, or a coordinate origin established when the device was powered on or established during an enrollment operation in the physical environment and/or based on a mapping of the physical environment), an orientation of the user (e.g., an orientation of the user’s body and/or the user’s head), a gaze location (e.g., corresponding to the location at which the user’s gaze is currently focused), user motion information, user gesture information, etc.
  • the placement system 302 may determine (e.g., based on placement context information such as information indicating occupancy of virtual and/or physical objects in the XR environment and/or based on application and/or system display preferences for the virtual content) bounds around the anchor location within which application content can be displayed, and may return the bound information to the host systems 300.
  • the bound information may be included in the placement information that is provided to the application 202 from the host systems 300.
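  • A minimal sketch of how bounds around an anchor location might be computed from placement context, assuming a planar region described by a 2D center and size. The names and the clamping strategy are illustrative assumptions, not the patent's algorithm.

```swift
import simd

// A planar region (e.g., an unoccupied patch of a wall), described in meters.
struct PlanarRegion {
    var center: simd_float2
    var size: simd_float2
}

// Clamp the content's requested size and position so its bounds stay within
// the available region around the chosen anchor location.
func boundsForContent(requestedSize: simd_float2,
                      anchorLocation: simd_float2,
                      availableRegion: PlanarRegion) -> PlanarRegion {
    // The content cannot be larger than the available region.
    let size = simd_min(requestedSize, availableRegion.size)

    // Keep the bounds centered as close to the anchor location as possible
    // without extending past the region's edges.
    let slack = (availableRegion.size - size) / 2
    let center = simd_clamp(anchorLocation,
                            availableRegion.center - slack,
                            availableRegion.center + slack)
    return PlanarRegion(center: center, size: size)
}
```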
  • some or all of the anchor information may be provided to the rendering engine 223 in the scene information from the host systems 300, or the anchor information can be handled by the host systems 300 to determine a location for rendering the application data, and the scene information can include an indication of the location for rendering the application data.
  • the scene information may include additional information such as some or all of the environment information and/or the user information, and/or scene rendering instructions based on the environment information and/or the user information.
  • the rendering engine 223 may then render application data (e.g., application data corresponding to a UI or other virtual content generated by the application) and/or other virtual content, for display in an XR environment, based on the scene information from the host systems 300.
  • the application data for rendering at the anchor location may be provided to the host systems 300 and rendered at the anchor location by the host systems 300.
  • FIG. 4 illustrates a scenario in which the OS service 200 receives an implicit anchor request.
  • An implicit anchor request may be a request for an anchor that includes some ambiguity with respect to the specific anchor location.
  • an implicit anchor request may be a request for a region (e.g., a request for a horizontal plane, a request for a vertical plane, a request for a floor, or a request for a wall), in which the location within the region is not specified in the request.
  • the host systems 300 may provide the implicit anchor request along with one or more placement parameters to the placement system 302.
  • the placement parameters may include placement context information such as occupancy information for other virtual content in the XR environment, occupancy information for physical objects in the physical environment, available region information indicating one or more available portions of the physical environment (e.g., an unoccupied portion of a wall or an unoccupied portion of a floor or a desk) in which the virtual content can be placed, and/or one or more placement preferences such as preferences to be placed to the left of, to the right of, above, or below an existing virtual or physical object, a preference to be placed as a child object of an existing virtual or physical object, a preference to be oriented to a primary user, a preference to respond or not respond to another participant in a shared XR experience, and/or a preference to be oriented toward multiple users (e.g., toward an intermediate location between a user of the electronic device and another participant in a shared XR experience).
  • the placement system 302 may determine, based on the implicit anchor request and the placement parameters, a placement component. For example, the placement system 302 may identify a region in the physical environment within which space is available for display of the virtual content, and select a placement location within that region based on display preferences for the virtual content and/or based on display preferences of other virtual content already displayed.
  • the placement component may be or include the placement location, and may be provided to the anchoring system 304 to indicate, to the anchoring system 304, the placement location at which to generate the anchor. In one or more other implementations, the placement system 302 may include, in the placement component, instructions to the anchoring system 304 for selecting the location at which to generate the anchor.
  • the placement system 302 may obtain initial anchor information from the anchoring system 304 prior to generation of the placement component.
  • the placement system 302 may obtain an anchor location of a physical anchor object (e.g., an anchor location of a floor, a wall, a desk, a table, etc.) or a virtual anchor object (e.g., a floating vertical plane or a floating horizontal plane) that corresponds to the implicit anchor (e.g., at a center, edge, corner, or other position on the implicit anchor), determine a placement location different than the anchor location of the physical or virtual anchor object, determine a relative transform between the placement location and the anchor location of the physical or virtual anchor object, and provide the relative transform to the anchoring system (e.g., as the placement component).
  • the anchoring system 304 may determine an anchor transform between a coordinate origin and the placement location, by combining the relative transform received from the placement system 302 with a transform between the coordinate origin and the anchor location of the physical or virtual anchor object. In this example, the anchoring system 304 may provide the anchor transform and an identifier of the anchor transform to the placement system 302 (e.g., as anchor information for the generated anchor).
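  • The transform composition described above can be written directly with simd matrices: the anchor transform from the coordinate origin to the placement location is the product of the origin-to-anchor-object transform and the relative transform supplied by the placement system. The concrete values below are made up for illustration.

```swift
import simd

// Build a translation-only 4x4 transform (sufficient for this illustration).
func translation(_ t: simd_float3) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = simd_float4(t, 1)
    return m
}

// Transform from the coordinate origin to the physical or virtual anchor object
// (e.g., the center of a detected tabletop), tracked by the anchoring system.
let originToAnchorObject = translation(simd_float3(0.8, 0.0, -1.5))

// Relative transform from the anchor object to the placement location chosen by
// the placement system (e.g., an unoccupied spot near the table's edge).
let anchorObjectToPlacement = translation(simd_float3(0.2, 0.0, 0.1))

// Anchor transform between the coordinate origin and the placement location,
// obtained by combining the two transforms.
let anchorTransform = originToAnchorObject * anchorObjectToPlacement
```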
  • the placement system 302 may also generate bounds (e.g., and/or other information such as orientation information for the virtual content) corresponding to the anchor generated responsive to the implicit anchor request, and can provide the bounds (e.g., and/or other information such as orientation information for the virtual content) to host systems 300 along with the anchor information generated by the anchoring system 304. As in the example of FIG. 3, the host systems 300 may then provide placement information (e.g., including the anchor identifier and the bound information or other placement information) to the application 202, and provide scene information to the rendering engine 223.
  • anchors generated by the OS service 200 can be dynamically updated in a way that provides intuitive interaction with virtual content by a user of the electronic device.
  • FIG. 5 illustrates operations that may be performed by the OS service 200 when a modification input is received from a user.
  • modification inputs may be user movements within an XR environment (e.g., translational and/or rotational movements of the user’s body, reorientations of the user’s head, and/or eye movements), user interaction with displayed virtual content (e.g., user gestures such as gestures to grab, push, pull, rotate, and/or otherwise move displayed virtual content), user inputs to add new virtual content to an XR environment, an addition or motion of another participant in a shared XR experience, or other inputs for modifying any aspect of the XR environment.
  • the modification input may be provided to the host systems 300.
  • the host systems 300 may generate updated placement parameters and provide the updated placement parameters to the placement system 302.
  • the placement system 302 may generate an updated placement component (e.g., including an updated placement location) based on the updated placement parameters, and provide the updated placement component to the anchoring system 304.
  • the anchoring system 304 may then generate updated anchor information (e.g., an updated transform corresponding to the updated placement location indicated by the updated placement component for an existing anchor identifier, or a new transform corresponding to a new placement location indicated by the updated placement component for a new anchor identifier), and provide the updated anchor information to the placement system 302.
  • the modification input can be provided directly to the placement system 302 and the placement system 302 can generate the updated placement component without receiving any updated placement parameters (e.g., the placement parameters may be unchanged by the modification input in some implementations).
  • the placement system 302 may store one or more workspaces, such as workspace 500, during some or all of a modification operation for virtual content (e.g., to allow the placement system 302 to snap the virtual content back to a previous anchor location if the modification operation is terminated before a new placement location is determined for the virtual content). Examples of modification inputs, and resulting operations of the placement system 302 and/or the anchoring system 304, are described hereinafter in connection with, for example, FIGS. 9-12.
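  • A rough sketch of the workspace/snap-back behavior described above: while content is being moved, the previous anchor transform is remembered so the content can return to it if the move ends before a new placement location is determined. All of the names are illustrative assumptions.

```swift
import simd

// Illustrative sketch of the "workspace" idea: remember where content was
// anchored so it can snap back if a move is abandoned.
struct Workspace {
    let previousAnchorTransform: simd_float4x4
}

final class PlacementSession {
    private var workspace: Workspace?
    private(set) var currentTransform: simd_float4x4

    init(currentTransform: simd_float4x4) {
        self.currentTransform = currentTransform
    }

    /// Called when a modification input (e.g., a grab gesture) begins.
    func beginMove() {
        workspace = Workspace(previousAnchorTransform: currentTransform)
    }

    /// Called as the user drags the content to a candidate location.
    func update(to candidate: simd_float4x4) {
        currentTransform = candidate
    }

    /// Called when a valid new placement location has been committed.
    func commit() {
        workspace = nil
    }

    /// Called if the move ends before a new placement location is determined.
    func cancel() {
        if let saved = workspace {
            currentTransform = saved.previousAnchorTransform
        }
        workspace = nil
    }
}
```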
  • FIG. 6 illustrates an example in which the electronic device 105 displays virtual content at an explicit anchor location in an XR environment.
  • electronic device 105 may display a user interface (UI) of an application, such as application 202, running on the device, at an anchor location in a physical environment of the electronic device.
  • FIG. 6 illustrates an example in which a user interface window 604 (e.g., of application 202) is displayed by electronic device 105 to appear to be located at an anchor location 605 in an environment such as physical environment 600 of the electronic device 105.
  • UI window 604 may include one or more elements 606.
  • Elements 606 may include text entry fields, buttons, selectable tools, scrollbars, menus, drop-down menus, links, plugins, image viewers, media players, sliders, or the like.
  • UI window 604 is displayed in the viewable area 607 of the display of the electronic device 105 to appear, in an extended reality environment of electronic device 105, as if attached to a physical wall 601 in the physical environment 600.
  • a physical table 612 is also present in the physical environment 600.
  • the display of the UI window 604 to appear as though on the physical wall 601 can be achieved, in part, by receiving an explicit user-identification of the anchor location 605, and generating (e.g., by anchoring system 304) a transform between a coordinate origin 610 and the anchor location 605. In this way, if electronic device 105 is moved within the physical environment 600, the displayed UI window 604 appears to remain at the anchor location 605 on physical wall 601.
  • virtual content in the form of a UI window is displayed by the electronic device 105 anchored to a vertical plane (e.g., the physical wall 601).
  • other virtual content may be displayed, which can be anchored to other physical surfaces and/or other physical or virtual locations (e.g., floating locations that are separate from physical objects in the physical environment 600 and that can be fixed relative to the physical environment, and/or that can be anchored to appear at a static location in a device-centric coordinate system).
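  • As a brief sketch of why an anchor transform keeps content stationary relative to the physical environment: each frame, the content's pose in the device's view coordinates is recomputed from the fixed world-space anchor transform and the current device pose, so the rendered position changes on screen while the apparent world position does not. The variable names are assumptions.

```swift
import simd

// worldFromAnchor: fixed transform from the coordinate origin to the anchor
//                  location (e.g., anchor location 605 on the physical wall).
// worldFromDevice: current device pose, which changes as the user moves.
// The returned transform is what a renderer would use each frame; because
// worldFromAnchor is constant, the content appears fixed in the room.
func viewFromAnchor(worldFromAnchor: simd_float4x4,
                    worldFromDevice: simd_float4x4) -> simd_float4x4 {
    worldFromDevice.inverse * worldFromAnchor
}
```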
  • the placement system 302 and/or the anchoring system 304 may compute a virtual system anchor position that can be used to anchor application content.
  • a virtual system anchor position can be generated to anchor application content, such as a tool palette or other content that may be used at various locations in an XR environment, to a body-follow position that updates (e.g., to move with movements of the body of the user and/or the motion of the electronic device) per frame (e.g., per display frame of a display and/or per position/orientation measurement frame).
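  • One plausible way to realize such a per-frame body-follow position is to ease the content toward a target offset in front of the user on each frame; this smoothing approach is an assumption made for illustration, not the patent's method.

```swift
import simd

// Per-frame update for a body-follow anchor position: the content drifts toward
// a point a fixed distance in front of the user instead of snapping, so it
// appears to follow the user's body as the user and/or the device moves.
func updatedFollowPosition(current: simd_float3,
                           userPosition: simd_float3,
                           userForward: simd_float3,
                           offsetMeters: Float = 0.6,
                           smoothing: Float = 0.15) -> simd_float3 {
    let target = userPosition + simd_normalize(userForward) * offsetMeters
    return simd_mix(current, target, simd_float3(repeating: smoothing))
}
```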
  • FIG. 8 illustrates an example use case in which virtual object 806 is displayed on a horizontal plane corresponding to an unoccupied portion of the surface of the physical table 612.
  • the virtual object 806 may be a virtual game board (e.g., a virtual chess board, a virtual checkers board, or other virtual board game setup), a virtual keyboard, a virtual character (e.g., a virtual animal, person, or fantastical character), or any other virtual object.
  • a physical object 802 occupies a portion of the surface 800 of the physical table 612.
  • the electronic device 105 may detect the physical object 802 (e.g., using sensors 152 and/or camera(s) 150) and exclude the location of the physical object 802 from the available portion of the surface 800 for placement of the virtual object 806 (e.g., by providing the placement system 302 with information indicating only the available portion of the surface 800 and/or by providing the placement system 302 with an occupancy map of physical objects in the physical environment with which the placement system 302 can derive the available portion).
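  • A minimal 2D sketch of excluding occupied areas when choosing a placement location on a surface such as the tabletop: candidate footprints are scanned across the available surface and rejected if they overlap any detected physical object. The grid-scan strategy and all names are assumptions made for illustration.

```swift
import CoreGraphics

// Surface and object footprints are modeled as rectangles viewed from above,
// in meters. Returns the center of the first unoccupied spot that fits, or nil.
func placementLocation(for contentSize: CGSize,
                       on surface: CGRect,
                       avoiding occupied: [CGRect],
                       step: CGFloat = 0.05) -> CGPoint? {
    var y = surface.minY
    while y + contentSize.height <= surface.maxY {
        var x = surface.minX
        while x + contentSize.width <= surface.maxX {
            let candidate = CGRect(origin: CGPoint(x: x, y: y), size: contentSize)
            // Keep the candidate only if it does not overlap any physical object.
            if !occupied.contains(where: { $0.intersects(candidate) }) {
                return CGPoint(x: candidate.midX, y: candidate.midY)
            }
            x += step
        }
        y += step
    }
    return nil // no unoccupied portion of the surface is large enough
}
```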
  • the new placement location 1200 may be specified explicitly by the user (e.g., by releasing the pinch or grasp gesture at the new placement location 1200) and the anchor at the new placement location 1200 can be generated by the anchoring system 304 without involvement of the placement system 302.
  • the placement system 302 may determine the new placement location 1200 based on a virtual occupancy map of the XR environment (e.g., including the locations of the UI window 604 and the UI window 702 and/or other virtual content in the XR environment), based on the available portion of the physical wall 601 or other physical or virtual plane or space, and/or one or more placement preferences for the element 606.
  • the placement system 302 may also determine new placement locations for other virtual content such as the UI window 604 and the UI window 702 to accommodate the element 606 at the new placement location 1200 (e.g., as described in connection with the example of FIG. 11 in a case in which the element 606 is moved into an overlapping or otherwise conflicting location with the UI window 604 and/or the UI window 702).
  • FIG. 13 illustrates a flow diagram of an example process for object placement for extended reality according to aspects of the subject technology.
  • the blocks of process 1300 are described herein as occurring in serial, or linearly. However, multiple blocks of process 1300 may occur in parallel.
  • the blocks of process 1300 need not be performed in the order shown and/or one or more blocks of process 1300 need not be performed and/or can be replaced by other operations.
  • One or more implementations may include devices that function as both input and output devices, such as a touchscreen.
  • feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the bus 1410 also couples the computing device 1400 to one or more networks and/or to one or more network nodes through the one or more network interface(s) 1416.
  • the computing device 1400 can be a part of a network of computers (such as a LAN, a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of the computing device 1400 can be used in conjunction with the subject disclosure.
  • the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions.
  • the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer- readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
  • Instructions can be directly executable or can be used to develop executable instructions.
  • instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code.
  • instructions also can be realized as or can include data.
  • Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
  • a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation.
  • a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
  • phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology.
  • a disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations.
  • a disclosure relating to such phrase(s) may provide one or more examples.
  • a phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to certain aspects of the technology, the present disclosure provides a real-time system for positioning and/or arranging virtual content anchored to locations in a physical environment. The subject technology may include a placement system that facilitates the placement of application content relative to anchors, according to application and/or system preferences and/or requirements for displaying the content.
PCT/US2022/038952 2021-08-06 2022-07-29 Object placement for electronic devices WO2023014618A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22758334.1A EP4363950A1 (fr) 2021-08-06 2022-07-29 Object placement for electronic devices
CN202280054769.4A CN117795461A (zh) 2021-08-06 2022-07-29 Object placement for electronic devices

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163230666P 2021-08-06 2021-08-06
US63/230,666 2021-08-06
US17/827,651 2022-05-27
US17/827,651 US20230040610A1 (en) 2021-08-06 2022-05-27 Object placement for electronic devices

Publications (1)

Publication Number Publication Date
WO2023014618A1 true WO2023014618A1 (fr) 2023-02-09

Family

ID=83050073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/038952 WO2023014618A1 (fr) 2021-08-06 2022-07-29 Object placement for electronic devices

Country Status (1)

Country Link
WO (1) WO2023014618A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3495926A1 (fr) * 2017-12-05 2019-06-12 Samsung Electronics Co., Ltd. Procédé de limites de transition et interfaces sensibles à la distance en réalité augmentée et virtuelle et son dispositif électronique
US20190272674A1 (en) * 2018-03-01 2019-09-05 Dell Products L.P. Information handling system augmented reality through a virtual object anchor
US20200111256A1 (en) * 2018-10-08 2020-04-09 Microsoft Technology Licensing, Llc Real-world anchor in a virtual-reality environment
EP3896556A1 (fr) * 2020-04-17 2021-10-20 Apple Inc. Systèmes et procédés d'ancrage virtuel pour réalité étendue

Similar Documents

Publication Publication Date Title
US20230040610A1 (en) Object placement for electronic devices
US11861056B2 (en) Controlling representations of virtual objects in a computer-generated reality environment
US20230102820A1 (en) Parallel renderers for electronic devices
US20230092282A1 (en) Methods for moving objects in a three-dimensional environment
US20240211053A1 (en) Intention-based user interface control for electronic devices
WO2021211265A1 (fr) Continuité multi-dispositifs destinée à être utilisée avec des systèmes de réalité étendue
US20230221830A1 (en) User interface modes for three-dimensional display
US20240094819A1 (en) Devices, methods, and user interfaces for gesture-based interactions
US20230094658A1 (en) Protected access to rendering information for electronic devices
US20230343049A1 (en) Obstructed objects in a three-dimensional environment
US20230252737A1 (en) Devices, methods, and graphical user interfaces for interacting with virtual objects using hand gestures
WO2023014618A1 (fr) Object placement for electronic devices
US11972088B2 (en) Scene information access for electronic device applications
US20220244903A1 (en) Application casting
US20240004538A1 (en) Out-of-process hit-testing for electronic devices
US20240004678A1 (en) Out-of-process effects for electronic devices
US11361473B1 (en) Including a physical object based on context
US20230334808A1 (en) Methods for displaying, selecting and moving objects and containers in an environment
CN118235104A (zh) 用于电子设备的基于意图的用户界面控制
WO2023049216A1 (fr) Accès protégé à des informations de rendu pour dispositifs électroniques
US20240211091A1 (en) Application-free systems and methods
CN118265962A (zh) 用于电子设备应用程序的场景信息访问
US11656586B1 (en) Systems and methods for device configuration
US20230315385A1 (en) Methods for quick message response and dictation in a three-dimensional environment
US20230206572A1 (en) Methods for sharing content and interacting with physical devices in a three-dimensional environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22758334

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022758334

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 202280054769.4

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2022758334

Country of ref document: EP

Effective date: 20240130

NENP Non-entry into the national phase

Ref country code: DE