CN117041517A - Multi-device content relay based on source device location - Google Patents

Multi-device content relay based on source device location

Info

Publication number
CN117041517A
CN117041517A
Authority
CN
China
Prior art keywords
content item
indicator
location
physical environment
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310522315.1A
Other languages
Chinese (zh)
Inventor
A·Y·陈
A·W·罗本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 18/140,175 (published as US 2023/0368475 A1)
Application filed by Apple Inc
Publication of CN117041517A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/398 Synchronisation thereof; Control thereof

Abstract

The present disclosure relates to multi-device content relay based on source device location. Various implementations disclosed herein include devices, systems, and methods that facilitate the use of application content, such as text, images, video, and 3D models, in an XR environment. In some implementations, a first device (e.g., an HMD) provides an indicator corresponding to a currently/recently used content item on a second device (e.g., a mobile phone), where the indicator is located based on a location of the second device. For example, a user may access a website on their mobile phone and, when using their HMD, see a view with a depiction of their mobile phone and a nearby indicator (e.g., an affordance) for accessing that same website on the HMD. The affordance may be located based on (e.g., next to) the mobile phone such that its location provides an intuitive user experience or otherwise facilitates easy understanding of the use of the content item on another device.

Description

Multi-device content relay based on source device location
Technical Field
The present disclosure relates generally to electronic devices that provide content within an extended reality (XR) environment, including views that present content based on the use of that content on other devices within the environment.
Background
Existing extended reality (XR) systems may be improved with respect to providing means for a user to experience content items across multiple devices.
Disclosure of Invention
Various implementations disclosed herein include devices, systems, and methods that facilitate the use of content items, such as text, images, video, and 3D models, in an XR environment. In some implementations, a first device (e.g., an HMD) provides an indicator corresponding to a currently/recently used content item on a second device (e.g., a mobile phone), where the indicator is located based on a location of the second device. For example, a user may access a website on their mobile phone and, when using their HMD, see a view with a depiction of their mobile phone and a nearby indicator (e.g., an affordance) for accessing that same website on the HMD. The affordance may be located based on (e.g., next to) the mobile phone such that its location provides an intuitive user experience or otherwise facilitates easy understanding of the use of the content item on another device.
In some implementations, a processor performs the method by executing instructions stored on a computer-readable medium. The method acquires sensor data during use of the first device in a physical environment that includes the second device. The sensor data may include RGB camera data, depth sensor data, dense depth data, audio data, or various other types of sensor data captured by the first device to provide the user experience or to understand the physical environment, the devices therein, and the user.
The method identifies a location of the second device in the physical environment based on the sensor data. The location of the second device may correspond to a 3D (e.g., x, y, z coordinate) location in a world coordinate system, a location of the second device relative to the first device, a distance and direction of the second device from the first device, a 2D location of the second device within a captured image of the physical environment, a 3D location of the second device relative to a 3D model of the physical environment, or any other type of location data.
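As an illustration only, the following Swift sketch (with hypothetical type and function names that are not defined by this disclosure) shows one way a first device could derive a distance and direction to the second device from 3D world-coordinate estimates of the kind described above.

```swift
import Foundation

struct WorldPosition {
    var x: Double
    var y: Double
    var z: Double
}

struct RelativeLocation {
    let distance: Double          // meters between the two devices
    let direction: WorldPosition  // unit vector from the first device toward the second
}

func relativeLocation(of second: WorldPosition,
                      from first: WorldPosition) -> RelativeLocation {
    let dx = second.x - first.x
    let dy = second.y - first.y
    let dz = second.z - first.z
    let distance = (dx * dx + dy * dy + dz * dz).squareRoot()
    // Guard against coincident positions to avoid dividing by zero.
    guard distance > 0 else {
        return RelativeLocation(distance: 0,
                                direction: WorldPosition(x: 0, y: 0, z: 0))
    }
    return RelativeLocation(distance: distance,
                            direction: WorldPosition(x: dx / distance,
                                                     y: dy / distance,
                                                     z: dz / distance))
}

// Example: an HMD at head height locating a phone resting on a nearby table.
let hmd = WorldPosition(x: 0.0, y: 1.6, z: 0.0)
let phone = WorldPosition(x: 0.4, y: 0.9, z: -0.8)
print(relativeLocation(of: phone, from: hmd).distance)
```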
The method identifies a content item (e.g., a document, 3D model, web page, communication session instance, shared viewing session instance, etc.) used via the second device. For example, this may involve determining that the second device is currently using a particular content item within a particular application or that the second device is currently displaying a particular content item. In another example, this involves identifying a content item that was recently used by a particular application on, or recently displayed by, the second device.
The method provides a view of an extended reality (XR) environment based on the physical environment, wherein the view includes a depiction of the second device and an indicator corresponding to the content item, wherein the indicator is located based on the location of the second device. The positioning of the indicator may indicate that the second device is using or displaying, or recently used or displayed, the content item. The indicator may take various forms including, but not limited to, a notification, an affordance, a link, and the like. The indicator may be overlaid on passthrough video that includes the depiction of the second device. The indicator may be triggered or adjusted based on context. For example, opening a word processing program on the first device may trigger the display of an indicator of one or more word processing documents that were recently used on the second device. The indicator may be displayed only when the second device is unlocked or when both devices are accessed by one or more users associated with the same user/group account. Interaction with the indicator may trigger a relay (handoff) or casting of the content item from the second device to the first device, which may enable the user to continue using the content item on the first device.
According to some implementations, an apparatus includes one or more processors, non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing or causing performance of any of the methods described herein. According to some implementations, a non-transitory computer-readable storage medium has instructions stored therein, which when executed by one or more processors of a device, cause the device to perform or cause to perform any of the methods described herein. According to some implementations, an apparatus includes: one or more processors, non-transitory memory, and means for performing or causing performance of any one of the methods described herein.
Drawings
So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
Fig. 1A-1B illustrate a physical environment in which electronic devices are used, according to some implementations.
Fig. 2A, 2B, and 2C illustrate the display and use of example indications of content items in a view of an extended reality environment provided based on the physical environment of figs. 1A-1B, according to some implementations.
Fig. 3A, 3B, 3C, and 3D illustrate the display and use of another example indication of content items in a view of an extended reality environment provided based on the physical environment of figs. 1A-1B, according to some implementations.
Fig. 4 illustrates another physical environment in which an electronic device is used, in accordance with some implementations.
Fig. 5A and 5B illustrate the display and use of example indications of content items in a view of an extended reality environment provided based on the physical environment of fig. 4, according to some implementations.
Fig. 6 is a flow diagram illustrating a method for providing an indication of a content item for use via another device, according to some implementations.
Fig. 7 is a block diagram of an electronic device according to some implementations.
The various features shown in the drawings may not be drawn to scale according to common practice. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some figures may not depict all of the components of a given system, method, or apparatus. Finally, like reference numerals may be used to refer to like features throughout the specification and drawings.
Detailed Description
Numerous details are described to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings illustrate only some example aspects of the disclosure and therefore should not be considered limiting. It will be apparent to one of ordinary skill in the art that other effective aspects or variations do not include all of the specific details set forth herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in detail so as not to obscure the more pertinent aspects of the exemplary implementations described herein.
Fig. 1A and 1B illustrate a physical environment 100 in which exemplary electronic devices 110, 120 are used by a user 102. The physical environment 100 in this example is a room that includes a table 130. In fig. 1A, the second electronic device 120 is a tablet-type device executing a web browser application to display the content item 112 (e.g., a website related to the United States Constitution).
In fig. 1B, the user 102 is currently using the first electronic device 110 and has placed the second electronic device 120 on the upper surface of the table 130. The first electronic device 110 provides a view of the XR environment that includes a depiction of the second electronic device 120 and an indication corresponding to the content item 112 currently or previously used by the second electronic device 120. Example views are illustrated in figs. 2A-2C and figs. 3A-3D, as described below.
The first electronic device 110 includes one or more cameras, microphones, depth sensors, or other sensors that may be used to capture information about, and evaluate, the physical environment 100 and the objects therein, as well as to capture information about the user 102. Information about the physical environment 100 or the user 102 may be used to provide visual and audio content, or to identify current locations within the physical environment 100, such as the locations of the user and of objects (e.g., the second electronic device 120). In some implementations, a view of an extended reality (XR) environment may be provided. Such an XR environment may include a view of a 3D environment generated based on a camera image or depth camera image of the physical environment 100 and a representation of the user 102 based on a camera image or depth camera image of the user 102. Such an XR environment may include virtual content overlaid on a view of the physical environment 100 or positioned at a 3D location relative to a 3D coordinate system associated with the XR environment, which may correspond to the 3D coordinate system of the physical environment 100.
A physical environment refers to a physical world that people can sense or interact with without the assistance of an electronic system. The physical environment may include physical features, such as physical surfaces or physical objects. For example, a physical environment may correspond to a physical park that includes physical trees, physical buildings, and physical people. People can sense or interact with the physical environment directly, such as by sight, touch, hearing, taste, and smell. In contrast, an extended reality (XR) environment refers to a fully or partially simulated environment that people sense or interact with via electronic devices. For example, the XR environment may include Augmented Reality (AR) content, Mixed Reality (MR) content, Virtual Reality (VR) content, and the like. With an XR system, a subset of the physical movements of a person, or a representation thereof, is tracked, and in response one or more characteristics of one or more virtual objects simulated in the XR system are adjusted in a manner consistent with at least one law of physics. For example, an XR system may detect head rotation and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in a physical environment. As another example, the XR system may detect rotational or translational movement of an electronic device (e.g., a mobile phone, tablet, laptop, etc.) presenting the XR environment and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in the physical environment. In some cases (e.g., for accessibility reasons), the adjustment of one or more features of graphical content in the XR environment may be made in response to a representation of physical movement (e.g., a voice command).
There are many different types of electronic systems that enable a person to sense or interact with various XR environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), vehicle windshields integrated with display capabilities, windows integrated with display capabilities, displays formed as lenses designed for placement on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. The head-mounted system may have an integrated opaque display and one or more speakers. Alternatively, the head-mounted system may be configured to accept an external opaque display (e.g., a smart phone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment, or one or more microphones for capturing audio of the physical environment. The head-mounted system may have a transparent or translucent display instead of an opaque display. The transparent or translucent display may have a medium through which light representing an image is directed to the eyes of a person. The display may utilize digital light projection, OLED, LED, uLED, liquid crystal on silicon, laser scanning light sources, or any combination of these techniques. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In some implementations, the transparent or translucent display may be configured to selectively become opaque. Projection-based systems may employ retinal projection techniques that project a graphical image onto a person's retina. The projection system may also be configured to project the virtual object into the physical environment, for example as a hologram or on a physical surface.
Fig. 2A, 2B, and 2C illustrate the display and use of example indications of content items in views of an extended reality environment provided based on the physical environment 100 of figs. 1A-1B. Fig. 2A illustrates a view of an XR environment comprising a depiction of physical environment 100 (comprising depiction 230 of table 130 and depiction 220 of device 120). Such views may be provided by providing passthrough video images from an image capture device to a display device on device 110. Alternatively, such views may be provided by generating views using images or sensor data (e.g., images, depth sensor data, etc.) captured in the physical environment 100. In some implementations, a 3D representation of the physical environment 100 is generated and used to provide some or all of the views of the XR environment including the depiction of the physical environment 100. In still other examples, the view of the XR environment may include a view of physical environment 100 through a transparent or translucent display of device 110.
Fig. 2B illustrates virtual content added to an XR environment to provide an indication that a content item is or was being used on a device depicted in the view. In particular, the indication 240 is displayed to indicate that the content item 112 (FIG. 1A) (e.g., a website related to the United States Constitution) is or was being used on the device 120. The indication 240 is provided based on determining the location of the device 120 in the physical environment. The location may be determined in various ways. For example, the location may be determined using sensor data, e.g., via computer vision on captured image data, radio-based positioning (e.g., using ultra-wideband technology), or other sensor data analysis. In another example, the location may alternatively or additionally be determined based on timing of electronic communication signals (e.g., time-of-flight analysis).
In the example of fig. 2B, the determined location is used to display the indication 240, for example, by positioning the indication 240 in proximity to the depiction 220 of the device 120 in a view of the XR environment. The indication 240 may be within a threshold distance of the depiction 220 of the device 120. For example, the indication 240 may be within a fixed number of pixels of the depiction 220 of the device 120 in the 2D image that forms the view. In another example, the indication 240 may overlap (e.g., at least partially overlap) the depiction 220 of the device 120 in this image. In another example, the indication 240 may be assigned a 3D position in a 3D coordinate system in which the depiction 220 or the corresponding device 120 is positioned, within a threshold distance in that 3D coordinate system. The indication may be assigned a 3D location selected as the nearest available location that meets certain criteria, such as being on the nearest surface, or being the nearest location that does not obscure other users or other content of a predetermined type/nature, etc. In some implementations, the 3D location is automatically selected based on user-specified criteria or context information. The indication 240 may be provided at a fixed or anchored location, for example, such that the indication 240 moves when the depiction 220 or the corresponding device 120 moves. In other examples, the indication 240 points to, surrounds, or otherwise graphically indicates a relationship with the depiction 220 of the device 120.
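As a non-limiting illustration, the following Swift sketch (hypothetical names and thresholds, not an API defined by this disclosure) shows one way a 3D anchor for an indication could be chosen so that it stays within a threshold distance of the device while skipping candidate locations that would obscure other content.

```swift
import Foundation

struct Point3 {
    var x: Double
    var y: Double
    var z: Double
}

func distance(_ a: Point3, _ b: Point3) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z
    return (dx * dx + dy * dy + dz * dz).squareRoot()
}

struct CandidateSpot {
    let position: Point3
    let occludesOtherContent: Bool   // e.g., would cover another user or item of a predetermined type
}

/// Returns the nearest candidate position that stays within `maxOffset`
/// meters of the device and does not occlude other content, if any exists.
func indicatorAnchor(devicePosition: Point3,
                     candidates: [CandidateSpot],
                     maxOffset: Double = 0.25) -> Point3? {
    let viable = candidates.filter {
        !$0.occludesOtherContent && distance($0.position, devicePosition) <= maxOffset
    }
    let best = viable.min {
        distance($0.position, devicePosition) < distance($1.position, devicePosition)
    }
    return best?.position
}

// Example: two candidate spots on the table surface next to the depicted phone.
let phonePosition = Point3(x: 0.4, y: 0.9, z: -0.8)
let spots = [
    CandidateSpot(position: Point3(x: 0.55, y: 0.9, z: -0.8), occludesOtherContent: false),
    CandidateSpot(position: Point3(x: 0.45, y: 0.9, z: -0.8), occludesOtherContent: true)
]
if let anchor = indicatorAnchor(devicePosition: phonePosition, candidates: spots) {
    print("anchor indicator at (\(anchor.x), \(anchor.y), \(anchor.z))")
}
```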
The indication 240 may provide an indication of the type of content item (e.g., web page, 3D model, word processing document, spreadsheet, etc.), the application used to create and edit the content item (e.g., word processing program brand X, etc.), the actual content of the content item (e.g., identifying that the website is related to the United States Constitution), or other useful/descriptive information about the content item. In some implementations, the indication is displayed only if one or more criteria are met (e.g., the first device is capable of presenting the content item, the second device is unlocked, the content item is not affected by a relay constraint, etc.).
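The following Swift sketch illustrates, with assumed criteria names that are not part of this disclosure, the kind of gating such display criteria could implement.

```swift
import Foundation

struct IndicatorCriteria {
    var firstDeviceCanPresentContentType: Bool  // e.g., an app for this content type is available
    var secondDeviceUnlocked: Bool
    var sameUserAccount: Bool                   // both devices on one user/group account
    var handoffRestricted: Bool                 // a relay constraint applies to this content item
}

func shouldDisplayIndicator(_ criteria: IndicatorCriteria) -> Bool {
    return criteria.firstDeviceCanPresentContentType
        && criteria.secondDeviceUnlocked
        && criteria.sameUserAccount
        && !criteria.handoffRestricted
}

// Example: the indicator is suppressed while the second device is locked.
print(shouldDisplayIndicator(IndicatorCriteria(firstDeviceCanPresentContentType: true,
                                               secondDeviceUnlocked: false,
                                               sameUserAccount: true,
                                               handoffRestricted: false)))  // false
```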
The indication 240 may be an interactable user interface element. For example, user input (e.g., hand gestures, gaze input, etc.) may be used to select or activate the indication 240 to cause an action or response. For example, as illustrated in fig. 2C, activation of the indication 240 may be used to trigger a relay of the content item from the device 120 to the device 110. This may involve the device 120 identifying the content item to the device 110, sending a copy of the content item to the device 110, sending a link or other source information from which the device 110 may access the content item, or otherwise exchanging information between the device 110 and the device 120 via an electronic communication channel or via information that can be detected by a device sensor, such that the first device 110 is able to provide the content item. In this example, the first device 110 launches a web browser application having a user interface 265 that displays a web page within the XR environment that the first device 110 is providing; the user interface 265 displays a content item 260, which is another instance of the content item 112 that has been used on the second device 120 (e.g., a website related to the United States Constitution).
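As an illustration of the kinds of relay payloads described above, the following Swift sketch uses hypothetical types (and a purely illustrative URL) that are not defined by this disclosure: the source device may identify the item, send a copy, or send a link that the receiving device resolves itself.

```swift
import Foundation

enum HandoffPayload {
    case identifier(String)   // the source device merely identifies the item
    case copy(Data)           // a serialized copy of the content item itself
    case link(URL)            // a source from which the receiving device can fetch the item
}

struct HandoffRequest {
    let contentType: String   // e.g., "web page", "3D model", "document"
    let payload: HandoffPayload
}

// Example: the phone hands a web page off to the HMD as a link.
let request = HandoffRequest(
    contentType: "web page",
    payload: .link(URL(string: "https://example.com/constitution")!)
)
print(request.contentType)
```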
Fig. 3A, 3B, 3C, and 3D illustrate the display and use of another example indication of content items in a view of an XR environment provided based on the physical environment of figs. 1A-1B. Fig. 3A illustrates a view of an XR environment comprising a depiction of physical environment 100 (comprising depiction 330 of table 130 and depiction 320 of device 120). Such views may be provided by providing passthrough video images from an image capture device to a display device on device 110. Alternatively, such views may be provided by generating views using images or sensor data (e.g., images, depth sensor data, etc.) captured in the physical environment 100. In some implementations, a 3D representation of the physical environment 100 is generated and used to provide some or all of the views of the XR environment including the depiction of the physical environment 100. In still other examples, the view of the XR environment may include a view of physical environment 100 through a transparent or translucent display of device 110.
Fig. 3B illustrates a user interface 345 provided in an XR environment by a web browser application executing on electronic device 110. The user interface 345 displays a content item 350, which in this example is from a website related to the First Amendment.
In fig. 3C, the electronic device 110 determines to provide the graphical indicator 340 based on the context that a web browser application is executing in the XR environment and is currently in use. The graphical indicator 340 provides an indication that the content item is or was being used on the device depicted in the view. In particular, indication 340 is displayed to indicate that content item 112 (FIG. 1A) (e.g., a website related to the United States Constitution) is or was being used on device 120. The indication 340 is provided based on determining the location of the device 120 in the physical environment. In this example, the location is used to display the indication 340, for example, by positioning the indication 340 in proximity to the depiction 320 of the device 120 in a view of the XR environment. In this example, indication 340 is displayed only if one or more context criteria are met (e.g., an application capable of rendering content item 112 is currently being used in the XR environment).
The indication 340 may be an interactable user interface element. For example, user input (e.g., hand gestures, gaze input, etc.) may be used to select or activate the indication 340 to cause an action or response. For example, as illustrated in fig. 3D, activation of the indication 340 may be used to trigger a relay of the content item from the device 120 to the device 110. In this example, the first device 110 automatically navigates to display the content item 360, which is another instance of the content item 112 (e.g., a website related to the United States Constitution), and thus navigates away from the previously displayed content item 350 (e.g., a website related to the First Amendment).
In the examples of figs. 2A-2C and 3A-3D, the indicator is used to identify the content item currently or most recently used on the second device 120. In some implementations, criteria are used to select the one or more content items previously used on the second device for which an indication is provided. For example, content items that are not the most recently used on the second device 120 may be selected based on certain criteria, e.g., because such content items are more relevant to the user's current context and thus take priority over more recently used content items. A focus state or other usage state may be used to identify a particular context in which the devices 110, 120 are used, such as personal, school, work, sleep, meditation, exercise/activity, etc. While in a given focus state, the first device 110 may display only indicators of content items used on the second device 120 that are also associated with that focus state, and thus exclude indications corresponding to more recently used content items associated with, for example, the personal state.
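The following Swift sketch illustrates, using assumed state names and fields that are not part of this disclosure, how candidate content items could be filtered by the current focus state and ordered by recency.

```swift
import Foundation

enum FocusState {
    case personal, school, work, sleep, meditation, activity
}

struct CandidateContentItem {
    let title: String
    let lastUsed: Date
    let associatedState: FocusState
}

/// Keeps only candidates associated with the current focus state and orders
/// them most-recently-used first.
func itemsToIndicate(from candidates: [CandidateContentItem],
                     currentState: FocusState) -> [CandidateContentItem] {
    return candidates
        .filter { $0.associatedState == currentState }
        .sorted { $0.lastUsed > $1.lastUsed }
}
```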
Fig. 4 illustrates another physical environment 400 in which a user 402 uses electronic devices 410, 420, 430. The physical environment 400 in this example is a room that includes furniture, a television content device 420, and a television 430. The television content device 420 provides a content item 432 (e.g., movie A) that is displayed on the television 430. The user 402 is holding a first electronic device 410 that provides a view of an XR environment comprising depictions of the television content device 420 and the television 430 and indications corresponding to the content item 432 currently or previously used by those devices. Example views are illustrated in figs. 5A-5B, as described below.
The first electronic device 410 includes one or more cameras, microphones, depth sensors, or other sensors that may be used to capture information about, and evaluate, the physical environment 400 and the objects therein, as well as to capture information about the user 402. Information about the physical environment 400 or the user 402 may be used to provide visual and audio content, or to identify current locations within the physical environment 400, such as the locations of the user and of objects (e.g., the television content device 420 and the television 430). In some implementations, a view of an extended reality (XR) environment may be provided. Such an XR environment may include a view of a 3D environment generated based on camera images or depth camera images of the physical environment 400 and a representation of the user 402 based on camera images or depth camera images of the user 402. Such an XR environment may include virtual content overlaid on a view of the physical environment 400 or positioned at a 3D location relative to a 3D coordinate system associated with the XR environment, which may correspond to the 3D coordinate system of the physical environment 400.
Fig. 5A and 5B illustrate the display and use of example indications of content items in a view of an XR environment provided based on physical environment 400 of fig. 4. Fig. 5A shows a view of an XR environment comprising a depiction of physical environment 400, including a depiction 520 of television content device 420 and a depiction 530 of television 430. Such views may be provided by providing passthrough video images from an image capture device to a display device on device 410. Alternatively, such views may be provided by generating views using images or sensor data (e.g., images, depth sensor data, etc.) captured in the physical environment 400. In some implementations, a 3D representation of the physical environment 400 is generated and used to provide some or all of the views of the XR environment including the depiction of the physical environment 400. In still other examples, the view of the XR environment may include a view of physical environment 400 through a transparent or translucent display of device 410.
Fig. 5A illustrates virtual content added to an XR environment to provide an indication that a content item is or was being used on a device depicted in the view. In particular, indication 540 is displayed to indicate that content item 432 (FIG. 4) (e.g., movie A) is or was being used by television content device 420 and television 430. The indication 540 is provided based on determining the locations of the television content device 420 and the television 430 in the physical environment 400 or their corresponding depictions 520, 530 in the view. In this example, one or both of the positions are used to display the indication 540, for example by positioning the indication 540 near both depictions 520, 530 in the view.
The indication 540 may be an interactable user interface element. For example, user input (e.g., touch input, mouse input, gaze input, etc.) may be used to select or activate the indication 540 to cause an action or response. For example, as illustrated in fig. 5B, activation of the indication 540 may be used to trigger a relay of the content item from the device 420 to the device 410. This may involve device 420 identifying the content item to device 410, sending a copy of the content item to device 410, sending a link or other source information from which device 410 may access the content item, or otherwise exchanging information between device 410 and device 420 via an electronic communication channel or via information that can be detected by a device sensor, such that the first device 410 is able to provide the content item. In this example, the first device 410 launches a movie player application having a user interface 565 that displays a content item 560, which is another instance of the content item 432 (e.g., movie A), on a virtual movie screen within the XR environment that the first device 410 is providing.
In some implementations, content items are cast from one device to another. For example, the device 420 may execute a player application with casting capabilities and provide the cast content to the device 410, which presents the content within a casting user interface.
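As a minimal illustration of such casting, the following Swift sketch (hypothetical names, not an API defined by this disclosure) has the source device send rendered frames to a receiver that simply presents whatever it is handed.

```swift
import Foundation

struct CastFrame {
    let timestamp: TimeInterval
    let imageData: Data   // an encoded frame rendered by the source device
}

protocol CastReceiver {
    func present(_ frame: CastFrame)
}

final class CastSession {
    private let receiver: CastReceiver

    init(receiver: CastReceiver) {
        self.receiver = receiver
    }

    /// Called by the source device for each frame it renders; the receiving
    /// device only presents what it is handed and never opens the content
    /// source itself.
    func send(_ frame: CastFrame) {
        receiver.present(frame)
    }
}
```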
Fig. 6 is a flow chart illustrating a method 600 for providing an indication of a content item for use via another device. In some implementations, a device, such as electronic device 110 or device 410, performs method 600. In some implementations, the method 600 is performed on a mobile device, desktop computer, notebook computer, HMD, or server device. Method 600 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 600 is performed on a processor executing code stored in a non-transitory computer readable medium (e.g., memory).
At block 602, the method 600 obtains sensor data during use of a first device in a physical environment including a second device. For example, the sensor data may include RGB, lidar-based depth, dense depth, audio, and the like.
At block 604, the method 600 identifies a location of the second device in the physical environment based on the sensor data. As an example, this may involve identifying an x, y, z position in a world coordinate system, a position relative to the first device, and so on.
At block 606, the method 600 identifies content items (e.g., documents, 3D models, web pages, communication sessions, shared content viewing sessions, etc.) for use via the second device.
At block 608, the method 600 provides a view of an extended reality (XR) environment based on the physical environment, wherein the view includes a depiction of the second device and an indicator corresponding to the content item, wherein the indicator is located based on the location of the second device. The location of the indicator may indicate that the second device is the source of the content item. The indicator may be a notification, an affordance, a link, or the like. The indicator may be triggered/adjusted based on context, e.g., opening a word processing application on the first device may trigger display of an indicator of a word processing document that was recently used on the second device. The indicator may be displayed only if the second device is unlocked, the devices are associated with the same account, etc. Interaction with the indicator may trigger a relay or casting of the content item from the second device to the first device, which may be configured so that the first device does not need to separately log in to a network source to access the content item.
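For illustration only, the following Swift sketch strings the steps of blocks 602-608 together under assumed type and function names; the stubs stand in for the operations described above and are not APIs defined by this disclosure.

```swift
import Foundation

struct SensorData {}
struct DeviceLocation {}
struct ContentItem { let title: String }
struct XRViewDescription { let indicatorTitle: String }

func acquireSensorData() -> SensorData { SensorData() }                                 // block 602
func locateSecondDevice(from data: SensorData) -> DeviceLocation { DeviceLocation() }   // block 604
func identifyContentItem() -> ContentItem { ContentItem(title: "Example web page") }    // block 606
func provideView(near location: DeviceLocation,
                 indicating item: ContentItem) -> XRViewDescription {                   // block 608
    XRViewDescription(indicatorTitle: item.title)
}

let sensorData = acquireSensorData()
let location = locateSecondDevice(from: sensorData)
let item = identifyContentItem()
let view = provideView(near: location, indicating: item)
print(view.indicatorTitle)
```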
The method 600 may determine to provide an indicator of the content item used on the second device based on determining that the content item is currently being used on the second device or is the most recently accessed content item on the second device. The method may determine to provide the indicator based on determining that the second device is currently unlocked or has been locked for less than a threshold amount of time. The method 600 may determine to provide an indicator based on determining that the first device and the second device are currently accessed using the same user account. The method 600 may determine to provide an indicator based on user input accessing an application corresponding to a type of content item on the first device.
The method 600 may receive an input corresponding to the indicator and initiate a relay of the content item from the second device to the first device based on the input corresponding to the indicator. The method 600 may receive an input corresponding to the indicator and initiate a casting of the content item from the second device to the first device based on the input corresponding to the indicator.
In some implementations, the first device accesses the content item from a content source using login credentials and casts the content item to the second device, while the second device presents the content item without using the login credentials to access the content source.
In some implementations, the method 600 may receive an input corresponding to the indicator and, based on the input corresponding to the indicator: obtain a representation of the content item from the second device; and display the content item based on the representation of the content item.
In some implementations of the method 600, the representation of the content item includes: the content item; a link to the content item; or a visual representation of the content item.
In some implementations, in method 600, the representation of the content item includes a visual representation of the content item that is generated by the second device by accessing the content item from the content source using the login credentials, and the visual representation of the content item is received from the second device without the first device accessing the content item from the content source using the login credentials.
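The following Swift sketch illustrates, under assumed names that are not part of this disclosure, the credential arrangement described above: only the source device holds the login credentials, and the receiving device is given a rendered representation rather than access to the content source.

```swift
import Foundation

struct LoginCredentials {
    let account: String
    let token: String
}

struct VisualRepresentation {
    let imageData: Data   // rendered pixels, not the underlying content item
}

/// Runs on the second (source) device, which is signed in to the content
/// source; the fetch-and-render step is stubbed so the sketch stays
/// self-contained.
func renderRepresentation(of contentURL: URL,
                          using credentials: LoginCredentials) -> VisualRepresentation {
    _ = (contentURL, credentials)
    return VisualRepresentation(imageData: Data())
}

/// Runs on the first (receiving) device, which never holds the credentials
/// and never contacts the content source directly.
func present(_ representation: VisualRepresentation) {
    print("presenting \(representation.imageData.count) bytes of rendered content")
}

// Example flow: render on the source device, then hand the pixels across.
let rendered = renderRepresentation(of: URL(string: "https://example.com/item")!,
                                    using: LoginCredentials(account: "user", token: "example-token"))
present(rendered)
```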
Fig. 7 is a block diagram of an electronic device 700. Device 700 illustrates an exemplary device configuration of electronic device 110 or electronic device 410. While certain specific features are shown, those of ordinary skill in the art will appreciate from the disclosure that various other features are not shown for brevity and so as not to obscure more pertinent aspects of the implementations disclosed herein. To this end, as a non-limiting example, in some implementations, the device 700 includes one or more processing units 702 (e.g., microprocessors, ASIC, FPGA, GPU, CPU, processing cores, and the like), one or more input/output (I/O) devices and sensors 706, one or more communication interfaces 708 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, SPI, I2C, or similar types of interfaces), one or more programming (e.g., I/O) interfaces 710, one or more output devices 712, one or more inwardly or outwardly facing image sensor systems 714, a memory 720, and one or more communication buses 704 for interconnecting these components and various other components.
In some implementations, the one or more communication buses 704 include circuitry that interconnects the system components and controls communication between the system components. In some implementations, the one or more I/O devices and sensors 706 include at least one of: an Inertial Measurement Unit (IMU), accelerometer, magnetometer, gyroscope, thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptic engine, or one or more depth sensors (e.g., structured light, time of flight, etc.), or the like.
In some implementations, the one or more output devices 712 include one or more displays configured to present a view of the 3D environment to a user. In some implementations, the one or more displays 712 correspond to holographic, digital Light Processing (DLP), liquid Crystal Displays (LCD), liquid crystal on silicon (LCoS), organic light emitting field effect transistors (OLET), organic Light Emitting Diodes (OLED), surface conduction electron emitter displays (SED), field Emission Displays (FED), quantum dot light emitting diodes (QD-LED), microelectromechanical systems (MEMS), or similar display types. In some implementations, one or more displays correspond to a diffractive, reflective, polarizing, holographic, or the like waveguide display. In one example, device 700 includes a single display. In another example, the device 700 includes a display for each eye of the user.
In some implementations, the one or more output devices 712 include one or more audio generating devices. In some implementations, the one or more output devices 712 include one or more speakers, surround sound speakers, speaker arrays, or headphones for producing spatialized sound (e.g., 3D audio effects). Such devices may virtually place sound sources in a 3D environment, including behind, above, or below one or more listeners. Generating the spatialized sound may involve transforming the sound waves (e.g., using head-related transfer functions (HRTFs), reverberation, or cancellation techniques) to simulate natural sound waves (including reflections from walls and floors) emanating from one or more points in the 3D environment. The spatialized sound may entice the listener's brain to interpret the sound as if it were occurring at one or more points in the 3D environment (e.g., from one or more particular sound sources), even though the actual sound may be produced by speakers in other locations. The one or more output devices 712 may additionally or alternatively be configured to generate haptic sensations.
In some implementations, the one or more image sensor systems 714 are configured to obtain image data corresponding to at least a portion of the physical environment. For example, the one or more image sensor systems 714 may include one or more RGB cameras (e.g., with Complementary Metal Oxide Semiconductor (CMOS) image sensors or Charge Coupled Device (CCD) image sensors), monochrome cameras, IR cameras, depth cameras, event based cameras, or the like. In various implementations, the one or more image sensor systems 714 further include an illumination source, such as a flash, that emits light. In various implementations, the one or more image sensor systems 714 further include an on-camera Image Signal Processor (ISP) configured to perform a plurality of processing operations on the image data.
Memory 720 includes high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices. In some implementations, the memory 720 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 720 optionally includes one or more storage devices remotely located from the one or more processing units 702. Memory 720 includes non-transitory computer-readable storage media.
In some implementations, memory 720 or a non-transitory computer-readable storage medium of memory 720 stores an optional operating system 730 and one or more instruction sets 740. Operating system 730 includes processes for handling various basic system services and for performing hardware-related tasks. In some implementations, the instruction set 740 includes executable software defined by binary information stored in the form of a charge. In some implementations, the instruction set 740 is software that is executable by the one or more processing units 702 to implement one or more of the techniques described herein.
Instruction set 740 includes an application instruction set 742 configured, when executed, to anchor or provide a user interface for one or more content applications within an XR environment as described herein. The instruction set 740 further includes a relay/casting instruction set 744 configured, when executed, to provide an indication of content items used by other devices or to facilitate relay/casting of content items between devices as described herein. The instruction set 740 may be embodied as a single software executable or as a plurality of software executable files.
While the instruction set 740 is shown as residing on a single device, it should be understood that in other implementations, any combination of elements may be located in separate computing devices. Furthermore, the drawings are intended to serve as functional descriptions of various features that may be present in particular implementations, as opposed to structural schematic illustrations of the implementations described herein. As will be appreciated by one of ordinary skill in the art, the individually displayed items may be combined and some items may be separated. The actual number of instruction sets, and how features are distributed among them, will vary depending upon the particular implementation, and may depend in part on the particular combination of hardware, software, or firmware selected for a particular implementation.
It should be understood that the implementations described above are cited by way of example, and that the present disclosure is not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and subcombinations of the various features described hereinabove as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
As described above, one aspect of the present technology is to collect and use sensor data, which may include user data, to improve the user experience of an electronic device. The present disclosure contemplates that in some cases, the collected data may include personal information data that uniquely identifies a particular person or that may be used to identify an interest, characteristic, or predisposition of a particular person. Such personal information data may include athletic data, physiological data, demographic data, location-based data, telephone numbers, email addresses, home addresses, device characteristics of personal devices, or any other personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, personal information data may be used to improve the content viewing experience. Thus, the use of such personal information data may enable planned control of the electronic device. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user.
The present disclosure also contemplates that the entity responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information or physiological data will adhere to established privacy policies or privacy practices. In particular, such entities should exercise and adhere to privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining the privacy and security of personal information data. For example, personal information from a user should be collected for legal and legitimate uses of an entity and not shared or sold outside of those legal uses. In addition, such collection should be done only after the user's informed consent. In addition, such entities should take any required steps to secure and protect access to such personal information data and to ensure that other people who are able to access the personal information data adhere to their privacy policies and procedures. In addition, such entities may subject themselves to third party evaluations to prove compliance with widely accepted privacy policies and practices.
Regardless of the foregoing, the present disclosure also contemplates implementations in which a user selectively prevents use of or access to personal information data. That is, the present disclosure contemplates that hardware elements or software elements may be provided to prevent or block access to such personal information data. For example, with respect to content delivery services customized for a user, the techniques of the present invention may be configured to allow the user to choose to "opt in" or "opt out" of participation in the collection of personal information data during registration for the service. In another example, the user may choose not to provide personal information data for the targeted content delivery service. In yet another example, the user may choose not to provide personal information, but allow anonymous information to be transmitted for improved functionality of the device.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments may be implemented without accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable by the lack of all or a portion of such personal information data. For example, content may be selected and delivered to the user by inferring preferences or settings based on non-personal information data or an absolute minimum amount of personal information, such as content requested by a device associated with the user, other non-personal information available to the content delivery service, or publicly available information.
In some embodiments, the data is stored using a public/private key system that only allows the owner of the data to decrypt the stored data. In some other implementations, the data may be stored anonymously (e.g., without identifying or personal information about the user, such as legal name, user name, time and location data, etc.). Thus, other users, hackers, or third parties cannot determine the identity of the user associated with the stored data. In some implementations, a user may access its stored data from a user device other than the user device used to upload the stored data. In these cases, the user may need to provide login credentials to access their stored data.
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, it will be understood by those skilled in the art that the claimed subject matter may be practiced without these specific details. In other instances, methods, devices, or systems known by those of ordinary skill have not been described in detail so as not to obscure the claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," and "identifying" or the like refer to the actions and processes of a computing device, such as one or more computers or similar electronic computing devices, that manipulate or transform data represented as physical, electronic, or magnetic quantities within the computing platform's memories, registers, or other information storage devices, transmission devices, or display devices.
The one or more systems discussed herein are not limited to any particular hardware architecture or configuration. The computing device may include any suitable arrangement of components that provide results conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems that access stored software that programs or configures the computing system from a general-purpose computing device to a special-purpose computing device that implements one or more implementations of the subject invention. The teachings contained herein may be implemented in software for programming or configuring a computing device using any suitable programming, scripting, or other type of language or combination of languages.
Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the above examples may be varied, e.g., the blocks may be reordered, combined, or divided into sub-blocks. Some blocks or processes may be performed in parallel.
The use of "adapted" or "configured to" herein is meant to be an open and inclusive language that does not exclude devices adapted or configured to perform additional tasks or steps. In addition, the use of "based on" is intended to be open and inclusive in that a process, step, calculation, or other action "based on" one or more of the stated conditions or values may be based on additional conditions or beyond the stated values in practice. Headings, lists, and numbers included herein are for ease of explanation only and are not intended to be limiting.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of this specification and the appended claims, the singular forms "a," "an," and "the" are intended to cover the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.
As used herein, the term "if" may be interpreted to mean "when" the stated prerequisite is true, or "in response to determining," "upon determining," or "in response to detecting" that the stated prerequisite is true, depending on the context. Similarly, the phrase "if it is determined [that the stated prerequisite is true]," "if [the stated prerequisite is true]," or "when [the stated prerequisite is true]" may be interpreted to mean "upon determining," "in response to determining," "upon detecting," or "in response to detecting" that the stated prerequisite is true, depending on the context.
The foregoing description and summary of the invention should be understood to be in every respect illustrative and exemplary, but not limiting, and the scope of the invention disclosed herein is to be determined not by the detailed description of illustrative implementations, but by the full breadth permitted by the patent laws. It is to be understood that the specific implementations shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.

Claims (20)

1. A method, comprising:
at a first device having a processor:
acquiring sensor data during use of the first device in a physical environment including a second device;
identifying a location of the second device in the physical environment based on the sensor data;
identifying content items for use via the second device; and
providing a view of an extended reality (XR) environment based on the physical environment, wherein the view includes a depiction of the second device and an indicator corresponding to the content item, wherein the indicator is located based on the location of the second device.
2. The method of claim 1, wherein the indicator is positioned at a location defined relative to the location of the second device.
3. The method of claim 1, wherein the indicator is positioned within a predetermined distance from the depiction of the second device.
4. The method of claim 1, wherein the indicator is overlaid on a passthrough video of the physical environment.
5. The method of claim 1, further comprising: determining to provide the indicator based on determining that the content item is currently being used on the second device.
6. The method of claim 1, further comprising: determining to provide the indicator based on determining that the second device is currently unlocked or has been locked for less than a threshold amount of time.
7. The method of claim 1, further comprising: determining to provide the indicator based on determining that the same user account is currently being used to access the first device and the second device.
8. The method of claim 1, further comprising: determining to provide the indicator based on user input on the first device accessing an application corresponding to a type of the content item.
9. The method of claim 1, further comprising:
receiving an input corresponding to the indicator; and
based on the input corresponding to the indicator:
obtaining a representation of the content item from the second device; and
displaying the content item based on the representation of the content item.
10. The method of claim 9, wherein the representation of the content item comprises:
the content item;
a link to the content item; or alternatively
A visual representation of the content item.
11. The method according to claim 10, wherein:
the representation of the content item includes the visual representation of the content item;
the visual representation of the content item is generated by the second device by accessing the content item from a content source using login credentials; and
the visual representation of the content item is received from the second device without the first device using the login credentials to access the content item from the content source.
12. The method of claim 1, wherein the content item comprises a document, a 3D model, a web page, a communication session instance, or a shared viewing experience session.
13. The method of claim 1, wherein the indicator comprises a notification, an affordance, or a link.
14. A system, comprising:
a non-transitory computer readable storage medium; and
one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium includes program instructions that, when executed on the one or more processors, cause the system to perform operations comprising:
acquiring sensor data during use of the first device in a physical environment including a second device;
identifying a location of the second device in the physical environment based on the sensor data;
identifying content items for use via the second device; and
providing a view of an extended reality (XR) environment based on the physical environment, wherein the view includes a depiction of the second device and an indicator corresponding to the content item, wherein the indicator is located based on the location of the second device.
15. The system of claim 14, wherein the indicator is positioned at a location defined relative to the location of the second device.
16. The system of claim 14, wherein the indicator is positioned within a predetermined distance from the depiction of the second device.
17. The system of claim 14, wherein the indicator is overlaid on a passthrough video of the physical environment.
18. The system of claim 14, wherein the operations further comprise: determining to provide the indicator based on determining that the content item is currently being used on the second device.
19. The system of claim 14, wherein the operations further comprise: determining to provide the indicator based on determining that the second device is currently unlocked or has been locked for less than a threshold amount of time.
20. A non-transitory computer-readable storage medium storing program instructions that are executable via one or more processors to perform operations comprising:
acquiring sensor data during use of the first device in a physical environment including a second device;
identifying a location of the second device in the physical environment based on the sensor data;
identifying content items for use via the second device; and
providing a view of an extended reality (XR) environment based on the physical environment, wherein the view includes a depiction of the second device and an indicator corresponding to the content item, wherein the indicator is located based on the location of the second device.
CN202310522315.1A 2022-05-10 2023-05-10 Multi-device content relay based on source device location Pending CN117041517A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/340,060 2022-05-10
US18/140,175 2023-04-27
US18/140,175 US20230368475A1 (en) 2022-05-10 2023-04-27 Multi-Device Content Handoff Based on Source Device Position

Publications (1)

Publication Number Publication Date
CN117041517A 2023-11-10

Family

ID=88637853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310522315.1A Pending CN117041517A (en) 2022-05-10 2023-05-10 Multi-device content relay based on source device location

Country Status (1)

Country Link
CN (1) CN117041517A (en)

Similar Documents

Publication Publication Date Title
JP7200063B2 (en) Detection and display of mixed 2D/3D content
US20200233681A1 (en) Computer-generated reality platform
JP2020532796A (en) Eye-based user interaction
KR20200035344A (en) Localization for mobile devices
US20220253136A1 (en) Methods for presenting and sharing content in an environment
US11308686B1 (en) Captured image data in a computer-generated reality environment
US11861056B2 (en) Controlling representations of virtual objects in a computer-generated reality environment
US20210074014A1 (en) Positional synchronization of virtual and physical cameras
US20230343049A1 (en) Obstructed objects in a three-dimensional environment
US20230081605A1 (en) Digital assistant for moving and copying graphical elements
US20230102820A1 (en) Parallel renderers for electronic devices
CN117795461A (en) Object placement for electronic devices
CN117041517A (en) Multi-device content relay based on source device location
US20230368475A1 (en) Multi-Device Content Handoff Based on Source Device Position
US20230206572A1 (en) Methods for sharing content and interacting with physical devices in a three-dimensional environment
US11361473B1 (en) Including a physical object based on context
US20240103705A1 (en) Convergence During 3D Gesture-Based User Interface Element Movement
US20240104871A1 (en) User interfaces for capturing media and manipulating virtual objects
US11816759B1 (en) Split applications in a multi-user communication session
US20230315385A1 (en) Methods for quick message response and dictation in a three-dimensional environment
US20240103614A1 (en) Devices, methods, for interacting with graphical user interfaces
CN117762244A (en) Fusion during movement of user interface elements based on 3D gestures
WO2023043877A1 (en) Digital assistant for moving and copying graphical elements
CN117957512A (en) Digital assistant for moving and copying graphic elements
WO2024064350A1 (en) User interfaces for capturing stereoscopic media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination