CN117170602A - Electronic device for displaying virtual object

Info

Publication number
CN117170602A
Authority
CN
China
Prior art keywords
depth
virtual object
apparent
apparent depth
electronic device
Prior art date
Legal status
Pending
Application number
CN202310643206.5A
Other languages
Chinese (zh)
Inventor
P·R·简森杜斯雷斯
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US 18/295,353 (published as US 2023/0396752 A1)
Application filed by Apple Inc
Publication of CN117170602A

Abstract

The present disclosure provides an electronic device that displays virtual objects. An electronic device may include one or more sensors and one or more displays. The display may be configured to display various types of virtual objects. The electronic device may receive a request to display a virtual object of a first type having a defined location relative to a location corresponding to the electronic device or a user of the electronic device. In response to the request to display the virtual object of the first type, the electronic device may determine a depth of the object aligned with a target display direction of the virtual object; determining an apparent depth to display the virtual object based at least on the depth of the object; and displaying the virtual object at the apparent depth via the one or more displays.

Description

Electronic device for displaying virtual object
The present application claims priority from U.S. patent application Ser. No. 18/295,353, filed in April 2023, and U.S. provisional patent application Ser. No. 63/348,897, filed in June 2022, both of which are hereby incorporated by reference in their entireties.
Technical Field
The present disclosure relates generally to electronic devices and, more particularly, to electronic devices having displays that display virtual objects.
Background
Some electronic devices include a display that presents an image near the user's eyes. For example, an augmented reality headset may include a display with optics that allow a user to view images from the display.
Devices such as these can be challenging to design. If care is not taken, viewing images from the display may not be as comfortable as the user desires.
Disclosure of Invention
The electronic device may include one or more sensors, one or more displays, one or more processors, and a memory storing instructions configured to be executed by the one or more processors. The instructions may include instructions for: receiving a request for displaying a virtual object; and in accordance with a determination that the virtual object is a virtual object of a first type having a defined position relative to a position corresponding to the electronic device or a user of the electronic device, determining, via the one or more sensors, a depth of the physical object; determining an apparent depth to display the virtual object based at least on the depth of the physical object; and displaying the virtual object at an apparent depth via the one or more displays.
Drawings
FIG. 1 is a diagram of an exemplary system with a display according to some embodiments.
Fig. 2A-2C are top views of an augmented reality (XR) environment including a head-mounted device, a physical object, and a world-locked virtual object, according to some embodiments.
Fig. 3A is a depiction of a user view of the XR environment of fig. 2A, according to some embodiments.
Fig. 3B is a depiction of a user view of the XR environment of fig. 2B, according to some embodiments.
Fig. 3C is a depiction of a user view of the XR environment of fig. 2C, according to some embodiments.
Fig. 4A-4C are top views of an XR environment including a head mounted device, a physical object, and a head locked virtual object at a fixed distance from the head mounted device, according to some embodiments.
Fig. 5A is a depiction of a user view of the XR environment of fig. 4A, according to some embodiments.
Fig. 5B is a depiction of a user view of the XR environment of fig. 4B, according to some embodiments.
Fig. 5C is a depiction of a user view of the XR environment of fig. 4C, according to some embodiments.
Fig. 6A-6C are top views of an XR environment including a head mounted device, a physical object, and head locked virtual objects at different distances from the head mounted device, according to some embodiments.
Fig. 7 is a top view of an XR environment with a virtual object at an apparent depth equal to a maximum allowable apparent depth, according to some embodiments.
Fig. 8 is a top view of an XR environment with a virtual object at an apparent depth equal to a minimum allowable apparent depth, according to some embodiments.
Fig. 9 is a flowchart of an exemplary method performed by an electronic device, according to some embodiments.
Detailed Description
The head-mounted device may display different types of augmented reality content for the user. The head-mounted device may display virtual objects perceived at apparent depths within the physical environment of the user. Virtual objects may sometimes be displayed at a fixed location relative to the physical environment of the user. For example, consider an example in which the physical environment of the user includes a table. The virtual object may be displayed for the user such that the virtual object appears to be resting on a table. When a user moves their head and otherwise interacts with the XR environment, the virtual object remains at the same fixed location on the table (e.g., as if the virtual object were another physical object in the XR environment). This type of content may be referred to as world-locked content (because the positioning of the virtual object is fixed relative to the physical environment of the user).
Other virtual objects may be displayed at defined locations relative to the head-mounted device or a user of the head-mounted device. First, consider an example of a virtual object displayed at a defined location relative to the head-mounted device. As the head-mounted device moves (e.g., as the user's head rotates), the virtual object remains in a fixed position relative to the head-mounted device. For example, the virtual object may be displayed at a particular distance in front of and in the center of the head-mounted device (e.g., in the center of the field of view of the device or user). As the user moves their head to the left and right, their view of the physical environment changes accordingly. However, as the user moves their head, the virtual object may remain fixed at the particular distance in the center of the device's or user's field of view (assuming the gaze direction remains constant). This type of content may be referred to as head-locked content. Head-locked content is fixed at a given position relative to the head-mounted device (and therefore relative to the head of the user, which supports the head-mounted device). Head-locked content may not be adjusted based on the gaze direction of the user. In other words, if the user's head position remains constant and their gaze is directed away from the head-locked content, the head-locked content will remain at the same apparent position.
Second, consider an example of a virtual object displayed at a defined location relative to a portion of the user of the head-mounted device (e.g., relative to the user's torso). This type of content may be referred to as body-locked content. For example, the virtual object may be displayed in front of and to the left of the user's body (e.g., at a location defined by a distance and an angular offset from the forward direction of the user's torso), regardless of which direction the user's head is facing. If the user's body is facing a first direction, the virtual object will be displayed in front of and to the left of the user's body. While facing the first direction, the virtual object may remain at the same fixed position relative to the user's body in the XR environment even as the user rotates their head left and right (to look toward and away from the virtual object). However, the virtual object may move within the field of view of the device or user in response to the user rotating their head. If the user turns around so that their body faces a second direction opposite the first direction, the virtual object will be repositioned within the XR environment so that it is still displayed in front of and to the left of the user's body. While facing the second direction, the virtual object may remain at the same fixed position relative to the user's body in the XR environment even as the user rotates their head left and right (to look toward and away from the virtual object).
In the foregoing example, the body lock content is displayed at a fixed position/orientation relative to the user's body even as the user's body rotates. For example, the virtual object may be displayed at a fixed distance in front of the user's body. If the user is facing north, the virtual object is at a fixed distance in front of the user's body (north). If the user rotates and faces south, the virtual object is at a fixed distance in front of the user's body (south).
Alternatively, the distance offset between the body-locked content and the user may be fixed relative to the user, while the orientation of the body-locked content may remain fixed relative to the physical environment. For example, when the user faces north, the virtual object may be displayed in front of the user's body at a fixed distance from the user. If the user rotates to face south, the virtual object remains at the fixed distance from the user's body, to the north of the user's body.
Body-locked content may also be configured to always remain gravity- or horizon-aligned, such that changes in head and/or body roll orientation will not cause the body-locked content to move within the XR environment. Translational movement may cause the body-locked content to be repositioned within the XR environment to maintain the fixed distance from the user. Subsequent references to body-locked content may include both types of body-locked content described above.
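The distinction between head-locked and body-locked placement can be illustrated with a short sketch. The following Swift code is a simplified, hypothetical illustration (the pose and offset types are assumptions and not part of this disclosure): the same local offset is re-resolved each frame against whichever anchor the content is locked to.

```swift
import Foundation

/// Simplified 2D pose on the floor plane: position plus a heading (yaw) in radians.
struct Pose2D {
    var x: Double
    var z: Double
    var yaw: Double
}

/// Offset of a virtual object in an anchor's local frame: a distance in front of
/// the anchor and an angular offset from the anchor's forward direction.
struct LocalOffset {
    var distance: Double
    var angle: Double
}

/// Resolve the world-space position of an anchored virtual object.
/// Head-locked content passes the head-mounted device pose as the anchor;
/// body-locked content passes the user's torso pose instead.
func worldPosition(of offset: LocalOffset, anchoredTo anchor: Pose2D) -> (x: Double, z: Double) {
    let direction = anchor.yaw + offset.angle
    return (x: anchor.x + offset.distance * cos(direction),
            z: anchor.z + offset.distance * sin(direction))
}

// World-locked content, by contrast, stores a world-space position directly and is
// never re-resolved against a device or body pose.
```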
To improve user comfort in certain scenarios, head lock and/or body lock content may be displayed at apparent depths that match depths of objects (e.g., virtual objects or physical objects) in an XR environment. The head-mounted device may include one or more sensors that determine the depth of a physical object in the physical environment. Based at least on the depth of the object, the headset may determine an apparent depth of the virtual object and display the virtual object at the apparent depth. The apparent depth of the virtual object may be repeatedly updated to continuously match the depth of the object in the XR environment.
The system 10 of fig. 1 may be a head mounted device having one or more displays. The display in system 10 may include a display 20 (sometimes referred to as a near-eye display) mounted within a support structure (housing) 8. The support structure 8 may have the shape of a pair of eyeglasses or goggles (e.g., a support frame), may form an outer shell having the shape of a helmet, or may have other configurations for helping mount and secure the components of the near-eye display 20 on or near the user's head. Near-eye display 20 may include one or more display modules, such as display module 20A, and one or more optical systems, such as optical system 20B. The display module 20A may be mounted in a support structure such as support structure 8. Each display module 20A may emit light 38 (image light) that is redirected toward the user's eye at the eyebox 24 using an associated one of the optical systems 20B.
Control circuitry 16 may be used to control the operation of system 10. Control circuitry 16 may be configured to perform operations in system 10 using hardware (e.g., dedicated hardware or circuitry), firmware, and/or software. Software code and other data for performing operations in the system 10 are stored on a non-transitory computer readable storage medium (e.g., a tangible computer readable storage medium) in the control circuit 16. Software code may sometimes be referred to as software, data, program instructions, or code. The non-transitory computer-readable storage medium (sometimes commonly referred to as memory) may include non-volatile memory such as non-volatile random access memory (NVRAM), one or more hard disk drives (e.g., magnetic disk drives or solid state drives), one or more removable flash drives, or other removable media, and the like. Software stored on the non-transitory computer readable storage medium may be executed on the processing circuitry of the control circuit 16. The processing circuitry may include an application specific integrated circuit with processing circuitry, one or more microprocessors, digital signal processors, graphics processing units, central Processing Units (CPUs), or other processing circuitry.
The system 10 may include input-output circuitry such as an input-output device 12. The input-output device 12 may be used to allow data to be received by the system 10 from an external apparatus (e.g., a tethered computer, a portable apparatus (such as a handheld or laptop computer), or other electrical apparatus) and to allow a user to provide user input to the headset 10. The input-output device 12 may also be used to gather information about the environment in which the system 10 (e.g., the head-mounted device 10) is operating. Output components in the apparatus 12 may allow the system 10 to provide output to a user and may be used to communicate with external electronic devices. The input-output device 12 may include sensors and other components 18 (e.g., image sensors for capturing images of real world objects, optionally digitally combined with virtual objects on a display in the system 10, accelerometers, depth sensors, light sensors, haptic output devices, speakers, batteries, wireless communication circuitry for communicating between the system 10 and external electronics, etc.).
The display module 20A may be a liquid crystal display, an organic light emitting diode display, a laser-based display, or other type of display. The optical system 20B may form a lens that allows an observer (see, e.g., the eye of the observer at the eyebox 24) to view an image on the display 20. There may be two optical systems 20B associated with the respective left and right eyes of the user (e.g., for forming left and right lenses). A single display 20 may produce images for both eyes or a pair of displays 20 may be used to display images. In configurations with multiple displays (e.g., left-eye and right-eye displays), the focal length and positioning of the lens formed by system 20B may be selected such that any gaps that exist between the displays will not be visible to the user (e.g., such that the images of the left and right displays overlap or merge seamlessly).
If desired, optical system 20B may include components (e.g., an optical combiner, etc.) to allow real-world image light from real-world image or object 28 to be optically combined with a virtual (computer-generated) image, such as a virtual image in image light 38. In this type of system, a user of system 10 may view both real-world content and computer-generated content overlaid on top of the real-world content. A camera-based system may also be used in the device 10 (e.g., an arrangement in which a camera captures a real world image of the object 28 and digitally combines that content with virtual content at the optical system 20B).
If desired, the system 10 may include wireless circuitry and/or other circuitry to support communication with a computer or other external device (e.g., a computer that provides image content to the display 20). During operation, control circuitry 16 may provide image content to display 20. The content may be received remotely (e.g., from a computer or other content source coupled to the system 10) and/or may be generated by the control circuitry 16 (e.g., text, other computer-generated content, etc.). The content provided by control circuitry 16 to display 20 may be viewed by a viewer at eyebox 24.
Fig. 2A to 2C and 3A to 3C show examples of world-locked content. Fig. 2A-2C are top views of an XR environment comprising headset 10, physical object 42-1, and physical object 42-2. Fig. 3A is a depiction of a user view of the XR environment of fig. 2A when wearing headset 10. Fig. 3B is a depiction of a user view of the XR environment of fig. 2B, while wearing headset 10. Fig. 3C is a depiction of a user view of the XR environment of fig. 2C when wearing headset 10. The dashed line 48 in fig. 2A-2C shows the field of view of the user wearing the headset 10.
In FIG. 2A, the field of view of the user operating the headset 10 includes the physical object 42-1 but does not include the physical object 42-2. Thus, physical object 42-1 is visible in the view of FIG. 3A, while physical object 42-2 is not. FIG. 2A also shows the apparent location 44 of the world-locked virtual object displayed in the XR environment. The apparent location 44 overlaps the physical object 42-1. As shown in the corresponding view of fig. 3A, virtual object 46 appears to be positioned on the upper surface of physical object 42-1. In fig. 3A-3C, virtual object 46 is depicted as a three-dimensional object (e.g., a simulated physical object such as a cube). This example is merely illustrative. The virtual object 46 in fig. 3A-3C may alternatively be a two-dimensional object (e.g., text).
Since the virtual objects in fig. 2A-2C and 3A-3C are world-locked virtual objects, the virtual objects remain in a fixed position relative to the physical objects 42-1 and 42-2 as the user moves their head. In fig. 2B, the user has rotated their head to the right, causing the positioning of the headset 10 to change. Both physical objects 42-1 and 42-2 are now in the field of view of the user. Thus, both physical objects 42-1 and 42-2 are visible in the view of FIG. 3B. The apparent location 44 of the virtual object remains the same relative to the physical objects 42-1 and 42-2 (overlapping the physical object 42-1). As shown in the corresponding view of fig. 3B, virtual object 46 thus still appears to be positioned on the upper surface of physical object 42-1 (even though the user's head has rotated).
In fig. 2C, the user has rotated their head even further to the right. At this location, physical object 42-2 is visible in the view of FIG. 3C, while physical object 42-1 is not. Thus, the virtual object 46 is no longer in view and is no longer displayed.
Fig. 4A to 4C and fig. 5A to 5C show examples of the head lock content. Fig. 4A-4C are top views of an XR environment comprising headset 10, physical object 42-1, and physical object 42-2. Fig. 5A is a depiction of a user view of the XR environment of fig. 4A, wearing headset 10. Fig. 5B is a depiction of a user view of the XR environment of fig. 4B, while wearing headset 10. Fig. 5C is a depiction of a user view of the XR environment of fig. 4C when wearing headset 10. The dashed line 48 in fig. 4A-4C shows the field of view of the user wearing the headset 10.
As shown in FIG. 4A, the field of view of the user operating the headset 10 includes the physical object 42-1 but does not include the physical object 42-2. Thus, physical object 42-1 is visible in the view of FIG. 5A, while physical object 42-2 is not. Fig. 5A also shows apparent location 44 of the head-locked virtual object displayed in the XR environment. In fig. 5A to 5C, the virtual object 46 is depicted as a two-dimensional object. This example is merely illustrative. The virtual object 46 in fig. 5A-5C may alternatively be a three-dimensional object.
In fig. 4A, apparent location 44 is located at a depth 50 and an angle 54 relative to device 10 (e.g., the front surface of device 10). In particular, depth 50 and angle 54 may be measured relative to a reference point 56 on head mounted device 10. The reference point 56 may be aligned with the center of the headset 10 or any other desired portion of the headset 10. In fig. 4A, the angle 54 is 90 degrees such that the apparent position 44 of the virtual object is set directly in front of the head-mounted device 10. Thus, as shown in the corresponding view of FIG. 5A, virtual object 46 appears in the center of the user's field of view (at a depth closer than physical object 42-1).
In fig. 4A-4C, depth 50 and angle 54 are fixed. Since the virtual objects in fig. 4A-4C and 5A-5C are head-locked virtual objects, the virtual objects remain in a fixed position relative to the head-mounted device 10 as the user moves their head. In fig. 4B, the user has rotated their head to the right so that the posture of the head-mounted device 10 changes. Thus, both physical objects 42-1 and 42-2 are now in the field of view of the user. Thus, both physical objects 42-1 and 42-2 are visible in the view of FIG. 5B. However, the apparent location 44 of the virtual object remains the same relative to the headset 10 (e.g., in the center of the user's field of view as shown in fig. 5B). The magnitudes of the depth 50 and angle 54 of the apparent location 44 in fig. 4B are the same as in fig. 4A.
In fig. 4C, the user has rotated their head further to the right so that the posture of the head mounted device 10 changes. Thus, physical object 42-1 is no longer visible, while physical object 42-2 remains visible. The physical object 42-2 is thus visible in the view of fig. 5C. However, the apparent location 44 of the virtual object remains the same relative to the headset 10 (e.g., in the center of the user's field of view as shown in fig. 5C). The magnitudes of the depth 50 and angle 54 of the apparent location 44 in fig. 4C are the same as in fig. 4A and 4B.
In fig. 4A-4C, the depth 50 of the apparent location 44 is fixed. It should be noted that depth here refers to the lateral separation between the headset and a location in the XR environment (e.g., the location of a physical object or virtual object). In addition to this lateral component (depth), the spacing between the head-mounted device and an object may also have a vertical component. The total distance between the head-mounted device and the object may be a function of the lateral spacing (depth) and the vertical spacing. Trigonometry may be used to characterize the relationship between the total distance, the lateral spacing, and the vertical spacing, where the total distance defines the hypotenuse of a right triangle and the lateral spacing and the vertical spacing define the remaining two sides of the right triangle. Thus, any one of the total distance, the lateral spacing, and the vertical spacing may be calculated from known values of the remaining two.
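Written out as an equation (the symbols below are illustrative labels for the quantities described in the preceding paragraph, not reference numerals from the figures):

```latex
d_{\mathrm{total}}^{2} = d_{\mathrm{lateral}}^{2} + d_{\mathrm{vertical}}^{2}
\quad\Longrightarrow\quad
d_{\mathrm{lateral}} = \sqrt{d_{\mathrm{total}}^{2} - d_{\mathrm{vertical}}^{2}}
```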
In fig. 4A-4C, the virtual object 46 remains at the same apparent position and has the same apparent depth relative to the headset as the user rotates their head. As shown in FIG. 4A, the apparent depth 50 is less than the depth 52-1 of the physical object 42-1. As shown in FIG. 4C, the apparent depth 50 is less than the depth 52-2 of the physical object 42-2. Such a mismatch in depth between the virtual object and the physical object may result in double vision of the virtual object or the physical object. If an observer focuses their eyes on a focal plane that includes the virtual object, the physical object will be out of focus for the observer (resulting in double vision of the physical object). If the observer focuses their eyes on a focal plane that includes the physical object, the virtual object will be out of focus for the observer (resulting in double vision of the virtual object).
To improve user comfort in viewing the head-locked content, the head-locked content may have an apparent depth that matches the depth of the physical object aligned with the head-locked content. Fig. 6A-6C are top views of an XR environment comprising headset 10, physical object 42-1, and physical object 42-2. Fig. 6A-6C illustrate examples of head lock content having an apparent depth that varies based on a distance between a head mounted device and a physical object in an XR environment. The dashed line 48 in fig. 6A-6C shows the field of view of the user wearing the headset 10.
The sensor 18 (see fig. 1) within the head mounted device 10 may determine the depth of the physical object aligned with the direction of the apparent location 44. For example, in FIG. 6A, the sensor may determine the depth 52-1 of the physical object 42-1. The physical object 42-1 may be the nearest physical object aligned with the angle 54 of the apparent position 44. In other words, line 58 is extrapolated from reference point 56 in a given direction indicated by angle 54. The first physical object hit by line 58 is aligned with the display direction of the virtual object. The sensor 18 in the head mounted device 10 may determine the depth of the nearest physical object based on the known display orientation of the virtual object. In FIG. 6A, the sensor determines the depth of the physical object 42-1. The apparent location 44 of the virtual object is then set to have an apparent depth 50 that matches the depth 52-1 of the physical object 42-1.
The example in which the apparent depth 50 is set to match the depth 52-1 of the physical object 42-1 is merely illustrative. The apparent depth may have a default depth (such as a maximum allowed depth) that is adjusted if the apparent depth is greater than the distance to the closest physical object.
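As a hedged sketch of the behavior described above (the function and parameter names are assumptions used only for illustration), the apparent depth can start from a default such as the maximum allowed depth and be pulled closer only when an object aligned with the display direction is nearer than that default:

```swift
/// Choose an apparent depth for head-locked or body-locked content given the depth of
/// the nearest object hit along the content's display direction (if any).
func apparentDepth(nearestAlignedObjectDepth: Double?, defaultDepth: Double) -> Double {
    guard let objectDepth = nearestAlignedObjectDepth else {
        // Nothing aligned with the display direction: keep the default depth.
        return defaultDepth
    }
    // Pull the content closer only when the default would place it behind the object.
    return min(defaultDepth, objectDepth)
}
```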
In addition, apparent depth 50 may be set to match the depth of other virtual objects in the XR environment. For example, a world-locked virtual object may exist at an apparent depth in an XR environment. The apparent depth 50 may sometimes be set to have an apparent depth that matches the apparent depth of the world-locked virtual object. In this case, no sensor is needed to determine the depth of the world-locked virtual object (because the object is displayed by the head-mounted device and thus the depth of the world-locked virtual object is known by the head-mounted device). However, the adjustment of the apparent depth of the head-locked (or body-locked) virtual object when aligned with another virtual object may be otherwise the same as when aligned with a physical object.
The user view of the XR environment of FIG. 6A when headset 10 is worn may be similar to the view shown in FIG. 5A, with the apparent depth of virtual object 46 matching the depth of physical object 42-1.
The angle 54 characterizing the apparent position 44 of the virtual object remains fixed as the user moves their head. However, the depth 50 characterizing the apparent location 44 of the virtual object may be updated based on the depth of the nearest physical object aligned with the virtual object.
In fig. 6B, the user has rotated their head to the right so that the posture of the head-mounted device 10 changes. Thus, both physical objects 42-1 and 42-2 are now in the field of view of the user. However, neither physical object 42-1 nor physical object 42-2 is aligned with apparent position 44. In this case, the apparent depth 50 of the virtual object may remain unchanged (e.g., remain the same as in fig. 6A) or may revert to a predetermined magnitude (e.g., maximum comfort apparent depth). The user view of the XR environment of fig. 6B while wearing headset 10 may be similar to the view shown in fig. 5B.
In fig. 6C, the user has rotated their head further to the right so that the posture of the head mounted device 10 changes. Thus, physical object 42-1 is no longer visible, while physical object 42-2 remains visible. The sensor 18 in the head mounted device 10 may determine that the physical object 42-2 is the nearest physical object based on the known display orientation of the virtual object. The sensor determines the depth of the physical object 42-2. The apparent position 44 of the virtual object is then set to have an apparent depth 50 that matches the depth 52-2 of the physical object 42-2. The depth 52-2 of the physical object 42-2 is less than the depth 52-1 of the physical object 42-1. By adjusting the depth 50 in fig. 6C to match the depth 52-2, a mismatch between depths is avoided.
In some cases, the depth of the nearest physical (or virtual) object may be used to adjust the apparent depth of the virtual object even when there is no overlap between the virtual object and that nearest physical (or virtual) object. For example, if the nearest physical (or virtual) object is within a threshold distance or angle of the virtual object, the apparent depth of the virtual object may be set equal to the depth of the nearest physical (or virtual) object. This may enable an observer to comfortably look back and forth between the virtual object and the nearby object.
The user view of the XR environment of FIG. 6C when headset 10 is worn may be similar to the view shown in FIG. 5C, with the apparent depth of virtual object 46 matching the depth of physical object 42-2.
By adjusting the depth of the head-locked content to match the depth of the physical object in the XR environment, a user may have improved comfort in viewing the head-locked content.
The head lock content is explicitly described in conjunction with fig. 4A to 4C, fig. 5A to 5C, and fig. 6A to 6C. However, it should be understood that these same descriptions also apply to body lock content. For body lock content, the apparent position (with apparent depth adjusted to match the depth of the nearest physical object) may be adjusted in a similar manner as for head lock content. However, the apparent location of the body lock content is defined relative to a reference point on the user's body rather than on the head-mounted device itself.
One or more sensors 18 in the headset 10 may be configured to determine the depth of a physical object in the XR environment. The one or more sensors 18 may include an image sensor having an array of imaging pixels (e.g., sensing red, blue, and green visible light) configured to capture an image of the user's physical environment. A machine learning algorithm may be applied to images captured from the image sensor to determine the depth of various physical objects in the physical environment. As another example, the one or more sensors may include a gaze detection sensor. The gaze detection sensor may be capable of determining a degree of convergence of the eyes of the user. A high convergence may indicate physical objects near the user (e.g., short depths), while a low convergence may indicate physical objects far from the user (e.g., far depths). The convergence determined by the gaze detection sensor may thus be used to estimate the depth of the physical object in the physical environment. As another example, the one or more sensors may include a stereo camera (having two or more lenses and an image sensor for capturing three-dimensional images). As yet another example, the one or more sensors may include a depth sensor. The depth sensor may be a pixelated depth sensor (e.g., configured to measure multiple depths across the physical environment) or a point sensor (configured to measure a single depth in the physical environment). When a point sensor is used, the point sensor may be aligned with a known display orientation of the head-locked content in the headset 10. For example, in fig. 6A-6C, the head lock content is always aligned with line 58 at angle 54 relative to the head mounted device 10. The point sensor may thus measure depth along line 58 to determine the depth of the nearest physical object aligned with the virtual object. The depth sensor (whether a pixelated depth sensor or a point sensor) may use phase detection (e.g., phase detection autofocus pixels) or light detection and ranging (LIDAR). Any subset of these types of sensors may be used in combination to determine the depth of a physical object in a physical environment.
The optical system 20B in the head-mounted device 10 may have an associated target viewing zone for virtual objects displayed using the head-mounted device. When the virtual object is displayed within the target viewing zone, the user may view the virtual object with acceptable comfort. Fig. 7 and 8 are top views of an exemplary XR environment showing target viewing zone 60. As shown in fig. 7 and 8, the target viewing zone has a maximum comfortable apparent depth 62 and a minimum comfortable apparent depth 64. In some cases, the head mounted device 10 may display only virtual objects within the target viewing zone 60. Thus, the maximum comfortable apparent depth 62 may sometimes be referred to as the maximum allowed apparent depth 62, and the minimum comfortable apparent depth 64 may sometimes be referred to as the minimum allowed apparent depth 64.
In the example of fig. 7, the physical object 42 in the XR environment has a depth greater than the maximum allowable apparent depth 62. In this case, the apparent location 44 of the head-locked virtual object or body-locked virtual object in the XR environment may be set to have a depth equal to the maximum allowable apparent depth 62. In other examples, the apparent location 44 of the head-locked virtual object or body-locked virtual object in the XR environment may be set to a depth equal to the maximum allowed apparent depth 62, the minimum allowed apparent depth 64, or any value therebetween. In the example of fig. 8, the physical object 42 in the XR environment has a depth less than the minimum allowable apparent depth 64. In this case, the apparent location 44 of the head-locked virtual object or body-locked virtual object in the XR environment may be set to have a depth equal to the minimum allowable apparent depth 64. In other examples, the apparent location 44 of the head-locked virtual object or body-locked virtual object in the XR environment may be set to a depth equal to the maximum allowed apparent depth 62, the minimum allowed apparent depth 64, or any value therebetween.
In the examples of fig. 7 and 8, when the physical object aligned with the virtual object has a depth outside the target viewing zone, the virtual object is anchored to the minimum allowed apparent depth or the maximum allowed apparent depth of the target viewing zone 60. This example is merely illustrative. Alternatively or additionally, the virtual object may be modified (adjusted) when the physical object aligned with the virtual object has a depth outside the target viewing zone. Consider the example of fig. 8. Displaying the virtual object at apparent location 44 may cause viewer discomfort (because the virtual object will appear to be "inside" of physical object 42). To alleviate viewer discomfort in this type of scene, the virtual object may be adjusted. Possible adjustments include changing the size of the virtual object (e.g., making the virtual object smaller), changing the opacity of the virtual object (e.g., fading out the virtual object), displaying a warning or other indication of discomfort (e.g., red dots, warning text, etc.) in place of or in addition to the virtual object, applying a visual effect (e.g., blurring, feathering, masking, etc.) to the edges of or around the virtual object.
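A minimal sketch of the clamping and flagging behavior described in connection with fig. 7 and 8 (the type and parameter names are illustrative assumptions):

```swift
/// Clamp a measured object depth into the target viewing zone and report whether
/// the object actually fell outside the zone (which may trigger shrinking, fading,
/// a warning, or an edge effect as described above).
struct DepthDecision {
    var apparentDepth: Double
    var objectOutsideZone: Bool
}

func resolveApparentDepth(objectDepth: Double,
                          minAllowedDepth: Double,
                          maxAllowedDepth: Double) -> DepthDecision {
    let clamped = min(max(objectDepth, minAllowedDepth), maxAllowedDepth)
    return DepthDecision(apparentDepth: clamped,
                         objectOutsideZone: clamped != objectDepth)
}
```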
As shown and discussed in connection with fig. 6A-6C, the apparent depth of the head-locked virtual object or body-locked virtual object may be adjusted to match the depth of the object aligned with the virtual object. This results in real-time changes in apparent depth of the virtual object as the user rotates their head and aligns the virtual content with objects at different depths. To ensure a seamless experience for the user when viewing the virtual content at different depths, the size of the virtual content on the display 20 in the head mounted device 10 may be updated such that the apparent size of the virtual content (e.g., the size of the object in screen space or the amount of screen occupied by the object) when viewed by the user remains constant.
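Under a small-angle assumption, keeping the apparent (screen-space) size constant amounts to scaling the rendered size linearly with apparent depth. The sketch below is illustrative, not the disclosed implementation:

```swift
/// Scale a virtual object's rendered (world-space) size so that its angular size,
/// and therefore its apparent size on the display, stays constant when the apparent
/// depth changes from `oldDepth` to `newDepth`.
func sizePreservingApparentSize(originalSize: Double,
                                oldDepth: Double,
                                newDepth: Double) -> Double {
    // Angular size is approximately size / depth, so keep that ratio constant.
    return originalSize * (newDepth / oldDepth)
}
```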
In addition to maintaining a constant apparent size with different apparent depths, the alignment of the images used to display the virtual object may be adjusted with different apparent depths. The first and second displays may display first and second images that are viewed by the first and second eyes of the user to perceive the virtual object. At closer apparent depths, the first image and the second image may be separated by a smaller distance than at further apparent depths.
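One conventional way to reason about this separation, offered here only as an assumed geometric sketch and not necessarily the method used by the device: with the pupils separated by an interpupillary distance and both displays forming virtual image planes at a fixed distance, the horizontal separation between the left-eye and right-eye renderings of a point grows as the point's apparent depth increases.

```swift
/// Approximate on-plane separation between the left-eye and right-eye images of a
/// point placed at `apparentDepth` on the midline, when the displays' virtual image
/// planes sit at `imagePlaneDepth` and the eyes are `ipd` apart. All names assumed.
func stereoImageSeparation(ipd: Double,
                           imagePlaneDepth: Double,
                           apparentDepth: Double) -> Double {
    // Tends toward `ipd` as apparentDepth grows; shrinks (and becomes crossed,
    // i.e. negative) for apparent depths closer than the image plane.
    return ipd * (1.0 - imagePlaneDepth / apparentDepth)
}
```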
The changing of the apparent depth and/or the aligning of the images for displaying the virtual object may be performed gradually across the transition period. The transition period may simulate the performance of the human eye and create a more natural viewing experience for the user. The transition period may have a duration of at least 5 milliseconds, at least 50 milliseconds, at least 100 milliseconds, at least 200 milliseconds, at least 300 milliseconds, at least 500 milliseconds, less than 1 second, less than 300 milliseconds, between 200 and 400 milliseconds, between 250 and 350 milliseconds, between 50 and 1 second, etc. The alignment and/or apparent depth of the virtual object may change gradually throughout the transition period.
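A sketch of such a gradual transition, assuming a 300-millisecond duration and a smoothstep easing (the particular easing curve is an assumption; the text above only requires that the change be gradual):

```swift
import Foundation

/// Interpolated apparent depth at `elapsed` seconds into a transition from `start`
/// to `end` lasting `duration` seconds.
func transitionedDepth(from start: Double,
                       to end: Double,
                       elapsed: TimeInterval,
                       duration: TimeInterval = 0.3) -> Double {
    let t = max(0.0, min(1.0, elapsed / duration))
    let eased = t * t * (3.0 - 2.0 * t)   // smoothstep easing
    return start + (end - start) * eased
}
```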
Fig. 9 is a flowchart illustrating an exemplary method performed by an electronic device (e.g., control circuit 16 in device 10). The blocks of fig. 9 may be stored as instructions in a memory of the electronic device 10, where the instructions are configured to be executed by one or more processors in the electronic device.
At block 102, the control circuitry may receive a request to display a virtual object. The virtual object may be a two-dimensional virtual object or a three-dimensional virtual object. The virtual object may be a world-locked virtual object (where the location of the virtual object is fixed relative to the physical environment of the user), a head-locked virtual object (where the virtual object remains in a fixed location relative to the head-mounted device when the head-mounted device is moved), or a body-locked virtual object (where the virtual object remains in a fixed location relative to a portion of the user of the head-mounted device). As a particular example, the virtual object may be a notification to the user that includes text. This type of virtual object may be a head-locked virtual object. The virtual object may alternatively be a simulation of a physical object such as a cube. This type of virtual object may be a world-locked virtual object. At block 104, the control circuitry may determine whether the virtual object is a first type of virtual object having a location defined relative to a location corresponding to the electronic device (e.g., head-locked content) or a user of the electronic device (e.g., body-locked content), or whether the virtual object is a second type of virtual object having a location defined relative to a static location within a coordinate system of the three-dimensional environment (e.g., world-locked content). For example, the control circuitry may receive a request to display a user notification including text. The control circuitry may determine that the virtual object is a first type of virtual object (e.g., a head-locked virtual object). Alternatively, the control circuitry may receive a request to display a simulation of a cube and determine that the virtual object is a virtual object of a second type (e.g., a world-locked virtual object). The three-dimensional environment with the coordinate system may be an XR environment, which represents a virtual or physical environment surrounding a user of the headset.
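The branching that follows the determination at block 104 can be summarized with a small sketch (the enum and its cases are illustrative assumptions, not claimed structure):

```swift
/// Illustrative classification of virtual objects for the determination at block 104.
enum VirtualObjectType {
    case headLocked   // position defined relative to the electronic device
    case bodyLocked   // position defined relative to a portion of the user
    case worldLocked  // position defined relative to a static location in the environment
}

/// Objects of the first type (head- or body-locked) proceed to the depth-matching
/// path described below; world-locked objects are displayed at the apparent location
/// they already define.
func requiresDepthMatching(_ type: VirtualObjectType) -> Bool {
    switch type {
    case .headLocked, .bodyLocked:
        return true
    case .worldLocked:
        return false
    }
}
```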
In response to determining that the virtual object is a second type of virtual object (e.g., a world-locked virtual object), the method may proceed to block 106. At block 106, the virtual object may be displayed (e.g., using display 20) using the location as an apparent location of the virtual object. The world-locked virtual object may remain fixed in position relative to the three-dimensional environment as the user moves their head, body, and/or gaze. For example, as a user moves their head, a simulation of a cube (discussed above) may be displayed at a fixed location relative to the physical environment (e.g., on a table).
In response to determining that the virtual object is a first type of virtual object (e.g., a head-locked virtual object or a body-locked virtual object), the method may proceed to block 108. At block 108, the control circuitry 16 may use the at least one sensor 18 to determine a depth of the physical object in the physical environment of the user. The at least one sensor may include a camera, a LIDAR sensor, a depth sensor, and/or a stereo camera configured to capture an image of the surrounding of the electronic device (which is then analyzed by the control circuitry to determine the depth of the physical object). The physical object may be the physical object closest to the electronic device in a given direction relative to the electronic device. The virtual object may be displayed at an apparent depth corresponding to the determined depth and in a given direction relative to the electronic device. In other words, the sensor is configured to determine a depth of the nearest physical object aligned with an intended display direction of the virtual object. For example, consider the above example where the virtual object is a user notification that includes text. The sensor may determine a depth of a physical object (e.g., a wall) proximate the head mounted device that is aligned with the user notification.
The example of determining the depth to the closest physical object in block 108 is merely illustrative. As previously described, the control circuitry may determine the depth of the closest object in the XR environment, whether the closest object is a physical object (e.g., a wall as described above) or a virtual object (e.g., a simulation of a three-dimensional object).
In some examples, at block 108, the depth of a physical object or virtual object may only be relied upon if the object is determined to satisfy one or more criteria, such as the object being greater than a threshold size, the object occupying a threshold portion of the field of view of the user or electronic device, the object being a particular type of object (e.g., a wall, a display of an electronic device, etc.), etc. The size of the object may be determined using known properties of the object, depth sensors, cameras in the system 10, or any other desired sensor. In other words, control circuitry 16 may determine a depth of the nearest object that is aligned with the intended display direction of the virtual object and that is greater than a threshold size. For example, the depth sensor may detect a first physical object at a first depth. However, the first physical object may be smaller than the threshold size and thus the first depth is not used in subsequent processing. The depth sensor may also detect a second physical object at a second depth greater than the first depth. The second physical object may be greater than the threshold size and thus the second depth is used in subsequent processing.
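A sketch of the size-based filtering described above (the detection structure and threshold are assumptions used only for illustration): small detections along the display direction are ignored, and the nearest remaining object supplies the depth used at block 110.

```swift
/// A detected object along the virtual object's display direction.
struct DetectedObject {
    var depth: Double   // lateral depth from the device
    var size: Double    // estimated physical size
}

/// Depth of the nearest detection that satisfies the minimum-size criterion,
/// or nil if no qualifying object was detected.
func depthForPlacement(detections: [DetectedObject], minimumSize: Double) -> Double? {
    return detections
        .filter { $0.size >= minimumSize }
        .map(\.depth)
        .min()
}
```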
Next, at block 110, the control circuitry may determine an apparent depth to display the virtual object based at least on the depth of the physical object. The head mounted device may have an associated minimum allowed apparent depth and maximum allowed apparent depth. When the depth of the physical object is greater than or equal to the minimum allowable apparent depth and less than or equal to the maximum allowable apparent depth, the apparent depth of the virtual object may be set to be equal to the depth of the physical object. When the depth of the physical object is greater than the maximum allowable apparent depth, the apparent depth of the virtual object may be set equal to the maximum allowable apparent depth. When the depth of the physical object is less than the minimum allowable apparent depth, the apparent depth of the virtual object may be set equal to the minimum allowable apparent depth. When the depth of the physical object is less than the minimum allowed apparent depth or greater than the maximum allowed apparent depth, the virtual object may be adjusted to mitigate discomfort to the observer. Possible adjustments include changing the size of the virtual object (e.g., making the virtual object smaller), changing the opacity of the virtual object (e.g., fading out the virtual object), displaying a warning or other indication of discomfort (e.g., red dots, warning text, etc.) in place of or in addition to the virtual object, applying a visual effect on or around the virtual object, etc.
Consider the example above where the object closest to the head mounted device is a wall in a physical environment. If the depth of the wall is greater than or equal to the minimum allowed apparent depth and less than or equal to the maximum allowed apparent depth, the apparent depth of the user notification may be set equal to the depth of the wall. If the depth of the wall is greater than the maximum allowable apparent depth, the apparent depth of the user notification may be set equal to the maximum allowable apparent depth. When the depth of the wall is less than the minimum allowable apparent depth, the apparent depth of the user notification may be set equal to the minimum allowable apparent depth. When the depth of the wall is less than the minimum allowed apparent depth or greater than the maximum allowed apparent depth, the user notification may be adjusted to mitigate viewer discomfort. Possible adjustments include changing the size of the user notification (e.g., making the user notification smaller), changing the opacity of the user notification (e.g., fading out the user notification), displaying a warning or other indication of discomfort (e.g., red dots, warning text, etc.) in place of or in addition to the user notification, applying a visual effect on or around the user notification, etc.
Generally, the display directions of the head-locked virtual object and the body-locked virtual object are fixed (relative to the head-mounted device and the body of the user, respectively). However, there are some situations where an angle (e.g., angle 54) may be adjusted at block 110. The angle 54 may be adjusted to prevent the virtual object from overlapping two physical objects of different depths. Consider a scene in which the left half of a virtual object overlaps a first physical object at a first depth and the right half of the virtual object overlaps a second physical object at a second depth different from the first depth. The angle may be shifted such that the virtual object completely overlaps the first physical object or the second physical object.
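A sketch of the angle adjustment described above, treating each object as a one-dimensional angular extent (the types and the choice to shift by the minimum amount are assumptions):

```swift
/// Angular extent of an object as seen from the device, in radians.
struct AngularExtent {
    var center: Double
    var halfWidth: Double
    var minAngle: Double { center - halfWidth }
    var maxAngle: Double { center + halfWidth }
}

/// Shift the virtual object's display angle by the minimum amount needed so that it
/// falls entirely within one physical object's extent instead of straddling two.
func adjustedDisplayAngle(virtualObject: AngularExtent, target: AngularExtent) -> Double {
    guard virtualObject.halfWidth <= target.halfWidth else {
        return target.center   // cannot fit entirely; best effort is to center it
    }
    return min(max(virtualObject.center, target.minAngle + virtualObject.halfWidth),
               target.maxAngle - virtualObject.halfWidth)
}
```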
After determining the apparent location (including apparent depth) at which to display the virtual object at block 110, the virtual object may be displayed at the apparent depth (e.g., user notification discussed above) at block 112. Blocks 108-112 may be repeatedly performed for a virtual object of a first type. In this way, if the user rotates his head and the depth of the physical object aligned with the virtual object changes, the apparent depth of the virtual object may be continuously adjusted to match the depth of the aligned physical object. Blocks 108-112 (e.g., repeatedly determining the depth of the physical object and repeatedly determining the apparent depth) may be repeated at a frequency greater than 1Hz, greater than 2Hz, greater than 4Hz, greater than 10Hz, greater than 30Hz, greater than 60Hz, less than 30Hz, less than 10Hz, less than 5Hz, between 2Hz and 10Hz, etc.
Repeatedly determining the apparent depth may include changing the apparent depth from a first apparent depth to a second apparent depth that is greater than the first apparent depth. The apparent size of the virtual object may remain constant as the apparent depth changes.
The apparent depth and/or alignment of the virtual object may be gradually updated during the transition period. The transition period may have a duration of at least 200 milliseconds or another desired duration.
Out of an abundance of caution, it should be noted that, to the extent any specific implementation of this technology involves the use of personally identifiable information, implementers should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining user privacy. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users.
According to one embodiment, an electronic device is provided that includes one or more sensors, one or more displays, one or more processors, and memory storing instructions configured to be executed by the one or more processors for: receiving a request for displaying a virtual object; and in accordance with a determination that the virtual object is a virtual object of a first type having a defined position relative to a position corresponding to the electronic device or a user of the electronic device, determining, via the one or more sensors, a depth of the physical object; determining an apparent depth to display the virtual object based at least on the depth of the physical object; and displaying the virtual object at an apparent depth via the one or more displays.
According to another embodiment, the instructions include instructions for: repeatedly determining, via the one or more sensors, a depth of the physical object; and repeatedly determining an apparent depth based on the determined depth of the physical object, the repeatedly determining the apparent depth including changing the apparent depth from a first apparent depth to a second apparent depth different from the first apparent depth.
According to another embodiment, changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth includes changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
According to another embodiment, the one or more displays have an associated range of acceptable apparent depths for the virtual object, the range of acceptable apparent depths for the virtual object being defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and determining the apparent depth at which to display the virtual object comprises determining the apparent depth of the virtual object as the minimum acceptable apparent depth in response to the depth of the physical object being less than the minimum acceptable apparent depth, and determining the apparent depth of the virtual object as the maximum acceptable apparent depth in response to the depth of the physical object being greater than the maximum acceptable apparent depth.
According to another embodiment, determining, via the one or more sensors, a depth of the physical object includes determining a depth of the physical object closest to the electronic device in a given direction relative to the electronic device, and displaying the virtual object at the apparent depth includes displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
According to another embodiment, determining, via the one or more sensors, a depth of the physical object includes determining a depth of the physical object closest to the electronic device in a given direction relative to the electronic device and meeting the size criteria, and displaying the virtual object at the apparent depth includes displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
According to another embodiment, the one or more sensors include a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
According to another embodiment, the instructions include instructions for: in accordance with a determination that the virtual object is a virtual object of a second type having an additional location defined relative to a static location within a coordinate system of the three-dimensional environment, the virtual object is displayed at the additional location via the one or more displays.
According to one embodiment, a method of operating an electronic device including one or more sensors and one or more displays is provided that includes: receiving a request for displaying a virtual object; and in accordance with a determination that the virtual object is a virtual object of a first type having a defined position relative to a position corresponding to the electronic device or a user of the electronic device, determining, via the one or more sensors, a depth of the physical object; determining an apparent depth to display the virtual object based at least on the depth of the physical object; and displaying the virtual object at an apparent depth via the one or more displays.
According to another embodiment, the method comprises: repeatedly determining, via the one or more sensors, a depth of the physical object; and repeatedly determining an apparent depth based on the determined depth of the physical object, the repeatedly determining the apparent depth including changing the apparent depth from a first apparent depth to a second apparent depth different from the first apparent depth.
According to another embodiment, changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth includes changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
According to another embodiment, the one or more displays have an associated range of acceptable apparent depths for the virtual object, the range of acceptable apparent depths for the virtual object being defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and determining the apparent depth at which to display the virtual object comprises determining the apparent depth of the virtual object as the minimum acceptable apparent depth in response to the depth of the physical object being less than the minimum acceptable apparent depth, and determining the apparent depth of the virtual object as the maximum acceptable apparent depth in response to the depth of the physical object being greater than the maximum acceptable apparent depth.
According to another embodiment, determining, via the one or more sensors, a depth of the physical object includes determining a depth of the physical object closest to the electronic device in a given direction relative to the electronic device, and displaying the virtual object at the apparent depth includes displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
According to another embodiment, determining, via the one or more sensors, a depth of the physical object includes determining a depth of the physical object closest to the electronic device in a given direction relative to the electronic device and meeting the size criteria, and displaying the virtual object at the apparent depth includes displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
According to another embodiment, the one or more sensors include a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
According to another embodiment, the method comprises: in accordance with a determination that the virtual object is a virtual object of a second type having an additional location defined relative to a static location within a coordinate system of the three-dimensional environment, the virtual object is displayed at the additional location via the one or more displays.
According to one embodiment, a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device comprising one or more sensors and one or more displays, the one or more programs comprising instructions for: receiving a request to display a virtual object; and in accordance with a determination that the virtual object is a virtual object of a first type having a defined position relative to a position corresponding to the electronic device or a user of the electronic device, determining, via the one or more sensors, a depth of a physical object; determining an apparent depth to display the virtual object based at least on the depth of the physical object; and displaying the virtual object at the apparent depth via the one or more displays.
According to another embodiment, the instructions include instructions for: repeatedly determining, via the one or more sensors, a depth of the physical object; and repeatedly determining the apparent depth based on the determined depth of the physical object, wherein repeatedly determining the apparent depth includes changing the apparent depth from a first apparent depth to a second apparent depth different from the first apparent depth.
According to another embodiment, changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth includes changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
According to another embodiment, the one or more displays have an associated range of acceptable apparent depths for the virtual object, the range of acceptable apparent depths for the virtual object being defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and determining the apparent depth to display the virtual object comprises determining the apparent depth of the virtual object as the minimum acceptable apparent depth in response to the depth of the physical object being less than the minimum acceptable apparent depth, and determining the apparent depth of the virtual object as the maximum acceptable apparent depth in response to the depth of the physical object being greater than the maximum acceptable apparent depth.
According to another embodiment, determining, via the one or more sensors, a depth of the physical object includes determining a depth of a physical object closest to the electronic device in a given direction relative to the electronic device, and displaying the virtual object at the apparent depth includes displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
According to another embodiment, determining, via the one or more sensors, a depth of the physical object includes determining a depth of a physical object closest to the electronic device in a given direction relative to the electronic device and meeting a size criterion, and displaying the virtual object at the apparent depth includes displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
According to another embodiment, the one or more sensors include a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
According to another embodiment, the instructions include instructions for: in accordance with a determination that the virtual object is a virtual object of a second type having an additional location defined relative to a static location within a coordinate system of a three-dimensional environment, displaying, via the one or more displays, the virtual object at the additional location.
The foregoing is merely exemplary and various modifications may be made to the embodiments described. The foregoing embodiments may be implemented independently or may be implemented in any combination.

Claims (20)

1. An electronic device, comprising:
one or more sensors;
one or more displays;
one or more processors; and
a memory storing instructions configured to be executed by the one or more processors, the instructions for:
receiving a request to display a virtual object; and
in accordance with a determination that the virtual object is a virtual object of a first type having a defined location relative to a location corresponding to the electronic device or a user of the electronic device:
determining, via the one or more sensors, a depth of a physical object;
determining an apparent depth to display the virtual object based at least on the depth of the physical object; and
displaying the virtual object at the apparent depth via the one or more displays.
2. The electronic device of claim 1, wherein the instructions further comprise instructions for:
repeatedly determining a depth of the physical object via the one or more sensors; and
repeatedly determining the apparent depth based on the determined depth of the physical object, wherein repeatedly determining the apparent depth includes changing the apparent depth from a first apparent depth to a second apparent depth different from the first apparent depth.
3. The electronic device of claim 2, wherein changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth comprises changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
4. The electronic device of claim 1, wherein the one or more displays have an associated range of acceptable apparent depths for the virtual object, wherein the range of acceptable apparent depths for the virtual object is defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and wherein determining the apparent depth to display the virtual object comprises:
in response to the depth of the physical object being less than the minimum acceptable apparent depth, determining the apparent depth of the virtual object as the minimum acceptable apparent depth; and
in response to the depth of the physical object being greater than the maximum acceptable apparent depth, determining the apparent depth of the virtual object as the maximum acceptable apparent depth.
5. The electronic device of claim 1, wherein determining the depth of the physical object via the one or more sensors comprises determining a depth of a physical object closest to the electronic device in a given direction relative to the electronic device, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
6. The electronic device of claim 1, wherein determining the depth of the physical object via the one or more sensors comprises determining a depth of a physical object closest to the electronic device in a given direction relative to the electronic device and meeting a size criterion, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
7. The electronic device of claim 1, wherein the one or more sensors comprise a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
8. The electronic device of claim 1, wherein the instructions further comprise instructions for:
in accordance with a determination that the virtual object is a second type of virtual object having an additional position defined relative to a static position within a coordinate system of a three-dimensional environment:
displaying the virtual object at the additional position via the one or more displays.
9. A method of operating an electronic device comprising one or more sensors and one or more displays, the method comprising:
receiving a request to display a virtual object; and
in accordance with a determination that the virtual object is a virtual object of a first type having a defined location relative to a location corresponding to the electronic device or a user of the electronic device:
determining, via the one or more sensors, a depth of a physical object;
determining an apparent depth to display the virtual object based at least on the depth of the physical object; and
displaying the virtual object at the apparent depth via the one or more displays.
10. The method of claim 9, further comprising:
repeatedly determining a depth of the physical object via the one or more sensors; and
repeatedly determining the apparent depth based on the determined depth of the physical object, wherein repeatedly determining the apparent depth includes changing the apparent depth from a first apparent depth to a second apparent depth different from the first apparent depth.
11. The method of claim 10, wherein changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth comprises changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
12. The method of claim 9, wherein the one or more displays have an associated range of acceptable apparent depths for the virtual object, wherein the range of acceptable apparent depths for the virtual object is defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and wherein determining the apparent depth to display the virtual object comprises:
in response to the depth of the physical object being less than the minimum acceptable apparent depth, determining the apparent depth of the virtual object as the minimum acceptable apparent depth; and
in response to the depth of the physical object being greater than the maximum acceptable apparent depth, determining the apparent depth of the virtual object as the maximum acceptable apparent depth.
13. The method of claim 9, wherein determining the depth of the physical object via the one or more sensors comprises determining a depth of a physical object closest to the electronic device in a given direction relative to the electronic device, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
14. The method of claim 9, wherein determining the depth of the physical object via the one or more sensors comprises determining a depth of a physical object closest to the electronic device in a given direction relative to the electronic device and meeting a size criterion, and wherein displaying the virtual object at the apparent depth comprises displaying the virtual object at the apparent depth and in the given direction relative to the electronic device.
15. The method of claim 9, wherein the one or more sensors comprise a light detection and ranging (LIDAR) sensor, a depth sensor, or a stereo camera.
16. The method of claim 9, wherein the method further comprises:
in accordance with a determination that the virtual object is a second type of virtual object having an additional position defined relative to a static position within a coordinate system of a three-dimensional environment:
displaying the virtual object at the additional position via the one or more displays.
17. A non-transitory computer-readable storage medium storing one or more programs configured for execution by one or more processors of an electronic device comprising one or more sensors and one or more displays, the one or more programs comprising instructions for:
receiving a request to display a virtual object; and
in accordance with a determination that the virtual object is a virtual object of a first type having a defined location relative to a location corresponding to the electronic device or a user of the electronic device:
determining, via the one or more sensors, a depth of a physical object;
determining an apparent depth to display the virtual object based at least on the depth of the physical object; and
displaying the virtual object at the apparent depth via the one or more displays.
18. The non-transitory computer-readable storage medium of claim 17, wherein the instructions further comprise instructions for:
repeatedly determining a depth of the physical object via the one or more sensors; and
repeatedly determining the apparent depth based on the determined depth of the physical object, wherein repeatedly determining the apparent depth includes changing the apparent depth from a first apparent depth to a second apparent depth different from the first apparent depth.
19. The non-transitory computer-readable storage medium of claim 18, wherein changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth comprises changing the apparent depth of the virtual object from the first apparent depth to the second apparent depth during a transition period of at least 200 milliseconds.
20. The non-transitory computer-readable storage medium of claim 17, wherein the one or more displays have an associated range of acceptable apparent depths for the virtual object, wherein the range of acceptable apparent depths for the virtual object is defined by a minimum acceptable apparent depth and a maximum acceptable apparent depth, and wherein determining the apparent depth to display the virtual object comprises:
in response to the depth of the physical object being less than the minimum acceptable apparent depth, determining the apparent depth of the virtual object as the minimum acceptable apparent depth; and
in response to the depth of the physical object being greater than the maximum acceptable apparent depth, determining the apparent depth of the virtual object as the maximum acceptable apparent depth.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/348,897 2022-06-03
US18/295,353 2023-04-04
US18/295,353 US20230396752A1 (en) 2022-06-03 2023-04-04 Electronic Device that Displays Virtual Objects

Publications (1)

Publication Number Publication Date
CN117170602A (en) 2023-12-05

Family

ID=88932517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310643206.5A Pending CN117170602A (en) 2022-06-03 2023-06-01 Electronic device for displaying virtual object


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination