CN113557465A - Apparatus, system, and method for wearable head-mounted display

Info

Publication number
CN113557465A
Authority
CN
China
Prior art keywords
mounted display
head mounted
camera
cameras
visual data
Prior art date
Legal status
Pending
Application number
CN202080017851.0A
Other languages
Chinese (zh)
Inventor
Oskar Linde (奥斯卡·林德)
Current Assignee
Meta Platforms Technologies LLC
Original Assignee
Facebook Technologies LLC
Priority date
Filing date
Publication date
Application filed by Facebook Technologies LLC
Publication of CN113557465A

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0176Head mounted characterised by mechanical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0123Head-up displays characterised by optical features comprising devices increasing the field of view
    • G02B2027/0125Field-of-view increase by wavefront division
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0181Adaptation to the pilot/driver

Abstract

An apparatus for a wearable head-mounted display may include a head-mounted display comprising: (i) four side cameras including (a) a camera mounted on a right side of the head mounted display, (b) a camera mounted on a left side of the head mounted display, (c) a camera mounted on a front of the head mounted display and located on a right side of a front center of the head mounted display, and (d) a camera mounted on a front of the head mounted display and located on a left side of a front center of the head mounted display; (ii) a central camera mounted in front of the head mounted display; and (iii) at least one display surface that displays visual data to a wearer of the head mounted display. Various other apparatuses, systems, and methods are also disclosed.

Description

Apparatus, system, and method for wearable head-mounted display
Cross Reference to Related Applications
This application claims priority from U.S. Application Nos. 62/814,249, filed on 3/5/2019, and 16/655,492, filed on 10/17/2019, the contents of which are hereby incorporated by reference in their entirety for all purposes.
Background
Augmented reality experiences, in which virtual objects are projected onto or overlaid on a real scene, and virtual reality experiences, in which a user is enclosed in a completely virtual world, are becoming increasingly popular. One common form factor for augmented reality and virtual reality experiences is a wearable headset with a screen that displays an augmented or virtual world to the wearer. Augmented reality and virtual reality headsets may use motion tracking to accurately place users in their environment and display the correct objects and trigger the correct cues for the user's location. One method of motion tracking includes placing a camera on the headset to recognize visual cues of position and tracking motion of one or more controls held by the user.
Unfortunately, conventional motion tracking systems have various drawbacks. Many camera configurations leave gaps in the camera coverage where the user may move the controller without any camera seeing it. Some methods of positioning a pair of controllers may result in one controller blocking the other controller, which temporarily removes the second controller from the field of view. Some methods of securing the cameras to the headset may be aesthetically displeasing or lack durability. Accordingly, the present disclosure identifies and addresses the need for additional and improved camera configurations on wearable headsets. Further, the present disclosure identifies and addresses the need for improved data transmission from multiple cameras attached to the same device (e.g., a wearable headset).
SUMMARY
As will be described in greater detail below, the present disclosure describes apparatus, systems, and methods for a wearable head-mounted display according to the appended claims. Apparatus, systems, and methods for wearable head-mounted displays provide motion tracking and/or controller tracking through five cameras mounted on various external surfaces of the head-mounted display, which transmit video streams in the form of images over a limited bandwidth connection.
In some embodiments, an apparatus for a wearable head mounted display may include a head mounted display comprising: (i) four side cameras including (a) a camera mounted on a right side of the head mounted display, (b) a camera mounted on a left side of the head mounted display, (c) a camera mounted on a front of the head mounted display and located on a right side of a front center of the head mounted display, and (d) a camera mounted on a front of the head mounted display and located on a left side of a front center of the head mounted display; (ii) a central camera mounted in front of the head mounted display; and (iii) at least one display surface that displays visual data to a wearer of the head mounted display.
In one embodiment, the camera mounted to the left of the head mounted display may be tilted downward with respect to the camera mounted to the front of the head mounted display and to the left of the center of the front of the head mounted display, and/or the camera mounted to the right of the head mounted display may be tilted downward with respect to the camera mounted to the front of the head mounted display and to the right of the center of the front of the head mounted display. In some embodiments, the center camera may be mounted higher in front of the head mounted display than the camera mounted in front of the head mounted display and to the left of the center of the front of the head mounted display.
In some embodiments, the central camera may be mounted on the head mounted display by a non-rigid mounting. In one embodiment, four side cameras may be mounted on the head mounted display by rigid mounting brackets. In some examples, four side cameras may be mounted on the head mounted display by at least one rigid mount, the fields of view of the four side cameras may overlap the field of view of the central camera, and the head mounted display may transmit data from the fields of view of the four side cameras to a system that uses the data from the fields of view of the four side cameras that overlap the field of view of the central camera to correct for visual interference caused by the non-rigid mount of the central camera.
In one example, a display surface of the head mounted display may display visual data to the wearer based at least in part on a position of the head mounted display in the physical environment, and at least one of the four side cameras and the center camera may capture visual environment data indicative of the position of the head mounted display in the physical environment. In some examples, at least one of the four side cameras and the center camera may track a position of a controller operated by a wearer of the head mounted display. Additionally or alternatively, at least one of the four side cameras and the center camera may track the position of one or both hands of the head-mounted display wearer.
In one embodiment, each of the four side cameras may be mounted parallel to the surface of the head mounted display on which it is mounted. In some embodiments, the front of the head mounted display may at least partially cover the face of the wearer of the head mounted display, the right side of the head mounted display may be adjacent the front of the head mounted display, and/or the left side of the head mounted display may be adjacent the front of the head mounted display, opposite the right side of the head mounted display.
In some embodiments, a system for a wearable head mounted display may include a head mounted display including five cameras, the cameras including: (i) four side cameras including (a) a camera mounted to the right of the head mounted display, (b) a camera mounted to the left of the head mounted display, (c) a camera mounted to the front of the head mounted display and positioned to the right of the center of the front of the head mounted display, and (d) a camera mounted to the front of the head mounted display and positioned to the left of the center of the front of the head mounted display; and (ii) a central camera mounted in front of the head mounted display. In some embodiments, the system may further include at least one display surface that displays visual data to a wearer of the head mounted display, and an augmented reality system that receives visual data input from at least one of the five cameras and transmits the visual data output to the display surface of the head mounted display.
In some embodiments, the augmented reality system may receive visual data input from at least one of the five cameras and send the visual data output to the display surface of the head mounted display by combining the streaming visual data input received from all five of the five cameras into combined visual data and displaying at least a portion of the combined visual data on the display surface of the head mounted display. In some examples, the augmented reality system may receive visual data input from at least one of the five cameras by: receiving visual data from the center camera that includes visual interference due to the non-rigid mounting of the center camera, receiving visual data from at least one of the four side cameras that does not include visual interference due to the rigid mounting of the at least one of the four side cameras, and correcting the visual interference in the visual data from the center camera using the visual data from the at least one of the four side cameras.
In one embodiment, the augmented reality system may identify a controller device within the visual data input, determine a position of the controller device relative to a wearer of the head mounted display based on at least one visual cue within the visual data input, and perform the augmented reality action based at least in part on the position of the controller device relative to the wearer of the head mounted display. Additionally or alternatively, the augmented reality system may identify physical location cues within the visual data input from at least two of the five cameras, determine a physical location of the wearer of the head mounted display based at least in part on triangulating the physical location cues within the visual data input from the at least two cameras, and perform the augmented reality action based at least in part on the physical location of the wearer of the head mounted display.
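To make the triangulation step concrete, below is a minimal sketch of recovering the 3D position of a physical location cue from its pixel coordinates in two cameras with overlapping fields of view, using standard linear (DLT) triangulation; the projection matrices and pixel coordinates are assumed inputs, and the disclosure does not specify this particular algorithm.

```python
import numpy as np

def triangulate_landmark(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one landmark seen by two cameras.

    P1, P2: (3, 4) numpy projection matrices of two head-mounted cameras.
    uv1, uv2: (u, v) pixel coordinates of the same cue in each camera's frame.
    Returns the landmark's 3D position in the shared reference frame.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The solution of A @ X = 0 is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```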
In some examples, the augmented reality system may: (i) identifying a first controller device and a second controller device, (ii) determining that the first controller device is visually occluded by the second controller device in visual data input from one of the five cameras, (iii) determining that the first controller device is not visually occluded by the second controller device in visual data input from a different one of the five cameras, (iv) determining a location of the first controller device based at least in part on visual data from the different camera, and (v) performing an augmented reality action based at least in part on the location of the first controller device. In some embodiments, for each camera within the five cameras, the field of view of the camera may at least partially overlap with the field of view of at least one additional camera within the five cameras.
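As one illustration of this occlusion handling, the sketch below falls back to whichever camera still has an unoccluded view of the first controller. The camera names and detection structure are hypothetical; the patent does not prescribe a specific selection routine.

```python
def locate_occluded_controller(controller_id, detections_by_camera):
    """Return a position for the controller from any camera that sees it.

    detections_by_camera: maps a camera name (e.g., "front_left") to a dict of
    controller_id -> (x, y, z) position, or None when that controller is
    occluded in that camera's visual data.
    """
    for camera_name, detections in detections_by_camera.items():
        position = detections.get(controller_id)
        if position is not None:  # not occluded in this camera's view
            return camera_name, position
    return None, None  # occluded in every view; caller may extrapolate
```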
In some embodiments, a computer-implemented method for motion tracking a head mounted display may comprise: (i) identifying a head mounted display comprising five cameras, wherein one of the five cameras is attached to a right side of the head mounted display, one of the five cameras is attached to a left side of the head mounted display, one of the five cameras is attached to a center of a front of the head mounted display, and two of the five cameras are attached at sides in the front of the head mounted display, (ii) capturing visual data of a physical environment surrounding the head mounted display wearer by at least one of the five cameras, (iii) determining a position of the head mounted display wearer relative to the physical environment based on the visual data of the physical environment captured by the at least one camera, and (iv) performing an action based on the position of the head mounted display wearer relative to the physical environment.
In some embodiments, performing the action may include displaying a virtual object on a display surface of the head mounted display. In some examples, the method may further include determining a location of the controller device based on visual data of the physical environment captured by the at least one camera, and performing the action based on the location of the controller device.
In one example, a computer-implemented method for efficiently transferring data from cameras may comprise: (i) identifying at least two video data streams, each video data stream being generated by a different camera, (ii) receiving a set of at least two frames of video data, the set comprising exactly one frame from each of the at least two video data streams, (iii) placing the set of at least two frames of video data received from the at least two video data streams within an image, and (iv) transmitting the image comprising the set of at least two frames of video data received from the at least two video data streams via a single transmission channel.
In one embodiment, placing the set of at least two frames of video data within the image may include arranging each frame of video data within the set of at least two frames of video data within the image based at least in part on a characteristic of the frame of video data. In one example, the characteristic may include a readout start time of a frame of video data. Additionally or alternatively, the characteristic may include an exposure length of the frame of video data. In one embodiment, arranging each frame of video data within the image based at least in part on a characteristic of the frame of video data may include arranging each frame of video data horizontally side-by-side on the image such that a vertical placement of each frame of video data within the image corresponds to the characteristic.
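A minimal sketch of this arrangement follows; the mapping from readout start time to pixel rows and the use of grayscale frames are illustrative assumptions, not parameters specified in the disclosure.

```python
import numpy as np

ROWS_PER_MICROSECOND = 0.01  # hypothetical time-to-pixels scale

def arrange_frames(frames, readout_starts_us, canvas_height):
    """Place frames side by side; vertical offset encodes readout start time.

    frames: list of (H, W) grayscale frames, one per camera stream.
    readout_starts_us: readout start time of each frame, in microseconds.
    canvas_height must accommodate the largest offset plus its frame height.
    """
    t0 = min(readout_starts_us)
    canvas = np.zeros((canvas_height, sum(f.shape[1] for f in frames)),
                      dtype=np.uint8)
    x = 0
    for frame, start in zip(frames, readout_starts_us):
        y = int((start - t0) * ROWS_PER_MICROSECOND)  # earlier readout -> higher
        canvas[y:y + frame.shape[0], x:x + frame.shape[1]] = frame
        x += frame.shape[1]
    return canvas
```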
In one embodiment, placing the set of at least two frames of video data within the image may include encoding metadata describing the set of at least two frames of video data within the image. In some examples, encoding the metadata may include encoding a timestamp for each frame from the set of at least two frames of video data. In some examples, encoding the metadata may include encoding at least one camera setting for creating each frame from a set of at least two frames of video data. Additionally or alternatively, encoding the metadata may include encoding, for each frame from the set of at least two frames of video data, an identifier of a type of function that the camera recording the frame is performing.
In one embodiment, the at least two video data streams may be generated by at least two cameras, each camera comprising a different exposure length. In some examples, transmitting the image via the single transmission channel may include transmitting the image via a transmission channel having a limited bandwidth. In some examples, transmitting the image via the single transmission channel may include transmitting the image via a cable.
In one embodiment, at least two video data streams may be generated by cameras coupled to the same device. In some examples, transmitting the image via the single transmission channel may include transmitting the image from a first component of the device to a second component of the device.
In one embodiment, placing within an image a set of at least two frames of video data received from at least two video data streams may include encoding the image via a default image encoder of at least one of a camera or a processor processing the image used to generate one of the at least two video data streams.
In one embodiment, a system for implementing the above method may include at least one physical processor and a physical memory including computer-executable instructions that, when executed by the physical processor, cause the physical processor to: (i) identifying at least two video data streams, each video data stream being generated by a different camera, (ii) receiving a set of at least two frames of video data comprising exactly one frame from each of the at least two video data streams, (iii) placing the set of at least two frames of video data received from the at least two video data streams within an image, and (iv) transmitting the image comprising the set of at least two frames of video data received from the at least two video data streams via a single transmission channel.
In some examples, the above-described methods may be encoded as computer-readable instructions on a non-transitory computer-readable medium. For example, a computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: (i) identifying at least two video data streams, each video data stream being generated by a different camera, (ii) receiving a set of at least two frames of video data comprising exactly one frame from each of the at least two video data streams, (iii) placing the set of at least two frames of video data received from the at least two video data streams within an image, and (iv) transmitting the image comprising the set of at least two frames of video data received from the at least two video data streams via a single transmission channel.
Features from any of the above-mentioned embodiments may be used in combination with each other, in accordance with the general principles described herein. These and other embodiments, features and advantages will be more fully understood when the following detailed description is read in conjunction with the accompanying drawings and claims.
Brief Description of Drawings
The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
FIG. 1 is an illustration of two exemplary coverage areas of two different exemplary head mounted displays.
FIG. 2 is an illustration of two exemplary regions of non-covered space of two different exemplary head mounted displays.
FIG. 3 is an isometric view of an exemplary head mounted display.
FIG. 4 is an additional isometric view of an exemplary head mounted display.
FIG. 5 is a left side view of an exemplary head mounted display.
FIG. 6 is a right side view of an exemplary head mounted display.
FIG. 7 is an isometric right side view of an exemplary head mounted display.
FIG. 8 is a front view of an exemplary head mounted display.
FIG. 9 is a rear view of an exemplary head mounted display.
FIG. 10 is a top view of an exemplary head mounted display.
FIG. 11 is a bottom view of an exemplary head mounted display.
FIG. 12 is an illustration of an exemplary head mounted display in context.
FIG. 13 is a block diagram of an exemplary system for processing video data for transmission over a limited bandwidth channel.
FIG. 14 is a block diagram of an exemplary system for processing visual data for a wearable head-mounted display.
FIG. 15 is a flow chart of an exemplary method for processing visual data of a wearable head-mounted display.
FIG. 16 is a flow chart of an exemplary method for efficient transmission of video stream data.
FIG. 17 is a block diagram of an exemplary exposure and readout of a camera.
FIG. 18 is a block diagram of an exemplary image including a camera frame.
FIG. 19 is a flow chart of an exemplary method for processing visual data of a wearable head-mounted display.
Throughout the drawings, identical reference numbers and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the appended claims.
Detailed description of exemplary embodiments
The present disclosure relates generally to devices, systems, and methods for wearable head mounted displays. As will be explained in more detail below, embodiments of the present disclosure may improve the effectiveness of motion tracking and/or controller tracking of a wearable head mounted display by constructing the head mounted display with five cameras, four at the sides and one at the center. In some embodiments, constructing the head mounted display with five cameras (rather than a fewer number, e.g., four cameras) may increase the coverage area of the cameras and/or reduce dead zones that are not covered by any camera. In some examples, increasing the area where the fields of view of two or more cameras overlap may enable the systems described herein to maintain tracking of one controller when that controller is blocked from one camera's view by another controller, and/or to reduce the effects of visual interference in the feed from any one camera. Furthermore, constructing a head mounted display with five cameras may enable the cameras to be placed flush with the surfaces on which they are mounted, which may improve the durability and/or aesthetics of the head mounted display compared to head mounted displays in which cameras are fixed in corners and/or other locations that are not flush with the surface of the head mounted display. In some examples, the apparatus, systems, and methods described herein may improve the field of augmented reality by increasing the ability of an augmented reality system to locate a user and/or controller and provide accurate augmented reality content based on that location. Further, the apparatus, systems, and methods described herein may improve the functionality of a computing device by improving the coverage and/or quality of the visual input processed by the computing device.
In some embodiments, a camera on a wearable head mounted display or other device with multiple cameras (e.g., other wearable devices, vehicles, drones, etc.) may transmit streaming video data to other components of the same device and/or to other devices over a single communication channel. In some examples, the communication channel may have a limited bandwidth, such as a wireless link or a Universal Serial Bus (USB) cable. By combining frames from multiple video streams into a single image that also includes metadata, and then transmitting the image, the systems and methods described herein may more efficiently transmit data recorded by video cameras over a limited bandwidth channel. In some embodiments, the systems described herein may create images that may be encoded and decoded by a standard decoder, thereby improving interoperability. Furthermore, the systems described herein may reduce the use of computing resources (e.g., energy consumption) and improve the functionality of low power devices such as headsets, as compared to methods involving frame buffers. In some examples, the systems and methods described herein may improve the field of video streaming by more efficiently transmitting video data. Further, the systems and methods described herein may improve the functionality of a computing device by reducing the resources required to transmit data recorded by multiple video cameras.
FIG. 1 is an illustration of two exemplary coverage areas of two different exemplary head mounted displays. The term "head mounted display" as used herein generally refers to any wearable device that is worn on the head of a wearer and includes at least one display surface that displays visual data to the wearer. In some embodiments, the head mounted display may include cameras mounted on its outer surfaces, the fields of view of which collectively create a camera coverage area in the space around the head mounted display. In some examples, the head mounted display coverage area 100(a) may show a camera coverage area of a head mounted display having four cameras. In one example, the head mounted display coverage area 100(a) may be composed of camera coverage areas 102, 104, and/or 106 and/or additional camera coverage areas opposite the camera coverage area 106. In some examples, the head mounted display coverage area 100(a) may have gaps in front of the wearer's face and/or around the wearer's shoulders. In one example, these coverage gaps may prevent the head-mounted display from accurately tracking the motion of the controller while the wearer holds the controller above the wearer's shoulders.
In some embodiments, the head mounted display coverage area 100(b) may include camera coverage areas 108, 110, 112, and/or 114, and/or additional camera coverage areas opposite the camera coverage area 112. In some examples, the head mounted display coverage area 100(b) may cover the area around the shoulders and head of the wearer, accurately capturing the location of any controller moving into that area. In some embodiments, a head mounted display with five cameras, such as the head mounted display that produces the head mounted display coverage area 100(b), may provide significantly improved coverage over a head mounted display with four cameras, such as the head mounted display that produces the head mounted display coverage area 100(a).
FIG. 2 is an illustration of two exemplary regions of non-covered space of two different exemplary head mounted displays. In one embodiment, a head mounted display with four cameras (e.g., a display that produces the head mounted display coverage area 100(a) in fig. 1) may produce a dead zone 202 without camera coverage. In some examples, controllers, environmental features, and/or other objects in the dead zone 202 may not be captured by any camera of the head mounted display, thereby preventing the augmented reality system from accurately responding to the presence and/or location of such objects and/or features. In some embodiments, a head mounted display with five cameras (e.g., a display that produces the head mounted display coverage area 100(b) of fig. 1) may produce the dead zone 204. In some examples, the dead zone 204 may cover a smaller and/or less important region of space than the dead zone 202 (e.g., in terms of the likelihood that a controller and/or other object occupying the space is relevant to the augmented reality system).
Fig. 3 is an illustration of an exemplary head mounted display. In some embodiments, head mounted display 300 may include cameras 302, 304, 306, 308, and/or 310, and/or display surface 312. In some embodiments, camera 302 may be mounted on a right surface of head mounted display 300, camera 308 may be mounted on a left surface of head mounted display 300, camera 304 may be mounted on a right side of the front, camera 306 may be mounted on a left side of the front, and/or camera 310 may be centrally mounted on the front of head mounted display 300. In some embodiments, cameras 302, 304, 306, and/or 308 may be mounted on rigid mounting points, while camera 310 may be mounted on non-rigid mounting points. In one embodiment, cameras 302, 304, 306, and/or 308 may be mounted on a set of metal brackets within head mounted display 300.
In some embodiments, cameras 302, 304, 306, 308, and/or 310 may each be mounted flush with the surface of head mounted display 300 (rather than protruding from head mounted display 300). In one embodiment, the camera 302 may be located behind the camera 304 (relative to the front of the head mounted display 300) and/or may be tilted at a downward angle, such as 45° downward. In some embodiments, the camera 302 may be tilted at a different downward angle, such as 30°, 60°, or any other suitable angle. Similarly, the camera 308 may be located behind the camera 306 and/or may be tilted at a downward angle. In some embodiments, cameras 304, 306, and 310 may all be mounted on the same surface of the head mounted display. In other embodiments, cameras 304 and/or 306 may be mounted on one front surface of the head mounted display, while camera 310 may be mounted on a separate front surface of the head mounted display.
Fig. 4 is an illustration of the head mounted display 300 as seen from above and from the rear. As shown in fig. 4, in some embodiments, the camera 310 may be mounted on top of the front of the head mounted display 300 perpendicular to the cameras 304 and/or 306. In one embodiment, the camera 308 may be mounted on the side of the head mounted display 300. In some embodiments, display surface 312 may be a combined display surface that is visible to both eyes of the wearer. Additionally or alternatively, the display surface 312 may include separate lenses, each lens positioned in front of one eye of the wearer of the head mounted display 300.
Fig. 5 is a left side view of the head mounted display 300. As shown in fig. 5, the camera 308 may be mounted on the left side of the head mounted display 300. In some embodiments, the camera 308 may be mounted toward the bottom of the left side and/or may be tilted downward.
Fig. 6 is a right side view of the head mounted display 300. As shown in fig. 6, the camera 302 may be mounted on the right side of the head mounted display 300. In some embodiments, the camera 302 may be mounted toward the bottom of the right side and/or may be tilted downward.
Fig. 7 is an isometric right side view of head mounted display 300. As shown in fig. 7, in some embodiments, the camera 310 may be mounted on top of the head mounted display 300 at an obtuse angle to the camera 302.
Fig. 8 is a front view of the head mounted display 300. As shown in fig. 8, in some embodiments, cameras 304 and 306 may be mounted on the same front surface of head mounted display 300. In one embodiment, the camera 304 may be mounted toward the right side of the head mounted display 300 (depending on the wearer), and/or the camera 306 may be mounted toward the left side of the head mounted display 300.
Fig. 9 is a rear view of the head mounted display 300. As shown in fig. 9, in some embodiments, the display surface 312 may be divided into a display surface 312(a) and a display surface 312(b), with each portion of the display surface 312 displaying an image to one eye of the wearer.
Fig. 10 is a top view of the head mounted display 300. As shown in fig. 10, in some embodiments, a camera 310 may be mounted on top of the components of the head mounted display 300 that also house a display surface 312.
Fig. 11 is a bottom view of the head mounted display 300. As shown in fig. 11, in some embodiments, the camera 302 and/or the camera 308 may be mounted and/or tilted toward the bottom of the head mounted display 300. In one embodiment, camera 304 and/or camera 306 may be mounted at an angle to camera 302 and/or camera 308.
FIG. 12 is an illustration of an exemplary head mounted display in context. In some examples, wearer 1212 may wear head mounted display 1202 and/or hold handheld controller 1208(a) and/or controller 1208(b). In one example, a camera on head mounted display 1202 may identify landmark 1204 and/or landmark 1206 to determine a location of wearer 1212 within physical environment 1200. In some embodiments, the systems described herein may use two or more cameras with overlapping fields of view mounted on the head mounted display 1202 to triangulate the position of landmark 1204 and/or landmark 1206. In one example, the systems described herein may use landmark 1204 and/or landmark 1206 to triangulate the position of wearer 1212.
In some examples, a camera on head mounted display 1202 may motion track controller 1208(a) and/or controller 1208 (b). In one example, the augmented reality system may use information about the location of wearer 1212 and/or the location of controllers 1208(a) and/or 1208(b) to display augmented reality object 1214 on the display surface of head mounted display 1202. In some examples, the augmented reality object 1214 may appear to the wearer 1212 to be located within the physical environment 1200, and/or the augmented reality system may use visual input data from a camera of the head mounted display 1202 to display a portion of the physical environment 1200 on a display surface of the head mounted display 1202. In other examples, the augmented reality object 1214 may appear to be located within a virtual scene that is completely unrelated to the physical environment 1200.
In some examples, the display surface of head mounted display 1202 may display different augmented reality objects to wearer 1212 based on the position of wearer 1212 in physical environment 1200. For example, head mounted display 1202 may display augmented reality object 1214 only when wearer 1212 is within a certain radius of the location of augmented reality object 1214. Additionally or alternatively, head mounted display 1202 may display different augmented reality objects based on input received from controllers 1208(a) and/or 1208(b), including the relative positions of controllers 1208(a) and/or 1208(b). For example, wearer 1212 may swing controller 1208(a) like a sword to control a virtual sword, and the augmented reality system may stop displaying augmented reality object 1214 in response to detecting that the virtual sword controlled by controller 1208(a) intersects augmented reality object 1214 (e.g., because wearer 1212 has killed the dragon).
Additionally or alternatively, head mounted display 1202 may display different augmented reality objects and/or environments based on the position of one or more hands of wearer 1212. In some embodiments, one or more cameras on head mounted display 1202 may perform hand tracking on wearer 1212. In some embodiments, particular cameras, such as one side camera on each side of the head mounted display 1202, may collect image data for performing hand tracking. In one embodiment, a side camera mounted on the front surface of the head mounted display 1202 may collect image data for performing hand tracking. Additionally or alternatively, different cameras may implement hand tracking and/or other functions at different times. In some examples, the term "hand tracking" as used herein may generally refer to hand pose estimation across a time sequence (e.g., across a sequence of still images extracted from a video feed captured by a camera). Additionally or alternatively, hand tracking may include determining a three-dimensional pose of the user's hand, including a three-dimensional position of the hand, an orientation of the hand, and/or a configuration of the fingers of the hand. In some embodiments, the systems described herein may perform hand tracking instead of controller tracking. Additionally or alternatively, the systems described herein may perform hand tracking in addition to controller tracking. In some embodiments, the systems described herein may include a hand tracking module that receives data from one or more cameras of the head mounted display 1202 and determines the location of one or more hand features on a hand model using a machine learning algorithm, such as a neural network. In some examples, the systems described herein may detect the position of the hand of wearer 1212 and then display the position of the hand on the screen of head mounted display 1202. Additionally or alternatively, in response to determining the position of one or both hands of wearer 1212, the systems described herein may change configuration settings (e.g., volume), perform virtual reality actions, and/or perform other actions.
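As a rough illustration of the hand-tracking flow just described, the sketch below uses a stand-in pose-estimation model; `estimate_keypoints` and its output layout are hypothetical, since the disclosure does not name a specific network or data format.

```python
from dataclasses import dataclass

@dataclass
class HandPose:
    position: tuple      # three-dimensional position of the hand
    orientation: tuple   # orientation of the hand (e.g., a quaternion)
    fingers: list        # configuration of the fingers (joint locations)

def track_hand(frame_sequence, model):
    """Estimate a hand pose for each still image extracted from a video feed."""
    poses = []
    for frame in frame_sequence:
        keypoints = model.estimate_keypoints(frame)  # hypothetical model call
        if keypoints is not None:
            poses.append(HandPose(position=keypoints["wrist"],
                                  orientation=keypoints["orientation"],
                                  fingers=keypoints["finger_joints"]))
    return poses
```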
Fig. 13 is a block diagram of an exemplary system for processing video data into images for transmission over a limited bandwidth channel. In one embodiment, device 1302 may include and/or receive data from multiple cameras (e.g., cameras 1304, 1306, and/or 1308). In some embodiments, device 1302 may be a head mounted display. Additionally or alternatively, device 1302 may be another type of wearable device, a vehicle, and/or a drone. In one embodiment, video processing module 1310 may receive streaming video data from cameras 1304, 1306, and/or 1308 and generate still images, each still image including at most one frame of video data from each camera. In some examples, video processing module 1310 may send data to image transmission module 1312, and image transmission module 1312 may transmit still images via a limited bandwidth channel. In one example, the image transmission module 1312 may transmit the image to a data consumption module 1314 also hosted on device 1302. The data consumption module 1314 may perform various tasks related to the data included in the images, such as constructing a combined video stream with images from different cameras and/or processing the images to determine the information contained in the images. In some embodiments, the image transmission module 1312 may send the image to the data consumption module 1314 via a physical cable with limited bandwidth. Additionally or alternatively, image transmission module 1312 may transmit the image to a device 1320 that is not physically coupled to device 1302. In some embodiments, device 1320 may represent, but is not limited to, a wearable device, a server, and/or a personal computing device. In one embodiment, the image transmission module 1312 may transmit the image over a limited bandwidth wireless connection.
FIG. 14 is a block diagram of an exemplary system for processing visual data for a wearable head-mounted display. The term "visual data" as used herein generally refers to any data that may be captured by a camera. In some examples, the visual data may include streaming video data. Additionally or alternatively, the visual data may include recorded video data and/or still images. As shown in fig. 14, head mounted display 1430 may include side cameras 1402, center camera 1412, and/or display surface 1414. In one embodiment, the side camera 1402 may include cameras 1404, 1406, 1408, and/or 1410. In one embodiment, video processing module 1434 may receive streaming video data from cameras 1404, 1406, 1408, 1410, and/or 1412. In some embodiments, the video processing module 1434 may process the streaming video data into a series of images, each image including at most one frame from each camera stream. In some examples, the image may also include metadata about the camera frame. In one example, the video processing module 1434 may then send each image to the image transmission module 1432, and the image transmission module 1432 transmits the image to other modules within the head mounted display 1430 and/or external modules.
In some embodiments, the augmented reality system 1440 may include a camera input module 1416 that receives data from the image transmission module 1432, processes the data to extract relevant information (e.g., user location and/or controller location), and/or sends data to the augmented reality module 1420. In one embodiment, the augmented reality system may also include a controller input module 1418 that receives input from the controller 1424 and sends data to the augmented reality module 1420. In some embodiments, augmented reality module 1420 may send data to visual output module 1422, and visual output module 1422 sends visual data to display surface 1414 of head mounted display 1430. In some embodiments, some or all of the augmented reality system 1440 may be hosted on a module located within the head mounted display 1430. Additionally or alternatively, some or all of the augmented reality system 1440 may be hosted on a separate device, such as a local server, a local gaming system, and/or a remote server.
In some embodiments, the camera input module 1416 may process the input data in a variety of ways. For example, because the camera 1412 may be mounted on a non-rigid mount, visual data from the camera 1412 may be blurred, may be produced from slightly different angles at different times (e.g., due to bouncing of the camera 1412), and/or may include other visual disturbances. In some examples, camera input module 1416 may use visual data from cameras 1404, 1406, 1408, and/or 1410 to correct for visual disturbances in data from camera 1412. For example, camera 1404 may have a field of view that overlaps the field of view of camera 1412, and camera input module 1416 may use data from camera 1404 to correct problems in the data from camera 1412 that arise within the portion of the field of view of camera 1412 that overlaps the field of view of camera 1404.
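One plausible implementation of this correction is sketched below, under the assumption that the disturbance can be approximated by a per-frame homography: features in the overlapping region are matched between a rigidly mounted side camera and the center camera, and the center frame is re-warped toward the rigid reference using OpenCV. This is an illustrative approach, not the patent's specified method.

```python
import numpy as np
import cv2

def stabilize_center_frame(center_frame, side_frame):
    """Warp the center camera's frame toward a rigidly mounted side camera."""
    orb = cv2.ORB_create()
    kp_c, des_c = orb.detectAndCompute(center_frame, None)
    kp_s, des_s = orb.detectAndCompute(side_frame, None)
    if des_c is None or des_s is None:
        return center_frame  # no features found; leave the frame unchanged
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_c, des_s)
    if len(matches) < 4:
        return center_frame  # overlap too small to estimate a correction
    src = np.float32([kp_c[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return center_frame
    h, w = center_frame.shape[:2]
    return cv2.warpPerspective(center_frame, H, (w, h))
```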
Fig. 15 is a flow diagram of an exemplary method 1500 for processing visual data of a wearable head-mounted display. At step 1510, one or more systems described herein may identify a head mounted display that includes five cameras, wherein one of the five cameras is attached to the right side of the head mounted display, one of the five cameras is attached to the left side of the head mounted display, one of the five cameras is attached centrally to the front of the head mounted display, and two of the five cameras are attached laterally to the front of the head mounted display. In some embodiments, the five cameras may have overlapping fields of view for motion tracking of one or more controllers, and/or for position tracking of the wearer of the head mounted display.
At step 1520, one or more systems described herein may capture visual data of a physical environment surrounding the head mounted display wearer with at least one of the five cameras. In some embodiments, the systems described herein may capture video data of a physical environment.
At step 1530, one or more systems described herein may determine a position of a wearer of the head mounted display relative to the physical environment based on visual data of the physical environment captured by the at least one camera. Additionally or alternatively, the systems described herein may determine a position of one or more controllers relative to a wearer of the head mounted display.
At step 1540, one or more systems described herein may perform an action based on the position of the wearer of the head mounted display relative to the physical environment. For example, the systems described herein may start or stop displaying one or more augmented reality objects, scenes, and/or scene features. In some examples, the systems described herein may activate and/or deactivate an augmented reality effect (e.g., change the statistics of an augmented reality game character based on the character's proximity to a location in the game), modify audio data (e.g., play and/or stop sound effects and/or music), and/or perform any other suitable action related to the augmented reality system. In one example, the systems described herein may display a warning when, based on the wearer's location, the wearer is detected to be too close to a wall, stairwell, and/or other dangerous object.
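As a simple illustration of such a position-based action, the sketch below triggers a warning when the tracked wearer position comes within a threshold distance of a known hazard; the hazard positions and threshold are assumed values, not parameters from the disclosure.

```python
import math

HAZARDS = [(0.0, 0.0, 2.5), (3.0, 0.0, -1.0)]  # hypothetical wall/stair locations
WARN_DISTANCE_M = 0.5  # assumed warning threshold, in meters

def should_warn(wearer_position):
    """Return True if the wearer is dangerously close to any known hazard."""
    return any(math.dist(wearer_position, h) < WARN_DISTANCE_M for h in HAZARDS)
```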
Fig. 16 is a flow diagram of an exemplary method 1600 for efficiently transmitting data received from a video camera. As shown in fig. 16, at step 1610, one or more systems described herein can identify at least two video data streams, each video data stream generated by a different camera. In some examples, the camera may be mounted on a wearable device such as a head mounted display. Additionally or alternatively, the camera may be mounted on another type of device, such as an automobile, a drone, and/or any other type of device having two or more cameras.
In some embodiments, the video cameras may have different exposure lengths, readout start times, and/or readout end times. For example, the first camera may have a shorter exposure length than the second camera, resulting in a difference in readout start and/or end times, because the first camera completes recording a frame before the second camera completes recording a frame. In some embodiments, different cameras may have different exposure lengths because the cameras perform different functions. For example, a camera tracking a landmark to triangulate the position of a wearer of an augmented reality headset may have a longer exposure time than a camera tracking the position of a handheld controller for an augmented reality system because the position of the wearer changes relatively slowly compared to a faster change in the controller position. In some examples, the camera may alternate between shorter and longer exposures.
In some examples, the cameras may have temporally centered exposures. For example, as shown in fig. 17, the camera 1702 may alternate between short and long exposures, with each readout beginning immediately after the exposure ends. In this example, the camera 1704 may similarly alternate between short and long exposures, but may have a shorter long exposure than the camera 1702. In some examples, the long exposures of cameras 1702 and 1704 may be centered such that each camera reaches the middle of its exposure duration at the same time. By centering the exposure times of multiple cameras, the systems described herein may more efficiently collect frames from multiple cameras for placement within a single image, and/or may minimize the delay of waiting for various cameras to complete exposure without causing a time gap in camera coverage.
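A minimal sketch of this temporal centering follows: given a shared target midpoint, each camera's exposure start is scheduled so that all exposures reach their midpoints at the same instant. Microsecond timestamps are an assumed convention.

```python
def centered_exposure_starts(target_midpoint_us, exposure_lengths_us):
    """Schedule exposure starts so every exposure is centered on the target."""
    return [target_midpoint_us - length / 2 for length in exposure_lengths_us]

# Example: an 8 ms position-tracking exposure and a 1 ms controller-tracking
# exposure both reach their midpoints at t = 10 ms.
print(centered_exposure_starts(10_000, [8_000, 1_000]))  # [6000.0, 9500.0]
```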
Returning to fig. 16, at step 1620, one or more systems described herein may receive a set of at least two frames of video data comprising exactly one frame from each of at least two streams of video data. In some examples, the systems described herein may receive data from more than two cameras, and/or the systems described herein may not receive data frames from each camera at every interval. For example, if one camera has a much longer exposure than two other cameras, the system described herein may not receive frames from the long exposure camera during a particular frame collection interval.
At step 1630, one or more systems described herein may place a set of at least two frames of video data received from at least two streams of video data within an image. In some embodiments, the systems described herein may arrange frames based on one or more characteristics of the frames, such as the exposure duration and/or the readout start and/or end times of the frames. For example, as shown in FIG. 18, image 1802 may include frames 1814, 1816, and/or 1818 arranged in a horizontal line, with the vertical placement of each frame determined by its readout start time, such that frames with earlier readout start times are placed higher in the image. In some examples, the systems described herein may also encode metadata as blocks of pixels in the image, with each block of pixels placed above and/or below the relevant frame. In one example, the systems described herein may encode each bit of metadata as an eight-by-eight block of pixels. For example, metadata 1804, 1806, and/or 1808 may correspond to frames 1814, 1816, and/or 1818, respectively. Examples of metadata may include, but are not limited to, exposure duration, gain settings, timestamps, and/or other suitable camera setting information. In some embodiments, the metadata may include a flag indicating the function that the camera was performing when recording the frame (e.g., wearer position tracking and/or controller tracking). In some embodiments, the image may be encoded using a standard encoder, such as a JPEG, bitmap, and/or GIF encoder.
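The sketch below illustrates this metadata encoding, rendering each metadata bit as an eight-by-eight block of black or white pixels in a strip that can be placed above or below its frame; the choice of a 16-bit timestamp is an illustrative assumption.

```python
import numpy as np

BLOCK = 8  # each metadata bit occupies an 8x8 block of pixels

def encode_metadata_strip(bits, strip_width):
    """Render a string of "0"/"1" bits as a strip of 8x8 pixel blocks."""
    strip = np.zeros((BLOCK, strip_width), dtype=np.uint8)
    for i, bit in enumerate(bits):
        x = i * BLOCK
        if x + BLOCK > strip_width:
            break  # this sketch simply truncates metadata that does not fit
        if bit == "1":
            strip[:, x:x + BLOCK] = 255
    return strip

# Example: encode a 16-bit frame timestamp above a 640-pixel-wide frame.
timestamp_bits = format(51_234, "016b")
strip = encode_metadata_strip(timestamp_bits, 640)
```

One reason to favor eight-by-eight blocks over single pixels is robustness: large uniform blocks are more likely to survive a lossy standard encoder such as JPEG, whose transform also operates on eight-by-eight blocks.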
Returning to fig. 16, at step 1640, one or more systems described herein may transmit an image comprising a set of at least two video data frames received from at least two video data streams via a single transmission channel. In some embodiments, the systems described herein may transmit images wirelessly. Additionally or alternatively, the systems described herein may transmit images via a wired connection, such as a Universal Serial Bus (USB) cable. In some embodiments, the systems described herein may transmit images from one component of a device (e.g., a head mounted display) to another component of the same device. Additionally or alternatively, the systems described herein may transmit images from one device to another (e.g., to a server and/or game console).
Fig. 19 is a flow diagram of an exemplary method 1900 for processing visual data of a wearable head-mounted display. In some examples, at step 1910, the system described herein may receive streaming video data from five cameras installed at different locations on an augmented reality headset. In some examples, different cameras may have different exposure lengths, may be tilted at different angles, and/or may be mounted on different portions of the augmented reality headset. At step 1920, the system described herein may place frames from the streaming video data into images such that each image includes at most one frame of video data from each of the five cameras. In some examples, the systems described herein may arrange frames based on exposure length and/or include metadata in the image. At step 1930, the system described herein can process the image to determine the location of the wearer and/or the controller of the augmented reality headset. In some examples, the systems described herein may reassemble one or more video streams from a series of images (each image comprising a video frame). Additionally or alternatively, the systems described herein may analyze frames within an image. At step 1940, the system described herein may perform an augmented reality action based on a location of a wearer or controller of the augmented reality headset. For example, the system described herein may trigger an augmented reality object to appear, disappear, move, and/or change.
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions (e.g., those contained in modules described herein). In their most basic configuration, these computing devices may each include at least one memory device and at least one physical processor.
In some examples, the term "memory device" generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more modules described herein. Examples of memory devices include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDD), Solid State Drives (SSD), optical disk drives, cache, variations or combinations of one or more of these components, or any other suitable storage memory.
In some examples, the term "physical processor" generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the memory device described above. Examples of a physical processor include, but are not limited to, a microprocessor, a microcontroller, a Central Processing Unit (CPU), a Field Programmable Gate Array (FPGA) implementing a soft-core processor, an Application Specific Integrated Circuit (ASIC), portions of one or more of these components, variations or combinations of one or more of these components, or any other suitable physical processor.
Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. Further, in some embodiments, one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more modules described and/or illustrated herein may represent modules stored and configured to run on one or more computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or part of one or more special-purpose computers configured to perform one or more tasks.
Further, one or more modules described herein may convert data, physical devices, and/or representations of physical devices from one form to another. For example, one or more modules described herein may receive image data to be transformed, transform the image data into a pixel array, output a result of the transformation to display an image on the pixel array, use the result of the transformation to display the image and/or video, and store the result of the transformation to create a record of the displayed image and/or video. Additionally or alternatively, one or more modules described herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
In some embodiments, the term "computer-readable medium" generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer readable media include, but are not limited to, transmission type media (e.g., carrier waves) and non-transitory media such as magnetic storage media (e.g., hard disk drives, tape drives, and floppy disks), optical storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic storage media (e.g., solid state drives and flash media), and other distribution systems.
Embodiments of the present disclosure may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some way before being presented to the user, and may include, for example, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include fully generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (e.g., stereoscopic video that produces a three-dimensional effect for the viewer). Further, in some embodiments, the artificial reality may also be associated with an application, product, accessory, service, or some combination thereof, that is used, for example, to create content in the artificial reality and/or is otherwise used in the artificial reality (e.g., to perform an activity in the artificial reality). An artificial reality system that provides artificial reality content may be implemented on a variety of platforms, including a Head Mounted Display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and may be varied as desired. For example, while the steps shown and/or described herein may be shown or discussed in a particular order, these steps need not necessarily be performed in the order shown or discussed. Various exemplary methods described and/or illustrated herein may also omit one or more steps described or illustrated herein, or include additional steps in addition to those disclosed.
The previous description is provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. The exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the disclosure. The embodiments disclosed herein are to be considered in all respects as illustrative and not restrictive. In determining the scope of the present disclosure, reference should be made to the appended claims and their equivalents.
Unless otherwise noted, the terms "connected to" and "coupled to" (and derivatives thereof) as used in the specification and claims are to be construed to allow both direct and indirect (i.e., via other elements or components) connection. Furthermore, the terms "a" or "an" as used in the specification and claims are to be interpreted to mean "at least one of." Finally, for ease of use, the terms "including" and "having" (and derivatives thereof) as used in the specification and claims are interchangeable with and have the same meaning as the word "comprising."

Claims (15)

1. An apparatus, comprising:
a head mounted display, comprising:
four side cameras, comprising:
a camera mounted on a right side of the head mounted display;
a camera mounted on a left side of the head mounted display;
a camera mounted on the front of the head mounted display and located to the right of the center of the front of the head mounted display; and
a camera mounted on the front of the head mounted display and located to the left of the center of the front of the head mounted display;
a central camera mounted on the front of the head mounted display; and
at least one display surface that displays visual data to a wearer of the head mounted display.
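For readers who find the camera layout easier to scan as data, the arrangement recited in claim 1 can be restated as a plain Python dictionary; this is purely illustrative, and every field name below is invented rather than claim language:

    # Illustrative restatement of claim 1's five-camera layout; not claim language.
    CAMERA_RIG = {
        "right_side":  {"surface": "right", "position": "side"},
        "left_side":   {"surface": "left",  "position": "side"},
        "front_right": {"surface": "front", "position": "right of center"},
        "front_left":  {"surface": "front", "position": "left of center"},
        "central":     {"surface": "front", "position": "center"},
    }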
2. The apparatus of claim 1, wherein:
the camera mounted on the left side of the head mounted display is tilted downward relative to the camera mounted on the front of the head mounted display and located to the left of the center of the front of the head mounted display; and
the camera mounted on the right side of the head mounted display is tilted downward relative to the camera mounted on the front of the head mounted display and located to the right of the center of the front of the head mounted display.
3. The apparatus of claim 1, wherein the central camera is mounted on the head mounted display by a non-rigid mount, and optionally wherein:
the four side cameras are mounted on the head mounted display by at least one rigid mount;
the fields of view of the four side cameras overlap the field of view of the central camera; and
the head mounted display transmits data from the fields of view of the four side cameras to a system that uses the data from the fields of view of the four side cameras that overlap the field of view of the central camera to correct for visual disturbances caused by the non-rigid mounting of the central camera.
4. The apparatus of claim 1, wherein the four side cameras are mounted on the head mounted display by rigid mounts.
5. The apparatus of claim 1, wherein:
a display surface of the head mounted display displays the visual data to the wearer based at least in part on a position of the head mounted display in a physical environment; and
at least one of the four side cameras and the central camera captures visual environment data indicative of the position of the head mounted display in the physical environment.
6. The apparatus of claim 1, wherein at least one of the four side cameras and the central camera tracks a position of a controller operated by a wearer of the head mounted display, or wherein at least one of the four side cameras and the central camera tracks a position of at least one hand of the wearer of the head mounted display.
7. The apparatus of claim 1, wherein each of the four side cameras is mounted parallel to a surface of the head mounted display on which it is mounted, or wherein:
the front of the head mounted display at least partially covers a face of a wearer of the head mounted display;
the right side of the head mounted display is adjacent to the front of the head mounted display; and
the left side of the head mounted display is adjacent to the front of the head mounted display, opposite the right side of the head mounted display.
8. The apparatus of claim 1, wherein the central camera is mounted on the front of the head mounted display higher than the camera mounted on the front of the head mounted display and located to the left of the center of the front of the head mounted display.
9. A system, comprising:
a head mounted display comprising:
five cameras, including:
four side cameras, comprising:
a camera mounted on a right side of the head mounted display;
a camera mounted on a left side of the head mounted display;
a camera mounted on the front of the head mounted display and located to the right of the center of the front of the head mounted display; and
a camera mounted on the front of the head mounted display and located to the left of the center of the front of the head mounted display; and
a central camera mounted on the front of the head mounted display; and
at least one display surface that displays visual data to a wearer of the head mounted display; and
an augmented reality system that receives visual data input from at least one of the five cameras and transmits visual data output to a display surface of the head mounted display.
10. The system of claim 9, wherein the augmented reality system receives visual data input from at least one of the five cameras and transmits the visual data output to a display surface of the head mounted display by:
combining streaming visual data inputs received from all five cameras into combined visual data; and
displaying at least a portion of the combined visual data on a display surface of the head mounted display.
11. The system of claim 9, wherein the augmented reality system:
identifying a controller device within the visual data input;
determining a position of the controller device relative to a wearer of the head mounted display based on at least one visual cue within the visual data input; and
performing an augmented reality action based at least in part on a position of the controller device relative to a wearer of the head mounted display, or wherein the augmented reality system:
identifying a physical location cue within visual data input from at least two of the five cameras;
determining a physical location of a wearer of the head mounted display based at least in part on triangulation of physical location cues within visual data input from the at least two cameras; and
performing an augmented reality action based at least in part on a physical location of a wearer of the head mounted display.
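The triangulation recited in the second alternative of claim 11 is, in essence, classical two-view geometry. As a hedged illustration (not the patent's algorithm), the sketch below recovers the depth of a location cue from its disparity between two rigidly mounted cameras with parallel optical axes; the focal length, baseline, and pixel coordinates are invented example values:

    def stereo_depth(f_px: float, baseline_m: float, x_left: float, x_right: float) -> float:
        """Depth of a landmark from its horizontal disparity between two views."""
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("landmark must have positive disparity")
        return f_px * baseline_m / disparity

    # Example: 640 px focal length, 9 cm baseline between the two front cameras,
    # landmark at x=420 px in the left view and x=380 px in the right view.
    print(stereo_depth(640.0, 0.09, 420.0, 380.0))  # ~1.44 m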
12. The system of claim 9, wherein the augmented reality system receives visual data input from at least one of the five cameras by:
receiving, from the central camera, visual data that includes a visual disturbance due to the non-rigid mounting of the central camera;
receiving, from at least one of the four side cameras, visual data that does not include the visual disturbance, due to the rigid mounting of the at least one of the four side cameras; and
correcting the visual disturbance in the visual data from the central camera using the visual data from the at least one of the four side cameras.
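One plausible, purely illustrative reading of claim 12's correction step: because the side cameras are rigidly mounted, features they share with the wobbling central camera predict where those features should appear, and the residual displacement estimates the disturbance. The sketch below reduces feature matching to pre-matched 2D points assumed to be in a shared image frame; a real system would estimate a full homography or pose offset rather than a median shift:

    import statistics

    def estimate_offset(center_pts, side_pts):
        """Median 2D displacement between features both cameras observe."""
        dx = statistics.median(c[0] - s[0] for c, s in zip(center_pts, side_pts))
        dy = statistics.median(c[1] - s[1] for c, s in zip(center_pts, side_pts))
        return dx, dy

    def correct(point, offset):
        """Remove the estimated mounting wobble from a central-camera point."""
        return point[0] - offset[0], point[1] - offset[1]

    offset = estimate_offset([(105, 52), (203, 148)], [(100, 50), (200, 146)])
    print(correct((150, 100), offset))  # -> (146.0, 98.0)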
13. The system of claim 9, wherein the augmented reality system:
identifying a first controller device and a second controller device;
determining that the first controller device is visually obscured by the second controller device in visual data input from one of the five cameras;
determining that the first controller device is not visually obscured by the second controller device in visual data input from a different camera of the five cameras;
determining a position of the first controller device based at least in part on visual data from the different camera; and
performing an augmented reality action based at least in part on a location of the first controller device.
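Claim 13's occlusion handling amounts to a per-view fallback: if a controller is hidden in one camera's view, use a view in which it is not. A minimal sketch, with detections simplified to a position or None and all names invented for illustration:

    from typing import Optional, Sequence, Tuple

    Position = Tuple[float, float, float]

    def first_unoccluded(detections: Sequence[Optional[Position]]) -> Optional[Position]:
        """Return the controller position from the first camera whose view is
        not blocked; None entries mark views in which it was occluded."""
        for pos in detections:
            if pos is not None:
                return pos
        return None

    # Controller hidden from two cameras but visible to a side camera:
    print(first_unoccluded([None, None, (0.2, -0.1, 0.6), None, None]))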
14. The system of claim 9, wherein, for each of the five cameras, the field of view of the camera at least partially overlaps with the field of view of at least one other of the five cameras.
15. A computer-implemented method for motion tracking a head mounted display, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:
identifying a head mounted display comprising five cameras, wherein one of the five cameras is attached to a right side of the head mounted display, one of the five cameras is attached to a left side of the head mounted display, one of the five cameras is centrally attached to a front of the head mounted display, and two of the five cameras are attached to the front of the head mounted display, one on each side of the center;
capturing, via at least one camera of the five cameras, visual data of a physical environment surrounding a wearer of the head mounted display;
determining a position of a wearer of the head mounted display relative to the physical environment based on visual data of the physical environment captured by the at least one camera; and
performing an action based on a position of a wearer of the head mounted display relative to the physical environment, and optionally the method further comprises:
determining a location of a controller device based on visual data of the physical environment captured by the at least one camera; and
performing an action based on the position of the controller device.
CN202080017851.0A 2019-03-05 2020-03-03 Apparatus, system, and method for wearable head-mounted display Pending CN113557465A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201962814249P 2019-03-05 2019-03-05
US62/814,249 2019-03-05
US16/655,492 2019-10-17
US16/655,492 US20200285056A1 (en) 2019-03-05 2019-10-17 Apparatus, systems, and methods for wearable head-mounted displays
PCT/US2020/020778 WO2020180859A1 (en) 2019-03-05 2020-03-03 Apparatus, systems, and methods for wearable head-mounted displays

Publications (1)

Publication Number Publication Date
CN113557465A 2021-10-26

Family

ID=72335180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080017851.0A Pending CN113557465A (en) 2019-03-05 2020-03-03 Apparatus, system, and method for wearable head-mounted display

Country Status (7)

Country Link
US (1) US20200285056A1 (en)
EP (1) EP3935436A1 (en)
JP (1) JP2022522579A (en)
KR (1) KR20210132157A (en)
CN (1) CN113557465A (en)
TW (1) TW202040216A (en)
WO (1) WO2020180859A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11265487B2 (en) * 2019-06-05 2022-03-01 Mediatek Inc. Camera view synthesis on head-mounted display for virtual reality and augmented reality
FR3116351B1 (en) * 2020-11-18 2023-06-16 Thales Sa Head-up display device and associated display method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102308937B1 (en) * 2017-02-28 2021-10-05 매직 립, 인코포레이티드 Virtual and real object recording on mixed reality devices

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101849416A (en) * 2007-09-14 2010-09-29 Doo技术公司 Method and system for processing of images
CN101626513A (en) * 2009-07-23 2010-01-13 深圳大学 Method and system for generating panoramic video
US20150138065A1 (en) * 2013-11-21 2015-05-21 Nvidia Corporation Head-mounted integrated interface
WO2017117675A1 (en) * 2016-01-08 2017-07-13 Sulon Technologies Inc. Head mounted device for augmented reality
CN109076165A (en) * 2016-05-02 2018-12-21 华为技术有限公司 Head-mounted display content capture and shared
US20180082482A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Display system having world and user sensors
CN109643145A (en) * 2016-09-22 2019-04-16 苹果公司 Display system with world's sensor and user sensor
CN106959762A (en) * 2017-04-24 2017-07-18 英华达(上海)科技有限公司 Virtual reality system and method

Also Published As

Publication number Publication date
JP2022522579A (en) 2022-04-20
US20200285056A1 (en) 2020-09-10
TW202040216A (en) 2020-11-01
WO2020180859A1 (en) 2020-09-10
KR20210132157A (en) 2021-11-03
EP3935436A1 (en) 2022-01-12

Similar Documents

Publication Publication Date Title
JP7472362B2 (en) Receiving method, terminal and program
US10957104B2 (en) Information processing device, information processing system, and information processing method
JP7277451B2 (en) racing simulation
US10543430B2 (en) Expanded field of view re-rendering for VR spectating
EP3451681B1 (en) Information processing device, control method of information processing device, and program
CN107801045B (en) Method, device and system for automatically zooming when playing augmented reality scene
US8878846B1 (en) Superimposing virtual views of 3D objects with live images
JP4903888B2 (en) Image display device, image display method, and image correction method
KR101804199B1 (en) Apparatus and method of creating 3 dimension panorama image
CN105939497B (en) Media streaming system and media streaming method
CN113557465A (en) Apparatus, system, and method for wearable head-mounted display
EP3619685A1 (en) Head mounted display and method
US20240077941A1 (en) Information processing system, information processing method, and program
US9161012B2 (en) Video compression using virtual skeleton
WO2020244078A1 (en) Football match special effect presentation system and method, and computer apparatus
US11831853B2 (en) Information processing apparatus, information processing method, and storage medium
US20230154106A1 (en) Information processing apparatus, information processing method, and display apparatus
CN110891168A (en) Information processing apparatus, information processing method, and storage medium
US11128836B2 (en) Multi-camera display
WO2018109265A1 (en) A method and technical equipment for encoding media content
WO2018007779A1 (en) Augmented reality system and method
US20200257112A1 (en) Content generation apparatus and method
JP5222407B2 (en) Image display device, image display method, and image correction method
US20220337805A1 (en) Reproduction device, reproduction method, and recording medium
EP3553629B1 (en) Rendering a message within a volumetric data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: California, USA

Applicant after: Meta Platforms Technologies, LLC

Address before: California, USA

Applicant before: Facebook Technologies, LLC

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211026
