CN117957421A - VCSEL chip for generating linear structured light patterns and flood illumination - Google Patents

VCSEL chip for generating linear structured light patterns and flood illumination

Info

Publication number
CN117957421A
Authority
CN
China
Prior art keywords
vcsel
vcsel array
light
dca
stripe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280062088.2A
Other languages
Chinese (zh)
Inventor
阿伦·库马尔·纳拉尼·查克拉瓦图拉
本杰明·尼古拉斯·琼斯
乔纳坦·金兹伯格
李军
劳伦斯·昌勇·王
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Technologies LLC
Original Assignee
Meta Platforms Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/572,395 external-priority patent/US20230085063A1/en
Application filed by Meta Platforms Technologies LLC filed Critical Meta Platforms Technologies LLC
Priority claimed from PCT/US2022/043356 external-priority patent/WO2023039288A1/en
Publication of CN117957421A publication Critical patent/CN117957421A/en
Pending legal-status Critical Current


Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

A Vertical Cavity Surface Emitting Laser (VCSEL) chip includes a Structured Light (SL) VCSEL array and a fill VCSEL array. The SL VCSEL array includes a plurality of first VCSELs located on a substrate. The fill VCSEL array includes a plurality of second VCSELs on the substrate, and is positioned orthogonal to the SL VCSEL array. Light emitted from the SL VCSEL array may be used to form a stripe pattern, and light from the SL VCSEL array and the fill VCSEL array together may be used to form flood illumination.

Description

VCSEL chip for generating linear structured light patterns and flood illumination
Technical Field
The present disclosure relates generally to vertical cavity surface emitting laser (VCSEL) arrays, and more particularly to VCSEL chips for generating linear structured light patterns and flood illumination.
Background
A depth sensing system determines depth information describing a local area. Conventional depth sensing systems, particularly those with small form factors (e.g., head-mounted), are typically capable of projecting either a structured light pattern or flood illumination, but not both.
The present disclosure seeks to mitigate at least some of the above-identified deficiencies and disadvantages.
Disclosure of Invention
A depth camera assembly (DCA) may determine depth information of a local area. The DCA may include at least one camera and at least one illuminator. The illuminator may comprise a VCSEL chip. The VCSEL chip can produce one or more linear Structured Light (SL) patterns or flood illumination. The one or more SL patterns may be used to illuminate the local area, for example, with an SL stripe pattern, or to selectively illuminate one or more portions of the local area. Flood illumination may illuminate the entire local area. One or more cameras of the DCA may capture images of the illuminated local area. The DCA may use the acquired images and the depth sensing mode (e.g., SL or assisted stereo for linear stripe patterns, or time-of-flight (ToF) for flood illumination) to determine depth information.
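The mode-to-illumination mapping described above can be sketched as follows. This is an illustrative sketch, not part of the disclosure; the mode names and return values are assumptions:

```python
def select_illumination(depth_mode: str) -> str:
    """Map a depth sensing mode to the illumination the VCSEL chip emits.

    SL and assisted stereo use the linear stripe pattern (SL VCSEL array
    only); ToF uses flood illumination (SL array + fill array together).
    """
    if depth_mode in ("structured_light", "assisted_stereo"):
        return "stripe_pattern"
    if depth_mode == "time_of_flight":
        return "flood"
    raise ValueError(f"unknown depth sensing mode: {depth_mode}")
```

A controller following this sketch would call `select_illumination` once per frame before instructing the VCSEL chip.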
According to a first aspect of the present disclosure, there is provided a Vertical Cavity Surface Emitting Laser (VCSEL) chip comprising a first VCSEL array comprising a plurality of first VCSELs on a substrate and a second VCSEL array comprising a plurality of second VCSELs on the substrate, the second VCSEL array being positioned orthogonal to the first VCSEL array on the substrate, and wherein light emitted from the first VCSEL array is used to form a stripe pattern, and wherein light emitted from the first VCSEL array and the second VCSEL array together is used to form flood illumination.
In some embodiments, each of the plurality of first VCSELs may have a respective emission region over a first length and may further include a third VCSEL array including a plurality of third VCSELs on the substrate, wherein each of the plurality of third VCSELs may have a respective emission region over a third length that may be longer than the first length and the third VCSEL array may be oriented parallel to the first VCSEL array.
In some embodiments, at least a portion of the third VCSEL array can be staggered within the first VCSEL array.
In some embodiments, each of the plurality of second VCSELs may have an elliptical emission area.
In some embodiments, two adjacent emitter regions of the plurality of second VCSELs may be separated by a gap, and a first VCSEL of the first VCSEL array may be positioned along a line bisecting the gap.
In some embodiments, the first VCSEL array may be arranged as a plurality of parallel stripe sources, and each of the plurality of stripe sources may include a plurality of first VCSELs.
In some embodiments, adjacent ones of the plurality of parallel strip sources may be separated by respective gaps, and for each gap there may be a corresponding second VCSEL whose emitter region may be positioned along a line parallel to the adjacent strip sources and passing through the center of the gap.
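The gap-bisecting placement in the embodiment above can be expressed numerically. This is a hypothetical sketch assuming a one-dimensional coordinate along the axis perpendicular to the stripe sources; the function name and units are illustrative:

```python
def fill_emitter_positions(stripe_positions):
    """Given sorted coordinates of parallel stripe sources, return the
    coordinate of the fill VCSEL emitter for each gap, placed on the line
    parallel to the adjacent stripe sources through the gap's center."""
    return [(a + b) / 2.0 for a, b in zip(stripe_positions, stripe_positions[1:])]
```

For three stripe sources there are two gaps, hence two fill emitters, one centered in each gap.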
In some embodiments, each of the plurality of stripe sources may be addressable, and wherein light from each of the plurality of stripe sources may correspond to a different stripe in the stripe pattern.
In some embodiments, the second VCSEL array can be arranged to form an addressable single stripe source, and wherein light from the single stripe source can fill in dark areas between stripes in the stripe pattern to form flood illumination.
In some embodiments, light from the first VCSEL array may be refracted by a cylindrical lens to form the stripe pattern, and light from the first VCSEL array and the second VCSEL array may be refracted by the cylindrical lens to form flood illumination.
In some embodiments, the VCSEL chip may be part of a Depth Camera Assembly (DCA), and a controller of the DCA may be configured to: select a depth sensing mode for a local area of the DCA; instruct the VCSEL chip to emit light according to the selected depth sensing mode; and determine depth information of the local area using an acquired image of the local area illuminated by the emitted light from the VCSEL chip.
In some embodiments, the depth sensing mode may be selected from the group consisting of: assisted stereo, time of flight, and structured light.
According to a second aspect of the present disclosure, there is provided a Depth Camera Assembly (DCA) comprising: a Vertical Cavity Surface Emitting Laser (VCSEL) chip comprising a first VCSEL array comprising a plurality of first VCSELs on a substrate and a second VCSEL array comprising a plurality of second VCSELs on the substrate, the second VCSEL array being positioned orthogonal to the first VCSEL array on the substrate; an optical assembly configured to condition light from the VCSEL chip and project the conditioned light into a local area of the DCA, the conditioned light forming one of a stripe pattern or flood illumination, wherein light emitted from the first VCSEL array is used to form the stripe pattern, and wherein light emitted from the first VCSEL array and the second VCSEL array together is used to form the flood illumination; a camera configured to capture an image of the local area illuminated by the conditioned light; and a controller configured to: instruct the VCSEL chip to emit light to form one of the flood illumination or the stripe pattern, and determine depth information for the local area using the captured image.
In some embodiments, the controller may be configured to: select a depth sensing mode for the local area of the DCA, wherein the depth sensing mode may be selected from the group consisting of: assisted stereo, time of flight, and structured light; and instruct the VCSEL chip to emit light according to the selected depth sensing mode.
In some embodiments, each of the plurality of first VCSELs may have a respective emission region over a first length, the DCA may further include a third VCSEL array including a plurality of third VCSELs on the substrate, wherein each of the plurality of third VCSELs may have a respective emission region over a third length that may be longer than the first length, and the third VCSEL array may be oriented parallel to the first VCSEL array.
In some embodiments, at least a portion of the third VCSEL array can be staggered within the first VCSEL array.
In some embodiments, the optical assembly may include a cylindrical lens by which light from the first VCSEL array may be refracted to form the stripe pattern, and light from the first VCSEL array and the second VCSEL array may be refracted to form flood illumination.
In some embodiments, the first VCSEL array may include two adjacent stripe sources that may be parallel to each other and may be separated by a gap, and there may be a corresponding second VCSEL whose emitter region may be positioned along a line parallel to the two adjacent stripe sources and passing through the center of the gap.
In some embodiments, the first VCSEL array may be arranged as a plurality of parallel stripe sources, and each of the plurality of stripe sources may include a plurality of first VCSELs, and adjacent emission regions of a plurality of second VCSELs may be separated by respective gaps, and each of the plurality of stripe sources may be positioned to bisect a different gap.
In some embodiments, the DCA may further include a laser driver configured to provide drive currents to a first stripe source of the first VCSEL array and a second stripe source of the second VCSEL array, and the drive current provided to the second stripe source may be one eighth of the drive current provided to the first stripe source.
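As a minimal sketch of that current ratio, assuming the 8:1 relationship of the embodiment above is a fixed constant (the function name and units are illustrative, not from the disclosure):

```python
SL_TO_FILL_CURRENT_RATIO = 8  # per the embodiment above

def fill_stripe_current_ma(sl_stripe_current_ma: float) -> float:
    """Drive current for a fill-array stripe source: one eighth of the
    current supplied to an SL-array stripe source (units assumed mA)."""
    return sl_stripe_current_ma / SL_TO_FILL_CURRENT_RATIO
```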
According to a third aspect of the present disclosure, there is provided a non-transitory computer readable medium configured to store program code instructions that, when executed by a processor of a Depth Camera Assembly (DCA), cause the DCA to perform steps comprising: instructing a Vertical Cavity Surface Emitting Laser (VCSEL) chip to emit light to form one of a flood illumination or a stripe pattern, wherein the VCSEL chip comprises a first VCSEL array comprising a plurality of first VCSELs on a substrate and a second VCSEL array comprising a plurality of second VCSELs on the substrate, the second VCSEL array being positioned orthogonal to the first VCSEL array on the substrate; conditioning light from the VCSEL chip via an optical assembly; projecting the conditioned light into a local area of the DCA, the conditioned light forming one of a stripe pattern or flood illumination, wherein light emitted from the first VCSEL array is used to form the stripe pattern, and wherein light emitted from the first VCSEL array and the second VCSEL array together is used to form the flood illumination; capturing an image of the local area illuminated by the conditioned light; and determining depth information for the local area using the captured image.
It should be understood that any feature described herein as being suitable for incorporation into one or more aspects or one or more embodiments of the present disclosure is intended to be generalized to any and all aspects, and any and all embodiments of the present disclosure. Other aspects of the present disclosure will be appreciated by those skilled in the art from the specification, claims and drawings of the present disclosure. The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.
Drawings
Fig. 1A is a perspective view of a headset implemented as an eyewear device in accordance with one or more embodiments of the present disclosure.
Fig. 1B is a perspective view of a head mounted device implemented as a Head Mounted Display (HMD) in accordance with one or more embodiments of the present disclosure.
Fig. 2 is a block diagram of a DCA in accordance with one or more embodiments of the present disclosure.
Fig. 3 is a schematic diagram of a DCA in a local area in accordance with one or more embodiments of the present disclosure.
Fig. 4A is a plan view of a linear SL pattern according to one or more embodiments of the present disclosure.
Fig. 4B is a plan view of flood lighting in accordance with one or more embodiments of the present disclosure.
Fig. 5A is a plan view of a VCSEL chip according to one or more embodiments of the present disclosure.
Fig. 5B is a portion of the VCSEL chip of fig. 5A.
Fig. 5C is an exemplary current driver for the VCSEL chip of fig. 5A.
Fig. 6 is a flow diagram illustrating a process for generating a linear SL pattern or flood illumination in accordance with one or more embodiments of the present disclosure.
Fig. 7 is a system including a headset device in accordance with one or more embodiments of the present disclosure.
The figures depict various embodiments for purposes of illustration only. Those skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Detailed Description
A Depth Camera Assembly (DCA) determines depth information for a local area. The DCA may be integrated into, for example, a head mounted device. The DCA includes at least one illuminator that includes a VCSEL chip. The DCA may instruct the VCSEL chip to emit one or more different linear SL patterns or flood illumination according to a particular depth sensing mode. Depth sensing modes may include, for example, SL, time-of-flight (ToF), and assisted stereo. ToF may refer to indirect ToF or direct ToF. Assisted stereo refers to the use of SL to provide texture in the acquired images, which are then processed by the DCA using conventional stereo methods. If the depth sensing mode is SL or assisted stereo, the DCA instructs the VCSEL chip to emit a linear SL pattern. Conversely, if the depth sensing mode is ToF, the DCA instructs the VCSEL chip to emit flood illumination. The camera assembly captures an image of the illuminated local area. The DCA uses the acquired images and the depth sensing mode (e.g., SL or assisted stereo for linear stripe patterns, or ToF for flood illumination) to determine depth information.
The VCSEL chip illuminates the local area according to instructions from the controller of the DCA. The VCSEL chip includes an SL VCSEL array and a fill VCSEL array. The SL VCSEL array includes a plurality of first VCSELs located on a substrate. In some embodiments, the plurality of first VCSELs have substantially rectangular emission areas, and each of the plurality of first VCSELs has a respective emitter region over a first length (e.g., the long dimension of the rectangle). The fill VCSEL array includes a plurality of second VCSELs on the substrate and is positioned orthogonal to the SL VCSEL array. In some embodiments, each of the plurality of second VCSELs has a respective emitter region over a second length.
The DCA is configured to condition light from the VCSEL chip (e.g., via an optical assembly) and to project the conditioned light into a local area of the DCA. The DCA includes at least one cylindrical lens for spreading light from the VCSEL chip substantially in one dimension. In this way, light from the SL VCSEL array is spread to form a linear SL pattern (i.e., a stripe pattern) comprising parallel light stripes separated by dark spaces. Note that the different rows of VCSELs in the SL VCSEL array are individually addressable. In this way, some or all of the "stripes" of the linear SL pattern may be selectively activated, enabling a different set of linear SL patterns to be obtained based on which rows of VCSELs are active and which are inactive. The DCA may also output flood illumination by instructing both the SL VCSEL array and the fill VCSEL array to emit light simultaneously. Light from the fill VCSEL array is spread by the cylindrical lens to form a second pattern. And due to the positioning of the fill VCSEL array, the resulting second pattern acts to fill the dark spaces of the linear SL pattern, resulting in flood illumination.
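The row-addressing behavior described above can be sketched as follows. This is an illustrative model, not part of the disclosure; the '#'/'.' rendering and function names are assumptions:

```python
def stripe_pattern(active_rows, n_rows):
    """Render the linear SL pattern for a set of individually addressed
    VCSEL rows: '#' where a stripe is lit, '.' where the space is dark."""
    return "".join("#" if r in active_rows else "." for r in range(n_rows))

def num_patterns(n_rows):
    """Distinct linear SL patterns obtainable from n addressable rows
    (including the all-off case): one per subset of rows."""
    return 2 ** n_rows
```

For example, activating rows 0 and 2 of a four-row SL array yields a pattern with lit stripes separated by dark spaces; turning on the fill array as well would light those dark spaces, producing flood illumination.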
The DCA captures (e.g., via a camera assembly) images of the local area illuminated by light from the VCSEL chip. The DCA uses the acquired images and the depth sensing mode (e.g., SL, ToF, assisted stereo) to determine depth information for the illuminated portion of the local area.
As described above, the DCA can use the VCSEL chip to generate different linear SL patterns or to generate flood illumination. This is in contrast to conventional depth sensing systems, which are typically limited to either SL illumination or flood illumination, but not both.
Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, and may include, for example, virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include entirely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or multiple channels (e.g., stereoscopic video that produces a three-dimensional effect for the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, which are used to create content in the artificial reality and/or are otherwise used in the artificial reality. The artificial reality system that provides the artificial reality content may be implemented on a variety of platforms, including a head-mounted device connected to a host computer system, a standalone head-mounted device, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Fig. 1A is a perspective view of a head-mounted device 100 implemented as an eyewear device in accordance with one or more embodiments. In some embodiments, the eyewear device is a near eye display (NED). In general, the head-mounted device 100 may be worn on the face of a user such that content (e.g., media content) is presented using a display assembly and/or an audio system. However, the head-mounted device 100 may also be used such that media content is presented to a user in a different manner. Examples of media content presented by the head-mounted device 100 include one or more images, video, audio, or some combination thereof. The head-mounted device 100 includes a frame and may include a display assembly (which includes one or more display elements 120), a Depth Camera Assembly (DCA), an audio system, and a position sensor 190, among other components. Although fig. 1A shows example locations where components of the head-mounted device 100 are located on the device, these components may be located elsewhere on the head-mounted device 100, on a peripheral device paired with the head-mounted device 100, or some combination thereof. Similarly, there may be more or fewer components on the head-mounted device 100 than shown in fig. 1A.
The frame 110 holds other components of the headset 100. The frame 110 includes: a front portion that holds one or more display elements 120, and an end piece (e.g., a temple) that attaches to the user's head. The front of the frame 110 bridges the top of the user's nose. The length of the end piece may be adjustable (e.g., an adjustable temple length) to suit different users. The end piece may also include a portion that curves behind the user's ear (e.g., a boot, earpieces).
One or more display elements 120 provide light to a user wearing the headset 100. As shown, the head mounted device includes one display element 120 for each eye of the user. In some embodiments, the display element 120 generates image light that is provided to the eyebox of the head-mounted device 100. The eyebox is the position in space occupied by the user's eyes when wearing the headset 100. For example, the display element 120 may be a waveguide display. The waveguide display includes a light source (e.g., a two-dimensional source, one or more line sources, one or more point sources, etc.) and one or more waveguides. Light from the light source is coupled into one or more waveguides that output light in a manner such that pupil replication exists in the eyebox of the head-mounted device 100. The coupling-in and/or coupling-out of light from one or more waveguides may be accomplished using one or more diffraction gratings. In some embodiments, the waveguide display includes a scanning element (e.g., waveguide, mirror, etc.) that scans light from the light source as it is coupled into the one or more waveguides. Note that in some embodiments, one or both of the display elements 120 are opaque and do not transmit light from a localized area around the head-mounted device 100. The local area is the area around the head-mounted device 100. For example, the local area may be a room in which the user wearing the head mounted device 100 is located, or the user wearing the head mounted device 100 may be outside and the local area is an outdoor area. In this context, the headset 100 generates VR content. Alternatively, in some embodiments, one or both of the display elements 120 are at least partially transparent, such that light from a localized region may be combined with light from the one or more display elements to generate AR content and/or MR content.
In some embodiments, the display element 120 does not generate image light, but rather acts as a lens that transmits light from a localized area to an eyebox. For example, one or both of the display elements 120 may be an uncorrected (non-prescription) lens or a prescription lens (e.g., a single vision lens, a bifocal lens, and a trifocal lens, or a progressive lens) that helps correct a user's vision defects. In some embodiments, display element 120 may be polarized and/or tinted to protect the user's eyes from the sun.
In some embodiments, display element 120 may include additional optics blocks (not shown). The optics block may include one or more optical elements (e.g., lenses, fresnel lenses, etc.) that direct light from the display element 120 to the eyebox. The optics block may, for example, correct aberrations in some or all of the image content, magnify some or all of the image, or some combination thereof.
The DCA determines depth information for a portion of the local area around the headset 100. The DCA includes a camera assembly, a DCA controller 150, and an illuminator 140. The DCA instructs the illuminator 140 to illuminate at least a portion of the local area with light according to the particular depth sensing mode. The depth sensing mode may be, for example, SL, ToF, or assisted stereo. In some embodiments, if the depth sensing mode is SL or assisted stereo, DCA controller 150 instructs illuminator 140 to emit a linear SL pattern, and if the depth sensing mode is ToF, DCA controller 150 instructs illuminator 140 to emit flood illumination. Note that in some embodiments DCA controller 150 can also use ToF to process images of local areas illuminated by the linear SL pattern. The DCA is discussed in detail below with respect to, for example, figs. 2-7.
For example, the DCA may instruct the VCSEL chip to emit one or more different linear SL patterns or flood illumination according to a particular depth sensing mode. The light may be, for example, a linear SL pattern (e.g., parallel lines) or infrared (IR) flood illumination. In some embodiments, one or more imaging devices 130 acquire images of portions of the local area that include light from illuminator 140. Fig. 1A shows a single illuminator 140 and two imaging devices 130.
Illuminator 140 projects a set of possible linear SL patterns or flood illumination into the local area or a portion of the local area. Illuminator 140 includes a VCSEL chip and an optical assembly. The VCSEL chip includes an SL VCSEL array and a fill VCSEL array. The SL VCSEL array includes a plurality of first VCSELs located on a substrate, and each of the plurality of first VCSELs has a respective emission region over at least a first length. In some embodiments, the VCSEL chip may include an additional VCSEL array parallel to the SL VCSEL array, whose respective emission regions are longer than the first length. Furthermore, in some embodiments, a particular VCSEL array can include emission regions of different lengths. For example, rows of VCSELs with longer emission regions may be used to determine depth in the far field, while rows of VCSELs with shorter emission regions may be used to determine depth in the near field. The fill VCSEL array includes a plurality of second VCSELs on the substrate and is positioned orthogonal to the SL VCSEL array. Light emitted from the SL VCSEL array is used to form a stripe pattern, and light emitted from the SL VCSEL array and the fill VCSEL array together is used to form flood illumination.
The optical assembly is configured to condition light from the VCSEL chip and to project the conditioned light into the local area. The optical assembly includes at least one cylindrical lens for spreading light from the VCSEL chip substantially in one dimension. In this way, light from the SL VCSEL array is spread to form a first linear SL pattern (i.e., a stripe pattern) comprising parallel light stripes separated by dark regions. The dark regions are the dark spaces between adjacent light stripes. In some embodiments, different rows of VCSELs in the SL VCSEL array are individually addressable. In this way, some or all of the "stripes" of the linear SL pattern may be selectively activated, enabling a different set of linear SL patterns to be obtained based on which rows of VCSELs are active and which are inactive.
The illuminator 140 may also output flood illumination according to instructions from the DCA controller 150. For flood illumination, both the SL VCSEL array and the fill VCSEL array are instructed (by DCA controller 150) to emit light. Light from the fill VCSEL array is spread by the cylindrical lens to form a second pattern. And due to the positioning of the fill VCSEL array, the resulting second pattern acts to fill the dark spaces of the first linear SL pattern produced by the SL VCSEL array, thereby forming flood illumination.
In some embodiments, the optical assembly may tile the light emitted by the VCSEL chip such that the linear SL pattern or flood illumination is tiled over different portions of the local area. For example, the optical assembly may include one or more diffraction gratings that tile the light emitted from the VCSEL chip. In some embodiments, the optical assembly may include a steerable mirror or some other optical element to dynamically direct light, providing additional selectivity in projecting the linear SL pattern or flood illumination into the local area.
The camera assembly captures an image of the local area illuminated by light from the VCSEL chip. The camera assembly includes one or more imaging devices 130 (e.g., cameras).
DCA controller 150 calculates depth information based on the images acquired by imaging devices 130 and the depth sensing mode. For example, if DCA controller 150 has instructed illuminator 140 to emit a linear SL pattern, DCA controller 150 will use the SL depth sensing mode and the acquired images to determine the depth of the portion of the local area illuminated by the linear SL pattern. The DCA controller 150 can use an initial depth sensing mode, such as ToF, to determine the depth of an object in the local area; then, based on the calculated depth to the object, the DCA controller 150 can select some or all of the SL VCSEL array for activation, illuminating portions of the local area containing the object and not other areas of the local area. The depth information may be used by other components (e.g., audio systems, display assemblies, applications, etc.) to facilitate presentation of content to a user.
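The selective-activation step above can be sketched as an interval-overlap test. This is a hypothetical model (coordinate convention, span representation, and function name are assumptions): an initial ToF pass yields the object's extent along the axis perpendicular to the stripes, and the controller activates only the SL rows whose projected stripes overlap that extent.

```python
def rows_covering(object_span, row_spans):
    """Select SL stripe rows whose projected stripe overlaps the object
    extent estimated by an initial (e.g., ToF) depth pass.

    object_span: (low, high) interval covered by the object.
    row_spans: per-row (low, high) intervals of each projected stripe.
    Returns the indices of rows to activate.
    """
    lo, hi = object_span
    return [i for i, (a, b) in enumerate(row_spans) if a < hi and b > lo]
```

Rows outside the returned set stay inactive, so only the portion of the local area containing the object is illuminated.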
The audio system provides audio content. The audio system includes a transducer array, a sensor array, and an audio controller. However, in other embodiments, the audio system may include different components and/or additional components. Similarly, in some cases, the functionality described with reference to components in an audio system may be distributed among the components in a different manner than described herein. For example, some or all of the functions of the controller may be performed by a remote server.
The transducer array presents sound to the user. The transducer array includes a plurality of transducers. The transducer may be a speaker 160 or a tissue transducer 170 (e.g., a bone conduction transducer or a cartilage conduction transducer). Although the speaker 160 is shown as being external to the frame 110, the speaker 160 may be enclosed in the frame 110. In some embodiments, rather than including separate speakers for each ear, the headset 100 includes a speaker array that includes multiple speakers integrated into the frame 110 to increase the directionality of the presented audio content. The tissue transducer 170 is coupled to the head of the user and directly vibrates the tissue (e.g., bone or cartilage) of the user to generate sound. The number and/or location of the individual transducers may be different from that shown in fig. 1A.
The sensor array detects sound within a localized area of the headset 100. The sensor array includes a plurality of acoustic sensors 180. The acoustic sensor 180 collects sounds emitted from one or more sound sources in a local area (e.g., room). Each acoustic sensor is configured to detect sound and convert the detected sound into an electronic format (analog or digital). The acoustic sensor 180 may be an acoustic wave sensor, a microphone, a sound transducer or similar sensor adapted to detect sound.
In some embodiments, one or more acoustic sensors 180 may be placed in the ear canal of each ear (e.g., acting as a binaural microphone). In some embodiments, the acoustic sensor 180 may be placed on an exterior surface of the head-mounted device 100, on an interior surface of the head-mounted device 100, separate from the head-mounted device 100 (e.g., as part of some other device), or some combination thereof. The number and/or location of acoustic sensors 180 may be different than that shown in fig. 1A. For example, the number of acoustic detection locations may be increased to increase the amount of audio information collected and the sensitivity and/or accuracy of the information. The acoustic detection location may be oriented such that the microphone is able to detect sound in a wide range of directions around the user wearing the headset 100.
The audio controller processes information from the sensor array that describes sounds detected by the sensor array. The audio controller may include a processor and a computer-readable storage medium. The audio controller may be configured to generate direction of arrival (DOA) estimates, generate acoustic transfer functions (e.g., array transfer functions and/or head-related transfer functions), track the location of sound sources, form beams in the direction of sound sources, classify sound sources, generate sound filters for the speakers 160, or some combination thereof.
The position sensor 190 generates one or more measurement signals in response to movement of the headset 100. The position sensor 190 may be located on a portion of the frame 110 of the headset 100. The position sensor 190 may include an inertial measurement unit (IMU). Examples of the position sensor 190 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, other suitable types of sensors that detect motion, a type of sensor used for error correction of the IMU, or some combination thereof. The position sensor 190 may be located external to the IMU, internal to the IMU, or some combination thereof.
In some embodiments, the headset 100 may provide simultaneous localization and mapping (SLAM) for the location of the headset 100 and updates of a model of the local area. For example, the headset 100 may include a passive camera assembly (PCA) that generates color image data. The PCA may include one or more RGB cameras that capture images of some or all of the local area. In some embodiments, some or all of the imaging devices 130 in the DCA may also function as the PCA. The images captured by the PCA and the depth information determined by the DCA may be used to determine parameters of the local area, generate a model of the local area, update the model of the local area, or some combination thereof. Further, the position sensor 190 tracks the position (e.g., location and pose) of the headset 100 within the room. Additional details regarding the components of the headset 100 are discussed below in connection with fig. 6.
Fig. 1B is a perspective view of a headset 105 implemented as an HMD in accordance with one or more embodiments. In embodiments that describe an AR system and/or an MR system, portions of a front side of the HMD are at least partially transparent in the visible band (about 380 nanometers (nm) to 750 nm), and portions of the HMD located between the front side of the HMD and an eye of the user are at least partially transparent (e.g., a partially transparent electronic display). The HMD includes a front rigid body 115 and a strap 175. The headset 105 includes many of the same components as described above with reference to fig. 1A, but these components are modified to integrate with the HMD form factor. For example, the HMD includes a display assembly, a DCA, an audio system, and a position sensor 190. Fig. 1B shows an illuminator 140, a DCA controller 150, a plurality of speakers 160, a plurality of imaging devices 130, a plurality of acoustic sensors 180, and the position sensor 190. The illuminator 140 is configured to generate a linear SL pattern and/or flood illumination for depth sensing.
FIG. 2 is a block diagram of DCA 200 in accordance with one or more embodiments. The DCA of fig. 1A and 1B may be an embodiment of DCA 200. DCA 200 is configured to obtain depth information of a local area surrounding DCA 200. For example, DCA 200 may be configured to detect the location of objects in the room. DCA 200 includes illuminator 210, camera assembly 220, and DCA controller 230. Some embodiments of DCA 200 have different components than those described herein. Similarly, in some cases, functions may be distributed among components in a different manner than described herein.
The illuminator 210 is configured to project light into the localized area. The illuminator 140 of fig. 1A and 1B may be an embodiment of the illuminator 210. The illuminator 210 may project one or more different linear SL patterns or flood illumination into the localized area. The projected light may be in the IR band. The illuminator 210 includes a VCSEL chip and an optical assembly. An exemplary VCSEL chip is described below with respect to fig. 5.
The VCSEL chip emits light to produce one or more SL patterns or flood illumination. The VCSEL chip includes a plurality of VCSEL arrays on a substrate. The plurality of VCSEL arrays includes one or more linear SL VCSEL arrays and one or more fill VCSEL arrays. Each VCSEL of the plurality of VCSELs in the plurality of VCSEL arrays has a corresponding emission region of a particular size and shape. The emission region may have a shape such as rectangular, elliptical, diamond, triangular, square, sinusoidal, etc.
The one or more linear SL VCSEL arrays are configured to produce one or more different types of linear SL patterns. Each linear SL VCSEL array includes one or more stripe sources. A stripe source is a plurality of VCSELs arranged in a linear fashion, and each of the plurality of VCSELs has a respective emission region. In some embodiments, the VCSELs of a stripe source have emission regions of the same size and/or the same shape (e.g., rectangles of the same length). In other embodiments, the VCSELs of a stripe source have emission regions of different shapes and/or sizes. In some embodiments, the one or more linear SL VCSEL arrays include some stripe sources composed of VCSELs having emission regions of a first length (e.g., rectangles whose long dimension is the first length), and one or more other stripe sources composed of VCSELs having emission regions of a second length that is longer than the first length. For example, stripe sources whose VCSELs have longer emission regions may be interleaved with stripe sources whose VCSELs have shorter emission regions, e.g., periodically interposed between or located adjacent to rows of stripe sources whose VCSELs have shorter emission regions. A VCSEL with a longer emission region emits more light than a VCSEL with a shorter emission region. For example, a row of VCSELs with longer emission regions in one linear SL VCSEL array (e.g., a stripe source) may be used to determine depth in the far field, while a row of VCSELs with shorter emission regions in another linear SL VCSEL array may be used to determine depth in the near field. Note that the stripe sources of each of the one or more SL VCSEL arrays are arranged in parallel. Thus, adjacent stripe sources are separated by a gap (i.e., pitch). The gap corresponds to the dark region between the bars in the resulting linear SL pattern. Likewise, the light emitted from a stripe source corresponds to a bar in the linear SL pattern. Some or all of the stripe sources of the one or more linear SL VCSEL arrays are addressable. For example, in some embodiments, all of the stripe sources are addressable.
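The interleaving of far-field and near-field stripe sources described above can be sketched as follows. This is an illustrative model only; the interleave period and all names are assumptions, not values from the patent.

```python
# Illustrative sketch of an SL VCSEL array in which stripe sources built from
# longer-emission-region VCSELs ("far") are interleaved with shorter ones
# ("near"). The interleave period is an assumption for illustration.

def build_stripe_layout(num_stripes, far_period=3):
    """Label every `far_period`-th stripe source as far-field, the rest near-field."""
    return ["far" if i % far_period == 0 else "near" for i in range(num_stripes)]

def stripes_for_field(layout, field):
    """Indices of the addressable stripe sources to activate for 'near' or 'far' sensing."""
    return [i for i, kind in enumerate(layout) if kind == field]

layout = build_stripe_layout(9, far_period=3)
far_rows = stripes_for_field(layout, "far")    # far-field rows at indices 0, 3, 6
```

Because the stripe sources are individually addressable, a controller could drive only `far_rows` when greater illumination intensity is needed, and only the near-field rows otherwise.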
One or more fill VCSEL arrays are used in combination with the one or more SL VCSEL arrays to produce flood illumination. The one or more fill VCSEL arrays are disposed orthogonally to the one or more SL VCSEL arrays on the substrate. Each of the one or more fill VCSEL arrays includes one or more stripe sources arranged in parallel, and each stripe source includes a plurality of VCSELs arranged in a linear fashion. A given VCSEL of a fill VCSEL array may emit more light than a given VCSEL of an SL VCSEL array. In some embodiments, the VCSELs of a fill VCSEL array have emission regions of the same size and/or shape (e.g., elliptical). In other embodiments, the VCSELs of a fill VCSEL array have emission regions of different shapes and/or sizes. The emission regions of the VCSELs in the one or more fill VCSEL arrays are arranged such that the light they produce can be used to fill the dark regions in a linear SL pattern produced by the one or more SL VCSEL arrays. For example, adjacent stripe sources of the one or more SL VCSEL arrays are parallel to each other and separated by respective gaps, each gap having a respective region corresponding to a minimum light intensity in a dark region of the stripe pattern. And for each gap, there are one or more corresponding VCSELs of the fill VCSEL array whose emission regions are positioned along a line that is parallel to the adjacent stripe sources and passes within a threshold distance of (and possibly through) the corresponding region of minimum light intensity. Note that in some embodiments, this region may be located in the center of the gap (e.g., if the adjacent stripe sources each emit light of the same intensity). In some cases, however, the region may be offset from the center of the gap (e.g., if one of the adjacent stripe sources emits light of substantially different intensity than the other).
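The placement of a fill emitter at the minimum-intensity point of each gap can be illustrated numerically. The intensity-weighted shift below is a toy estimate of the described behavior (midpoint for equal intensities, offset toward the dimmer stripe otherwise), not the patent's actual method.

```python
# A toy estimate of where a fill emitter sits in each gap: at the gap midpoint
# when adjacent stripe sources are equally bright, shifted toward the dimmer
# stripe otherwise. The weighting scheme is an assumption for illustration.

def fill_positions(stripe_centers, intensities=None):
    """One fill-emitter position per gap between adjacent stripe centers."""
    if intensities is None:
        intensities = [1.0] * len(stripe_centers)
    positions = []
    for k in range(len(stripe_centers) - 1):
        x0, x1 = stripe_centers[k], stripe_centers[k + 1]
        i0, i1 = intensities[k], intensities[k + 1]
        # a brighter neighbor pushes the minimum-intensity point away from itself
        w = i0 / (i0 + i1)
        positions.append(x0 + w * (x1 - x0))
    return positions

fill_positions([0.0, 10.0, 20.0])                     # equal intensities: gap centers
fill_positions([0.0, 10.0], intensities=[3.0, 1.0])   # minimum shifted toward dimmer stripe
```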
In some embodiments, to produce flood illumination with a more uniform intensity distribution, the size and/or shape of the emission regions of the VCSELs in the one or more fill VCSEL arrays may be varied to account for variations in the light emitted from the differently sized and/or shaped emission regions of the VCSELs in the one or more SL VCSEL arrays.
The intensity of the light emitted by a VCSEL may be based in part on the area of the emission region of the VCSEL. A far field VCSEL that includes an emission region of relatively long length can emit light of greater intensity than a near field VCSEL that includes an emission region of relatively short length. The far field VCSELs can be activated for depth sensing in the far field, where a greater illumination intensity is desired. The near field VCSELs may be activated for depth sensing in the near field, where DCA 200 can utilize lower intensity illumination. As used herein, "far field" refers to distances from DCA 200 that are greater than a threshold distance. As used herein, "near field" refers to distances from DCA 200 that are less than the threshold distance. In some embodiments, the threshold distance may be about 2 meters, or between 1 meter and 5 meters.
The optical assembly is configured to condition light emitted by the VCSEL arrays. The optical assembly may include one or more lenses, apertures, diffractive optical elements, or some combination thereof. Conditioning the light includes stretching the light emitted by each VCSEL. For example, the one or more lenses may include a cylindrical lens that applies optical power substantially along one axis, but not along the orthogonal axis. In this way, a cylindrical lens may be used to spread (substantially in a single dimension) the light emitted by the emission region of a VCSEL. The cylindrical lens may be oriented such that the applied optical power spreads the light along the long dimension of each stripe source of the one or more SL VCSEL arrays, such that the discrete light spots merge with one another to form a series of parallel stripes, where each stripe corresponds to light emitted from a different stripe source. In this way, a cylindrical lens may be used to form a linear SL pattern from light emitted by the one or more SL VCSEL arrays. Note that the lines of the linear SL pattern may have a constant or substantially constant intensity along their length. In some embodiments, the one or more lenses may include additional filters to facilitate smoothing the intensity along the length of each line.
In a similar manner, light emitted from the one or more fill VCSEL arrays is spread in the same dimension. Note that the one or more fill VCSEL arrays are positioned orthogonal to the one or more SL VCSEL arrays, and the emission regions of the one or more fill VCSEL arrays are positioned to align with the gaps between adjacent stripe sources of the one or more SL VCSEL arrays. Thus, light emitted from the one or more fill VCSEL arrays is spread by the cylindrical lens to form a second pattern. In the case where the one or more SL VCSEL arrays are producing a linear SL pattern, the resulting second pattern acts to fill the dark regions of the linear SL pattern due to the positioning of the one or more fill VCSEL arrays relative to the one or more linear SL VCSEL arrays, thereby forming flood illumination. In some embodiments, the one or more lenses may include additional filters to facilitate producing a smooth and uniform flood illumination intensity over its illumination field.
The conditioning of the light may also include tiling the light (SL pattern or flood illumination) emitted by the VCSEL chip on a portion of the localized area. The illuminator 210 has an illumination field that spans a portion of the localized area of the DCA 200. To increase the illumination field of illuminator 210, the optical components may tile the emitted light using, for example, one or more diffraction gratings. For example, the diffraction grating may be 1D for tiling in one dimension, or the diffraction grating may be 2D for tiling in two dimensions. In this way, for example, a linear SL pattern or flood illumination may be replicated and projected onto a localized area to illuminate a larger portion of the localized area.
The camera assembly 220 is configured to collect light from the localized area according to instructions from DCA controller 230. The camera assembly 220 includes one or more imaging devices (e.g., the imaging device 130 of fig. 1A and 1B). Each imaging device (e.g., camera) may include one or more sensors. In some embodiments, each sensor may include a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) device. Each sensor includes a plurality of pixels. Each pixel is configured to detect photons incident on the pixel. The pixels are configured to detect light in a band that includes the wavelength of the light projected by the illuminator 210.
DCA controller 230 is configured to provide instructions to the various components of DCA 200 and to calculate depth information for the local area. DCA controller 150 of fig. 1A and 1B may be an embodiment of DCA controller 230. Some embodiments of DCA controller 230 have different components than those described herein. Similarly, in some cases, functions may be distributed among the components in a different manner than described herein.
DCA controller 230 is configured to generate instructions for illuminator 210 to emit light into the localized area. DCA controller 230 selects the depth sensing mode. The depth sensing mode is selected based on, for example, a predefined sequence (e.g., five SL frames followed by a flood frame), scene content (e.g., SL for near objects and ToF for far objects), and the like. DCA controller 230 instructs illuminator 210 to illuminate a portion of the localized area according to the selected depth sensing mode. For example, if the depth sensing mode is assisted stereo or SL, DCA controller 230 instructs illuminator 210 to emit a linear SL pattern. Similarly, if the depth sensing mode is ToF, DCA controller 230 instructs illuminator 210 to emit flood illumination. In some embodiments, DCA controller 230 can identify specific regions in the localized area to track and instruct illuminator 210 to selectively illuminate the identified regions. In some embodiments, DCA controller 230 may generate instructions and provide them to illuminator 210 to illuminate objects in the near field, in which case illuminator 210 illuminates the objects using stripe sources whose VCSELs have relatively short emission regions. Likewise, in some embodiments, DCA controller 230 may generate instructions and provide them to illuminator 210 to illuminate objects in the far field, in which case illuminator 210 illuminates the objects using stripe sources whose VCSELs have relatively long emission regions.
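The controller's mode-to-illumination mapping and the near-field/far-field stripe choice can be sketched as below. The mode names, function names, and the 2-meter default are illustrative assumptions (the text only states the threshold may be about 2 meters, or between 1 and 5 meters).

```python
# Sketch of the DCA controller's illumination decisions; names are assumptions.

def illumination_for(mode):
    """Linear SL pattern for SL/assisted stereo; flood illumination for ToF."""
    if mode in ("SL", "assisted_stereo"):
        return "linear_SL_pattern"
    if mode in ("direct_ToF", "indirect_ToF"):
        return "flood_illumination"
    raise ValueError(f"unknown depth sensing mode: {mode}")

def stripe_choice(object_distance_m, threshold_m=2.0):
    """Longer emission regions for far-field objects, shorter for near-field."""
    if object_distance_m > threshold_m:
        return "long_emission_regions"
    return "short_emission_regions"
```

For example, `stripe_choice(4.0)` selects the far-field stripe sources, while `stripe_choice(0.5)` selects the near-field ones.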
DCA controller 230 generates instructions and provides those instructions to camera assembly 220 to capture images of illuminated portions of the localized area (i.e., portions illuminated with a linear SL pattern or flood illumination) according to the selected depth sensing mode.
DCA controller 230 calculates depth information based on the images captured by camera assembly 220 of the illuminated portion of the local area and the selected depth sensing mode. The depth information may be calculated using various depth sensing modes, including ToF depth sensing (which may be direct ToF or indirect ToF), SL depth sensing, passive stereo depth sensing, assisted stereo depth sensing, stereo imaging, or some combination thereof. DCA controller 230 can store the depth information in a model of the local area. The model describes the sizes and shapes of objects and the locations of those objects in the local area.
DCA controller 230 can dynamically adjust which bars in the linear SL pattern are active. For example, DCA controller 230 may control the linear density of the stripes in the linear SL pattern, the periodicity (or aperiodicity) of the stripes in the linear SL pattern, or some combination thereof. DCA controller 230 can adjust the pattern based on, for example, the distance of the object from DCA 200, the type of object (e.g., a dense pattern for the user's hand and a sparse pattern for a wall), and the movement of the object relative to DCA 200 (e.g., a denser pattern may be used for moving objects than for static objects).
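A minimal sketch of this dynamic stripe selection follows. The policy mirrors the examples in the text (dense for hands and moving objects, sparse for walls), but the specific period values are assumptions.

```python
# Sketch of dynamically choosing which addressable stripe sources are active.

def active_stripes(num_stripes, period):
    """Activate every `period`-th stripe; period=1 gives the densest pattern."""
    return [i for i in range(num_stripes) if i % period == 0]

def pick_period(object_type, moving):
    """Illustrative policy: dense for hands/moving objects, sparse for walls."""
    if object_type == "hand" or moving:
        return 1          # dense pattern
    return 5              # sparse pattern (e.g., for a wall)

active_stripes(26, pick_period("wall", moving=False))   # every 5th of 26 stripes
```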
Fig. 3 is a schematic diagram of a DCA 300 that acquires depth information in a local area 310 in accordance with one or more embodiments. DCA 300 can be an embodiment of DCA 200 of fig. 2. DCA 300 includes illuminator 320 and camera assembly 340. Illuminator 320 may be an embodiment of illuminator 210 of fig. 2, and camera assembly 340 is an embodiment of camera assembly 220. Illuminator 320 conditions light (via its optical assembly) from the VCSEL chip to produce conditioned light (e.g., a linear SL pattern or flood illumination). DCA 300 may illuminate some or all of object 330 with conditioned light 342.
The camera assembly 340 captures an image of the object 330 illuminated with the conditioned light 342. As shown, the field of illumination (FOI) of illuminator 320 is smaller than the field of view (FOV) of camera assembly 340. In other embodiments, however, the FOI may have a different size relative to the FOV, and in some cases may be larger than the FOV. In some embodiments, the illuminator 320 can dynamically change its FOI by, for example, adjusting the amount and position of tiling of the conditioned light 342 within the localized area 310. As described above with reference to fig. 2, a controller (not shown) of DCA 300 uses the captured images to determine depth information of local area 310.
Fig. 4A is a plan view of a linear SL pattern 400 in accordance with one or more embodiments. The linear SL pattern 400 includes a plurality of lines (e.g., line 410) arranged in parallel to form a stripe pattern. Each line of the plurality of lines is formed from light from a single stripe source (e.g., in one or more linear SL VCSEL arrays). There are corresponding dark regions (e.g., dark region 420) between adjacent lines in the linear SL pattern 400. In some embodiments, the DCA may tile the linear SL pattern 400 to cover a larger portion of the local area.
FIG. 4B is a plan view of flood illumination in accordance with one or more embodiments. The flood illumination 450 is formed by having the one or more linear SL VCSEL arrays and the one or more fill VCSEL arrays emit light simultaneously. The one or more SL VCSEL arrays produce a linear SL pattern, and the one or more fill VCSEL arrays produce a second pattern. The second pattern is positioned such that it fills the dark regions with light to form filled regions (e.g., filled region 460). Thus, the linear SL pattern and the second pattern together form the flood illumination 450. In some embodiments, the DCA may tile the flood illumination 450 to cover a larger portion of the localized area.
Fig. 5A is a plan view of a VCSEL chip 500 in accordance with one or more embodiments. The VCSEL chip 500 may be an embodiment of the VCSEL chip of the illuminator 210 of fig. 2. The VCSEL chip 500 may include a substrate 505, a SL VCSEL array 510, a fill VCSEL array 515, and a plurality of bond pads.
The substrate 505 is configured to provide a surface upon which various components of the VCSEL chip 500 can be assembled.
The SL VCSEL array 510 is configured to produce one or more different linear SL patterns. The SL VCSEL array 510 includes a plurality of stripe sources (e.g., stripe source 520 and stripe source 525). In the illustrated embodiment, the plurality of stripe sources are arranged substantially parallel to one another on the substrate 505. Although the VCSEL chip 500 is shown as including 26 stripe sources, in other embodiments the number of stripe sources may be greater or fewer. As shown, each of the plurality of stripe sources is individually addressable. In this way, the SL VCSEL array 510 can activate any combination of stripe sources to produce various stripe patterns. For example, a high-density pattern may be achieved by activating all of the stripe sources, and a low-density stripe pattern may be achieved by activating every fifth stripe source.
Each stripe source includes a plurality of VCSELs, and each VCSEL includes a respective emission region (e.g., emission region 530). In fig. 5A, each stripe source includes VCSELs having emission regions of the same size and shape, but in other embodiments the size and/or shape of one or more emission regions of the VCSELs on a single stripe source may vary. Note that some of the stripe sources include VCSELs with emission regions that are larger than those in other stripe sources. For example, the emission regions in stripe source 525 are larger than the emission regions in stripe source 520. A stripe source (e.g., 525) with larger emission regions emits more light than a stripe source (e.g., 520) with smaller emission regions. As shown, the emission regions are all substantially rectangular in shape, but in other embodiments the emission regions may have different shapes.
The fill VCSEL array 515 is configured to emit light of a second pattern. As shown, the fill VCSEL array 515 is a single stripe source. However, in some embodiments, the fill VCSEL array 515 may include multiple stripe sources. The fill VCSEL array 515 is disposed on the substrate orthogonally to the SL VCSEL array 510. As shown, the fill VCSEL array 515 includes a single stripe source comprising a plurality of VCSELs arranged in a linear fashion, and each of the plurality of VCSELs includes a respective emission region 535. As shown, the fill VCSEL array is located on a first side (i.e., the right side) of the SL VCSEL array 510, but in other embodiments the fill VCSEL array may be located elsewhere on the chip. For example, the fill VCSEL array can be located on the other side (e.g., the left side) of the SL VCSEL array, in the center of the SL VCSEL array, etc. Furthermore, in some embodiments, there may be multiple fill VCSEL arrays on the VCSEL chip. For example, there may be multiple fill VCSEL arrays that are parallel and adjacent to each other. Or in other embodiments there may be fill VCSEL arrays located in different positions relative to the SL VCSEL array 510 (e.g., one fill VCSEL array to the left of the SL VCSEL array, and a second fill VCSEL array to the right of the SL VCSEL array). The emission regions of the fill VCSEL array 515 are offset from the individual stripe sources of the SL VCSEL array 510. This is described in detail below with respect to fig. 5B. Note that the light from the VCSELs in the fill VCSEL array 515 may be brighter than the light from the VCSELs in the SL VCSEL array 510. The brightness of the VCSELs in the fill VCSEL array 515 may be selected so as to obtain flood illumination with a substantially flat intensity distribution.
Light from the fill VCSEL array is spread by the optical assembly of the DCA to form a second pattern. Due to the positioning of the fill VCSEL array, the resulting second pattern acts to fill the dark regions of the SL pattern emitted by the SL VCSEL array 510, thereby creating flood illumination.
Bond pads (e.g., bond pad 540) are configured to provide electrical connections between the substrate 505 and the stripe sources. The bond pads may include a conductive material coupled to the substrate 505. As shown, the bond pads and corresponding stripe sources of the SL VCSEL array 510 are generally staggered to provide a compact form factor.
Fig. 5B is a portion of the VCSEL chip 500 of fig. 5A. The individual stripe sources of the SL VCSEL array 510 are parallel to each other and separated by respective gaps (also referred to as pitch). For example, the SL VCSEL array 510 includes a stripe source 560 and a stripe source 565, and they are separated by a gap 570. As shown, the gap between the various stripe sources is constant, but in some embodiments, the gap may be different for one or more pairs of adjacent stripe sources. Note that for each gap, there is a corresponding emission region of a VCSEL of the fill VCSEL array 515 that is positioned offset from the stripe sources such that the emission region coincides with the corresponding gap. For example, line 575 is parallel to the stripe sources 560, 565 and is located between them; the line 575 extends through the gap 570 and intersects the center of the emission region 580 of a VCSEL of the fill VCSEL array 515. As shown, line 575 corresponds to the region of minimum light intensity between light emitted from stripe sources 560 and 565 (i.e., the dark region between light stripes in the SL pattern). As shown, line 575 is located along the center of the gap. In some embodiments, however, the line 575 may be offset from the center of the gap 570, for example, if one of the adjacent stripe sources emits light of substantially different intensity than the other.
Fig. 5C is an exemplary current driver 582 of the VCSEL chip 500 of fig. 5A. The current driver 582 selectively provides current to one or more of the SL VCSEL array channels 584 and/or the fill VCSEL array channel 586. The SL VCSEL array channels 584 include a current channel for each stripe source of the SL VCSEL array 510, and the fill VCSEL array channel 586 includes a current channel for the single stripe source of the fill VCSEL array 515. Note that in other embodiments, multiple stripe sources may be present in one or more fill VCSEL arrays, and thus in other embodiments more than one fill VCSEL array channel may be present. The current channels provide the current used to drive the VCSELs in the SL VCSEL array 510 and/or the fill VCSEL array 515. The current driver 582 includes one or more current adjustment modules (e.g., current adjustment module 588).
The current adjustment module 588 may dynamically adjust the current provided to the fill VCSEL array 515 according to instructions from a controller (e.g., DCA controller 230). The current adjustment module 588 includes one or more digital-to-analog converters (DACs) and may also include, for example, a shunt that can dynamically reduce the amount of current output on the fill VCSEL array channel 586 relative to the current output on the SL VCSEL array channels 584. The one or more DACs provide independent control of each of the SL VCSEL array channels 584, the fill VCSEL array channel 586, or both. The reduction may range from a factor of 1 (no reduction) to a factor of N (maximum reduction), and N may be relatively large. For example, N may be 5, 8, 10, etc. For example, if the current adjustment module 588 reduces the current by a factor of 10, it will output 1/10 of the amount of current on the fill VCSEL array channel 586 relative to the output on one of the SL VCSEL array channels 584. In some embodiments, the reduction range may be continuous. Alternatively, the reduction range includes a series of discrete values. One advantage of being able to independently drive the fill VCSEL array 515 with a much lower current is safety. For example, the illuminator may further comprise a light sensor positioned to detect backscattering of light emitted from the fill VCSEL array 515 and/or the SL VCSEL array 510 (e.g., backscattering from the optical elements of the illuminator), which can indicate misalignment, damaged optics, and the like. The current adjustment module 588 may be instructed to reduce the current output on the fill VCSEL array channel 586. The reduction may be relatively large (e.g., a factor of 10) relative to normal operation. The low current causes the fill VCSEL array 515 to emit at a lower intensity than it would during normal operation.
The illuminator may use the lower-intensity emission and the signal from the light sensor as a safety check before outputting the linear SL pattern or flood illumination from the VCSEL chip at high power. This guards against misalignment, optics damage, and the like that could pose a safety hazard and/or further damage the DCA. In addition, the ability to dynamically adjust the drive current gives the DCA the option of tunable illumination for the fill VCSEL array 515.
Note that conventional laser drivers cannot have large increments between the currents supplied on different channels, because doing so causes instability in the supplied current. This is due in part to the inability of conventional laser drivers to independently control the current supplied to different channels; instead, a global DAC is used that provides uniform current control over all of their channels. In contrast, the current adjustment module 588 is capable of independently controlling each of the SL VCSEL array channels 584, the fill VCSEL array channel 586, or both.
As shown, the current adjustment module 588 adjusts the current supplied to the fill VCSEL array channel 586. In other embodiments, the current adjustment module 588 and/or one or more additional current adjustment modules may adjust the current provided to some or all of the channels (e.g., some or all of the SL VCSEL array channels, some or all of the fill VCSEL array channels, or some combination thereof).
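The per-channel current reduction and the low-power safety probe it enables can be modeled in a few lines. The class name, API, and the 100 mA base current are illustrative assumptions; only the factor-of-1-to-N reduction range comes from the description above.

```python
# Toy model of per-channel current control with a reduction factor in [1, N],
# plus the low-power safety-check emission it enables. Names are assumptions.

class ChannelCurrentSketch:
    def __init__(self, base_current_ma, max_reduction=10):
        self.base = base_current_ma
        self.max_reduction = max_reduction

    def current(self, reduction=1):
        """Output current after reduction by a factor between 1 (none) and N (max)."""
        if not 1 <= reduction <= self.max_reduction:
            raise ValueError("reduction factor must be in [1, N]")
        return self.base / reduction

driver = ChannelCurrentSketch(base_current_ma=100.0, max_reduction=10)
probe_ma = driver.current(reduction=10)   # low-intensity emission for the safety check
full_ma = driver.current()                # normal operation
```

In the safety-check sequence described above, the fill channel would first be driven at `probe_ma` while the light sensor checks for anomalous backscatter, and only then ramped to `full_ma`.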
Fig. 6 is a flow diagram that illustrates a process 600 for generating a linear SL pattern or flood illumination in accordance with one or more embodiments. The process shown in fig. 6 may be performed by a component of a DCA (e.g., DCA 200 of fig. 2). In other embodiments, other entities may perform some or all of the steps in fig. 6. Embodiments may include different steps and/or additional steps, or may perform the steps in a different order.
The DCA selects 610 a depth sensing mode. For example, ToF may be used in bright light, and SL may be used indoors. Note that in other embodiments, the depth sensing modes may be used under other conditions. The DCA instructs the VCSEL chip to emit light according to the selected depth sensing mode.
The DCA illuminates 620 a portion of the local area according to the selected depth sensing mode. For example, if the selected depth sensing mode is SL, the DCA illuminates the portion of the local area with a linear SL pattern. If the selected depth sensing mode is ToF, the DCA illuminates the portion of the local area with flood illumination. The DCA uses the VCSEL chip within its illuminator to produce the linear SL pattern or flood illumination.
The DCA captures 630 one or more images of the illuminated portion of the local area. DCA uses one or more imaging devices of the camera assembly to acquire images.
The DCA determines 640 depth information for the local area based on the captured images. The DCA calculates the depth information using the captured images and the selected depth sensing mode. The DCA may update a model of the local area using the determined depth information.
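For illustration only, the mode-selection and illumination steps of process 600 can be sketched as two small functions. The lux threshold, mode names, and array labels are hypothetical; the mapping of SL-array-only to the stripe pattern and SL-plus-fill to flood illumination follows the description above.

```python
# Illustrative sketch of steps 610 and 620 of process 600. Not the actual
# DCA controller API; threshold and names are assumptions.

def select_mode(ambient_lux, bright_threshold=1000):
    """Step 610: choose ToF in bright light, SL otherwise (e.g. indoors)."""
    return "tof" if ambient_lux > bright_threshold else "sl"


def active_arrays(mode):
    """Step 620: the SL VCSEL array alone yields the stripe pattern; the
    SL and fill VCSEL arrays together yield flood illumination for ToF."""
    return ("sl",) if mode == "sl" else ("sl", "fill")
```

Steps 630 and 640 would then capture frames with the camera assembly and compute depth from them using the selected mode.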
Fig. 7 is a system 700 that includes a headset 705, in accordance with one or more embodiments. In some embodiments, the headset 705 may be the headset 100 of fig. 1A or the headset 105 of fig. 1B. The system 700 may operate in an artificial reality environment (e.g., a virtual reality environment, an augmented reality environment, a mixed reality environment, or some combination thereof). The system 700 shown by fig. 7 includes the headset 705 and an input/output (I/O) interface 710 that is coupled to a console 715. While fig. 7 shows an example system 700 including one headset 705 and one I/O interface 710, in other embodiments any number of these components may be included in the system 700. For example, there may be multiple headsets each having an associated I/O interface 710, with each headset and I/O interface 710 communicating with the console 715. In alternative configurations, different and/or additional components may be included in the system 700. Additionally, functionality described in conjunction with one or more of the components shown in fig. 7 may be distributed among the components in a different manner than described in conjunction with fig. 7 in some embodiments. For example, some or all of the functionality of the console 715 may be provided by the headset 705.
The headset 705 includes a display assembly 730, an optics block 735, one or more position sensors 740, a DCA 745, and an audio system 750. Some embodiments of the headset 705 have different components than those described in conjunction with fig. 7. Additionally, the functionality provided by various components described in conjunction with fig. 7 may be differently distributed among the components of the headset 705 in other embodiments, or be captured in separate assemblies remote from the headset 705.
The display assembly 730 displays content to the user based on data received from the console 715. The display assembly 730 displays the content using one or more display elements (e.g., the display element 120). A display element may be, for example, an electronic display. In various embodiments, the display assembly 730 includes a single display element or multiple display elements (e.g., one display per eye of the user). Examples of an electronic display include: a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a waveguide display, some other display, or some combination thereof. Note that in some embodiments, the display element 120 may also include some or all of the functionality of the optics block 735.
The optics block 735 may magnify image light received from the electronic display, correct optical errors associated with the image light, and present the corrected image light to one or both eyeboxes of the headset 705. In various embodiments, the optics block 735 includes one or more optical elements. Example optical elements included in the optics block 735 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light. Moreover, the optics block 735 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optics block 735 may have one or more coatings, such as partially reflective or anti-reflective coatings.
Magnification and focusing of the image light by the optics block 735 allows the electronic display to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by the electronic display. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases all, of the user's field of view. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.
In some embodiments, the optics block 735 may be designed to correct one or more types of optical error. Examples of optical error include barrel or pincushion distortion, longitudinal chromatic aberration, or transverse chromatic aberration. Other types of optical error may further include spherical aberration, chromatic aberration, errors due to lens field curvature, astigmatism, or any other type of optical error. In some embodiments, content provided to the electronic display for display is pre-distorted, and the optics block 735 corrects the distortion when it receives image light from the electronic display generated based on the content.
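For illustration only, pre-distortion of the kind mentioned above is often modeled with a simple radial (Brown-style) polynomial; the coefficients below are illustrative assumptions, not values used by the optics block 735.

```python
# Hypothetical sketch of radial pre-distortion: content is warped before
# display so that the barrel/pincushion distortion of the optics cancels
# it. k1, k2 are example radial distortion coefficients.

def predistort(x, y, k1=-0.2, k2=0.05):
    # (x, y) are normalized image coordinates relative to the optical axis
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

A point on the optical axis is unchanged, while off-axis points are pulled inward or pushed outward depending on the sign of the coefficients.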
The position sensor 740 is an electronic device that generates data indicating a position of the headset 705. The position sensor 740 generates one or more measurement signals in response to motion of the headset 705. The position sensor 190 is an embodiment of the position sensor 740. Examples of a position sensor 740 include: one or more IMUs, one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, or some combination thereof. The position sensor 740 may include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, roll). In some embodiments, an IMU rapidly samples the measurement signals and calculates the estimated position of the headset 705 from the sampled data. For example, the IMU integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated position of a reference point on the headset 705. The reference point is a point that may be used to describe the position of the headset 705. While the reference point may generally be defined as a point in space, in practice the reference point is defined as a point within the headset 705.
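For illustration only, the double integration described above (accelerometer samples to a velocity vector, velocity to an estimated position) can be sketched with simple Euler integration. Gravity and bias compensation, which a real IMU pipeline would need, are omitted here as a simplifying assumption.

```python
# Sketch of IMU dead reckoning: integrate accelerometer samples to a
# velocity vector, then integrate velocity to an estimated position of
# the headset's reference point. Euler integration, fixed time step.

def integrate_imu(accel_samples, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    vx, vy, vz = v0
    px, py, pz = p0
    for ax, ay, az in accel_samples:
        vx += ax * dt; vy += ay * dt; vz += az * dt   # accel -> velocity
        px += vx * dt; py += vy * dt; pz += vz * dt   # velocity -> position
    return (vx, vy, vz), (px, py, pz)
```

Because integration accumulates sensor error, such estimates drift over time, which is one reason tracking systems also use the DCA's depth data rather than the IMU alone.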
The DCA 745 generates depth information for a portion of the local area. The DCA 745 may be an embodiment of the DCA 200 of fig. 2. The DCA includes one or more imaging devices and a DCA controller. The DCA 745 also includes an illuminator that includes a VCSEL chip. The DCA 745 may be configured to use the VCSEL chip to generate different SL patterns and flood illumination for depth sensing. The different SL patterns and flood illumination may be produced by activating different stripe sources of the illuminator. The operation and structure of the DCA 745 are described above primarily with regard to fig. 2.
The audio system 750 provides audio content to a user of the headset 705. The audio system 750 may include one or more acoustic sensors, one or more transducers, and an audio controller. The audio system 750 may provide spatialized audio content to the user. In some embodiments, the audio system 750 may request acoustic parameters from a mapping server. The acoustic parameters describe one or more acoustic properties (e.g., room impulse response, reverberation time, reverberation level, etc.) of the local area. The audio system 750 may provide information describing at least a portion of the local area from, for example, the DCA 745 and/or location information for the headset 705 from the position sensor 740 to the mapping server. The audio system 750 may generate one or more sound filters using the one or more acoustic parameters and use the sound filters to provide audio content to the user.
The I/O interface 710 is a device that allows a user to send action requests to the console 715 and receive responses from the console 715. An action request is a request to perform a particular action. For example, an action request may be an instruction to start or end capture of image or video data, or an instruction to perform a particular action within an application. The I/O interface 710 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 715. An action request received by the I/O interface 710 is communicated to the console 715, which performs an action corresponding to the action request. In some embodiments, the I/O interface 710 includes an IMU that captures calibration data indicating an estimated position of the I/O interface 710 relative to an initial position of the I/O interface 710. In some embodiments, the I/O interface 710 may provide haptic feedback to the user in accordance with instructions received from the console 715. For example, haptic feedback is provided when an action request is received, or the console 715 communicates instructions to the I/O interface 710 causing the I/O interface 710 to generate haptic feedback when the console 715 performs an action.
The console 715 provides content to the headset 705 for processing in accordance with information received from one or more of: the DCA 745, the headset 705, and the I/O interface 710. In the example shown in fig. 7, the console 715 includes an application library 755, a tracking module 760, and an engine 765. Some embodiments of the console 715 have different modules or components than those described in conjunction with fig. 7. Similarly, the functions further described below may be distributed among the components of the console 715 in a different manner than described in conjunction with fig. 7. In some embodiments, the functionality discussed herein with respect to the console 715 may be implemented in the headset 705 or a remote system.
The application library 755 stores one or more applications for execution by the console 715. An application is a set of instructions that when executed by a processor generate content for presentation to a user. Content generated by the application may be responsive to input received from a user via movement of the headset 705 or the I/O interface 710. Examples of applications include: a gaming application, a conferencing application, a video playback application, or other suitable application.
The tracking module 760 tracks movements of the headset 705 or the I/O interface 710 using information from the DCA 745, the one or more position sensors 740, or some combination thereof. For example, the tracking module 760 determines a position of a reference point of the headset 705 in a mapping of the local area based on information from the headset 705. The tracking module 760 may also determine positions of an object or a virtual object. Additionally, in some embodiments, the tracking module 760 may use data indicating a position of the headset 705 from the position sensor 740 as well as representations of the local area from the DCA 745 to predict a future location of the headset 705. The tracking module 760 provides the estimated or predicted future position of the headset 705 or the I/O interface 710 to the engine 765.
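For illustration only, one simple way to predict a future location as described above is a constant-velocity extrapolation from the last known position; the function below is a hypothetical sketch, not the tracking module's actual predictor.

```python
# Hypothetical constant-velocity predictor: extrapolate the headset's
# position a short time ahead from its last position and velocity
# (e.g., to compensate for render latency).

def predict_position(position, velocity, lookahead_s):
    return tuple(p + v * lookahead_s for p, v in zip(position, velocity))
```

More sophisticated predictors (e.g., filters fusing IMU and DCA data) would follow the same interface: current state in, predicted future position out.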
The engine 765 executes applications and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of the headset 705 from the tracking module 760. Based on the received information, the engine 765 determines content to provide to the headset 705 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the engine 765 generates content for the headset 705 that mirrors the user's movement in a virtual local area or in a local area augmented with additional content. Additionally, the engine 765 performs an action within an application executing on the console 715 in response to an action request received from the I/O interface 710 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the headset 705 or haptic feedback via the I/O interface 710.
The network couples the headset 705 and/or the console 715 to external systems. The network may include any combination of local area and/or wide area networks using both wireless and/or wired communication systems. For example, the network may include the Internet, as well as mobile telephone networks. In one embodiment, the network uses standard communications technologies and/or protocols. Hence, the network may include links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 2G/3G/4G mobile communications protocols, digital subscriber line (DSL), asynchronous transfer mode (ATM), InfiniBand, PCI Express Advanced Switching, etc. Similarly, the networking protocols used on the network can include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), User Datagram Protocol (UDP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), file transfer protocol (FTP), etc. The data exchanged over the network can be represented using technologies and/or formats including image data in binary form (e.g., Portable Network Graphics (PNG)), hypertext markup language (HTML), extensible markup language (XML), etc.
In addition, all or some of the links can be encrypted using conventional encryption technologies such as secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), Internet Protocol security (IPsec), etc.
One or more components of the system 700 may include a privacy module that stores one or more privacy settings for user data elements. The user data elements describe the user or the headset 705. For example, the user data elements may describe a physical characteristic of the user, an action performed by the user, a location of the user of the headset 705, a location of the headset 705, an HRTF for the user, etc. Privacy settings (or "access settings") for a user data element may be stored in any suitable manner, such as, for example, in association with the user data element, in an index on an authorization server, in another suitable manner, or any suitable combination thereof.
The privacy settings of a user data element specify how the user data element (or particular information associated with the user data element) may be accessed, stored, or otherwise used (e.g., viewed, shared, modified, copied, executed, surfaced, or identified). In some embodiments, the privacy settings of a user data element may specify a "blacklist" of entities that may not access certain information associated with the user data element. The privacy settings associated with the user data element may specify any suitable granularity of access being permitted or denied. For example, some entities may have permission to see that a particular user data element exists, some entities may have permission to view the content of a particular user data element, and some entities may have permission to modify a particular user data element. The privacy settings may allow the user to permit other entities to access or store user data elements for a limited period of time.
The privacy settings may allow the user to specify one or more geographic locations where the user data elements may be accessed. Access to or denial of access to the user data element may depend on the geographic location of the entity attempting to access the user data element. For example, a user may allow access to user data elements and specify that the user data elements are accessible to an entity only when the user is in a particular location. If the user leaves the particular location, the user data element may no longer be accessible to the entity. As another example, a user may specify that a user data element is only accessible to entities within a threshold distance from the user (e.g., another user of a head-mounted device that is within the same local area as the user). If the user subsequently changes locations, the entity having access to that user data element may lose access, while a new set of entities may gain access when they come within a threshold distance of the user.
The system 700 may include one or more authorization/privacy servers for enforcing privacy settings. A request from an entity for a particular user data element may identify the entity associated with the request and may only send the user data element to the entity if the authorization server determines that the entity is authorized to access the user data element based on privacy settings associated with the user data element. If the requesting entity is not authorized to access the user data element, the authorization server may prevent the requested user data element from being retrieved or may prevent the requested user data element from being sent to the entity. Although this disclosure describes enforcing privacy settings in a particular manner, this disclosure contemplates enforcing privacy settings in any suitable manner.
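For illustration only, an authorization check combining the rules above (a blacklist of denied entities plus the distance-based geographic rule) can be sketched as follows. The dictionary keys and rule structure are assumptions for illustration, not the authorization server's actual data model.

```python
# Hypothetical privacy-settings check: deny blacklisted entities, and
# optionally deny entities farther than a threshold distance from the user.

import math

def authorized(entity, privacy, entity_pos, user_pos):
    if entity in privacy.get("blacklist", ()):
        return False                      # explicitly denied entity
    max_d = privacy.get("max_distance")   # threshold-distance rule, if any
    if max_d is not None:
        if math.dist(entity_pos, user_pos) > max_d:
            return False                  # entity too far from the user
    return True
```

An authorization server applying such a check would evaluate it per request, so access can be lost or gained as entities move relative to the user, as described above.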
Additional configuration information
The foregoing description of the embodiments has been presented for purposes of illustration; it is not intended to be exhaustive or to limit the patent rights to the precise form disclosed. Those skilled in the relevant art will appreciate that many modifications and variations are possible in light of the above disclosure.
Some portions of this specification describe embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. These operations, although described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent circuits or microcode, or the like. Furthermore, it has proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be implemented in software, firmware, hardware, or any combination thereof.
Any of these steps, operations, or processes described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code executable by a computer processor to perform any or all of the steps, operations, or processes described.
Embodiments may also relate to an apparatus for performing the operations herein. The apparatus may be specially constructed for the required purposes, and/or the apparatus may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory tangible computer readable storage medium that may be coupled to a computer system bus, or in any type of medium suitable for storing electronic instructions. Furthermore, any computing system referred to in this specification may comprise a single processor or may be an architecture employing a multi-processor design for increased computing power.
Embodiments may also relate to a product resulting from the computing process described herein. Such products may include information derived from a computing process, where the information is stored on a non-transitory tangible computer-readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the patent rights. Accordingly, it is intended that the scope of the patent claims not be limited by this detailed description, but rather by any claims based on the disclosure of the application herein. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent claims, which is set forth in the following claims.

Claims (16)

1. A Vertical Cavity Surface Emitting Laser (VCSEL) chip, the VCSEL chip comprising:
a first VCSEL array comprising a plurality of first VCSELs on a substrate; and
a second VCSEL array comprising a plurality of second VCSELs on the substrate, the second VCSEL array being positioned orthogonal to the first VCSEL array on the substrate,
wherein light emitted from the first VCSEL array is used to form a stripe pattern, and wherein light emitted from the first VCSEL array and the second VCSEL array together is used to form flood illumination.
2. The VCSEL chip of claim 1, wherein each of the plurality of first VCSELs has a respective emission region over a first length, the VCSEL chip further comprising:
A third VCSEL array comprising a plurality of third VCSELs on the substrate, wherein each of the plurality of third VCSELs has a respective emission region over a third length longer than the first length, and the third VCSEL array is oriented parallel to the first VCSEL array;
Preferably, wherein at least a portion of the third VCSEL array is staggered within the first VCSEL array.
3. A VCSEL chip as claimed in claim 1 or 2, wherein each of the plurality of second VCSELs has an elliptical emission region.
4. A VCSEL chip as claimed in any preceding claim, wherein two adjacent emission regions of the plurality of second VCSELs are separated by a gap, and a first VCSEL of the first VCSEL array is positioned along a line bisecting the gap.
5. A VCSEL chip as claimed in any preceding claim, wherein the first VCSEL array is arranged as a plurality of parallel stripe sources and each of the plurality of stripe sources comprises a plurality of first VCSELs; and preferably wherein each of the plurality of stripe sources is addressable, and wherein light from each of the plurality of stripe sources corresponds to a different stripe in the stripe pattern.
6. The VCSEL chip of claim 5, wherein adjacent stripe sources of the plurality of parallel stripe sources are separated by respective gaps, and for each gap there is a corresponding second VCSEL whose emitter region is positioned along a line parallel to the adjacent stripe sources and passing through the center of the gap; and preferably wherein the second VCSEL array is arranged to form an addressable single stripe source, and wherein light from the single stripe source fills dark regions between stripes in the stripe pattern to form the flood illumination.
7. A VCSEL chip as claimed in any preceding claim, wherein light from the first VCSEL array is refracted by a cylindrical lens to form the stripe pattern and light from the first and second VCSEL arrays is refracted by the cylindrical lens to form the flood illumination.
8. The VCSEL chip of any of the preceding claims, wherein the VCSEL chip is part of a Depth Camera Assembly (DCA), and a controller of the DCA is configured to:
Selecting a depth sensing mode for a local region of the DCA;
Instruct the VCSEL chip to emit light according to the selected depth sensing mode; and
Determining depth information of the local area using the acquired image of the local area illuminated by the emitted light from the VCSEL chip;
Preferably, wherein the depth sensing mode is selected from the group comprising: assisted stereo, time of flight, and structured light.
9. A Depth Camera Assembly (DCA), the DCA comprising:
a Vertical Cavity Surface Emitting Laser (VCSEL) chip, the VCSEL chip comprising:
a first VCSEL array including a plurality of first VCSELs on a substrate,
A second VCSEL array comprising a plurality of second VCSELs on the substrate, the second VCSEL array being positioned orthogonal to the first VCSEL array on the substrate;
an optical assembly configured to condition light from the VCSEL chip and project the conditioned light into a local area of the DCA, the conditioned light forming one of a stripe pattern or flood illumination, wherein light emitted from the first VCSEL array is used to form the stripe pattern, and wherein light emitted from the first VCSEL array and the second VCSEL array together is used to form the flood illumination;
a camera configured to capture an image of the local area illuminated by the conditioned light; and
a controller configured to:
instruct the VCSEL chip to emit light to form one of the flood illumination or the stripe pattern; and
determine depth information of the local area using the captured image.
10. The DCA of claim 9, wherein the controller is configured to:
selecting a depth sensing mode for the local area of the DCA, wherein the depth sensing mode is selected from the group consisting of: assisted stereo, time of flight, and structured light; and
The VCSEL chip is instructed to emit light according to the selected depth sensing mode.
11. The DCA of claim 9 or 10, wherein each of the plurality of first VCSELs has a respective emission region over a first length, the DCA further comprising:
A third VCSEL array comprising a plurality of third VCSELs on the substrate, wherein each of the plurality of third VCSELs has a respective emission region over a third length longer than the first length, and the third VCSEL array is oriented parallel to the first VCSEL array;
Preferably, wherein at least a portion of the third VCSEL array is staggered within the first VCSEL array.
12. The DCA of any of claims 9 to 11, wherein the optical assembly includes a cylindrical lens by which light from the first VCSEL array is refracted to form the stripe pattern, and light from the first VCSEL array and the second VCSEL array is refracted to form the flood illumination.
13. The DCA of any of claims 9 to 12, wherein the first VCSEL array comprises two adjacent stripe sources parallel to each other and separated by a gap, and there is a corresponding second VCSEL whose emission region is positioned along a line parallel to the two adjacent stripe sources and passing through the center of the gap.
14. The DCA according to any of claims 9 to 13, wherein the first VCSEL array is arranged as a plurality of parallel stripe sources, and each of the plurality of stripe sources comprises a plurality of first VCSELs, and adjacent emission regions of the plurality of second VCSELs are separated by respective gaps, and each of the plurality of stripe sources is positioned to bisect a different gap.
15. The DCA according to any one of claims 9 to 14, further comprising:
a laser driver configured to provide a drive current to a first stripe source of the first VCSEL array and a second stripe source of the second VCSEL array, wherein the drive current provided to the second stripe source is one eighth of the drive current provided to the first stripe source.
16. A non-transitory computer readable medium configured to store program code instructions that, when executed by a processor of a Depth Camera Assembly (DCA), cause the DCA to perform steps comprising:
instructing a Vertical Cavity Surface Emitting Laser (VCSEL) chip to emit light to form one of a flood illumination or a stripe pattern, wherein the VCSEL chip comprises:
a first VCSEL array including a plurality of first VCSELs on a substrate,
A second VCSEL array comprising a plurality of second VCSELs on the substrate, the second VCSEL array being positioned orthogonal to the first VCSEL array on the substrate;
conditioning light from the VCSEL chip via an optical assembly;
projecting the conditioned light into a local area of the DCA, the conditioned light forming one of the stripe pattern or the flood illumination, wherein light emitted from the first VCSEL array is used to form the stripe pattern, and wherein light emitted from the first VCSEL array and the second VCSEL array together is used to form the flood illumination;
capturing an image of the local area illuminated by the conditioned light; and
determining depth information of the local area using the captured image.
CN202280062088.2A 2021-09-13 2022-09-13 VCSEL chip for generating linear structured light patterns and flood illumination Pending CN117957421A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/243,514 2021-09-13
US17/572,395 2022-01-10
US17/572,395 US20230085063A1 (en) 2021-09-13 2022-01-10 Vcsel chip for generation of linear structured light patterns and flood illumination
PCT/US2022/043356 WO2023039288A1 (en) 2021-09-13 2022-09-13 Vcsel chip for generation of linear structured light patterns and flood illumination

Publications (1)

Publication Number Publication Date
CN117957421A true CN117957421A (en) 2024-04-30

Family

ID=90798686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280062088.2A Pending CN117957421A (en) 2021-09-13 2022-09-13 VCSEL chip for generating linear structured light patterns and flood illumination

Country Status (1)

Country Link
CN (1) CN117957421A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination