WO2018111423A1 - Systems and methods for improved depth sensing - Google Patents

Systems and methods for improved depth sensing

Info

Publication number
WO2018111423A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
structured light
field
image
light pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2017/058982
Other languages
English (en)
French (fr)
Inventor
Kalin Atanassov
Sergiu Goma
Stephen Michael Verrall
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to BR112019011021A priority Critical patent/BR112019011021A2/pt
Priority to KR1020197016374A priority patent/KR20190096992A/ko
Priority to CN201780071795.7A priority patent/CN109983506A/zh
Priority to EP17804996.1A priority patent/EP3555855B1/en
Priority to JP2019531636A priority patent/JP6866484B2/ja
Publication of WO2018111423A1 publication Critical patent/WO2018111423A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/907Television signal recording using static stores, e.g. storage tubes or semiconductor memories
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • This technology relates to active depth sensing, and more specifically, to generating a depth map of a scene to facilitate refinement of image capture parameters and post capture image processing.
  • Imaging devices that are structured light active sensing systems include a transmitter and a receiver configured to transmit and receive patterns corresponding to spatial codes (or "codewords") to generate a depth map that indicates the distance of one or more objects in a scene from the imaging device.
  • the closer the object is to the transmitter and receiver the farther the received codeword is from its original position in the transmitted codeword.
  • the difference between the position of a received codeword and the corresponding transmitted codeword may be used to determine the depth of an object in a scene.
  • Structured light active sensing systems may use these determined depths to generate a depth map of a scene, which may be a three dimensional representation of the scene.
  • Many applications may benefit from determining a depth map of a scene, including image quality enhancement and computer vision techniques.
  • generating a depth map may include detecting codewords.
  • the codewords may include an array of symbols.
  • Decoding filters may identify spatial boundaries for codewords and symbols, and classify symbols as, for example, "0" or " 1" based on their intensity values.
  • Decoding filters may use matched filters, corresponding to the set of harmonic basis functions used to define the set of possible codewords, to classify incoming basis functions. Therefore, depth map accuracy depends on accurately receiving symbols, codewords, and/or basis functions.
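  • As an illustration of the symbol classification just described, the following sketch (not from the patent; the template set, normalization, and array shapes are illustrative assumptions) correlates a received intensity patch against candidate codeword templates and keeps the strongest matched-filter response.

```python
import numpy as np

def classify_codeword(patch, codebook):
    """Return the index of the codeword template that best matches a
    received intensity patch, using a simple matched-filter score."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)  # brightness-invariant
    scores = []
    for template in codebook:
        t = (template - template.mean()) / (template.std() + 1e-9)
        scores.append(float((p * t).sum()))
    return int(np.argmax(scores))

# Hypothetical 4x4 templates and a noisy received patch.
codebook = [np.eye(4), np.fliplr(np.eye(4))]
patch = 200.0 * np.eye(4) + np.random.default_rng(0).normal(0.0, 5.0, (4, 4))
print(classify_codeword(patch, codebook))  # -> 0
```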
  • as a scene moves away from the focal plane of the light source used to project a pattern (for example, a laser), spots corresponding to brighter symbols may be too dark to be differentiated from darker symbols. Accordingly, there is a need for methods and systems to improve depth map generation for digital imaging applications.
  • the imaging device includes a plurality of transmitters including a first transmitter configured to generate a first structured light pattern with a first depth of field at a first resolution and a second transmitter configured to generate a second structured light pattern with a second depth of field at the first resolution, wherein the second depth of field is wider than the first depth of field; a receiver configured to focus within the first depth of field to receive the first structured light pattern and capture a first image representing the first structured light pattern, and to focus within the second depth of field to receive the second structured light pattern and capture a second image representing the second structured light pattern; and an electronic hardware processor configured to generate a depth map of the scene based on the first image and the second image.
  • the electronic hardware processor is further configured to adjust image capture parameters of a third image based on the generated depth map, and to capture the third image using the image capture parameters.
  • the electronic hardware processor is configured to adjust one or more of an aperture, shutter speed, or illumination parameter of the third image based on the depth map.
  • the electronic hardware processor is further configured to store one or more of the depth map and the third image to a stable storage.
  • the first transmitter is configured to generate the first structured light pattern to have a first maximum resolution at a first distance from the imaging device and the second transmitter is configured to generate the second structured light pattern to have a second maximum resolution, lower than the first maximum resolution, at a second distance from the imaging device.
  • the first distance and the second distance are equivalent. In some aspects, the first distance is different than the second distance. In some aspects, the first transmitter is configured to generate the first depth of field to overlap with the second depth of field. In some aspects, the first transmitter is configured to generate the first depth of field to be contained within the second depth of field.
  • Another aspect disclosed is a method of capturing an image with an imaging device.
  • the method includes generating, using a first light emitter, a first structured light pattern with a first depth of field at a first resolution onto a scene, focusing, via an electronic hardware processor an imaging sensor within the first depth of field to capture a first image representing the scene illuminated by the first structured light pattern, generating, using a second light emitter, a second structured light pattern with a second depth of field at the first resolution, wherein the second depth of field is wider than the first depth of field, focusing the imaging sensor within the second depth of field to capture a second image representing the scene illuminated by the second structured light pattern, generating, using the electronic hardware processor, a depth map of the scene based on the first image and the second image, determining image capture parameters based on the depth map, capturing, using the imaging sensor, a third image of the scene based on the image capture parameters; and storing the third image to a stable storage.
  • the method includes adjusting one or more of an aperture, shutter speed, or illumination parameter of the third image based on the depth map.
  • the method includes storing the depth map to the stable storage.
  • the method includes generating the first structured light pattern to have a first maximum resolution at a first distance from the imaging device and generating the second structured light pattern to have a second maximum resolution, lower than the first maximum resolution, at a second distance from the imaging device.
  • the first distance and the second distance are equivalent.
  • the first distance is different than the second distance.
  • the method includes generating the first depth of field to overlap with the second depth of field.
  • the method includes generating the first depth of field to be contained within the second depth of field.
  • Another aspect disclosed is a non-transitory computer readable storage medium comprising instructions that when executed cause a processor to perform a method of capturing an image with an imaging device.
  • the method includes generating, using a first light emitter, a first structured light pattern with a first depth of field at a first resolution onto a scene, focusing, via an electronic hardware processor an imaging sensor within the first depth of field to capture a first image representing the scene illuminated by the first structured light pattern, generating, using a second light emitter, a second structured light pattern with a second depth of field at the first resolution, wherein the second depth of field is wider than the first depth of field, focusing the imaging sensor within the second depth of field to capture a second image representing the scene illuminated by the second structured light pattern, generating, using the electronic hardware processor, a depth map of the scene based on the first image and the second image, determining image capture parameters based on the depth map, capturing, using the imaging sensor, a third image of the scene based on the image capture parameters; and storing the third image to a stable storage.
  • the method of the non-transitory computer readable storage medium includes generating the first structured light pattern to have a first maximum resolution at a first distance from the imaging device and generating the second structured light pattern to have a second maximum resolution, lower than the first maximum resolution, at a second distance from the imaging device.
  • the first distance and the second distance are equivalent.
  • FIG. 1 shows a structured light transmitter and receiver.
  • FIG. 2A shows a receiver including a lens and an imaging sensor.
  • FIG. 2B shows a graph demonstrating how the bandwidth of light reflected by an object and received by the receiver is affected by the distance of the objects.
  • FIG. 3A shows a transmitter that utilizes a diffractive optical element.
  • FIG. 3B is a graph demonstrating how the transmitter bandwidth may vary with distance due to the effects described above with respect to FIG. 3A.
  • FIG. 4A shows how two light emitters may be combined into a single device to provide a broader range of coverage for structured light patterns.
  • FIG. 4B shows a reception range of an imaging device equipped with the two light emitters of FIG. 4A.
  • FIG. 5A shows an exemplary configuration of a receiver and three transmitters or light emitters.
  • FIG. 5B shows another exemplary configuration of a receiver and two transmitters or light emitters.
  • FIG. 5C shows another exemplary configuration of a receiver and two transmitters.
  • FIG. 6 illustrates an example of how depth may be sensed for one or more objects in a scene.
  • FIG. 7 is a graph showing imaging performance for alternative designs of a diffractive optical element.
  • FIG. 8 is a graph showing imaging performance for alternative designs of a diffractive optical element.
  • FIG. 9 is an illustration of an imaging device including a plurality of structured light transmitters.
  • FIG. 10 is a second illustration of an imaging device including a plurality of emitters.
  • FIG. 11 is a third illustration of an imaging device including a plurality of emitters.
  • FIG. 12 is a fourth illustration of an imaging device including a plurality of emitters.
  • FIG. 13 is a block diagram of an exemplary device implementing the disclosed embodiments.
  • FIG. 14 is a flowchart of an exemplary method of generating a depth map.
  • FIG. 1 shows a structured light transmitter 102 and receiver 104.
  • Active sensing may project unique patterns (or codes) from the transmitter 102 onto a scene and receive reflections of those patterns at the receiver 104.
  • the size of the patterns or code projected by the transmitter 102 influences the smallest detectable object.
  • both the transmitter 102 and receiver 104 are "band-tuned" at a certain depth plane and become band-limited as the depth in the scene moves away from that plane.
  • the transmitter may be configured to transmit a structured light pattern 105 that is focused at a particular depth, for example, either of the depths represented by 106a-b.
  • the transmitter may produce a structured light pattern that has a field of view 108 at the various depths 106a-b.
  • the receiver may be tuned to receive the structured light pattern 105 at one or more of the various depths 106a-b.
  • the information encoded in the structured light pattern 105 may be received by an imaging sensor 108 in the receiver at a variety of positions on the imaging sensor 108, as shown by positions 110a and 110b.
  • FIG. 2A shows a receiver including a lens and an imaging sensor.
  • FIG. 2A demonstrates that the imaging sensor 202 may receive light information via the lens 204 which is reflected from a plurality of objects at different depths, shown as objects 210a-c.
  • the quality of the reflected light at the imaging sensor may vary.
  • light reflected from object 210b is shown well focused on the surface of the imaging sensor 202 as point 212b.
  • light reflected from object 210a, shown as 212a, may be focused somewhat ahead of the imaging sensor 202, while light reflected from object 210c, shown as 212c, may be focused behind the imaging sensor 202.
  • This may result in a reduced bandwidth of light being available for sensing the objects 210a and 210c as compared to object 210b.
  • FIG 2B shows a graph 250 demonstrating how the bandwidth of light reflected by an object and received by the receiver is affected by the distance of the objects. This may result in a receiver, such as a camera of an imaging device, having difficulty detecting small objects that are positioned away from a focal plane of the camera.
  • FIG. 3A shows a transmitter that utilizes a diffractive optical element.
  • FIG. 3A shows that light may be emitted from multiple locations 304a-c of a transmitting surface. As the light passes through a diffractive optical element 302, it may be focused on a focal plane, shown by point 306b. However, the transmitter may be unable to provide the same focused light at different depths within a scene, shown by depths 306a and 306c.
  • FIG. 3B is a graph 350 demonstrating how the transmitter bandwidth may vary with distance due to the effects described above with respect to FIG. 3A.
  • FIG. 4A shows how two light emitters may be combined into a single device to provide a broader range of coverage for structured light patterns.
  • FIG. 4A shows light emitters 402a and 402b, which may be included in a single imaging device.
  • the two light emitters 402a-b include diffractive optical elements 404a and 404b, respectively.
  • the two light emitters 402a-b are tuned for different focal distances, shown as 406a-b respectively. The results of such a configuration are shown in FIG. 4B, discussed below.
  • FIG. 4B shows a reception range of an imaging device equipped with the two light emitters 402a-b of FIG. 4A.
  • the reception range 450 includes a first depth sensing zone 452a, provided by the light emitter 402a, and a second depth sensing zone 452b, provided by the light emitter 402b.
  • FIG. 5A shows an exemplary configuration of a receiver and three transmitters or light emitters.
  • the three transmitters 504a-c are all located on the same horizontal side of the receiver 502, but each transmitter 504a-c is at a different distance from the receiver 502.
  • the transmitters 504a-c generate a combined structured light pattern shown by 506a-c. Reflections from objects in a scene from each structured light pattern 506a-c may have different levels of occlusion based on the different positions of the three transmitters 504a-c.
  • the overlapping coverage provided by the three structured light patterns 506a-c may provide for improved depth sensing when compared to devices utilizing other transmitter configurations.
  • FIG. 5B shows another exemplary configuration of a receiver and two transmitters or light emitters.
  • the two transmitters are designed to provide structured light patterns at different focal depths.
  • the transmitter 544a is designed to provide a longer focal depth resulting in a broader field of view for the structured light pattern 546a.
  • the transmitter 544b is designed to provide a shorter focal depth, resulting in higher resolution and a smaller field of view for the structured light pattern 546b.
  • the complementary coverage provided by the two structured light patterns 546a-b may provide for improved depth sensing when compared to devices utilizing other transmitter configurations.
  • FIG. 5C shows another exemplary configuration of a receiver and two transmitters.
  • transmitters 544a-b are positioned on either side of the receiver 562.
  • structured light patterns emitted by the two transmitters 544a-b may provide a depth map that is generated from depth information that suffered minimal degradation from occlusion.
  • transmitter 544b may be in a position to illuminate that side of the object, and provide depth information to the receiver 562.
  • FIG. 6 illustrates an example of how depth may be sensed for one or more objects in a scene.
  • the description with respect to FIG. 6 may be used by any of the disclosed embodiments to determine one or more depths of an image scene utilizing structured light.
  • FIG. 6 shows a device 600 including a transmitter 102 and a receiver 104. The device is illuminating two objects 606 and 608 with structured light emitted from transmitter 102 as codeword projection 610. The codeword projection 610 reflects from objects 606 and/or 608 and is received as a reflected codeword 611 by receiver 104 on sensor plane 607.
  • the transmitter 102 is on the same reference plane (e.g., lens plane 605) as the receiver 104.
  • the transmitter 102 projects the codeword projection 610 onto the objects 606 and 608 through an aperture 613.
  • the codeword projection 610 illuminates the object 606 as projected segment 612', and illuminates the object 608 as projected segment 612.
  • the reflected codeword 611 may show reflections generated from the object 608 at a first distance d1 and reflections generated from the object 606 at a second distance d2.
  • the projected segment 612' appears at a distance d2 from its initial location.
  • the projected segment 612 appears at a distance d1 from its initial location (where d1 < d2). That is, the further away an object is from the transmitter/receiver, the closer the received projected segment/portion/window is to its original position at the receiver 104 (e.g., the outgoing projection and incoming projection are more parallel). Conversely, the closer an object is to the transmitter/receiver, the further the received projected segment/portion/window is from its original position at the receiver 104.
  • the difference between received and transmitted codeword position may be used as an indicator of the depth (e.g., relative depth) of an object.
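  • A minimal sketch of the depth-from-displacement relationship described above, assuming a standard triangulation model (the focal length, baseline, and pixel values below are hypothetical and not taken from the patent):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Estimate depth from the shift of a received codeword: a larger
    shift means the reflecting surface is closer to the device."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# The closer object produces the larger shift (d2 > d1), hence the smaller depth.
d1, d2 = 12.0, 30.0        # hypothetical codeword shifts in pixels
f, b = 1400.0, 0.08        # hypothetical focal length (px) and baseline (m)
print(depth_from_disparity(d1, f, b))  # ~9.3 m (farther object)
print(depth_from_disparity(d2, f, b))  # ~3.7 m (closer object)
```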
  • modulation and coding schemes include temporal coding, spatial coding, and direct codification.
  • Pseudorandom codes may be based on De-Bruijn sequences or M-arrays, which define the codebook (e.g., m-ary intensity or color modulation). Pattern segmentation may not be easily attained, for example, where the shapes and patterns are distorted.
  • Modulation may be by a monotonic phase or an intensity waveform.
  • this scheme may utilize a codebook that is larger than the codebook utilized for other methods.
  • received codewords may be correlated against a defined set of possible codewords (e.g., in a codebook).
  • a small set of codewords (e.g., a small codebook) may reduce decoding errors, whereas additional errors may be experienced by implementations using larger codebooks.
  • Structured light patterns may be projected onto a scene by shining light through a codemask.
  • Light projected through the codemask may contain one or more tessellated codemask primitives.
  • Each codemask primitive may contain an array of spatial codes.
  • a codebook or data structure may include the set of codes.
  • Spatial codes, the codemask, and codemask primitives may be generated using basis functions. The periodicities of the basis functions may be chosen to meet the requirements for the aggregate pattern of Hermitian symmetry (for eliminating ghost images and simplifying manufacturing), minimum duty cycle (to ensure a minimum power per codeword), perfect window property (for optimum contour resolution and code packing for high resolution), and randomized shifting (for improved detection on object boundaries).
  • a receiver may make use of the codebook and/or the attributes of the design intended to conform to the constraints when demodulating, decoding, and correcting errors in received patterns.
  • the size and corresponding resolution of the spatial codes corresponds to a physical spatial extent of a spatial code on a codemask. Size may correspond to the number of rows and columns in a matrix that represents each codeword. The smaller a codeword, the smaller an object that can be detected. For example, to detect and determine a depth difference between a button on a shirt and the shirt fabric, the codeword should be no larger than the size of the button.
  • each spatial code may occupy four rows and four columns.
  • the codes may occupy more or fewer rows and columns (rows x columns), to occupy, for example, 3x3, 4x4, 4x5, 5x5, 6x4, or 10x10 rows and columns.
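  • The relationship between codeword size and the smallest detectable object can be illustrated with a rough footprint check; the linear-scaling model and the pitch value below are assumptions chosen for illustration, not figures from the patent.

```python
def codeword_footprint_mm(codeword_columns, pitch_mm_per_column_at_1m, distance_m):
    """Approximate physical width of a projected codeword at a given distance,
    assuming the projected pattern scales linearly with distance."""
    return codeword_columns * pitch_mm_per_column_at_1m * distance_m

# A 4-column codeword with a hypothetical 2 mm/column pitch at 1 m:
footprint = codeword_footprint_mm(4, 2.0, 1.5)   # 12 mm at 1.5 m
print(footprint <= 10.0)  # False: too coarse to resolve a 10 mm shirt button
```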
  • the spatial representation of spatial codes corresponds to how each codeword element is patterned on the codemask and then projected onto a scene.
  • each codeword element may be represented using one or more dots, one or more line segments, one or more grids, some other shape, or some combination thereof.
  • the "duty cycle" of spatial codes corresponds to a ratio of a number of asserted bits or portions (e.g., " I s") to a number of un-asserted bits or portions (e.g., "0s") in the codeword.
  • each bit or portion that has a value of " 1" may have energy (e.g., "light energy"), but each bit having a value of "0” may be devoid of energy.
  • the codeword should have sufficient energy. Low energy codewords may be more difficult to detect and may be more susceptible to noise.
  • a 4x4 codeword has a duty cycle of 50% or more if 8 or more of the bits in the codeword are " 1."
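  • A small check of the duty-cycle definition above (sketch only; the codeword matrix is an arbitrary example):

```python
import numpy as np

def duty_cycle(codeword):
    """Fraction of asserted ("1") bits in a codeword matrix."""
    codeword = np.asarray(codeword)
    return codeword.sum() / codeword.size

# A 4x4 codeword with 8 asserted bits has a duty cycle of exactly 0.5.
codeword = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1]])
print(duty_cycle(codeword))  # 0.5
```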
  • the "contour resolution" or "perfect window” characteristic of codes indicates that when a codeword is shifted by an amount, for example, a one-bit rotation, the resulting data represents another codeword.
  • An amount that the codeword is shifted may be referred to as a shift amount.
  • Codes with high contour resolution may enable the structured light depth sensing system to recognize relatively small object boundaries and provide recognition continuity for different objects.
  • a shift amount of 1 in the row dimension and 2 in the column dimension may correspond to a shift by one bit position to the right along the row dimension, and two bit positions down along the column dimension.
  • High contour resolution sets of codewords make it possible to move a window on a received image one row or one column at a time, and determine the depth at each window position.
  • the window may be sized based on the resolution of the object depths to be determined (for example, a button on a shirt).
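  • The sliding-window decoding enabled by the perfect-window property might look like the following sketch (the grid shape, window size, and codebook lookup are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def decode_windows(symbol_grid, codebook, rows=4, cols=4):
    """Slide a rows x cols window across a grid of demodulated symbols one
    column at a time and look up each window in the codebook.  With a
    perfect-window code every shifted window is itself a valid codeword,
    so a depth estimate is available at each column position."""
    lookup = {tuple(code.flatten()): idx for idx, code in enumerate(codebook)}
    matches = []
    for c in range(symbol_grid.shape[1] - cols + 1):
        window = symbol_grid[:rows, c:c + cols]
        matches.append(lookup.get(tuple(window.flatten())))
    return matches  # codeword index (or None) per column position
```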
  • the symmetry of codes may indicate that the code mask or codebook primitive has Hermitian symmetry, which may provide several benefits as compared to using non-Hermitian symmetric codebook primitives or patterns. Patterns with Hermitian symmetries are "flipped" or symmetric, along both X and Y (row and column) axes.
  • the aliasing characteristic of codemasks or codemask primitives corresponds to a distance between two codewords that are the same.
  • the aliasing distance may be based on the size of the codebook primitive.
  • the aliasing distance may thus represent a uniqueness criterion indicating that each codeword of the codebook primitive is to be different from each other codeword of the codebook primitive, and that the codebook primitive is unique as a whole.
  • the aliasing distance may be known to one or more receiver devices, and used to prevent aliasing during codeword demodulation.
  • the cardinality of a codemask corresponds to a number of unique codes in a codebook primitive.
  • FIG. 7 is a graph showing imaging performance for alternative designs of a diffractive optical element.
  • Graph 740 shows four graphs 742a-d representing optical performance of four hypothetical designs of a diffractive optical element.
  • a first design provides the best resolution 744a at a distance of approximately 70 cm.
  • a second design provides the best resolution 744b at a distance of approximately 100 cm.
  • a third design provides the best resolution 744c at a distance of approximately 150cm.
  • a fourth design provides the best resolution 744d at a distance of approximately 200 cm.
  • the optical performance of each of the designs shown in FIG. 7 can also be compared at a reference resolution 746 of, for example, 0.7.
  • the reference resolution may be explained as follows: in some aspects, when a best focus is achieved by a particular design, a point in space of a scene being imaged will produce a circle on the lens of the design. This circle is based at least partially on the diffraction limit.
  • the reference (maximum) resolution is 1/circle_diameter; as the point deviates from the optimal focus, the circle becomes larger due to defocusing, thus decreasing the resolution. At an arbitrarily selected reduction, say 0.7, the image is considered no longer in focus.
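  • A numeric restatement of this defocus model (sketch only; the blur-circle diameters are arbitrary example values):

```python
def in_focus(circle_diameter, best_circle_diameter, cutoff=0.7):
    """Using resolution = 1 / circle_diameter, report whether a point is
    still within the usable depth of field for the chosen cutoff."""
    resolution = 1.0 / circle_diameter
    best_resolution = 1.0 / best_circle_diameter
    return resolution >= cutoff * best_resolution

# Defocus grows the blur circle; past the 0.7 cutoff the point is treated
# as outside the depth of field.
print(in_focus(1.2, 1.0))  # True  (resolution ratio ~0.83)
print(in_focus(1.6, 1.0))  # False (resolution ratio ~0.63)
```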
  • the first design's optical performance represented by graph 742a provides a depth of field of a first depth 748a at the reference resolution.
  • the second design's optical performance, represented by graph 742b, provides a depth of field at the reference resolution of a second depth 748b, which is greater than the first depth of field 748a. Note that the second design's depth of field 748b is also at a different distance from the imaging device than the first design's depth of field 748a.
  • the third design's optical performance, represented by graph 742c provides a depth of field at the reference resolution of a third depth 748c, which is greater than both the first and second depths of field 748a-b.
  • the third depth of field 748c is also provided at a different distance from the imaging device. Note also that the third depth of field 748c overlaps somewhat with the second depth of field 748b.
  • the fourth design's optical performance, represented by graph 742d, provides a depth of field at the reference resolution of a fourth depth 748d, which is greater than any of the first, second, or third depths of field 748a-c.
  • the fourth depth of field 748d is also provided at a different distance from the imaging device than the first, second, or third depths of field 748a-c, although the fourth depth of field 748d overlaps with the third depth of field 748c.
  • FIG. 8 is a graph showing imaging performance for alternative designs of a diffractive optical element. Whereas the designs shown in FIG. 7 achieved equivalent resolutions at different distances from the imaging device, the two designs shown in FIG. 8 both achieve their best resolution at the same distance (150cm) from the imaging device. However, a first design, shown by graph 862a, achieves a substantially higher resolution than a second design, shown by graph 862b.
  • the second design with optical performance shown by graph 862b provides a greater depth of field 868b at a reference resolution 866 than the first design's optical performance 862a with a depth of field 868a.
  • FIG. 9 is an illustration of an imaging device including a plurality of structured light transmitters.
  • FIG. 9 shows the imaging device 905 and two structured light transmitters 908a-b.
  • the structured light transmitters 908a-b transmit structured light patterns 910a-b, respectively.
  • Each of the structured light patterns 910a-b is focused at different distances 920a-b respectively.
  • a receiver 912 may receive the pattern information from each of the structured light patterns 910a-b.
  • the receiver 912 may include a lens with a variable focal distance.
  • the imaging device 905 may be configured to emit the first structured light pattern 910a when the receiver 912 is adjusted to the focal distance 920a.
  • the receiver may then be configured to capture a first image based on the illumination of the scene by the first structured light pattern 910a.
  • the imaging device 905 may be configured to then generate the second structured light pattern 910b after the receiver 912 has been adjusted to the different focal distance 920b.
  • the receiver may then be configured to capture a second image while the scene is illuminated by the second structured light image.
  • the imaging device 905 may be configured to generate depth information from the first image and the second image. Because the first structured light pattern 910a and the second structured light pattern 910b are focused at different distances 920a-b from the imaging device, the generated depth information may be more complete and/or accurate than if the depth information had been generated from only one structured light pattern, such as either of structured light patterns 910a-b.
  • FIG. 10 is a second illustration of an imaging device including a plurality of emitters.
  • the imaging device 1055 of FIG. 10 includes light emitters 1058a-b.
  • Light emitter 1058a is configured to generate a structured light pattern 1060a at a median distance 1070a from the imaging device 1055.
  • the first structured light pattern 1060a is generated by the emitter 1058a to have a depth of field indicated as 1072a.
  • the second emitter 1058b is configured to generate structured light pattern 1060b.
  • the structured light pattern 1060b has a depth of field of 1072b. Structured light pattern 1060b's depth of field 1072b is substantially within the depth of field 1072a of the first structured light pattern 1060a generated by light emitter 1058a.
  • the structured light pattern 1060a may be generated at a first resolution
  • the second structured light pattern 1060b may be generated at a second resolution which is higher than the first resolution
  • a receiver 1062 may receive the information from each of the structured light patterns 1060a-b.
  • the receiver 1062 may include a lens with a variable focal distance.
  • the imaging device 1055 may be configured to emit the first structured light pattern 1060a when the receiver 1062 is adjusted to the focal distance 1070a. The receiver may then be configured to capture a first image based on the illumination of the scene by the first structured light pattern 1060a.
  • the imaging device 1055 may be configured to then generate the second structured light pattern 1060b after the receiver 1062 has been adjusted to the different focal distance 1070b.
  • the receiver may then be configured to capture a second image while the scene is illuminated by the second structured light image.
  • the imaging device 1105 of FIG. 11 includes light emitters 1108a-b.
  • Light emitter 1108a is configured to generate a structured light pattern 1110a at a median distance 1120a from the imaging device 1105.
  • the first structured light pattern 1110a is generated by the emitter 1108a to have a depth of field indicated as 1122a.
  • the second emitter 1108b is configured to generate structured light pattern 1110b.
  • the structured light pattern 1110b has a depth of field of 1122b.
  • Structured light pattern 1110b's depth of field 1122b is narrower than the depth of field 1122a of the first structured light pattern 1110a generated by light emitter 1108a. Furthermore, unlike the depths of field 1072a-b of FIG. 10, the depths of field of the two structured light patterns 1110a-b in FIG. 11 do not substantially overlap.
  • the structured light pattern 1110a may be generated at a first resolution
  • the second structured light pattern 1110b may be generated at a second resolution which is higher than the first resolution
  • a receiver 1112 may receive the pattern information from each of the structured light patterns 1110a-b.
  • the receiver 1112 may include a lens with a variable focal distance.
  • the imaging device 1105 may be configured to emit the first structured light pattern 1110a when the receiver 1112 is adjusted to the focal distance 1120a. The receiver may then be configured to capture a first image based on the illumination of the scene by the first structured light pattern 1110a.
  • the imaging device 1105 may be configured to then generate the second structured light pattern 1110b after the receiver 1112 has been adjusted to the different focal distance 1120b.
  • the receiver may then be configured to capture a second image while the scene is illuminated by the second structured light image.
  • the device 1105 may include three or more emitters.
  • a first emitter may generate a structured light pattern with a depth of field of 50 centimeters to 4 meters from the imaging device 1105.
  • a second light emitter may be configured to generate a second structured light pattern with a depth of field of 30 cm to 50 cm from the imaging device 1105.
  • a third light emitter may be configured to generate a third structured light pattern with a depth of field of 20 cm to 30 cm from the imaging device 1105.
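  • A sketch of how such a device might pick which emitters to fire for a given scene, using the example depth-of-field ranges above; the selection criterion and emitter names are assumptions for illustration, not the patent's method.

```python
# Depth-of-field bounds (meters) taken from the three-emitter example above.
EMITTER_RANGES = {
    "near": (0.20, 0.30),   # third emitter:  20 cm - 30 cm
    "mid":  (0.30, 0.50),   # second emitter: 30 cm - 50 cm
    "far":  (0.50, 4.00),   # first emitter:  50 cm - 4 m
}

def select_emitters(estimated_depths_m):
    """Return the emitters whose depth of field covers at least one of the
    coarse depth estimates obtained for the scene."""
    selected = set()
    for depth in estimated_depths_m:
        for name, (near, far) in EMITTER_RANGES.items():
            if near <= depth <= far:
                selected.add(name)
    return selected

print(select_emitters([0.25, 1.8]))  # contains 'near' and 'far'
```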
  • FIG. 12 is a fourth illustration of an imaging device including a plurality of emitters.
  • the imaging device 1205 of FIG. 12 includes light emitters 1208a-c. Unlike the emitters 908a-b, 1058a-b, and 1108a-b shown in FIGs. 9-11 respectively, light emitters 1208a-c are positioned on the same side of, but at different distances from, a receiver 1212.
  • Light emitter 1208a is configured to generate a structured light pattern 1210a at a median focal distance 1220a from the imaging device 1205.
  • the second emitter 1208b is configured to generate structured light pattern 1210b at the median focal distance 1220a.
  • Light emitter 1208c is configured to generate structured light pattern 1210c at the median focal distance 1220a.
  • objects in the scene 1201 may have different levels of occlusion to each of the structured light patterns 1210a-c.
  • the imaging device 1205 may be able to derive improved depth information for the scene 1201 when compared to an imaging device using, for example, fewer light emitters to illuminate the scene 1201.
  • a receiver 1212 may receive the pattern information from each of the structured light patterns 1210a-c.
  • the imaging device 1205 may be configured to emit the first structured light pattern 1210a.
  • the receiver 1212 may then be configured to capture a first image based on the illumination of the scene by the first structured light pattern 1210a.
  • the imaging device 1205 may be configured to then generate the second structured light pattern 1210b.
  • the receiver 1212 may then be configured to capture a second image while the scene is illuminated by the second structured light pattern 1210b.
  • the imaging device 1205 may be configured to then generate the third structured light pattern 1210c.
  • the receiver 1212 may then be configured to capture a third image while the scene is illuminated by the third structured light pattern 1210c.
  • the imaging device 1205 may be configured to combine information from the first, second, and third images to generate a depth map.
  • the depth map may include more complete depth information and/or more accurate depth information for the scene 1201.
  • FIG. 13 is a block diagram of an exemplary device implementing the disclosed embodiments.
  • the device 1300 may be an embodiment of any of the devices 905, 1055, 1105, or 1205, shown above with respect to FIGs. 9-12.
  • the device 1300 includes at least two light emitters 1308a-b and an imaging sensor 1312.
  • the two light emitters 1308a-b may represent any of the light emitters 908a-b, 1058a-b, 1108a-b, or 1208a-c discussed above with respect to FIGs. 9-12.
  • the imaging sensor 1312 may represent any of the imaging sensors 912, 1062, 1112, or 1212 discussed above.
  • the device 1300 also includes an electronic hardware processor 1340, an electronic hardware memory 1345 operably coupled to the processor 1340, an electronic display 1350 operably coupled to the processor 1340, and a storage 1360 operably coupled to the processor 1340.
  • the memory 1345 stores instructions that configure the processor to perform one or more functions discussed herein.
  • the instructions may be organized into modules to facilitate discussion of their functionality. For example, the instructions of the device 1300 of FIG. 13 are organized into a structured light module 1346a, depth map module 1346b, and an image capture module 1346c.
  • FIG. 14 is a flowchart of an exemplary method of generating a depth map.
  • the process 1400 may be performed by the device 1300 discussed above with respect to FIG. 13.
  • the process 1400 may be performed by any of the imaging devices and/or using any of the imaging configurations shown in FIGs. 1, 4A-C, 5A-C, or 9-12.
  • Process 1400 may provide enhanced depth sensing. As discussed above, existing designs for active sensing may rely on projecting and detecting unique patterns or codes. The size of the pattern may determine the smallest detectable object. Both a structured light transmitter and receiver may be "band-tuned" to certain depth planes and may become band-limited the further an object in the scene being imaged falls from those planes. Under these circumstances, a light emitting device may have difficulty projecting resolvable structured light codes onto objects that lie far from its focal plane.
  • process 1400 may utilize an auto focus camera as a structured light receiver and shift a focus range based on the scene being imaged. Multiple structured light transmitters, focused at different depths may be utilized. A subset of the transmitters may then be selected depending on the scene characteristics and possibly other considerations.
  • a first structured light pattern is generated onto a scene.
  • the first structured light pattern may be generated by, for example, one of the light emitters 1308a-b.
  • the first structured light pattern may be generated by a laser projected through a diffractive optical element as described above.
  • the diffractive optical element used to generate the first structured light pattern is configured to provide the first structured light pattern with a first depth of field.
  • the light emitter generating the first structured light pattern is a first distance from a receiver or imaging sensor that is configured to capture the first structured light pattern.
  • a lens is adjusted so as to be focused within the first depth of field.
  • the lens may be part of an imaging sensor, such as imaging sensor 1312 discussed above with respect to device 1300.
  • a first image is captured with the lens while it is focused within the first depth of field.
  • the first image represents the first structured light pattern as projected onto the scene.
  • the first image is captured using the imaging sensor 1312 discussed above with respect to exemplary device 1300.
  • a second structured light pattern is generated onto the scene.
  • the second structured light pattern may be generated using a different light emitter than was used to generate the first structured light pattern.
  • the second structured light pattern may be generated by one of the light emitters 1308a-b.
  • the second structured light pattern may be generated by illuminating a laser light source and projecting the light source through a second diffractive optical element (DOE) as discussed above.
  • the second DOE may be configured to generate the second structured light pattern so as to have a second depth of field different than the first depth of field of the first structured light pattern.
  • the depths of field may be different in that they are present at different distances from the imaging sensor or receiver capturing the structured light patterns.
  • the depths of field may be different in that they are of different widths.
  • one depth of field may be, for example, 20 cm deep, while the second depth of field may be 50 cm deep.
  • the two depths of field may overlap completely or partially, or may not overlap, depending on the embodiment's configuration.
  • the second light emitter may be a different second distance from the imaging sensor 1312 or a receiver configured to capture the first and second structured light patterns.
  • the second light emitter may be on an equivalent horizontal side of the imaging sensor capturing the first and second structured light pattern, or may be on an opposing horizontal side of the imaging sensor.
  • the first DOE may be configured to generate the first structured light pattern using a first maximum resolution, while the second DOE is configured to generate the second structured light pattern to have a second maximum resolution different from (greater or less than) the first resolution.
  • the first and second structured light patterns may be configured to provide their maximum resolution at different (or equivalent) distances from a receiver or imaging sensor capturing the structured light patterns.
  • the lens is adjusted so as to be focused within the second depth of field.
  • a second image is captured with the lens while it is focused within the second depth of field.
  • the second image represents the second structured light pattern as projected onto the scene.
  • the second image may be captured by the imaging sensor 1312.
  • a depth map is generated based on the first and/or second images.
  • objects included in the scene at a depth within the first depth of field may be represented in the depth map based on the first image
  • objects within the scene at a depth within the second depth of field may be represented in the depth map based on data derived from the second image.
  • only the first or second image may be utilized to generate the depth map.
  • the first or second image may be utilized based on characteristics of the scene being imaged.
  • a quality level of depth information provided in the images may be evaluated, with the image including the highest quality depth information used to generate the depth map.
  • the quality level may be based on an amount of depth information or an amount of corrupted or missing depth information.
  • an image having the least amount of missing depth information may be utilized to generate the depth map in some aspects.
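  • One way the two captures could be combined or selected, expressed as a sketch (the per-pixel NaN convention, depth-of-field bounds, and merge rule are assumptions used for illustration, not the patent's algorithm):

```python
import numpy as np

def merge_depth_maps(depth_a, depth_b, dof_a, dof_b):
    """Combine per-pixel depth estimates from the two captures, preferring
    the estimate that falls inside its own pattern's depth of field.

    depth_a, depth_b -- arrays with np.nan where decoding failed
    dof_a, dof_b     -- (near, far) depth-of-field bounds for each capture
    """
    merged = np.where(np.isnan(depth_a), depth_b, depth_a)
    in_a = ~np.isnan(depth_a) & (depth_a >= dof_a[0]) & (depth_a <= dof_a[1])
    in_b = ~np.isnan(depth_b) & (depth_b >= dof_b[0]) & (depth_b <= dof_b[1])
    return np.where(in_b & ~in_a, depth_b, merged)

def pick_better_image(depth_a, depth_b):
    """Fallback: keep the capture with the least missing depth information."""
    return depth_a if np.isnan(depth_a).sum() <= np.isnan(depth_b).sum() else depth_b
```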
  • Process 1400 may also include writing the generated depth map to a stable storage (or storage medium).
  • one or more image capture parameters of a third image may be adjusted based on the depth map. For example, in some aspects, an exposure time or flash illumination setting (such as flash intensity and/or whether a flash is used) may be modified based on the depth map. The third image may then be captured using the one or more image capture parameters. The third image may also be written to a stable storage or other I/O device after capture.
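  • A sketch of such a parameter adjustment (the thresholds, exposure values, and flash heuristic below are placeholders chosen for illustration, not values from the patent):

```python
import numpy as np

def capture_parameters(depth_map):
    """Derive illustrative exposure and flash settings from a depth map."""
    median_depth = float(np.nanmedian(depth_map))
    use_flash = median_depth > 2.0                      # distant scene: add flash
    exposure_ms = 10.0 if median_depth < 1.0 else 30.0  # nearer scene: shorter exposure
    return {
        "exposure_ms": exposure_ms,
        "flash": use_flash,
        "flash_intensity": min(1.0, median_depth / 4.0) if use_flash else 0.0,
    }

print(capture_parameters(np.array([[0.8, 1.2], [2.5, np.nan]])))
# {'exposure_ms': 30.0, 'flash': False, 'flash_intensity': 0.0}
```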
  • Table 1 below shows a variety of configurations of diffractive optical elements that may be combined to provide various advantages when generating a depth map.
  • process 1400 may utilize any of the exemplary configurations shown below to generate a depth map.
  • Process 1400 may also utilize any of the exemplary configurations shown above in any one of figures 4A-C, 5A-C, and 7-12.
  • corresponding blank cells for the first and second structured light patterns may be equivalent across a single configuration.
  • "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like. Further, a "channel width" as used herein may encompass, or may also be referred to as, a bandwidth in certain aspects.
  • As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
  • the functions described herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device (PLD).
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
  • the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave
  • the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • computer readable medium may comprise non-transitory computer readable medium (e.g., tangible media).
  • computer readable medium may comprise transitory computer readable medium (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
  • the methods disclosed herein comprise one or more steps or actions for achieving the described method.
  • the method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
  • the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
  • a storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • certain aspects may comprise a computer program product for performing the operations presented herein.
  • a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein.
  • the computer program product may include packaging material.
  • modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable.
  • a user terminal and/or base station can be coupled to a server to facilitate the transfer of means for performing the methods described herein.
  • various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device.
  • any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)
PCT/US2017/058982 2016-12-15 2017-10-30 Systems and methods for improved depth sensing Ceased WO2018111423A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
BR112019011021A BR112019011021A2 (pt) 2016-12-15 2017-10-30 sistemas e métodos para melhorar o sensor de profundidade
KR1020197016374A KR20190096992A (ko) 2016-12-15 2017-10-30 개선된 심도 감지를 위한 시스템들 및 방법들
CN201780071795.7A CN109983506A (zh) 2016-12-15 2017-10-30 用于改进的深度感测的系统和方法
EP17804996.1A EP3555855B1 (en) 2016-12-15 2017-10-30 Systems and methods for improved depth sensing
JP2019531636A JP6866484B2 (ja) 2016-12-15 2017-10-30 向上した深度感知のためのシステムおよび方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/380,745 2016-12-15
US15/380,745 US10771768B2 (en) 2016-12-15 2016-12-15 Systems and methods for improved depth sensing

Publications (1)

Publication Number Publication Date
WO2018111423A1 (en) 2018-06-21

Family

ID=60480378

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/058982 Ceased WO2018111423A1 (en) 2016-12-15 2017-10-30 Systems and methods for improved depth sensing

Country Status (7)

Country Link
US (1) US10771768B2 (en)
EP (1) EP3555855B1 (en)
JP (1) JP6866484B2 (en)
KR (1) KR20190096992A (en)
CN (1) CN109983506A (en)
BR (1) BR112019011021A2 (en)
WO (1) WO2018111423A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020231957A1 (en) * 2019-05-13 2020-11-19 Lumileds Llc Depth sensing using line pattern generators
US11592726B2 (en) 2018-06-18 2023-02-28 Lumileds Llc Lighting device comprising LED and grating

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI672677B (zh) * 2017-03-31 2019-09-21 鈺立微電子股份有限公司 用以融合多深度圖的深度圖產生裝置
US10735711B2 (en) * 2017-05-05 2020-08-04 Motorola Mobility Llc Creating a three-dimensional image via a wide-angle camera sensor
US10595007B2 (en) * 2018-03-21 2020-03-17 Himax Imaging Limited Structured-light method and system of dynamically generating a depth map
WO2020057365A1 (en) * 2018-09-18 2020-03-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, system, and computer-readable medium for generating spoofed structured light illuminated face
CN109974611B (zh) * 2019-03-23 2023-07-21 柳州阜民科技有限公司 深度检测系统及其支架和电子装置
US11563873B2 (en) * 2020-04-14 2023-01-24 Qualcomm Incorporated Wide-angle 3D sensing
US11353389B2 (en) * 2020-09-25 2022-06-07 Applied Materials, Inc. Method and apparatus for detection of particle size in a fluid
CN112437285A (zh) * 2020-11-24 2021-03-02 深圳博升光电科技有限公司 一种三维成像装置、方法及电子设备
CN114543749B (zh) * 2022-03-17 2022-07-29 苏州英示测量科技有限公司 测量多目标景深的光学系统及方法
CN120021926A (zh) * 2023-11-21 2025-05-23 苏州佳世达光电有限公司 照明装置

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022697A1 (en) * 2013-07-16 2015-01-22 Texas Instruments Incorporated Projector Auto-Focus Correction with the Aid of a Camera
US20160267671A1 (en) * 2015-03-12 2016-09-15 Qualcomm Incorporated Active sensing spatial resolution improvement through multiple receivers and code reuse

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0382909A (ja) * 1989-08-25 1991-04-08 Honda Motor Co Ltd 光学式反射体検出装置
US6850872B1 (en) 2000-08-30 2005-02-01 Microsoft Corporation Facial image processing methods and systems
JP3778085B2 (ja) * 2001-12-28 2006-05-24 松下電工株式会社 反射型光電センサ
GB0921461D0 (en) * 2009-12-08 2010-01-20 Qinetiq Ltd Range based sensing
US9696427B2 (en) * 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US11509880B2 (en) * 2012-11-14 2022-11-22 Qualcomm Incorporated Dynamic adjustment of light source power in structured light active depth sensing systems
US20140267701A1 (en) * 2013-03-12 2014-09-18 Ziv Aviv Apparatus and techniques for determining object depth in images
US8872818B2 (en) 2013-03-15 2014-10-28 State Farm Mutual Automobile Insurance Company Methods and systems for capturing the condition of a physical structure
CN105143820B (zh) 2013-03-15 2017-06-09 苹果公司 利用多个发射器进行深度扫描
SG10201710025WA (en) 2013-06-06 2018-01-30 Heptagon Micro Optics Pte Ltd Sensor system with active illumination
JP6075644B2 (ja) * 2014-01-14 2017-02-08 ソニー株式会社 情報処理装置および方法
US9589359B2 (en) 2014-04-24 2017-03-07 Intel Corporation Structured stereo
US9696424B2 (en) * 2014-05-19 2017-07-04 Rockwell Automation Technologies, Inc. Optical area monitoring with spot matrix illumination
US10785393B2 (en) * 2015-05-22 2020-09-22 Facebook, Inc. Methods and devices for selective flash illumination
IL239919A (en) * 2015-07-14 2016-11-30 Brightway Vision Ltd Branded template lighting
JP6805904B2 (ja) * 2017-03-08 2020-12-23 株式会社リコー 計測装置、計測方法およびロボット

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022697A1 (en) * 2013-07-16 2015-01-22 Texas Instruments Incorporated Projector Auto-Focus Correction with the Aid of a Camera
US20160267671A1 (en) * 2015-03-12 2016-09-15 Qualcomm Incorporated Active sensing spatial resolution improvement through multiple receivers and code reuse

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"LECTURE NOTES IN COMPUTER SCIENCE", vol. 8689, 1 January 2014, SPRINGER BERLIN HEIDELBERG, Berlin, Heidelberg, ISBN: 978-3-54-045234-8, ISSN: 0302-9743, article SUPREETH ACHAR ET AL: "Multi Focus Structured Light for Recovering Scene Shape and Global Illumination", pages: 205 - 219, XP055193120, DOI: 10.1007/978-3-319-10590-1_14 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11592726B2 (en) 2018-06-18 2023-02-28 Lumileds Llc Lighting device comprising LED and grating
WO2020231957A1 (en) * 2019-05-13 2020-11-19 Lumileds Llc Depth sensing using line pattern generators
US11150088B2 (en) 2019-05-13 2021-10-19 Lumileds Llc Depth sensing using line pattern generators
US11835362B2 (en) 2019-05-13 2023-12-05 Lumileds Llc Depth sensing using line pattern generators

Also Published As

Publication number Publication date
KR20190096992A (ko) 2019-08-20
JP2020502506A (ja) 2020-01-23
US20180176542A1 (en) 2018-06-21
CN109983506A (zh) 2019-07-05
US10771768B2 (en) 2020-09-08
EP3555855B1 (en) 2023-12-20
JP6866484B2 (ja) 2021-04-28
EP3555855A1 (en) 2019-10-23
BR112019011021A2 (pt) 2019-10-08

Similar Documents

Publication Publication Date Title
EP3555855B1 (en) Systems and methods for improved depth sensing
EP3344949B1 (en) Code domain power control for structured light
KR101950658B1 (ko) 구조형 광 깊이 맵들의 아웃라이어 검출 및 정정을 위한 방법들 및 장치들
US9530215B2 (en) Systems and methods for enhanced depth map retrieval for moving objects using active sensing technology
US9948920B2 (en) Systems and methods for error correction in structured light
EP3513552B1 (en) Systems and methods for improved depth sensing
EP3335192B1 (en) Memory-efficient coded light error correction
WO2013176808A1 (en) Design of code in affine-invariant spatial mask
EP3485460B1 (en) Object reconstruction in disparity maps using displaced shadow outlines
EP3158504B1 (en) Coded light pattern having hermitian symmetry
EP3268932B1 (en) Active sensing spatial resolution improvement through multiple receivers and code reuse
HK40004010A (en) Systems and methods for improved depth sensing

Legal Events

Date Code Title Description
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17804996; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20197016374; Country of ref document: KR; Kind code of ref document: A)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112019011021; Country of ref document: BR)
ENP Entry into the national phase (Ref document number: 2019531636; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2017804996; Country of ref document: EP; Effective date: 20190715)
ENP Entry into the national phase (Ref document number: 112019011021; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20190529)