WO2021032298A1 - High resolution optical depth scanner - Google Patents

High resolution optical depth scanner

Info

Publication number
WO2021032298A1
Authority
WO
WIPO (PCT)
Prior art keywords
structured light
interest
light beams
projecting
deflecting means
Application number
PCT/EP2019/072364
Other languages
French (fr)
Inventor
Marko Eromaki
Radu Ciprian Bilcu
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to PCT/EP2019/072364 priority Critical patent/WO2021032298A1/en
Publication of WO2021032298A1 publication Critical patent/WO2021032298A1/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2531 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object using several gratings, projected with variable angle of incidence on the object, and one detection device
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/03 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
    • G02B 26/10 - Scanning systems
    • G02B 26/105 - Scanning systems with one or more pivoting mirrors or galvano-mirrors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/204 - Image signal generators using stereoscopic image cameras
    • H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Definitions

  • the disclosure relates generally to optical depth scanners, in particular to systems and methods for structured light depth sensing for capturing high resolution 3D point clouds.
  • Depth sensing systems on mobile handsets are becoming common imaging features used to enhance camera operations (for instance laser-based focusing), for 3D scene mapping where a 3D model of the surrounding scene is built, in AR/VR applications, and in various scene-object analysis solutions. The trend is moving from current single-point/single-beam detection systems towards full-view scanning solutions with higher accuracy, which enable the implementation of more complex applications.
  • Depth sensing solutions can be classified into two main categories: the so-called LiDAR (Light Detection and Ranging) systems that use laser beams to scan the scene, and camera modules which capture the depth of the entire field-of-view (FoV) at once.
  • LiDAR: Light Detection and Ranging
  • FoV: field-of-view
  • For each scanning position the distance is calculated from the time interval between the moment the laser beam is generated and the moment its reflection is received. By rotating the laser generator, or deviating the laser beam, the depth of the entire surrounding scene is captured.
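The round-trip calculation just described is simple enough to sketch directly (Python is used here purely for illustration; the function name is ours, not the patent's):

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance to a scene point from the round-trip travel time of a
    laser pulse; the beam travels out and back, hence the division by 2."""
    return C * round_trip_time_s / 2.0
```

A reflection received about 66.7 ns after emission therefore corresponds to a scene point roughly 10 m away.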
  • Some LiDAR systems are projecting single beams and use rotating mirrors or prisms to scan the entire FoV.
  • The second class of depth sensing systems can be split into two sub-classes.
  • The first sub-class contains the so-called time-of-flight (ToF) sensors, where, instead of a laser beam, an infrared (IR) wave is projected onto the scene.
  • the reflections from the scene are captured by an IR sensor which measures the time the light traveled from the IR generator to the IR sensor, for each pixel.
  • The ToF sensors can read the time delay between the emitted and reflected moments either by means of digital counters or by measuring the phase shift between the emitted and reflected light wave.
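The phase-shift readout admits a similarly small sketch (the modulation frequency used in the example is an illustrative assumption, not a value from the patent):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_phase_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift between the emitted and reflected
    modulated IR wave; the shift accumulates over the out-and-back path,
    hence the factor 4*pi rather than 2*pi."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

At a 20 MHz modulation frequency a phase shift of π/2 maps to about 1.87 m; depths beyond the unambiguous range C / (2 · f_mod) wrap around, which is one practical limit of this sub-class.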
  • The second sub-class contains the so-called structured light sensors. These modules are not based on the time-of-flight principle, but calculate the depth based on the position, on an IR sensor, of a certain IR pattern when reflected by the scene.
  • the various structured light solutions differ mainly in the type of dot pattern which they project onto the scene.
  • LiDAR systems generally deflect a laser beam to scan an entire scene, meaning that the system contains moving parts, usually rotating mirrors.
  • These LiDAR systems cannot achieve a high-density point cloud and a long sensing range in real time; in fact, it usually takes a few minutes to scan a scene at high resolution and long range.
  • ToF camera modules are capable of capturing the entire field-of-view at once, but suffer from limited resolution (these ToF sensors usually have less than 1 Mpixel resolution) and are also sensitive to the multipath problem (a form of interference caused by reflected waves reaching the sensor from two or more paths).
  • Structured light systems are able to eliminate some of the main disadvantages of LiDAR systems and ToF modules: they can capture tens of thousands of 3D points at once with high accuracy, and do not suffer from the multipath problem.
  • structured light systems do not require specially designed circuitry, such as ultrafast switching electronics.
  • For more demanding applications, however, these structured light systems are not suitable, because the required depth scanning resolution (number of 3D points captured) of those applications is much higher.
  • a depth sensing system comprising: a structured light source comprising emitting means configured to emit structured light beams, projecting means configured to project the structured light beams onto an object of interest; and a sensor configured to repeatedly capture reflections of the structured light beams reflected from the object of interest; wherein the projecting means comprises deflecting means configured to change the projecting direction of the structured light beams with respect to the emitting means to offset the projected structured light beams relative to the object of interest between each consecutive capture of the sensor.
  • the sensor performing successive captures to capture a series of reflections of the structured light beams enables increasing the number of 3D points for a given scenery, while ensuring multipath immunity and a fast capturing capability.
  • This high resolution can be achieved by introducing the deflecting means, which enable slightly offsetting the projection on the scenery for each successive capture by the sensor. Offsetting the point cloud this way optically multiplies the number of dots and thus increases the resolution without the structured light source itself needing to be moved between successive captures.
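The multiplication effect can be illustrated with a toy numerical simulation (the grid size and offset values below are made up for the example):

```python
def cumulative_dot_positions(base_dots, offsets):
    """Union of projected dot positions over several captures, each
    capture shifting the whole pattern by one (dx, dy) offset."""
    seen = set()
    for dx, dy in offsets:
        for x, y in base_dots:
            seen.add((round(x + dx, 6), round(y + dy, 6)))
    return seen

# A 100-dot pattern captured at 4 tilt positions yields 400 distinct samples.
grid = [(float(i), float(j)) for i in range(10) for j in range(10)]
tilts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
```

Each additional non-overlapping offset multiplies the number of sampled positions by one more pattern's worth of dots, which is exactly the resolution gain claimed above.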
  • The proposed system and the itemized properties result in a compact architectural element, suitable for product integration, that produces a scenery scanning solution with high accuracy and a long measuring distance.
  • The sensor is implemented as a sensing array comprising multiple sensors, thus further increasing resolution and accuracy, and enabling enhanced capture dimensions.
  • The emitting means comprise a laser generator configured to generate a laser beam, the laser beam following an optical path between the laser generator and the object of interest; and the projecting means comprises a beam splitter positioned along the optical path configured to split the laser beam into a plurality of diverging laser beams, thus allowing for higher resolution depth sensing with increased precision due to the use of laser beams.
  • the beam splitter is positioned between the laser generator and the deflecting means along the optical path, thus allowing for a more compact design of the structured light source and more protection from outside forces for the beam splitter component.
  • The beam splitter is positioned between the deflecting means and the object of interest along the optical path, allowing for a more precise pattern definition to be projected and offset between each capture, since the deflecting means only deflect one beam, which then arrives at the beam splitter to be split into a plurality of beams.
  • the beam splitter comprises a Diffractive Optical Element configured to generate a structured light pattern to be projected onto said object of interest, while still retaining the optical characteristics of the input beam, such as size, polarization and phase, as well as allowing for a wide field-of-view (around 80 degrees diagonally).
  • The structured light source further comprises a corrective optical element, positioned between the laser generator and the beam splitter along the optical path, configured to collimate the laser beam, thus enabling the laser beams to be aligned in a specific direction (i.e., producing collimated light or parallel rays), or causing the spatial cross-section of the laser beam to become smaller for increased precision.
  • The deflecting means may comprise a reflective element, such as a mirror, which enables reusing existing optical image stabilization elements (e.g. a tilt mirror) used in so-called "folded cameras", whose optical axis is first aligned horizontally and later angled by 90 degrees, thus saving the time and cost of designing a new element.
  • Alternatively, the deflecting means may comprise a refractive element, such as a prism, which likewise enables reusing existing optical image stabilization elements (e.g. a tilt prism) from such folded cameras.
  • the deflecting means are configured in their initial position to deflect the path of a structured light beam from the structured light source at a substantially right angle, thus allowing for a compact design that is especially advantageous for mobile devices.
  • the deflecting means are configured to change the projecting direction of the structured light beams from the initial position by means of a tilting mechanism configured to adjust the angle of the deflecting means relative to at least one tilt axis, the at least one tilt axis being fixed relative to the emitting means, thus allowing for small, precise movement of the deflecting means and a compact design.
  • the structured light source is configured to project infrared light beams and the sensor is an optical sensor that is only sensitive to the IR light spectrum, thus allowing for increased measuring accuracy by reducing the effect of noise in the captured images from other frequencies.
  • a depth sensing method comprising: projecting structured light beams from a structured light source onto an object of interest; capturing reflections of the structured light beams reflected from the object of interest by means of a sensor; changing the projecting direction of the structured light beams with respect to the object of interest; and repeating the above cycle at least once, whereby the projecting direction of the structured light beams is changed with respect to the object of interest by means of deflecting means after each capture, so that the structured light beams projected onto the object of interest are offset in each new cycle with respect to the previous cycle.
  • projecting the structured light beams comprises projecting, onto the object of interest, a structured light pattern formed by the structured light beams; capturing the reflections comprises capturing, by means of an optical sensor, a reflected pattern comprising structured light beams reflected from the object of interest; and the method further comprises: calculating, using triangulation, a 3D point cloud comprising at least one 3D point, wherein each 3D point represents the distance of a point of the object of interest from the structured light source, calculated from the position of the reflected light beam on the optical sensor, using a known projecting angle and a known baseline distance between the optical sensor and the structured light source; wherein a new 3D point cloud is calculated after each cycle.
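As a sketch of the triangulation in the last step, under an assumed geometry (camera offset by the baseline B from the source along the sensor's x axis; the patent's Fig. 3 fixes the actual sign conventions):

```python
import math

def depth_from_spot(x_m: float, f_m: float, baseline_m: float,
                    theta_rad: float) -> float:
    """Depth z of a scene point from the spot position x on the sensor,
    focal length f, baseline B and projecting angle theta."""
    return f_m * baseline_m / (f_m * math.tan(theta_rad) - x_m)

def spot_from_depth(z_m: float, f_m: float, baseline_m: float,
                    theta_rad: float) -> float:
    """Inverse mapping: where a point at depth z lands on the sensor."""
    return f_m * math.tan(theta_rad) - f_m * baseline_m / z_m
```

The two mappings are exact inverses of each other, which makes the pair convenient for sanity-checking a given calibration (f, B, θ).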
  • Performing the above sequence of steps enables capturing more 3D points with a structured light-based system compared to the non-scanning structured light solutions, while still retaining robustness to multipath and enabling fast depth sensing.
  • the use of an optical sensor ensures that production costs would also remain reasonable, since either a standard optical sensor can be used, or an optical sensor that needs very little customization.
  • the different 3D point clouds calculated after each cycle are combined into a cumulative 3D point cloud, the cumulative 3D point cloud having a larger data point resolution than each of the different 3D point clouds. This ensures an even higher data resolution as well as a compact data format for saving or forwarding for further interpretation (such as 3D visualization purposes).
  • the repeating of the cycles stops when the data point resolution of the cumulative 3D point cloud reaches a predetermined minimum threshold, thus ensuring optimal running time and effective use of resources, while still ensuring that the required resolution is achieved.
  • the deflecting means comprise a tilting mechanism for changing the projecting direction of the structured light beams by adjusting the angle of the deflecting means with respect to the structured light source between predefined angles.
  • the repeating of the cycles stops when at least one cycle according to each of the predefined angles has completed, thus enabling optimal running time and effective use of resources, while still ensuring that the required resolution is achieved.
  • FIG. 1 shows a schematic top view of a system in accordance with the prior art
  • Fig. 2 shows a schematic top view of a system in accordance with one embodiment of the first aspect
  • Fig. 3 illustrates the triangulation method used for the calculations in accordance with embodiments of the first and second aspects
  • Fig. 4 shows a schematic cross-section of a system in accordance with another embodiment of the first aspect
  • Fig. 5A and 5B show schematic cross-sections of a structured light source in accordance with alternative embodiments of the first aspect
  • Fig. 6A and 6B show schematic illustrations of tilting mechanisms for deflecting means in accordance with further alternative embodiments of the first aspect.
  • Fig. 7 shows a logic flow diagram illustrating steps of a method in accordance with possible embodiments of the second aspect.
  • Fig. 1 illustrates a typical prior art structured light depth sensing assembly, wherein a laser generator is utilized to generate a laser beam, which is then split into several beams (a structured light pattern) by means of a Diffractive Optical Element (DOE). These laser beams are reflected by the object of a scene and the reflected pattern is captured by a sensor. The depth can be calculated by triangulation based on the position of the reflected beams on the sensor, knowing the baseline distance between the laser generator and the sensor, as will be described in detail below with respect to Fig. 3.
  • DOE: Diffractive Optical Element
  • FIG. 2 shows a schematic top view of a system according to the present disclosure.
  • a structured light source 1 is shown comprising emitting means 2 for emitting structured light beams 4 and projecting means 3 for projecting the structured light beams 4 onto an object of interest 6.
  • the system also comprises a sensor 7 for repeatedly capturing reflections of the structured light beams 4 reflected back from the object of interest 6.
  • the projecting means 3 further comprises deflecting means 5 which allow the changing of the projecting direction of the structured light beams 4 with respect to the emitting means 2, which offsets the projected structured light beams 4 relative to the object of interest 6 between each consecutive capture of the sensor 7. This way, after each direction change, a different reflection is captured of the object 6, thereby increasing the resolution of the resulting depth image with each capture.
  • Fig. 3 illustrates the triangulation method used for the calculation of scene depth using a structured light depth scanning system according to the present disclosure.
  • One beam is shown, which is emitted in the same plane as the central axis of the structured light source 1, preferably via a beam splitter 8 at the end of the structured light source 1, at a horizontal angle θ measured from the horizontal main axis of the structured light source 1.
  • This beam is then reflected by the measured scene onto a sensor 7.
  • the place where the reflected beam hits the sensor 7 is depicted with a dot representing a 3D point 18. If the emitter, beam splitter 8 and the sensor 7 are perfectly aligned mechanically, the 3D point 18 will be situated on the horizontal axis of the sensor 7.
  • the horizontal coordinate of the 3D point 18, calculated from the center of the sensor 7, is denoted by x in Fig. 3.
  • This particular laser beam is selected only for simplicity of explanation, and the derived formulas are similar for all other emitted and reflected beams.
  • Denoting the horizontal coordinate of the reflected beam on the sensor 7 by x, the focal length by f, and the baseline (the distance between the optical center of the camera module comprising the sensor 7 and the center of the structured light source 1) by B, the depth z of a scene point reflected by the laser beam can be calculated as:

    z = f · B / (f · tan(θ) - x)
  • each laser beam will produce, on the sensor 7, one segment when the scene point moves away or closer to the camera.
  • This segment is called the epipolar segment of the laser beam, wherein the length of each epipolar segment depends on the scene depth range.
  • The epipolar segments are not allowed to intersect each other. As a consequence, the requirement to have long, non-intersecting epipolar segments limits the number of laser beams that can be projected onto the scene at once.
  • One solution to increase the total number of 3D points is to perform several captures in which the position of the laser beams differs slightly from one shot to the next. This could be accomplished by slightly rotating or moving the structured light source 1; however, users would not appreciate this, since it implies moving the device, and it would also not work if the camera is fixed on a tripod (a common use case for depth sensing). The inventors therefore arrived at the better solution of keeping the structured light source 1 fixed and rotating or moving the orientation of the laser beams from one shot to the next, which solves the issues above without the device having to be moved, and also works when the device containing the depth sensing system is mounted on a fixed tripod.
  • Fig. 4 shows a schematic cross-section of a system according to the present disclosure, wherein the emitting means 2 comprises a laser generator 2A (e.g. VCSEL) for generating a laser beam 4A, the laser beam 4A following an optical path between the laser generator 2A and the object of interest 6 (shown as a dotted line), deflected by deflecting means 5 as described above.
  • the components of the structured light source 1 are arranged so that the deflecting means 5, when in their initial position, can deflect the path of the laser beam 4A from the structured light source 1 at a substantially right angle towards the object of interest 6.
  • the projecting means 3 may comprise a beam splitter 8 positioned along the optical path for splitting the laser beam 4A into a plurality of diverging laser beams 4A.
  • The beam splitter 8 is configured to generate a structured light pattern 10 to be projected onto the object of interest 6.
  • As illustrated in Fig. 4, the deflecting means 5 are rotated or moved to change the projecting direction and thus project a second structured light pattern 10B, slightly offset compared to the first structured light pattern 10A, onto the object of interest 6.
  • the beam splitter 8 comprises a Diffractive Optical Element (DOE, also known as diffractive beam splitter, multispot beam generator or array beam generator) 8A as shown in Fig. 5A, for generating this structured light pattern 10.
  • DOE is a single optical element that divides an input beam into N output beams, wherein each output beam retains the same optical characteristics as the input beam, such as size, polarization and phase.
  • A DOE can generate either a 1-dimensional beam array (1 x N) or a 2-dimensional beam matrix (M x N), depending on the diffractive pattern on the element.
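The M x N output geometry can be modelled as a simple grid of beam directions (the uniform angular separation is the idealization described above; the function name is ours):

```python
def doe_beam_directions(m: int, n: int, sep_deg: float):
    """Output-beam directions (in degrees) of an M x N diffractive beam
    splitter, centered on the input beam axis, with a fixed angular
    separation between adjacent output beams."""
    return [((i - (m - 1) / 2.0) * sep_deg, (j - (n - 1) / 2.0) * sep_deg)
            for i in range(m) for j in range(n)]
```

For instance, doe_beam_directions(1, 5, 1.0) models a 1 x 5 line pattern and doe_beam_directions(3, 3, 1.0) a 3 x 3 matrix centered on the optical axis.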
  • the DOE is generally used, as in this case, with monochromatic light such as a laser beam, and is designed for a specific wavelength and angle of separation between output beams.
  • the projecting means 3 further comprises a corrective optical element 9, positioned between the laser generator 2A and the beam splitter 8 along the optical path, preferably before the deflecting means 5, for collimating the laser beam 4A.
  • the optical element 9 is a collimator lens that is designed to narrow a laser beam 4A.
  • the collimator may consist of a curved mirror or lens.
  • the system further comprises a base substrate 17 (e.g. printed circuit board or flexible board), and a housing 16 that surrounds the components of the structured light source 1.
  • the system further comprises a sensing device, such as an IR camera 7B shown in Fig. 4, to capture the reflected beams from the object of interest 6 and measure the position of the reflected beams on its optical sensor 7A for computational purposes, as described above.
  • Fig. 5A and 5B show schematic cross-sections of a structured light source illustrating two possible implementations of the system according to the present disclosure.
  • features that are the same or similar to corresponding features previously described or shown herein are denoted by the same reference numeral as previously used for simplicity.
  • The beam splitter 8 is positioned after the deflecting means 5 and before the object of interest 6, allowing for a more precise pattern definition to be projected and offset between each capture, since the deflecting means 5 only deflect one beam, which then arrives at the beam splitter 8 to be split into a plurality of beams 4 or a structured light pattern 10.
  • the beam splitter 8 is positioned between the laser generator 2A and the deflecting means 5, thus allowing for a more compact design of the structured light source 1 and more protection from outside forces for the beam splitter 8.
  • The deflecting means 5 are used only to direct the beam 4 onto the required parts of the scene to be measured. Either the deflecting means 5 direct a single beam 4, which the beam splitter 8 then splits into many (as shown in Fig. 5A), or they direct the multitude of beams 4 already generated by the beam splitter 8 (as shown in Fig. 5B). Thus, the dot pattern 10 is generated by the beam splitter 8 (or DOE element 8A); the deflecting means 5 do not generate any point cloud, being only a reflective or refractive element in the optical system.
  • Fig. 6A and 6B show schematic illustrations of tilting mechanisms for deflecting means illustrating two possible implementations of the system according to the present disclosure. In these implementations, features that are the same or similar to corresponding features previously described or shown herein are denoted by the same reference numeral as previously used for simplicity.
  • the deflecting means 5 are configured to change the projecting direction of the structured light beams 4 from the initial position by means of a tilting mechanism 12 configured to adjust the angle of the deflecting means 5 relative to at least one tilt axis 13, the at least one tilt axis 13 being fixed relative to the emitting means 2.
  • the tilting mechanism 12 can be driven by an actuator 14 connected to a driver 15.
  • The actuator 14 can comprise e.g. electromagnetic, shape memory alloy, piezo, electrostatic, magneto-strictive, MEMS or electroactive polymer-based actuation.
  • The required amount of optical tilting can be limited to ±0.5 to ±1.0 degrees along either tilt axis.
  • the behavior of the tilting mechanism 12 can be also switching type, i.e. bi-stable, instead of continuous.
  • the deflecting means 5 comprise a refractive element 5B, such as a prism, configured to deflect, in its original position, the structured light beams 4 coming from the emitting means 2 at a substantially right angle.
  • the deflecting means 5 comprise a reflective element 5A, such as a mirror.
  • The deflecting means 5 comprise a tilting mechanism 12 for changing the projecting direction of the structured light beams 4 by adjusting the angle of the deflecting means 5 with respect to the structured light source 1 (or the emitting means 2) between predefined mechanical angles, thereby resulting in different optical deflecting angles, depicted in Fig. 6B as α1, α2, α3, wherein each reflected light beam has an optical deflection angle of twice the mechanical angle.
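The mechanical-to-optical relation for a tilting mirror is worth stating explicitly (a trivial helper; the name is ours):

```python
def optical_deflection_deg(mechanical_tilt_deg: float) -> float:
    """A mirror tilted by a mechanical angle rotates the reflected beam
    by twice that angle (angle of incidence and reflection both change)."""
    return 2.0 * mechanical_tilt_deg
```

So a ±0.5 degree mechanical tilt range corresponds to ±1.0 degree of optical deflection of the projected pattern.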
  • The predefined mechanical angles may be set within a tilt operating range of e.g. ±0.5 degrees at a step resolution of 0.01 degrees (as the smallest drivable step).
  • the rated tilt angle may in further embodiments be selected by a user to adjust between super fine, basic or coarse scanning in relation to the scanning distance.
  • The predefined mechanical angles may be set at fixed values of ±0.1, ±0.3, or ±0.5 degrees.
  • The proposed tilt mirror 5A (or tilt prism 5B) unit is identical to the one used in cameras with an angled (e.g. by 90 degrees) optical axis arrangement (e.g. long telecentric types) for image stabilization, and can be re-used.
  • The production costs would also remain reasonable, as other system parts are either standard ones or need only very low-level customization.
  • Fig. 7 shows a logic flow diagram illustrating steps of a method according to the present disclosure.
  • In a first step 101 of the depicted capture cycle, structured light beams 4 (or laser beams 4A) are projected from a structured light source 1 onto an object of interest 6. This projection can happen via any implementation of the structured light source 1 as described above with respect to Figs. 2 to 6.
  • the step of projecting 101 the structured light beams 4 (or laser beams 4A) comprises projecting, onto the object of interest 6, a structured light pattern 10 formed by the structured light beams 4 as described above.
  • In a next step 102, reflections of the structured light beams 4 that are reflected from the object of interest 6 are captured by means of a sensor 7.
  • the step of capturing 102 the reflections comprises capturing, by means of an optical sensor 7A (integrated in e.g. an IR camera 7B shown on Fig. 4), a reflected pattern 11 comprising structured light beams 4 reflected from the object of interest 6.
  • A 3D point cloud 19 comprising at least one 3D point 18 is calculated using a triangulation method as described above with respect to Fig. 3, wherein each 3D point 18 represents the distance of a point of the object of interest 6 from the structured light source 1, calculated from the position x of the reflected light beam on the optical sensor 7A, using a known projecting angle θ and a known baseline distance B between the optical sensor 7A and the structured light source 1.
  • a new 3D point cloud 19 is calculated after each cycle.
  • The 3D point clouds 19 (that are calculated after each cycle as described above) are combined into a cumulative 3D point cloud 20, and the cumulative 3D point cloud 20 is saved in a local or remote storage device, or is forwarded to a predetermined destination via a data connection.
  • the cumulative 3D point cloud 20 has a larger data point resolution than each of the different 3D point clouds 19.
  • a next step 105 the projecting direction of the structured light beams 4 is changed with respect to the object of interest 6 by means of deflecting means 5 as described above with respect to Figs. 6A-6B (e.g. the mirror 5A is tilted to its next position as shown in Fig. 6B).
  • the above cycle is repeated at least once, whereby the projecting direction of the structured light beams 4 (or laser beams 4A) is changed with respect to the object of interest 6 by means of deflecting means 5 after each capture, so that the structured light beams 4 projected onto the object of interest 6 are offset in each new cycle with respect to the previous cycle.
  • the repeating of the cycles stops 106A when the data point resolution of the cumulative 3D point cloud 20 reaches a predetermined minimum threshold (i.e. when enough 3D points are calculated).
  • the threshold of minimum number of 3D points depends on the further applications of the 3D point cloud, but in some exemplary cases it can be determined at 50,000 points.
  • a number of N laser beams 4A are shot onto the scene (or object 6 of interest within the scene) and their reflections are captured by the IR camera 7B.
  • the reflections are detected, in the image captured from the sensor 7A, and the 3D coordinates of each of the N reflection points (3D points 18) are calculated.
  • the direction of each laser beam 4A is changed and again the N laser beams 4A are shot. This time they have different orientations.
  • the new reflections are captured and the scene 3D coordinates for the new laser positions are calculated. This step can repeat until a total desired number of 3D coordinates have been captured.
  • the repeating of the cycles stops 106B when at least one cycle according to each of the predefined angles, as described above with respect to Fig. 6B, has completed. Otherwise, in case there are any predefined angles left, the deflecting means 5 are moved (tilted) to the next predefined angle, and the cycle is repeated.
  • the sequence of mirror tilting, image capturing, and 3D point calculation can repeat for a fixed number of predefined positions of the mirror 5A.
  • the process can be terminated before the mirror 5A visited all the predefined positions.
  • the early stop can be for instance due to the fact that the necessary number of 3D points 18 was already achieved, as described above.
  • the mirror 5A can be driven so that some intermediate positions are not taken, for instance the mirror 5A is moved over several predefined positions, or the mirror 5A takes only the odd indexed predefined positions.
  • the tilt angles of each laser beam 4A for each mirror position must be known accurately and this measurement must be part of the device calibration. However, no special calibration is necessary and existing, known methods can be used.
  • A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
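The capture-repeat logic described in the items above (steps 101 to 106B) can be summarized, purely for illustration, in the following Python sketch; the helper callables (tilt, project, capture, triangulate) and the 50,000-point threshold are assumptions standing in for the hardware operations, not part of the disclosure:

```python
# Illustrative sketch of the scanning cycle of Fig. 7; the helper callables
# are hypothetical stand-ins for the hardware operations described above.
MIN_POINTS = 50_000  # example threshold mentioned in the description

def scan(mirror_angles, project, capture, triangulate, tilt):
    """Accumulate a cumulative 3D point cloud over successive mirror positions."""
    cumulative = []
    for angle in mirror_angles:      # predefined positions of the mirror 5A
        tilt(angle)                  # step 105: change the projecting direction
        project()                    # step 101: shoot the N laser beams
        reflections = capture()      # step 102: capture reflections on sensor 7
        cumulative.extend(triangulate(reflections))  # step 103: new 3D points
        if len(cumulative) >= MIN_POINTS:
            break                    # early stop 106A: enough 3D points
    return cumulative                # stop 106B: all predefined angles visited
```

As the items above note, the loop may also skip intermediate mirror positions or stop early once the required resolution is reached.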


Abstract

A system and method of structured light depth sensing for capturing high resolution 3D point clouds by introducing a tilting deflecting element (5) inside the structured light source (1) between a laser generator (2A) and a beam splitter (8), thus slightly offsetting the projected structured light patterns (10A,10B) on the scenery between consecutive captures of an optical sensor (7A) of an IR camera (7B). The projection of slightly offset structured light patterns (10A,10B) optically multiplies the number of dots to be detected and thus increases the calculated depth resolution while maintaining multipath immunity and ensuring fast capturing capability.

Description

HIGH RESOLUTION OPTICAL DEPTH SCANNER
The disclosure relates generally to optical depth scanners, in particular to systems and methods for structured light depth sensing for capturing high resolution 3D point clouds.
BACKGROUND
Depth sensing systems on mobile handsets are becoming common imaging features to enhance camera operations (for instance laser-based focusing), for 3D scene mapping where a 3D model of the surrounding scene is built, for AR/VR applications, and for various scene object analysis solutions. From the current point/single-beam detection systems, the trend is moving towards full-view scanning solutions with higher accuracy, which enable the implementation of more complex applications.
Depth sensing solutions can be classified into two main categories: the so-called LiDAR (Light Detection and Ranging) systems that use laser beams to scan the scene, and camera modules which capture the depth of the entire field-of-view (FoV) at once. In the case of LiDAR systems, for each scanning position the distance is calculated from the time interval between the moment the laser beam is generated and the moment its reflection is received. By rotating the laser generator, or deviating the laser beam, the depth of the entire surrounding scene is captured. Some LiDAR systems project single beams and use rotating mirrors or prisms to scan the entire FoV. The second class of depth sensing systems can be split into two sub-classes. The first one contains the so-called time-of-flight (ToF) sensors where, instead of a laser beam, an infrared (IR) wave is projected onto the scene. The reflections from the scene are captured by an IR sensor which measures, for each pixel, the time the light traveled from the IR generator to the IR sensor. The ToF sensors can read the time delay between the emitted and reflected moments either by means of digital counters or by measuring the phase shift between the emitted and reflected light wave. The second sub-class contains the so-called structured light sensors. These modules are not based on the time-of-flight principle, but calculate the depth based on the position, on an IR sensor, of a certain IR pattern when reflected by the scene. The various structured light solutions differ mainly in the type of dot pattern which they project onto the scene.
These different depth sensing methods have different advantages and limitations which make them more or less suitable for certain applications. For instance, LiDAR systems generally deflect a laser beam to scan an entire scene, meaning that the system contains moving parts, usually rotating mirrors. Moreover, these LiDAR systems cannot accomplish a high-density point cloud and a long sensing range in real time - in fact, it usually takes a few minutes to scan a scene in high resolution and with long range.
On the other hand, ToF camera modules are capable of capturing the entire field-of-view at once, but suffer from limited resolution (these ToF sensors usually have less than 1 Mpixel resolution) and are also sensitive to the multipath problem (a form of interference caused by reflected waves reaching the sensor from two or more paths).
Structured light systems are able to eliminate some of the main disadvantages of LiDAR systems and ToF modules: they can capture a few tens of thousands of 3D points at once with high accuracy, and do not suffer from the multipath problem. In addition, structured light systems do not require specially designed circuitry, such as ultrafast switching electronics. However, these structured light systems are not suitable for most advanced applications, because the depth scanning resolution (number of 3D points captured) required by these applications is much higher.
SUMMARY
Therefore, it is an object of the proposed solution to provide an improved depth sensing method and system which overcomes, or at least reduces, the problems mentioned above, by capturing a denser 3D point cloud while keeping the advantages of multipath resistance and fast capturing speed. The foregoing and other objects are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
According to a first aspect, there is provided a depth sensing system comprising: a structured light source comprising emitting means configured to emit structured light beams, projecting means configured to project the structured light beams onto an object of interest; and a sensor configured to repeatedly capture reflections of the structured light beams reflected from the object of interest; wherein the projecting means comprises deflecting means configured to change the projecting direction of the structured light beams with respect to the emitting means to offset the projected structured light beams relative to the object of interest between each consecutive capture of the sensor.
The sensor performing successive captures to capture a series of reflections of the structured light beams enables increasing the number of 3D points for a given scenery, while ensuring multipath immunity and a fast capturing capability. This high resolution can be achieved by the introduction of the deflecting means, which enables slightly offsetting the projection on the scenery for each successive capture by the sensor. Offsetting the point cloud this way optically multiplies the number of dots and thus increases the resolution without the structured light source itself needing to be moved between successive captures. The proposed system and the itemized properties result in a compact architectural element, suitable for product integration, that produces a scenery scanning solution with high accuracy and a long measuring distance.
In an embodiment the sensor is implemented as a sensing array comprising multiple sensors, thus further increasing resolution and accuracy, and enabling enhanced capture dimensions.
In a possible implementation form of the first aspect the emitting means comprise a laser generator configured to generate a laser beam, the laser beam following an optical path between the laser generator and the object of interest; and the projecting means comprise a beam splitter positioned along the optical path configured to split the laser beam into a plurality of diverging laser beams, thus allowing for higher resolution depth sensing with increased precision due to the use of laser beams.
In a further possible implementation form of the first aspect the beam splitter is positioned between the laser generator and the deflecting means along the optical path, thus allowing for a more compact design of the structured light source and more protection from outside forces for the beam splitter component.
In a further possible implementation form of the first aspect the beam splitter is positioned between the deflecting means and the object of interest along the optical path, allowing for a more precise pattern definition to be projected and offset between each capture, since the deflecting means only deflect one beam, which then arrives at the beam splitter to be split into a plurality of beams.
In a further possible implementation form of the first aspect the beam splitter comprises a Diffractive Optical Element configured to generate a structured light pattern to be projected onto said object of interest, while still retaining the optical characteristics of the input beam, such as size, polarization and phase, as well as allowing for a wide field-of-view (around 80 degrees diagonally).
In a further possible implementation form of the first aspect the structured light source further comprises a corrective optical element, positioned between the laser generator and the beam splitter along the optical path, configured to collimate the laser beam, thus making it possible to align the laser beams in a specific direction (i.e., to make collimated light or parallel rays), or to cause the spatial cross section of the laser beam to become smaller for increased precision.
In a further possible implementation form of the first aspect the deflecting means comprise a reflective element, such as a mirror, which enables reusing existing optical image stabilization elements (e.g. a tilt mirror) used in so-called "folded cameras", whose optical axis is first aligned horizontally and later angled by 90 degrees, thus saving the time and costs of redesigning a new one.

In a further possible implementation form of the first aspect the deflecting means comprise a refractive element, such as a prism, which enables reusing existing optical image stabilization elements (e.g. a tilt prism) used in so-called "folded cameras", whose optical axis is first aligned horizontally and later angled by 90 degrees, thus saving the time and costs of redesigning a new one.
In a further possible implementation form of the first aspect the deflecting means are configured in their initial position to deflect the path of a structured light beam from the structured light source at a substantially right angle, thus allowing for a compact design that is especially advantageous for mobile devices.
In a further possible implementation form of the first aspect the deflecting means are configured to change the projecting direction of the structured light beams from the initial position by means of a tilting mechanism configured to adjust the angle of the deflecting means relative to at least one tilt axis, the at least one tilt axis being fixed relative to the emitting means, thus allowing for small, precise movement of the deflecting means and a compact design.
In a further possible implementation form of the first aspect the structured light source is configured to project infrared light beams and the sensor is an optical sensor that is only sensitive to the IR light spectrum, thus allowing for increased measuring accuracy by reducing the effect of noise in the captured images from other frequencies.
According to a second aspect, there is provided a depth sensing method comprising: projecting structured light beams from a structured light source onto an object of interest; capturing reflections of the structured light beams reflected from the object of interest by means of a sensor; changing the projecting direction of the structured light beams with respect to the object of interest; and repeating the above cycle at least once, whereby the projecting direction of the structured light beams is changed with respect to the object of interest by means of deflecting means after each capture, so that the structured light beams projected onto the object of interest are offset in each new cycle with respect to the previous cycle.
Performing the above sequence of steps, together with the introduction of the deflecting means inside the optics system, enables slightly offsetting the point cloud location on the scenery for each capture. This enables high depth resolution and accuracy for capturing a given scenery, while ensuring multipath immunity and a fast capturing capability, without the structured light source itself needing to be moved between consecutive captures.
In a possible implementation form of the second aspect, projecting the structured light beams comprises projecting, onto the object of interest, a structured light pattern formed by the structured light beams; capturing the reflections comprises capturing, by means of an optical sensor, a reflected pattern comprising structured light beams reflected from the object of interest; and the method further comprises: calculating, using triangulation, a 3D point cloud comprising at least one 3D point, wherein each 3D point represents the distance of a point of the object of interest from the structured light source, calculated from the position of the reflected light beam on the optical sensor, using a known projecting angle and a known baseline distance between the optical sensor and the structured light source; wherein a new 3D point cloud is calculated after each cycle.
Performing the above sequence of steps enables capturing more 3D points with a structured light-based system compared to the non-scanning structured light solutions, while still retaining robustness to multipath and enabling fast depth sensing. The use of an optical sensor ensures that production costs would also remain reasonable, since either a standard optical sensor can be used, or an optical sensor that needs very little customization.

In a further possible implementation form of the second aspect the different 3D point clouds calculated after each cycle are combined into a cumulative 3D point cloud, the cumulative 3D point cloud having a larger data point resolution than each of the different 3D point clouds. This ensures an even higher data resolution as well as a compact data format for saving or forwarding for further interpretation (such as 3D visualization purposes).
In a further possible implementation form of the second aspect the repeating of the cycles stops when the data point resolution of the cumulative 3D point cloud reaches a predetermined minimum threshold, thus ensuring optimal running time and effective use of resources, while still ensuring that the required resolution is achieved.
In a further possible implementation form of the second aspect the deflecting means comprise a tilting mechanism for changing the projecting direction of the structured light beams by adjusting the angle of the deflecting means with respect to the structured light source between predefined angles. The use of such a tilting mechanism ensures high accuracy while also further lowering production costs, since either a standard tilting mechanism can be used, or a tilting mechanism that needs very little customization.
In a further possible implementation form of the second aspect the repeating of the cycles stops when at least one cycle according to each of the predefined angles has completed, thus enabling optimal running time and effective use of resources, while still ensuring that the required resolution is achieved.
These and other aspects will be apparent from and elucidated with reference to the embodiment(s) described below.

BRIEF DESCRIPTION OF THE DRAWINGS
In the following detailed portion of the present disclosure, the aspects, embodiments and implementations will be explained in more detail with reference to the example embodiments shown in the drawings, in which:
Fig. 1 shows a schematic top view of a system in accordance with the prior art;
Fig. 2 shows a schematic top view of a system in accordance with one embodiment of the first aspect;
Fig. 3 illustrates the triangulation method used for the calculations in accordance with embodiments of the first and second aspects;
Fig. 4 shows a schematic cross-section of a system in accordance with another embodiment of the first aspect;
Fig. 5A and 5B show schematic cross-sections of a structured light source in accordance with alternative embodiments of the first aspect;
Fig. 6A and 6B show schematic illustrations of tilting mechanisms for deflecting means in accordance with further alternative embodiments of the first aspect; and
Fig. 7 shows a logic flow diagram illustrating steps of a method in accordance with possible embodiments of the second aspect.
DETAILED DESCRIPTION
Fig. 1 illustrates a typical prior art structured light depth sensing assembly, wherein a laser generator is utilized to generate a laser beam, which is then split into several beams (a structured light pattern) by means of a Diffractive Optical Element (DOE). These laser beams are reflected by the object of a scene and the reflected pattern is captured by a sensor. The depth can be calculated by triangulation based on the position of the reflected beams on the sensor, knowing the baseline distance between the laser generator and the sensor, as will be described in detail below with respect to Fig. 3.
Fig. 2 shows a schematic top view of a system according to the present disclosure. A structured light source 1 is shown comprising emitting means 2 for emitting structured light beams 4 and projecting means 3 for projecting the structured light beams 4 onto an object of interest 6.
The system also comprises a sensor 7 for repeatedly capturing reflections of the structured light beams 4 reflected back from the object of interest 6. As shown in the figure, the projecting means 3 further comprises deflecting means 5 which allow the changing of the projecting direction of the structured light beams 4 with respect to the emitting means 2, which offsets the projected structured light beams 4 relative to the object of interest 6 between each consecutive capture of the sensor 7. This way, after each direction change, a different reflection is captured of the object 6, thereby increasing the resolution of the resulting depth image with each capture.
Fig. 3 illustrates the triangulation method used for the calculation of scene depth using a structured light depth scanning system according to the present disclosure. For the sake of simplicity only one beam is shown, which is emitted in the same plane as the central axis of the structured light source 1, preferably via a beam splitter 8 at the end of the structured light source 1, at a horizontal angle θ measured from the horizontal main axis of the structured light source 1. This beam is then reflected by the measured scene onto a sensor 7. The place where the reflected beam hits the sensor 7 is depicted with a dot representing a 3D point 18. If the emitter, beam splitter 8 and the sensor 7 are perfectly aligned mechanically, the 3D point 18 will be situated on the horizontal axis of the sensor 7. The horizontal coordinate of the 3D point 18, calculated from the center of the sensor 7, is denoted by x in Fig. 3. This particular laser beam is selected only for simplicity of explanation, and the derived formulas are similar for all other emitted and reflected beams. Noting that x is parallel to X1, and denoting the focal length by f and the baseline (the distance between the optical center of the camera module comprising the sensor 7 and the center of the structured light source 1) by B, the depth z of a scene point reflected by the laser beam can be calculated as:
z = (f · B) / (f · tan θ − x) (1)
It is clear from formula (1) that for different z values one obtains different x values. As a consequence, each laser beam will produce, on the sensor 7, one segment when the scene point moves closer to or further from the camera. This segment is called the epipolar segment of the laser beam, and the length of each epipolar segment depends on the scene depth range. In order to have a structured light solution that is able to capture a large scene depth, it is necessary to allow long epipolar segments. Moreover, in order to be able to uniquely assign a detected spot to its corresponding beam, the epipolar segments are not allowed to intersect each other. As a consequence, the requirement to have long, non-intersecting epipolar segments limits the number of laser beams that can be projected onto the scene at once.
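The relation between depth and dot position can be illustrated with a short Python sketch; the sign convention of formula (1) as used here is a reconstruction for illustration and may differ from the exact geometry of the patent drawings:

```python
import math

# Hedged sketch of the triangulation relation of Fig. 3: f is the focal
# length, B the baseline, theta the projecting angle of the beam, and x the
# horizontal coordinate of the detected dot on the sensor 7.
def depth_from_sensor_position(x, f, B, theta):
    return (f * B) / (f * math.tan(theta) - x)

def sensor_position_for_depth(z, f, B, theta):
    # Inverse relation: where on its epipolar segment the dot lands for depth z.
    return f * math.tan(theta) - (f * B) / z
```

Because different depths z map to different coordinates x, each beam sweeps out its epipolar segment on the sensor as the scene point moves.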
Since in one capture only a limited number of laser beams can be used, one solution to increase the total number of 3D points is to perform several captures where the position of the laser beams slightly differs from one shot to the other. This could be accomplished by slightly rotating or moving the structured light source 1; however, this solution would not be appreciated by users, since it would imply moving the device. Furthermore, the solution would also not work if the camera is fixed on a tripod (which is a common use case for depth sensing). Therefore, the inventors came up with the better solution of keeping the structured light source 1 fixed and changing the orientation of the laser beams from one shot to the other, which solves the issues above without the need for the device to be moved, and also works if the device containing the depth sensing system is mounted on a fixed tripod.
Fig. 4 shows a schematic cross-section of a system according to the present disclosure, wherein the emitting means 2 comprises a laser generator 2A (e.g. VCSEL) for generating a laser beam 4A, the laser beam 4A following an optical path between the laser generator 2A and the object of interest 6 (shown as a dotted line), deflected by deflecting means 5 as described above.
In preferred embodiments, as illustrated in Figs. 4 to 6, the components of the structured light source 1 are arranged so that the deflecting means 5, when in their initial position, can deflect the path of the laser beam 4A from the structured light source 1 at a substantially right angle towards the object of interest 6. The projecting means 3 may comprise a beam splitter 8 positioned along the optical path for splitting the laser beam 4A into a plurality of diverging laser beams 4A. In some embodiments the beam splitter 8 is configured to generate a structured light pattern 10 to be projected onto the object of interest 6. As illustrated in Fig. 4, after a first structured light pattern 10A is projected onto the object of interest 6, the deflecting means 5 are rotated or moved to change the projecting direction and thus project a second structured light pattern 10B, slightly offset compared to the first structured light pattern 10A, onto the object of interest 6.

In some embodiments the beam splitter 8 comprises a Diffractive Optical Element (DOE, also known as a diffractive beam splitter, multispot beam generator or array beam generator) 8A as shown in Fig. 5A, for generating this structured light pattern 10. The DOE is a single optical element that divides an input beam into N output beams, wherein each output beam retains the same optical characteristics as the input beam, such as size, polarization and phase. A DOE can generate either a 1-dimensional beam array (1 x N) or a 2-dimensional beam matrix (M x N), depending on the diffractive pattern on the element. The DOE is generally used, as in this case, with monochromatic light such as a laser beam, and is designed for a specific wavelength and angle of separation between output beams.
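The M x N beam matrix produced by a DOE can be pictured with a small sketch that lists the output beam directions; the separation angle is a hypothetical design parameter for illustration, not a value from the disclosure:

```python
# Angular directions (in degrees) of the output beams of an M x N diffractive
# beam splitter, centred on the input beam axis. sep_deg is the separation
# angle between neighbouring output beams (an assumed DOE design parameter).
def doe_beam_angles(m, n, sep_deg):
    rows = [(i - (m - 1) / 2) * sep_deg for i in range(m)]
    cols = [(j - (n - 1) / 2) * sep_deg for j in range(n)]
    return [(ry, rx) for ry in rows for rx in cols]
```

For example, doe_beam_angles(3, 3, 1.0) yields a 3 x 3 grid of directions centred on the axis of the input beam.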
In some embodiments the projecting means 3 further comprises a corrective optical element 9, positioned between the laser generator 2A and the beam splitter 8 along the optical path, preferably before the deflecting means 5, for collimating the laser beam 4A. In an embodiment the optical element 9 is a collimator lens designed to narrow the laser beam 4A. To narrow can in this case mean either to cause the directions of motion to become more aligned in a specific direction (i.e., making collimated light or parallel rays), or to cause the spatial cross section of the beam to become smaller (beam limiting device). The collimator may consist of a curved mirror or lens. In some embodiments the system further comprises a base substrate 17 (e.g. a printed circuit board or flexible board), and a housing 16 that surrounds the components of the structured light source 1.
In some embodiments the system further comprises a sensing device, such as an IR camera 7B shown in Fig. 4, to capture the reflected beams from the object of interest 6 and measure the position of the reflected beams on its optical sensor 7A for computational purposes, as described above.
Fig. 5A and 5B show schematic cross-sections of a structured light source illustrating two possible implementations of the system according to the present disclosure. In these implementations, features that are the same or similar to corresponding features previously described or shown herein are denoted by the same reference numeral as previously used for simplicity.
In the embodiment illustrated on Fig. 5A the beam splitter 8 is positioned after the deflecting means 5 and before the object of interest 6, allowing for a more precise pattern definition to be projected and offset between each capture, since the deflecting means 5 only deflect one beam, which then arrives at the beam splitter 8 to be split into a plurality of beams 4 or a structured light pattern 10.
In another embodiment illustrated on Fig. 5B the beam splitter 8 is positioned between the laser generator 2A and the deflecting means 5, thus allowing for a more compact design of the structured light source 1 and more protection from outside forces for the beam splitter 8.
In both embodiments, the deflecting means 5 are used only to direct the beam 4 onto the required parts of the scene to be measured. Either the deflecting means 5 direct a single beam 4, which the beam splitter 8 then splits into many (as shown on Fig. 5A), or they direct the multitude of beams 4 already generated by the beam splitter 8 (as shown on Fig. 5B). Thus, the dot pattern 10 is generated by the beam splitter 8 (or DOE element 8A); the deflecting means 5 do not generate any point cloud, being only a reflective or refractive element in the optical system.

Fig. 6A and 6B show schematic illustrations of tilting mechanisms for deflecting means illustrating two possible implementations of the system according to the present disclosure. In these implementations, features that are the same or similar to corresponding features previously described or shown herein are denoted by the same reference numeral as previously used for simplicity.
In the embodiments illustrated on both Fig. 6A and Fig. 6B the deflecting means 5 are configured to change the projecting direction of the structured light beams 4 from the initial position by means of a tilting mechanism 12 configured to adjust the angle of the deflecting means 5 relative to at least one tilt axis 13, the at least one tilt axis 13 being fixed relative to the emitting means 2. In some embodiments the tilting mechanism 12 can be driven by an actuator 14 connected to a driver 15. The actuator 14 can comprise e.g. electromagnetic, shape memory alloy, piezo, electrostatic, magneto-strictive, MEMS or electroactive polymer-based actuation. In further embodiments, the required amount of optical tilting can be limited to +/-0.5 to 1.0 degrees along either of the tilt axes. In further possible embodiments the behavior of the tilting mechanism 12 can also be of a switching type, i.e. bi-stable, instead of continuous.
In an embodiment illustrated on Fig. 6A the deflecting means 5 comprise a refractive element 5B, such as a prism, configured to deflect, in its original position, the structured light beams 4 coming from the emitting means 2 at a substantially right angle. In an embodiment illustrated on Fig. 6B the deflecting means 5 comprise a reflective element 5A, such as a mirror.
In a further embodiment, as also illustrated in Fig. 6B, the deflecting means 5 comprise a tilting mechanism 12 for changing the projecting direction of the structured light beams 4 by adjusting the angle of the deflecting means 5 with respect to the structured light source 1 (or the emitting means 2) between predefined mechanical angles, thereby resulting in different optical deflecting angles, depicted in Fig. 6B as α1, α2, α3, wherein each reflected light beam has an optical deflection angle of twice the mechanical angle. The predefined mechanical angles may be set within a tilt operating range of e.g. +/-0.5 degrees at a step resolution of 0.01 degrees (as the smallest drivable step). The rated tilt angle may in further embodiments be selected by a user to adjust between super fine, basic or coarse scanning in relation to the scanning distance. In an embodiment the predefined mechanical angles may be set at fixed values of +/-0.1, +/-0.3, or +/-0.5 degrees.
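The doubling of the mechanical tilt into the optical deflection angle can be expressed trivially; the listed mechanical angles are merely the example values from the paragraph above:

```python
# Example predefined mechanical tilt angles of the mirror 5A (degrees),
# taken from the exemplary values given in the description.
MECH_ANGLES_DEG = [-0.5, -0.3, -0.1, 0.1, 0.3, 0.5]

def optical_deflection(mech_angle_deg):
    # A mirror tilted by a mechanical angle turns the reflected beam by
    # twice that angle.
    return 2 * mech_angle_deg
```

With the example values, the resulting optical deflection angles range from -1.0 to +1.0 degrees.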
The proposed tilt mirror 5A (or tilt prism 5B) unit is identical to the one used in cameras with an angled (e.g. by 90 degrees) optical axis arrangement (e.g. long telecentric types) for image stabilization, and can thus be re-used. The production costs would therefore also remain reasonable, as the other system parts are either standard or require only very low-level customization.
Fig. 7 shows a logic flow diagram illustrating steps of a method according to the present disclosure.
In a first step 101 of a depicted capture cycle, structured light beams 4 (or laser beams 4A) are projected from a structured light source 1 onto an object of interest 6. This projection can happen via any implementation of the structured light source 1 as described above with respect to Figs. 2 to 6. In a possible embodiment, the step of projecting 101 the structured light beams 4 (or laser beams 4A) comprises projecting, onto the object of interest 6, a structured light pattern 10 formed by the structured light beams 4 as described above.
In a next step 102, reflections of the structured light beams 4 that are reflected from the object of interest 6 are captured by means of a sensor 7. In a possible embodiment, the step of capturing 102 the reflections comprises capturing, by means of an optical sensor 7A (integrated in e.g. an IR camera 7B shown in Fig. 4), a reflected pattern 11 comprising structured light beams 4 reflected from the object of interest 6.
In a next, optional step 103, a 3D point cloud 19 comprising at least one 3D point 18 is calculated using a triangulation method as described above with respect to Fig. 2, wherein each 3D point 18 represents the distance of a point of the object of interest 6 from the structured light source 1, calculated from the position x of the reflected light beam on the optical sensor 7A, using a known projecting angle θ and a known baseline distance B between the optical sensor 7A and the structured light source 1. In an embodiment, a new 3D point cloud 19 is calculated after each cycle.
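The triangulation of step 103 can be sketched under an assumed pinhole-camera geometry. The formula below is a minimal illustration, not the patent's own implementation: it assumes the camera is at the origin, the structured light source is offset by the baseline B along the sensor's x direction, the beam is projected at angle θ from the optical axis, and f is the camera focal length (same units as the sensor position x).

```python
import math

# Hedged sketch of active-triangulation depth recovery. Geometry assumed:
# a point on the projected beam satisfies X = B - Z * tan(theta), and the
# pinhole projection gives x = f * X / Z. Solving for Z yields:
#     Z = f * B / (x + f * tan(theta))

def depth_from_reflection(x, theta_rad, baseline, focal_length):
    """Depth Z of the reflecting point from its position x on the sensor."""
    return focal_length * baseline / (x + focal_length * math.tan(theta_rad))

# Example: f = 4 mm, B = 50 mm, beam parallel to the optical axis (theta = 0),
# reflection detected 0.2 mm off-center on the sensor.
z = depth_from_reflection(x=0.0002, theta_rad=0.0, baseline=0.05, focal_length=0.004)
print(z)  # depth in metres
```

The key property used in later steps is that each detected reflection, together with the known θ and B, yields one 3D point 18.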
In a next, optional step 104, the 3D point clouds 19 (that are calculated after each cycle as described above) are combined into a cumulative 3D point cloud 20, and the cumulative 3D point cloud 20 is saved in a local or remote storage device, or is forwarded to a predetermined destination via a data connection. Thus, the cumulative 3D point cloud 20 has a larger data point resolution than each of the individual 3D point clouds 19.
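Step 104 amounts to a union of per-cycle point sets. The sketch below assumes an illustrative data layout (each cloud as a list of (x, y, z) tuples); the patent does not prescribe any particular representation.

```python
# Sketch of combining per-cycle 3D point clouds into a cumulative cloud.
# Because each cycle projects the beams at a slightly different direction,
# the per-cycle clouds sample different surface points, so the cumulative
# cloud has a higher data point resolution than any single-cycle cloud.

def accumulate(point_clouds):
    """Concatenate a sequence of point clouds (lists of (x, y, z) tuples)."""
    cumulative = []
    for cloud in point_clouds:
        cumulative.extend(cloud)
    return cumulative

cycles = [[(0.00, 0.0, 1.0)], [(0.01, 0.0, 1.0), (0.02, 0.0, 1.0)]]
print(len(accumulate(cycles)))  # total number of 3D points across cycles
```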
In a next step 105, the projecting direction of the structured light beams 4 is changed with respect to the object of interest 6 by means of deflecting means 5 as described above with respect to Figs. 6A-6B (e.g. the mirror 5A is tilted to its next position as shown in Fig. 6B).
After changing the projecting direction of the structured light beams 4 (or laser beams 4A) the above cycle is repeated at least once, whereby the projecting direction of the structured light beams 4 (or laser beams 4A) is changed with respect to the object of interest 6 by means of deflecting means 5 after each capture, so that the structured light beams 4 projected onto the object of interest 6 are offset in each new cycle with respect to the previous cycle.
In an embodiment, the repeating of the cycles stops 106A when the data point resolution of the cumulative 3D point cloud 20 reaches a predetermined minimum threshold (i.e. when enough 3D points have been calculated). The threshold for the minimum number of 3D points depends on the further applications of the 3D point cloud, but in some exemplary cases it can be set at 50,000 points. At each projection cycle, N laser beams 4A are shot onto the scene (or the object of interest 6 within the scene) and their reflections are captured by the IR camera 7B. The reflections are detected in the image captured by the sensor 7A, and the 3D coordinates of each of the N reflection points (3D points 18) are calculated. In a next step the direction of each laser beam 4A is changed and the N laser beams 4A are shot again, this time with different orientations. The new reflections are captured and the scene 3D coordinates for the new laser positions are calculated. This step can be repeated until the total desired number of 3D coordinates has been captured.
In another embodiment, the repeating of the cycles stops 106B when at least one cycle according to each of the predefined angles, as described above with respect to Fig. 6B, has completed. Otherwise, in case there are any predefined angles left, the deflecting means 5 are moved (tilted) to the next predefined angle, and the cycle is repeated. In a possible embodiment this means that while the mirror 5A is in its first position, the laser generator 2A is powered on and a picture is taken with the IR camera 7B. The reflection dots are detected in the picture and the corresponding 3D points 18 are calculated. Next, the mirror 5A is tilted to its next position. Again, the laser generator 2A is started and a picture is captured. The 3D point 18 coordinates are calculated. The sequence of mirror tilting, image capturing, and 3D point calculation can be repeated for a fixed number of predefined positions of the mirror 5A. Alternatively, the process can be terminated before the mirror 5A has visited all the predefined positions. The early stop can be, for instance, due to the necessary number of 3D points 18 having already been achieved, as described above.
In yet another embodiment, the mirror 5A can be driven so that some intermediate positions are skipped, for instance the mirror 5A is moved over several predefined positions at once, or the mirror 5A takes only the odd-indexed predefined positions. The tilt angles of each laser beam 4A for each mirror position must be known accurately, and this measurement must be part of the device calibration. However, no special calibration is necessary, and existing, known methods can be used.
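The position-skipping variant can be sketched against a calibrated lookup table. The table values below are made up for illustration; in a real device each entry would come from the per-position tilt-angle calibration mentioned above.

```python
# Sketch: the mirror visits only the odd-indexed entries of a calibration
# table mapping each predefined mirror position to its measured tilt angle
# (degrees). The calibration values here are illustrative placeholders.

calibrated_tilt_deg = [-0.5, -0.3, -0.1, 0.1, 0.3, 0.5]

def positions_to_visit(table, odd_only=True):
    """Return (index, tilt_angle) pairs for the positions the mirror takes."""
    indices = range(1, len(table), 2) if odd_only else range(len(table))
    return [(i, table[i]) for i in indices]

print(positions_to_visit(calibrated_tilt_deg))  # odd-indexed positions only
```

Skipping positions trades point-cloud density for scan time, while the calibrated table keeps the per-beam tilt angles accurately known for triangulation.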
The various aspects and implementations have been described in conjunction with various embodiments herein. However, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject-matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
The reference signs used in the claims shall not be construed as limiting the scope.

Claims

1. A depth sensing system comprising: a structured light source (1) comprising emitting means (2) configured to emit structured light beams (4), projecting means (3) configured to project said structured light beams (4) onto an object of interest (6); and a sensor (7) configured to repeatedly capture reflections of said structured light beams (4) reflected from said object of interest (6); wherein said projecting means (3) comprises deflecting means (5) configured to change the projecting direction of said structured light beams (4) with respect to said emitting means (2) to offset the projected structured light beams (4) relative to said object of interest (6) between each consecutive capture of said sensor (7).
2. A system according to claim 1, wherein said emitting means (2) comprises a laser generator (2A) configured to generate a laser beam (4A), said laser beam (4A) following an optical path between the laser generator (2A) and said object of interest (6); and wherein said projecting means (3) comprises a beam splitter (8) positioned along said optical path configured to split said laser beam (4A) into a plurality of diverging laser beams (4A).
3. A system according to claim 2, wherein said beam splitter (8) is positioned between said laser generator (2A) and said deflecting means (5) along said optical path.
4. A system according to claim 2, wherein said beam splitter (8) is positioned between said deflecting means (5) and said object of interest (6) along said optical path.
5. A system according to any one of claims 2 to 4, wherein said beam splitter (8) comprises a Diffractive Optical Element (8A) configured to generate a structured light pattern (10) to be projected onto said object of interest (6).
6. A system according to any one of claims 2 to 5, wherein said structured light source (1) further comprises a corrective optical element (9), positioned between said laser generator (2A) and said beam splitter (8) along said optical path, configured to collimate said laser beam (4A).
7. A system according to any one of claims 1 to 6, wherein said deflecting means (5) comprise a reflective element (5A), such as a mirror.
8. A system according to any one of claims 1 to 7, wherein said deflecting means (5) comprise a refractive element (5B), such as a prism.
9. A system according to any one of claims 7 or 8, wherein said deflecting means (5) are configured in their initial position to deflect the path of a structured light beam (4) from said structured light source (1 ) at a substantially right angle.
10. A system according to any one of claims 1 to 9, wherein said deflecting means (5) are configured to change the projecting direction of said structured light beams (4) from said initial position by means of a tilting mechanism (12) configured to adjust the angle of said deflecting means (5) relative to at least one tilt axis (13), said at least one tilt axis (13) being fixed relative to said emitting means (2).
11. A system according to any one of claims 1 to 10, wherein said structured light source (1) is configured to project infrared (IR) light beams and said sensor (7) is an optical sensor (7A) that is only sensitive to the IR light spectrum.
12. A depth sensing method comprising: projecting (101) structured light beams (4) from a structured light source (1) onto an object of interest (6); capturing (102) reflections of said structured light beams (4) reflected from said object of interest (6) by means of a sensor (7); changing the projecting direction of said structured light beams (4) with respect to said object of interest (6); and repeating the above cycle at least once, whereby the projecting direction of said structured light beams (4) is changed with respect to said object of interest (6) by means of deflecting means (5) after each capture, so that the structured light beams (4) projected onto said object of interest (6) are offset in each new cycle with respect to the previous cycle.
13. A method according to claim 12, wherein: projecting (101) said structured light beams (4) comprises projecting, onto said object of interest (6), a structured light pattern (10) formed by said structured light beams (4); capturing (102) said reflections comprises capturing, by means of an optical sensor (7A), a reflected pattern (11) comprising structured light beams (4) reflected from said object of interest (6); and wherein the method further comprises: calculating (103), using triangulation, a 3D point cloud (19) comprising at least one 3D point (18), wherein each 3D point (18) represents the distance of a point of said object of interest (6) from said structured light source (1), calculated from the position x of the reflected light beam on said optical sensor (7A), using a known projecting angle θ and a known baseline distance B between said optical sensor (7A) and said structured light source (1); wherein a new 3D point cloud (19) is calculated after each cycle.
14. A method according to claim 13, wherein said different 3D point clouds (19) calculated after each cycle are combined (104) into a cumulative 3D point cloud (20), said cumulative 3D point cloud (20) having a larger data point resolution than each of said different 3D point clouds (19).
15. A method according to claim 14, wherein the repeating of said cycles stops (106A) when the data point resolution of said cumulative 3D point cloud (20) reaches a predetermined minimum threshold.
16. A method according to any one of claims 12 to 15, wherein said deflecting means (5) comprise a tilting mechanism (12) for changing the projecting direction of said structured light beams (4) by adjusting the angle of said deflecting means (5) with respect to said structured light source (1) between predefined angles.
17. A method according to claim 16, wherein the repeating of said cycles stops (106B) when at least one cycle according to each of said predefined angles has completed.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/072364 WO2021032298A1 (en) 2019-08-21 2019-08-21 High resolution optical depth scanner

Publications (1)

Publication Number Publication Date
WO2021032298A1 true WO2021032298A1 (en) 2021-02-25

Family

ID=67704529

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215235A1 (en) * 2011-04-29 2013-08-22 Austin Russell Three-dimensional imager and projection device
US20170186182A1 (en) * 2015-05-20 2017-06-29 Facebook, Inc. Method and system for generating light pattern using polygons
WO2019053998A1 (en) * 2017-09-13 2019-03-21 ソニー株式会社 Distance measuring module

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11403872B2 (en) * 2018-06-11 2022-08-02 Sray-Tech Image Co., Ltd. Time-of-flight device and method for identifying image using time-of-flight device
WO2020038445A1 (en) 2018-08-24 2020-02-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Infrared projector, imaging device, and terminal device
EP3824339A4 (en) * 2018-08-24 2021-09-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Infrared projector, imaging device, and terminal device
WO2023060164A1 (en) * 2021-10-06 2023-04-13 The Board Of Regents Of The University Of Texas System Optical rotator systems and methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 19756365; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 Ep: pct application non-entry in european phase (ref document number: 19756365; country of ref document: EP; kind code of ref document: A1)