WO2019179123A1 - ToF camera and design method for diffractive optical element - Google Patents

ToF camera and design method for diffractive optical element

Info

Publication number
WO2019179123A1
Authority
WO
WIPO (PCT)
Prior art keywords
tof camera
pixel
sensor
light
light source
Prior art date
Application number
PCT/CN2018/113875
Other languages
French (fr)
Chinese (zh)
Inventor
余新
胡飞
鲁宁
李屹
Original Assignee
深圳光峰科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳光峰科技股份有限公司
Publication of WO2019179123A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/22Measuring arrangements characterised by the use of optical techniques for measuring depth
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012Optical design, e.g. procedures, algorithms, optimisation routines
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • G03B15/02Illuminating scene

Definitions

  • the present invention relates to a ToF camera, a projection system having the ToF camera, a robot, a driving system and an electronic device, and a method of designing a diffractive optical element for a dot matrix light source of the ToF camera.
  • the triangulation method is insensitive to ambient light and has a simple hardware structure.
  • The structured light measurement method has high spatial resolution and low computational cost, but its results are strongly affected by ambient light, and measurements of high-speed moving objects have large errors.
  • Compared with the triangulation method and the structured light measurement method, the ToF ranging method has a simple structure and a small product volume, measures high-speed moving objects well, and is little affected by ambient light. The ranging principle of the ToF method is shown in Figure 1.
  • Considering measurement accuracy, the triangulation method is generally used for medium and long distances, while the structured light measurement method and the ToF ranging method are used for short distances.
  • ToF ranging methods fall into two categories. The first measures the echo time of a pulse signal; single-point rangefinders and scanning laser radars generally adopt this approach. Because the light energy is concentrated, rangefinders and lidars using this method measure from a few meters to several kilometers.
  • the other type is the phase detection method, which measures the distance using the phase shift of the modulated signal echoes loaded on the continuous optical signal.
  • The distance measured by this method is limited by the frequency of the modulation signal, typically a few hundred kilohertz to several tens of megahertz; the effective measurement distance decreases as the modulation frequency increases (the standard unambiguous-range relation is sketched below).
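  • For reference, a standard relation for phase-based ToF (not stated explicitly in this text) ties the maximum unambiguous range to the modulation frequency; it is consistent with the 5 MHz modulation and 30 m ranging range quoted later for AGV use:

$$L_{\max} = \frac{c}{2 f_{\mathrm{mod}}}, \qquad f_{\mathrm{mod}} = 5\ \mathrm{MHz} \;\Rightarrow\; L_{\max} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 5\times 10^{6}\ \mathrm{Hz}} = 30\ \mathrm{m}.$$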
  • the ToF camera adopts the latter ranging principle, and its structure is as shown in Fig. 2.
  • the system is composed of a light source 101, an object 102 to be measured, a lens 103, a ToF depth sensor 104, and a hardware control processing circuit 105.
  • the amplitude of the light emitted by the light source 101 is modulated by a sine or square wave and is emitted to the object 102 to be measured.
  • the lens 103 collects the reflected signal light and transmits it to the ToF depth image sensor 104.
  • the ToF depth image sensor 104 is a phase locked CCD or CMOS photosensitive array.
  • Figure 3 is a schematic view of the structure of a phase-locked CCD. The area labeled Light Opening is the effective photosensitive area; light incident elsewhere does not affect the result.
  • This special structure of phase-locked CCD or CMOS devices dynamically controls the electric field distribution in the device, causing electrons in the hole-electron pair excited by the photons to move to specific regions of the device and accumulate in the electron traps in the region. Due to limitations of CCD or CMOS devices, each frame of image requires a long time to integrate, so high-speed sampling of high-speed modulated optical signals (several hundred kilohertz to several tens of megahertz) cannot be achieved.
  • However, a phase-locked CCD or CMOS device can rapidly switch the electric field distribution on the device at a frequency synchronized with the modulation frequency, so that energy within the same phase range of the modulated signal accumulates in the same electron trap, thereby sampling the modulated signal at different phase points.
  • A is the amplitude of the received signal and B is its DC component; they are proportional to the number of received signal photons and the number of ambient/noise photons, respectively.
  • A and B satisfy Formulas 2 and 3, respectively (the formula images are not included in this text; a sketch of the standard relations follows).
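  • Formulas 1–4 appear only as images in the source. The following is a minimal sketch of the textbook four-phase lock-in demodulation used by ToF depth sensors; the mapping to the patent's exact formulas is assumed, and the numeric values are illustrative.

```python
import numpy as np

c = 3.0e8        # speed of light, m/s
f_mod = 5.0e6    # modulation frequency, Hz (illustrative)

def demodulate(C0, C1, C2, C3):
    """C0..C3: electrons accumulated at the 0/90/180/270 degree phase taps."""
    phi = np.arctan2(C1 - C3, C0 - C2) % (2 * np.pi)  # phase difference (Formula 1)
    A = 0.5 * np.hypot(C1 - C3, C0 - C2)              # signal amplitude (Formula 2)
    B = 0.25 * (C0 + C1 + C2 + C3)                    # DC component (Formula 3)
    d = c * phi / (4 * np.pi * f_mod)                 # measured distance (Formula 4)
    return phi, A, B, d

# Example: synthesize taps for a target at 15 m with an ambient offset of 200 e-
phi_true = 4 * np.pi * f_mod * 15.0 / c
C = [1000 + 500 * np.cos(phi_true - k * np.pi / 2) + 200 for k in range(4)]
print(demodulate(*C))  # recovered distance is ~15 m
```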
  • the error sources of ToF camera ranging are mainly ambient light, quantum noise and device noise.
  • The ambient illuminance does not change with distance, so its effect on the measurement result is independent of distance but depends on the light-receiving angle.
  • Quantum noise, also known as shot noise, is determined by the quantum nature of the photoelectric effect and does not change with the environment.
  • the device noise includes reset noise, flicker noise, current noise of the amplifier, and dark current noise.
  • the noise intensity increases with increasing temperature.
  • The size of the measurement error limits the ToF camera's ranging distance; reducing the error is equivalent to extending the distance the camera can measure. Since quantum noise varies with neither the device nor the environment, it determines the maximum accuracy achievable in the ideal case.
  • Since the ToF camera computes distance from the phase difference between the emitted and reflected light signals, the error ΔL satisfies Formula 5. Ideally the ambient light is much weaker than the signal light, so B can be approximated as A/2; substituting this into Formula 5 yields Formula 6, from which it can be inferred that the larger A is, the smaller the measurement error.
  • However, because each pixel of the ToF depth image sensor can accumulate only a limited number of electrons, increasing A after the pixel saturates no longer reduces the error and wastes source power; the value of A is therefore bounded. With a pixel saturation electron count of 100,000, the error can be controlled to 0.04% of the maximum range (a common form of Formulas 5 and 6 is sketched below).
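  • Formulas 5 and 6 also appear only as images. A common form of this error relation in the lock-in ToF literature is the following; it reproduces the qualitative conclusion above (the error shrinks as A grows) but may differ from the patent's exact expressions:

$$\Delta L = \frac{L_{\max}}{\sqrt{8}} \cdot \frac{\sqrt{B}}{A}, \qquad B \approx \frac{A}{2} \;\Rightarrow\; \Delta L = \frac{L_{\max}}{4\sqrt{A}}.$$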
  • Under the constraint of a bounded A, the way to further reduce the measurement error is to reduce the influence of ambient light on the measurement result. Because ambient illuminance does not change with distance, its effect is independent of distance but depends on the light-receiving angle. The angle of view can therefore be reduced by increasing the focal length of the light-receiving lens, weakening the influence of ambient light.
  • the variation of the ambient light power entering the lens with the distance is shown in Fig. 5(a).
  • the variation of simulation measurement accuracy with distance is shown in Fig. 5(b).
  • The simulation results show that, at constant ambient illuminance, narrowing the angle of view from 50 degrees to 15 degrees cuts the ambient light to 5% of its original level, which quadruples the measurement distance without affecting accuracy. Furthermore, the fewer electrons ambient light generates, the higher the measurement accuracy.
  • The influence of device noise on the measurement result (i.e., the error ΔL) is expressed by Formula 7:
  • where N_pseudo is the number of noise electrons introduced by the device.
  • If the laser is a non-ideal demodulated light source, expressing A and B in terms of the free electrons they introduce gives Formulas 8, 9, and 10, of which the first two are: B_eff = N_ambient + N_pseudo + PE_opt (Formula 8) and A = C_mod · C_demod · PE_opt (Formula 9).
  • In a specific measurement application, the device parameters and environmental influences are essentially constant, and the influence of device noise, and hence the error ΔL, can be reduced in three ways, as the sketch after this list illustrates:
  • First, cooling the device reduces device noise.
  • Second, increasing the number of received electrons weakens the influence of device noise, either by enlarging the aperture (without changing the photo-conversion efficiency, the aperture must double for every doubling of distance, at the cost of greater volume and expense) or by increasing the optical power (four times the power is needed for every doubling of distance).
  • Third, since a light source adapted for long range saturates the device at close range, dynamic optical power control can reduce such errors.
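  • A minimal sketch of the two scaling rules just stated, assuming the received signal per pixel scales as P·D²/r² (source power P, aperture diameter D, distance r) for flood illumination filling the field of view; the model and values are illustrative, not the patent's:

```python
def signal_electrons(P_watts, D_mm, r_m, k=1.0):
    """Relative photo-electron count; k absorbs QE, integration time, etc."""
    return k * P_watts * D_mm**2 / r_m**2

base = signal_electrons(0.7, 2.6, 10.0)                # 700 mW, 2.6 mm aperture, 10 m
aperture_doubled = signal_electrons(0.7, 5.2, 20.0)    # double D for double r
power_quadrupled = signal_electrons(2.8, 2.6, 20.0)    # or quadruple P instead
assert abs(aperture_doubled - base) < 1e-12
assert abs(power_quadrupled - base) < 1e-12
```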
  • Simulation experiments show that when the lens light-receiving angle is reduced from 50 degrees to 15 degrees, the number of photons generated by ambient light drops to 5% of the original, greatly improving measurement accuracy and allowing a farther measurement distance.
  • Fig. 7 shows the simulated long-range measurement results after the error is reduced by setting the lens angle to 15°.
  • Fig. 7(a) shows how the number of electrons generated in a pixel by signal light and ambient light varies with distance under different light source powers; Fig. 7(b) shows how the measurement error varies with distance under different light source optical powers.
  • the simulation results show that long-distance measurement requires very large light source power to ensure measurement accuracy.
  • The difference-frequency method using two high-frequency light sources can increase the maximum unambiguous range of the phase method. However, theoretical analysis shows that the difference-frequency method does not increase measurement accuracy.
  • Because the effective photosensitive area of the pixel structure on a ToF depth image sensor is very small relative to the total pixel area, generally 6%–15% of it, a further way to increase measurement accuracy is to improve the optical efficiency of the light source's signal light.
  • One sensor-side way to improve light efficiency is to add a microlens to each pixel to concentrate incident light onto the photosensitive region. The problem with this method is that it raises the intensity of ambient light along with that of the signal light, which works against accuracy in strong-light environments.
  • A ToF camera comprises a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; and the sensor senses the lattice-arranged illumination beams imaged onto it by the light-receiving lens system to obtain depth image information of the target object. The sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
  • A projection system comprises a ToF camera, the ToF camera comprising a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; the sensor senses the lattice-arranged illumination beams imaged onto it to obtain depth image information of the target object; and the sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
  • A robot comprises a ToF camera, the ToF camera comprising a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; the sensor senses the lattice-arranged illumination beams imaged onto it to obtain depth image information of the target object; and the sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
  • the robot is an automated guided robot.
  • A driving system comprises a driving device and a ToF camera, the ToF camera comprising a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; the sensor senses the lattice-arranged illumination beams imaged onto it to obtain depth image information of the target object; and the sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
  • the driving system is an automated driving system.
  • An electronic device comprises a ToF camera, the ToF camera comprising a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; the sensor senses the lattice-arranged illumination beams imaged onto it to obtain depth image information of the target object; and the sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
  • the electronic device is a cell phone, a computer, or an unmanned aerial vehicle.
  • A method for designing a diffractive optical element for the dot matrix light source of a ToF camera comprises the following steps: calculating the effective photosensitive area distribution of the ToF camera's sensor; setting a typical plane distance of the far-field plane; calculating the projection pattern distribution of the far-field plane on the sensor; and calculating the phase map of the diffractive optical element.
  • Compared with the prior art, the ToF camera of the present invention uses a dot matrix light source with pixels corresponding one-to-one to the lattice-arranged illumination beams, so each pixel of the sensor can sense its corresponding beam, providing reliable sensing results and improving the camera's reliability. At the same power, the number of photons sensed per pixel increases relative to a uniform-illumination source, while the ambient-light photons sensed per pixel remain essentially unchanged; this improves the light efficiency and maximum ranging distance of the ToF camera as well as the signal-to-noise ratio of the received sensing signal, yielding better measurement accuracy.
  • Figure 1 is a schematic diagram of the ranging of the ToF ranging method.
  • Figure 2 is a schematic diagram of the structure and ranging principle used in a ToF camera.
  • FIG. 3 is a schematic structural view of a phase locked CCD.
  • Figure 4 is a schematic diagram of the calculation of the distance of the measured object.
  • Fig. 5(a) is a schematic diagram of the simulated ambient light power entering the lens as a function of distance for different lens focal lengths.
  • Fig. 5(b) is a schematic diagram of the simulated measurement accuracy as a function of distance for different numbers of ambient photoelectrons.
  • Figure 6 is a schematic diagram showing the relationship between the reflectance of an object and the number of incident photons.
  • Fig. 7(a) is a graph of the number of electrons generated in a pixel by signal light and ambient light as a function of distance under different light source powers.
  • Fig. 7(b) is a graph of the measurement error as a function of distance under different light source optical powers.
  • FIG. 8 is a schematic structural diagram of a ToF camera according to a preferred embodiment of the present invention.
  • Fig. 9 is a view showing the optical path of the modulating element shown in Fig. 8 for diffracting the light source light emitted from the light source.
  • Figure 10 is a schematic view showing the design method of the modulation element shown in Figure 8.
  • Fig. 11(a) is a graph showing the measurement accuracy of uniform illumination and dot illumination at different powers as a function of measurement distance when using 630 nm wavelength signal light.
  • Fig. 11(b) is a graph showing the measurement accuracy of uniform illumination and dot illumination at different powers as a function of measurement distance when using 1550 nm wavelength signal light.
  • Figure 13(a) is a graph of the measurement accuracy of uniform illumination and dot matrix illumination at different powers as a function of measured distance.
  • Fig. 13(b) is the corresponding graph after the number of effective pixels is reduced to 1/4 of the original.
  • Addressing the sensor-structure characteristics of ToF ranging technology, the invention proposes using a dot matrix illumination source instead of a uniform illumination source, improving the light efficiency of the ToF ranging method and extending the measurement distance of the ToF camera at the same optical power.
  • FIG. 8 is a schematic structural diagram of a ToF camera 100 according to a preferred embodiment of the present invention.
  • the ToF camera 100 includes a dot matrix light source 107, a light collecting lens system 102, and a sensor 101.
  • the dot matrix light source 107 is configured to emit a lattice-arranged illumination beam to the target object 103
  • The light-receiving lens system 102 is configured to receive the lattice-arranged illumination beams reflected after irradiating the target object 103 and to image them on the sensor 101, which senses them to obtain depth image information of the target object 103. The sensor 101 includes a plurality of pixels 101a in one-to-one correspondence with the lattice-arranged illumination beams, each pixel 101a sensing a corresponding illumination beam.
  • the lattice source 107 includes a light source 104, a collimating lens 106, and a modulating element 105.
  • The light source 104 is a light-emitting diode or laser source for emitting source light.
  • the modulating element 105 is operative to modulate the source light into the array of illumination beams.
  • the collimating lens 106 is disposed between the light source 104 and the modulating element 105 for collimating the light source light emitted by the light source 104 to the modulating element 105.
  • the modulation element 105 is a diffractive optical element. Please refer to FIG. 9.
  • FIG. 9 is a schematic diagram of an optical path of the modulation element 105 for diffracting the light source light emitted by the light source 104.
  • The modulating element 105 can include a plurality of diffraction units 105a, each for converting received source light into an illumination beam that is reflected by the target object 103 and projected through the light-receiving lens system 102 onto a corresponding pixel 101a of the sensor 101.
  • The sensor 101 is a ToF chip, which may be a phase-locked CCD or a CMOS photosensitive array. The light-receiving lens system 102 may include a light-receiving lens whose optical center is in line with the center of the target object 103 and the center of the sensor 101. In an embodiment, to describe the optical path of the ToF camera 100 and the design principle of the diffractive optical element more clearly, the target object 103 may be taken to lie in the far field of the light source 104 as seen through the light-receiving lens system 102, i.e., treated as a far-field plane 103a.
  • the pixel 101a includes an effective photosensitive area 101b, and the sensor further includes a non-photosensitive area 101c.
  • the non-photosensitive area 101c may be located between the effective photosensitive areas 101b of two adjacent pixels 101a.
  • An illumination beam corresponding to a pixel 101a covers its effective photosensitive area 101b; that is, the spot size of the illumination beam matches the effective photosensitive area, so that the spot just covers the effective photosensitive area 101b.
  • The function of the light-receiving lens system 102 is to convert the spatial angular distribution of the illumination beams of the dot matrix light source 107 reflected from the far-field plane 103a into a spatial position distribution on the sensor 101: any point on the far-field plane 103a maps to a corresponding point on the sensor 101.
  • Conversely, any effective photosensitive area 101b on the sensor 101 corresponds uniquely to an area on the far-field plane 103a (for example, the first area 103b on the far-field plane 103a).
  • The effective photosensitive areas of the sensor 101 thus receive the light field signal formed by the far-field plane illuminated by the dot matrix light source 107. This light field signal consists of the lattice-arranged illumination beams incident on the sensor 101 via the far-field plane 103a and the light-receiving lens system 102, and from the beam signals sensed by the sensor 101 the ToF camera obtains the depth image information of the far-field plane 103a.
  • the spatial angle corresponding to the center of each diffraction unit 105a can be regarded as equal to the spatial angle corresponding to the center of the corresponding pixel 101a.
  • The spatial angle corresponding to the center of each pixel (i, j) comprises the angle θ^c_{i,j} between the pixel's direction and the optical axis in the vertical plane and the corresponding angle in the horizontal plane, where i and j are the row and column indices of pixel (i, j) in the pixel lattice of the sensor 101, FOV_v is the vertical field of view, FOV_h is the horizontal field of view, and N_v and N_h are the numbers of pixels in the vertical and horizontal directions (Formulas 11 and 12, shown only as images in the source). The half-angles of the spatial angle subtended by the effective photosensitive area satisfy Formulas 13 and 14 (also images in the source), where:
  • d_h and d_v are the horizontal and vertical dimensions of the effective photosensitive area 101b, and
  • d_1 is the distance between the sensor 101 and the optical center of the light-receiving lens system 102.
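  • The equation images for Formulas 11–14 did not survive in this text. From the definitions above, a plausible reconstruction (an assumption, not the patent's verbatim formulas) is

$$\theta^{c}_{i,j} = \frac{2i - N_v - 1}{2 N_v}\,\mathrm{FOV}_v, \qquad \varphi^{c}_{i,j} = \frac{2j - N_h - 1}{2 N_h}\,\mathrm{FOV}_h$$

for the vertical and horizontal angles of the center of pixel (i, j) from the optical axis, and

$$\Delta\theta = \arctan\frac{d_v}{2 d_1}, \qquad \Delta\varphi = \arctan\frac{d_h}{2 d_1}$$

for the half-angles of the spatial angle subtended by the effective photosensitive area.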
  • To make the lattice of illumination beams projected by the dot matrix light source 107 onto the far-field plane 103a coincide with the projection pattern of the sensor 101 on the far-field plane 103a through the light-receiving lens system 102, the present invention further provides a method of designing a diffractive optical element (i.e., the modulating element 105) for the dot matrix light source of the ToF camera 100, suited to the given sensor 101 and light-receiving lens system 102. Because the distance between the light source 104 and the sensor 101 is negligible compared with the detection distance, the projection pattern of the sensor 101 on the far-field plane 103a can be treated as the spatial solid-angle distribution of the effective photosensitive areas 101b.
  • As noted, the spatial angle at the center of each diffraction unit 105a can be taken as equal to that at the center of the corresponding pixel 101a. Therefore, in the design method, the spatial angle corresponding to the center of each pixel 101a is computed and assigned to the center of the corresponding diffraction unit 105a, from which the phase distribution of the diffractive optical element is obtained; the spatial angular distribution of the pixel centers is determined by the size of the sensor 101, the effective photosensitive areas 101b, and the focal length of the light-receiving lens system 102.
  • the design method includes the following steps S1-S4.
  • In step S1, the distribution of the effective photosensitive areas 101b of the sensor 101 of the ToF camera is calculated. The parameters of the sensor 101 are known, so the effective photosensitive area map can be determined from the arrangement of the pixels 101a and the position, size, and shape of the effective photosensitive area 101b within each pixel 101a.
  • In step S2, the typical plane distance of the far-field plane 103a is set. This is the distance d2 from the far-field plane 103a to the light-receiving lens system 102; it can be regarded as the detection distance from the ToF camera 100 to the far-field plane 103a and is related to the camera's capabilities.
  • In step S3, the projection pattern distribution of the far-field plane 103a on the sensor 101 is calculated. This includes the spatial angular distribution corresponding to the effective photosensitive areas 101b (shown as θ in FIG. 8 and given by Formulas 11–14) and the spatial angular interval between pixels 101a (θ1 in FIG. 8).
  • In step S4, the phase map of the diffractive optical element is calculated. Based on the spatial angular distribution of the pixel centers of the sensor 101 and the design angle θ2 between the optical axis of the light source 104 and that of the light-receiving lens system 102, diffractive optical design and phase-hologram generation methods can be used to produce the phase-delay distribution of the diffractive optical element. In the design, θ2 may be taken as approximately 0 (the light source and lens are close together relative to the detection distance), so the spatial angular distribution projected by the sensor 101 (i.e., the angles corresponding to the pixel centers) can be used directly as the spatial angular distribution of the light emitted by the diffractive optical element (as in FIG. 8 and Formulas 11–14). A code sketch of steps S1–S4 follows.
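  • The patent invokes "diffractive optical design and phase hologram" methods without specifying an algorithm. Below is a minimal sketch of steps S1–S4 under a Fraunhofer (far-field) DOE model, using the Gerchberg–Saxton algorithm as the phase-hologram generator; the pixel counts, fields of view, and grid size are illustrative assumptions.

```python
import numpy as np

# --- S1: effective photosensitive area distribution of the sensor ---
N_v, N_h = 24, 65                    # pixel counts (vertical, horizontal)

# --- S2: typical distance of the far-field plane ---
d2 = 30.0                            # meters; under the far-field approximation the
                                     # angular pattern itself is distance-independent

# --- S3: projection pattern of the far-field plane / sensor ---
# Target far-field intensity: one spot per pixel center on an angular grid spanning
# the fields of view (spot-size shaping to match the effective area is omitted).
FOV_v, FOV_h = np.deg2rad(15.0), np.deg2rad(40.0)
M = 512                              # DOE / far-field grid size
target = np.zeros((M, M))
for i in range(1, N_v + 1):
    for j in range(1, N_h + 1):
        th = (2*i - N_v - 1) / (2*N_v) * FOV_v    # pixel-center angles (theta2 ~ 0)
        ph = (2*j - N_h - 1) / (2*N_h) * FOV_h
        r = int(round((th / FOV_v + 0.5) * (M - 1)))
        c = int(round((ph / FOV_h + 0.5) * (M - 1)))
        target[r, c] = 1.0

# --- S4: DOE phase map via Gerchberg-Saxton iterations ---
amp_in = np.ones((M, M))             # collimated, uniform input beam amplitude
amp_out = np.sqrt(target)
rng = np.random.default_rng(0)
phase = 2 * np.pi * rng.random((M, M))
for _ in range(50):
    far = np.fft.fftshift(np.fft.fft2(amp_in * np.exp(1j * phase)))
    far = amp_out * np.exp(1j * np.angle(far))   # impose the target amplitude
    near = np.fft.ifft2(np.fft.ifftshift(far))
    phase = np.angle(near)                       # keep phase, reset amplitude

print("DOE phase map computed:", phase.shape)    # values in [-pi, pi]
```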
  • In summary, the ToF camera 100 of the present invention uses the dot matrix light source 107 with the pixels 101a corresponding one-to-one to the lattice-arranged illumination beams, so each pixel 101a of the sensor 101 senses its corresponding beam, which provides reliable sensing results and increases the reliability of the ToF camera 100. At the same power, the number of photons sensed by each pixel 101a increases relative to a uniform-illumination source, while the ambient-light photons sensed by each pixel remain essentially unchanged; this improves the light efficiency and maximum ranging distance of the ToF camera 100 as well as the signal-to-noise ratio of the received sensing signal, yielding better measurement accuracy.
  • The present invention also provides a projection system for projecting a display image. The projection system includes a ToF camera and has a gesture recognition function, implemented according to the sensing signal of the ToF camera; the ToF camera adopts the ToF camera 100 of the above embodiment.
  • The present invention also provides a robot, which may be an automated guided robot. The robot includes a ToF camera used to sense external objects; the ToF camera adopts the ToF camera 100 of the above embodiment.
  • The present invention also provides a driving system, which may be an automated driving system comprising a driving device (such as a self-driving car) and a ToF camera mountable on the driving device; the ToF camera adopts the ToF camera 100 of the above embodiment.
  • The ToF camera 100 of the above embodiments, and the ToF cameras of the modified embodiments, can also be used in electronic devices such as mobile phones, computers, and unmanned aerial vehicles, and are not limited to the above-mentioned projection systems, robots, and driving systems.
  • When uniform illumination and dot matrix illumination (i.e., illumination by the lattice-arranged beams provided by the dot matrix light source) have the same power, the number of photons detected under dot matrix illumination is increased by a factor of 10 relative to uniform illumination; a sketch of the resulting accuracy gain follows.
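  • A minimal sketch of the accuracy gain, using the lock-in error model ΔL ∝ √B / A discussed earlier with B = A/2 plus ambient electrons; the ×10 concentration factor is from the text, while the absolute electron counts are illustrative:

```python
import math

def rel_error(A, B_ambient):
    """Relative ranging error ~ sqrt(B)/A, with B = A/2 + ambient electrons."""
    B = A / 2 + B_ambient
    return math.sqrt(B) / A

A_uniform, ambient = 1000.0, 500.0
A_dot = 10 * A_uniform            # same source power, 10x signal photons per pixel
gain = rel_error(A_uniform, ambient) / rel_error(A_dot, ambient)
print(f"accuracy improvement: {gain:.1f}x")  # ~4.3x with these numbers: the signal
                                             # grows tenfold while ambient stays put
```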
  • a ToF camera using uniform illumination and a lattice illumination source was simulated to observe a curve of measurement accuracy as a function of measurement distance at different powers.
  • A sensor 101 with 65 × 24 resolution is used in the simulation, the target object has a reflectivity of 35%, and two simulations are performed for beams of wavelengths 630 nm and 1550 nm.
  • Fig. 11(a) and Fig. 11(b) show the measurement accuracy of uniform illumination and dot matrix illumination at different powers as a function of measurement distance, using 630 nm and 1550 nm signal light respectively. The comparison shows that a dot matrix illumination source needs only one tenth of the optical power of a uniform illumination source to achieve the same accuracy.
  • The 1550 nm source yields higher accuracy, which is caused by differences in the transmittance spectrum for electromagnetic waves of different wavelengths.
  • the ToF camera has a high scanning speed and a high precision in a short range, and can be applied to an AGV robot.
  • AGV robot applications are characterized by slow moving speed and a small operating range (depots, factory areas, etc.); 5 MHz modulation with a 30 m ranging range can meet their ranging requirements.
  • the measurement distance in the simulation is within 30 meters.
  • Fig. 13(a) is a graph of the measurement accuracy of uniform illumination and dot matrix illumination at different powers as a function of measured distance; Fig. 13(b) is the same after the number of effective pixels is reduced to 1/4 of the original.
  • The 7 W dot matrix illumination achieves an accuracy of 3% at 30 m, with a sampling rate of 2.3M points/s (320 × 240 pixels × 30 fps).
  • In the reduced-effective-pixel case of Fig. 13(b), the 7 W dot matrix illumination achieves an accuracy of 1% at 30 m.
  • Although the required optical power limits the ToF camera's application to long-distance ranging, the simulation results show that with dot matrix illumination it achieves 1% to 3% accuracy within 30 m, enough to meet the requirements of low-speed AGVs. Specifically, at 3% accuracy a TI ToF chip can reach a 2 Mpps sampling rate, and at 1% accuracy 700 kpps; this performance exceeds that of most AGV laser radars on the market.
  • When the ToF camera is used for long-distance ranging, meeting the error requirement demands very high optical power, which weakens its cost and volume advantages. The ToF camera can therefore be applied to driving systems such as automated driving systems, but it is better suited to AGV robots.

Abstract

A ToF camera (100) comprises a dot-matrix light source (107), a light-receiving lens system (102), and a sensor (101). The dot-matrix light source emits illuminating beams arranged in a dot matrix toward a target object (103); the light-receiving lens system receives the dot-matrix-arranged illuminating beams reflected from the target object so as to image the target object on the sensor; and the sensor senses the illuminating beams imaged onto it by the light-receiving lens system so as to obtain depth image information of the target object. The sensor comprises multiple pixels (101a) in one-to-one correspondence with the dot-matrix-arranged illuminating beams, each pixel sensing a corresponding illuminating beam. Also disclosed are a projection system having the ToF camera, a robot, a driving system, an electronic device, and a design method for a diffractive optical element for the dot-matrix light source of the ToF camera.

Description

ToF Camera and Design Method of Diffractive Optical Element

Technical Field

The present invention relates to a ToF camera, a projection system having the ToF camera, a robot, a driving system and an electronic device, and a method of designing a diffractive optical element for a dot matrix light source of the ToF camera.

Background Art

It is often necessary to measure the depth information of a target object in applications such as gesture recognition and 3D imaging in driving systems (such as automated driving systems), projection systems, or electronic devices. Commonly used depth measurement methods include triangulation, structured light measurement, and ToF ranging, each with its own advantages and disadvantages. Triangulation is insensitive to ambient light and has a simple hardware structure, but because multiple cameras are used, inter-camera synchronization makes image analysis costly and the product hard to miniaturize. Structured light measurement has high spatial resolution and low computational cost, but its results are strongly affected by ambient light and measurements of high-speed moving objects have large errors. Compared with these two, the ToF ranging method has a simple structure and a small product volume, measures high-speed moving objects well, and is little affected by ambient light; its ranging principle is shown in Figure 1. Considering measurement accuracy, triangulation is generally used for medium and long distances, while structured light measurement and ToF ranging are used for short distances.

Specifically, ToF ranging methods fall into two categories. The first measures the echo time of a pulse signal; single-point rangefinders and scanning laser radars generally adopt this approach, and because the light energy is concentrated they measure from a few meters to several kilometers. The second is the phase detection method, which measures distance using the phase shift of a modulation signal carried on a continuous optical signal. The distance measured this way is limited by the modulation frequency, typically a few hundred kilohertz to several tens of megahertz, and the effective measurement distance decreases as the modulation frequency increases. The ToF camera adopts the latter ranging principle; its structure is shown in Fig. 2. The system consists of a light source 101, an object 102 to be measured, a lens 103, a ToF depth sensor 104, and a hardware control and processing circuit 105. The amplitude of the light emitted by the light source 101 is modulated by a sine or square wave before being emitted toward the object 102. The lens 103 collects the reflected signal light and passes it to the ToF depth image sensor 104, which is a phase-locked CCD or CMOS photosensitive array. Figure 3 is a schematic view of the structure of a phase-locked CCD; the area labeled Light Opening is the effective photosensitive area, and light incident elsewhere does not affect the result. Such phase-locked CCD or CMOS devices dynamically control the electric field distribution in the device, causing the electrons of photon-excited hole-electron pairs to drift to specific regions and accumulate in the electron traps there. Owing to device limitations, each image frame requires a long integration time, so high-speed sampling of rapidly modulated optical signals (a few hundred kilohertz to several tens of megahertz) cannot be achieved directly. However, a phase-locked CCD or CMOS device can rapidly switch the electric field distribution at a frequency synchronized with the modulation frequency, so that energy within the same phase range of the modulated signal accumulates in the same electron trap, thereby sampling the modulated signal at different phase points.
The calculation principle of the distance of the measured object is shown in Fig. 4. The phase difference of the reflected light relative to the signal light emitted by the laser satisfies Formula 1.

[Formula 1: equation image in source]
Here A is the amplitude of the received signal and B is its DC component, proportional to the number of received signal photons and the number of ambient/noise photons, respectively. A and B satisfy Formulas 2 and 3, respectively:
[Formulas 2 and 3: equation images in source]
Further, the distance of the measured object satisfies Formula 4:

[Formula 4: equation image in source]
The error sources of ToF camera ranging are mainly ambient light, quantum noise, and device noise. Ambient illuminance does not change with distance, so its effect on the measurement result is independent of distance but depends on the light-receiving angle. Quantum noise, also known as shot noise, is determined by the quantum nature of the photoelectric effect and does not change with the environment, while device noise includes reset noise, flicker noise, amplifier current noise, and dark current noise; noise intensity increases with temperature. The size of the measurement error limits the ToF camera's ranging distance, and reducing the error is equivalent to extending the distance the camera can measure. Since quantum noise varies with neither the device nor the environment, it determines the maximum accuracy achievable in the ideal case.

Since the ToF camera computes distance from the phase difference between the incident and reflected light signals, the error ΔL satisfies Formula 5:

[Formula 5: equation image in source]

Ideally, the ambient light is much weaker than the signal light, so B can be approximated as A/2. Substituting this into Formula 5 yields Formula 6:

[Formula 6: equation image in source]

From Formula 6 it can be inferred that the larger A is, the smaller the measurement error. However, because each pixel of the ToF depth image sensor can accumulate only a limited number of electrons, increasing A after the pixel saturates no longer reduces the error and wastes source power, so the value of A is bounded. With a pixel saturation electron count of 100,000, the error can be controlled to 0.04% of the maximum range.
Under the constraint of a bounded A, the way to further reduce the measurement error is to reduce the influence of ambient light. Because ambient illuminance does not change with distance, its effect is independent of distance but depends on the light-receiving angle; the angle of view can therefore be reduced by increasing the focal length of the light-receiving lens, weakening the influence of ambient light.

Fig. 5(a) shows the simulated ambient light power entering the lens as a function of distance for different lens focal lengths, and Fig. 5(b) shows the simulated measurement accuracy as a function of distance for different numbers of ambient photoelectrons. The results show that, at constant ambient illuminance, narrowing the angle of view from 50 degrees to 15 degrees cuts the ambient light to 5% of its original level, quadrupling the measurement distance without affecting accuracy; moreover, the fewer electrons ambient light generates, the higher the measurement accuracy.
Further, the influence of device noise on the measurement result (i.e., the error ΔL) is expressed by Formula 7:

[Formula 7: equation image in source]

where N_pseudo is the number of noise electrons introduced by the device.

If the laser is a non-ideal demodulated light source, expressing A and B in terms of the free electrons they introduce gives Formulas 8, 9, and 10:

B_eff = N_ambient + N_pseudo + PE_opt   (Formula 8);

A = C_mod C_demod PE_opt   (Formula 9);

[Formula 10: equation image in source]
In a specific measurement application, the device parameters and environmental influences are essentially constant, and the influence of device noise, and hence the error ΔL, can be reduced in the following three ways:

1) Cooling reduces device noise;

2) Increasing the number of received electrons weakens the influence of device noise. There are two approaches: the first is to enlarge the aperture; without changing the number of photo-converted electrons, the aperture must double for every doubling of distance, at the cost of increased volume and cost. The second is to increase the optical power, which requires four times the power for every doubling of distance;

3) Since a light source adapted for long range saturates the device at close range, dynamic optical power can reduce such errors.
Assume the following device and light source parameters: source power 700 mW, beam divergence angle 50 degrees, pixel photosensitive area 12.5 × 14.5 μm², laser wavelength 630 nm, quantum efficiency 65%, integration sampling time 25 ms, lens efficiency 0.35, and lens aperture 2.6 mm; the number of incident photons and the relationship between error and environmental variables are then simulated. Figure 6 shows the relationship between object reflectivity and the number of incident photons: the reflectivity of the object and the number of photons received by the sensor are linearly related. On the other hand, to suppress ambient light, the lens light-receiving angle matters: simulations show that reducing it from 50 degrees to 15 degrees cuts the ambient photon count to 5% of the original, greatly improving accuracy and extending the measurement distance. Fig. 7 shows the simulated long-range results after the error is reduced by setting the lens angle to 15°, with simulation parameters F = 1.5 MHz and object reflectivity 0.5. A photon-budget sketch using these parameters follows.
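A hedged photon-budget sketch using the parameter set above, assuming a Lambertian target filling the field of view under uniform flood illumination; the patent does not spell out its radiometric model or pixel count, so the 320 × 240 resolution is borrowed from the AGV discussion below and the result is an order-of-magnitude estimate only:

```python
import math

P, rho = 0.7, 0.5               # source power (W), object reflectivity
lam, QE = 630e-9, 0.65          # wavelength (m), quantum efficiency
t_int, eta_lens = 25e-3, 0.35   # integration time (s), lens efficiency
D_ap = 2.6e-3                   # lens aperture diameter (m)
n_pix = 320 * 240               # pixel count (assumed)
h, c = 6.626e-34, 3.0e8         # Planck constant, speed of light

def signal_electrons(r_m):
    """Photo-electrons per pixel per integration at distance r_m (meters)."""
    A_ap = math.pi * (D_ap / 2) ** 2
    p_rx = rho * P * A_ap / (math.pi * r_m ** 2)  # power collected by the lens
    p_pix = eta_lens * p_rx / n_pix               # per pixel (fill factor ~ 1)
    return p_pix * t_int * QE * lam / (h * c)

for r in (5, 10, 30):
    print(f"{r:>3} m: {signal_electrons(r):,.0f} e-")
```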
Fig. 7(a) shows how the number of electrons generated in a pixel by signal light and ambient light varies with distance under different light source powers; Fig. 7(b) shows how the measurement error varies with distance under different light source optical powers. The simulation results show that long-distance measurement requires very large light source power to guarantee measurement accuracy. A difference-frequency method using two high-frequency light sources can increase the maximum unambiguous range of the phase method, but theoretical analysis shows that it does not increase measurement accuracy.
Considering that the effective photosensitive area of the pixel structure on a ToF depth image sensor is very small relative to the total pixel area, generally 6%–15% of it, a further way to increase measurement accuracy is to improve the optical efficiency of the light source's signal light. One sensor-side approach is to add a microlens to each pixel to concentrate incident light onto the photosensitive region; the problem is that this raises the intensity of ambient light along with the signal light, which works against accuracy in strong-light environments.
Summary of the Invention

In view of the above technical problems, it is necessary to provide a ToF camera, a projection system having the ToF camera, a robot, a driving system, an electronic device, and a method of designing a diffractive optical element for the dot matrix light source of the ToF camera.
A ToF camera comprises a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; and the sensor senses the lattice-arranged illumination beams imaged onto it by the light-receiving lens system to obtain depth image information of the target object. The sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
A projection system comprises a ToF camera, the ToF camera comprising a dot matrix light source, a light-receiving lens system, and a sensor. The dot matrix light source emits illumination beams arranged in a lattice toward a target object; the light-receiving lens system receives the lattice-arranged illumination beams reflected from the target object and images the target object on the sensor; the sensor senses the lattice-arranged illumination beams imaged onto it to obtain depth image information of the target object; and the sensor comprises a plurality of pixels in one-to-one correspondence with the lattice-arranged illumination beams, each pixel sensing a corresponding illumination beam.
一种机器人,其包括ToF相机,所述ToF相机包括点阵光源、收光镜头系统及感测器,所述点阵光源用于发出点阵排列的照明光束至目标物体,所述收光镜头系统用于接收所述点阵排列的照明光束照射至目标物体而反射的点阵排列的照明光束以将所述目标物体成像在所述感测器上,所述感测器用于感测所述收光镜头系统成像至所述感测器上的所述点阵排列的照明光束以获得所述目标物体的深度图像信息,其中,所述感测器包括多个像素,所述像素与所述点阵排列的照明光束一一对应,每个像素用于感测对应的一照明光束。A robot comprising a ToF camera, the ToF camera comprising a lattice light source, a light receiving lens system and a sensor, the dot matrix light source for emitting a lattice array of illumination beams to a target object, the light receiving lens The system is configured to receive an illumination beam of the lattice array of illuminated illumination beams that are illuminated to the target object to image the target object on the sensor, the sensor for sensing the a light receiving lens system imaging the illumination beam of the lattice arrangement on the sensor to obtain depth image information of the target object, wherein the sensor comprises a plurality of pixels, the pixel The illumination beams arranged in a lattice are in one-to-one correspondence, and each pixel is used to sense a corresponding illumination beam.
In one embodiment, the robot is an automated guided robot.
A driving system includes driving equipment and a ToF camera. The ToF camera includes a dot-matrix light source, a light-receiving lens system, and a sensor. The dot-matrix light source is configured to emit illumination beams arranged in a dot matrix toward a target object. The light-receiving lens system is configured to receive the dot-matrix illumination beams reflected by the target object so as to image the target object on the sensor. The sensor is configured to sense the dot-matrix illumination beams imaged onto it by the light-receiving lens system so as to obtain depth image information of the target object. The sensor includes a plurality of pixels in one-to-one correspondence with the dot-matrix illumination beams, and each pixel is configured to sense its corresponding illumination beam.
In one embodiment, the driving system is an automated driving system.
An electronic device includes a ToF camera. The ToF camera includes a dot-matrix light source, a light-receiving lens system, and a sensor. The dot-matrix light source is configured to emit illumination beams arranged in a dot matrix toward a target object. The light-receiving lens system is configured to receive the dot-matrix illumination beams reflected by the target object so as to image the target object on the sensor. The sensor is configured to sense the dot-matrix illumination beams imaged onto it by the light-receiving lens system so as to obtain depth image information of the target object. The sensor includes a plurality of pixels in one-to-one correspondence with the dot-matrix illumination beams, and each pixel is configured to sense its corresponding illumination beam.
In one embodiment, the electronic device is a mobile phone, a computer, or an unmanned aerial vehicle.
A method for designing a diffractive optical element for a dot-matrix light source of a ToF camera includes the following steps:
calculating the effective photosensitive area distribution of the sensor of the ToF camera;
setting a typical plane distance of the far-field plane;
calculating the projection pattern distribution of the far-field plane on the sensor;
calculating the phase map of the diffractive optical element.
Compared with the prior art, because the ToF camera of the present invention uses a dot-matrix light source and places the pixels in one-to-one correspondence with the dot-matrix illumination beams, each pixel of the sensor can sense its corresponding illumination beam, providing reliable sensing results and improving the reliability of the ToF camera. Meanwhile, at the same power, the number of photons sensed by each pixel increases relative to a uniformly illuminating light source, while the number of photons generated by ambient light at each pixel remains essentially unchanged. This not only improves the light efficiency and maximum ranging distance of the ToF camera, but also improves the signal-to-noise ratio of the received sensing signal, yielding better measurement accuracy.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the ranging principle of the ToF ranging method.
Figure 2 illustrates the ranging principle used by a ToF camera.
Figure 3 is a schematic structural view of a phase-locked CCD.
Figure 4 is a schematic diagram of the calculation of the distance to the measured object.
Figure 5(a) shows the simulated ambient light power entering the lens as a function of distance for different lens focal lengths.
Figure 5(b) shows the simulated measurement accuracy as a function of distance for different numbers of ambient photoelectrons.
Figure 6 shows the relationship between the reflectance of an object and the number of incident photons.
Figure 7(a) shows the number of electrons generated in a pixel by signal light and ambient light as a function of distance for different light source powers.
Figure 7(b) shows the measurement error as a function of distance for different light source optical powers.
Figure 8 is a schematic structural diagram of a ToF camera according to a preferred embodiment of the present invention.
Figure 9 is a schematic diagram of the optical path in which the modulation element shown in Figure 8 diffracts the light emitted by the light source.
Figure 10 is a schematic diagram of the design method of the modulation element shown in Figure 8.
Figure 11(a) is a graph of the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance, using 630 nm signal light.
Figure 11(b) is a graph of the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance, using 1550 nm signal light.
Figure 12(a) is a graph of the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance, using 630 nm signal light.
Figure 12(b) is a graph of the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance, using 1550 nm signal light.
Figure 13(a) is a graph of the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance.
Figure 13(b) is the same graph after the number of effective pixels is reduced to 1/4 of the original.
Description of main reference numerals
ToF camera 100
Dot-matrix light source 107
Light-receiving lens system 102
Target object 103
Far-field plane 103a
First region 103b
Sensor 101
Light source 104
Collimating lens 106
Diffractive optical element 105
Pixel 101a
The present invention is further described in the following detailed embodiments in conjunction with the above drawings.
Detailed Description
In view of the characteristics of the sensor structure used in ToF ranging, the present invention proposes replacing a uniform illumination source with a dot-matrix illumination source, which improves the light efficiency of the ToF ranging method and increases the measurement distance of a ToF camera at the same optical power.
Specifically, referring to Figure 8, Figure 8 is a schematic structural diagram of a ToF camera 100 according to a preferred embodiment of the present invention. The ToF camera 100 includes a dot-matrix light source 107, a light-receiving lens system 102, and a sensor 101. The dot-matrix light source 107 is configured to emit illumination beams arranged in a dot matrix toward a target object 103. The light-receiving lens system 102 is configured to receive the dot-matrix illumination beams reflected by the target object 103 so as to image the target object on the sensor 101. The sensor 101 is configured to sense the dot-matrix illumination beams imaged onto it by the light-receiving lens system 102 so as to obtain depth image information of the target object 103. The sensor 101 includes a plurality of pixels 101a in one-to-one correspondence with the dot-matrix illumination beams, and each pixel 101a is configured to sense its corresponding illumination beam.
Specifically, the dot-matrix light source 107 includes a light source 104, a collimating lens 106, and a modulation element 105. The light source 104 is a light-emitting diode or a laser light source for emitting source light, and the modulation element 105 is configured to modulate the source light into the dot-matrix illumination beams. The collimating lens 106 is disposed between the light source 104 and the modulation element 105, and collimates the source light emitted by the light source 104 before providing it to the modulation element 105.
In this embodiment, the modulation element 105 is a diffractive optical element. Referring to Figure 9, Figure 9 is a schematic diagram of the optical path in which the modulation element 105 diffracts the source light emitted by the light source 104. The modulation element 105 may include a plurality of diffraction units 105a; each diffraction unit converts the received source light into one illumination beam, which is reflected by the target object 103 and projected through the light-receiving lens system 102 onto a corresponding pixel 101a of the sensor 101.
Further, in this embodiment, the sensor 101 is a ToF chip, which may be a phase-locked CCD or a CMOS photosensitive array. The light-receiving lens system 102 may include a light-receiving lens, and its optical center may be collinear with the center of the target object 103 and the center of the sensor 101. In one embodiment, the target object 103 may lie on a plane in the far field of the light source 104 as formed through the light-receiving lens system 102; in other words, the target object 103 is taken to be a far-field plane 103a located in the far field, so as to describe more clearly the optical path of the ToF camera 100 and the design principle of its diffractive optical element.
Specifically, each pixel 101a includes an effective photosensitive area 101b, and the sensor further includes non-photosensitive areas 101c. A non-photosensitive area 101c may lie between the effective photosensitive areas 101b of two adjacent pixels 101a. The illumination beam corresponding to a pixel 101a covers its effective photosensitive area 101b; that is, the spot size of the illumination beam matches the effective photosensitive area, so that the spot just covers the effective photosensitive area 101b and nothing more.
The function of the light-receiving lens system 102 is to convert the spatial angular distribution of the illumination beams of the dot-matrix light source 107 reflected by the far-field plane 103a into a spatial position distribution on the sensor 101. Any point on the far-field plane 103a maps to a corresponding point on the sensor 101. Through the light-receiving lens system 102, every effective photosensitive area 101b on the sensor 101 uniquely corresponds to one region on the far-field plane 103a (for example, the first region 103b on the far-field plane 103a denotes such a corresponding region). In this way, the effective photosensitive areas of the sensor 101 receive the light field signal formed by the far-field plane under illumination by the dot-matrix light source 107. The light field signal is the beam signal of the dot-matrix illumination beams incident on the sensor 101 via the far-field plane 103a and the light-receiving lens system 102, and from the beam signal sensed by the sensor 101 the ToF camera obtains the depth image information of the far-field plane 103a.
As shown in Figure 8, since the distance between the light source 104 and the sensor 101 is negligible, the spatial angle corresponding to the center of each diffraction unit 105a can be regarded as equal to the spatial angle corresponding to the center of the corresponding pixel 101a. The spatial angle corresponding to the center of each pixel (i, j) comprises the angle θ^c_{i,j} between the vertical direction of pixel (i, j) and the optical axis, and the corresponding angle between the horizontal direction of pixel (i, j) and the optical axis; both angles are given by formulas that survive in this text only as image references (PCTCN2018113875-appb-000010 to -000012),
where i and j denote the horizontal row number and vertical column number of pixel (i, j) in the pixel array of the sensor 101 (that is, pixel (i, j) is the pixel in the i-th row and j-th column of the sensor 101), FOV_v is the vertical field-of-view angle, FOV_h is the horizontal field-of-view angle, N_v is the number of pixels in the vertical direction, and N_h is the number of pixels in the horizontal direction.
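The formulas for these center angles are not recoverable from this extraction. As a hedged reconstruction only, assuming a pinhole model with the optical axis through the array center, a uniform angular pitch per pixel, and φ^c_{i,j} as a stand-in symbol for the horizontal angle, they would take a form such as:

    θ^c_{i,j} = (i - (N_v + 1)/2) · FOV_v / N_v
    φ^c_{i,j} = (j - (N_h + 1)/2) · FOV_h / N_h

This sketch is consistent with the variable definitions above but is not the patent's verbatim formulas (11)-(14).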
Further, for pixel (i, j), the spatial-angle half-widths corresponding to the effective photosensitive area in the vertical and horizontal directions satisfy formulas that survive in this text only as image references (PCTCN2018113875-appb-000013 to -000016),
where d_v and d_h are the dimensions of the effective photosensitive area 101b in the vertical and horizontal directions, respectively, and d_1 is the distance between the sensor 101 and the optical center of the light-receiving lens system 102.
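These half-width formulas are likewise not recoverable from this extraction. As a hedged reconstruction only, assuming the half-width is simply the half-angle subtended by the effective photosensitive area at the optical center of the light-receiving lens system, with θ^Δ_{i,j} for the vertical half-width and φ^Δ_{i,j} (an assumed symbol) for the horizontal one, they would take a form such as:

    θ^Δ_{i,j} = arctan(d_v / (2 d_1))
    φ^Δ_{i,j} = arctan(d_h / (2 d_1))

This matches the stated dependence on d_v, d_h, and d_1, but it is a sketch, not the patent's verbatim formulas.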
According to the above analysis, the dot-matrix illumination beams formed by the dot-matrix light source 107 on the far-field plane 103a must coincide with the projection pattern of the sensor 101, through the light-receiving lens system 102, on the far-field plane 103a. The present invention therefore further provides a design method for the diffractive optical element (i.e., the modulation element 105) of the dot-matrix light source 107 of the ToF camera 100, that is, a method for obtaining the phase distribution of a diffractive optical element suited to the sensor 101 and the light-receiving lens system 102. The projection pattern of the sensor 101 on the far-field plane 103a is equivalent to the spatial solid-angle distribution of the effective photosensitive areas 101b. Since the distance between the light source 104 and the sensor 101 is negligible, the spatial angle corresponding to the center of each diffraction unit 105a can be regarded as equal to that of the corresponding pixel 101a; hence, in the design method, computing the spatial angle corresponding to the center of each pixel 101a yields the spatial angle corresponding to the center of each diffraction unit 105a, from which the phase distribution of the diffractive optical element is obtained. The spatial angular distribution of the pixel centers is determined by the size of the sensor 101, the pattern of the effective photosensitive areas 101b, and the focal length of the light-receiving lens system 102.
Specifically, based on the above principle and as shown in Figure 10, the design method includes the following steps S1-S4.
Step S1: calculate the distribution of the effective photosensitive areas 101b of the sensor 101 of the ToF camera. This distribution is known from the parameters of the sensor 101 in use; specifically, the effective-photosensitive-area map of the sensor 101 can be determined from the arrangement of the pixels 101a and from the position, size, and shape of the effective photosensitive area 101b within each pixel 101a.
Step S2: set the typical plane distance of the far-field plane 103a. The typical plane distance is the distance d2 from the far-field plane 103a to the light-receiving lens system 102; it can also be regarded as the detection distance of the ToF camera 100 to the far-field plane 103a, which depends on the capability of the ToF camera 100.
Step S3: calculate the projection pattern distribution of the far-field plane 103a on the sensor 101. From the focal length of the light-receiving lens system 102, the size of the sensor 101, the pixel pitch (e.g., the width of the non-photosensitive areas 101c), and the size of the effective photosensitive areas 101b, compute the spatial angular distribution corresponding to the effective photosensitive areas 101b (θ in Figure 8, given by formulas (11)-(14) above, not repeated here) and the spatial angular interval between pixels 101a (θ1 in Figure 8).
Step S4: calculate the phase map of the diffractive optical element. Specifically, from the spatial angular distribution of the centers of the pixels 101a of the sensor 101 and the design angle θ2 between the optical axis of the light source 104 and the optical axis of the light-receiving lens system 102, the phase delay distribution of the diffractive optical element can be computed using diffractive-optics design and phase-hologram generation methods, and the diffractive optical element designed accordingly. Moreover, since the distance between the light source 104 and the sensor 101 is negligible compared with the measured distance, θ2 can be approximated as 0 in the design; that is, the projected spatial angular distribution of the sensor 101 (the spatial angles corresponding to the pixel centers) is directly equivalent to the spatial angular distribution of the light emitted by the diffractive optical element (θ in Figure 8, given by formulas (11)-(14) above).
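Steps S1-S4 lend themselves to a short numerical sketch. The following Python sketch computes the pixel-center angle grid of step S3 under the pinhole-model reconstruction above, then approximates step S4 with a Gerchberg-Saxton iterative Fourier-transform loop, one standard phase-hologram generation method; the sensor resolution, fields of view, and grid size are illustrative assumptions, not values fixed by the patent, and the mapping of angles onto the FFT grid is schematic.

    import numpy as np

    # Illustrative parameters (assumptions, not values fixed by the patent)
    N_v, N_h = 24, 65                              # vertical x horizontal pixel counts
    FOV_v, FOV_h = np.deg2rad(18.0), np.deg2rad(50.0)

    # Step S3: spatial angle of each pixel center (pinhole model, axis at array center)
    i = np.arange(1, N_v + 1)[:, None]
    j = np.arange(1, N_h + 1)[None, :]
    theta_c = (i - (N_v + 1) / 2) * FOV_v / N_v    # vertical angles, one per row
    phi_c = (j - (N_h + 1) / 2) * FOV_h / N_h      # horizontal angles, one per column

    # Step S4: target far field = one spot per pixel center; recover a phase-only
    # DOE map with Gerchberg-Saxton iterations (theta2 approximated as 0).
    M = 512                                        # DOE phase-map resolution
    target = np.zeros((M, M))
    rows = np.clip((theta_c / FOV_v + 0.5) * (M - 1), 0, M - 1).astype(int)
    cols = np.clip((phi_c / FOV_h + 0.5) * (M - 1), 0, M - 1).astype(int)
    target[rows, cols] = 1.0                       # broadcast: all (row, col) pairs
    target /= np.sqrt((target ** 2).sum())

    rng = np.random.default_rng(0)
    phase = 2 * np.pi * rng.random((M, M))         # random initial phase
    for _ in range(50):
        far = np.fft.fft2(np.exp(1j * phase))      # propagate: DOE plane -> far field
        far = target * np.exp(1j * np.angle(far))  # impose the target amplitude
        phase = np.angle(np.fft.ifft2(far))        # keep phase only (phase-only DOE)

    print("DOE phase map:", phase.shape)           # values in [-pi, pi]

In a real design the far-field spots would be shaped to the half-widths rather than placed on single grid points, and the FFT grid would be matched to the DOE pixel pitch and the wavelength; the loop above only illustrates the structure of the computation.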
Compared with the prior art, because the ToF camera 100 of the present invention uses a dot-matrix light source 107 and places the pixels 101a in one-to-one correspondence with the dot-matrix illumination beams, each pixel 101a of the sensor 101 can sense its corresponding illumination beam, providing reliable sensing results and improving the reliability of the ToF camera 100. Meanwhile, at the same power, the number of photons sensed by each pixel 101a increases relative to a uniformly illuminating light source, while the number of photons generated by ambient light at each pixel 101a remains essentially unchanged. This not only improves the light efficiency and maximum ranging distance of the ToF camera 100, but also improves the signal-to-noise ratio of the received sensing signal, yielding better measurement accuracy.
The present invention also provides a projection system for projecting display images. The projection system has a ToF camera and a gesture recognition function, which it implements from the sensing signal of the ToF camera; the ToF camera is the ToF camera 100 of the above embodiment.
The present invention also provides a robot, which may be an automated guided robot. The robot includes a ToF camera with which it senses external objects; the ToF camera is the ToF camera 100 of the above embodiment.
The present invention also provides a driving system, which may be an automated driving system. It includes driving equipment (such as a self-driving car) and a ToF camera that can be mounted on the driving equipment; the ToF camera is the ToF camera of the above embodiment.
In addition, it can be understood that the ToF camera 100 of the above embodiment and the ToF cameras of its variant embodiments can also be used in electronic devices such as mobile phones, computers, and unmanned aerial vehicles, and are not limited to the above projection system, robot, and driving system.
Further, experiments verify that, assuming an illumination efficiency of 70% for the light source, when uniform illumination and dot-matrix illumination (i.e., illumination by the dot-matrix illumination beams provided by the dot-matrix light source) have the same power, the number of photons detected with dot-matrix illumination is 10 times that with uniform illumination. Specifically, to verify the effect of the dot-matrix light source, ToF cameras using uniform illumination and dot-matrix illumination sources were simulated, and the measurement accuracy was observed as a function of measurement distance at different powers. The simulation used a sensor 101 with 65 × 24 resolution and a target reflectance of 35%, and was run twice, for 630 nm and 1550 nm beams.
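The factor of 10 can be sanity-checked with a back-of-envelope calculation. The sketch below assumes (the patent does not state this) that under uniform illumination only the pixel fill factor f of the transmitted light lands on the effective photosensitive areas, while the dot-matrix source concentrates all transmitted light (70% efficiency) onto them and leaves ambient light unchanged:

    # Hedged sanity check of the 10x photon gain; the fill factor is inferred
    eta = 0.70      # illumination efficiency stated in the text
    gain = 10.0     # photon gain reported in the text
    f = eta / gain  # implied effective active-area fill factor
    print(f"implied fill factor: {f:.0%}")  # prints: implied fill factor: 7%

An implied fill factor of about 7% is plausible for a lock-in ToF pixel, but it is an inference from the two quoted numbers, not a figure given in the source.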
The simulation results are shown in Figure 11 for the sensor 101 (the ToF chip) with 65 × 24 resolution, i.e., a 65 × 24 array of pixels. Figure 11(a) plots the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance for 630 nm signal light; Figure 11(b) does the same for 1550 nm signal light. The comparison shows that, to reach the same accuracy as a uniform illumination source, the dot-matrix illumination source needs only one tenth of the optical power. In addition, ranging accuracy is higher with the 1550 nm source, owing to the difference in atmospheric transmittance spectra across electromagnetic wavelengths.
In addition, the resolution of the ToF camera 100 also affects the measurement error. To verify the influence of chip resolution, the above simulation was repeated with another sensor of 320 × 240 resolution; the results are shown in Figure 12. Figure 12(a) plots the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance with 630 nm signal light; Figure 12(b) does the same with 1550 nm signal light.
Comparing the simulation results leads to the following conclusions:
1. As the number of pixels increases, the required optical power increases proportionally;
2. For long-distance ranging with a high-resolution ToF camera, increasing the optical power and using a dot-matrix source cannot meet the sensor requirements over a 100 m range;
3. At 65 × 24 resolution, accuracy within 1 m can be achieved at a distance of 100 m.
Summarizing the above analysis, two conclusions can be drawn:
First, compared with scanning LiDAR, the ToF camera scans quickly and has high accuracy at short range, and is therefore well suited to AGV robots.
AGV robot applications are characterized by slow movement and a limited operating range (warehouses, factory floors, etc.); 5 MHz modulation with a 30 m ranging range meets the ranging requirements. To verify the ranging accuracy of the ToF camera under these conditions, the measurement distance in the simulation was limited to 30 m; the other simulation parameters were: Cmod = 1, (λ) = 0.65, klens = 0.35, Tint = 33 ms, λ = 630 nm, D = 2.6 mm, ρ = 0.2, Aimage = 4.6 mm², Apixel = 3.75 μm², τ = 0.8, Npix = 320 × 280, lensAngle = 50°. The simulation results are shown in Figure 13: Figure 13(a) plots the measurement accuracy of uniform illumination and dot-matrix illumination at different powers as a function of measurement distance; Figure 13(b) shows the same after the number of effective pixels is reduced to 1/4 of the original.
From the simulation results shown in Figure 13, it can be observed that 7 W dot-matrix illumination achieves 3% accuracy at 30 m, at a sampling rate of 2.3M points/s (320 × 240 × 30 fps). Reducing the number of pixels on the CCD sensor to 1/4 of the original (fewer dot-matrix points, so more energy can be allocated to each point) allows 7 W dot-matrix illumination to achieve 1% accuracy at 30 m.
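The quoted rate checks out arithmetically if the source's "30ps" is read as 30 frames per second, which is an assumption:

    # 320 x 240 pixels at an assumed 30 frames per second
    print(320 * 240 * 30)  # 2304000, i.e. about 2.3M points/s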
In summary, although the principle of the ToF camera limits its application in long-distance ranging, the simulation results show that with dot-matrix illumination an accuracy of 1% to 3% can be achieved within 30 m, which is sufficient for low-speed AGVs. The specific figures are: at 3% accuracy, a TI ToF chip can reach a rate of 2 Mpps; at 1%, 700 kpps. This performance already exceeds most AGV LiDARs on the market.
Second, when the ToF camera is used for long-distance ranging, the optical power needed to meet the error requirement is very high, which weakens its cost and size advantages. The ToF camera can therefore be applied to driving systems such as automated driving systems, but it remains better suited to AGV robots.

Claims (17)

1. A ToF camera, wherein the ToF camera comprises a dot-matrix light source, a light-receiving lens system, and a sensor; the dot-matrix light source is configured to emit illumination beams arranged in a dot matrix toward a target object; the light-receiving lens system is configured to receive the dot-matrix illumination beams reflected by the target object so as to image the target object on the sensor; the sensor is configured to sense the dot-matrix illumination beams imaged onto it by the light-receiving lens system so as to obtain depth image information of the target object; wherein the sensor comprises a plurality of pixels in one-to-one correspondence with the dot-matrix illumination beams, and each pixel is configured to sense its corresponding illumination beam.
2. The ToF camera according to claim 1, wherein the dot-matrix light source comprises a light source and a modulation element, the light source emits source light, and the modulation element is configured to modulate the source light into the dot-matrix illumination beams.
3. The ToF camera according to claim 2, wherein the modulation element is a diffractive optical element comprising a plurality of diffraction units, each diffraction unit converting the received source light into one illumination beam and projecting the illumination beam through the light-receiving lens system onto a corresponding pixel of the sensor.
4. The ToF camera according to claim 3, wherein the spatial angle corresponding to the center of each diffraction unit is equal to the spatial angle corresponding to the center of the corresponding pixel.
5. The ToF camera according to claim 2, wherein the light source is a light-emitting diode or a laser light source.
6. The ToF camera according to claim 2, wherein the dot-matrix light source further comprises a collimating lens disposed between the light source and the modulation element and configured to collimate the source light emitted by the light source before providing it to the modulation element.
7. The ToF camera according to any one of claims 1 to 6, wherein each pixel comprises an effective photosensitive area, and the illumination beam sensed by each pixel covers the effective photosensitive area of that pixel.
8. The ToF camera according to claim 7, wherein the spatial angle corresponding to the center of each pixel (i, j) comprises the angle θ^c_{i,j} between the vertical direction of pixel (i, j) and the optical axis and the angle between the horizontal direction of pixel (i, j) and the optical axis, the two angles being given by formulas that survive in this text only as image references (PCTCN2018113875-appb-100001 to -100003),
where i and j denote the horizontal row number and vertical column number of the pixel in the pixel array of the sensor, FOV_v is the vertical field-of-view angle, FOV_h is the horizontal field-of-view angle, N_v is the number of pixels in the vertical direction, and N_h is the number of pixels in the horizontal direction.
9. The ToF camera according to claim 8, wherein the spatial-angle half-width θ^Δ_{i,j} (vertical) and the corresponding horizontal half-width of the effective photosensitive area of each pixel (i, j) satisfy formulas that survive in this text only as image references (PCTCN2018113875-appb-100004 to -100006),
where d_v and d_h are the dimensions of the effective photosensitive area in the vertical and horizontal directions, respectively, and d_1 is the distance between the sensor and the optical center of the light-receiving lens system.
10. A projection system, wherein the projection system comprises a ToF camera, the ToF camera being the ToF camera according to any one of claims 1 to 9.
11. A robot, wherein the robot comprises a ToF camera, the ToF camera being the ToF camera according to any one of claims 1 to 9.
12. A driving system, wherein the driving system comprises driving equipment and a ToF camera, the ToF camera being the ToF camera according to any one of claims 1 to 9.
13. An electronic device, wherein the electronic device comprises a ToF camera, the ToF camera being the ToF camera according to any one of claims 1 to 9.
14. A method for designing a diffractive optical element for a dot-matrix light source of a ToF camera, wherein the method comprises the following steps:
calculating the effective photosensitive area distribution of the sensor of the ToF camera;
setting a typical plane distance of the far-field plane;
calculating the projection pattern distribution of the far-field plane on the sensor;
calculating the phase map of the diffractive optical element.
15. The design method according to claim 14, wherein the diffractive optical element comprises a plurality of diffraction units, each diffraction unit converting the received source light into one illumination beam and projecting the illumination beam through the light-receiving lens system onto a corresponding pixel of the sensor.
16. The design method according to claim 15, wherein the phase map of the diffractive optical element comprises the spatial angle corresponding to the center of each diffraction unit, the spatial angle corresponding to the center of each diffraction unit being equal to the spatial angle corresponding to the center of the corresponding pixel; the spatial angle corresponding to the center of each pixel (i, j) comprises the angle θ^c_{i,j} between the vertical direction of pixel (i, j) and the optical axis and the angle between the horizontal direction of pixel (i, j) and the optical axis, the two angles respectively satisfying formulas that survive in this text only as image references (PCTCN2018113875-appb-100007 to -100009),
where i and j denote the horizontal row number and vertical column number of the pixel in the pixel array of the sensor, FOV_v is the vertical field-of-view angle, FOV_h is the horizontal field-of-view angle, N_v is the number of pixels in the vertical direction, and N_h is the number of pixels in the horizontal direction.
17. The design method according to claim 16, wherein each pixel comprises an effective photosensitive area, the illumination beam sensed by each pixel is imaged onto the effective photosensitive area of the pixel, and the spatial-angle half-width θ^Δ_{i,j} (vertical) and the corresponding horizontal half-width of the effective photosensitive area satisfy formulas that survive in this text only as image references (PCTCN2018113875-appb-100010 to -100012),
where d_v and d_h are the dimensions of the effective photosensitive area in the vertical and horizontal directions, respectively, and d_1 is the distance between the sensor and the optical center of the light-receiving lens system.
PCT/CN2018/113875 2018-03-19 2018-11-05 Tof camera and design method for diffractive optical element WO2019179123A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810225284.2 2018-03-19
CN201810225284.2A CN110285788B (en) 2018-03-19 2018-03-19 ToF camera and design method of diffractive optical element

Publications (1)

Publication Number Publication Date
WO2019179123A1 true WO2019179123A1 (en) 2019-09-26

Family

ID=67988208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113875 WO2019179123A1 (en) 2018-03-19 2018-11-05 Tof camera and design method for diffractive optical element

Country Status (2)

Country Link
CN (1) CN110285788B (en)
WO (1) WO2019179123A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113281767A (en) * 2021-07-19 2021-08-20 上海思岚科技有限公司 Narrow-window coaxial single-line laser scanning range finder

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020047248A1 (en) * 2018-08-29 2020-03-05 Sense Photonics, Inc. Glare mitigation in lidar applications
US11019276B1 (en) 2019-11-14 2021-05-25 Hand Held Products, Inc. Apparatuses and methodologies for flicker control
CN111025321B (en) * 2019-12-28 2022-05-27 奥比中光科技集团股份有限公司 Variable-focus depth measuring device and measuring method
CN114615397B (en) * 2020-12-09 2023-06-30 华为技术有限公司 TOF device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002031516A (en) * 2000-07-18 2002-01-31 Asahi Optical Co Ltd Three-dimensional image input device
DE102016219515A1 (en) * 2015-10-30 2017-05-04 pmdtechnologies ag Time of flight camera system
CN107003391A (en) * 2014-11-21 2017-08-01 微软技术许可有限责任公司 Many patterned illumination optics of time-of-flight system
KR20170108347A (en) * 2016-03-17 2017-09-27 (주)미래컴퍼니 Diffractive optical element and optical system
US20170366713A1 (en) * 2013-12-05 2017-12-21 Samsung Electronics Co., Ltd. Camera for measuring depth image and method of measuring depth image using the same

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5808800A (en) * 1994-12-22 1998-09-15 Displaytech, Inc. Optics arrangements including light source arrangements for an active matrix liquid crystal image generator
JPH09160364A (en) * 1995-12-12 1997-06-20 Ricoh Co Ltd Image forming device
EP2772676B1 (en) * 2011-05-18 2015-07-08 Sick Ag 3D camera and method for three dimensional surveillance of a surveillance area
US9696424B2 (en) * 2014-05-19 2017-07-04 Rockwell Automation Technologies, Inc. Optical area monitoring with spot matrix illumination
KR20160069806A (en) * 2014-12-09 2016-06-17 한화테크윈 주식회사 Distance measuring apparatus and distance measuring method
CN104483105B (en) * 2014-12-25 2017-07-18 中国科学院半导体研究所 A kind of pixel-level fusion detecting system and method
CN106093911A (en) * 2016-07-25 2016-11-09 北京理工大学 A kind of dot matrix emitting-receiving system for Non-scanning mode laser imaging
CN206946179U (en) * 2017-07-11 2018-01-30 深圳市光峰光电技术有限公司 Light supply apparatus and optical projection system
CN107515402A (en) * 2017-08-21 2017-12-26 东莞市迈科新能源有限公司 A kind of TOF three-dimensionals range-measurement system


Also Published As

Publication number Publication date
CN110285788A (en) 2019-09-27
CN110285788B (en) 2022-08-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18911235; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18911235; Country of ref document: EP; Kind code of ref document: A1)