CN113075691A - TOF depth sensing module and image generation method - Google Patents


Info

Publication number: CN113075691A
Application number: CN202010007047.6A
Authority: CN (China)
Prior art keywords: light, sensing module, laser, depth sensing, light source
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 高少锐 (Gao Shaorui), 吴巨帅 (Wu Jushuai), 宋小刚 (Song Xiaogang), 邱孟 (Qiu Meng)
Original and current assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.

Classifications

    • G01S17/10: Systems determining position data of a target for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S17/89: Lidar systems specially adapted for mapping or imaging
    • G01S7/4802: Analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/4817: Constructional features, e.g. arrangements of optical elements, relating to scanning
    • G01S7/484: Details of pulse systems; transmitters
    • G01S7/4863: Circuits for detection, sampling, integration or read-out; detector arrays, e.g. charge-transfer gates
    • G02B27/0944: Beam shaping using diffractive optical elements, e.g. gratings, holograms
    • G02B27/0955: Beam shaping using refractive optical elements; lenses
    • G02B27/28: Optical systems or apparatus for polarising

Abstract

The application provides a TOF depth sensing module and an image generation method. The TOF depth sensing module includes a laser light source, an optical element, a beam selection device, a receiving unit, and a control unit, where the optical element is arranged in the direction of the beam emitted by the laser light source. The laser light source is configured to generate a laser beam; the optical element is configured to control the direction of the laser beam to obtain a first emergent beam and a second emergent beam; and the control unit is configured to control the beam selection device to propagate the third reflected beam and the fourth reflected beam, respectively, to the receiving unit. The heat loss of the TOF depth sensing module can thereby be reduced.

Description

TOF depth sensing module and image generation method
Technical Field
The present application relates to the field of TOF technology, and more particularly, to a TOF depth sensing module and an image generation method.
Background
The time of flight (TOF) technique is a commonly used depth or distance measurement technique. Its basic principle is that continuous light or pulsed light is emitted from a transmitting end, is reflected after striking an object to be measured, and the reflected light is then received by a receiving end. The distance or depth from the object to the TOF system can then be calculated from the time of flight of the light from the transmitting end to the receiving end.
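With the round-trip flight time in hand, the distance follows directly from the speed of light. A minimal sketch of this calculation (the function name is illustrative, not from the application):

```python
# Speed of light in vacuum, m/s.
SPEED_OF_LIGHT = 299_792_458.0

def tof_to_distance(time_of_flight_s: float) -> float:
    """Convert a round-trip time of flight (seconds) to target distance (meters).

    The emitted light travels to the target and back, so the one-way
    distance is half of the total path length covered in the flight time.
    """
    return SPEED_OF_LIGHT * time_of_flight_s / 2.0

# Example: a 10 ns round trip corresponds to roughly 1.5 m.
distance_m = tof_to_distance(10e-9)
```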
Because liquid crystal devices have excellent polarization and phase adjustment capabilities, they are widely used in TOF depth sensing modules to deflect light beams. However, due to the birefringence of the liquid crystal material, a TOF depth sensing module using a liquid crystal device generally adds a polarizer at the transmitting end so that polarized light is emitted. Owing to the polarization selection effect of the polarizer, half of the energy is lost when the beam exits; the lost energy is absorbed or scattered by the polarizer and converted into heat, which raises the temperature of the TOF depth sensing module and affects its stability.
Disclosure of Invention
The application provides a TOF depth sensing module and an image generation method, to reduce the heat loss of the TOF depth sensing module and improve its signal-to-noise ratio.
In a first aspect, a TOF depth sensing module is provided, which includes a laser light source, an optical element, a light beam selection device, a receiving unit, and a control unit, wherein the optical element is disposed in a direction in which the laser light source emits a light beam.
The functions of each module or unit in the TOF depth sensing module are specifically as follows:
The laser light source is used for generating a laser beam;
the optical element is used for controlling the direction of the laser beam to obtain a first emergent beam and a second emergent beam;
the control unit is used for controlling the light beam selection device to respectively transmit the third reflected light beam and the fourth reflected light beam to the receiving unit.
The emergent direction of the first emergent beam is different from that of the second emergent beam, the first emergent beam and the second emergent beam are both beams in a single polarization state, and the polarization direction of the first emergent beam is orthogonal to that of the second emergent beam; the third reflected beam is a beam reflected by the target object from the first outgoing beam and the fourth reflected beam is a beam reflected by the target object from the second outgoing beam.
The polarization states of the first emergent beam and the second emergent beam may be left-handed circular polarization and right-handed circular polarization, respectively. Alternatively, the first emergent beam and the second emergent beam may be linearly polarized in the horizontal direction and in the vertical direction, respectively.
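In standard Jones-calculus notation (a textbook optics convention, not notation used in the application), both of these pairs are orthogonal polarization bases:

```latex
\mathbf{e}_{L} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ i\end{pmatrix},\quad
\mathbf{e}_{R} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -i\end{pmatrix},\qquad
\mathbf{e}_{L}^{\dagger}\,\mathbf{e}_{R} = 0;
\qquad
\mathbf{e}_{H} = \begin{pmatrix}1\\ 0\end{pmatrix},\quad
\mathbf{e}_{V} = \begin{pmatrix}0\\ 1\end{pmatrix},\qquad
\mathbf{e}_{H}^{\dagger}\,\mathbf{e}_{V} = 0.
```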
In particular, the control unit may be adapted to control the beam selection device to propagate the third reflected beam and the fourth reflected beam to the receiving unit during different time intervals, respectively. That is, the control unit may control the beam selection device to propagate the third reflected beam and the fourth reflected beam to the receiving unit at different times, respectively.
Optionally, the control unit is configured to control a birefringence parameter of an optical element to obtain an adjusted birefringence parameter, and the optical element is configured to adjust a direction of the laser beam based on the adjusted birefringence parameter to obtain the first outgoing beam and the second outgoing beam.
Optionally, the first outgoing beam and the second outgoing beam are obtained at the same time.
The third reflected beam is a beam reflected by the target object from the first outgoing beam from the optical element, and the fourth reflected beam is a beam reflected by the target object from the second outgoing beam from the optical element.
The receiving unit may include a receiving lens and a sensor, and the receiving lens may converge the reflected light beam to the sensor, so that the sensor can receive the reflected light beam, further obtain a time at which the reflected light beam is received by the receiving unit, obtain a TOF corresponding to the outgoing light beam, and finally generate a depth map of the target object according to the TOF corresponding to the outgoing light beam.
Specifically, the receiving lens may converge the third reflected light beam and the fourth reflected light beam to the sensor, and acquire a time when the third reflected light beam and the fourth reflected light beam are received by the receiving unit through the sensor, so as to obtain TOF corresponding to the first outgoing light beam and the second outgoing light beam, and finally may generate a first depth map of the target object with the TOF corresponding to the first outgoing light beam, and generate a second depth map of the target object according to the TOF corresponding to the second outgoing light beam.
The TOF corresponding to the first outgoing light beam may specifically refer to time difference information between the emission time of the first outgoing light beam and the receiving time of the third reflected light beam; the TOF corresponding to the second outgoing light beam may specifically refer to time difference information between the emitting time of the second outgoing light beam and the receiving time of the fourth reflected light beam.
In the above process, the laser beam generated by the laser light source may include a plurality of polarization states. After the processing of the optical element, a first emergent light beam and a second emergent light beam with orthogonal polarization directions can be obtained.
For example, the laser beam includes left-handed circular polarization, right-handed circular polarization, and linear polarization, and then, after processing by the optical element, a first exit beam having a left-handed circular polarization state and a second exit beam having a right-handed circular polarization state can be obtained.
That the emergent directions of the first emergent beam and the second emergent beam differ may specifically mean that the azimuth angle of the first emergent beam differs from the azimuth angle of the second emergent beam, while the inclination angles of the two beams may be the same.
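With the inclination (tilt) angle θ measured from the optical axis and the azimuth φ measured in the transverse plane (a standard spherical-coordinate convention, not notation defined in the application), the two emergent directions can be written as unit vectors sharing the same θ but differing in azimuth:

```latex
\mathbf{d}_{1} = \left(\sin\theta\cos\varphi_{1},\ \sin\theta\sin\varphi_{1},\ \cos\theta\right),\qquad
\mathbf{d}_{2} = \left(\sin\theta\cos\varphi_{2},\ \sin\theta\sin\varphi_{2},\ \cos\theta\right),\qquad
\varphi_{1}\neq\varphi_{2}.
```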
In the embodiments of the application, because the transmitting end is not provided with a polarization filtering device, the beam emitted by the laser light source can reach the optical element with almost no loss (a polarization filtering device generally absorbs a large share of the light energy, producing a certain heat loss), so the heat loss of the TOF depth sensing module can be reduced.
In addition, because the beam selection device at the receiving end includes a polarizer, some of the ambient noise can be filtered out by that polarizer, reducing the noise of the system and improving its signal-to-noise ratio.
Under the control of the control unit, the beam selection device can propagate beams of different polarization states to the receiving unit at different times. Because the beam selection device propagates the received reflected beams to the receiving unit in this time-sharing manner, the receiving resolution of the receiving unit can be fully utilized, and the resolution of the resulting depth map is relatively high.
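The time-sharing readout can be illustrated with a minimal sketch (the values and names are illustrative, not from the application): each capture interval yields a full-resolution TOF frame for one polarization, which is then converted to a depth map.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_map_from_tof(tof_frame):
    """Convert a frame of per-pixel round-trip times (seconds) to depths (meters)."""
    return [[SPEED_OF_LIGHT * t / 2.0 for t in row] for row in tof_frame]

# Interval 0: the beam selection device passes only the third reflected beam;
# interval 1: only the fourth. Each frame therefore occupies the sensor's
# full resolution (illustrative 2x2 frames with made-up times).
tof_interval_0 = [[6.67e-9, 6.80e-9], [6.70e-9, 6.75e-9]]
tof_interval_1 = [[7.00e-9, 7.10e-9], [7.05e-9, 7.02e-9]]

first_depth_map = depth_map_from_tof(tof_interval_0)   # from the first emergent beam
second_depth_map = depth_map_from_tof(tof_interval_1)  # from the second emergent beam
```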
With reference to the first aspect, in certain implementations of the first aspect, an optical element includes: the device comprises a transverse polarization control sheet, a transverse liquid crystal polarization grating, a longitudinal polarization control sheet and a longitudinal liquid crystal polarization grating.
Optionally, in the above optical element, the distances from the laser light source to the transverse polarization control plate, the transverse liquid crystal polarization grating, the longitudinal polarization control plate, and the longitudinal liquid crystal polarization grating increase in that order.
Alternatively, in the above optical element, the distances from the laser light source to the longitudinal polarization control plate, the longitudinal liquid crystal polarization grating, the transverse polarization control plate, and the transverse liquid crystal polarization grating increase in that order.
With reference to the first aspect, in certain implementations of the first aspect, the beam selection device consists of a quarter-wave plate, a half-wave plate, and a polarizer.
With reference to the first aspect, in certain implementations of the first aspect, the TOF depth sensing module further includes: the collimating lens is arranged between the laser light source and the optical element and is used for collimating the laser beam; the optical element is used for controlling the direction of the light beam after the collimation treatment of the collimator lens so as to obtain a first emergent light beam and a second emergent light beam.
Collimating the beam with the collimating lens yields an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
With reference to the first aspect, in certain implementations of the first aspect, the clear aperture of the collimator lens is less than or equal to 5 mm.
Because the collimating lens is small, a TOF depth sensing module containing the collimating lens is relatively easy to integrate into a terminal device and can, to a certain extent, reduce the space occupied in the terminal device.
With reference to the first aspect, in certain implementations of the first aspect, the TOF depth sensing module further includes: the dodging device is arranged between the laser light source and the optical element and is used for adjusting the angular space intensity distribution of the laser beam; the optical element is used for controlling the direction of the light beam after the light homogenizing treatment of the light homogenizing device so as to obtain a first emergent light beam and a second emergent light beam.
Through the light homogenizing treatment, the optical power of the laser beam can be distributed more uniformly over angular space, or distributed according to a specific rule, preventing the local optical power from being too small and thereby avoiding blind spots in the resulting depth map of the target object.
With reference to the first aspect, in certain implementations of the first aspect, the light homogenizing device is a microlens diffuser or a diffractive optical diffuser.
With reference to the first aspect, in some implementations of the first aspect, the laser light source is a Vertical Cavity Surface Emitting Laser (VCSEL).
Optionally, the laser light source is a Fabry-Perot laser (FP laser for short).
Compared with a single VCSEL, a single FP laser can deliver higher power and has higher electro-optical conversion efficiency, which can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source is greater than 900 nm.
Because the intensity of light at wavelengths above 900 nm in sunlight is relatively weak, a laser beam with a wavelength greater than 900 nm helps reduce the interference caused by sunlight, which can further improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source is 940 nm or 1550 nm.
Because the intensity of light near 940 nm or 1550 nm in sunlight is relatively weak, a laser beam at 940 nm or 1550 nm greatly reduces the interference caused by sunlight and can thus improve the scanning effect of the TOF depth sensing module.
With reference to the first aspect, in certain implementations of the first aspect, the light emitting area of the laser light source is less than or equal to 5 × 5 mm².
Because the laser light source is small, a TOF depth sensing module containing the laser light source is relatively easy to integrate into a terminal device and can, to a certain extent, reduce the space occupied in the terminal device.
With reference to the first aspect, in certain implementations of the first aspect, the average output optical power of the TOF depth sensing module is less than 800 mW.
When the average output optical power of the TOF depth sensing module is less than or equal to 800 mW, the module's power consumption is small, making it convenient to place in terminal devices and other power-consumption-sensitive equipment.
In a second aspect, an image generation method is provided, where the image generation method is applied to a terminal device including the TOF depth sensing module in the first aspect, and the image generation method includes: controlling a laser light source to generate a laser beam; controlling the optical element to control the direction of the laser beam to obtain a first emergent beam and a second emergent beam; controlling the beam selection device to propagate the third reflected beam and the fourth reflected beam to the receiving unit at different time intervals, respectively; generating a first depth map of the target object according to the TOF corresponding to the first emergent light beam; and generating a second depth map of the target object according to the TOF corresponding to the second emergent light beam.
The emergent direction of the first emergent beam is different from that of the second emergent beam, the first emergent beam and the second emergent beam are both beams in a single polarization state, and the polarization direction of the first emergent beam is orthogonal to that of the second emergent beam; the third reflected beam is a beam reflected by the target object from the first outgoing beam and the fourth reflected beam is a beam reflected by the target object from the second outgoing beam.
Optionally, the first outgoing beam and the second outgoing beam are obtained at the same time.
Optionally, the method further includes: and acquiring the TOF corresponding to the first emergent light beam and the TOF corresponding to the second emergent light beam.
Optionally, the obtaining of the TOF corresponding to the first outgoing light beam and the TOF corresponding to the second outgoing light beam includes: determining the TOF corresponding to the first emergent light beam according to the emitting time of the first emergent light beam and the receiving time of the third reflected light beam; and determining the TOF corresponding to the second emergent light beam according to the emitting time of the second emergent light beam and the receiving time of the fourth reflected light beam.
The TOF corresponding to the first outgoing light beam may specifically refer to time difference information between the emission time of the first outgoing light beam and the receiving time of the third reflected light beam; the TOF corresponding to the second outgoing light beam may specifically refer to time difference information between the emitting time of the second outgoing light beam and the receiving time of the fourth reflected light beam.
Optionally, the image generating method further includes: and splicing or combining the first depth map and the second depth map to obtain a depth map of the target object.
It should be understood that in the above image generation method, a third depth map, a fourth depth map, and so on may also be generated in a similar manner, and then all the depth maps may be stitched or combined to obtain a final depth map of the target object.
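The stitching step can be illustrated with a minimal sketch (the merge rule shown, keeping the nearest valid reading where maps overlap, is one plausible choice, not a rule specified by the application):

```python
def merge_depth_maps(depth_maps, invalid=0.0):
    """Combine several same-sized depth maps into one.

    Pixels equal to `invalid` are treated as holes; where several maps
    carry a valid depth for the same pixel, the smallest (nearest)
    reading is kept.
    """
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    merged = [[invalid] * cols for _ in range(rows)]
    for dm in depth_maps:
        for r in range(rows):
            for c in range(cols):
                v = dm[r][c]
                if v == invalid:
                    continue
                if merged[r][c] == invalid or v < merged[r][c]:
                    merged[r][c] = v
    return merged

first = [[1.0, 0.0], [0.0, 1.2]]   # 0.0 marks pixels with no return
second = [[0.0, 2.0], [1.1, 1.5]]
combined = merge_depth_maps([first, second])
# Each map's holes are filled with the other map's readings.
```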
In the embodiments of the application, because the transmitting end is not provided with a polarization filtering device, the beam emitted by the laser light source can reach the optical element with almost no loss (a polarization filtering device generally absorbs a large share of the light energy, producing a certain heat loss), so the heat loss of the terminal device can be reduced.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a collimating lens, and the collimating lens is disposed between the laser light source and the optical element, and the image generating method further includes: utilizing a collimating lens to collimate the laser beam to obtain a collimated beam; the method for controlling the optical element to control the direction of the laser beam to obtain a first outgoing beam and a second outgoing beam includes: the control optical element controls the direction of the collimated light beam to obtain a first emergent light beam and a second emergent light beam.
Collimating the beam with the collimating lens yields an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a light uniformizing device disposed between the laser light source and the optical element, and the image generation method further includes: adjusting the energy distribution of the laser beam by using a light homogenizing device to obtain a light beam after light homogenizing treatment; the above-mentioned control optical element controls the direction of the laser beam, obtains the first outgoing beam and the second outgoing beam, includes: the control optical element controls the direction of the light beam after the dodging treatment so as to obtain a first emergent light beam and a second emergent light beam.
Through the light homogenizing treatment, the optical power of the laser beam can be distributed more uniformly over angular space, or distributed according to a specific rule, preventing the local optical power from being too small and thereby avoiding blind spots in the resulting depth map of the target object.
In a third aspect, a terminal device is provided, where the terminal device includes the TOF depth sensing module in the first aspect.
The terminal device of the above-described third aspect may perform the image generation method of the second aspect.
Drawings
FIG. 1 is a schematic diagram of lidar ranging;
FIG. 2 is a schematic diagram of a distance measurement using a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 3 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 4 is a schematic diagram of a VCSEL;
FIG. 5 is a schematic diagram of an array light source;
FIG. 6 is a schematic illustration of a beam splitter used to split a beam from an array light source;
FIG. 7 is a schematic diagram of a projection area obtained by splitting a beam emitted from an array light source with a beam splitter;
FIG. 8 is a schematic diagram of a projection area obtained by splitting a beam emitted from an array light source with a beam splitter;
FIG. 9 is a schematic diagram of a projection area obtained by splitting a beam emitted from an array light source with a beam splitter;
FIG. 10 is a schematic diagram of a projected area obtained by splitting a beam emitted from an array light source with a beam splitter;
FIG. 11 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 12 is a schematic view of a beam splitter performing a splitting process;
FIG. 13 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 14 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating operation of a TOF depth sensing module according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a light emitting region of an array light source;
FIG. 17 is a schematic illustration of the beam splitting process for the array source of FIG. 16 using a beam splitter;
FIG. 18 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 19 is a depth map of the target object at times t0 to t3;
FIG. 20 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 21 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 22 is a schematic flow chart of obtaining a final depth map of the target object in the first mode of operation;
FIG. 23 is a schematic flow chart of obtaining a final depth map of a target object in a first mode of operation;
FIG. 24 is a schematic flow chart of obtaining a final depth map of the target object in the second mode of operation;
FIG. 25 is a schematic flow chart of obtaining a final depth map of the target object in the second mode of operation;
FIG. 26 is a schematic diagram of a distance measurement using a TOF depth sensing module of an embodiment of the present application;
FIG. 27 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 28 is a schematic illustration of the spatial angle of the laser beam;
FIG. 29 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 30 is a schematic view of a TOF depth sensing module of an embodiment of the present application scanning a target object;
FIG. 31 is a schematic view of a scan trajectory of a TOF depth sensing module of an embodiment of the present application;
FIG. 32 is a schematic diagram of a scanning mode of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 33 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 34 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 35 is a schematic diagram of a liquid crystal polarization grating according to an embodiment of the present application;
FIG. 36 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 37 is a schematic diagram of changing the physical properties of a liquid crystal polarization grating by a periodic control signal;
FIG. 38 is a schematic illustration of a liquid crystal polarization grating controlling the direction of an input light beam;
FIG. 39 is a schematic illustration of a voltage signal applied to a liquid crystal polarization grating;
FIG. 40 is a schematic view of a scan trajectory of a TOF depth sensing module of an embodiment of the present application;
FIG. 41 is a schematic view of an area to be scanned;
FIG. 42 is a schematic view of an area to be scanned;
FIG. 43 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 44 is a schematic diagram of an electro-optic crystal controlling the direction of a light beam;
FIG. 45 is a schematic diagram of a voltage signal applied to an electro-optic crystal;
FIG. 46 is a schematic view of a scan trajectory of a TOF depth sensing module of an embodiment of the present application;
FIG. 47 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 48 is a schematic view of an acousto-optic device controlling the direction of a light beam;
FIG. 49 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 50 is a schematic diagram of an OPA device controlling the direction of a light beam;
FIG. 51 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 52 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 53 is a schematic diagram of a distance measurement using a TOF depth sensing module of an embodiment of the present application;
FIG. 54 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 55 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 56 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 57 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 58 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 59 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 60 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application;
FIG. 61 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 62 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 63 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 64 is a schematic structural view of a liquid crystal polarizing device of an embodiment of the present application;
FIG. 65 is a schematic diagram of a control sequence;
FIG. 66 is a timing diagram of voltage drive signals;
FIG. 67 is a schematic diagram of the scan region of the TOF depth sensing module at different times;
FIG. 68 is a schematic view of depth maps corresponding to the target object at times t0 to t3;
FIG. 69 is a schematic illustration of a final depth map of the target object;
FIG. 70 is a schematic view of a TOF depth sensing module of an embodiment of the present application in operation;
FIG. 71 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 72 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 73 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 74 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 75 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 76 is a schematic diagram of a TOF depth sensing module 500 according to an embodiment of the present application;
FIG. 77 is a schematic representation of the topography of a microlens diffuser;
FIG. 78 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 79 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 80 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 81 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 82 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present application;
FIG. 83 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 84 is a schematic block diagram of a TOF depth sensing module 600 according to an embodiment of the present disclosure;
FIG. 85 is a schematic diagram of a TOF depth sensing module 600 according to an embodiment of the present disclosure;
FIG. 86 is a schematic diagram of a TOF depth sensing module 600 according to an embodiment of the present application;
FIG. 87 is a schematic diagram of a polarizing filter receiving a polarized light beam;
FIG. 88 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 89 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 90 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 91 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 92 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 93 is a schematic diagram of a specific structure of a TOF depth sensing module according to an embodiment of the present disclosure;
FIG. 94 is a schematic diagram of drive signals and receive signals of a TOF depth sensing module of an embodiment of the present application;
FIG. 95 is a schematic view of the angles and states of light beams emitted by a TOF depth sensing module of an embodiment of the present application;
FIG. 96 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 97 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 98 is a schematic diagram of a TOF depth sensing module according to an embodiment of the present application;
FIG. 99 is a schematic diagram of a flat-panel liquid crystal cell for beam deflection;
FIG. 100 is a schematic diagram of a flat-panel liquid crystal cell for beam deflection;
FIG. 101 is a schematic flow chart diagram of an image generation method of an embodiment of the present application;
FIG. 102 is a schematic view of the FOV of the first beam;
FIG. 103 is a schematic diagram of FOVs covered by M outgoing beams in different directions.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of lidar ranging.
As shown in fig. 1, the transmitter of the laser radar emits a laser pulse (the pulse width may be on the order of nanoseconds to picoseconds), and a timer starts timing at the same moment. When the laser pulse reaches the target area, a reflected laser pulse is generated by reflection at the surface of the target area, and when the detector of the laser radar receives the reflected laser pulse, the timer stops timing, which yields the TOF. The distance from the laser radar to the target area can then be calculated from the TOF.
Specifically, the distance of the lidar from the target area may be determined according to equation (1).
L=c*T/2 (1)
In the above formula (1), L is the distance between the laser radar and the target area, c is the speed of light, and T is the round-trip propagation time of the light.
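As a minimal sketch of equation (1), the round-trip flight time can be converted to range as follows. The function name and the 10 ns example value are illustrative and not part of the patent:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_distance(t_flight_s: float) -> float:
    """L = c * T / 2: the pulse covers the range twice (out and back)."""
    return C * t_flight_s / 2.0

# A pulse returning after 10 ns corresponds to a target about 1.5 m away.
distance_m = tof_to_distance(10e-9)
```

The division by 2 accounts for the pulse travelling to the target and back before the detector stops the timer.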
It should be understood that, in the TOF depth sensing module according to the embodiments of this application, a light beam emitted from the laser light source is processed by other elements in the TOF depth sensing module (e.g., a collimating lens or a beam splitter) before it finally leaves the transmitting end. In this process, the light beam output by a certain element in the TOF depth sensing module may also be referred to as the light beam emitted by that element.
For example, the laser light source emits a light beam that exits after being collimated by the collimating lens; the light beam output by the collimating lens may in fact also be referred to as the light beam from the collimating lens. Here, the light beam "emitted by" the collimating lens does not mean a light beam generated by the collimating lens itself, but a light beam transmitted from the preceding element and output after processing.
In addition, in the present application, the light beam emitted from the laser light source or the array light source may also be referred to as a light beam from the laser light source or the array light source.
The TOF depth sensing module according to an embodiment of the present application is briefly described below with reference to fig. 2.
FIG. 2 is a schematic diagram of a distance measurement using a TOF depth sensing module according to an embodiment of the present disclosure.
As shown in fig. 2, the TOF depth sensing module may include a transmitting end (which may also be referred to as a projecting end), a receiving end and a control unit, where the transmitting end is configured to generate an outgoing light beam, the receiving end is configured to receive a reflected light beam of a target object (the reflected light beam is a light beam obtained by reflecting the outgoing light beam by the target object), and the control unit may control the transmitting end and the receiving end to transmit and receive the light beam respectively.
In fig. 2, the transmitting end may generally include a laser light source, a beam splitter, a collimating lens, and a projection lens (optional), and the receiving end may generally include a receiving lens and a sensor, which may be collectively referred to as a receiving unit.
In fig. 2, a timing device may be used to record TOF corresponding to the outgoing light beam to calculate a distance from the TOF depth sensing module to the target region, so as to obtain a final depth map of the target object. Here, the TOF corresponding to the outgoing light beam may refer to time difference information between the time at which the reflected light beam is received by the receiving unit and the outgoing time of the outgoing light beam.
The laser light source in fig. 2 may be an array light source.
The TOF depth sensing module of the embodiments of this application can be used for three-dimensional (3 dimensions, 3D) image acquisition. The TOF depth sensing module of the embodiments of this application may be arranged in an intelligent terminal (for example, a mobile phone, a tablet, or a wearable device) to acquire depth images or 3D images, and may also provide gesture and limb recognition for 3D games or motion-sensing games.
The TOF depth sensing module according to an embodiment of the present application is described in detail below with reference to fig. 3.
Fig. 3 is a schematic block diagram of a TOF depth sensing module in accordance with an embodiment of the present application.
The TOF depth sensing module 100 shown in fig. 3 includes an array light source 110, a collimating lens 120, a beam splitter 130, a receiving unit 140, and a control unit 150. These several modules or units in the TOF depth sensing module 100 are described in detail below.
Array light source 110:
the array light source 110 is used to generate (emit) a laser beam.
The array light source 110 includes N light emitting areas, each of which can independently generate a laser beam, where N is a positive integer greater than 1.
The control unit 150 is used for controlling M light emitting areas of the N light emitting areas of the array light source 110 to emit light.
The collimating lens 120 is configured to collimate the light beams emitted by the M light-emitting areas;
the beam splitter 130 is used for splitting the light beam collimated by the collimator lens;
the receiving unit 140 is used for receiving the reflected light beam of the target object.
Wherein M is less than or equal to N, M is a positive integer, and N is a positive integer greater than 1; the beam splitter is specifically configured to split each received beam of light into a plurality of beams of light; the reflected beam of the target object is a beam obtained by reflecting the beam from the beam splitter by the target object. The light beams emitted from the M light-emitting regions may also be referred to as light beams from the M light-emitting regions.
Since M is less than or equal to N, the control unit 150 may control some or all of the light emitting regions in the array light source 110 to emit light.
The N light emitting regions may be N independent light emitting regions, that is, each of the N light emitting regions may independently or individually emit light without being affected by other light emitting regions. For each of the N light-emitting regions, each light-emitting region is generally composed of a plurality of light-emitting units, and in the N light-emitting regions, different light-emitting regions are composed of different light-emitting units, that is, the same light-emitting unit belongs to only one light-emitting region. For each light emitting region, when the light emitting region is controlled to emit light, all the light emitting cells in the light emitting region may emit light.
The total number of the light emitting regions of the array light source may be N, and when M is equal to N, the control unit may control all the light emitting regions of the array light source to emit light simultaneously or in a time-division manner.
Optionally, the control unit is configured to control M light emitting areas of the N light emitting areas of the array light source to emit light simultaneously.
For example, the control unit may control M light emitting regions among the N light emitting regions of the array light source to emit light simultaneously at time T0.
Optionally, the control unit is configured to control M light emitting areas of the N light emitting areas of the array light source to emit light at M different times, respectively.
For example, the control unit may control the 3 light-emitting regions of the array light source to emit light at time T0, time T1 and time T2, respectively, that is, a first light-emitting region of the 3 light-emitting regions emits light at time T0, a second light-emitting region emits light at time T1, and a third light-emitting region emits light at time T2.
Optionally, the control unit is configured to control M light emitting areas of the N light emitting areas of the array light source to emit light at M0 different times, where M0 is a positive integer greater than 1 and smaller than M.
For example, when M is 3 and M0 is 2, the control unit may control 1 of the 3 light emitting regions of the array light source to emit light at time T0, and control the other 2 of the 3 light emitting regions of the array light source to emit light at time T1.
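The time-division control described above can be sketched as a simple scheduling routine. This is a hypothetical illustration: the `schedule_regions` helper and its round-robin policy are assumptions, not the patent's control logic; the exact grouping of regions per time slot is a design choice of the control unit.

```python
def schedule_regions(regions, num_slots):
    """Distribute M selected light-emitting regions over num_slots (M0) emission times."""
    slots = [[] for _ in range(num_slots)]
    for i, region in enumerate(regions):
        slots[i % num_slots].append(region)  # simple round-robin assignment
    return slots

# M = 3 regions distributed over M0 = 2 times, mirroring the example above:
slots = schedule_regions(["region1", "region2", "region3"], 2)
# time T0 lights ["region1", "region3"]; time T1 lights ["region2"]
```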
In the embodiments of this application, by controlling different light-emitting regions of the array light source to emit light in a time-division manner and controlling the beam splitter to split the light beams, the number of light beams emitted by the TOF depth sensing module within a period of time can be increased, so that higher spatial resolution and a higher frame rate can be achieved when scanning a target object.
Optionally, the light emitting area of the array light source 110 is less than or equal to 5 × 5 mm².
When the light emitting area of the array light source 110 is less than or equal to 5 × 5 mm², the array light source 110 occupies a small area; the space occupied by the TOF depth sensing module 100 can therefore be reduced, which makes it convenient to install the TOF depth sensing module 100 in a terminal device with relatively limited space.
Alternatively, the array light source 110 may be a semiconductor laser light source.
The array light source 110 may be a Vertical Cavity Surface Emitting Laser (VCSEL).
Fig. 5 is a schematic diagram of a VCSEL. As shown in fig. 5, the VCSEL includes a plurality of light-emitting points (the black dot regions in fig. 5), each of which can emit light under the control of the control unit.
Alternatively, the laser light source may be a Fabry-Perot laser (which may be abbreviated as an FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and has higher electro-optical conversion efficiency, so the scanning effect can be improved.
Alternatively, the wavelength of the laser beam emitted by the array light source 110 is greater than 900 nm.
Because the intensity of the components of sunlight above 900 nm is relatively weak, interference caused by sunlight is reduced when the wavelength of the laser beam is greater than 900 nm, which can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the array light source 110 is 940nm or 1550 nm.
Because the intensity of sunlight near 940 nm and 1550 nm is relatively weak, interference caused by sunlight can be greatly reduced when the wavelength of the laser beam is 940 nm or 1550 nm, which can improve the scanning effect of the TOF depth sensing module.
The following describes in detail a case where the array light source 110 includes a plurality of independent light emitting regions with reference to fig. 5.
As shown in fig. 5, the array light source 110 is composed of mutually independent light-emitting regions 111, 112, 113 and 114, a plurality of light-emitting units 1001 are provided in each region, the plurality of light-emitting units 1001 in each region are connected by a common electrode 1002, and the light-emitting units of different light-emitting regions are connected to different electrodes so that the different regions are mutually independent.
For the arrayed light source 110 shown in fig. 5, the independent light-emitting regions 111, 112, 113 and 114 can be controlled by the control unit 150 to emit light individually at different times. For example, the control unit 150 may control the light-emitting regions 111, 112, 113, and 114 to emit light at times t0, t1, t2, and t3, respectively.
Alternatively, the light beam collimated by the collimating lens 120 may be quasi-parallel light with a divergence angle smaller than 1 degree.
The collimating lens 120 may be composed of one or more lenses, and when the collimating lens 120 is composed of a plurality of lenses, the collimating lens 120 can effectively reduce the aberration generated in the collimating process.
The collimator lens 120 may be made of a plastic material, a glass material, or both a plastic material and a glass material. When the collimator lens 120 is made of a glass material, the collimator lens can reduce the influence of temperature on the back focal length of the collimator lens 120 in the process of collimating the light beam.
Specifically, since the thermal expansion coefficient of the glass material is small, when the glass material is used for the collimator lens 120, the influence of the temperature on the back focal length of the collimator lens 120 can be reduced.
Optionally, the clear aperture of the collimating lens 120 is less than or equal to 5 mm.
When the clear aperture of the collimating lens 120 is less than or equal to 5mm, the area of the collimating lens 120 is small, the space occupied by the TOF depth sensing module 100 can be reduced, and the TOF depth sensing module 100 can be conveniently installed in a terminal device with a relatively limited space.
As shown in fig. 3, the receiving unit 140 may include a receiving lens 141 and a sensor 142, the receiving lens 141 being configured to focus the reflected light beam to the sensor 142.
The sensor 142 may also be referred to as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor 142 is greater than or equal to P × Q, and the number of beams obtained by splitting the light beam emitted by one light emitting region of the array light source 110 by the beam splitter is P × Q, where P and Q are positive integers.
Since the resolution of the sensor is greater than or equal to the number of beams into which the beam splitter 130 splits the light from one light-emitting region of the array light source, the sensor 142 can receive the reflected beams obtained when the target object reflects the beams from the beam splitter, so that the TOF depth sensing module can receive the reflected beams normally.
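The resolution constraint can be expressed as a simple check. This is a hedged sketch (the function name and the example sensor resolution are illustrative): the sensor must provide at least P × Q pixels to resolve the P × Q beams split from one lit region.

```python
def sensor_covers_split_beams(sensor_res, split_order):
    """Both arguments are (rows, cols); True if every split beam can map to a pixel."""
    return sensor_res[0] >= split_order[0] and sensor_res[1] >= split_order[1]

# A 4 x 4 beam splitting order (as in the later example of fig. 6) is covered
# by any sensor with at least 4 x 4 pixels:
ok = sensor_covers_split_beams((240, 180), (4, 4))
```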
Alternatively, the beam splitter 130 may be a one-dimensional beam splitter or a two-dimensional beam splitter.
In practical application, the one-dimensional beam splitting device or the two-dimensional beam splitting device can be selected according to requirements.
Specifically, in practical applications, a one-dimensional or a two-dimensional beam splitter can be selected as needed: when the outgoing light beam only needs to be split in one dimension, a one-dimensional beam splitter can be used; when the outgoing light beam needs to be split in two dimensions, a two-dimensional beam splitter is required.
When the beam splitter 130 is a one-dimensional beam splitter, the beam splitter 130 may be a cylindrical lens array or a one-dimensional grating.
When the beam splitter 130 is a two-dimensional beam splitting device, the beam splitter 130 may specifically be a microlens array or a two-dimensional diffractive optical element (DOE).
The beam splitter 130 may be made of a resin material or a glass material, or may be made of both a resin material and a glass material.
When the beam splitter 130 includes a glass material, the influence of temperature variation on the performance of the beam splitter 130 can be effectively reduced, so that the beam splitter 130 maintains relatively stable performance. Specifically, as the temperature changes, the thermal expansion coefficient of glass is lower than that of resin; therefore, when the beam splitter 130 uses a glass material, its performance is relatively stable.
Alternatively, the area of the beam incident end surface of the beam splitter 130 is less than 5 × 5 mm².
When the area of the beam incident end surface of the beam splitter 130 is less than 5 × 5 mm², the beam splitter 130 occupies a small area; the space occupied by the TOF depth sensing module 100 can therefore be reduced, which makes it convenient to install the TOF depth sensing module 100 in a terminal device with relatively limited space.
Alternatively, the beam receiving surface of the beam splitter 130 is parallel to the beam emitting surface of the array light source 110.
When the light beam receiving surface of the light beam splitter 130 is parallel to the light beam emitting surface of the array light source 110, the light beam splitter 130 can be made to receive the light beam emitted from the array light source 110 more efficiently, and the efficiency of the light beam splitter 130 in receiving the light beam can be improved.
As shown in fig. 3, the receiving unit 140 may include a receiving lens 141 and a sensor 142. The manner in which the light beam is received by the receiving unit is described below in connection with specific examples.
For example, the array light source 110 includes 4 light emitting regions, and the receiving lens 141 may be respectively used to receive the reflected light beams 1, 2, 3 and 4 reflected by the target object from the light beams respectively generated by the beam splitter 130 at 4 different times (t4, t5, t6 and t7) and transmit the reflected light beams 1, 2, 3 and 4 to the sensor 142.
Alternatively, the receiving lens 141 may be composed of one or more lenses.
When the receiving lens 141 is composed of a plurality of lenses, aberration generated when the receiving lens 141 receives the light beam can be effectively reduced.
The receiving lens 141 may be made of a resin material or a glass material, or may be made of both a resin material and a glass material.
When the receiving lens 141 includes a glass material, the influence of temperature variation on the back focal length of the receiving lens 141 can be effectively reduced.
The sensor 142 may be configured to receive the light beam transmitted by the receiving lens 141 and perform photoelectric conversion on it, converting the optical signal into an electrical signal. This facilitates the subsequent calculation of the time difference between the emission of the light beam at the transmitting end and its reception at the receiving end (this time difference may be referred to as the flight time of the light beam); the distance between the target object and the TOF depth sensing module can then be calculated from this time difference, thereby obtaining a depth image of the target object.
The sensor 142 may be a single-photon avalanche diode (SPAD) array.
The SPAD is an avalanche photodiode operating in Geiger mode (with a bias voltage above the breakdown voltage); after a single photon is received, an avalanche occurs with a certain probability, and a pulse current signal is generated instantaneously, which can be used to detect the arrival time of the photon. Because the SPAD array used in the above TOF depth sensing module requires complex quenching circuits, timing circuits, and storage and reading units, the resolution of existing SPAD arrays used for TOF depth sensing is limited.
When the target object is far away from the TOF depth sensing module, the intensity of the reflected light from the target object that the receiving lens transmits to the sensor is generally very weak, so the sensor needs very high detection sensitivity. The SPAD has single-photon detection sensitivity and a response time on the order of picoseconds; therefore, using a SPAD as the sensor 142 in this application can improve the sensitivity of the TOF depth sensing module.
The control unit 150 may control the sensor 142 in addition to the array light source 110.
The control unit 150 may be electrically connected to the array light source 110 and the sensor 142 to control the array light source 110 and the sensor 142.
Specifically, the control unit 150 may control the operation mode of the sensor 142 so that, at M different times, the corresponding regions of the sensor can respectively receive the reflected beams produced when the target object reflects the light beams emitted by the corresponding light-emitting regions of the array light source 110.
Specifically, the part of the reflected light beam of the target object, which is located within the numerical aperture of the receiving lens, is received by the receiving lens and transmitted to the sensor, and through the design of the receiving lens, each pixel of the sensor can receive the reflected light beam of different areas of the target object.
In this application, by controlling the array light source to emit light region by region and using the beam splitter to split the light beams, the number of outgoing light beams of the TOF depth sensing module at the same moment can be increased, which can improve the spatial resolution and the frame rate of the finally obtained depth map of the target object.
It should be understood that, as shown in fig. 2, for the TOF depth sensing module according to the embodiment of the present application, the projecting end and the receiving end of the TOF depth sensing module may both be located on the same side of the target object.
Optionally, the output optical power of the TOF depth sensing module 100 is less than or equal to 800 mW.
Specifically, the maximum output optical power or the average output power of the TOF depth sensing module 100 is less than or equal to 800 mW.
When the output optical power of the TOF depth sensing module 100 is less than or equal to 800 mW, the power consumption of the TOF depth sensing module 100 is low, which makes it convenient to arrange the module in terminal devices and other devices that are sensitive to power consumption.
The process of the TOF depth sensing module 100 obtaining the depth map of the target object according to the embodiment of the present application is described in detail below with reference to fig. 6 to 10.
As shown in fig. 6, the left diagram is a schematic diagram of the light-emitting regions of the array light source 110: the array light source 110 includes four light-emitting regions (which may also be referred to as light-emitting partitions) A, B, C and D, which are lit at times t0, t1, t2, and t3, respectively. The right diagram is a schematic diagram of the target object surface onto which the light beams generated by the array light source 110 are projected after being split by the beam splitter 130; each dot represents a projected light spot, and the region enclosed by each black solid frame is the target region corresponding to one pixel in the sensor 142. In fig. 6, the replication order of the beam splitter 130 is 4 × 4; that is, at each time, each luminous spot generated by one region of the array light source becomes 4 × 4 spots after being replicated by the beam splitter 130. Thus, the number of spots projected at the same time can be greatly increased by the beam splitter 130.
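The spot-count gain from the splitter can be illustrated with simple arithmetic. This is a sketch; the emitter count of 25 per region is an assumed example, not a figure from the patent:

```python
def projected_spot_count(region_emitters: int, p: int, q: int) -> int:
    """Total spots projected at one time: each emitter's spot is replicated P x Q times."""
    return region_emitters * p * q

# With a 4 x 4 replication order, a region with 25 emitters projects 400 spots.
spots = projected_spot_count(25, 4, 4)
```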
In fig. 6, by lighting up the four light-emitting regions of the array light source 110 at times t0, t1, t2, and t3, respectively, depth maps of different positions of the target object can be obtained.
Specifically, the schematic diagrams of the light beams emitted by the light emitting region a of the array light source 110 at the time t0 and projected onto the surface of the target object after the beam splitting process of the beam splitter 130 are respectively shown in fig. 7.
Fig. 8 is a schematic diagram of light beams emitted by the light emitting region b of the array light source 110 at time t1 and projected onto the surface of the target object after being split by the beam splitter 130.
Fig. 9 is a schematic diagram of light beams emitted by the light emitting region c of the array light source 110 at time t2 and projected onto the surface of the target object after being split by the beam splitter 130.
Fig. 10 is a schematic diagram of light beams emitted by the light emitting region d of the array light source 110 at time t3 and projected onto the surface of the target object after being split by the beam splitter 130.
According to the light beam projection conditions shown in fig. 7 to 10, the depth maps corresponding to the target objects at the time points t0, t1, t2 and t3 can be obtained, and then the depth maps corresponding to the target objects at the time points t0, t1, t2 and t3 are superposed, so that the depth map of the target object with higher resolution can be obtained.
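The superposition step can be sketched as interleaving the four partial depth maps onto one denser grid. This is a hypothetical illustration under the assumption that the spot patterns of the four regions are offset by half a grid pitch in each direction; the actual offsets depend on the splitter and light-source design, and the function name is illustrative.

```python
def merge_quadrant_maps(m00, m01, m10, m11):
    """Interleave four H x W partial depth maps into one 2H x 2W depth map."""
    h, w = len(m00), len(m00[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            out[2 * r][2 * c] = m00[r][c]          # samples from time t0
            out[2 * r][2 * c + 1] = m01[r][c]      # samples from time t1
            out[2 * r + 1][2 * c] = m10[r][c]      # samples from time t2
            out[2 * r + 1][2 * c + 1] = m11[r][c]  # samples from time t3
    return out

# Four 1 x 1 partial maps combine into one 2 x 2 map:
full = merge_quadrant_maps([[1.0]], [[2.0]], [[3.0]], [[4.0]])
```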
In the TOF depth sensing module 100 shown in fig. 3, the collimating lens 120 may be located between the array light source 110 and the beam splitter 130, and the light beam emitted from the array light source 110 is collimated by the collimating lens 120 and then is processed by the beam splitter.
Alternatively, for the TOF depth sensing module 100, the beam splitter 130 may first directly split the light beam generated by the array light source 110, and then the collimating lens 120 collimates the split light beam.
This is explained in detail below with reference to fig. 11. In the TOF depth sensing module 100 shown in fig. 11, the specific functions of each module or unit are as follows:
the control unit 150 is configured to control M light emitting areas of the N light emitting areas in the array light source 110 to emit light;
the beam splitter 130 is used for splitting the beams emitted by the M light-emitting areas;
the collimating lens 120 is configured to collimate the light beam emitted by the light beam splitter 130;
a receiving unit 140 for receiving the reflected light beam of the target object.
Wherein M is less than or equal to N, M is a positive integer, and N is a positive integer greater than 1; the beam splitter 130 is specifically configured to split each received beam of light into a plurality of beams of light; the reflected beam of the target object is a beam obtained by reflecting the target object with respect to the beam emitted from the collimator lens 120. The light beams emitted from the M light-emitting regions may also be referred to as light beams from the M light-emitting regions.
The main difference between the TOF depth sensing module shown in fig. 11 and the TOF depth sensing module shown in fig. 3 lies in the position of the collimating lens. In the TOF depth sensing module shown in fig. 3, the collimating lens is located between the array light source and the beam splitter, whereas in the TOF depth sensing module shown in fig. 11, the beam splitter is located between the array light source and the collimating lens (which is equivalent to the collimating lens being located in the direction of the outgoing beam of the beam splitter).
The TOF depth sensing module 100 shown in fig. 11 and the TOF depth sensing module 100 shown in fig. 3 process the light beams emitted by the array light source 110 in a slightly different manner. In the TOF depth sensing module 100 shown in fig. 3, after the array light source 110 emits a light beam, the collimating lens 120 and the beam splitter 130 sequentially perform the collimating process and the beam splitting process. In the TOF depth sensing module 100 shown in fig. 11, after the array light source 110 emits a light beam, the beam splitter 130 and the collimator lens 120 sequentially perform beam splitting and collimating.
The beam splitter 130 splits the light beams emitted from the array light source, which is described below with reference to the drawings.
As shown in fig. 12, the beam splitter 130 may split each of the plurality of light beams generated by the array light source 110 into a plurality of light beams, so that a greater total number of light beams is finally obtained after the beam splitting.
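As a rough illustration of this replication, the following sketch duplicates each incident beam direction onto a P×Q grid of outgoing directions. The helper name `replicate_beams` and the angular pitch values are assumptions for illustration only, not details from the patent.

```python
# Hypothetical sketch of P x Q beam replication by the beam splitter
# (e.g. a diffractive element). The angular pitch of the order grid is
# an assumed parameter, not a value from the patent.

def replicate_beams(incident_dirs, p, q, pitch_x=1.0, pitch_y=1.0):
    """Duplicate each incident direction (ax, ay) onto a centered
    p x q grid of outgoing directions, with angles in degrees."""
    out = []
    for ax, ay in incident_dirs:
        for i in range(p):
            for j in range(q):
                out.append((ax + (i - (p - 1) / 2) * pitch_x,
                            ay + (j - (q - 1) / 2) * pitch_y))
    return out

# one beam from one light-emitting region -> 3 x 3 = 9 outgoing beams
beams = replicate_beams([(0.0, 0.0)], 3, 3)
```

With two incident beams and a 2×2 grid, the same sketch yields 8 outgoing beams, matching the "greater number of light beams" described above.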
On the basis of the TOF depth sensing module shown in fig. 11, the TOF depth sensing module 100 in this embodiment of the application may further include an optical element whose refractive index is controllable. When the refractive index of the optical element takes different values, the optical element can steer a light beam in a single polarization state to different directions, so that light beams can be directed in different directions without mechanical rotation or vibration, and a scanning region of interest can be located rapidly.
Fig. 13 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present application.
In the TOF depth sensing module 100 shown in fig. 13, the specific functions of each module or unit are as follows:
the control unit 150 is configured to control M light emitting areas of the N light emitting areas of the array light source 110 to emit light;
the control unit 150 is further configured to control a birefringence parameter of the optical element 160 to change a propagation direction of the light beams emitted from the M light-emitting areas.
The beam splitter 130 is configured to receive the light beam emitted from the optical element 160 and split the light beam emitted from the optical element 160;
alternatively, the beam splitter 130 is specifically configured to split each received light beam into a plurality of light beams, and the number of the light beams obtained by splitting the light beam emitted by one light emitting region of the array light source 110 by the beam splitter 130 may be P × Q.
The collimating lens 120 is configured to collimate the light beam emitted by the light beam splitter 130;
the receiving unit 140 is used for receiving the reflected light beam of the target object.
The reflected light beam of the target object is a light beam obtained when the target object reflects the light beam emitted from the collimating lens 120. The light beams emitted from the M light-emitting regions may also be referred to as light beams from the M light-emitting regions.
In fig. 13, the optical element 160 is located between the array light source 110 and the beam splitter 130, and in fact, the optical element 160 may also be located between the collimator lens 120 and the beam splitter 130, as will be described below with reference to fig. 14.
Fig. 14 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present application.
In the TOF depth sensing module 100 shown in fig. 14, the specific functions of each module or unit are as follows:
the control unit 150 is configured to control M light emitting areas of the N light emitting areas of the array light source 110 to emit light;
the collimating lens 120 is configured to collimate the light beams emitted by the M light-emitting areas;
the control unit 150 is further configured to control a birefringence parameter of the optical element 160 to change a propagation direction of the light beam after being collimated by the collimator lens 120;
The beam splitter 130 is configured to receive the light beam emitted from the optical element 160 and split the light beam emitted from the optical element 160;
alternatively, the beam splitter 130 is specifically configured to split each received light beam into a plurality of light beams, and the number of the light beams obtained by splitting the light beam emitted by one light emitting region of the array light source 110 by the beam splitter 130 may be P × Q.
the receiving unit 140 is used for receiving the reflected light beam of the target object.
The reflected light beam of the target object is a light beam obtained when the target object reflects the light beam emitted from the beam splitter 130. The light beams emitted from the M light-emitting regions may also be referred to as light beams from the M light-emitting regions.
The operation of the TOF depth sensing module in the embodiment of the present application is described in detail below with reference to fig. 15.
FIG. 15 is a schematic diagram illustrating operation of a TOF depth sensing module according to an embodiment of the present application.
As shown in fig. 15, the TOF depth sensing module includes a projecting end, a receiving end, and a control unit, where the control unit is configured to control the projecting end to emit an outgoing light beam to scan a target area, and the control unit is further configured to control the receiving end to receive a reflected light beam reflected from the target scanning area.
The projection end includes an array light source 110, a collimating lens 120, an optical element 160, a beam splitter 130, and a projection lens (optional). The receiving end includes a receiving lens 141 and a sensor 142. The control unit 150 is also used to control the timing synchronization of the array light source 110, the optical element 160 and the sensor 142.
The collimating lens 120 in the TOF depth sensing module shown in fig. 15 may include one to four lenses, and the collimating lens 120 is configured to convert the first light beam generated by the array light source 110 into approximately parallel light.
The working flow of the TOF depth sensing module shown in fig. 15 is as follows:
(1) the light beam emitted by the array light source 110 is collimated by the collimating lens 120 to form a collimated light beam, and the collimated light beam reaches the optical element 160;
(2) under the timing control of the control unit, the optical element 160 deflects the light beam in an orderly manner, thereby achieving two-dimensional scanning of the angle of the outgoing deflected beam;
(3) the deflected beam after exiting from the optical element 160 reaches the beam splitter 130;
(4) the beam splitter 130 replicates the deflected beam at each angle to obtain outgoing beams at a plurality of angles, thereby achieving two-dimensional replication of the beams;
(5) in each scanning period, the receiving end can only image the target area illuminated by the spot;
(6) after the optical element has completed all S×T scans, the two-dimensional array sensor at the receiving end produces S×T images, which are finally stitched into one higher-resolution image in the processor.
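The stitching in step (6) can be sketched as follows. This is a minimal illustration under the assumption that the frame captured at scan step (s, t) contributes the pixels offset by (s, t) on the final grid, so the S×T frames interleave; the function name `stitch_frames` is hypothetical.

```python
# Minimal sketch of stitching S x T time-shared frames into one
# higher-resolution image. Assumption: the frame captured at scan step
# (s, t) contributes the pixels at rows i*S + s and columns j*T + t.

def stitch_frames(frames, s_steps, t_steps):
    """frames: dict mapping (s, t) -> H x W list-of-lists image."""
    h, w = len(frames[(0, 0)]), len(frames[(0, 0)][0])
    out = [[0.0] * (w * t_steps) for _ in range(h * s_steps)]
    for (s, t), frame in frames.items():
        for i in range(h):
            for j in range(w):
                out[i * s_steps + s][j * t_steps + t] = frame[i][j]
    return out

# four 2x2 frames (S = T = 2) stitched into one 4x4 image
frames = {(s, t): [[float(s * 2 + t)] * 2 for _ in range(2)]
          for s in range(2) for t in range(2)}
hi_res = stitch_frames(frames, 2, 2)
```

The resolution of the stitched image is S×T times that of a single frame, which is the gain described above.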
The array light source in the TOF depth sensing module of the embodiment of the application can have a plurality of light emitting areas, each light emitting area can independently emit light, and the following description is given in detail in combination with fig. 16 on the workflow of the TOF depth sensing module under the condition that the array light source of the TOF depth sensing module of the embodiment of the application includes a plurality of light emitting areas.
Fig. 16 is a schematic diagram of a light emitting region of an array light source.
When the array light source 110 includes a plurality of light emitting areas, the working flow of the TOF depth sensing module according to the embodiment of the present disclosure is as follows:
(1) light beams emitted in a time-sharing manner by different light-emitting areas of the array light source 110 are formed into collimated light beams by the collimating lens 120; under the control of the timing signal of the control unit, the optical element deflects the collimated light beams in an orderly manner, so that two-dimensional scanning of the outgoing beam angle is achieved;
(2) the deflected light beams reach the beam splitter 130, which replicates the incident beam at each angle and simultaneously generates outgoing beams at a plurality of angles, thereby achieving two-dimensional replication of the beams;
(3) in each scanning period, the receiving end images only the target area illuminated by the spots;
(4) after the optical element has completed all S×T scans, the two-dimensional array sensor at the receiving end produces S×T images, which are finally stitched into one higher-resolution image in the processor.
The scanning operation principle of the TOF depth sensing module according to the embodiment of the present application is described below with reference to fig. 16 and 17.
As shown in fig. 16, 111, 112, 113, and 114 are independent light emitting regions of the array light source, and can be lit in a time-sharing manner, and 115, 116, 117, and 118 are light emitting holes in different independent working regions of the array light source.
Fig. 17 is a schematic diagram of a beam splitter for splitting the light beams emitted from the array light source shown in fig. 16.
As shown in fig. 17, 120 denotes the replication pattern generated by the beam splitter (the black solid frame in the upper left corner of fig. 17), 121 denotes the target region corresponding to one pixel of the two-dimensional array sensor (121 includes 122, 123, 124, and 125), 122 denotes the spots generated when the beam from the light-emitting hole 115 is scanned by the optical element, 123 denotes the spots generated when the beam from the light-emitting hole 116 is scanned by the optical element, 124 denotes the spots generated when the beam from the light-emitting hole 117 is scanned by the optical element, and 125 denotes the spots generated when the beam from the light-emitting hole 118 is scanned by the optical element.
The specific scanning process of the TOF depth sensing module with the array light source shown in fig. 16 is as follows:
only the light-emitting hole 115 is lit, and the optical element performs beam scanning to produce the spots 122;
115 is extinguished and 116 is lit, and the optical element performs beam scanning to produce the spots 123;
116 is extinguished and 117 is lit, and the optical element performs beam scanning to produce the spots 124;
117 is extinguished and 118 is lit, and the optical element performs beam scanning to produce the spots 125;
the spot scanning of the target area corresponding to one pixel of the two-dimensional array sensor can be completed through the four steps.
The optical element 160 in fig. 13 to 15 above may be any one of a liquid crystal polarization grating, an electro-optical device, an acousto-optic device, an optical phased array device, and the like, and details regarding the liquid crystal polarization grating, the electro-optical device, the acousto-optic device, the optical phased array device, and the like can be found in the following description of the first to fourth cases.
The TOF depth sensing module according to the embodiment of the present application is described in detail with reference to the accompanying drawings, and the image generation method according to the embodiment of the present application is described with reference to the accompanying drawings.
Fig. 18 is a schematic flowchart of an image generation method according to an embodiment of the present application. The method shown in fig. 18 may be performed by a terminal device including a TOF depth sensing module according to an embodiment of the present application. In particular, the method shown in fig. 18 may be performed by a terminal device including the TOF depth sensing module shown in fig. 3. The method shown in fig. 18 includes steps 2001 to 2006, which are described in detail below.
2001. Control, by using the control unit, M light-emitting areas of the N light-emitting areas of the array light source to emit light at M different moments respectively.
Wherein M is less than or equal to N, M is a positive integer, and N is a positive integer greater than 1.
In the above step 2001, the light emission of the array light source may be controlled by the control unit.
Specifically, the control unit may respectively send control signals to M light-emitting areas of the array light source at M times to control the M light-emitting areas to respectively emit light at M different times.
For example, as shown in fig. 6, the array light source 110 includes four independent light-emitting regions A, B, C and D, and then the control unit may issue control signals to the four independent light-emitting regions A, B, C and D at times t0, t1, t2, and t3, respectively, so that the four independent light-emitting regions A, B, C and D emit light at times t0, t1, t2, and t3, respectively.
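A toy sketch of this time-division control follows. The region names and times are taken from the fig. 6 example; `build_schedule` is a hypothetical helper, and a real driver would strobe the drive currents of the array light source instead of returning a list.

```python
# Sketch of the control unit's time-division schedule for fig. 6:
# region A is driven at t0, B at t1, C at t2, and D at t3.

def build_schedule(regions=("A", "B", "C", "D"),
                   times=("t0", "t1", "t2", "t3")):
    """Pair each light-emitting region with its emission moment."""
    return list(zip(times, regions))

schedule = build_schedule()
```

Each entry of the schedule corresponds to one control signal sent by the control unit to one light-emitting area.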
2002. Collimate, by using the collimating lens, the light beams generated by the M light-emitting areas at the M different moments respectively, to obtain collimated light beams.
Still taking fig. 6 as an example, when the four independent light-emitting regions A, B, C and D of the array light source emit light beams at times t0, t1, t2, and t3, respectively, the collimating lens may collimate the light beams emitted by the light-emitting regions A, B, C and D at times t0, t1, t2, and t3, respectively, to obtain collimated light beams.
2003. Perform beam splitting on the collimated light beams by using the beam splitter.
The beam splitter may specifically split each received light beam into a plurality of light beams, and the number of the light beams obtained by splitting the light beam from one light emitting region of the array light source by the beam splitter may be P × Q.
As shown in fig. 6, the light-emitting regions A, B, C, and D of the array light source emit light beams at times t0, t1, t2, and t3, respectively; the light beams emitted by the light-emitting regions A, B, C, and D at these times are processed by the collimating lens and are then incident on the beam splitter, and the result of the beam splitter splitting the beams from the light-emitting regions A, B, C, and D may be as shown on the right side of fig. 6.
Optionally, the beam splitting processing in step 2003 specifically includes: performing one-dimensional or two-dimensional beam splitting on the collimated light beams by using the beam splitter.
2004. The reflected beam of the target object is received with a receiving unit.
The reflected beam of the target object is a beam obtained by reflecting the beam from the beam splitter by the target object.
Optionally, the receiving unit in step 2004 includes a receiving lens and a sensor, and the receiving unit in step 2004 receives the reflected light beam of the target object, including: and converging the reflected light beam of the target object to the sensor by using the receiving lens. The sensor may also be referred to herein as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor is greater than or equal to P × Q, and the number of beams obtained by splitting the beam from one light-emitting region of the array light source by the beam splitter is P × Q.
Wherein P and Q are positive integers. Because the resolution of the sensor is greater than or equal to the number of light beams obtained by the beam splitter splitting a light beam from one light-emitting area of the array light source, the sensor can receive the reflected light beams obtained when the target object reflects the light beams from the beam splitter, so that the TOF depth sensing module can normally receive the reflected light beams.
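The resolution condition can be stated as a one-line check. This is a sketch; `sensor_can_receive` is a hypothetical name introduced here for illustration.

```python
# Sketch of the resolution condition: a sensor with at least P x Q
# pixels can resolve the P x Q beams split from one light-emitting area.

def sensor_can_receive(sensor_rows, sensor_cols, p, q):
    return sensor_rows * sensor_cols >= p * q

ok = sensor_can_receive(4, 4, 4, 4)         # 16 pixels for 16 beams
too_small = sensor_can_receive(3, 3, 4, 4)  # 9 pixels cannot cover 16 beams
```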
2005. And generating M depth maps according to TOFs (time of flight) corresponding to light beams emitted by M light emitting areas of the array light source at M different moments respectively.
The TOF corresponding to the light beams emitted by the M light-emitting areas of the array light source at M different times may specifically refer to time difference information between the emission times of the light beams emitted by the M light-emitting areas of the array light source at M different times and the receiving times of the corresponding reflected light beams.
For example, the array light source includes three light-emitting regions A, B, and C, where the light-emitting region A emits a light beam at time T0, the light-emitting region B emits a light beam at time T1, and the light-emitting region C emits a light beam at time T2. The TOF corresponding to the light beam emitted by the light-emitting region A at time T0 may specifically refer to the time difference between time T0 and the moment at which that beam, after the collimation processing of the collimating lens and the beam splitting processing of the beam splitter, reaches the target object, is reflected by the target object, and finally reaches the receiving unit (or is received by the receiving unit). The TOF corresponding to the light beam emitted by the light-emitting region B at time T1 and the TOF corresponding to the light beam emitted by the light-emitting region C at time T2 are similar.

Optionally, the M depth maps are depth maps corresponding to M region sets of the target object, and a non-overlapping region exists between any two of the M region sets.
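The TOF defined above converts to distance through the round trip of the beam, as the following sketch illustrates (the time values are made up for illustration):

```python
# Sketch: the TOF is the emission-to-reception time difference, and the
# beam travels to the target and back, so distance = c * TOF / 2.

C = 299_792_458.0  # speed of light, in m/s

def tof_to_distance(t_emit, t_receive):
    """Distance to the target region, in meters."""
    return C * (t_receive - t_emit) / 2.0

# a round-trip time of about 6.67 ns corresponds to roughly 1 m
d = tof_to_distance(0.0, 6.671e-9)
```

This per-beam conversion is the basis of step 2005a below, where the distances between the M regions of the target object and the TOF depth sensing module are determined.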
Optionally, the generating M depth maps of the target object in step 2005 includes:
2005a, determining distances between the M regions of the target object and the TOF depth sensing module according to TOFs corresponding to light beams emitted by the M light emitting regions at M different moments respectively;
2005b, generating depth maps of the M regions of the target object according to distances between the M regions of the target object and the TOF depth sensing module.
2006. And obtaining a final depth map of the target object according to the M depth maps.
Specifically, in step 2006, the M depth maps may be stitched line by line to obtain the final depth map of the target object.
For example, the depth maps of the target object at the time points t0-t3 are obtained through the above steps 2001 to 2005, the depth maps at the four time points are shown in fig. 19, and the final depth map of the target object obtained by splicing the depth maps at the time points t0-t3 shown in fig. 19 can be shown in fig. 69.
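The line stitching of step 2006 can be sketched as row interleaving. This is an assumption about the layout suggested by figs. 19 and 69; a real implementation depends on the actual scan geometry, and `splice_rows` is a hypothetical helper.

```python
# Sketch of "line splicing" M time-shared depth maps into the final
# depth map. Assumption: the map for time t_k contributes every M-th
# row of the final map, starting at row k.

def splice_rows(depth_maps):
    m = len(depth_maps)
    h, w = len(depth_maps[0]), len(depth_maps[0][0])
    final = [[0.0] * w for _ in range(h * m)]
    for k, dmap in enumerate(depth_maps):
        for i in range(h):
            final[i * m + k] = list(dmap[i])
    return final

# four 1x3 depth maps for t0-t3 spliced into one 4x3 final map
maps = [[[float(k)] * 3] for k in range(4)]
final_map = splice_rows(maps)
```

The final map has M times the row count of any single time-shared map, which is the resolution gain described in this embodiment.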
When the structures of the TOF depth sensing modules are different, the corresponding processes of the image generation methods are also different. The image generation method according to the embodiment of the present application will be described below with reference to fig. 20.
Fig. 20 is a schematic flowchart of an image generation method according to an embodiment of the present application. The method shown in fig. 20 may be performed by a terminal device including a TOF depth sensing module according to an embodiment of the present application. In particular, the method shown in fig. 20 may be performed by a terminal device that includes the TOF depth sensing module shown in fig. 11. The method shown in fig. 20 includes steps 3001 to 3006, which are described in detail below.
3001. And controlling M light emitting areas in the N light emitting areas of the array light source to emit light at M different moments respectively by using the control unit.
The N light-emitting areas are not overlapped, M is less than or equal to N, M is a positive integer, and N is a positive integer greater than 1.
Controlling, by using the control unit, the M light-emitting areas of the N light-emitting areas of the array light source to emit light at M different moments may specifically mean that the control unit sends control signals to the M light-emitting areas at the M moments respectively, so that the M light-emitting areas emit light at M different moments.
For example, as shown in fig. 16, the array light source includes four light emitting regions 111, 112, 113, and 114, and then the control unit may control the light emitting regions 111, 112, and 113 to emit light at time T0, time T1, and time T2, respectively. Alternatively, the control unit may also control 111, 112, 113, and 114 to emit light at time T0, time T1, time T2, and time T3, respectively.
3002. And splitting the light beams generated by the M light emitting areas at M different moments respectively by using a beam splitter.
The beam splitter is specifically configured to split each received beam of light into a plurality of beams of light.
Splitting, by using the beam splitter, the light beams generated by the M light-emitting areas at the M different moments may specifically mean that the beam splitter separately splits, at each of the M moments, the light beam generated by the corresponding light-emitting area.
For example, as shown in fig. 16, the array light source includes four light-emitting regions 111, 112, 113, and 114, and the control unit can control the light-emitting regions 111, 112, and 113 to emit light at time T0, time T1, and time T2, respectively, so that the beam splitter splits the light beam emitted from 111 at time T0, the light beam emitted from 112 at time T1, and the light beam emitted from 113 at time T2 (it should be understood that the time required for a light beam to travel from the light-emitting region to the beam splitter is ignored here).
Optionally, the beam splitting processing in step 3002 specifically includes: performing, by using the beam splitter, one-dimensional or two-dimensional beam splitting on the light beams generated by the M light-emitting areas at the M different moments respectively.
3003. And the light beam from the beam splitter is collimated by the collimating lens.
For example, still taking fig. 16 as an example, the beam splitter splits the light beams emitted from 111, 112, and 113 at time T0, time T1, and time T2, respectively; the collimating lens may then collimate, at time T0, the light beams obtained by splitting the beam from 111; collimate, at time T1, the light beams obtained by splitting the beam from 112; and collimate, at time T2, the light beams obtained by splitting the beam from 113.
3004. The reflected beam of the target object is received with a receiving unit.
The reflected beam of the target object is a beam obtained by reflecting the beam from the collimating lens by the target object.
Optionally, the receiving unit in step 3004 includes a receiving lens and a sensor, and the receiving unit in step 3004 receives the reflected light beam of the target object, including: and converging the reflected light beam of the target object to the sensor by using the receiving lens. The sensor may also be referred to herein as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor is greater than or equal to P × Q, and the number of beams obtained by splitting the beam from one light-emitting region of the array light source by the beam splitter is P × Q.
Wherein P and Q are positive integers. Because the resolution of the sensor is greater than or equal to the number of light beams obtained by the beam splitter splitting a light beam from one light-emitting area of the array light source, the sensor can receive the reflected light beams obtained when the target object reflects the light beams from the beam splitter, so that the TOF depth sensing module can normally receive the reflected light beams.
3005. And generating M depth maps according to TOFs (time of flight) corresponding to light beams emitted by M light emitting areas of the array light source at M different moments respectively.
The TOF corresponding to the light beams emitted by the M light-emitting areas of the array light source at M different times may specifically refer to time difference information between the emission times of the light beams emitted by the M light-emitting areas of the array light source at M different times and the receiving times of the corresponding reflected light beams.
For example, the array light source includes three light-emitting regions a, B, and C, wherein the light-emitting region a emits a light beam at time T0, the light-emitting region B emits a light beam at time T1, and the light-emitting region C emits a light beam at time T2. Then, the TOF corresponding to the light beam emitted by the light emitting region a at the time T0 may specifically refer to time difference information between the time when the light beam emitted by the light emitting region a at the time T0 passes through the collimating process of the collimating lens and the beam splitting process of the beam splitter, reaches the target object, is reflected by the target object, and finally reaches the receiving unit (or is received by the receiving unit), and the time T0. TOF corresponding to the light beam emitted by the light emitting region B at the time T1 and TOF corresponding to the light beam emitted by the light emitting region C at the time T2 are also similar.
The M depth maps are depth maps corresponding to M region sets of the target object, respectively, and a non-overlapping region exists between any two of the M region sets.
Optionally, the step 3005 of generating M depth maps specifically includes:
3005a, determining distances between M regions of the target object and the TOF depth sensing module according to TOFs corresponding to light beams emitted by the M light emitting regions at M different moments respectively;
3005b, according to the distance between the M areas of the target object and the TOF depth sensing module, generating the depth map of the M areas of the target object.
3006. And obtaining a final depth map of the target object according to the M depth maps.
Optionally, the obtaining the final depth map of the target object in step 3006 includes: and performing line splicing on the M depth maps to obtain a depth map of the target object.
For example, the depth maps obtained through the processes of steps 3001 to 3005 above may be as shown in fig. 68, where fig. 68 shows the depth maps corresponding to times t0 to t3, and the final depth map of the target object as shown in fig. 69 may be obtained by stitching the depth maps corresponding to times t0 to t 3.
In this embodiment of the application, different light-emitting areas of the array light source are controlled to emit light in a time-sharing manner, and the beam splitter splits the emitted beams, so that the number of beams emitted by the TOF depth sensing module within a period of time can be increased. A plurality of depth maps are thereby obtained, and the final depth map obtained by stitching the plurality of depth maps has a higher spatial resolution and a higher frame rate.
The method shown in fig. 20 is similar to the main processing procedure of the method shown in fig. 18, and the main difference is that in the method shown in fig. 20, a beam splitter is used to split a light beam emitted from an array light source, and then a collimator lens is used to collimate the split light beam. In the method shown in fig. 18, the light beam emitted from the array light source is collimated by the collimating lens, and then the light beam is split by the beam splitter.
When the image generation method of the embodiment of the application is executed by the terminal device, the terminal device may have different working modes, and the light emitting manner of the array light source and the manner of subsequently generating the final depth map of the target object are different in the different working modes. The following describes how to obtain the final depth map of the target object in different operation modes in detail with reference to the drawings.
Fig. 21 is a schematic flowchart of an image generation method according to an embodiment of the present application.
The method shown in fig. 21 includes steps 4001 to 4003, which are described in detail below, respectively.
4001. And determining the working mode of the terminal equipment.
The terminal device has a first working mode and a second working mode. In the first working mode, the control unit can control L light-emitting areas of the N light-emitting areas of the array light source to emit light at the same time; in the second working mode, the control unit can control M light-emitting areas of the N light-emitting areas of the array light source to emit light at M different moments.
It should be understood that step 4002 is performed when it is determined in step 4001 that the terminal device is operating in the first operating mode, and step 4003 is performed when it is determined in step 4001 that the terminal device is operating in the second operating mode.
The specific process of determining the operation mode of the terminal device in step 4001 will be described in detail below.
Optionally, the determining the operation mode of the terminal device in step 4001 includes: and determining the working mode of the terminal equipment according to the working mode selection information of the user.
The user's operation mode selection information is used to select one of the first operation mode and the second operation mode as the operation mode of the terminal device.
Specifically, when the above-described image generation method is executed by a terminal device, the terminal device may acquire the operation mode selection information of the user from the user. For example, the user may input the operation mode selection information of the user through the operation interface of the terminal device.
The working mode of the terminal equipment is determined according to the working mode selection information of the user, so that the user can flexibly select and determine the working mode of the terminal equipment.
Optionally, the determining the operation mode of the terminal device in step 4001 includes: and determining the working mode of the terminal equipment according to the distance between the terminal equipment and the target object.
Specifically, when the distance between the terminal device and the target object is less than or equal to a preset distance, it may be determined that the terminal device operates in a first operating mode; and under the condition that the distance between the terminal equipment and the target object is greater than the preset distance, the terminal equipment can be determined to work in the second working mode.
When the distance between the terminal equipment and the target object is small, the array light source has enough luminous power to simultaneously emit a plurality of light beams reaching the target object. Therefore, when the distance between the terminal device and the target object is small, the plurality of light emitting areas of the array light source can emit light simultaneously by adopting the first working mode, so that the depth information of more areas of the target object can be obtained subsequently, and the frame rate of the depth map of the target object can be improved under the condition that the resolution of the depth map of the target object is constant.
When the distance between the terminal device and the target object is large, the depth map of the target object can be obtained by adopting the second working mode because the total power of the array light source is limited. Specifically, the array light source is controlled to emit light beams in a time-sharing manner, so that the light beams emitted by the array light source in the time-sharing manner can also reach the target object. Under the condition that the terminal equipment is far away from the target object, the depth information of different areas of the target object can be acquired in a time-sharing mode, and then the depth map of the target object is acquired.
Optionally, the determining the operation mode of the terminal device in step 4001 includes: and determining the working mode of the terminal equipment according to the scene where the target object is located.
Specifically, under the condition that the terminal device is in an indoor scene, it can be determined that the terminal device operates in a first operating mode; in the case where the terminal device is in an outdoor scenario, it may be determined that the terminal device is operating in the second operating mode.
When the terminal equipment is in an indoor scene, the distance between the terminal equipment and a target object is relatively short, external noise is relatively weak, and the array light source has enough luminous power and can simultaneously emit a plurality of light beams reaching the target object. Therefore, when the distance between the terminal device and the target object is small, the plurality of light emitting areas of the array light source can emit light simultaneously by adopting the first working mode, so that the depth information of more areas of the target object can be obtained subsequently, and the frame rate of the depth map of the target object can be improved under the condition that the resolution of the depth map of the target object is constant.
When the terminal device is in an outdoor scene, the distance between the terminal device and the target object is relatively long, external noise is relatively large, and the total power of the array light source is limited, so that the depth map of the target object can be obtained by adopting the second working mode. Specifically, the array light source is controlled to emit light beams in a time-sharing manner, so that the light beams emitted by the array light source in the time-sharing manner can also reach the target object. Under the condition that the terminal equipment is far away from the target object, the depth information of different areas of the target object can be acquired in a time-sharing mode, and then the depth map of the target object is acquired.
The working mode of the terminal equipment can be flexibly determined according to the distance between the terminal equipment and the target object or the scene where the target object is located, so that the terminal equipment works in the proper working mode.
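The selection rules above can be summarized in a minimal sketch. The threshold value, the scene labels, and all function names below are illustrative assumptions for this sketch, not values or identifiers taken from the embodiment:

```python
# Hypothetical sketch of the working-mode selection rule described above.
# FIRST_MODE: L light-emitting areas emit simultaneously (short distance / indoor).
# SECOND_MODE: light-emitting areas emit in a time-sharing manner (long distance / outdoor).
FIRST_MODE = 1
SECOND_MODE = 2

def select_working_mode(distance_m=None, scene=None, preset_distance_m=2.0):
    """Choose a working mode from the distance to the target object or the scene."""
    if distance_m is not None:
        # less than or equal to the preset distance -> first working mode
        return FIRST_MODE if distance_m <= preset_distance_m else SECOND_MODE
    if scene == "indoor":
        return FIRST_MODE
    if scene == "outdoor":
        return SECOND_MODE
    raise ValueError("need a distance or a scene to determine the working mode")
```

For example, `select_working_mode(distance_m=1.5)` would select the first working mode, while `select_working_mode(scene="outdoor")` would select the second.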
4002. And acquiring a final depth map of the target object in the first working mode.
4003. And acquiring a final depth map of the target object in the second working mode.
In the embodiment of the application, the image generation method has different working modes, so that the first working mode or the second working mode can be selected according to different situations to generate the depth map of the target object, the flexibility of generating the depth map of the target object can be improved, and the high-resolution depth map of the target object can be obtained in both working modes.
The process of obtaining the final depth map of the target object in the first operation mode will be described in detail with reference to fig. 22.
FIG. 22 is a schematic flow chart of obtaining a final depth map of the target object in the first mode of operation. The process shown in fig. 22 includes steps 4002A to 4002E, which are described in detail below, respectively.
4002A, controlling L of the N light-emitting areas of the array light source to emit light simultaneously.
Wherein L is less than or equal to N, L is a positive integer, and N is a positive integer greater than 1.
In step 4002A, L light emitting areas of the N light emitting areas of the array light source may be controlled to emit light simultaneously by the control unit. Specifically, the control unit may emit control signals to L light-emitting areas among the N light-emitting areas of the array light source at time T to control the L light-emitting areas to simultaneously emit light at time T.
For example, the array light source includes four independent light-emitting regions A, B, C and D, and the control unit may send control signals to the four independent light-emitting regions A, B, C and D at time T, so that the four independent light-emitting regions A, B, C and D emit light at the same time at time T.
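The simultaneous-emission control in step 4002A can be sketched as follows. The function and data shapes are invented for illustration; a real control unit would assert electrical drive signals rather than return Python tuples:

```python
# Illustrative sketch of step 4002A: the control unit drives L of the N
# light-emitting areas at the same instant T (here L = N = 4).
def emit_simultaneously(active_areas, all_areas, t):
    """Return (area, fire_time) pairs for the areas driven at time t."""
    assert set(active_areas) <= set(all_areas), "the L areas must be among the N areas"
    # every active area receives its control signal at the same time t
    return [(area, t) for area in active_areas]

events = emit_simultaneously(["A", "B", "C", "D"], ["A", "B", "C", "D"], t=0.0)
```

All four regions share the single emission time `t`, which is what distinguishes the first working mode from the time-shared second mode.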
4002B, collimating the light beams emitted from the L light-emitting areas by a collimating lens.
Assuming that the array light source includes four independent light emitting regions A, B, C and D, the collimating lens can collimate the light beams emitted by the light emitting regions A, B, C and D of the array light source at the time T to obtain collimated light beams.
In step 4002B, the light beam is collimated by the collimating lens, so that an approximately parallel light beam can be obtained, the power density of the light beam can be increased, and the subsequent scanning effect by the light beam can be further improved.
4002C, splitting, by using a beam splitter, the collimated light beam generated by the collimating lens.
The beam splitter is specifically configured to split each received beam of light into a plurality of beams of light.
4002D, receiving a reflected light beam of the target object with a receiving unit.
The reflected beam of the target object is a beam obtained by reflecting the beam from the beam splitter by the target object.
4002E, obtaining a final depth map of the target object according to the TOFs corresponding to the light beams emitted by the L light emitting areas.
The TOF corresponding to the light beams emitted by the L light-emitting areas may specifically refer to time difference information between the receiving time of the reflected light beams corresponding to the light beams emitted by the L light-emitting areas of the array light source at the time T and the time T.
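Given the time difference described above, each beam's depth follows from the round-trip relation distance = c × Δt / 2, since the beam travels to the target object and back. A minimal sketch (the function name is an assumption):

```python
# Convert a beam's TOF (receive time minus emission time) to a distance.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_distance(emit_time_s, receive_time_s):
    """Round-trip time of flight -> one-way distance in meters."""
    return C * (receive_time_s - emit_time_s) / 2.0

# a reflection received 10 ns after emission corresponds to roughly 1.5 m
d = tof_to_distance(0.0, 10e-9)
```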
Optionally, the receiving unit includes a receiving lens and a sensor, and the receiving, by the receiving unit, the reflected light beam of the target object in step 4002D includes: converging the reflected light beam of the target object onto the sensor by using the receiving lens.
The sensor may also be referred to as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor is greater than P × Q, and the number of beams obtained by splitting the light beam from one light emitting region of the array light source by the beam splitter is P × Q.
Wherein P and Q are positive integers. Because the resolution of the sensor is greater than the number of beams obtained by splitting the beam from one light-emitting area of the array light source by the beam splitter, the sensor can receive the reflected beams obtained when the target object reflects the beams from the beam splitter, so that the TOF depth sensing module can normally receive the reflected beams.
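The resolution condition can be expressed as a simple check: the sensor needs more pixels than the P × Q beams produced by splitting one area's beam, so that every reflected beam can land on a distinct sensor element. The function name and example values below are illustrative assumptions:

```python
# Check that a sensor of sensor_rows x sensor_cols pixels can resolve the
# P x Q beams produced by the beam splitter from one light-emitting area.
def sensor_can_receive(sensor_rows, sensor_cols, p, q):
    # this embodiment states a strict "greater than" condition
    return sensor_rows * sensor_cols > p * q

ok = sensor_can_receive(240, 180, 120, 90)  # 43200 pixels vs 10800 beams
```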
Optionally, the generating a final depth map of the target object in step 4002E specifically includes:
(1) generating depth maps of L areas of the target object according to TOFs corresponding to the light beams emitted by the L light emitting areas;
(2) synthesizing the final depth map of the target object according to the depth maps of the L areas of the target object.
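The two-step synthesis above can be sketched as a merge of per-region maps. The representation (plain dicts mapping pixel coordinates to depths) and all values are made up for illustration; the regions are assumed to cover different parts of the target object:

```python
# Step (2): merge the depth maps of the L regions into the final depth map.
def synthesize_depth_map(region_maps):
    """region_maps: list of dicts mapping (row, col) pixel -> depth in meters."""
    final = {}
    for region in region_maps:
        final.update(region)  # each region contributes its own pixels
    return final

final = synthesize_depth_map([{(0, 0): 1.2, (0, 1): 1.3},
                              {(1, 0): 1.4, (1, 1): 1.5}])
```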
The method shown in fig. 22 may be performed by the TOF depth sensing module shown in fig. 3 or a terminal device including the TOF depth sensing module shown in fig. 3.
When the relative position relationship between the collimating lens and the beam splitter in the TOF depth sensing module is different, the process of acquiring the final depth map of the target object in the first working mode is also different. The process of obtaining the final depth map of the target object in the first operation mode is described below with reference to fig. 23.
FIG. 23 is a schematic flow chart of obtaining a final depth map of a target object in a first mode of operation. The process shown in fig. 23 includes steps 4002a to 4002e, which are described in detail below, respectively.
4002a, controlling L of the N light-emitting areas of the array light source to emit light simultaneously.
Wherein L is less than or equal to N, L is a positive integer, and N is a positive integer greater than 1.
In step 4002a, L light emitting areas of the N light emitting areas of the array light source may be controlled to emit light simultaneously by the control unit. Specifically, the control unit may emit control signals to L light-emitting areas among the N light-emitting areas of the array light source at time T to control the L light-emitting areas to simultaneously emit light at time T.
For example, the array light source includes four independent light-emitting regions A, B, C and D, and the control unit may send control signals to the four independent light-emitting regions A, B, C and D at time T, so that the four independent light-emitting regions A, B, C and D emit light at the same time at time T.
4002b, splitting the light beams of the L light-emitting areas by a beam splitter.
The beam splitter is specifically configured to split each received beam of light into a plurality of beams of light.
4002c, collimating the light beam from the beam splitter with a collimating lens to obtain a collimated light beam.
4002d, receiving a reflected light beam of the target object with a receiving unit.
The reflected beam of the target object is a beam obtained when the collimated beam is reflected by the target object.
4002e, obtaining the final depth map of the target object according to the TOF corresponding to the light beams emitted by the L light emitting areas.
The TOF corresponding to the light beams emitted by the L light-emitting areas may specifically refer to time difference information between the receiving time of the reflected light beams corresponding to the light beams emitted by the L light-emitting areas of the array light source at the time T and the time T.
Optionally, the receiving unit includes a receiving lens and a sensor, and the receiving, by the receiving unit, the reflected light beam of the target object in step 4002d includes: converging the reflected light beam of the target object onto the sensor by using the receiving lens.
The sensor may also be referred to as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor is greater than P × Q, and the number of beams obtained by splitting the light beam from one light emitting region of the array light source by the beam splitter is P × Q.
Wherein P and Q are positive integers. Because the resolution of the sensor is greater than the number of beams obtained by splitting the beam from one light-emitting area of the array light source by the beam splitter, the sensor can receive the reflected beams obtained when the target object reflects the beams from the beam splitter, so that the TOF depth sensing module can normally receive the reflected beams.
Optionally, the generating a final depth map of the target object in step 4002e specifically includes:
(1) generating depth maps of L areas of the target object according to TOFs corresponding to the light beams emitted by the L light emitting areas;
(2) synthesizing the final depth map of the target object according to the depth maps of the L areas of the target object.
Both the process shown in fig. 23 and the process shown in fig. 22 obtain the final depth map of the target object in the first working mode. The main difference is that in fig. 23 the beam splitter first splits the light beam emitted by the array light source and the collimating lens then collimates the split beams, whereas in fig. 22 the collimating lens first collimates the light beam emitted by the array light source and the beam splitter then splits the collimated beam.
The process of obtaining the final depth map of the target object in the second operation mode will be described in detail with reference to fig. 24.
FIG. 24 is a schematic flow chart of obtaining a final depth map of the target object in the second mode of operation. The process shown in fig. 24 includes steps 4003A to 4003E, which are described in detail below, respectively.
4003A, controlling M of the N light-emitting areas of the array light source to emit light at M different times.
Wherein M is less than or equal to N, and M and N are positive integers.
In step 4003A, the light emission of the array light source may be controlled by the control unit. Specifically, the control unit may respectively send control signals to M light-emitting areas of the array light source at M times to control the M light-emitting areas to respectively emit light at M different times.
For example, if the array light source includes four independent light-emitting regions A, B, C and D, the control unit may send control signals to the three independent light-emitting regions A, B and C at times t0, t1, and t2, respectively, so that the three independent light-emitting regions A, B and C emit light at times t0, t1, and t2, respectively.
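The time-shared control in step 4003A can be sketched by pairing each of the M areas with its own distinct emission time, in contrast to the single shared time of the first working mode. The function name and the concrete times are illustrative assumptions:

```python
# Illustrative sketch of step 4003A: M light-emitting areas fire at M
# different times (time-sharing), rather than all at once.
def emit_time_shared(areas, times):
    """Pair each of the M areas with its own distinct emission time."""
    assert len(areas) == len(times) and len(set(times)) == len(times), \
        "each area must get its own distinct emission time"
    return list(zip(areas, times))

# regions A, B, C of a four-region source fire at t0, t1, t2 (M = 3 <= N = 4)
schedule = emit_time_shared(["A", "B", "C"], [0.0, 1e-3, 2e-3])
```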
4003B, collimating the light beams generated by the M light emitting regions at M different times by using a collimating lens to obtain collimated light beams.
In step 4003B, collimating, by using the collimating lens, the light beams generated by the M light-emitting areas at the M different times may specifically mean that the collimating lens collimates, at each of the M different times, the light beam generated by the corresponding light-emitting area at that time.
Assuming that the array light source includes four independent light-emitting regions A, B, C and D, and three independent light-emitting regions A, B and C in the array light source emit light at times t0, t1, and t2, respectively, under the control of the control unit, the collimator lens may collimate the light beams emitted by the light-emitting regions A, B and C at times t0, t1, and t2, respectively.
The light beam is collimated by the collimating lens, approximately parallel light beams can be obtained, the power density of the light beam can be improved, and the effect of subsequently scanning by adopting the light beam can be further improved.
4003C, beam splitting the collimated beam with a beam splitter.
4003D, receiving a reflected light beam of the target object with a receiving unit.
The beam splitter is specifically configured to split each received beam of light into a plurality of beams of light. The reflected beam of the target object is a beam obtained by reflecting the beam from the beam splitter by the target object.
4003E, generating M depth maps based on TOFs corresponding to light beams emitted by the M light-emitting areas at M different times, respectively.
The TOF corresponding to the light beams emitted by the M light-emitting areas of the array light source at M different times may specifically refer to time difference information between the emission times of the light beams emitted by the M light-emitting areas of the array light source at M different times and the receiving times of the corresponding reflected light beams.
4003F, obtaining a final depth map of the target object according to the M depth maps.
Optionally, the M depth maps are depth maps corresponding to M region sets of the target object, respectively, and a non-overlapping region exists between any two of the M region sets.
Optionally, the receiving unit includes a receiving lens and a sensor, and the receiving, by the receiving unit, the reflected light beam of the target object in step 4003D includes: converging the reflected light beam of the target object onto the sensor by using the receiving lens.
The sensor may also be referred to as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor is greater than or equal to P × Q, and the number of beams obtained by splitting the beam from one light emitting region of the array light source by the beam splitter is P × Q.
Wherein P and Q are positive integers. Because the resolution of the sensor is greater than or equal to the number of beams obtained by splitting the beam from one light-emitting area of the array light source by the beam splitter, the sensor can receive the reflected beams obtained when the target object reflects the beams from the beam splitter, so that the TOF depth sensing module can normally receive the reflected beams.
Optionally, the generating M depth maps in step 4003E specifically includes:
(1) determining distances between the M regions of the target object and a TOF depth sensing module according to TOFs corresponding to light beams emitted by the M light emitting regions at M different moments respectively;
(2) generating depth maps of the M regions of the target object according to the distances between the M regions of the target object and the TOF depth sensing module;
(3) synthesizing the final depth map of the target object according to the depth maps of the M regions of the target object.
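Steps (1) to (3) can be sketched end to end: each region's per-pixel TOF values give distances via distance = c × TOF / 2, the distances form that region's depth map, and the union of the M region maps is the final depth map. Pixel coordinates, TOF values, and the function name are made-up examples:

```python
C = 299_792_458.0  # speed of light, m/s

def regions_to_final_map(region_tofs):
    """region_tofs: list of M dicts mapping (row, col) pixel -> TOF in seconds."""
    final = {}
    for tofs in region_tofs:
        # steps (1)+(2): convert this region's TOFs to depths
        final.update({px: C * t / 2.0 for px, t in tofs.items()})
    # step (3): the union of the M region maps is the final depth map
    return final

final = regions_to_final_map([{(0, 0): 10e-9}, {(0, 1): 20e-9}])
```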
The method shown in fig. 24 may be performed by the TOF depth sensing module shown in fig. 3 or a terminal device including the TOF depth sensing module shown in fig. 3.
When the relative position relationship between the collimating lens and the beam splitter in the TOF depth sensing module is different, the process of acquiring the final depth map of the target object in the second working mode is also different. The process of obtaining the final depth map of the target object in the second operation mode is described below with reference to fig. 25.
FIG. 25 is a schematic flow chart of obtaining a final depth map of the target object in the second mode of operation. The process shown in fig. 25 includes steps 4003a to 4003f, which are described in detail below, respectively.
4003a, controlling M of the N light emitting areas of the array light source to emit light at M different times.
Wherein M is less than or equal to N, and M and N are both positive integers.
In step 4003a, the light emission of the array light source may be controlled by a control unit. Specifically, the control unit may respectively send control signals to M light-emitting areas of the array light source at M times to control the M light-emitting areas to respectively emit light at M different times.
For example, if the array light source includes four independent light-emitting regions A, B, C and D, the control unit may send control signals to the three independent light-emitting regions A, B and C at times t0, t1, and t2, respectively, so that the three independent light-emitting regions A, B and C emit light at times t0, t1, and t2, respectively.
4003b, splitting the light beams generated by the M light-emitting areas at the M different times by the beam splitter.
The beam splitter is specifically configured to split each received beam of light into a plurality of beams of light.
Splitting, by using the beam splitter, the light beams generated by the M light-emitting areas at the M different times may specifically mean that the beam splitter splits, at each of the M different times, the light beam generated by the corresponding light-emitting area at that time.
For example, the array light source includes four independent light-emitting regions A, B, C and D. Under the control of the control unit, the light-emitting region A emits light at time t0, the light-emitting region B emits light at time t1, and the light-emitting region C emits light at time t2. Then, the beam splitter may split the light beam emitted by the light-emitting region A at time t0, split the light beam emitted by the light-emitting region B at time t1, and split the light beam emitted by the light-emitting region C at time t2.
4003c, collimating the light beam from the beam splitter with a collimating lens.
The light beam is collimated by the collimating lens, approximately parallel light beams can be obtained, the power density of the light beam can be improved, and the effect of subsequently scanning by adopting the light beam can be further improved.
4003d, receiving the reflected light beam of the target object with a receiving unit.
The reflected beam of the target object is a beam obtained by reflecting the beam from the collimator lens by the target object.
4003e, generating M depth maps based on TOF corresponding to light beams emitted by the M light emitting regions at M different times, respectively.
The TOF corresponding to the light beams emitted by the M light-emitting areas of the array light source at M different times may specifically refer to time difference information between the emission times of the light beams emitted by the M light-emitting areas of the array light source at M different times and the receiving times of the corresponding reflected light beams.
4003f, obtaining the final depth map of the target object according to the M depth maps.
Optionally, the M depth maps are depth maps corresponding to M region sets of the target object, respectively, and a non-overlapping region exists between any two of the M region sets.
Optionally, the receiving unit includes a receiving lens and a sensor, and the receiving, by the receiving unit, the reflected light beam of the target object in step 4003d includes: converging the reflected light beam of the target object onto the sensor by using the receiving lens.
The sensor may also be referred to as a sensor array, which may be a two-dimensional sensor array.
Optionally, the resolution of the sensor is greater than or equal to P × Q, and the number of beams obtained by splitting the beam from one light emitting region of the array light source by the beam splitter is P × Q.
Wherein P and Q are positive integers. Because the resolution of the sensor is greater than or equal to the number of beams obtained by splitting the beam from one light-emitting area of the array light source by the beam splitter, the sensor can receive the reflected beams obtained when the target object reflects the beams from the beam splitter, so that the TOF depth sensing module can normally receive the reflected beams.
Optionally, the generating M depth maps in step 4003e specifically includes:
(1) determining distances between the M regions of the target object and a TOF depth sensing module according to TOFs corresponding to light beams emitted by the M light emitting regions at M different moments respectively;
(2) generating depth maps of the M regions of the target object according to the distances between the M regions of the target object and the TOF depth sensing module;
(3) synthesizing the final depth map of the target object according to the depth maps of the M regions of the target object.
Both the process shown in fig. 25 and the process shown in fig. 24 obtain the final depth map of the target object in the second working mode. The main difference is that in fig. 25 the beam splitter first splits the light beam emitted by the array light source and the collimating lens then collimates the split beams, whereas in fig. 24 the collimating lens first collimates the light beam emitted by the array light source and the beam splitter then splits the collimated beam.
A TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail above with reference to fig. 1 to 25. Another TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail below with reference to fig. 26 to 52.
A conventional TOF depth sensing module generally changes the propagation direction of a laser beam by using a mechanical rotation or vibration component to drive an optical structure (e.g., a reflector, a lens, or a prism) or the transmission light source to rotate or vibrate, so as to scan different regions of a target object. However, such a TOF depth sensing module is large in size and is not suitable for installation in devices with limited internal space (for example, mobile terminals). In addition, such a TOF depth sensing module generally scans in a continuous manner, and the generated scanning track is generally continuous; consequently, the scanning of the target object is inflexible and cannot rapidly locate a region of interest (ROI). Therefore, an embodiment of this application provides a scanning device that can direct light beams in different directions without mechanical rotation or vibration and can quickly locate a scanning region of interest. The following provides a detailed description with reference to the accompanying drawings.
The TOF depth sensing module according to an embodiment of the present application is briefly described below with reference to fig. 26.
FIG. 26 is a schematic diagram of distance measurement using a TOF depth sensing module according to an embodiment of the present application.
As shown in fig. 26, the TOF depth sensing module may include a transmitting end (which may also be referred to as a projecting end), a receiving end and a control unit, where the transmitting end is configured to emit an outgoing light beam, the receiving end is configured to receive a reflected light beam of a target object (the reflected light beam is a light beam obtained by reflecting the outgoing light beam by the target object), and the control unit may control the transmitting end and the receiving end to respectively transmit and receive the light beam.
In fig. 26, the transmitting end may generally include a laser light source, a collimating lens (optional), a polarization filtering device, an optical element, and a projection lens (optional), and the receiving end may generally include a receiving lens and a sensor, which may be collectively referred to as a receiving unit.
In fig. 26, a timing device may be used to record TOF corresponding to the outgoing light beam to calculate the distance from the TOF depth sensing module to the target area, so as to obtain a final depth map of the target object. Here, the TOF corresponding to the outgoing light beam may refer to time difference information between the time at which the reflected light beam is received by the receiving unit and the outgoing time of the outgoing light beam.
The TOF depth sensing module in this embodiment of the application can be used for 3D image acquisition. The TOF depth sensing module may be disposed in an intelligent terminal (for example, a mobile phone, a tablet, or a wearable device) to acquire depth images or 3D images, and may also provide gesture and limb recognition for 3D games or somatosensory games.
The TOF depth sensing module according to an embodiment of the present application is described in detail below with reference to fig. 27.
Fig. 27 is a schematic block diagram of a TOF depth sensing module in an embodiment of the application.
The TOF depth sensing module 200 shown in fig. 27 includes a laser light source 210, a polarization filter device 220, an optical element 230, a receiving unit 240, and a control unit 250. The polarization filter device 220 is disposed between the laser source 210 and the optical element 230, and the modules or units in the TOF depth sensing module 200 are described in detail below.
The laser light source 210:
the laser light source 210 is used to generate a laser beam, and in particular, the laser light source 210 is capable of generating light of multiple polarization states.
Optionally, the laser beam emitted by the laser source 210 is a single quasi-parallel beam, and the divergence angle of the laser beam emitted by the laser source 210 is smaller than 1 °.
Alternatively, the laser light source may be a semiconductor laser light source.
The laser light source may be a Vertical Cavity Surface Emitting Laser (VCSEL).
Alternatively, the laser light source may be a fabry-perot laser (which may be abbreviated as FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and has higher electro-optical conversion efficiency, which can improve the scanning effect.
Optionally, the wavelength of the laser beam emitted by the laser light source 210 is greater than 900 nm.
Because the intensity of light above 900 nm in sunlight is relatively weak, a laser beam with a wavelength greater than 900 nm suffers less interference from sunlight, which can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 210 is 940nm or 1550 nm.
Because the intensity of light near 940 nm or 1550 nm in sunlight is relatively weak, a laser beam with a wavelength of 940 nm or 1550 nm greatly reduces the interference caused by sunlight, which can improve the scanning effect of the TOF depth sensing module.
The polarization filter device 220:
The polarization filter device 220 is used for filtering the laser beam to obtain a single polarization state beam.
The light beam with a single polarization state filtered by the polarization filter device 220 is the light beam generated by the laser source 210 with one of a plurality of polarization states.
For example, the laser beam generated by the laser source 210 includes linearly polarized light in different directions, left-handed circularly polarized light, and right-handed circularly polarized light. The polarization filter device 220 can then filter out the light whose polarization states are left-handed and right-handed circular polarization, so as to obtain a linearly polarized beam whose polarization state is in a specific direction.
The optical element 230:
the optical element 230 is used to adjust the direction of the light beam in a single polarization state.
The refractive index parameter of the optical element 230 is controllable, and when the refractive index of the optical element 230 is different, the optical element 230 can adjust the light beam in a single polarization state to different directions.
The propagation direction of the laser beam may be defined by spatial angles, described below with reference to the drawings. As shown in fig. 28, the spatial angles of the laser beam include the angle θ between the laser beam and the z-axis direction of the rectangular coordinate system of the emitting surface, and the angle φ between the projection of the laser beam on the XY plane and the x-axis direction. During scanning with the laser beam, the spatial angle θ or φ of the laser beam may change.
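The pair of spatial angles (θ, φ) determines a unit propagation direction for the beam. A minimal sketch of the standard spherical-to-Cartesian conversion (the function name is an assumption):

```python
import math

def beam_direction(theta, phi):
    """Unit direction vector for polar angle theta (measured from the z-axis)
    and azimuth phi (measured from the x-axis in the XY plane)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

d = beam_direction(0.0, 0.0)  # theta = 0: beam propagates along the z-axis
```

Steering the beam then amounts to changing θ or φ, which the control unit achieves by adjusting the refractive index parameter of the optical element rather than by mechanical rotation.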
The control unit 250:
the control unit 250 is used for controlling the refractive index parameter of the optical element 230 to change the propagation direction of the light beam with a single polarization state.
The control unit 250 may generate a control signal, which may be a voltage signal or a radio frequency driving signal. The refractive index parameter of the optical element 230 may be changed by the control signal, so as to change the emitting direction of the light beam with a single polarization state received by the optical element 230.
The receiving unit 240:
the receiving unit 240 is used for receiving the reflected light beam of the target object.
The reflected beam of the target object is a beam obtained by reflecting the single polarization state beam by the target object.
Specifically, the light beam of a single polarization state may be irradiated to the surface of the target object after passing through the optical element 230, and a reflected light beam may be generated due to reflection by the surface of the target object, and may be received by the receiving unit 240.
The receiving unit 240 may specifically include a receiving lens 241 and a sensor 242, and the receiving lens 241 is used for receiving the reflected light beam and converging the reflected light beam to the sensor 242.
In the embodiment of the application, because the optical element steers the light beam to different directions when its birefringence differs, the propagation direction of the light beam can be adjusted by controlling the birefringence parameter of the optical element. This adjusts the propagation direction without mechanical rotation, enables discrete scanning of the light beam, and allows more flexible depth or distance measurement of the surrounding environment and the target object.
That is to say, in the embodiment of the present application, controlling the refractive index parameter of the optical element 230 changes the spatial angle of the single-polarization-state light beam, so that the optical element 230 can deflect its propagation direction and output an outgoing beam whose scanning direction and scanning angle meet the requirements. Discrete scanning can thus be achieved, scanning flexibility is high, and the ROI can be quickly located.
Optionally, the control unit 250 is further configured to: and generating a depth map of the target object according to the TOF corresponding to the laser beam.
The TOF corresponding to the laser beam may specifically refer to time difference information between a time when the reflected light beam corresponding to the laser beam is received by the receiving unit and a time when the laser beam is emitted from the laser light source. The reflected light beam corresponding to the laser beam may specifically refer to a light beam generated after the laser beam reaches the target object after being processed by the polarization filter device and the optical element and is reflected by the target object.
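The depth map is built from per-point TOF values. A minimal sketch of the underlying distance computation, assuming timestamps in nanoseconds (the interface is illustrative, not from the patent):

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(t_emit_ns: float, t_receive_ns: float) -> float:
    """Distance to the target from the round-trip time of flight:
    d = c * dt / 2, since the beam travels to the target and back."""
    dt = (t_receive_ns - t_emit_ns) * 1e-9
    return C * dt / 2.0
```

A 10 ns round trip corresponds to roughly 1.5 m of depth.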
FIG. 29 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
As shown in fig. 29, the TOF depth sensing module 200 further includes: the collimating lens 260, the collimating lens 260 is located between the laser light source 210 and the polarization filter device 220, the collimating lens 260 is used for collimating the laser beam; the polarization filter device 220 is used for filtering the light beam after the collimation processing of the collimating lens, so as to obtain the light beam in a single polarization state.
Optionally, the light emitting area of the laser light source 210 is less than or equal to 5 × 5 mm².
Optionally, the clear aperture of the collimating lens is less than or equal to 5 mm.
Because the laser light source and the collimating lens are small, the TOF depth sensing module containing these devices (the laser light source and the collimating lens) is easy to integrate into a terminal device, and the space it occupies in the terminal device can be reduced to a certain extent.
Optionally, the average output optical power of the TOF depth sensing module 200 is less than or equal to 800 mW.
When the average output optical power of the TOF depth sensing module is less than or equal to 800 mW, the power consumption of the TOF depth sensing module is small, which makes it convenient to arrange in terminal devices and other power-sensitive equipment.
FIG. 30 is a schematic diagram of a TOF depth sensing module of an embodiment of the present application scanning a target object.
As shown in fig. 30, the optical element 230 emits outgoing beam 1 at time t0. If the scanning direction and scanning angle need to change at time t1, the optical element can be directly controlled to emit outgoing beam 2, and at the next time t2 it can likewise be controlled to emit outgoing beam 3. The TOF depth sensing module 200 can therefore directly output emergent beams in different directions at different times, thereby realizing scanning of the target object.
The effect of the TOF depth sensing module 200 to achieve discrete scanning is described in detail below with reference to fig. 31.
Fig. 31 is a schematic diagram of a scanning trajectory of a TOF depth sensing module according to an embodiment of the present application.
As shown in fig. 31, the TOF depth sensing module may start scanning from a scanning point a, and when scanning needs to be performed by switching from the scanning point a to a scanning point B, the optical element 230 may be directly controlled by the control unit 250, so that the emergent light beam is directly irradiated to the scanning point B without gradually moving from the scanning point a to the scanning point B (without moving from a to B along a dotted line between AB in the figure). Likewise, when it is necessary to switch from scanning point B to scanning point C for scanning, the optical element 230 may also be controlled by the control unit 250 so that the outgoing light beam is directly irradiated to the scanning point C without being gradually moved from the scanning point B to the scanning point C (without being moved from B to C along the broken line between BC in the figure).
Therefore, the TOF depth sensing module 200 can achieve discrete scanning, has high scanning flexibility, and can be quickly positioned to a region to be scanned.
Because the TOF depth sensing module 200 can realize discrete scanning, it can adopt a variety of scanning trajectories when scanning a given region; the choice of scanning mode is more flexible, and the timing-control design of the TOF depth sensing module 200 is also easier.
The scanning method of the TOF depth sensing module 200 will be described with reference to fig. 32, taking a 3 × 3 two-dimensional lattice as an example.
Fig. 32 is a schematic diagram of a scanning manner of the TOF depth sensing module according to an embodiment of the present application.
As shown in fig. 32, the TOF depth sensing module 200 may start scanning at the point at the upper left corner of the two-dimensional lattice and end scanning at the point at the lower right corner; such scanning modes include scanning mode A to scanning mode F. Instead of starting from the point at the upper left corner, scanning may also start from the central point of the two-dimensional lattice and continue until all points of the lattice have been scanned, completing the whole scan of the two-dimensional lattice; these scanning modes include scanning mode G to scanning mode J.
Further, the scanning may be started from any point in the two-dimensional array until the scanning of all the points of the two-dimensional array is completed. As shown in the scanning mode K in fig. 32, the scanning may be started from the first row and the second column of the two-dimensional array until the central point in the two-dimensional array is scanned, so as to complete the whole scanning of the two-dimensional array lattice.
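A small sketch of why discrete scanning makes the trajectory a free choice: any permutation of the lattice points is a valid scan order, since the beam can jump directly between points. The helper names below are illustrative, not from the patent:

```python
from itertools import product

def raster_order(rows: int, cols: int):
    """Row-by-row order starting at the upper-left point (akin to mode A)."""
    return [(r, c) for r, c in product(range(rows), range(cols))]

def custom_order(rows: int, cols: int, start):
    """Discrete scanning permits any visiting order: here, start from an
    arbitrary lattice point, then visit the remaining points in raster
    order, still covering every point exactly once."""
    pts = raster_order(rows, cols)
    pts.remove(start)
    return [start] + pts
```

For the 3 × 3 lattice, custom_order(3, 3, (1, 1)) begins at the central point, analogous to scanning modes G to J.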
Alternatively, the optical element 230 may be any one of a liquid crystal polarization grating, an optical phased array, an electro-optic device, and an acousto-optic device.
The detailed structure of the optical element 230 will be described in detail below with reference to the drawings.
In the first case: the optical element 230 is a Liquid Crystal Polarization Grating (LCPG). In the first case, the birefringence of the optical element 230 is controllable, and the optical element is capable of steering a single polarization state of the light beam to different directions when the birefringence of the optical element is different.
The liquid crystal polarization grating is a novel grating device based on the geometric phase principle, acts on circularly polarized light, and has electro-optical adjustability and polarization adjustability.
The liquid crystal polarization grating is formed by a periodic arrangement of liquid crystal molecules. A common fabrication method is to use a photo-alignment technique to impose a linear periodic gradient of the liquid crystal director (the long-axis direction of the liquid crystal molecules) along one direction. Circularly polarized light can be diffracted to the +1 or -1 order by controlling the polarization state of the incident light, and beam deflection is achieved by switching between the diffraction orders and the zero order.
FIG. 33 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
As shown in fig. 33, the optical element 230 is a liquid crystal polarization grating, and the control unit 250 can control the laser light source to emit a laser beam to the liquid crystal polarization grating, and control the liquid crystal polarization grating to deflect the direction of the laser beam through a control signal, so as to obtain an outgoing beam.
Optionally, the liquid crystal polarization grating includes a horizontal LCPG element and a vertical LCPG element.
FIG. 34 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
As shown in fig. 34, the liquid crystal polarization grating is composed of horizontal LCPG elements and vertical LCPG elements, and discrete random scanning in the horizontal direction can be realized by the horizontal LCPG elements, and discrete random scanning in the vertical direction can be realized by the vertical LCPG elements. When the LCPG component in the horizontal direction and the LCPG component in the vertical direction are combined together, two-dimensional discrete random scanning in the horizontal direction and the vertical direction can be realized.
It should be understood that fig. 34 only shows the case where the horizontal LCPG is in front and the vertical LCPG is behind (the distance between the horizontal LCPG and the laser light source is smaller than the distance between the vertical LCPG and the laser light source). In fact, in the present application, the liquid crystal polarization grating may also be arranged with the vertical LCPG in front of the horizontal LCPG (the distance between the vertical LCPG and the laser light source is smaller than the distance between the horizontal LCPG and the laser light source).
In the application, when the liquid crystal polarization grating comprises the LCPG component in the horizontal direction and the LCPG component in the vertical direction, two-dimensional discrete random scanning in the horizontal direction and the vertical direction can be realized.
Optionally, in the first case, the liquid crystal polarization grating may further include a transverse polarization control plate and a longitudinal polarization control plate.
When the liquid crystal polarization grating comprises the polarization control plate, the control of the polarization state of the light beam can be realized.
Fig. 35 is a schematic structural diagram of a liquid crystal polarization grating according to an embodiment of the present application.
As shown in fig. 35, the liquid crystal polarization grating includes not only the lateral LCPG and the longitudinal LCPG but also a lateral polarization control plate and a longitudinal polarization control plate. In fig. 35, the transverse direction LCPG is positioned between the transverse direction polarization control sheet and the longitudinal direction polarization control sheet, and the longitudinal direction polarization control sheet is positioned between the transverse direction LCPG and the longitudinal direction LCPG.
Fig. 36 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present disclosure.
As shown in fig. 36, the liquid crystal polarization grating in the TOF depth sensing module has a structure as shown in fig. 35, and distances between the transverse polarization control sheet, the transverse LCPG, the longitudinal polarization control sheet, and the longitudinal LCPG and the laser light source become larger in sequence.
Alternatively, there may be several combinations of the components in the liquid crystal polarization grating shown in fig. 35 described above.
Combination mode 1: 124;
combination mode 2: 342;
combination mode 3: 3412.
in the above-mentioned combination mode 1, the digit 1 may denote a transverse polarization control sheet and a longitudinal polarization control sheet closely attached to each other; the two closely attached sheets then act as a single polarization control sheet, so combination mode 1 uses 1 to represent the pair of closely attached transverse and longitudinal polarization control sheets. Similarly, in combination mode 2, the digit 3 may denote a transverse polarization control sheet and a longitudinal polarization control sheet closely attached to each other, the two sheets likewise acting as a single polarization control sheet, so combination mode 2 uses 3 to represent the closely attached pair.
When the optical element 230 of combination 1 or combination 2 is placed in the TOF depth sensing module, the transverse polarization control plate or the longitudinal polarization control plate is located on the side close to the laser light source, and the transverse LCPG and the longitudinal LCPG are located on the side far from the laser light source.
When the optical element 230 of combination 3 is placed in the TOF depth sensing module, the distances between the longitudinal polarization control sheet, the longitudinal LCPG, the transverse polarization control sheet, and the transverse LCPG and the laser light source are sequentially increased.
It should be understood that the above three combinations of the liquid crystal polarization grating and the combination in fig. 35 are only examples; in fact, the components of the optical element in the present application may be combined in other ways, as long as the distance between the transverse polarization control sheet and the laser light source is less than the distance between the transverse LCPG and the laser light source, and the distance between the longitudinal polarization control sheet and the laser light source is less than the distance between the longitudinal LCPG and the laser light source.
As shown in fig. 37, by inputting a periodic control signal (in fig. 37, the period of the control signal is Λ) to the liquid crystal polarization grating, the physical properties of the liquid crystal polarization grating can be periodically changed, and specifically, the arrangement of the liquid crystal molecules in the liquid crystal polarization grating can be changed (the liquid crystal molecules are generally rod-shaped, and the orientation of the liquid crystal molecules is changed by the influence of the control signal), so that the deflection of the direction of the laser beam can be realized.
When the liquid crystal polarization grating is combined with a polarizer, control of the light beam in different directions can be realized.
As shown in fig. 38, by applying voltage control to the combination of a left-/right-handed circular polarizer and the LCPG, the incident light can be steered in three different directions, and the deflection angle of the emergent light can be determined from the following diffraction grating equation:

sin θ_m = sin θ + mλ/Λ

In this diffraction grating equation, θ_m is the direction angle of the m-order emergent light, λ is the laser wavelength, Λ is the period of the LCPG, and θ is the incident angle of the incident light. The deflection angle θ_m therefore depends on the LCPG grating period, the wavelength, and the incident angle, where m can only take the values 0 and ±1. When m is 0, the direction is not deflected; m = 1 denotes leftward (counterclockwise) deflection relative to the incident direction; and m = -1 denotes rightward (clockwise) deflection relative to the incident direction (the meanings of m = +1 and m = -1 may be interchanged).
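For a rough numerical check of the grating equation, the sketch below evaluates θ_m; the 940 nm wavelength and 10 µm grating period used in the test are illustrative assumptions, not values from the patent:

```python
import math

def outgoing_angle_deg(m: int, wavelength_nm: float, period_nm: float,
                       incident_deg: float) -> float:
    """Diffraction grating equation sin(theta_m) = sin(theta) + m*lambda/Lambda,
    with m restricted to 0 or +/-1 as for an LCPG."""
    assert m in (-1, 0, 1)
    s = math.sin(math.radians(incident_deg)) + m * wavelength_nm / period_nm
    return math.degrees(math.asin(s))
```

At normal incidence, m = +1 and m = -1 give symmetric deflections to either side, and m = 0 leaves the beam undeflected.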
A single LCPG can realize deflection at 3 angles and thus produce emergent beams at 3 angles, so multiple layers of LCPG can be cascaded to obtain emergent beams at more angles. A combination of N layers of polarization control sheets (which control the polarization of the incident light to switch between left-handed and right-handed light) and N layers of LCPG can therefore theoretically realize 3^N deflection angles.
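A quick enumeration illustrating the 3^N count: each of the N cascaded LCPG layers contributes a diffraction order of -1, 0, or +1, giving 3^N order combinations (these map to 3^N distinct angles when the layer periods are chosen appropriately). This sketch is illustrative, not from the patent:

```python
from itertools import product

def reachable_orders(n_layers: int):
    """All per-layer diffraction-order choices for N cascaded LCPG layers:
    each layer independently contributes -1, 0 or +1, so there are 3**N
    combinations in total."""
    return list(product((-1, 0, 1), repeat=n_layers))
```

For example, two layers yield 9 combinations and three layers yield 27.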
For example, as shown in fig. 35, the optical element of the TOF depth sensing module is composed of devices 1, 2, 3 and 4, where the devices 1, 2, 3 and 4 respectively represent a transverse polarization control plate, a transverse LCPG, a longitudinal polarization control plate and a longitudinal LCPG, and the control of the deflection direction and angle of the light beam can be achieved by controlling the voltages of the respective sets of polarization control plates and LCPGs.
Taking the implementation of the point scan of 3 × 3 as an example, applying the voltage signals shown in fig. 39 to the devices 1, 2, 3, and 4 shown in fig. 36 respectively (1, 2, 3, and 4 in fig. 39 represent the voltage signals applied to the devices 1, 2, 3, and 4 shown in fig. 36 respectively), the laser beam emitted by the laser source can be controlled to implement the scanning track shown in fig. 40.
Specifically, assuming that the incident light is left-handed circularly polarized light, the transverse LCPG is deflected to the left by the incidence of left-handed light, and the longitudinal LCPG is deflected downward by the incidence of left-handed light. The deflection direction of the light beam at each instant is described in detail below.
When the two ends of the transverse polarization control plate are high-voltage signals, the polarization state of the light beam passing through the transverse polarization control plate is unchanged, and when the two ends of the transverse polarization control plate are low-voltage signals, the polarization state of the light beam passing through the transverse polarization control plate is changed. Similarly, when the two ends of the longitudinal polarization control plate are high-voltage signals, the polarization state of the light beam passing through the longitudinal polarization control plate is unchanged, and when the two ends of the longitudinal polarization control plate are low-voltage signals, the polarization state of the light beam passing through the longitudinal polarization control plate is changed.
At the time 0, the incident light of the device 1 is left-handed circularly polarized light, and right-handed circularly polarized light is emitted after passing through the device 1 due to the low voltage applied by the device 1; the incident light of the device 2 is right-handed circularly polarized light, and the emergent light is still right-handed circularly polarized light after passing through the device 2 due to the high voltage applied by the device 2; the incident light of the device 3 is right-handed circularly polarized light, and because a low voltage is applied to the device 3, left-handed circularly polarized light is emitted after passing through the device 3; the incident light of the device 4 is left-handed circularly polarized light, and left-handed circularly polarized light is still emitted after passing through the device 4 due to the high voltage applied by the device 4. Therefore, at time 0, after passing through the devices 1 to 4, the direction of the incident light is unchanged, and the polarization state is also unchanged. As shown in fig. 40, the scanning point corresponding to time 0 is located at the center of fig. 40.
At the time t0, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is still left circularly polarized light due to the high voltage applied by the device 1; the incident light of the device 2 is left-handed circularly polarized light, and the left-handed circularly polarized light is emitted after passing through the device 2 due to the low voltage applied by the device 2; the incident light of the device 3 is right-handed circularly polarized light deflected to the left, and the left-handed circularly polarized light deflected to the left is emitted after passing through the device 3 due to the low voltage applied by the device 3; the incident light of the device 4 is left-handed circularly polarized light deflected leftwards, and because the device 4 applies high voltage, the left-handed circularly polarized light deflected leftwards is emitted after passing through the device 4; that is, the light beam emitted through the device 4 at time t0 is deflected to the left with respect to time 0, and the corresponding scanning point is the position shown at t0 in fig. 40.
At the time t1, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is still left circularly polarized light due to the high voltage applied by the device 1; the incident light of the device 2 is left-handed circularly polarized light, and the left-handed circularly polarized light is emitted after passing through the device 2 due to the low voltage applied by the device 2; the incident light of the device 3 is rightwards deflected circularly polarized light, and the rightwards deflected circularly polarized light is emitted after passing through the device 3 due to the high voltage applied by the device 3; the incident light of the device 4 is right-handed circularly polarized light deflected to the left, and due to the low voltage applied by the device 4, the left-handed circularly polarized light deflected to the left and deflected upwards is emitted after passing through the device 4; that is, the light beam after exiting through the device 4 at time t1 is deflected leftward and upward with respect to time 0, and the corresponding scanning point is the position shown at t1 in fig. 40.
At time t2, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is right circularly polarized light due to the low voltage applied by the device 1; the incident light of the device 2 is right-handed circularly polarized light, and the emergent light after passing through the device 2 is still right-handed circularly polarized light due to the high voltage applied by the device 2; the incident light of the device 3 is right-handed circularly polarized light, and the emergent light after passing through the device 3 is still right-handed circularly polarized light due to the high voltage applied by the device 3; the incident light of the device 4 is right-handed circularly polarized light, and the low voltage is applied by the device 4, so that the upward deflected left-handed circularly polarized light is emitted after passing through the device 4; that is, the light beam that has exited the device 4 at time t2 is deflected upward relative to time 0, and the corresponding scanning point is the position shown at t2 in fig. 40.
At time t3, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is right circularly polarized light due to the low voltage applied by the device 1; the incident light of the device 2 is right-handed circularly polarized light, and the right-handed circularly polarized light deflected to the right is emitted after passing through the device 2 due to the low voltage applied by the device 2; the incident light of the device 3 is rightwards deflected right-handed circularly polarized light, and the outgoing light is rightwards deflected left-handed circularly polarized light after passing through the device 3 due to the low voltage applied by the device 3; the incident light of the device 4 is left-circularly polarized light deflected rightwards, and due to the low voltage applied by the device 4, the left-circularly polarized light deflected rightwards and deflected upwards is emitted after passing through the device 4; that is, the light beam after exiting through the device 4 at time t3 is deflected rightward and upward with respect to time 0, and the corresponding scanning point is the position shown at t3 in fig. 40.
At time t4, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is right circularly polarized light due to the low voltage applied by the device 1; the incident light of the device 2 is right-handed circularly polarized light, and the left-handed circularly polarized light deflected rightwards is emitted after passing through the device 2 due to the low voltage applied by the device 2; the incident light of the device 3 is left-handed circularly polarized light deflected rightwards, and the right-handed circularly polarized light deflected rightwards is emitted after passing through the device 3 due to the low voltage applied by the device 3; the incident light of the device 4 is rightwards deflected right-handed circularly polarized light, and because the device 4 applies high voltage, right-handed circularly polarized light which is still rightwards deflected is emitted after passing through the device 4; that is, the light beam emitted from the device 4 at time t4 is deflected to the right with respect to time 0, and the corresponding scanning point is the position shown at t4 in fig. 40.
At time t5, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is right circularly polarized light due to the low voltage applied by the device 1; the incident light of the device 2 is right-handed circularly polarized light, and the right-handed circularly polarized light deflected to the right is emitted after passing through the device 2 due to the low voltage applied by the device 2; the incident light of the device 3 is rightwards deflected right-handed circularly polarized light, and the emergent light after passing through the device 3 is still rightwards deflected right-handed circularly polarized light due to the high voltage applied by the device 3; the incident light of the device 4 is right-handed circularly polarized light deflected to the right, and due to the low voltage applied by the device 4, the left-handed circularly polarized light deflected to the right and deflected downwards exits after passing through the device 4; that is, the light beam after exiting through the device 4 at time t5 is deflected rightward and downward with respect to time 0, and the corresponding scanning point is the position shown at t5 in fig. 40.
At time t6, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is right circularly polarized light due to the low voltage applied by the device 1; the incident light of the device 2 is right-handed circularly polarized light, and the emergent light after passing through the device 2 is still right-handed circularly polarized light due to the high voltage applied by the device 2; the incident light of the device 3 is right-handed circularly polarized light, and the low voltage is applied to the device 3, so that the left-handed circularly polarized light is emitted after passing through the device 3; the incident light of the device 4 is left-handed circularly polarized light, and the low voltage is applied to the device 4, so that the right-handed circularly polarized light deflected downwards exits after passing through the device 4; that is, the light beam emitted from the device 4 at time t6 is deflected downward relative to time 0, and the corresponding scanning point is the position shown at t6 in fig. 40.
At the time t7, the incident light of the device 1 is left circularly polarized light, and the light emitted after passing through the device 1 is still left circularly polarized light due to the high voltage applied by the device 1; the incident light of the device 2 is left-handed circularly polarized light, and the left-handed circularly polarized light is emitted after passing through the device 2 due to the low voltage applied by the device 2; the incident light of the device 3 is right-handed circularly polarized light deflected to the left, and the left-handed circularly polarized light deflected to the left is emitted after passing through the device 3 due to the low voltage applied by the device 3; the incident light of the device 4 is left-handed circularly polarized light deflected to the left, and due to the low voltage applied by the device 4, the right-handed circularly polarized light deflected to the left and deflected downwards is emitted after passing through the device 4; that is, the light beam after exiting through the device 4 at time t7 is deflected leftward and downward with respect to time 0, and the corresponding scanning point is the position shown at t7 in fig. 40.
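The eight time instants above can be condensed into a small simulation. This is a simplified model reconstructed from the walkthrough; the conventions (high voltage keeps the state, low voltage switches it; left-handed light diffracts toward left/down with a handedness flip) are assumptions chosen to be consistent with fig. 40, not statements from the patent:

```python
LEFT, RIGHT = "L", "R"

def sheet(pol: str, voltage_high: bool) -> str:
    """Polarization control sheet: high voltage keeps the handedness,
    low voltage converts left-handed <-> right-handed."""
    return pol if voltage_high else (RIGHT if pol == LEFT else LEFT)

def lcpg(pol: str, voltage_high: bool):
    """LCPG stage: high voltage passes the beam undeflected (zero order);
    low voltage diffracts it to the +/-1 order selected by the incident
    handedness (left -> -1, i.e. left/down; right -> +1, i.e. right/up)
    and flips the handedness."""
    if voltage_high:
        return pol, 0
    return (RIGHT if pol == LEFT else LEFT), (-1 if pol == LEFT else 1)

def emit(v1: bool, v2: bool, v3: bool, v4: bool):
    """Devices 1-4 of FIG. 36 in order: transverse sheet, transverse LCPG,
    longitudinal sheet, longitudinal LCPG; incident light is assumed
    left-handed circularly polarized.  Returns (dx, dy) grid offsets."""
    pol = sheet(LEFT, v1)
    pol, dx = lcpg(pol, v2)   # horizontal deflection
    pol = sheet(pol, v3)
    pol, dy = lcpg(pol, v4)   # vertical deflection
    return dx, dy
```

With these conventions, emit(False, True, False, True) reproduces the undeflected time-0 point, and emit(True, False, False, True) the leftward point t0.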
It should be understood that the possible scanning trajectories of the TOF depth sensing module are only described herein with reference to fig. 39 and 40, and that any discrete random scanning can be achieved by varying the voltages controlling the sets of polarization control plates and LCPGs.
For example, the various scanning trajectories shown in fig. 32 can be realized by changing the voltages controlling the sets of polarization control plates and the LCPG.
When scanning a target object with a conventional lidar, it is often necessary to first perform a coarse scan of the target region and then, once the region of interest (ROI) is found, perform a finer scan (fine scan) at a higher resolution. Because the TOF depth sensing module of the embodiment of the application can realize discrete scanning, it can locate the region of interest directly and perform the fine scan there, which greatly saves the time required for fine scanning.
For example, as shown in fig. 41, the total number of points of the region to be scanned (the entire rectangular region including the contour of the human body) is M, and ROI (the image region located within the contour image of the human body in fig. 41) occupies 1/N of the total area of the region to be scanned.
In scanning the region to be scanned shown in fig. 41, it is assumed that the point scanning rates of the conventional lidar and the TOF depth sensing module of the embodiment of the present application are both K points/second, that fine scanning is required in the ROI, and that the resolution during fine scanning is increased by a factor of four (i.e., four times the point density). Let t1 be the time required to complete the fine scan of the ROI using the TOF depth sensing module of the embodiment of the present application, and t2 the time required to complete the fine scan using the conventional lidar. Because the TOF depth sensing module of the embodiment of the application can realize discrete scanning, it can directly locate the ROI and fine-scan only the ROI, so the required scanning time is short. The conventional lidar, however, performs linear scanning and can hardly locate the ROI accurately, so it must fine-scan the entire region to be scanned, which greatly increases the scanning time. As shown in fig. 42, the TOF depth sensing module of the embodiment of the present application can directly locate the ROI and perform fine scanning on it (as can be seen from fig. 42, the density of scanning points inside the ROI is significantly greater than that outside the ROI).
In addition, t1 and t2 can be calculated using the following formulas (2) and (3), respectively.
t1 = (4 × M/N)/(4K) = M/(N × K)    (2)

t2 = (4 × M)/(4K) = M/K    (3)
As can be seen from formulas (2) and (3), the time required to fine-scan the ROI with the TOF depth sensing module of the embodiment of this application is only 1/N of that required by the conventional laser radar, so the time required for the fine scan is greatly shortened.
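The scan-time comparison above can be sketched numerically. This is one consistent reading of the text (the fine scan collects four times as many points, at a rate of 4K points/second); the values of M, N, and K below are illustrative placeholders, not figures from the patent.

```python
# Sketch of formulas (2) and (3): fine-scan time of the discrete-scanning
# TOF module (ROI only) vs. a conventional line-scanning lidar (whole area).
# Assumption: fine scanning takes 4x the points, collected at 4K points/s.

def fine_scan_times(M, N, K):
    """Return (t1, t2) in seconds for M total points, ROI = 1/N of the
    area, and a base point rate of K points/second."""
    t1 = (4 * M / N) / (4 * K)   # formula (2): only the ROI's points
    t2 = (4 * M) / (4 * K)       # formula (3): the whole region's points
    return t1, t2

t1, t2 = fine_scan_times(M=1_000_000, N=10, K=100_000)
# t1 is 1/N of t2, matching the conclusion drawn from the formulas.
```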
Because the TOF depth sensing module of the embodiment of this application can perform discrete scanning, it can fine-scan ROI regions of arbitrary shape (vehicles, people, buildings, and random patches), in particular asymmetric regions and discrete ROI blocks. In addition, the TOF depth sensing module can also produce either a uniform or a non-uniform distribution of the point density over the scanned region.
In the second case: the optical element 230 is an electro-optical device.
In the second case, when the optical element 230 is an electro-optical device, the control signal may be a voltage signal. The voltage signal changes the refractive index of the electro-optical device, so that the electro-optical device deflects the laser beam in different directions without changing its position relative to the laser light source, thereby producing an outgoing beam whose scanning direction matches the control signal.
Alternatively, as shown in fig. 43, the above-described electro-optical device may include a lateral electro-optical crystal (a horizontally deflected electro-optical crystal) and a longitudinal electro-optical crystal (a vertically deflected electro-optical crystal). The transverse electro-optical crystal can deflect the laser beam in the horizontal direction, and the longitudinal electro-optical crystal can deflect the laser beam in the vertical direction.
Alternatively, the electro-optical crystal may specifically be any one of potassium tantalate niobate (KTN) crystal, deuterated potassium dihydrogen phosphate (DKDP) crystal, and Lithium Niobate (LN) crystal.
The working principle of the electro-optical crystal is briefly described below with reference to the accompanying drawings.
As shown in fig. 44, when a voltage signal is applied to the electro-optical crystal, the second-order (Kerr) electro-optic effect produces a refractive index difference in the crystal (that is, different regions of the crystal have different refractive indexes), so that the incident beam is deflected; as shown in fig. 44, the outgoing beam is deflected by a certain angle relative to the direction of the incident beam.
The deflection angle of the outgoing beam with respect to the incoming beam can be calculated according to the following equation (4).
θmax = -(1/2) · n³ · Emax² · (∂g11y/∂y)    (4)

In the above formula (4), θmax represents the maximum deflection angle of the outgoing beam relative to the incident beam, n is the refractive index of the electro-optic crystal, g11y is a second-order electro-optic coefficient, Emax represents the maximum electric field strength that can be applied to the electro-optic crystal, and ∂g11y/∂y is the gradient of the second-order electro-optic coefficient in the y-direction.
As can be seen from the above equation (4), the deflection angle of the light beam can be controlled by adjusting the intensity of the applied electric field (i.e. adjusting the voltage applied to the electro-optical crystal), so as to scan the target region. In addition, to achieve larger deflection angles, multiple electro-optic crystals may be cascaded.
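The two control knobs named above (quadratic dependence on the applied field, and cascading crystals for larger angles) can be sketched as follows. The coefficient values are illustrative placeholders, not measured crystal data, and the formula is used here only up to its overall sign.

```python
# Hedged sketch of formula (4): a Kerr-type deflection scales with the
# square of the applied field, so the angle is steered by the drive voltage.

def eo_deflection_angle(n, e_field, dg11_dy):
    """Deflection angle (rad) ~ (1/2) * n^3 * E^2 * d(g11y)/dy."""
    return 0.5 * n**3 * e_field**2 * dg11_dy

def cascaded_deflection(per_stage_angles):
    """Cascading several crystals: small-angle deflections add up."""
    return sum(per_stage_angles)

# Placeholder parameters: doubling the field quadruples the angle.
a1 = eo_deflection_angle(n=2.2, e_field=1e6, dg11_dy=1e-16)
a2 = eo_deflection_angle(n=2.2, e_field=2e6, dg11_dy=1e-16)
```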
As shown in fig. 43, the optical element includes a horizontal-deflection electro-optical crystal and a vertical-deflection electro-optical crystal, which are responsible for beam deflection in the horizontal and vertical directions, respectively. After the control voltage signals shown in fig. 45 are applied, a 3 × 3 scan can be realized, as shown in fig. 46. Specifically, in fig. 45, 1 and 2 denote the control voltage signals applied to the horizontal-deflection and vertical-deflection electro-optical crystals, respectively.
In the third case: the optical element 230 is an acousto-optic device.
As shown in fig. 47, the optical element 230 is an acousto-optic device, which may include a transducer. When the optical element 230 is an acousto-optic device, the control signal may specifically be a radio frequency control signal. The radio frequency control signal drives the transducer to generate sound waves of different frequencies, which change the refractive index of the acousto-optic device, so that the acousto-optic device deflects the laser beam in different directions without changing its position relative to the laser light source, thereby producing an outgoing beam whose scanning direction matches the control signal.
As shown in fig. 48, the acousto-optic device includes an acoustic absorber, a quartz medium, and a piezoelectric transducer. When the acousto-optic device receives an electric signal, the piezoelectric transducer generates an acoustic wave. As the acoustic wave propagates through the device, it modulates the refractive index distribution of the quartz, forming a grating that deflects the incident beam by a certain angle. When the control signals input at different moments differ, the acousto-optic device produces outgoing beams in different directions; as shown in fig. 48, the deflection directions of the outgoing beams at times T0, T1, T2, T3, and T4 may differ.
When the electric signal applied to the acousto-optic device is periodic, the refractive index distribution of the quartz also changes periodically, forming a periodic grating that can deflect the incident beam periodically.
In addition, the intensity of the emergent light of the acousto-optic device is directly related to the power of the radio frequency control signal input to the acousto-optic device, and the diffraction angle of the incident light beam is also directly related to the frequency of the radio frequency control signal. By changing the frequency of the radio frequency control signal, the angle of the emergent beam can be adjusted accordingly. Specifically, the deflection angle of the emitted light beam with respect to the incident light beam can be determined according to the following formula (5).
θ = λ · fs / vs    (5)

In the above formula (5), θ is the angle by which the outgoing beam is deflected relative to the incident beam, λ is the wavelength of the incident beam, fs is the frequency of the radio frequency control signal, and vs is the velocity of the acoustic wave. Therefore, this optical deflector can scan the laser beam over a wide angular range and can precisely control the emission angle of the laser beam.
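Formula (5) is simple enough to evaluate directly. A minimal numeric sketch follows; the 940 nm wavelength matches a value mentioned elsewhere in this application, while the 100 MHz drive frequency and 6000 m/s acoustic velocity are illustrative placeholders.

```python
def ao_deflection_angle(wavelength_m, rf_freq_hz, acoustic_velocity_m_s):
    """Formula (5): theta = lambda * f_s / v_s (radians, small-angle)."""
    return wavelength_m * rf_freq_hz / acoustic_velocity_m_s

# Illustrative numbers: 940 nm beam, 100 MHz RF drive, ~6000 m/s acoustic
# velocity in the medium. Doubling f_s doubles the deflection angle.
theta = ao_deflection_angle(940e-9, 100e6, 6000.0)  # ~0.0157 rad (~0.9 deg)
```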
In a fourth case: optical element 230 is an Optical Phased Array (OPA) device.
The following describes in detail the case where the optical element 230 is an OPA device with reference to fig. 49 and 50.
As shown in fig. 49, the optical element 230 is an OPA device, and the optical element can deflect an incident light beam to obtain an emergent light beam with a scanning direction matched with a control signal.
An OPA device generally consists of a one-dimensional or two-dimensional phase shifter array. When there is no phase difference between the phase shifters, light reaches the equiphase plane at the same time and propagates forward without mutual interference, so no beam deflection occurs.
After a phase difference is applied across the phase shifters (take uniform phase differences as an example: the phase difference between the second waveguide and the first is Δ, that between the third waveguide and the first is 2Δ, and so on), the equiphase plane is no longer perpendicular to the waveguide direction but is tilted by a certain angle. Beams that satisfy the equiphase relation interfere constructively, while beams that do not satisfy the equiphase condition cancel each other, so the beam direction is always perpendicular to the equiphase plane.
As shown in fig. 50, if the distance between adjacent waveguides is d, the optical path difference between the beams output from adjacent waveguides on reaching the equiphase plane is ΔR = d·sinθ, where θ is the beam deflection angle. Since this optical path difference is caused by the phase difference between array elements, ΔR = Δ·λ/(2π). A beam can therefore be deflected by introducing a phase difference between the array elements; this is the principle by which an OPA deflects a beam.
Therefore, the deflection angle is θ = arcsin(Δ·λ/(2π·d)). By controlling the phase difference between adjacent phase shifters to be, for example, π/12 or π/6, the beam deflection angle becomes arcsin(λ/(24d)) or arcsin(λ/(12d)), respectively. Thus, by controlling the phases of the phase shifter array, arbitrary two-dimensional deflection can be realized. The phase shifters can be made of liquid crystal material, and applying different voltages to the liquid crystal produces different phase differences.
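The OPA steering relation and the two worked phase steps (π/12 and π/6) can be checked with a short sketch. The wavelength and waveguide pitch below are illustrative placeholders.

```python
import math

def opa_deflection_angle(phase_diff_rad, wavelength_m, pitch_m):
    """OPA steering: theta = arcsin(delta * lambda / (2 * pi * d))."""
    return math.asin(phase_diff_rad * wavelength_m / (2 * math.pi * pitch_m))

# Example from the text: adjacent-element phase steps of pi/12 and pi/6
# give arcsin(lambda/(24 d)) and arcsin(lambda/(12 d)).
lam, d = 940e-9, 2e-6   # placeholder wavelength and waveguide pitch
theta_1 = opa_deflection_angle(math.pi / 12, lam, d)
theta_2 = opa_deflection_angle(math.pi / 6, lam, d)
```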
Optionally, as shown in fig. 51, the TOF depth sensing module 200 further includes:
the collimating lens 260, which is located between the laser light source 210 and the polarization filter device 220 and is used to collimate the laser beam; the polarization filter device 220 filters the beam collimated by the collimating lens 260 to obtain a beam with a single polarization state.
In addition, the collimating lens 260 may also be located between the polarization filter device 220 and the optical element 230; in this case, the polarization filter device 220 first performs polarization filtering on the beam generated by the laser light source to obtain a beam with a single polarization state, and the collimating lens 260 then collimates that beam.
Optionally, the collimating lens 260 may also be located on the far side of the optical element 230 (that is, the distance between the collimating lens 260 and the laser light source 210 is greater than the distance between the optical element 230 and the laser light source 210); in this case, after the optical element 230 adjusts the direction of the beam with the single polarization state, the collimating lens 260 collimates the redirected beam.
The TOF depth sensing module 200 according to the embodiment of the present application is described in detail with reference to fig. 26 to 51, and the image generating method according to the embodiment of the present application is described with reference to fig. 52.
Fig. 52 is a schematic flowchart of an image generation method according to an embodiment of the present application.
The method shown in fig. 52 may be executed by the TOF depth sensing module according to the embodiment of this application or a terminal device including that TOF depth sensing module. Specifically, the method shown in fig. 52 may be performed by the TOF depth sensing module 200 shown in fig. 27 or a terminal device including the TOF depth sensing module 200 shown in fig. 27. The method shown in fig. 52 includes steps 5001 to 5005, which are described in detail below.
5001. Control the laser light source to generate a laser beam.
The laser light source can generate light with various polarization states.
For example, the laser light source may generate light having various polarization states, such as linear polarization, left-handed circular polarization, and right-handed circular polarization.
5002. Filter the laser beam by using the polarization filter device to obtain a beam with a single polarization state.
The single polarization state may be any one of linear polarization, left-handed circular polarization, and right-handed circular polarization.
For example, if in step 5001 the laser beam generated by the laser light source includes linearly polarized light, left-handed circularly polarized light, and right-handed circularly polarized light, then in step 5002 the left-handed and right-handed circularly polarized components may be filtered out, retaining only linearly polarized light in a specific direction. Optionally, the polarization filter device may further include a 1/4 wave plate, so that the retained linearly polarized light is converted into left-handed (or right-handed) circularly polarized light.
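The polarizer-plus-quarter-wave-plate chain in step 5002 can be sketched with standard textbook Jones matrices, using plain complex arithmetic. The input polarization below is an arbitrary illustrative value; this is a model of the optics, not code from the application.

```python
# Jones-calculus sketch of step 5002: a linear polarizer keeps one linear
# component, then a quarter-wave plate with its fast axis at 45 degrees
# turns the result into circularly polarized light.

def polarize_x(jones):
    """Ideal linear polarizer along x: keep Ex, block Ey."""
    ex, ey = jones
    return (ex, 0j)

def qwp_45(jones):
    """Quarter-wave plate, fast axis at 45 deg (up to a global phase)."""
    ex, ey = jones
    return (0.5 * ((1 + 1j) * ex + (1 - 1j) * ey),
            0.5 * ((1 - 1j) * ex + (1 + 1j) * ey))

def filter_to_circular(jones):
    return qwp_45(polarize_x(jones))

out = filter_to_circular((1.0 + 0j, 0.7j))  # arbitrary mixed input
# Circular light: equal |Ex| and |Ey| with a +/-90 deg relative phase.
```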
5003. Control the optical element to have different birefringence parameters at M different moments to obtain M outgoing beams in different directions.
The optical element has controllable birefringence; when its birefringence differs, it deflects the beam with a single polarization state in different directions. M is a positive integer greater than 1. The M reflected beams are the beams obtained after the target object reflects the M outgoing beams in different directions.
At this time, the optical element may be a liquid crystal polarization grating, and specific cases of the liquid crystal polarization grating may be referred to the description of the first case above.
Optionally, that the optical element has different birefringence parameters at the M moments may include the following two cases:

Case 1: the birefringence parameters of the optical element at any two of the M moments are different;

Case 2: among the M moments, there are at least two moments at which the birefringence parameters of the optical element are different.

In case 1, assuming that M is 5, the optical element has 5 different birefringence parameters at the 5 moments.

In case 2, if M is 5, there may be 2 of the 5 moments at which the optical element has different birefringence parameters.
5004. Receive the M reflected light beams by using the receiving unit.
5005. Generate a depth map of the target object according to the TOFs corresponding to the M outgoing beams in different directions.
The TOF corresponding to each of the M outgoing beams in different directions may specifically refer to the time difference between the moment at which the corresponding reflected beam is received by the receiving unit and the moment at which the outgoing beam is emitted.
Assuming that the M outgoing beams in different directions include the outgoing beam 1, the reflected beam corresponding to the outgoing beam 1 may be a beam generated after the outgoing beam 1 reaches the target object and is reflected by the target object.
In the embodiment of this application, because the optical element deflects the beam in different directions when its birefringence differs, the propagation direction of the beam can be adjusted by controlling the birefringence parameter of the optical element. This adjusts the propagation direction of the beam without mechanical rotation, enables discrete scanning of the beam, and allows more flexible depth or distance measurement of the surrounding environment and the target object.
Optionally, generating the depth map of the target object in step 5005 specifically includes the following steps:
5005a. Determine the distances between M regions of the target object and the TOF depth sensing module according to the TOFs corresponding to the M outgoing beams in different directions.

5005b. Generate depth maps of the M regions of the target object according to the distances between the M regions and the TOF depth sensing module, and synthesize the depth map of the target object from the depth maps of the M regions.
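Steps 5005a/5005b can be sketched with the standard round-trip TOF relation d = c·tof/2 (the relation itself is not spelled out in the text). The region layout and TOF values below are made up for illustration.

```python
# Sketch of steps 5005a/5005b: each beam's round-trip TOF gives the
# distance to one region; the regional depth values are then combined.

C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(tof_s):
    """Round-trip time of flight -> distance to the scanned region."""
    return C * tof_s / 2.0

def build_depth_map(region_tofs):
    """region_tofs: {region_id: tof_seconds} for the M scanned regions.
    Returns {region_id: depth_m}; a real module would then stitch these
    regional depth maps into the depth map of the target object."""
    return {rid: tof_to_distance(t) for rid, t in region_tofs.items()}

depths = build_depth_map({0: 6.67e-9, 1: 13.34e-9})  # roughly 1 m and 2 m
```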
In the method shown in fig. 52, the light beam may also be collimated.
optionally, before the step 5002, the method shown in fig. 52 further includes:
5006. Collimate the laser beam to obtain a collimated beam.
After the laser beam is collimated, obtaining the beam with a single polarization state in step 5002 specifically includes: filtering the collimated beam with the polarization filter device to obtain light in a single polarization state.
Collimating the laser beam into an approximately parallel beam before it is filtered by the polarization filter device increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
The collimated beam may be quasi-parallel light with a divergence angle of less than 1 degree.
It should be understood that in the method shown in fig. 52, the light beam in a single polarization state may also be collimated, and specifically, the method shown in fig. 52 further includes:
5007. Collimate the beam with the single polarization state to obtain a collimated beam.
Step 5007 may be located either between step 5002 and step 5003 or between step 5003 and step 5004.
When step 5007 is located between steps 5002 and 5003, the polarization filter device first filters the laser beam generated by the laser light source to obtain a beam with a single polarization state, the collimating lens then collimates that beam, and the optical element finally controls the propagation direction of the collimated beam.
When step 5007 is located between steps 5003 and 5004, the optical element first changes the propagation direction of the beam with the single polarization state, and the collimating lens then collimates the redirected beam to obtain a collimated beam.
It should be understood that in the method shown in fig. 52, steps 5006 and 5007 are optional, and either step 5006 or step 5007 may be performed.
A TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail above with reference to fig. 26 to 52. Another TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail below with reference to fig. 53 to 69.
A conventional TOF depth sensing module usually scans using the pulsed TOF technique, which requires a photodetector sensitive enough to detect single photons. The commonly used photodetector is the single-photon avalanche diode (SPAD); because of the SPAD's complex interface and processing circuitry, the resolution of common SPAD sensors is low and insufficient for the high spatial resolution required by depth sensing. Therefore, the embodiments of this application provide a TOF depth sensing module and an image generation method that improve the spatial resolution of depth sensing through block illumination and time division multiplexing. This type of TOF depth sensing module and image generation method are described in detail below with reference to the accompanying drawings.
The TOF depth sensing module according to an embodiment of the present application will first be briefly described with reference to fig. 53.
FIG. 53 is a schematic diagram of distance measurement using a TOF depth sensing module of an embodiment of the present application.
As shown in fig. 53, the TOF depth sensing module may include a transmitting end (which may also be called a projecting end), a receiving end and a control unit, where the transmitting end is configured to emit an outgoing light beam, the receiving end is configured to receive a reflected light beam of a target object (the reflected light beam is a light beam obtained by reflecting the outgoing light beam by the target object), and the control unit may control the transmitting end and the receiving end to respectively transmit and receive the light beam.
In fig. 53, the transmitting end may generally include a laser light source, a polarization filter device, a collimating lens (optional), a first optical element, and a projection lens (optional), and the receiving end may generally include a receiving lens, a second optical element, and a sensor. In fig. 53, a timing device may be used to record TOF corresponding to the outgoing light beam to calculate the distance from the TOF depth sensing module to the target area, so as to obtain a final depth map of the target object. Here, the TOF corresponding to the outgoing light beam may refer to time difference information between the time at which the reflected light beam is received by the receiving unit and the outgoing time of the outgoing light beam.
As shown in fig. 53, the FOV of the laser beam can be adjusted by the beam shaping device and the first optical element, and different scanning beams can be emitted at times t0-t17. The target FOV is obtained by splicing together the FOVs of the beams emitted at times t0-t17, which improves the resolution of the TOF depth sensing module.
The TOF depth sensing module of the embodiment of this application can be used for 3D image acquisition. It can be installed in an intelligent terminal (for example, a mobile phone, a tablet, or a wearable device) to acquire depth images or 3D images, and can also provide gesture and limb recognition for 3D games or motion-sensing games.
The TOF depth sensing module according to an embodiment of the present application is described in detail below with reference to fig. 54.
FIG. 54 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
The TOF depth sensing module 300 shown in fig. 54 includes a laser light source 310, a polarization filter device 320, a beam shaping device 330, a first optical element 340, a second optical element 350, a receiving unit 360, and a control unit 370. As shown in fig. 54, the transmitting end of the TOF depth sensing module 300 includes the laser light source 310, the polarization filter device 320, the beam shaping device 330, and the first optical element 340, while the receiving end includes the second optical element 350 and the receiving unit 360. The first optical element 340 and the second optical element 350 are located at the transmitting end and the receiving end, respectively: the first optical element mainly controls the direction of the beam at the transmitting end to obtain the outgoing beam, and the second optical element mainly controls the direction of the reflected beam so as to deflect it toward the receiving unit.
These several modules or units in the TOF depth sensing module 300 are described in detail below.
The laser light source 310:
the laser light source 310 is used to generate a laser beam, and particularly, the laser light source 310 is capable of generating light of various polarization states.
Optionally, the laser beam emitted by the laser source 310 is a single quasi-parallel beam, and the divergence angle of the laser beam emitted by the laser source 310 is smaller than 1 °.
Alternatively, the laser light source 310 is a semiconductor laser light source.
The laser light source may be a Vertical Cavity Surface Emitting Laser (VCSEL).
Optionally, the laser light source 310 is a fabry-perot laser (which may be abbreviated as FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and higher electro-optic conversion efficiency, which can improve the scanning effect.
Optionally, the wavelength of the laser beam emitted by the laser light source 310 is greater than 900 nm.
Because the intensity of sunlight at wavelengths above 900 nm is relatively weak, a laser beam with a wavelength greater than 900 nm suffers less interference from sunlight, which can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 310 is 940nm or 1550 nm.
Because the intensity of sunlight near 940 nm and 1550 nm is relatively weak, interference from sunlight is greatly reduced when the wavelength of the laser beam is 940 nm or 1550 nm, which can improve the scanning effect of the TOF depth sensing module.
The light-emitting area of the laser light source 310 is less than or equal to 5 × 5 mm².
Because the size of the laser light source is small, the TOF depth sensing module 300 including the laser light source is relatively easy to integrate into the terminal equipment, and the occupied space in the terminal equipment can be reduced to a certain extent.
Optionally, the average output optical power of the TOF depth sensing module is less than 800 mW.
When the average output optical power of the TOF depth sensing module is less than or equal to 800 mW, its power consumption is small, which makes it convenient to install in terminal equipment and other power-sensitive devices.
Polarization filter device 320:
the polarization filter 320 is used for filtering the laser beam to obtain a single polarization state beam.
The beam with a single polarization state obtained by the polarization filter device 320 has one of the multiple polarization states of the light generated by the laser light source 310.
For example, if the laser beam generated by the laser light source 310 includes linearly polarized light, left-handed circularly polarized light, and right-handed circularly polarized light, the polarization filter device 320 may filter out the left-handed and right-handed circularly polarized components and retain only linearly polarized light in a specific direction. Optionally, the polarization filter device may further include a 1/4 wave plate to convert the retained linearly polarized light into left-handed (or right-handed) circularly polarized light.
Beam shaping device 330:
the beam shaping device 330 is used to adjust the laser beam to obtain the first beam.
The FOV of the first light beam is within the range [5° × 5°, 20° × 20°].
It will be appreciated that the horizontal FOV of the first light beam described above may be between 5 ° and 20 ° (including 5 ° and 20 °), and the vertical FOV of the first light beam may be between 5 ° and 20 ° (including 5 ° and 20 °).
The control unit 370:
the control unit 370 is configured to control the first optical element to control the directions of the first light beams at M different times, respectively, so as to obtain M outgoing light beams in different directions.
The total FOV covered by the M outgoing beams in different directions is within the range [50° × 50°, 80° × 80°].
The control unit 370 is further configured to control the second optical element to deflect M reflected light beams, which are obtained by reflecting the M outgoing light beams in different directions by the target object, to the receiving unit.
In the embodiment of this application, the beam shaping device adjusts the FOV of the beam so that the first beam has a larger FOV, and scanning is performed by time division multiplexing (the first optical element emits outgoing beams in different directions at different times), which improves the spatial resolution of the resulting depth map of the target object.
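The block-illumination idea above reduces to tiling the total FOV with small per-beam FOV blocks, one block per time slot. A minimal sketch follows; the 20° × 20° block and 80° × 80° total are the endpoints of the ranges stated in the text, used here only as an example.

```python
import math

# Sketch of block illumination with time division multiplexing: each
# outgoing beam illuminates one small FOV block, and the M beams emitted
# at different times tile the total FOV.

def blocks_needed(total_fov_deg, block_fov_deg):
    """Number of beam directions (time slots) needed to tile the total FOV.

    total_fov_deg, block_fov_deg: (horizontal_deg, vertical_deg) tuples."""
    h = math.ceil(total_fov_deg[0] / block_fov_deg[0])
    v = math.ceil(total_fov_deg[1] / block_fov_deg[1])
    return h * v

m = blocks_needed((80, 80), (20, 20))  # 4 x 4 = 16 time slots
```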
FIG. 55 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
As shown in fig. 55, the TOF depth sensing module further includes a collimating lens 380 located between the laser light source 310 and the polarization filter device 320; the collimating lens 380 collimates the laser beam, and the polarization filter device 320 filters the beam collimated by the collimating lens 380 to obtain a beam with a single polarization state.
FIG. 56 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application. In fig. 56, the collimating lens 380 may also be located between the polarization filter device 320 and the beam shaping device 330. The collimating lens 380 collimates the beam with the single polarization state, and the beam shaping device 330 adjusts the FOV of the beam collimated by the collimating lens 380 to obtain the first beam.
Collimating the beam with the collimating lens yields an approximately parallel beam, which increases the power density of the beam and further improves the effect of subsequent scanning with the beam.
Optionally, the clear aperture of the collimating lens is less than or equal to 5 mm.
Because the collimating lens is small, the TOF depth sensing module containing it is relatively easy to integrate into terminal equipment and occupies less space in the terminal equipment.
It should be understood that the collimating lens may also be located between the beam shaping device 330 and the first optical element 340; in this case, the collimating lens collimates the beam shaped by the beam shaping device 330, and the collimated beam is then processed by the first optical element.
In addition, the collimating lens 380 may be located at any possible location in the TOF depth sensing module 300 and collimates the light beam in any possible process.
Optionally, the horizontal distance between the first optical element and the second optical element is less than or equal to 1 cm.
Optionally, the first optical element and/or the second optical element is a turning mirror device.
The rotating mirror device controls the emergent direction of the emergent light beam through rotation.
The above-mentioned turning mirror device may specifically be a MEMS galvanometer or a polygon mirror.
The first optical element may be any one of devices such as a liquid crystal polarization grating, an electro-optical device, an acousto-optic device, and an optical phased array device, and the second optical element may be any one of devices such as a liquid crystal polarization grating, an electro-optical device, an acousto-optic device, and an optical phased array device. For details of the liquid crystal polarization grating, the electro-optical device, the acousto-optical device, the optical phased array device, and the like, reference may be made to the descriptions of the first case to the fourth case above.
As shown in fig. 35, the liquid crystal polarization grating includes not only the transverse LCPG and the longitudinal LCPG but also a transverse polarization control sheet and a longitudinal polarization control sheet. In fig. 35, the transverse LCPG is located between the transverse polarization control sheet and the longitudinal polarization control sheet, and the longitudinal polarization control sheet is located between the transverse LCPG and the longitudinal LCPG.
Alternatively, the components of the liquid crystal polarization grating shown in fig. 35 may be arranged in several combinations:
Combination 1: 124;
Combination 2: 342;
Combination 3: 3412.
in combination 1, 1 may denote a transverse polarization control plate and a longitudinal polarization control plate which are in close contact with each other, and in combination 2, 3 may denote a transverse polarization control plate and a longitudinal polarization control plate which are in close contact with each other.
When the first optical element 340 or the second optical element 350 of combination 1 or combination 2 is placed in the TOF depth sensing module, the transverse polarization control sheet or the longitudinal polarization control sheet is located on a side close to the laser light source, and the transverse LCPG and the longitudinal LCPG are located on a side far from the laser light source.
When the first optical element 340 or the second optical element 350 in combination 3 is placed in the TOF depth sensing module, the distances between the longitudinal polarization control sheet, the longitudinal LCPG, the transverse polarization control sheet, and the transverse LCPG and the laser light source become larger in sequence.
It should be understood that the above three combinations of the liquid crystal polarization grating and the combination in fig. 35 are merely examples; in practice, the components of the optical element in the present application may be arranged in other combinations, provided that the distance between the transverse polarization control plate and the laser light source is less than the distance between the transverse LCPG and the laser light source, and the distance between the longitudinal polarization control plate and the laser light source is less than the distance between the longitudinal LCPG and the laser light source.
Optionally, the second optical element includes a transverse polarization control plate, a transverse liquid crystal polarization grating, a longitudinal polarization control plate, and a longitudinal liquid crystal polarization grating, whose distances from the sensor increase in sequence.
Optionally, the beam shaping device is composed of a diffusion lens and a rectangular aperture.
The TOF depth sensing module according to the embodiment of the present application is described above with reference to fig. 53 to 56, and the image generation method according to the embodiment of the present application is described in detail below with reference to fig. 57.
Fig. 57 is a schematic flowchart of an image generation method according to an embodiment of the present application.
The method shown in fig. 57 may be performed by a TOF depth sensing module or a terminal device including the TOF depth sensing module according to an embodiment of the present application, and specifically, the method shown in fig. 57 may be performed by the TOF depth sensing module shown in fig. 54 or a terminal device including the TOF depth sensing module shown in fig. 54. The method shown in fig. 57 includes steps 5001 to 5006, which are described in detail below.
5001. Controlling a laser light source to generate a laser beam.
5002. Filtering the laser beam by using a polarization filter to obtain a light beam in a single polarization state.
The single polarization state is one of the multiple polarization states.
For example, the plurality of polarization states may include linear polarization, left-handed circular polarization, and right-handed circular polarization, and the single polarization state may be any one of linear polarization, left-handed circular polarization, and right-handed circular polarization.
5003. Adjusting the light beam in the single polarization state by using a beam shaping device to obtain a first light beam.
Optionally, the step 5003 specifically includes: and adjusting the angular space intensity distribution of the light beam in the single polarization state by using a beam shaping device to obtain a first light beam.
Wherein the FOV of the first light beam is in a range of [5 ° × 5 °, 20 ° × 20 ° ];
5004. Controlling the first optical element to respectively control the direction of the first light beam from the beam shaping device at M different moments, so as to obtain M outgoing light beams in different directions.
Wherein the range of the total FOV covered by the M outgoing light beams in different directions includes [50 ° × 50 °, 80 ° × 80 ° ].
5005. Controlling the second optical element to deflect, to the receiving unit, the M reflected light beams obtained when the target object reflects the M outgoing light beams in different directions.
5006. Generating a depth map of the target object according to the TOFs respectively corresponding to the M outgoing light beams in different directions.
In this embodiment of the application, the beam shaping device adjusts the FOV of the light beam so that the first light beam has a relatively large FOV, and scanning is performed in a time division multiplexing manner (the first optical element emits outgoing light beams in different directions at different times), so that the spatial resolution of the finally obtained depth map of the target object can be improved.
Optionally, the step 5006 specifically includes: determining distances between M regions of the target object and the TOF depth sensing module according to the TOFs respectively corresponding to the M outgoing light beams in different directions; generating depth maps of the M regions of the target object according to the distances between the M regions of the target object and the TOF depth sensing module; and synthesizing the depth map of the target object from the depth maps of the M regions of the target object.
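The conversion from TOF to depth used in step 5006 can be sketched as follows. This is a minimal illustration of the distance = c·t/2 relationship for round-trip flight times, not the module's actual processing pipeline:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(tof_seconds):
    """Depth for each pixel/region from round-trip time of flight:
    the beam travels out to the target and back, so distance = c * t / 2."""
    return C * np.asarray(tof_seconds) / 2.0

# A round trip of about 6.67 ns corresponds to roughly 1 m.
print(depth_from_tof([6.67e-9, 13.34e-9]))
```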
Optionally, the step 5004 specifically includes: the control unit generates a first voltage signal, and the first voltage signal is used for controlling the first optical element to respectively control the directions of the first light beams at M different moments so as to obtain M outgoing light beams in different directions; the step 5005 includes: the control unit generates a second voltage signal, and the second voltage signal is used for controlling the second optical element to deflect M reflected beams obtained by reflecting the M outgoing beams in different directions by the target object to the receiving unit respectively.
The first voltage signal and the second voltage signal have the same voltage value at the same moment.
In the TOF depth sensing module 300 shown in fig. 54, the transmitting end and the receiving end respectively use different optical elements to achieve control over transmission and reception of the light beam, and optionally, in the TOF depth sensing module according to the embodiment of the present application, the transmitting end and the receiving end may also use the same optical element to achieve control over transmission and reception of the light beam.
The case where the reflection and reception of the light beam are realized by the same optical element common to the transmitting end and the receiving end is described in detail below with reference to fig. 58.
FIG. 58 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
The TOF depth sensing module 400 shown in fig. 58 includes: a laser light source 410, a polarization filter device 420, a beam shaping device 430, an optical element 440, a receiving unit 450, and a control unit 460. As shown in fig. 58, the transmitting end of the TOF depth sensing module 400 includes a laser light source 410, a polarization filter device 420, a beam shaping device 430, and an optical element 440, the receiving end of the TOF depth sensing module 400 includes the optical element 440 and a receiving unit 450, and the transmitting end and the receiving end of the TOF depth sensing module 400 share the optical element 440. The optical element 440 can control both the light beam at the transmitting end to obtain the outgoing light beam and the reflected light beam such that the reflected light beam is deflected to the receiving unit 450.
These several modules or units in the TOF depth sensing module 400 are described in detail below.
The laser light source 410:
the laser light source 410 is used for generating a laser beam;
optionally, the laser beam emitted by the laser source 410 is a single quasi-parallel beam, and the divergence angle of the laser beam emitted by the laser source 410 is smaller than 1 °.
Alternatively, the laser light source 410 is a semiconductor laser light source.
The laser light source 410 may be a Vertical Cavity Surface Emitting Laser (VCSEL).
Alternatively, the laser light source 410 may be a fabry-perot laser (may be abbreviated as FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and higher electro-optical conversion efficiency, and can therefore improve the scanning effect.
Optionally, the wavelength of the laser beam emitted by the laser light source 410 is greater than 900 nm.
Because light with a wavelength greater than 900 nm is relatively weak in sunlight, using a laser beam with a wavelength greater than 900 nm helps reduce interference caused by sunlight and can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 410 is 940nm or 1550 nm.
Because light near 940 nm or 1550 nm is relatively weak in sunlight, using a laser beam with a wavelength of 940 nm or 1550 nm can greatly reduce the interference caused by sunlight and can improve the scanning effect of the TOF depth sensing module.
The light-emitting area of the laser light source 410 is less than or equal to 5 × 5 mm².
Because the size of the laser light source is small, the TOF depth sensing module 400 including the laser light source is relatively easy to integrate into the terminal equipment, and the occupied space in the terminal equipment can be reduced to a certain extent.
Optionally, the average output optical power of the TOF depth sensing module 400 is less than or equal to 800 mW.
When the average output optical power of the TOF depth sensing module is less than or equal to 800 mW, the power consumption of the TOF depth sensing module is relatively small, which makes it convenient to arrange the module in terminal devices and other power-sensitive devices.
The polarization filter 420 is used for filtering the laser beam to obtain a single polarization state beam;
the beam shaping device 430 is configured to adjust the FOV of the light beam in the single polarization state, so as to obtain a first light beam;
the control unit 460 is configured to control the optical element 440 to control the direction of the first light beam at M different times, respectively, so as to obtain M outgoing light beams in different directions;
The control unit 460 is further configured to control the optical element 440 to deflect M reflected light beams, which are obtained by reflecting the M outgoing light beams in different directions by the target object, to the receiving unit 450, respectively.
Wherein, the single polarization state is one of a plurality of polarization states;
for example, the plurality of polarization states may include linear polarization, left-handed circular polarization, and right-handed circular polarization, and the single polarization state may be any one of linear polarization, left-handed circular polarization, and right-handed circular polarization.
The range of the FOV of the first light beam includes [5 ° × 5 °, 20 ° × 20 ° ]; the range of the total FOV covered by the above-described M outgoing light beams in different directions includes [50 ° × 50 °, 80 ° × 80 ° ].
In this embodiment of the application, the beam shaping device adjusts the FOV of the light beam so that the first light beam has a relatively large FOV, and scanning is performed in a time division multiplexing manner (the optical element emits outgoing light beams in different directions at different times), so that the spatial resolution of the finally obtained depth map of the target object can be improved.
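As a rough, hypothetical illustration of how the per-beam FOV and the total FOV determine the number of scan positions M: taking the endpoints of the ranges quoted above (each beam covering 20° × 20°, a total FOV of 80° × 80°), a simple non-overlapping tiling needs 4 positions per axis, i.e. M = 16 outgoing directions. A real module may instead use overlapping or non-uniform scan patterns:

```python
import math

def blocks_needed(total_fov_deg, beam_fov_deg):
    """Number of beam positions per axis and in total, assuming the
    outgoing beams tile the total square FOV without overlap."""
    per_axis = math.ceil(total_fov_deg / beam_fov_deg)
    return per_axis, per_axis * per_axis

per_axis, m = blocks_needed(80.0, 20.0)
print(per_axis, m)  # 4 positions per axis -> M = 16 outgoing directions
```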
Optionally, the control unit 460 is further configured to: and generating a depth map of the target object according to the TOFs respectively corresponding to the M outgoing light beams in different directions.
The TOFs corresponding to the M outgoing light beams in different directions may specifically refer to the time difference between the moment at which the reflected light beam corresponding to each outgoing light beam is received by the receiving unit and the moment at which that outgoing light beam is emitted.
Assuming that the M outgoing beams in different directions include the outgoing beam 1, the reflected beam corresponding to the outgoing beam 1 may be a beam generated after the outgoing beam 1 reaches the target object and is reflected by the target object.
Optionally, the above definitions of the laser light source 310, the polarization filter device 320, and the beam shaping device 330 in the TOF depth sensing module 300 apply equally to the laser light source 410, the polarization filter device 420, and the beam shaping device 430 in the TOF depth sensing module 400.
Optionally, the optical element is a turning mirror device.
The rotating mirror device controls the emergent direction of the emergent light beam through rotation.
Optionally, the turning mirror device is a MEMS galvanometer or a polygon mirror.
The following describes a case where the optical element is a turning mirror device with reference to the drawings.
FIG. 59 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application.
As shown in fig. 59, the TOF depth sensing module further includes a collimating lens 470, which is located between the laser light source 410 and the polarization filter 420 and is used for collimating the laser beam; the polarization filter 420 is used for filtering the light beam collimated by the collimating lens 470 to obtain a light beam in a single polarization state.
FIG. 60 is a schematic block diagram of a TOF depth sensing module of an embodiment of the present application. In fig. 60, the collimating lens 470 may also be located between the polarization filter device 420 and the beam shaping device 430. The collimating lens 470 is used for collimating the light beam in the single polarization state; the beam shaping device 430 is configured to adjust the FOV of the light beam collimated by the collimating lens 470, so as to obtain the first light beam.
Collimating the light beam with the collimating lens yields an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
Optionally, the clear aperture of the collimating lens is less than or equal to 5 mm.
Because the collimating lens is small, a TOF depth sensing module containing the collimating lens is relatively easy to integrate into a terminal device, and the space it occupies in the terminal device can be reduced to a certain extent.
It should be understood that the collimating lens may also be located between the beam shaping device 430 and the optical element 440, in which case the collimating lens collimates the light beam shaped by the beam shaping device 430, and the collimated light beam is then processed by the optical element 440.
In addition, the collimating lens 470 may be located at any possible location in the TOF depth sensing module 400 and collimates the light beam in any possible process.
As shown in fig. 61, the TOF depth sensing module includes a laser light source, a light homogenizing device, a beam splitter, a micro-electro-mechanical systems (MEMS) galvanometer, a receiving lens, and a sensor. The MEMS devices here include electrostatic galvanometers, electromagnetic galvanometers, polygon mirrors, and the like. Because these turning mirror devices work in reflection, the light path in the TOF depth sensing module is a reflective light path, transmission and reception share a coaxial light path, and the beam deflection device and the lens can be shared through the beam splitter. In fig. 61, the beam deflection device is embodied as a MEMS galvanometer.
Alternatively, the optical element 440 may be a liquid crystal polarizer.
Alternatively, the optical element 440 includes: the device comprises a transverse polarization control sheet, a transverse liquid crystal polarization grating, a longitudinal polarization control sheet and a longitudinal liquid crystal polarization grating.
Alternatively, in the optical element 440, the distances between the transverse polarization control plate, the transverse liquid crystal polarization grating, the longitudinal polarization control plate, and the longitudinal liquid crystal polarization grating and the laser light source are sequentially increased, or the distances between the longitudinal polarization control plate, the longitudinal liquid crystal polarization grating, the transverse polarization control plate, and the transverse liquid crystal polarization grating and the laser light source are sequentially increased.
Alternatively, the beam shaping device 430 is composed of a diffusion lens and a rectangular aperture.
The optical element may be any one of liquid crystal polarization grating, electro-optical device, acousto-optical device, optical phased array device, and the like. For details of the liquid crystal polarization grating, the electro-optical device, the acousto-optical device, the optical phased array device, and the like, reference may be made to the descriptions of the first case to the fourth case above.
Fig. 62 is a schematic flowchart of an image generation method according to an embodiment of the present application.
The method shown in fig. 62 may be performed by a TOF depth sensing module or a terminal device including the TOF depth sensing module according to an embodiment of the present disclosure, and specifically, the method shown in fig. 62 may be performed by the TOF depth sensing module shown in fig. 58 or a terminal device including the TOF depth sensing module shown in fig. 58. The method shown in fig. 62 includes steps 6001 to 6006, which are described in detail below.
6001. Controlling the laser light source to generate a laser beam.
6002. Filtering the laser beam by using a polarization filter to obtain a light beam in a single polarization state.
Wherein, the single polarization state is one of a plurality of polarization states;
For example, the plurality of polarization states may include linear polarization, left-handed circular polarization, and right-handed circular polarization, and the single polarization state may be any one of linear polarization, left-handed circular polarization, and right-handed circular polarization.
6003. Adjusting the light beam in the single polarization state by using a beam shaping device to obtain a first light beam.
The range of the FOV of the first light beam includes [5 ° × 5 °, 20 ° × 20 ° ].
6004. Controlling the optical element to respectively control the direction of the first light beam from the beam shaping device at M different moments, so as to obtain M outgoing light beams in different directions.
The range of the total FOV covered by the above-described M outgoing light beams in different directions includes [50 ° × 50 °, 80 ° × 80 ° ].
6005. Controlling the optical element to deflect, to the receiving unit, the M reflected light beams obtained when the target object reflects the M outgoing light beams in different directions.
6006. Generating a depth map of the target object according to the TOFs respectively corresponding to the M outgoing light beams in different directions.
In this embodiment of the application, the beam shaping device adjusts the FOV of the light beam so that the first light beam has a relatively large FOV, and scanning is performed in a time division multiplexing manner (the optical element emits outgoing light beams in different directions at different times), so that the spatial resolution of the finally obtained depth map of the target object can be improved.
Optionally, the step 6006 specifically includes: determining distances between M areas of the target object and a TOF depth sensing module according to TOFs (time of flight) corresponding to M outgoing light beams in different directions respectively; generating depth maps of the M regions of the target object according to the distances between the M regions of the target object and the TOF depth sensing module; and synthesizing the depth map of the target object according to the depth maps of the M areas of the target object.
Optionally, the step 6003 specifically includes: and adjusting the angular space intensity distribution of the light beam in the single polarization state by using a beam shaping device to obtain a first light beam.
The detailed operation of the TOF depth sensing module 400 according to an embodiment of the present invention is described in detail below with reference to fig. 63.
Fig. 63 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present application.
The specific implementation and function of the components of the TOF depth sensing module shown in fig. 63 are as follows:
(1) the laser light source is a VCSEL array.
The VCSEL light source is an array that can emit light beams with good directionality.
(2) The polarizer is a polarization filtering device and may be located in front of (below) or behind (above) the light unifying device.
(3) The light homogenizing device may be a Diffractive Optical Element (DOE) or an optical Diffuser (which may be referred to as a Diffuser).
After being processed by the light homogenizing device, the beam array is shaped into a substantially uniform block of light.
(4) The optical element is a multilayer LCPG (liquid crystal polarization grating).
It should be understood that fig. 63 shows only the case where the polarizer is located below the light homogenizing device; in practice, the polarizer may also be located above the light homogenizing device.
The specific principle of controlling the direction of the light beam by the liquid crystal polarization grating can be seen in the related contents described with reference to fig. 37 and 38.
In fig. 63, the multilayer liquid crystal polarization grating works together with a 1/4-wave plate. The emitted light is reflected by the target back to the polarization device, traversing the 1/4-wave plate twice and thus accumulating an extra half-wave of retardation, which is just enough for the polarization device to deflect the returning light opposite to the emitted light. Under the quasi-coaxial approximation, the obliquely emitted light is reflected back along its original path and is deflected back to a direction parallel to the emitted light to reach the receiving lens. Using the beam deflection device, the receiving end can image the target block selectively illuminated by the emitted light onto the entire receiver (a SPAD array). When the target is illuminated block by block, each block is received by the entire receiver, and the images at the various times are stitched to obtain a complete image. This realizes time division multiplexing of the receiver and multiplies the resolution.
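The double pass through the 1/4-wave plate described above can be checked with Jones calculus: traversing a quarter-wave plate twice (out and back) is equivalent to a half-wave plate, which flips the polarization state so that the polarization device deflects the returning light opposite to the emitted light. A minimal numpy sketch, ignoring the coordinate flip introduced by the reflection itself:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def quarter_wave_plate(theta):
    # Jones matrix of a quarter-wave plate with its fast axis at angle
    # theta (global phase omitted).
    return rot(theta) @ np.diag([1, 1j]) @ rot(-theta)

Q = quarter_wave_plate(np.pi / 4)   # fast axis at 45 degrees
horizontal = np.array([1, 0])       # horizontally polarized input

# Two passes through the plate act as a half-wave plate: the
# horizontal state comes back vertical, so the deflection device
# steers it opposite to the outgoing beam.
out = Q @ Q @ horizontal
print(np.round(out, 6))  # vertical polarization (up to global phase)
```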
(5) The receiving lens is an ordinary lens that images the received light onto the receiver.
(6) The receiver is a SPAD array.
A SPAD can detect a single photon and accurately record the time of the detected single-photon pulse. The SPAD array is activated each time the VCSEL emits light. The VCSEL emits light periodically, and the SPAD array can record, in each period, the time at which each pixel receives the reflected light. By accumulating the time distribution of the reflected signal over many periods, the reflected signal pulse can be fitted and the delay time calculated.
The key device of this embodiment is the beam deflection device shared by the projection end and the receiving end, namely the liquid crystal polarization device; in this embodiment, the beam deflection device includes a multilayer LCPG and is also called an electrically controlled liquid crystal polarization device.
Fig. 64 is a schematic structural view of a liquid crystal polarizing device according to an embodiment of the present application.
An optional specific structure of the liquid crystal polarization device is shown in fig. 64, where 1 denotes a transverse single-angle LCPG, 2 denotes a transverse double-angle LCPG, 3 denotes a longitudinal single-angle LCPG, 4 denotes a longitudinal double-angle LCPG, and 5 denotes the polarization control sheets. There are four polarization control sheets, located to the left of the four LCPGs shown in fig. 64 and numbered 5.1, 5.2, 5.3, and 5.4 respectively.
The liquid crystal polarization device shown in fig. 64 can be controlled using the control unit, and the control timing can be as shown in fig. 65 (scanning is started from time t0 and continued until time t 15). A timing chart of the driving signals generated by the control unit is shown in fig. 66.
Fig. 66 shows the voltage driving signals of the polarization control plates 5.1, 5.2, 5.3, and 5.4 from time t0 to time t15. Each driving signal takes one of two levels, with the low level denoted by 0 and the high level denoted by 1. The voltage driving signals of the polarization control plates 5.1, 5.2, 5.3, and 5.4 from time t0 to time t15 are shown in table 1.
TABLE 1
(The contents of Table 1 are available only as an image in the original publication.)
For example, in table 1, in the time interval t0, the voltage driving signal of the polarization control plate 5.1 is a low level signal, and the voltage driving signals of the polarization control plates 5.2 to 5.4 are high level signals, so the voltage signal corresponding to the time point t0 is 0111.
As shown in fig. 64, the electrically controlled liquid crystal polarization device is composed of LCPGs and polarization control sheets. The voltage driving signals for implementing a 4 × 4 scan are shown in fig. 66, where 5.1, 5.2, 5.3, and 5.4 respectively denote the voltage driving signals applied to the four polarization control plates; the entire FOV is divided into 4 × 4 blocks, and t0 to t15 are the time intervals in which the blocks are illuminated in turn. When the voltage driving signals shown in fig. 66 are applied, the state of the light beam after passing through each device of the liquid crystal deflection device is shown in table 2.
TABLE 2
(The contents of Table 2 are available only as an image in the original publication.)
In each entry in table 2, the value in parentheses is the voltage signal, L indicates left-handed circular polarization, R indicates right-handed circular polarization, and numerical values such as 1 and 3 indicate the beam deflection angle, with 3 denoting a larger deflection angle than 1.
For example, for R1-1, R represents right-handed polarization, the first value 1 represents the left side (a first value of -1 would represent the right side), and the second value -1 represents the upper side (a second value of 1 would represent the lower side).
For example, for L3-3, L represents left-handed polarization, the first value 3 represents the rightmost side (a first value of -3 would represent the leftmost side), and the second value -3 represents the uppermost side (a second value of 3 would represent the lowermost side).
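The Table 2 notation can be unpacked mechanically. The small parser below is purely illustrative and assumes states of the form `<L|R><x><y>` with signed deflection indices, as in the examples above:

```python
import re

def parse_beam_state(state):
    """Decode a Table-2 style state string such as 'R1-1' or 'L3-3'
    into (handedness, horizontal index, vertical index).

    L / R denote left-/right-handed circular polarization; the two
    signed numbers are the deflection indices, whose sign conventions
    and magnitudes (1 vs 3) are described in the text.
    """
    m = re.fullmatch(r'([LR])(-?\d+)(-?\d+)', state)
    if m is None:
        raise ValueError(f'unrecognized state: {state}')
    hand = 'left' if m.group(1) == 'L' else 'right'
    return hand, int(m.group(2)), int(m.group(3))

print(parse_beam_state('R1-1'))  # ('right', 1, -1)
print(parse_beam_state('L3-3'))  # ('left', 3, -3)
```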
When the voltage driving signal shown in fig. 66 is applied to the liquid crystal polarization device, the scanning areas of the TOF depth sensing module at different times are as shown in fig. 67.
The depth maps obtained in this embodiment of the application are described below with reference to the drawings. As shown in fig. 68, suppose that depth maps of the target object corresponding to times t0 to t3 are obtained through time-shared scanning, each with a resolution of 160 × 120. By stitching the depth maps corresponding to times t0 to t3, a final depth map of the target object as shown in fig. 69 can be obtained, with a resolution of 320 × 240. As can be seen from fig. 68 and fig. 69, stitching the depth maps obtained at different times improves the resolution of the finally obtained depth map.
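Assuming the four 160 × 120 depth maps from t0 to t3 cover the four quadrants of the scene in row-major order (an assumption for illustration; the actual block order follows the drive-signal sequence), the stitching into a 320 × 240 map can be sketched as:

```python
import numpy as np

def stitch_quadrants(d0, d1, d2, d3):
    """Combine four sub-depth-maps (t0..t3, all the same shape) into
    one full map, placing them in the four quadrants in row-major
    order."""
    return np.block([[d0, d1], [d2, d3]])

# Four dummy 160x120 tiles (array shape is rows x cols, i.e. 120x160),
# each filled with its time index so the layout is visible.
tiles = [np.full((120, 160), i, dtype=float) for i in range(4)]
full = stitch_quadrants(*tiles)
print(full.shape)  # (240, 320), i.e. a 320 x 240 depth map
```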
A TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail above with reference to fig. 53 to 69. Another TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail below with reference to fig. 70 to 78.
In a TOF depth sensing module, a liquid crystal device can be used to adjust the direction of the light beam, and a polarizer is generally added at the transmitting end of the TOF depth sensing module to emit polarized light. However, because of the polarization selectivity of the polarizer, half of the energy is lost when the light beam is emitted. The lost energy is absorbed or scattered by the polarizer and converted into heat, which raises the temperature of the TOF depth sensing module and affects its stability. How to reduce the heat loss of the TOF depth sensing module is therefore a problem to be solved.
In particular, in the TOF depth sensing module of the embodiment of the application, the heat loss of the TOF depth sensing module can be reduced by transferring the polarizer from the transmitting end to the receiving end. The TOF depth sensing module according to an embodiment of the present application is described in detail below with reference to the accompanying drawings.
A brief description of the TOF depth sensing module according to an embodiment of the present invention is provided below with reference to fig. 70.
FIG. 70 is a schematic diagram of a TOF depth sensing module of an embodiment of the present application when operating. As shown in fig. 70, the TOF depth sensing module may include a transmitting end (which may also be referred to as a projecting end), a receiving end and a control unit, wherein the transmitting end is configured to transmit an outgoing light beam, the receiving end is configured to receive a reflected light beam of a target object (the reflected light beam is a light beam obtained by reflecting the outgoing light beam by the target object), and the control unit may control the transmitting end and the receiving end to transmit and receive the light beam respectively.
In fig. 70, the emission end may generally include a laser light source, a collimating lens (optional), a dodging device, an optical element, a projection lens (optional); the receiving end generally comprises: the device comprises a light beam selection device and a receiving unit, wherein the receiving unit specifically comprises a receiving lens and a sensor.
The TOF depth sensing module shown in fig. 70 simultaneously projects two or more kinds of projection light in different states (state A and state B). After the projection light in the two different states is reflected to the receiving end, the beam selection device selects, in a time-sharing manner according to an instruction, the reflected light in a particular state to enter the sensor, so that depth imaging is performed on light in that specific state; the beam deflection device can then scan different directions to cover the target FOV.
The TOF depth sensing module shown in fig. 70 may be used for 3D image acquisition, and the TOF depth sensing module according to the embodiment of the present application may be disposed in a smart terminal (e.g., a mobile phone, a tablet, a wearable device, etc.), and is used for acquiring a depth image or a 3D image, and may also provide gesture and limb recognition for a 3D game or a motion sensing game.
The TOF depth sensing module according to an embodiment of the present application is described in detail below with reference to fig. 71.
The TOF depth sensing module 500 shown in fig. 71 includes: a laser light source 510, an optical element 520, a beam selection device 530, a receiving unit 540, and a control unit 550.
These several modules or units in the TOF depth sensing module 500 are described in detail below.
The laser light source 510:
the laser light source 510 is used to generate a laser beam.
Alternatively, the laser light source may be a semiconductor laser light source.
The laser light source may be a Vertical Cavity Surface Emitting Laser (VCSEL).
Alternatively, the laser light source may be a fabry-perot laser (which may be abbreviated as FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and higher electro-optical conversion efficiency, and can therefore improve the scanning effect.
Alternatively, the laser light source 510 emits a laser beam having a wavelength greater than 900 nm.
Because sunlight is relatively weak at wavelengths above 900 nm, a laser beam with a wavelength greater than 900 nm suffers less interference from sunlight, which can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 510 is 940nm or 1550 nm.
Because sunlight is particularly weak near 940 nm and 1550 nm, choosing a laser wavelength of 940 nm or 1550 nm greatly reduces the interference caused by sunlight and can improve the scanning effect of the TOF depth sensing module.
Optionally, the light emitting area of the laser light source 510 is less than or equal to 5 × 5 mm².
Because such a laser light source is small, the TOF depth sensing module containing it is relatively easy to integrate into a terminal device and occupies less space in the terminal device.
Optionally, the average output optical power of the TOF depth sensing module is less than or equal to 800 mW.
When the average output optical power of the TOF depth sensing module is less than or equal to 800 mW, its power consumption is small, which makes it convenient to place in terminal devices and other power-sensitive devices.
The optical element 520:
the optical element 520 is disposed in an emitting direction of the laser beam, and the optical element 520 is configured to control the direction of the laser beam to obtain a first emitting beam and a second emitting beam, where the emitting direction of the first emitting beam is different from the emitting direction of the second emitting beam, and the polarization direction of the first emitting beam is orthogonal to the polarization direction of the second emitting beam.
Alternatively, as shown in fig. 35, the optical element 520 may include a transverse polarization control plate, a transverse liquid crystal polarization grating, a longitudinal polarization control plate, and a longitudinal liquid crystal polarization grating, whose distances from the laser light source increase in that order.
Alternatively, in the optical element 520, the distances between the longitudinal polarization control plate, the longitudinal liquid crystal polarization grating, the transverse polarization control plate, and the transverse liquid crystal polarization grating and the laser light source are sequentially increased.
The receiving unit 540:
The receiving unit 540 may include a receiving lens 541 and a sensor 542.
Control unit 550 and beam selection device 530:
The control unit 550 is configured to control the operation of the beam selection device 530 through a control signal. Specifically, the control unit 550 may generate a control signal that controls the beam selection device 530 to propagate a third reflected beam and a fourth reflected beam to the sensor in different time intervals, where the third reflected beam is the beam obtained when the target object reflects the first outgoing beam, and the fourth reflected beam is the beam obtained when the target object reflects the second outgoing beam.
The beam selection device 530 can, under the control of the control unit 550, transmit light beams with different polarization states to the receiving unit at different times. Because the beam selection device 530 transmits the received reflected beams to the receiving unit 540 in a time-sharing manner, the receiving resolution of the receiving unit 540 can be used more fully, and the resolution of the finally obtained depth map is relatively higher than that obtained with the beam splitter 630 in the TOF depth sensing module 600.
Optionally, the control signal generated by the control unit 550 is used to control the beam selection device 530 to propagate the third reflected beam and the fourth reflected beam to the sensor respectively in different time intervals.
That is, the above-described beam selection device may propagate the third reflected beam and the fourth reflected beam to the receiving unit at different times, respectively, under the control of the control signal generated by the control unit 550.
Alternatively, as an embodiment, the beam selection device 530 is composed of a 1/4 wave plate, a half-wave plate, and a polarizer.
As shown in fig. 72, the TOF depth sensing module 500 may further include:
the collimating lens 560 is arranged in the emitting direction of the laser beam, between the laser light source and the optical element, and is used for collimating the laser beam to obtain a collimated beam; the optical element 520 is then used to control the direction of the collimated beam to obtain the first outgoing beam and the second outgoing beam.
Collimating the beam with the collimating lens produces an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
Optionally, the clear aperture of the collimating lens is less than or equal to 5 mm.
Because the collimating lens is small, the TOF depth sensing module containing it is relatively easy to integrate into a terminal device and occupies less space in the terminal device.
As shown in fig. 73, the TOF depth sensing module 500 may further include:
the dodging device 570 is arranged in the emitting direction of the laser beam, between the laser light source 510 and the optical element 520, and is used for adjusting the energy distribution of the laser beam to obtain a dodged beam; the optical element is then used to control the direction of the dodged beam to obtain the first outgoing beam and the second outgoing beam.
Optionally, the light homogenizing device is a microlens Diffuser or a diffractive optical Diffuser (DOE Diffuser).
It should be understood that the TOF depth sensing module 500 may include both the collimating lens 560 and the dodging device 570, with both located between the laser light source 510 and the optical element 520; in that case, either the collimating lens 560 or the dodging device 570 may be the one closer to the laser light source.
As shown in fig. 74, the distance between the collimator lens 560 and the laser light source 510 is smaller than the distance between the dodging device 570 and the laser light source 510.
In the TOF depth sensing module 500 shown in fig. 74, a laser beam emitted by the laser source 510 is first collimated by the collimating lens 560, then is further homogenized by the light homogenizing device 570, and then is transmitted to the optical element 520 for processing.
In the embodiment of the application, the dodging treatment makes the optical power of the laser beam more uniform in angular space, or distributes it according to a specific rule, which prevents the local optical power from being too small and thus avoids blind spots in the finally obtained depth map of the target object.
As shown in fig. 75, the distance between the collimator lens 560 and the laser light source 510 is greater than the distance between the dodging device 570 and the laser light source 510.
In the TOF depth sensing module 500 shown in fig. 75, the laser beam emitted from the laser light source 510 is first processed by the dodging device 570, then collimated by the collimating lens 560, and then transmitted to the optical element 520 for processing.
The specific structure of the TOF depth sensing module 500 is described in detail below with reference to fig. 76.
Fig. 76 is a schematic structural diagram of a TOF depth sensing module 500 according to an embodiment of the present disclosure.
As shown in fig. 76, the TOF depth sensing module 500 includes a projecting end, a control unit, and a receiving end. The projection end comprises a laser light source, a light homogenizing device and a light beam deflection device; the receiving end comprises a light beam deflection device, a light beam (dynamic) selection device, a receiving lens and a two-dimensional sensor; the control unit is used for controlling the projection end and the receiving end to complete the scanning of the light beam. Further, the beam deflecting device in fig. 76 corresponds to the optical element in fig. 71, and the beam (dynamic) selecting device in fig. 76 corresponds to the beam selecting device in fig. 71.
The following describes in detail the components specifically employed by each module or unit.
The laser light source may be a Vertical Cavity Surface Emitting Laser (VCSEL) array light source;
the light homogenizing device can be a diffraction optical diffusion sheet;
the beam deflecting device may be a multilayer LCPG and 1/4 waveplate;
the electrically controlled LCPG includes an electrically controlled horizontal-direction LCPG component and an electrically controlled vertical-direction LCPG component.
Two-dimensional block scanning in the horizontal and vertical directions can be realized by using the multilayer cascaded electrically controlled LCPG. The 1/4 wave plate converts the circularly polarized light from the LCPG into linearly polarized light, so that the transmitting end and the receiving end are quasi-coaxial.
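A hedged sketch of the block-scanning address space (the per-stage deflection model and the 5°/10° grating angles below are illustrative assumptions, not values from the application): if each cascaded LCPG stage of one axis can add a deflection of -a, 0, or +a degrees, the reachable block directions are the accumulated per-axis sums, combined across the two axes:

```python
from itertools import product

def addressable_angles(stage_angles_deg):
    """Angles reachable by a cascade where each stage adds -a, 0, or +a."""
    totals = {0.0}
    for a in stage_angles_deg:
        totals = {t + d for t in totals for d in (-a, 0.0, a)}
    return sorted(totals)

# Hypothetical two-stage cascade per axis with 5-degree and 10-degree gratings.
horizontal = addressable_angles([5.0, 10.0])
vertical = addressable_angles([5.0, 10.0])
grid = list(product(horizontal, vertical))  # addressable 2-D block directions
print(len(horizontal), len(grid))  # 7 49
```

The point of the cascade is that a few electrically switched layers address a quadratically growing grid of two-dimensional block directions, which is what makes block scanning over the full FOV practical.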
The wavelength of the VCSEL array light source may be greater than 900nm, and specifically, the wavelength of the VCSEL array light source may be 940nm or 1550 nm.
The solar spectrum is relatively weak in the 940 nm band, which helps reduce the noise caused by sunlight in outdoor scenes. In addition, the laser emitted by the VCSEL array light source may be continuous light or pulsed light. The VCSEL array light source may also be divided into multiple blocks under time-sharing control, so that different areas can be lit in a time-sharing manner.
The function of the diffractive optical diffuser is to shape the light beam from the VCSEL array light source into a uniform square or rectangular light source with a certain FOV (e.g., a FOV of 5 ° x5 °).
The multilayer LCPG and 1/4 waveplates function to achieve scanning of the beam.
The receiving end and the transmitting end share the multilayer LCPG and the 1/4 wave plate. The beam selection device of the receiving end consists of a 1/4 wave plate, an electrically controlled half-wave plate, and a polarizer, and the receiving lens of the receiving end may be a single lens or a combination of multiple lenses. The sensor at the receiving end is a single-photon avalanche diode (SPAD) array; because a SPAD has single-photon sensitivity, the detection distance of the lidar system can be increased.
In the TOF depth sensing module 500, the polarization selection device is moved from the transmitting end to the receiving end. As shown in fig. 76, the laser emitted by an ordinary VCSEL array light source has no fixed polarization state and can be decomposed into linearly polarized laser light parallel to the paper plane and linearly polarized laser light perpendicular to it. After passing through the LCPG, this light is divided into two laser beams with different polarization states (left-handed and right-handed circular polarization) that leave at different emission angles; after passing through the 1/4 wave plate, their polarization states are converted into linear polarization parallel to the paper plane and linear polarization perpendicular to it. The two laser beams with different polarization states illuminate an object in the target area and produce return beams; the return beams pass back through the 1/4 wave plate and LCPG shared with the transmitting end and become laser beams with the same divergence angle but with different polarization states, namely left-handed and right-handed circular polarization.
The beam selection device at the receiving end consists of a 1/4 wave plate, an electrically controlled half-wave plate, and a polarizer. After passing through the 1/4 wave plate, the received light is converted into linear polarization parallel to the paper plane and linear polarization perpendicular to it. By controlling the electrically controlled half-wave plate in a time-sharing manner, the polarization of the linearly polarized light passing through it is either rotated by 90 degrees or left unchanged, so that the linear polarization parallel to the paper plane and the linear polarization perpendicular to it are transmitted in a time-sharing manner, while the light in the other polarization state is absorbed or scattered by the polarizer.
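The time-shared selection described above can be sanity-checked with a small Jones-calculus sketch (pure Python; the 45° fast-axis orientations and the circular-polarization sign convention are illustrative assumptions, not taken from the application). A quarter-wave plate maps the two circular states to orthogonal linear states, and toggling the half-wave plate decides which of the two a fixed polarizer transmits:

```python
import math

def mat_vec(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def rotated(J, theta):
    """Jones matrix of element J with its fast axis rotated by theta."""
    c, s = math.cos(theta), math.sin(theta)
    R, Ri = ((c, -s), (s, c)), ((c, s), (-s, c))
    JRi = tuple(tuple(sum(J[i][k] * Ri[k][j] for k in range(2))
                      for j in range(2)) for i in range(2))
    return tuple(tuple(sum(R[i][k] * JRi[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def power(v):
    return abs(v[0]) ** 2 + abs(v[1]) ** 2

QWP45 = rotated(((1, 0), (0, 1j)), math.pi / 4)   # quarter-wave plate at 45 deg
HWP45 = rotated(((1, 0), (0, -1)), math.pi / 4)   # half-wave plate at 45 deg (swaps H/V)
POL_H = ((1, 0), (0, 0))                          # horizontal linear polarizer

lcp = (1 / math.sqrt(2), 1j / math.sqrt(2))       # left circular (one convention)
rcp = (1 / math.sqrt(2), -1j / math.sqrt(2))      # right circular

lin_a = mat_vec(QWP45, lcp)   # quarter-wave plate -> horizontal linear
lin_b = mat_vec(QWP45, rcp)   # quarter-wave plate -> vertical linear

# Half-wave plate inactive: the polarizer passes state A and blocks state B.
print(round(power(mat_vec(POL_H, lin_a)), 6),
      round(power(mat_vec(POL_H, lin_b)), 6))       # 1.0 0.0
# Half-wave plate active (H and V swapped): state B now passes instead.
print(round(power(mat_vec(POL_H, mat_vec(HWP45, lin_b))), 6))  # 1.0
```

This is the whole selection mechanism: the only switched element is the half-wave plate, so which polarization reaches the sensor is decided electrically, in a time-sharing manner, while the other state is rejected by the fixed polarizer.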
Compared with an existing TOF depth sensing module in which the polarization selection device is located at the transmitting end, the TOF depth sensing module 500 of the embodiment of the application places the polarization selection device at the receiving end, so the energy absorbed or scattered by the polarizer is significantly reduced. Suppose the detection distance is R meters, the reflectivity of the target object is ρ, and the entrance pupil diameter of the receiving system is D. Under the same receiving FOV, the energy P_t incident on the polarization selection device of the TOF depth sensing module 500 is approximately:

P_t = ρ · P · D² / (4R²)

where P is the energy emitted by the transmitting end. At a distance of 1 m, the energy incident on the polarizer can thus be reduced by about 10⁴ times.
In addition, suppose the TOF depth sensing module 500 of the embodiment of the application and a conventional TOF depth sensing module use unpolarized light sources of the same power. Because outdoor ambient light is unpolarized, half of it is absorbed or scattered before entering the receiving detector in the TOF depth sensing module 500, whereas in the conventional scheme all of the outdoor light enters the detector. Therefore, under the same conditions, the signal-to-noise ratio of the embodiment of the application can be improved by about a factor of two.
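As a numeric illustration (hedged: the Lambertian link-budget form and the pupil diameter below are assumptions chosen for illustration, not values given in the application), the fraction of the transmitted energy that reaches the receive-side polarizer falls off as D²/R²:

```python
def polarizer_incident_fraction(reflectivity, pupil_diameter_m, range_m):
    """P_t / P for a Lambertian target: rho * D^2 / (4 * R^2)."""
    return reflectivity * pupil_diameter_m ** 2 / (4.0 * range_m ** 2)

# Idealized example: rho = 1.0, entrance pupil D = 20 mm, range R = 1 m.
frac = polarizer_incident_fraction(1.0, 0.02, 1.0)
print(round(frac, 6))  # 0.0001 -> the polarizer sees ~10^4 times less energy
```

Because the polarizer at the receiving end absorbs only this small returned fraction rather than half of the full transmitted beam, the heat dissipated in the polarizer drops by the same ratio.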
In addition, in the TOF depth sensing module 500 shown in fig. 76, the diffractive optical diffuser (DOE Diffuser) behind the VCSEL array light source may be replaced with a microlens diffuser. Because the microlens diffuser homogenizes light based on geometric optics, its transmission efficiency is higher and can exceed 80%, while the transmission efficiency of a conventional diffractive optical diffuser is only about 70%. The shape of the microlens diffuser is shown in fig. 77: it consists of a series of randomly distributed microlenses, and the position and shape of each microlens are designed through simulation and optimization so that the shaped beam is as uniform as possible while the transmission efficiency remains high.
Fig. 78 is a schematic flowchart of an image generation method of an embodiment of the present application.
The method shown in fig. 78 may be performed by a TOF depth sensing module or a terminal device including the TOF depth sensing module according to an embodiment of the present disclosure, and in particular, the method shown in fig. 78 may be performed by the TOF depth sensing module shown in fig. 71 or a terminal device including the TOF depth sensing module shown in fig. 71. The method shown in fig. 78 includes steps 7001 to 7006, which are described in detail below, respectively.
7001. Control the laser light source to generate a laser beam.
7002. Control the optical element to control the direction of the laser beam to obtain a first outgoing beam and a second outgoing beam.
7003. Control the beam selection device to transmit a third reflected beam, obtained when the target object reflects the first outgoing beam, and a fourth reflected beam, obtained when the target object reflects the second outgoing beam, to different areas of the receiving unit.
7004. Generate a first depth map of the target object according to the TOF corresponding to the first outgoing beam.
7005. Generate a second depth map of the target object according to the TOF corresponding to the second outgoing beam.
The emergent direction of the first emergent light beam is different from that of the second emergent light beam, and the polarization direction of the first emergent light beam is orthogonal to that of the second emergent light beam.
In the embodiment of the application, because the transmitting end is not provided with a polarization filter device, the beam emitted by the laser light source can reach the optical element with almost no loss (a polarization filter device generally absorbs much of the light energy and thus produces a certain amount of heat), so the heat loss of the terminal device can be reduced.
Optionally, the method shown in fig. 78 further includes: and splicing the first depth map and the second depth map to obtain a depth map of the target object.
It should be appreciated that in the method shown in fig. 78, a third depth map, a fourth depth map, etc. may also be generated in a similar manner, and then all the depth maps may be stitched or combined to obtain a final depth map of the target object.
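Steps 7001 to 7005 plus the stitching step can be sketched end to end (the toy 4-pixel scene, the per-state FOV split, and all names are hypothetical; only d = c·t/2 is the standard time-of-flight relation):

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(tof_seconds):
    """Round-trip time of flight -> distance: d = c * t / 2."""
    return C * tof_seconds / 2.0

def stitch(depth_maps):
    """Combine per-state depth maps: take the first valid (non-None) value
    per pixel. Each map covers a different part of the FOV."""
    out = []
    for pixels in zip(*depth_maps):
        out.append(next((p for p in pixels if p is not None), None))
    return out

# Hypothetical 4-pixel scene: state A covers pixels 0-1, state B pixels 2-3.
tof_a = [6.67e-9, 6.0e-9, None, None]   # measured round-trip times, seconds
tof_b = [None, None, 5.0e-9, 4.0e-9]
depth_a = [depth_from_tof(t) if t is not None else None for t in tof_a]
depth_b = [depth_from_tof(t) if t is not None else None for t in tof_b]
full = stitch([depth_a, depth_b])
print([round(d, 3) if d is not None else None for d in full])
# -> [1.0, 0.899, 0.749, 0.6]
```

Generating a third or fourth depth map and merging it works the same way: each additional outgoing-beam state contributes one more per-state map to the `stitch` call.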
Optionally, the terminal device further includes a collimator lens disposed between the laser light source and the optical element, and the method shown in fig. 78 further includes:
7006. Collimate the laser beam using the collimating lens to obtain a collimated beam.
In this case, step 7002 specifically includes: controlling the optical element to control the direction of the collimated beam to obtain the first outgoing beam and the second outgoing beam.
In addition, collimating the beam with the collimating lens produces an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
Optionally, the terminal device further includes a light uniformizing device disposed between the laser light source and the optical element, and the method shown in fig. 78 further includes:
7007. Adjust the energy distribution of the laser beam using the dodging device to obtain a dodged beam.
In this case, step 7002 specifically includes: controlling the optical element to control the direction of the dodged beam to obtain the first outgoing beam and the second outgoing beam.
The dodging treatment makes the optical power of the laser beam more uniform in angular space, or distributes it according to a specific rule, which prevents the local optical power from being too small and avoids blind spots in the finally obtained depth map of the target object.
On the basis of the above-described steps 7001 to 7005, the method shown in fig. 78 may further include step 7006 or step 7007.
Alternatively, the method shown in fig. 78 may include both step 7006 and step 7007 in addition to steps 7001 to 7005. In this case, after step 7001 is performed, step 7006 may be performed first, then step 7007, and then step 7002; or step 7007 may be performed first, then step 7006, and then step 7002. That is, after the laser light source generates the laser beam in step 7001, the laser beam may first be collimated and then dodged (the energy distribution of the laser beam being adjusted by the dodging device) before the optical element is controlled to control its direction; or the laser beam may first be dodged and then collimated before the optical element is controlled to control its direction.
A TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail above with reference to fig. 70 to 78. Another TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail below with reference to fig. 79 to 88.
Because liquid crystal devices have excellent polarization and phase adjustment capabilities, they are widely used in TOF depth sensing modules to deflect a light beam. However, due to the birefringence of the liquid crystal material, a TOF depth sensing module using a liquid crystal device generally adds a polarizer at the transmitting end so that polarized light is emitted. Because of the polarization selection effect of the polarizer, half of the energy is lost when the beam exits; the lost energy is absorbed or scattered by the polarizer and converted into heat, which raises the temperature of the TOF depth sensing module and affects its stability. Therefore, how to reduce the heat loss of the TOF depth sensing module and improve its signal-to-noise ratio is a problem to be solved.
The application provides a novel TOF depth sensing module, which reduces the heat loss of a system by transferring a polaroid from a transmitting end to a receiving end, and improves the signal-to-noise ratio of the system relative to background stray light.
The TOF depth sensing module according to an embodiment of the present application will be briefly described with reference to fig. 79.
The TOF depth sensing module 600 shown in fig. 79 includes: a laser light source 610, an optical element 620, a beam splitter 630, a receiving unit 640, and a control unit 650.
These several modules or units in the TOF depth sensing module 600 are described in detail below.
The laser light source 610:
the laser light source 610 is used to generate a laser beam.
Optionally, the laser source 610 is a Vertical Cavity Surface Emitting Laser (VCSEL).
Optionally, the laser light source 610 is a fabry-perot laser (which may be abbreviated as FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and higher electro-optical conversion efficiency, and can therefore improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 610 is greater than 900 nm.
Because sunlight is relatively weak at wavelengths above 900 nm, a laser beam with a wavelength greater than 900 nm suffers less interference from sunlight, which can improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 610 is 940nm or 1550 nm.
Because sunlight is particularly weak near 940 nm and 1550 nm, choosing a laser wavelength of 940 nm or 1550 nm greatly reduces the interference caused by sunlight and can improve the scanning effect of the TOF depth sensing module.
Optionally, the light emitting area of the laser light source 610 is less than or equal to 5 × 5 mm².
Because such a laser light source is small, the TOF depth sensing module containing it is relatively easy to integrate into a terminal device and occupies less space in the terminal device.
The optical element 620:
the optical element 620 is disposed in the emitting direction of the laser beam, and the optical element 620 is configured to control the direction of the laser beam to obtain a first outgoing beam and a second outgoing beam, where the outgoing direction of the first outgoing beam is different from that of the second outgoing beam, and the polarization direction of the first outgoing beam is orthogonal to that of the second outgoing beam.
Alternatively, as shown in fig. 35, the optical element 620 may include a transverse polarization control plate, a transverse liquid crystal polarization grating, a longitudinal polarization control plate, and a longitudinal liquid crystal polarization grating, whose distances from the laser light source increase in that order.
Alternatively, in the optical element 620, the distances between the longitudinal polarization control plate, the longitudinal liquid crystal polarization grating, the transverse polarization control plate, and the transverse liquid crystal polarization grating and the laser light source are sequentially increased.
The receiving unit 640:
The receiving unit 640 may include a receiving lens 641 and a sensor 642.
Beam splitter 630:
The beam splitter 630 is configured to transmit a third reflected beam, obtained when the target object reflects the first outgoing beam, and a fourth reflected beam, obtained when the target object reflects the second outgoing beam, to different areas of the sensor.
The beam splitter is a passive selection device, generally not controlled by the control unit, that can transmit the beams of different polarization states within a mixed-polarization beam to different areas of the receiving unit.
Optionally, the beam splitter is implemented based on any one of a liquid crystal polarization grating (LCPG), a polarization beam splitting prism (PBS), and a polarization filter.
In this application, moving the polarizer from the transmitting end to the receiving end reduces the heat loss of the system; in addition, arranging a beam splitter at the receiving end improves the signal-to-noise ratio of the TOF depth sensing module.
As shown in fig. 80, the TOF depth sensing module 600 may further include a collimating lens 660 arranged in the emitting direction of the laser beam, between the laser light source 610 and the optical element 620, and used for collimating the laser beam to obtain a collimated beam; in this case, the optical element 620 is configured to control the direction of the collimated beam to obtain the first outgoing beam and the second outgoing beam.
Collimating the beam with the collimating lens produces an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
Optionally, the clear aperture of the collimating lens is less than or equal to 5 mm.
Because the collimating lens is small, the TOF depth sensing module containing it is relatively easy to integrate into a terminal device and occupies less space in the terminal device.
As shown in fig. 81, the TOF depth sensing module 600 may further include:
the dodging device 670 is arranged in the emitting direction of the laser beam, between the laser light source and the optical element, and is used for adjusting the energy distribution of the laser beam to obtain a dodged beam; in this case, the optical element 620 is configured to control the direction of the dodged beam to obtain the first outgoing beam and the second outgoing beam.
Alternatively, the light homogenizing device may be a micro lens diffuser or a diffractive optical diffuser.
It should be understood that the TOF depth sensing module 600 may include both the collimating lens 660 and the dodging device 670, with both located between the laser light source 610 and the optical element 620; in that case, either the collimating lens 660 or the dodging device 670 may be the one closer to the laser light source.
As shown in fig. 82, the distance between the collimator lens 660 and the laser light source 610 is smaller than the distance between the dodging device 670 and the laser light source 610.
In the TOF depth sensing module 600 shown in fig. 82, a laser beam emitted from the laser light source 610 is collimated by the collimating lens 660, and then is homogenized by the light homogenizing device 670, and then is transmitted to the optical element 620 for processing.
As shown in fig. 83, the distance between the collimator lens 660 and the laser light source 610 is greater than the distance between the dodging device 670 and the laser light source 610.
In the TOF depth sensing module 600 shown in fig. 83, a laser beam emitted from the laser source 610 is first subjected to the dodging process by the dodging device 670, and then is collimated by the collimating lens 660 and then transmitted to the optical element 620 for processing.
The detailed structure of the TOF depth sensing module 600 is described in detail below with reference to the accompanying drawings.
Fig. 84 is a schematic structural diagram of a TOF depth sensing module 600 according to an embodiment of the present disclosure.
As shown in fig. 84, the TOF depth sensing module 600 includes a projection end and a receiving end. The laser light source of the projection end is a VCSEL light source, the dodging device is a diffractive optical diffuser (DOE Diffuser), and the beam deflection element is a multilayer LCPG combined with a 1/4 wave plate, wherein each LCPG layer includes an electrically controlled LCPG component for the horizontal direction and an electrically controlled LCPG component for the vertical direction. Two-dimensional block scanning in the horizontal and vertical directions can be realized by cascading multiple LCPG layers.
The wavelength of the VCSEL array light source may be greater than 900 nm; specifically, it may be 940 nm or 1550 nm.
When the wavelength of the VCSEL array light source is 940 nm or 1550 nm, the solar spectral intensity is relatively weak, which helps reduce noise caused by sunlight in outdoor scenes.
The laser light emitted by the VCSEL array light source may be continuous light or pulsed light. The VCSEL array light source may also be divided into a plurality of blocks under time-division control, so that different blocks can be lit at different times.
The function of the diffractive optical diffuser is to shape the light beam emitted by the VCSEL array light source into a uniform square or rectangular light source with a certain FOV (e.g., 5 ° × 5 ° FOV).
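As a rough check on the FOV figure above, the width of the patch illuminated at a given distance follows from simple trigonometry. This is a hedged sketch: the flat on-axis target, the 2 m distance, and the function name are illustrative assumptions, not taken from this application.

```python
import math

def illuminated_width(fov_deg: float, distance_m: float) -> float:
    """Width of the square patch lit by a beam of the given full FOV
    at the given distance, for a flat target normal to the axis."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# A 5-degree x 5-degree block at 2 m illuminates roughly a 0.17 m square.
w = illuminated_width(5.0, 2.0)
```

A wider FOV or a more distant target scales the illuminated patch accordingly.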
The multilayer LCPG and 1/4 waveplates function to achieve scanning of the beam.
The transmitting and receiving ends share the multilayer LCPG and 1/4 wave plate. The receiving lens at the receiving end may be a single lens or a combination of multiple lenses. The sensor at the receiving end is a single-photon avalanche diode (SPAD) array; because a SPAD has single-photon detection sensitivity, the detection distance of the TOF depth sensing module 600 can be increased. The receiving end includes a beam splitter, implemented here as a single-layer LCPG. At any given moment, the projection end can project light in two polarization states into different FOV ranges; after passing through the multilayer LCPG at the receiving end, the light converges into a single beam, and the beam splitter then splits it into two beams travelling in different directions according to their polarization states, projecting them onto different positions of the SPAD array.
Fig. 85 is a schematic structural diagram of a TOF depth sensing module 600 according to an embodiment of the present disclosure.
The TOF depth sensing module 600 shown in fig. 85 differs from that shown in fig. 84 in that the beam splitter is implemented as a single-layer LCPG in fig. 84, whereas in fig. 85 it is implemented as a polarization beam splitter, which is typically cemented from coated right-angle prisms. Since the polarization beam splitter is an off-the-shelf product, using it as the beam splitter has a cost advantage.
As shown in fig. 85, the two orthogonal polarization states of the reflected beam are separated at the polarization beam splitter: one is transmitted directly into the SPAD array sensor, and the other is reflected and then redirected by another mirror into the SPAD array sensor.
Fig. 86 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present application.
The difference from the TOF depth sensing module 600 shown in fig. 84 is that in fig. 86 the beam splitter is implemented by a polarizing filter; for example, a 1/4 wave plate may also be used, as in fig. 86.
The polarizing filter is patterned like a pixelated image: adjacent pixels transmit different polarization states, and each filter pixel corresponds to one SPAD pixel. Thus, the SPAD sensor can receive information in two polarization states simultaneously.
FIG. 87 is a schematic diagram of a polarizing filter receiving a polarized light beam.
As shown in fig. 87, different regions of the polarizing filter transmit either H-polarized light (polarized in the horizontal direction) or V-polarized light (polarized in the vertical direction). Each region of the polarizing filter allows only the beam of the corresponding polarization state to reach the corresponding position of the sensor: an H region allows only horizontally polarized beams to reach the corresponding position of the sensor, and a V region allows only vertically polarized beams to do so.
When a polarizing filter is used as the beam splitter, its small thickness and size make the module easier to integrate into terminal devices of smaller size.
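The pixelated filter described above can be modeled in software as a checkerboard mask over the sensor frame. The following sketch is illustrative only: the checkerboard layout and the function name are assumptions, not taken from this application.

```python
import numpy as np

def split_polarization_mosaic(frame: np.ndarray):
    """Separate a sensor frame overlaid with a pixelated polarizing
    filter into H- and V-polarization sub-images.  Hypothetical layout:
    H on pixels where (row + col) is even, V where it is odd."""
    rows, cols = np.indices(frame.shape)
    h_mask = (rows + cols) % 2 == 0
    h_img = np.where(h_mask, frame, 0)   # keep only H-filter pixels
    v_img = np.where(~h_mask, frame, 0)  # keep only V-filter pixels
    return h_img, v_img

frame = np.arange(16, dtype=float).reshape(4, 4)
h_img, v_img = split_polarization_mosaic(frame)
# The two sub-images are complementary and together cover every pixel.
```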
Fig. 88 is a schematic flowchart of an image generation method of an embodiment of the present application.
The method shown in fig. 88 may be performed by a TOF depth sensing module or a terminal device including the TOF depth sensing module according to an embodiment of the present disclosure, and in particular, the method shown in fig. 88 may be performed by the TOF depth sensing module shown in fig. 79 or a terminal device including the TOF depth sensing module shown in fig. 79. The method shown in fig. 88 includes steps 8001 to 8006, which are described in detail below.
8001. And controlling the laser light source to generate a laser beam.
8002. The control optical element controls the direction of the laser beam to obtain a first outgoing beam and a second outgoing beam.
The emergent direction of the first emergent light beam is different from that of the second emergent light beam, and the polarization direction of the first emergent light beam is orthogonal to that of the second emergent light beam.
8003. And controlling the beam splitter to transmit a third reflected beam obtained by reflecting the first emergent beam by the target object and a fourth reflected beam obtained by reflecting the second emergent beam by the target object to different areas of the receiving unit.
8004. And generating a first depth map of the target object according to the TOF corresponding to the first emergent light beam.
8005. And generating a second depth map of the target object according to the TOF corresponding to the second emergent light beam.
The basic procedure of the method shown in fig. 88 is the same as that of the method shown in fig. 78. The main difference is that in step 7003 of the method shown in fig. 78 the third reflected beam and the fourth reflected beam are propagated to different regions of the receiving unit by the beam selection device, whereas in step 8003 of the method shown in fig. 88 they are propagated to different regions of the receiving unit by the beam splitter.
In this embodiment of the application, because no polarization filtering device is arranged at the transmitting end, the beam emitted by the laser light source can reach the optical element almost without loss (a polarization filtering device generally absorbs a considerable amount of optical energy and thus produces heat), so the heat loss of the terminal device can be reduced.
Optionally, the method shown in fig. 88 further comprises: and splicing the first depth map and the second depth map to obtain a depth map of the target object.
It should be appreciated that in the method shown in fig. 88, a third depth map, a fourth depth map, etc. may also be generated in a similar manner, and then all the depth maps may be stitched or combined to obtain a final depth map of the target object.
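The stitching of per-beam depth maps can be sketched as follows. This is a minimal illustration assuming, hypothetically, that the two outgoing beams cover horizontally adjacent, non-overlapping FOV blocks on a shared grid; the function name and values are not taken from this application.

```python
import numpy as np

def stitch_depth_maps(blocks):
    """Stitch per-block depth maps into one depth map, assuming the
    blocks lie left-to-right on a shared grid with no overlap."""
    return np.concatenate(blocks, axis=1)

first = np.full((4, 4), 1.2)   # depth map from the first outgoing beam (m)
second = np.full((4, 4), 1.5)  # depth map from the second outgoing beam (m)
depth = stitch_depth_maps([first, second])  # combined map, shape (4, 8)
```

With third and fourth depth maps, the same call simply takes a longer list of blocks.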
Optionally, the terminal device further includes a collimator lens disposed between the laser light source and the optical element, and the method shown in fig. 88 further includes:
8006. utilizing a collimating lens to collimate the laser beam to obtain a collimated beam;
Step 8002 then specifically includes: controlling the optical element to control the direction of the collimated beam to obtain the first outgoing beam and the second outgoing beam.
In addition, collimating the beam with the collimating lens yields an approximately parallel beam, which increases the power density of the beam and thereby improves the effect of subsequent scanning with the beam.
Optionally, the terminal device further includes a light unifying device disposed between the laser light source and the optical element, and the method shown in fig. 88 further includes:
8007. adjusting the energy distribution of the laser beam by using a light homogenizing device to obtain a light beam after light homogenizing treatment;
Step 8002 then specifically includes: controlling the optical element to control the direction of the homogenized beam to obtain the first outgoing beam and the second outgoing beam.
Homogenization makes the optical power of the laser beam more uniform in angular space, or distributes it according to a specific rule, which prevents the local optical power from being too small and thereby avoids blind spots in the finally obtained depth map of the target object.
On the basis of the above steps 8001 to 8005, the method shown in fig. 88 may further include step 8006 or step 8007.
Alternatively, the method shown in fig. 88 may further include both step 8006 and step 8007 on the basis of steps 8001 to 8005. In this case, after step 8001 is performed, step 8006 may be performed first, then step 8007, and then step 8002; or step 8007 may be performed first, then step 8006, and then step 8002. That is, after the laser light source generates the laser beam in step 8001, the laser beam may be collimated and then homogenized (the energy distribution of the laser beam being adjusted by the dodging device) before the optical element is controlled to control its direction; or the laser beam may be homogenized first and then collimated before the optical element is controlled to control its direction.
A TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail above with reference to fig. 79 to 88. Another TOF depth sensing module and an image generation method according to an embodiment of the present application are described in detail below with reference to fig. 89 to 101.
Because liquid crystal devices offer excellent polarization and phase adjustment capability, TOF depth sensing modules often use them to control light beams. However, liquid crystal materials limit the response time, usually to the order of milliseconds, so the scanning frequency of a TOF depth sensing module using a liquid crystal device is relatively low (usually below 1 kHz).
This application provides a new TOF depth sensing module in which the drive-signal timings of the electrically controlled liquid crystals at the transmitting end and the receiving end are staggered by a certain time (for example, half a period), thereby increasing the scanning frequency of the system.
The TOF depth sensing module according to an embodiment of the present invention will be briefly described with reference to fig. 89.
The TOF depth sensing module 700 shown in fig. 89 includes: a laser light source 710, an optical element 720, a beam selection device 730, a receiving unit 740, and a control unit 750.
The functions of each module or unit in the TOF depth sensing module are specifically as follows:
Laser light source 710:
the laser light source 710 is used to generate a laser beam.
Optionally, the laser source 710 is a Vertical Cavity Surface Emitting Laser (VCSEL).
Optionally, the laser light source 710 is a fabry-perot laser (which may be abbreviated as FP laser).
Compared with a single VCSEL, a single FP laser can achieve higher power and has a higher electro-optic conversion efficiency, so the scanning effect of the TOF depth sensing module can be improved.
Optionally, the wavelength of the laser beam emitted by the laser source 710 is greater than 900 nm.
Because sunlight above 900 nm is relatively weak, a laser beam wavelength greater than 900 nm helps reduce the interference caused by sunlight, which can further improve the scanning effect of the TOF depth sensing module.
Optionally, the wavelength of the laser beam emitted by the laser light source 710 is 940nm or 1550 nm.
Because sunlight near 940 nm or 1550 nm is relatively weak, a laser beam wavelength of 940 nm or 1550 nm greatly reduces the interference caused by sunlight, which can further improve the scanning effect of the TOF depth sensing module.
Optionally, the light emitting area of the laser light source 710 is less than or equal to 5 × 5 mm².
Because the laser light source is small, a TOF depth sensing module containing it is relatively easy to integrate into a terminal device and occupies less space in the terminal device.
Optionally, the average output optical power of the TOF depth sensing module 700 is less than 800 mw.
When the average output optical power of the TOF depth sensing module is less than 800 mW, its power consumption is small, which makes it convenient to arrange in terminal devices and other power-sensitive devices.
The optical element 720:
the optical element 720 is disposed in the direction of the light beam emitted by the laser light source, and the optical element 720 is used for deflecting the laser light beam under the control of the control unit 750 to obtain an outgoing light beam.
Beam selection device 730:
the beam selection device 730 is configured to select a beam having at least two polarization states from the beams in each period in the reflected beam of the target object under the control of the control unit 750, obtain a reception beam, and transmit the reception beam to the reception unit 740.
The outgoing beam varies periodically, with a period equal to a first time interval. In the outgoing beam, beams in adjacent periods have different tilt angles, while beams within the same period have at least two polarization states, the same tilt angle, and different azimuth angles.
In the embodiment of the application, the direction and the polarization state of the light beam emitted by the laser light source are adjusted through the optical element and the light beam selection device, so that the inclination angles of the emergent light beams in adjacent periods are different, and the light beams in the same period have at least two polarization states, so that the scanning frequency of the TOF depth sensing module is improved.
In this application, the control unit staggers the control-signal timings of the transmitting end and the receiving end by a certain time, which can increase the scanning frequency of the TOF depth sensing module.
Alternatively, as shown in fig. 35, the optical element 720 includes: the device comprises a transverse polarization control sheet, a transverse liquid crystal polarization grating, a longitudinal polarization control sheet and a longitudinal liquid crystal polarization grating. The distances between the transverse polarization control sheet, the transverse liquid crystal polarization grating, the longitudinal polarization control sheet and the longitudinal liquid crystal polarization grating and the laser light source are sequentially increased.
Alternatively, in the optical element 720, the distances between the longitudinal polarization control plate, the longitudinal liquid crystal polarization grating, the transverse polarization control plate, and the transverse liquid crystal polarization grating and the laser light source are sequentially increased.
Optionally, the beam selection device is composed of a 1/4 wave plate, an electrically controlled half-wave plate, and a polarizer.
As shown in fig. 90, the TOF depth sensing module may further include: the collimating lens 760, the collimating lens 760 is arranged between the laser light source 710 and the optical element 720, and the collimating lens 760 is used for collimating the laser beam; the optical element 720 is used for deflecting the light beam after the collimation treatment of the collimator lens under the control of the control unit 750, so as to obtain an outgoing light beam.
When the TOF depth sensing module includes the collimating lens, the beam emitted by the laser light source can be collimated into an approximately parallel beam, which increases the power density of the beam and thereby improves the effect of subsequent scanning with the beam.
Optionally, the clear aperture of the collimating lens is less than or equal to 5 mm.
Because the collimating lens is small, a TOF depth sensing module containing it is relatively easy to integrate into a terminal device and occupies less space in the terminal device.
As shown in fig. 91, the TOF depth sensing module 700 further includes a dodging device 770, which is arranged between the laser light source 710 and the optical element 720 and is configured to adjust the angular-space intensity distribution of the laser beam; the optical element 720 is configured to control, under the control of the control unit 750, the direction of the beam homogenized by the dodging device 770, so as to obtain the outgoing beam.
Optionally, the light uniformizing device 770 is a microlens diffuser or a diffractive optical diffuser.
Homogenization makes the optical power of the laser beam more uniform in angular space, or distributes it according to a specific rule, which prevents the local optical power from being too small and thereby avoids blind spots in the finally obtained depth map of the target object.
It should be understood that the TOF depth sensing module 700 may include both the collimating lens 760 and the dodging device 770, and both may be located between the laser light source 710 and the optical element 720. Either the collimating lens 760 or the dodging device 770 may be the one closer to the laser light source.
Fig. 92 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present disclosure.
As shown in fig. 92, the distance between the collimator lens 760 and the laser light source 710 is smaller than the distance between the dodging device 770 and the laser light source 710.
In the TOF depth sensing module 700 shown in fig. 92, a laser beam emitted from the laser source 710 is first collimated by the collimating lens 760, then homogenized by the light homogenizing device 770, and then transmitted to the optical element 720 for processing.
Fig. 93 is a schematic structural diagram of a TOF depth sensing module according to an embodiment of the present application.
As shown in fig. 93, the distance between the collimator lens 760 and the laser light source 710 is greater than the distance between the dodging device 770 and the laser light source 710.
In the TOF depth sensing module 700 shown in fig. 93, a laser beam emitted from the laser source 710 is first subjected to the dodging process by the dodging device 770, then subjected to the collimation process by the collimating lens 760, and then transmitted to the optical element 720 for processing.
The operation of the TOF depth sensing module 700 is described below with reference to fig. 94 and 95.
As shown in fig. 94, assuming that the highest frequencies of the electrically controlled devices at the transmitting end and the receiving end of the TOF depth sensing module 700 are both 1/T, the control unit staggers the control timings of the transmitting end and the receiving end by half a period (0.5T), so that the receiving-end sensor can receive beams from different spatial positions every 0.5T.
As shown in fig. 95, within 0–0.5T the receiving-end sensor receives the beam of angle 1 in state A; within 0.5T–T, the beam of angle 1 in state B; within T–1.5T, the beam of angle 2 in state A; and within 1.5T–2T, the beam of angle 2 in state B. The scanning frequency of the system is thus doubled from 1/T to 2/T.
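The staggered timing described above can be sketched as a small simulation: the transmitting end switches the angle every period T, while the receiving end switches the polarization state on a timing offset by T/2, so a distinct (angle, state) pair is observed every half period. The function name and sampling instants below are illustrative assumptions.

```python
def angle_state_at(t: float, T: float):
    """(angle index, polarization state) seen by the receiving-end sensor
    when the transmit-side element switches angle every period T and the
    receive-side element switches state on a timing staggered by T/2."""
    angle = int(t // T) + 1                  # angle 1, 2, ... each period T
    state = 'A' if (t % T) < T / 2 else 'B'  # state toggles at the half period
    return angle, state

T = 1.0
samples = [angle_state_at(t, T) for t in (0.25, 0.75, 1.25, 1.75)]
# -> [(1, 'A'), (1, 'B'), (2, 'A'), (2, 'B')]: four distinct angle/state
# pairs within 2T, i.e. the scanning frequency doubles from 1/T to 2/T.
```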
The specific structure of the TOF depth sensing module 700 is described in detail below with reference to the accompanying drawings.
Fig. 96 is a schematic structural diagram of a TOF depth sensing module 700 according to an embodiment of the present disclosure.
As shown in fig. 96, the TOF depth sensing module 700 includes a projecting end, a receiving end, and a control unit. The projecting end includes: a light source, a light uniformizing device and an optical element; the receiving end includes: the device comprises an optical element, a light beam selection device, a receiving lens and a two-dimensional sensor; the control unit is used for controlling the projection end and the receiving end to complete the scanning of the light beam.
The laser light source at the projection end is a VCSEL light source, the dodging device is a diffractive optical diffuser (DOE Diffuser), and the beam deflection element is a multilayer LCPG combined with a 1/4 wave plate, wherein each LCPG layer includes an electrically controlled LCPG component for the horizontal direction and an electrically controlled LCPG component for the vertical direction. Two-dimensional block scanning in the horizontal and vertical directions can be realized by cascading multiple LCPG layers.
The wavelength of the VCSEL array light source may be greater than 900 nm; specifically, it may be 940 nm or 1550 nm.
When the wavelength of the VCSEL array light source is 940 nm or 1550 nm, the solar spectral intensity is relatively weak, which helps reduce noise caused by sunlight in outdoor scenes.
The laser light emitted by the VCSEL array light source may be continuous light or pulsed light. The VCSEL array light source may also be divided into a plurality of blocks under time-division control, so that different blocks can be lit at different times.
The function of the diffractive optical diffuser is to shape the light beam emitted by the VCSEL array light source into a uniform square or rectangular light source with a certain FOV (e.g., 5 ° × 5 ° FOV).
The multilayer LCPG and 1/4 waveplates function to achieve scanning of the beam.
Through time-division control of the transmitting end and the receiving end, light of different angles and different states can be dynamically selected to enter the sensor. As shown in fig. 96, the laser emitted by an ordinary VCSEL array light source has no fixed polarization state and can be decomposed into linearly polarized light parallel to the paper plane and linearly polarized light perpendicular to it. After passing through the LCPG, the linearly polarized light is split into two laser beams with different polarization states (left-handed and right-handed circular polarization) distributed at different emission angles; after passing through the 1/4 wave plate, their polarization states are converted into linear polarization parallel to the paper plane and linear polarization perpendicular to it. The two laser beams with different polarization states irradiate an object in the target area and produce return beams. The return beams are received through the 1/4 wave plate and LCPG shared with the transmitting end and become laser beams with the same divergence angle but different polarization states, namely left-handed and right-handed circularly polarized light.
The beam selection device at the receiving end consists of a 1/4 wave plate, an electrically controlled half-wave plate, and a polarizer. After passing through the 1/4 wave plate, the polarization states of the received light are converted into linear polarization parallel to the paper plane and linear polarization perpendicular to it. By controlling the electrically controlled half-wave plate in a time-division manner, the polarization of the linearly polarized light passing through it is either rotated by 90° or left unchanged, so that the linear polarizations parallel and perpendicular to the paper plane are transmitted at different times, while light in the other polarization state is absorbed or scattered by the polarizer.
In fig. 96, the time-division control signals of the transmitting end and the receiving end may be as shown in fig. 94: the control timings of the electrically controlled LCPG at the transmitting end and the electrically controlled half-wave plate at the receiving end are staggered by half a period (0.5T), so the scanning frequency of the system can be doubled.
Fig. 97 is a schematic structural diagram of a TOF depth sensing module 700 according to an embodiment of the present disclosure.
As shown in fig. 97, on the basis of the TOF depth sensing module shown in fig. 96, the diffractive optical diffuser (DOE Diffuser) behind the VCSEL array light source is replaced by a microlens diffuser. Because the microlens diffuser homogenizes light based on geometric optics, its transmission efficiency is higher and can exceed 80%, whereas the transmission efficiency of a conventional diffractive optical diffuser is only about 70%. The shape of the microlens diffuser is shown in fig. 77: it consists of a series of randomly distributed microlenses whose positions and shapes are designed by simulation-based optimization, so that the shaped beam is as uniform as possible while the transmission efficiency remains high.
The TOF depth sensing module shown in fig. 97 has the same driving principle as that shown in fig. 96, except that the diffractive optical diffuser (DOE Diffuser) is replaced by a microlens diffuser to improve the transmission efficiency of the transmitting end; details are not repeated here.
For the TOF depth sensing module shown in fig. 97, the time-division control signals of the transmitting end and the receiving end may likewise be as shown in fig. 94: the control timings of the electrically controlled LCPG at the transmitting end and the electrically controlled half-wave plate at the receiving end are staggered by half a period (0.5T), so the scanning frequency of the system can be doubled.
Fig. 98 is a schematic structural diagram of a TOF depth sensing module 700 according to an embodiment of the present disclosure.
On the basis of the TOF depth sensing module shown in fig. 96 or 97, the optical element can be changed from a multilayer LCPG plus 1/4 wave plate to a multilayer flat-plate liquid crystal cell, as shown in fig. 98. Beam deflection at multiple angles and in both the horizontal and vertical directions is achieved using the multilayer flat-plate liquid crystal cell. The beam selection device at the receiving end consists of an electrically controlled half-wave plate and a polarizer.
The principle of beam deflection in a flat-plate liquid crystal cell is shown in figs. 99 and 100: a wedge-shaped polymer interface is used to deflect the beam. The refractive index of the wedge-shaped polymer material is equal to the ordinary refractive index n₀ of the liquid crystal. In this case, as shown in fig. 99, when the optical axes of the liquid crystal molecules are aligned parallel to the x direction, incident light polarized parallel to the paper plane is deflected by a certain angle, the magnitude of which can be controlled by the applied voltage, while incident light polarized perpendicular to the paper plane propagates in a straight line. By stacking a plurality of flat-plate cell layers with different orientations (optical axis parallel to the x direction or the y direction), deflected incident light can be projected to different angles simultaneously.
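As an illustrative numerical check of the wedge-interface deflection, a single application of Snell's law at the liquid-crystal/polymer boundary gives the deflection angle. This is a deliberately simplified sketch: refraction at the exit face is ignored, and the index values and wedge angle are hypothetical, not taken from this application.

```python
import math

def wedge_deflection_deg(n_lc: float, n_polymer: float, wedge_deg: float) -> float:
    """Deflection of light crossing the liquid-crystal / wedge-polymer
    interface at an incidence angle equal to the wedge angle
    (single-interface Snell's-law sketch; exit face ignored)."""
    a = math.radians(wedge_deg)
    t = math.asin(n_lc * math.sin(a) / n_polymer)  # Snell: n_lc sin a = n_p sin t
    return math.degrees(t - a)

# Light seeing the extraordinary index n_e > n0 is deflected, while light
# seeing the ordinary index n0 is index-matched and passes straight through.
bent = wedge_deflection_deg(1.7, 1.5, 10.0)      # hypothetical n_e, n0, wedge
straight = wedge_deflection_deg(1.5, 1.5, 10.0)  # index-matched: no deflection
```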
By the same principle, the driving voltage of the flat-plate liquid crystal cell at the transmitting end and that of the electrically controlled half-wave plate at the receiving end are controlled so that their control timings are staggered by half a period (0.5T), thereby increasing the liquid crystal scanning frequency.
Fig. 101 is a schematic flowchart of an image generation method according to an embodiment of the present application.
The method shown in fig. 101 may be performed by a TOF depth sensing module or a terminal device including the TOF depth sensing module according to an embodiment of the present disclosure, and specifically, the method shown in fig. 101 may be performed by the TOF depth sensing module shown in fig. 89 or a terminal device including the TOF depth sensing module shown in fig. 89. The method shown in fig. 101 includes steps 9001 to 9004, which are described in detail below.
9001. And controlling the laser light source to generate a laser beam.
9002. And controlling the optical element to deflect the laser beam to obtain an emergent beam.
9003. The beam selection device is controlled to select a beam having at least two polarization states from the beams in each period in the reflected beam of the target object, to obtain a reception beam, and to transmit the reception beam to the reception unit.
9004. And generating a depth map of the target object according to the TOF corresponding to the emergent light beam.
The outgoing beam varies periodically, with a period equal to a first time interval. In the outgoing beam, beams in adjacent periods have different tilt angles, while beams within the same period have at least two polarization states, the same tilt angle, and different azimuth angles.
The TOF corresponding to the outgoing beam specifically refers to the time difference between the moment at which the receiving unit receives the reflected beam corresponding to the outgoing beam and the moment at which the outgoing beam is emitted by the laser light source. The reflected beam corresponding to the outgoing beam specifically refers to the beam produced when the outgoing beam, after being processed by the optical element and the beam selection device, reaches the target object and is reflected by it.
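For orientation only (this arithmetic is generic time-of-flight ranging physics, not a method step defined by the patent), the depth implied by a measured TOF is half the round-trip distance travelled at the speed of light:

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(emit_time_ns, receive_time_ns):
    """Convert a round-trip time of flight (in ns) to a one-way depth (in m)."""
    tof_s = (receive_time_ns - emit_time_ns) * 1e-9
    return C * tof_s / 2.0

# a reflection arriving 10 ns after emission corresponds to roughly 1.5 m
d = depth_from_tof(0.0, 10.0)
```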
In this embodiment of the application, the optical element and the beam selection device adjust the direction and polarization state of the beam emitted by the laser light source, so that outgoing beams in adjacent periods have different tilt angles and beams within the same period have at least two polarization states, thereby increasing the scanning frequency of the TOF depth sensing module.
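The periodic beam pattern described above (one tilt angle per period; within a period, the same tilt but different azimuths and at least two polarization states) can be sketched as a schedule. The function name, angles, and polarization labels here are illustrative assumptions, not values given by the patent.

```python
from itertools import cycle

def beam_schedule(tilts_deg, azimuths_deg, polarizations=("H", "V")):
    """One entry per outgoing beam: (period index, tilt, azimuth, polarization).
    Adjacent periods get different tilts; beams inside a period share the
    tilt, differ in azimuth, and alternate between polarization states."""
    schedule = []
    for period, tilt in enumerate(tilts_deg):
        pol = cycle(polarizations)
        for az in azimuths_deg:
            schedule.append((period, tilt, az, next(pol)))
    return schedule

beams = beam_schedule([0.0, 10.0, 20.0], [0.0, 90.0, 180.0, 270.0])
```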
Optionally, the terminal device further includes a collimator lens disposed between the laser light source and the optical element, in which case the method shown in fig. 101 further includes:
9005. Collimate the laser beam by using the collimating lens to obtain a collimated beam.
In this case, the deflecting of the laser beam in step 9002 to obtain an outgoing beam specifically includes: controlling the optical element to control the direction of the collimated beam so as to obtain the outgoing beam.
Collimating the beam with the collimating lens yields an approximately parallel beam, which increases the power density of the beam and thus improves the effect of subsequent scanning with the beam.
Optionally, the terminal device further includes a light uniformizing device disposed between the laser light source and the optical element, in which case the method shown in fig. 101 further includes:
9006. Adjust the energy distribution of the laser beam by using the light homogenizing device to obtain a homogenized beam.
In this case, the deflecting of the laser beam in step 9002 to obtain an outgoing beam specifically includes: controlling the optical element to control the direction of the homogenized beam so as to obtain the outgoing beam.
Through homogenization, the optical power of the laser beam can be distributed more uniformly over angular space, or distributed according to a specific rule, which prevents the local optical power from being too small and thus avoids blind spots in the finally obtained depth map of the target object.
The FOV of the beam processed by the beam shaping device in the TOF depth sensing module 300 is described below with reference to fig. 102 and 103.
The beam shaper 330 in the TOF depth sensing module 300 adjusts the laser beam to obtain a first beam having a FOV in a range of [5 ° × 5 °, 20 ° × 20 ° ].
Fig. 102 is a schematic view of the FOV of the first light beam.
As shown in fig. 102, the first light beam exits from point O; the FOV of the first light beam in the vertical direction is angle A, and the FOV in the horizontal direction is angle B. Rectangle E is the area of the first light beam projected on the target object (the projected area is drawn as a rectangle here but may have other shapes). Angle A is between 5° and 20° (inclusive), and angle B is likewise between 5° and 20° (inclusive).
In the TOF depth sensing module 300, the control unit 370 may be configured to control the first optical element to control the directions of the first light beam at M different times, respectively, so as to obtain M outgoing light beams in different directions, where the ranges of the total FOV covered by the M outgoing light beams in different directions include [50 ° × 50 °, 80 ° × 80 ° ].
Fig. 103 is a schematic diagram of FOVs covered by M outgoing beams in different directions.
Specifically, as shown in fig. 103, the M outgoing light beams in different directions are emitted from point O, and the area they cover on the target object is rectangle F, where angle C is the superimposed vertical FOV of the M outgoing light beams and angle D is their superimposed horizontal FOV. Angle C ranges between 50° and 80° (inclusive), and likewise angle D ranges between 50° and 80° (inclusive).
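As a back-of-the-envelope check (this assumes a simple rectangular tiling of beam footprints, which is an illustrative assumption rather than a construction given in the patent), the number of distinct beam directions M needed to cover the total FOV can be estimated by dividing the total FOV by the per-beam FOV on each axis:

```python
import math

def beam_positions_needed(total_fov_deg, beam_fov_deg):
    """Estimate how many non-overlapping beam directions tile the total FOV,
    assuming a rectangular grid of per-beam footprints."""
    rows = math.ceil(total_fov_deg[0] / beam_fov_deg[0])
    cols = math.ceil(total_fov_deg[1] / beam_fov_deg[1])
    return rows * cols

# e.g. covering 80° x 80° with 20° x 20° beams needs a 4 x 4 grid of directions
m = beam_positions_needed((80, 80), (20, 20))
```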
The above explanations of the FOVs of the first light beam generated by the TOF depth sensing module 300 and the M outgoing light beams in different directions in conjunction with fig. 102 and 103 are also applicable to the first light beam generated by the TOF depth sensing module 400 and the M outgoing light beams in different directions, and the descriptions are not repeated here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A TOF depth sensing module is characterized by comprising a laser light source, an optical element, a light beam selection device, a receiving unit and a control unit, wherein the optical element is arranged in the direction of a light beam emitted by the laser light source;
the laser light source is used for generating a laser beam;
the control unit is used for controlling the birefringence parameter of the optical element to obtain an adjusted birefringence parameter;
the optical element is configured to adjust a direction of the laser beam based on the adjusted birefringence parameter to obtain a first outgoing beam and a second outgoing beam, where the optical element is capable of adjusting the laser beam to different directions when the birefringence of the optical element is different, the outgoing direction of the first outgoing beam is different from the outgoing direction of the second outgoing beam, the first outgoing beam and the second outgoing beam are both beams in a single polarization state, and the polarization direction of the first outgoing beam is orthogonal to the polarization direction of the second outgoing beam;
the optical element is used for controlling the direction of the laser beam to obtain a first emergent beam and a second emergent beam, wherein the emergent direction of the first emergent beam is different from that of the second emergent beam, the first emergent beam and the second emergent beam are both beams in a single polarization state, and the polarization direction of the first emergent beam is orthogonal to that of the second emergent beam;
The control unit is configured to control the beam selection device to transmit a third reflected beam and a fourth reflected beam to the receiving unit, respectively, where the third reflected beam is a beam reflected by the target object from the first outgoing beam, and the fourth reflected beam is a beam reflected by the target object from the second outgoing beam.
2. The TOF depth sensing module of claim 1, wherein the optical element comprises: the device comprises a transverse polarization control sheet, a transverse liquid crystal polarization grating, a longitudinal polarization control sheet and a longitudinal liquid crystal polarization grating.
3. The TOF depth sensing module of claim 2, wherein the distances from the lateral polarization control plate, the lateral liquid crystal polarization grating, the longitudinal polarization control plate, and the longitudinal liquid crystal polarization grating to the laser light source are sequentially larger, or the distances from the longitudinal polarization control plate, the longitudinal liquid crystal polarization grating, the lateral polarization control plate, and the lateral liquid crystal polarization grating to the laser light source are sequentially larger.
4. The TOF depth sensing module of any of claims 1-3, wherein the beam selection device consists of a quarter-wave plate, a half-wave plate, and a polarizer.
5. The TOF depth sensing module of any of claims 1-4, further comprising:
the collimating lens is arranged between the laser light source and the optical element and is used for collimating the laser beam;
the optical element is used for controlling the direction of the light beam collimated by the collimating lens so as to obtain the first emergent light beam and the second emergent light beam.
6. The TOF depth sensing module of claim 5, wherein the clear aperture of the collimating lens is less than or equal to 5 mm.
7. The TOF depth sensing module of any of claims 1-4, further comprising:
the dodging device is arranged between the laser light source and the optical element and is used for adjusting the angular space intensity distribution of the laser beams;
the optical element is used for controlling the direction of the light beam after the light homogenizing treatment of the light homogenizing device so as to obtain the first emergent light beam and the second emergent light beam.
8. The TOF depth sensing module of claim 7, wherein the light homogenizing device is a micro lens diffuser or a diffractive optical diffuser.
9. The TOF depth sensing module of any of claims 1-8, wherein the laser light source is a Fabry-Perot (FP) laser.
10. The TOF depth sensing module of any of claims 1-8, wherein the laser light source is a vertical cavity surface emitting laser (VCSEL).
11. The TOF depth sensing module of any of claims 1-10, wherein the light emitting area of the laser light source is less than or equal to 5 × 5 mm².
12. The TOF depth sensing module of any of claims 1-11, wherein the average output optical power of the TOF depth sensing module is less than 800 mW.
13. An image generation method is applied to terminal equipment comprising a TOF depth sensing module, wherein the TOF depth sensing module comprises a laser light source, an optical element, a sensor, a light beam selection device, a receiving unit and a control unit, and the optical element is arranged in the direction of a light beam emitted by the laser light source, and the image generation method is characterized by comprising the following steps of:
controlling the laser light source to generate a laser beam;
controlling the optical element to control the direction of the laser beam to obtain a first emergent beam and a second emergent beam, wherein the emergent direction of the first emergent beam is different from that of the second emergent beam, the first emergent beam and the second emergent beam are both beams in a single polarization state, and the polarization direction of the first emergent beam is orthogonal to that of the second emergent beam;
Controlling the beam selection device to transmit a third reflected beam and a fourth reflected beam to the receiving unit at different time intervals, wherein the third reflected beam is a beam reflected by the target object from the first outgoing beam, and the fourth reflected beam is a beam reflected by the target object from the second outgoing beam;
acquiring a TOF corresponding to the first emergent light beam and a TOF corresponding to the second emergent light beam;
generating a first depth map of the target object according to the TOF corresponding to the first emergent light beam;
and generating a second depth map of the target object according to the TOF corresponding to the second emergent light beam.
14. The image generation method according to claim 13, wherein the terminal device further includes a collimator lens disposed between the laser light source and the optical element, the image generation method further comprising:
utilizing the collimating lens to perform collimation treatment on the laser beam to obtain a collimated beam;
the controlling the optical element to control the direction of the laser beam to obtain a first outgoing beam and a second outgoing beam includes:
And controlling the optical element to control the direction of the collimated light beam so as to obtain the first emergent light beam and the second emergent light beam.
15. The image generation method according to claim 13 or 14, wherein the terminal device further includes a light unifying device provided between the laser light source and the optical element, the image generation method further comprising:
adjusting the energy distribution of the laser beam by using the dodging device to obtain a dodged beam;
the controlling the optical element to control the direction of the laser beam to obtain a first outgoing beam and a second outgoing beam includes:
and controlling the optical element to control the direction of the light beam after the dodging treatment so as to obtain the first emergent light beam and the second emergent light beam.
CN202010007047.6A 2020-01-03 2020-01-03 TOF depth sensing module and image generation method Pending CN113075691A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010007047.6A CN113075691A (en) 2020-01-03 2020-01-03 TOF depth sensing module and image generation method

Publications (1)

Publication Number Publication Date
CN113075691A true CN113075691A (en) 2021-07-06

Family

ID=76608673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010007047.6A Pending CN113075691A (en) 2020-01-03 2020-01-03 TOF depth sensing module and image generation method

Country Status (1)

Country Link
CN (1) CN113075691A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109459849A (en) * 2017-09-06 2019-03-12 脸谱科技有限责任公司 On-mechanical light beam for depth sense turns to
WO2019065975A1 (en) * 2017-09-28 2019-04-04 国立研究開発法人産業技術総合研究所 Circularl polarization-type polarization diversity element, scanning element using same, and lidar
US20190154809A1 (en) * 2017-07-11 2019-05-23 Microsoft Technology Licensing, Llc Illumination for zoned time-of-flight imaging
CN110221444A (en) * 2019-06-06 2019-09-10 深圳市麓邦技术有限公司 Imaging system
CN110244281A (en) * 2019-07-19 2019-09-17 北京一径科技有限公司 A kind of laser radar system
CN110456323A (en) * 2019-07-09 2019-11-15 深圳奥比中光科技有限公司 A kind of light emitting unit, light emitting devices and distance-measuring equipment
CN113075641A (en) * 2020-01-03 2021-07-06 华为技术有限公司 TOF depth sensing module and image generation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115144842A (en) * 2022-09-02 2022-10-04 深圳阜时科技有限公司 Transmitting module, photoelectric detection device, electronic equipment and three-dimensional information detection method
CN115144842B (en) * 2022-09-02 2023-03-14 深圳阜时科技有限公司 Transmitting module, photoelectric detection device, electronic equipment and three-dimensional information detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination