WO2021016781A1 - Three-dimensional image sensor, related three-dimensional image sensing module, and handheld device - Google Patents

Three-dimensional image sensor, related three-dimensional image sensing module, and handheld device

Info

Publication number
WO2021016781A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
node
pixel
photosensitive
input terminal
Prior art date
Application number
PCT/CN2019/098103
Other languages
English (en)
French (fr)
Inventor
洪自立
梁佑安
杨孟达
Original Assignee
深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority to CN201980001339.4A priority Critical patent/CN110574364B/zh
Priority to PCT/CN2019/098103 priority patent/WO2021016781A1/zh
Priority to EP19919548.8A priority patent/EP3799424B1/en
Priority to US17/027,586 priority patent/US11828850B2/en
Publication of WO2021016781A1 publication Critical patent/WO2021016781A1/zh

Links

Images

Classifications

    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01B11/22: Measuring arrangements characterised by the use of optical techniques for measuring depth
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01S7/4814: Constructional features, e.g. arrangements of optical elements, of transmitters alone
    • G01S7/4816: Constructional features, e.g. arrangements of optical elements, of receivers alone
    • G01S7/4863: Detector arrays, e.g. charge-transfer gates
    • G01S7/4865: Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • H04N13/254: Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H04N13/296: Synchronisation and control of stereoscopic image signal generators
    • H04N23/54: Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/60: Control of cameras or camera modules
    • H04N25/70: SSIS architectures; Circuits associated therewith
    • H04N25/705: Pixels for depth measurement, e.g. RGBZ
    • H04N25/745: Circuitry for generating timing or clock signals
    • H04N25/766: Addressed sensors, e.g. MOS or CMOS sensors, comprising control or output lines used for a plurality of functions, e.g. for pixel output, driving, reset or power
    • H04N2013/0081: Depth or disparity estimation from stereoscopic image signals

Definitions

  • This application relates to an image sensor, in particular to a three-dimensional image sensor, a related three-dimensional image sensing module and a handheld device.
  • CMOS image sensors have been mass-produced and applied.
  • Traditional image sensors can generate two-dimensional (2D) images and videos.
  • Image sensors and systems that can generate three-dimensional (3D) images have received widespread attention.
  • These three-dimensional image sensors can be applied to face recognition, augmented reality (AR)/virtual reality (VR), drones, and so on.
  • Existing three-dimensional image sensors mainly have three implementation methods: binocular stereo, structured light, and time of flight (ToF).
  • Time-of-flight uses specially designed pixels to measure distance by measuring the time photons take to travel to the target and return. In order to increase modeling accuracy and reduce cost, improving the accuracy of the time-of-flight sensor in a simple manner has become an important work item.
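The underlying principle can be sketched in a few lines of Python (illustrative only, not taken from the patent): with round-trip time t and speed of light c, the one-way distance is c·t/2.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the target from the photon round-trip time.

    The light travels to the target and back, so the one-way
    distance is half the round-trip path length.
    """
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
```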
  • One of the objectives of the present application is to disclose an image sensor, particularly a three-dimensional image sensor and related three-dimensional image sensing modules and handheld devices, to solve the above-mentioned problems.
  • An embodiment of the present application discloses a three-dimensional image sensor for emitting light to a target through a light emitting module to generate first depth information and second depth information for a first position and a second position on the target, respectively.
  • The three-dimensional image sensor includes: a photosensitive pixel array including a first photosensitive pixel, a second photosensitive pixel, and a pixel control signal transmission line having a first node and a second node respectively coupled to the first photosensitive pixel and the second photosensitive pixel, to transmit a pixel control signal to the first photosensitive pixel and the second photosensitive pixel, so that the first photosensitive pixel and the second photosensitive pixel respectively output corresponding data according to the pixel control signal.
  • An embodiment of the present application discloses a three-dimensional image sensing module, including: the aforementioned three-dimensional image sensor; and the light emitting module.
  • An embodiment of the present application discloses a handheld device including: a display panel; and the aforementioned three-dimensional image sensing module.
  • The three-dimensional image sensor disclosed in this application includes a time delay detection module. Since the time delay detection module can determine the time difference between the pixel control signals reaching different nodes, relatively accurate depth information for the target can be generated based on this time difference.
  • FIG. 1 is a schematic diagram of an embodiment in which the three-dimensional image sensing module of this application performs time-of-flight sensing on a target to generate a three-dimensional image.
  • FIG. 2 is a circuit diagram of the photosensitive pixel array and delay detection module of FIG. 1.
  • FIG. 3 is a schematic diagram illustrating the sensing operation of the photosensitive pixel array of FIG. 2.
  • FIG. 4 is a schematic diagram of another sensing operation relative to FIG. 3.
  • FIG. 5 is a schematic diagram of signal timings related to the first photosensitive pixel and the second photosensitive pixel in FIGS. 3 to 4.
  • FIG. 6 is a circuit diagram of a delay detection module according to another embodiment of the application.
  • FIG. 7 is a schematic diagram of signal timings related to the first photosensitive pixel, the second photosensitive pixel, the third photosensitive pixel, and the fourth photosensitive pixel in FIG. 6.
  • FIG. 8 is a circuit diagram of a delay detection module according to yet another embodiment of the application.
  • FIG. 9 is a schematic diagram of a time-to-digital converter according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of signal timing related to the time-to-digital converter of FIG. 9.
  • FIG. 11 is a schematic diagram of an embodiment of a three-dimensional image sensor module applied to a handheld device.
  • TX1_d1 Delayed pixel control signal
  • TX1_d2 Delayed pixel control signal
  • TX1_d3 Delayed pixel control signal
  • TX1_d4 Delayed pixel control signal
  • TX2_d1 Delayed pixel control signal
  • TX2_d2 Delayed pixel control signal
  • The description of a first feature being formed on or above a second feature may include embodiments in which the first and second features are in direct contact with each other, and may also include embodiments in which additional components are formed between the above-mentioned first and second features, so that the first and second features may not be in direct contact.
  • The present disclosure may reuse component symbols and/or labels in multiple embodiments. Such repeated use is for brevity and clarity, and does not in itself represent a relationship between the different embodiments and/or configurations discussed.
  • Spatially relative terms used herein, such as "beneath", "below", "under", "above", "over", and the like, may be used to facilitate describing the relationship of one component or feature to another component or feature as illustrated in the drawings.
  • These spatially relative terms are also intended to cover a variety of different orientations in which the device is in use or operation.
  • The device may be placed in other orientations (for example, rotated by 90 degrees or in other orientations), and these spatially relative descriptors should be interpreted accordingly.
  • The three-dimensional image sensor disclosed in this application can compensate for the above-mentioned time delay error to improve the accuracy of the time-of-flight sensor. The details are described below.
  • FIG. 1 is a schematic diagram of an embodiment in which a three-dimensional image sensing module 15 of this application performs time-of-flight sensing on a target 20 to generate a three-dimensional image.
  • The three-dimensional image sensing module 15 includes a three-dimensional image sensor 10 and a light emitting module 100. As shown in FIG. 1, the three-dimensional image sensor 10 is used to generate depth information for multiple positions on the target 20.
  • The three-dimensional image sensor 10 in FIG. 1 exemplarily generates first depth information, second depth information, third depth information, and fourth depth information for the first position 22, the second position 24, the third position 26, and the fourth position 28 on the target 20, respectively, but this application is not limited to these four positions.
  • The light emitting module 100 is used for emitting light (incident light) LT_S to the target 20.
  • The target 20 reflects the light to the photosensitive pixel array 200.
  • The light emitting module 100 includes a laser diode (LD), a light emitting diode (LED), or other light emitting units that can generate light.
  • The three-dimensional image sensor 10 includes a photosensitive pixel array 200, a pixel control signal generating circuit 300, a time delay detection module 400, and a processing unit 500.
  • The photosensitive pixel array 200 is used to receive the reflected light LT_P1, LT_P2, LT_P3, and LT_P4 from the first position 22, the second position 24, the third position 26, and the fourth position 28, respectively.
  • The photosensitive pixel array 200 includes a plurality of photosensitive pixels (not shown in FIG. 1). Each photosensitive pixel includes a photosensitive area and a pixel circuit, which are described in detail in FIG. 2.
  • The photosensitive area receives light from the reflected light LT_P1, LT_P2, LT_P3, and LT_P4 to form photocharges or a photocurrent. The photosensitive area then stores charges corresponding to the photoelectrons or photocurrent.
  • The pixel circuit converts the electric charge stored in the photosensitive area into an electrical signal and outputs the electrical signal to the processing unit 500, as described in detail in FIGS. 3 to 4.
  • In some embodiments, each photosensitive pixel may include a photodiode.
  • The pixel control signal generating circuit 300 is coupled to the photosensitive pixel array 200 and is used to generate pixel control signals TX1 and TX2 to activate the plurality of photosensitive pixels of the photosensitive pixel array 200. Specifically, the pixel control signal generating circuit 300 controls whether the electrical signals obtained by the plurality of photosensitive pixels of the photosensitive pixel array 200 are read out by changing the potentials of the pixel control signals TX1 and TX2. The time at which the potential of each of the pixel control signals TX1 and TX2 changes is controlled by the pixel control signal generating circuit 300; these times are therefore known, or can be said to be preset values. In some embodiments, the pixel control signal generating circuit 300 includes a clock signal generating circuit.
  • The time delay detection module 400 is used to determine the time difference between the activations of the photosensitive pixels, that is, the time difference between the pixel control signals TX1 and TX2 reaching each photosensitive pixel, or in other words, the delay difference accumulated by the pixel control signals TX1 and TX2 as they travel along the transmission lines to each photosensitive pixel. Since this time difference may cause an error when estimating the flight time, the time delay detection module 400 outputs the time difference to the processing unit 500 so that the processing unit 500 can compensate the flight time obtained by each photosensitive pixel to eliminate the error. Detailed descriptions are given in FIGS. 3 to 4. In some embodiments, the delay detection module 400 includes a time-to-digital converter.
  • The processing unit 500 is configured to generate the first depth information, the second depth information, the third depth information, and the fourth depth information of the target 20 based on the electrical signals and the time differences. For example, the processing unit 500 may use the four electrical signals obtained by the four photosensitive pixels of the photosensitive pixel array 200 to calculate the first depth information and the uncompensated second depth information, third depth information, and fourth depth information, and then use the time difference corresponding to the first depth information and the second depth information, the time difference corresponding to the second depth information and the third depth information, and the time difference corresponding to the third depth information and the fourth depth information to compensate the aforementioned uncompensated second depth information, third depth information, and fourth depth information, so as to obtain the first depth information, second depth information, third depth information, and fourth depth information of the target 20.
  • The above compensation can eliminate the relative errors between the depth information caused by the time differences between the pixel control signals TX1 and TX2 reaching each photosensitive pixel.
  • The application does not limit the operation of the processing unit 500.
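The compensation step can be sketched as follows (an illustrative model with assumed function names, not the patent's implementation; the sign convention depends on the gating scheme and is an assumption here):

```python
def compensate_tofs(raw_tofs, node_skews):
    """Remove control-signal arrival skew from raw flight times.

    raw_tofs[i]   : uncompensated flight time measured at pixel i
    raw node_skews[i] : how much later the pixel control signal reaches
                    pixel i's node than the first node, e.g.
                    [0.0, t2 - t1, t3 - t1, t4 - t1]

    The skew is modeled as adding directly to the measured flight
    time, so it is subtracted back out; a real sensor's sign depends
    on how the demodulation windows shift with the delayed signal.
    """
    return [tof - skew for tof, skew in zip(raw_tofs, node_skews)]
```

With this model, two pixels seeing the same true depth but different node skews yield the same compensated flight time.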
  • The processing unit 500 may include a control unit (CU), an arithmetic logic unit (ALU), and a storage unit.
  • The storage unit may store program code, and the program code is used to instruct the control unit and the arithmetic logic unit to execute the process.
  • The processing unit 500 may be implemented by using an application specific integrated circuit (ASIC), a digital signal processor (DSP), a general purpose processor, or an application processor.
  • The processing unit 500 can generate relatively accurate depth information based on the time differences.
  • If a 3D image sensor does not include a module similar to the time delay detection module 400, the depth information generated by that 3D image sensor is relatively inaccurate due to the lack of information about the actual time difference between the activations of the photosensitive pixels.
  • FIG. 2 is a circuit diagram of the photosensitive pixel array 200 and the delay detection module 400 of FIG. 1. In FIG. 2, for the sake of simplicity, only a single row of photosensitive pixels and two photosensitive pixels in that row, namely the first photosensitive pixel PX1 and the second photosensitive pixel PX2, are shown.
  • The photosensitive pixel array 200 further includes pixel control signal transmission lines 210 and 212 for transmitting the pixel control signals TX1 and TX2, respectively.
  • The time delay detection module 400 includes a time delay detection circuit 402.
  • The time delay detection circuit 402 has a first input terminal in1, a second input terminal in2, and an output terminal out1.
  • The pixel control signal transmission line 210 has a first node n1 and a second node n2. Compared with the second node n2, the first node n1 is closer to the signal source that provides the pixel control signal TX1; that is, the pixel control signal TX1 travels from top to bottom in FIG. 2, passing through the first node n1 before reaching the second node n2.
  • The first node n1 and the second node n2 are respectively coupled to the first photosensitive pixel PX1 and the second photosensitive pixel PX2, whereby the pixel control signal TX1 is transmitted to the first photosensitive pixel PX1 and the second photosensitive pixel PX2 to control them.
  • The pixel control signal transmission line 210 may include a metal transmission line.
  • The time when the pixel control signal TX1 reaches the first node n1 is different from the time when the pixel control signal TX1 reaches the second node n2.
  • The pixel control signal reaches the first node n1 at a first time t1 (as shown in FIG. 5) and reaches the second node n2 at a second time t2, where the second time t2 is later than the first time t1. Accordingly, the time when the first photosensitive pixel PX1 is activated is earlier than that of the second photosensitive pixel PX2, as described in detail in FIG. 5.
  • The time for the pixel control signal TX1 to reach the first photosensitive pixel PX1 from the first node n1 is the same as the time for the pixel control signal TX1 to reach the second photosensitive pixel PX2 from the second node n2.
  • That is, the transmission delay between each node and the corresponding photosensitive pixel is the same, so when the three-dimensional image sensor 10 generates depth information for multiple positions on the target 20, relative errors between the multiple positions will not be caused.
  • The above operating environment can be achieved through a good circuit layout.
  • For example, the length of the transmission line between the first node n1 and the first photosensitive pixel PX1 and the length of the transmission line between the second node n2 and the second photosensitive pixel PX2 may be planned to be the same as each other.
  • The time for the pixel control signal TX1 to reach the first input terminal in1 of the delay detection circuit 402 from the first node n1 is the same as the time for the pixel control signal TX1 to reach the second input terminal in2 of the delay detection circuit 402 from the second node n2.
  • That is, the transmission delay between each node and the delay detection circuit 402 is the same, so when the three-dimensional image sensor 10 generates depth information for multiple positions on the target 20, relative errors between the multiple positions will not be caused.
  • The above operating environment can be achieved through a good circuit layout.
  • For example, the length of the transmission line between the first node n1 and the first input terminal of the delay detection circuit 402 and the length of the transmission line between the second node n2 and the second input terminal of the delay detection circuit 402 may be planned to be the same as each other.
  • The pixel control signal transmission line 212 has a fifth node n5 and a sixth node n6.
  • The pixel control signal TX2 is transmitted to the first photosensitive pixel PX1 and the second photosensitive pixel PX2 to control them.
  • The relevant limitations on the pixel control signal TX2 are the same as those on the above-mentioned pixel control signal TX1. The difference is that, in order to estimate the time at which the light reflected from the target 20 arrives at each photosensitive pixel, the time (that is, the phase) at which the potential of the pixel control signal TX2 itself changes differs from that of the pixel control signal TX1.
  • The pixel control signal transmission line 212 may include a metal transmission line.
  • The first photosensitive pixel PX1 includes a photosensitive area 202 and a pixel circuit 204.
  • The photosensitive area 202 includes a photosensor PD.
  • The photosensor PD is used to convert incident light into electric charges and store them.
  • The pixel circuit 204 includes transistors M1 and M2.
  • The transistor M1 acts as a switch for selectively outputting the charge stored in the photosensor PD to the processing unit 500 via the data line BL1 according to the pixel control signal TX1.
  • The transistor M2 similarly acts as a switch for selectively outputting the charge stored in the photosensor PD to the processing unit 500 via the data line BL1_S according to the pixel control signal TX2.
  • The second photosensitive pixel PX2 also includes a photosensitive area 202 and a pixel circuit 204.
  • The photosensitive area 202 and the pixel circuit 204 of the second photosensitive pixel PX2 are not marked with component symbols in the drawing.
  • The second photosensitive pixel PX2 outputs the stored charges to the processing unit 500 via the data lines BL2 and BL2_S.
  • The operation of the second photosensitive pixel PX2 is the same as that of the first photosensitive pixel PX1 and will not be repeated here.
  • The first input terminal in1 and the second input terminal in2 of the time delay detection circuit 402 are respectively coupled to the first node n1 and the second node n2 to determine the time difference (t2-t1) between the first time t1 and the second time t2, as described in detail in FIGS. 3 to 5.
  • The output terminal out1 of the time delay detection circuit 402 is coupled to the processing unit 500 to provide the time difference (t2-t1) to the processing unit 500.
  • The processing unit 500 generates the first depth information and the second depth information based on the photosensitive value of the first photosensitive pixel PX1, the photosensitive value of the second photosensitive pixel PX2, and the time difference (t2-t1).
  • The time difference (t2-t1) is then used to compensate the aforementioned uncompensated second depth information.
  • The time delay detection circuit 402 includes a time-to-digital converter for converting the time difference (t2-t1) between the first time t1 and the second time t2 into a digital signal.
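At a high level, a time-to-digital converter quantizes the interval (t2 - t1) into integer steps of its resolution. A behavioral sketch follows (illustrative only; the 50 ps resolution is an arbitrary assumption, not a value from the patent):

```python
def tdc_code(t1_s: float, t2_s: float, lsb_s: float = 50e-12) -> int:
    """Behavioral model of a time-to-digital converter: the interval
    between the two input edges, quantized to lsb_s seconds per code."""
    return round((t2_s - t1_s) / lsb_s)

# A 1 ns skew at 50 ps resolution yields code 20.
```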
  • FIG. 3 is a schematic diagram illustrating the sensing operation of the photosensitive pixel array 200 of FIG. 2.
  • FIG. 4 is a schematic diagram of another sensing operation relative to FIG. 3.
  • FIG. 5 is a schematic diagram of signal timings related to the first photosensitive pixel PX1 and the second photosensitive pixel PX2 in FIGS. 3 to 4.
  • FIG. 5 includes waveforms 600, 602, 604, 606, 608, 610, and 612.
  • The waveform 600 represents the light LT_S emitted by the light emitting module 100 toward the target 20; the waveform 602 represents the reflected light LT_P1 reflected from the target 20 to the first photosensitive pixel PX1; the waveform 604 represents the delayed pixel control signal TX1_d1 presented at the first node n1 due to the delay effect on the pixel control signal TX1 (the time delay caused by the transmission line from the source of the pixel control signal TX1 to the first node n1); the waveform 606 represents the delayed pixel control signal TX2_d1 presented at the fifth node n5 due to the delay effect on the pixel control signal TX2 (the time delay caused by the transmission line from the source of the pixel control signal TX2 to the fifth node n5); the waveform 608 represents the reflected light LT_P2 reflected from the target 20 to the second photosensitive pixel PX2, where the reflected light LT_P2 is shown to have an arrival time similar to that of the reflected light LT_P1, although in practice the times at which the reflected light reaches the first photosensitive pixel PX1 and the second photosensitive pixel PX2 need not be the same; the waveform 610 represents the delayed pixel control signal TX1_d2 presented at the second node n2 due to the delay effect on the pixel control signal TX1 (the time delay caused by the transmission line from the source of the pixel control signal TX1 to the second node n2); and the waveform 612 represents the delayed pixel control signal TX2_d2 presented at the sixth node n6 due to the delay effect on the pixel control signal TX2 (the time delay caused by the transmission line from the source of the pixel control signal TX2 to the sixth node n6).
  • The light emitting module 100 emits light LT_S to the target 20 at time ts.
  • The pixel control signal generating circuit 300 changes the potentials of the pixel control signals TX1 and TX2 sequentially and in a staggered manner, and transmits the pixel control signals TX1 and TX2 through the pixel control signal transmission lines 210 and 212, respectively, so as to enable the left-half transistors and the right-half transistors of the plurality of pixels, respectively.
  • The pixel control signal TX1 reaches the first node n1 (referred to as the delayed pixel control signal TX1_d1) at the first time t1.
  • The transistor M1 on the left side of the first photosensitive pixel PX1 is turned on in response to the delayed pixel control signal TX1_d1, so that the first photosensitive pixel PX1 outputs the charge corresponding to the photosensitive value Q1_TX1_d1 of the first position 22.
  • the pixel control signal TX1 reaches the second node n2 (referred to as the delayed pixel control signal TX1_d2) at the second time t2.
  • the transistor M1 on the left side of the second photosensitive pixel PX is turned on in response to the delayed pixel control signal TX1_d2, so that the second photosensitive pixel PX2 outputs the second corresponding to the second position 24 The charge of the photosensitive value Q2_TX1_d2.
  • the transistor M2 on the right side of the first photosensitive pixel PX1 is turned on in response to the delayed pixel control signal TX2_d1, so that the first photosensitive pixel PX1 outputs a charge corresponding to the photosensitive value Q1_TX2_d1 of the first position 22 .
  • the transistor M2 on the right side of the second photosensitive pixel PX2 is turned on in response to the delayed pixel control signal TX2_d2, so that the second photosensitive pixel PX2 outputs a charge corresponding to the photosensitive value Q2_TX2_d2 of the second position 24.
  • the processing unit 500 can calculate the time when the incident light LT_P1 reaches the first photosensitive pixel PX1 based on the ratio involving the first photosensitive value Q1_TX1_d1 and the photosensitive value Q1_TX2_d1 obtained from the incident light LT_P1, and can then generate the first flight time based on the emission time of the light LT_S. Based on the first flight time, the processing unit 500 can determine the first depth information of the first position 22.
  • the processing unit 500 can calculate the time when the incident light LT_P2 reaches the second photosensitive pixel PX2, and then, based on the emission time of the light LT_S, can produce an uncompensated second flight time, i.e., a second flight time from which the time difference (t2-t1) has not been eliminated. Therefore, the processing unit 500 can correct the uncompensated second flight time according to the time difference (t2-t1) provided by the time delay detection circuit 402, and the processing unit 500 can generate the second depth information according to the corrected second flight time.
  • since the second flight time has been corrected based on the time difference (t2-t1), the error caused by the delay effect can be eliminated or reduced, and the second depth information obtained accordingly is relatively accurate. In contrast, if the second flight time were not corrected based on the time difference (t2-t1), the resulting second depth information would include the error caused by the time difference (t2-t1) and would be relatively inaccurate.
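To make the correction concrete, the following Python sketch shows one way the ratio-to-flight-time calculation and the delay compensation described above could work. The two-tap ratio formula, the function names, and all numeric values are illustrative assumptions, not the patent's specified implementation.

```python
C = 299_792_458.0  # speed of light in m/s

def arrival_time(q_tx1, q_tx2, t_pulse, t_emit):
    """Estimate when the reflected pulse arrived at a pixel.

    Assumes a simple two-tap scheme: the charge collected under TX1
    (q_tx1) and under TX2 (q_tx2) splits in proportion to where the
    returning pulse falls inside the integration window of width
    t_pulse that opens at t_emit.
    """
    ratio = q_tx2 / (q_tx1 + q_tx2)
    return t_emit + ratio * t_pulse

def depth(q_tx1, q_tx2, t_pulse, t_emit, node_delay=0.0):
    """Depth from a flight time corrected by the measured node delay.

    node_delay is the control-signal delay reported by the time delay
    detection circuit (e.g. t2 - t1); subtracting it removes the error
    introduced by the pixel being activated late.
    """
    tof = arrival_time(q_tx1, q_tx2, t_pulse, t_emit) - t_emit
    tof -= node_delay  # compensate the delayed activation
    return C * tof / 2.0  # halve: the light travels out and back

# First pixel: node n1 is the reference, no correction needed.
d1 = depth(q_tx1=60.0, q_tx2=40.0, t_pulse=10e-9, t_emit=0.0)
# Second pixel: same raw measurement, corrected by (t2 - t1) = 0.5 ns.
d2 = depth(q_tx1=60.0, q_tx2=40.0, t_pulse=10e-9, t_emit=0.0,
           node_delay=0.5e-9)
```

Without the `node_delay` term the second pixel would report a depth that is too large, which is exactly the relative error the time delay detection circuit 402 is there to remove.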
  • FIG. 6 is a circuit diagram of a delay detection module 700 according to another embodiment of the application.
  • the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4 of the photosensitive pixel array 200 are further illustrated.
  • FIG. 7 is a schematic diagram of signal timings related to the first photosensitive pixel PX1, the second photosensitive pixel PX2, the third photosensitive pixel PX3, and the fourth photosensitive pixel PX4 in FIG. 6.
  • the pixel control signal TX1 is taken as an example for description.
  • the operation of the time delay detection module 700 on the pixel control signal TX2 is similar to that of the time delay detection module 400 in FIGS. 2 to 5, and details are not described herein again.
  • the waveform 614 represents the delayed pixel control signal TX1_d3 presented at the third node n3 due to the delay effect on the pixel control signal TX1 (the time delay caused by the transmission line from the source of the pixel control signal TX1 to the third node n3); and the waveform 616 represents the delayed pixel control signal TX1_d4 presented at the fourth node n4 due to the delay effect on the pixel control signal TX1 (the time delay caused by the transmission line from the source of the pixel control signal TX1 to the fourth node n4).
  • the delay detection module 700 is similar to the delay detection module 400 in FIG. 3, except that the delay detection module 700 includes a first delay detection circuit 702, a second delay detection circuit 704, and a third delay detection circuit 706.
  • the first delay detection circuit 702 has a first input terminal in11, a second input terminal in12, and an output terminal out11;
  • the second delay detection circuit 704 has a first input terminal in21, a second input terminal in22, and an output terminal out22;
  • the third time delay detection circuit 706 has a first input terminal in31, a second input terminal in32, and an output terminal out33.
  • the third and fourth photosensitive pixels PX3 and PX4 of the photosensitive pixel array 200 and the third and fourth nodes n3 and n4 of the pixel control signal transmission line 210 are further illustrated.
  • the third node n3 is closer than the fourth node n4 to the signal source that provides the pixel control signal TX1; that is, the pixel control signal TX1 travels downward from the top in FIG. 6 and passes through the third node n3 before reaching the fourth node n4, where the first node n1 and the second node n2 are both closer to the signal source providing the pixel control signal TX1 than the third node n3.
  • the third node n3 and the fourth node n4 are respectively coupled to the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4, so that the pixel control signal TX1 is transmitted to the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4, and the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4 respectively output, according to the pixel control signal TX1, a third photosensitive value Q3 and a fourth photosensitive value Q4 corresponding to the third position 26 and the fourth position 28.
  • the third photosensitive pixel PX3 outputs the stored charges to the processing unit 500 via the data lines BL3 and BL3_S
  • the fourth photosensitive pixel PX4 outputs the stored charges to the processing unit 500 via the data lines BL4 and BL4_S.
  • the time when the pixel control signal TX1 reaches the third node n3 is different from the time when the pixel control signal TX1 reaches the fourth node n4.
  • the pixel control signal TX1 reaches the third node n3 at the third time t3 (as shown in FIG. 7) and reaches the fourth node n4 at the fourth time t4, where the fourth time t4 is later than the third time t3. Accordingly, the third photosensitive pixel PX3 is activated earlier than the fourth photosensitive pixel PX4.
  • the time when the pixel control signal TX1 reaches the third photosensitive pixel PX3 from the third node n3 is the same as the time when the pixel control signal TX1 reaches the fourth photosensitive pixel PX4 from the fourth node n4.
  • the time when the pixel control signal TX1 reaches the third photosensitive pixel PX3 from the third node n3 is the same as the time when the pixel control signal TX1 reaches the first photosensitive pixel PX1 from the first node n1, and is also the same as the time when the pixel control signal TX1 reaches the second photosensitive pixel PX2 from the second node n2.
  • the transmission delay between each node and the corresponding photosensitive pixel is the same. The above operating environment can be achieved through a good circuit layout.
  • the time when the pixel control signal TX1 reaches the first input terminal of the first delay detection circuit 702 from the first node n1 is the same as the time when the pixel control signal TX1 reaches the second input terminal of the first delay detection circuit 702 from the second node n2.
  • the time when the pixel control signal TX1 reaches the first input terminal of the second delay detection circuit 704 from the second node n2 is the same as the time when the pixel control signal TX1 reaches the second input terminal of the second delay detection circuit 704 from the third node n3.
  • the time when the pixel control signal TX1 reaches the first input terminal of the third delay detection circuit 706 from the third node n3 is the same as the time when the pixel control signal TX1 reaches the second input terminal of the third delay detection circuit 706 from the fourth node n4.
  • the transmission delay between each node and the corresponding delay detection circuit is the same. Therefore, when the three-dimensional image sensor 10 generates depth information for multiple positions on the target 20, no relative error between the multiple positions is introduced.
  • taking the first time delay detection circuit 702 as an example, as long as the time when the pixel control signal TX1 reaches the first input terminal of the first time delay detection circuit 702 from the first node n1 is the same as the time when the pixel control signal TX1 reaches the second input terminal of the first time delay detection circuit 702 from the third node n3, the first delay detection circuit 702 can also be used to detect the first node n1 and the third node n3. In short, this application is not limited to delay detection circuits that detect only two adjacent nodes.
  • the first input terminal in11 and the second input terminal in12 of the first time delay detection circuit 702 are respectively coupled to the first node n1 and the second node n2 to determine the time difference between the first time t1 and the second time t2 (t2- t1), the detailed operation method is similar to the embodiment of Fig. 3 to Fig. 5.
  • the output terminal out11 of the first time delay detection circuit 702 is coupled to the processing unit 500 to provide a time difference (t2-t1) to the processing unit 500.
  • the first input terminal in21 and the second input terminal in22 of the second time delay detection circuit 704 are respectively coupled to the second node n2 and the third node n3 to determine the time difference between the second time t2 and the third time t3 (t3- t2), the detailed operation method is similar to the embodiment of Fig. 3 to Fig. 5.
  • the output terminal out22 of the second time delay detection circuit 704 is coupled to the processing unit 500 to provide a time difference (t3-t2) to the processing unit 500.
  • the first input terminal in31 and the second input terminal in32 of the third time delay detection circuit 706 are respectively coupled to the third node n3 and the fourth node n4 to determine the time difference between the third time t3 and the fourth time t4 (t4- t3), the detailed operation method is similar to the embodiment of FIG. 3 to FIG. 5.
  • the output terminal out33 of the third time delay detection circuit 706 is coupled to the processing unit 500 to provide the time difference (t4-t3) to the processing unit 500.
  • the processing unit 500 generates first depth information based on the first light sensitivity value.
  • the processing unit can calculate the time for the incident light LT_P1 to reach the first photosensitive pixel PX1 based on the ratio of the two light sensitivity values (including the first light sensitivity value), and then generate the first flight time based on the emission time of the light LT_S. Based on the first flight time, the processing unit 500 can determine the first depth information of the first position 22.
  • the processing unit 500 generates second depth information based on the second photosensitive value and the time difference (t2-t1) between the first time t1 and the second time t2.
  • the processing unit 500 can calculate the time when the incident light LT_P2 reaches the second photosensitive pixel PX2 based on the ratio of the two light sensitivity values (including the second light sensitivity value) obtained by the incident light LT_P2, and then based on the emission time of the light LT_S It is possible to generate an uncompensated second flight time.
  • the time delay detection circuit 402 provides the time difference (t2-t1) to the processing unit 500.
  • the processing unit 500 corrects the uncompensated second flight time according to the time difference (t2-t1).
  • the processing unit 500 generates second depth information according to the corrected second flight time.
  • the processing unit 500 generates third depth information based on the third photosensitive value Q3, the time difference (t2-t1) between the first time t1 and the second time t2, and the time difference (t3-t2) between the second time t2 and the third time t3.
  • the processing unit 500 can calculate the time when the incident light LT_P3 reaches the third photosensitive pixel PX3 based on the ratio of the two light sensitivity values (including the third light sensitivity value Q3) obtained by the incident light LT_P3, and then based on the emission time of the light LT_S It is possible to generate an uncompensated third flight time.
  • the time delay detection circuit 402 provides the time difference (t2-t1) and (t3-t2) to the processing unit 500.
  • the processing unit 500 corrects the uncompensated third flight time according to the time difference (t2-t1) and (t3-t2). Then, the processing unit 500 generates third depth information according to the corrected third flight time.
  • the processing unit 500 generates fourth depth information based on the fourth photosensitive value Q4, the time difference (t2-t1) between the first time t1 and the second time t2, the time difference (t3-t2) between the second time t2 and the third time t3, and the time difference (t4-t3) between the third time t3 and the fourth time t4.
  • the processing unit 500 can calculate the time when the incident light LT_P4 reaches the fourth photosensitive pixel PX4 based on the ratio of two photosensitive values (including the fourth photosensitive value Q4) obtained from the incident light LT_P4, and can then generate an uncompensated fourth flight time based on the emission time of the light LT_S.
  • the time delay detection circuit 402 provides the time difference (t2-t1), (t3-t2), and (t4-t3) to the processing unit 500.
  • the processing unit 500 corrects the uncompensated fourth flight time according to the time difference (t2-t1), (t3-t2), and (t4-t3). Then, the processing unit 500 generates fourth depth information according to the corrected fourth flight time.
  • since the second flight time has been corrected based on the time difference (t2-t1), the third flight time has been corrected based on the time differences (t2-t1) and (t3-t2), and the fourth flight time has been corrected based on the time differences (t2-t1), (t3-t2), and (t4-t3), the error caused by the delay effect can be eliminated or reduced, and the second depth information, third depth information, and fourth depth information obtained accordingly are relatively accurate.
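The cumulative use of the pairwise time differences can be illustrated with a short Python sketch. The numbers and the function name are hypothetical; the running-sum logic simply mirrors the corrections listed above (pixel 2 by (t2-t1), pixel 3 by (t2-t1)+(t3-t2), pixel 4 by all three).

```python
import itertools

def corrected_flight_times(uncompensated, pairwise_diffs):
    """Correct each pixel's flight time by its accumulated node delay.

    uncompensated: raw flight times measured at nodes n1..n4, where
    n1 is the reference and needs no correction.
    pairwise_diffs: [(t2-t1), (t3-t2), (t4-t3)], as reported by the
    first, second, and third time delay detection circuits.
    """
    # Accumulated delay per node: 0 for n1, then a running sum, so the
    # fourth pixel is corrected by (t2-t1) + (t3-t2) + (t4-t3).
    acc = [0.0] + list(itertools.accumulate(pairwise_diffs))
    return [tof - delay for tof, delay in zip(uncompensated, acc)]

tofs = corrected_flight_times(
    uncompensated=[4.0e-9, 4.3e-9, 4.5e-9, 4.9e-9],
    pairwise_diffs=[0.3e-9, 0.2e-9, 0.4e-9],
)
# After correction all four pixels report the same 4.0 ns flight time.
```

With these hypothetical inputs the raw flight times disagree by up to 0.9 ns purely because of the control-signal routing, and the accumulated correction brings them back into agreement.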
  • FIG. 8 is a circuit diagram of a delay detection module 800 according to still another embodiment of the application.
  • the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4 of the photosensitive pixel array 200, and the third node n3 and the fourth node n4 of the pixel control signal transmission line 210, are further illustrated.
  • the delay detection module 800 is similar to the delay detection module 400 in FIG. 2, the difference is that the delay detection module 800 further includes a multiplexer 802.
  • the multiplexer 802 includes a first input terminal m_in1, a second input terminal m_in2, a third input terminal m_in3, and a fourth input terminal m_in4 respectively coupled to a first node n1, a second node n2, a third node n3, and a fourth node n4.
  • the multiplexer 802 further has a first output terminal m_out1 and a second output terminal m_out2 respectively coupled to the first input terminal in1 and the second input terminal in2 of the delay detection circuit 402.
  • the multiplexer 802 is used to selectively transfer the signals received by two of the first input terminal m_in1, the second input terminal m_in2, the third input terminal m_in3, and the fourth input terminal m_in4 from the multiplexer 802 The first output terminal m_out1 and the second output terminal m_out2 output.
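A behavioral sketch of this selection scheme can clarify why one multiplexer plus one delay detection circuit suffices for all node pairs; the dictionary-based model and the arrival times below are illustrative assumptions, not the analog circuit itself.

```python
def select_pair(node_arrivals, sel_a, sel_b):
    """Behavioral stand-in for the 4-to-2 multiplexer 802: route two of
    the four node signals to the delay detection circuit's inputs."""
    return node_arrivals[sel_a], node_arrivals[sel_b]

# Hypothetical arrival times of the pixel control signal at each node.
arrivals = {'n1': 0.0, 'n2': 0.3e-9, 'n3': 0.5e-9, 'n4': 0.9e-9}

# A single detection circuit measures every adjacent pair in turn.
diffs = {}
for a, b in [('n1', 'n2'), ('n2', 'n3'), ('n3', 'n4')]:
    t_a, t_b = select_pair(arrivals, a, b)
    diffs[f'{b}-{a}'] = t_b - t_a
```

The design choice this models is a hardware trade-off: compared with the three dedicated circuits of the delay detection module 700, the multiplexer lets one circuit be reused across pairs at the cost of measuring them sequentially rather than in parallel.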
  • the transit time of the pixel control signal TX1 from the first node n1 to the first input terminal m_in1, from the second node n2 to the second input terminal m_in2, from the third node n3 to the third input terminal m_in3, and from the fourth node n4 to the fourth input terminal m_in4 is the same. The above operating environment can be achieved through a good circuit layout. For example, the length of the transmission line between the first node n1 and the first input terminal m_in1, the length of the transmission line between the second node n2 and the second input terminal m_in2, the length of the transmission line between the third node n3 and the third input terminal m_in3, and the length of the transmission line between the fourth node n4 and the fourth input terminal m_in4 can be planned to be the same as each other.
  • the pixel control signal TX1 arrives at the first input terminal in1 and the second input terminal in2 of the delay detection circuit 402 at the same time from the first output terminal m_out1 and the second output terminal m_out2 of the multiplexer 802 respectively.
  • the above operating environment can be achieved through a good circuit layout.
  • for example, the length of the transmission line between the first output terminal m_out1 and the first input terminal in1 and the length of the transmission line between the second output terminal m_out2 and the second input terminal in2 can be planned to be the same as each other.
  • FIG. 9 is a schematic diagram of a time-to-digital converter 900 according to an embodiment of the application.
  • the delay detection circuit 402 of the embodiment of FIG. 2, and the first delay detection circuit 702, the second delay detection circuit 704, and the third delay detection circuit 706 of the embodiment of FIG. 6, can all be implemented by the time-to-digital converter 900. Referring to FIG. 9, the time-to-digital converter 900 includes a first input terminal T_in1, a second input terminal T_in2, and an output terminal T_out.
  • the first input terminal T_in1 and the second input terminal T_in2 of the time-to-digital converter 900 receive input signals IN<0> and IN<1>, respectively, and the output terminal T_out of the time-to-digital converter 900 outputs an output signal OUT.
  • the first input terminal T_in1 and the second input terminal T_in2 of the time-to-digital converter 900 are respectively coupled to the first input terminal in1 and the second input terminal in2 of the time delay detection circuit 402, and the output terminal T_out is coupled to Processing unit 500.
  • the time-to-digital converter 900 includes a first delay chain 902, a second delay chain 904, two multiplexers 910 and 912, a plurality of flip-flops 914, and a multiplexer 916.
  • This embodiment includes four flip-flops 914, but the application is not limited thereto.
  • the multiplexer 910 is coupled to the first input terminal T_in1 and the second input terminal T_in2 of the time-to-digital converter 900 to receive the input signals IN<0> and IN<1>, and outputs one of the input signals IN<0> and IN<1> according to the selection signal SEL_IN<0>.
  • the multiplexer 912 is coupled to the first input terminal T_in1 and the second input terminal T_in2 of the time-to-digital converter 900 to receive the input signals IN<0> and IN<1>, and outputs one of the input signals IN<0> and IN<1> according to the selection signal SEL_IN<1>, where the selection signal SEL_IN<1> is the inverse of the selection signal SEL_IN<0>.
  • the input terminal D of each flip-flop 914 is coupled to the first delay chain 902, and the clock terminal of each flip-flop 914 is coupled to the second delay chain 904.
  • the output terminal Q of the zeroth-stage flip-flop 914 provides the output signal Q<0> to the multiplexer 916;
  • the output terminal Q of the first-stage flip-flop 914 provides the output signal Q<1> to the multiplexer 916; and,
  • the output terminal Q of the fourth-stage flip-flop 914 provides the output signal Q<4> to the multiplexer 916.
  • the flip-flop 914 is a D-type flip-flop.
  • the multiplexer 916 is used for selecting and outputting one of the output signals Q<0> to Q<4> according to the selection signal SEL_OUT<4:0>. In some embodiments, the multiplexer 916 is optional. In this embodiment, the output terminal Q of each flip-flop 914 is coupled to the processing unit 500.
  • the first delay chain 902 includes a plurality of first buffers 906 and the second delay chain 904 includes a plurality of second buffers 908.
  • the delay time D0 of the first buffer 906 and the delay time D1 of the second buffer 908 are different. In this embodiment, the delay time D1 is greater than the delay time D0.
  • FIG. 10 is a schematic diagram of signal timing related to the time-to-digital converter 900 of FIG. 9. Referring to FIG. 9 and FIG. 10, the multiplexer 910 outputs the signal CK1_INT to the first delay chain 902, and the multiplexer 912 outputs the signal CK2_INT to the second delay chain 904.
  • the input terminal D of the zeroth-stage flip-flop 914 receives the signal CK1_INT, and its clock terminal receives the signal CK2_INT. Based on the operating principle of the flip-flop, since the signal CK1_INT lags behind the signal CK2_INT (there is a phase difference TD1 between the two), the output signal Q<0> output by the zeroth-stage flip-flop 914 is logic 0.
  • the signal CK1_INT on the first delay chain 902 is delayed by the first buffer 906 to the signal CK1_INT'
  • the signal CK2_INT on the second delay chain 904 is delayed by the second buffer 908 to the signal CK2_INT'.
  • the input terminal D of the first-stage flip-flop 914 receives the signal CK1_INT' and its clock terminal receives the signal CK2_INT'. Based on the operating principle of the flip-flop, since the signal CK1_INT' lags behind the signal CK2_INT' (with a phase difference TD2 between the two), the output signal Q<1> output by the first-stage flip-flop 914 is logic 0.
  • the phase difference TD2 is already smaller than the phase difference TD1.
  • the operation continues in this way until the further-delayed signal CK1_INT'' no longer lags behind the further-delayed signal CK2_INT'' when the two are input to the input terminal D and the clock terminal of the fourth-stage flip-flop 914.
  • the output signal Q<4> output by the fourth-stage flip-flop 914 is logic 1.
  • based on the output signal Q<4> being logic 1, the processing unit 500 can determine that the phase difference between the input signals IN<0> and IN<1> is between three times and four times the difference between the delay time D0 and the delay time D1.
  • the time difference (t2-t1) ranges from 3*(D1-D0) to 4*(D1-D0).
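The bracketing behavior of the converter can be mimicked with a small behavioral model in Python; the stage count, the delay values, and the simple "shrinking lag" abstraction are assumptions made for illustration, not a circuit-level description of the converter 900.

```python
def vernier_tdc(phase_diff, d0, d1, stages):
    """Behavioral model of the vernier time-to-digital converter 900.

    The lagging signal travels the fast chain (delay d0 per stage) and
    the leading signal the slow chain (delay d1 > d0), so the lag
    shrinks by (d1 - d0) at every stage. A stage's flip-flop reads
    logic 1 once the lag has been fully closed, so the first stage
    that outputs 1 brackets the phase difference.
    """
    outputs = []
    lag = phase_diff
    for _ in range(stages):
        outputs.append(1 if lag <= 0 else 0)
        lag -= (d1 - d0)  # each stage closes the gap by d1 - d0
    return outputs

# A phase difference of 3.4 * (D1 - D0), with D1 - D0 = 50 ps:
q = vernier_tdc(phase_diff=3.4 * 50e-12, d0=100e-12, d1=150e-12, stages=5)
# q == [0, 0, 0, 0, 1]: the first logic 1 appears at the fourth stage,
# telling the processing unit the difference lies between three and
# four times (D1 - D0).
```

The thermometer-coded output is the point of the scheme: the resolution is set by the difference (D1 - D0) between the two buffer delays rather than by either delay alone.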
  • FIG. 11 is a schematic diagram of an embodiment in which the three-dimensional image sensor module 15 is applied to the handheld device 30.
  • the handheld device 30 includes a display screen assembly 34 and a three-dimensional image sensor module 15.
  • the handheld device 30 can be used to perform time-of-flight sensing and/or three-dimensional image sensing for face recognition.
  • the handheld device 30 can be any handheld electronic device such as a smart phone, a personal digital assistant, a handheld computer system, or a tablet computer.


Abstract

A three-dimensional image sensor (10), a related three-dimensional image sensing module (15), and a handheld device (30). The three-dimensional image sensor (10) includes: a photosensitive pixel array (200), including: a first photosensitive pixel (PX1); a second photosensitive pixel (PX2); and a pixel control signal transmission line (210), so that the first photosensitive pixel (PX1) and the second photosensitive pixel (PX2) respectively output a first photosensitive value (Q1_TX1_d1) and a second photosensitive value (Q2_TX1_d2) according to pixel control signals (TX1, TX2); wherein the pixel control signal (TX1, TX2) reaches a first node (n1) at a first time (t1) and reaches a second node (n2) at a second time (t2), the second time (t2) being later than the first time (t1); a time delay detection module (400), including: a first time delay detection circuit (702) that determines the time difference (t2-t1) between the first time (t1) and the second time (t2); and a processing unit (500) that generates first depth information and second depth information based on the first photosensitive value (Q1_TX1_d1), the second photosensitive value (Q2_TX1_d2), and the time difference (t2-t1).

Description

Three-dimensional image sensor and related three-dimensional image sensing module and handheld device

Technical Field

This application relates to an image sensor, and in particular to a three-dimensional image sensor and a related three-dimensional image sensing module and handheld device.

Background Art

CMOS image sensors have already been mass-produced and widely applied. Conventional image sensors can generate two-dimensional (2D) images and video. Recently, image sensors and systems capable of generating three-dimensional (3D) images have attracted widespread attention; these three-dimensional image sensors can be applied to face recognition, augmented reality (AR)/virtual reality (VR), drones, and so on.

Existing three-dimensional image sensors are implemented mainly in three ways: stereo binocular, structured light, and time of flight (ToF).

Time of flight uses specially designed pixels to measure distance by measuring the time for photons to travel out and return. In order to increase modeling accuracy and reduce cost, how to improve the accuracy of a time-of-flight sensor in a simple manner has become an important work item.

Summary of the Invention

One of the purposes of this application is to disclose an image sensor, in particular a three-dimensional image sensor and a related three-dimensional image sensing module and handheld device, to solve the above problems.

An embodiment of the application discloses a three-dimensional image sensor for generating first depth information and second depth information for a first position and a second position on a target, respectively, by emitting light to the target through a light emitting module. The three-dimensional image sensor includes: a photosensitive pixel array, including: a first photosensitive pixel; a second photosensitive pixel; and a pixel control signal transmission line having a first node and a second node respectively coupled to the first photosensitive pixel and the second photosensitive pixel, so as to transmit a pixel control signal to the first photosensitive pixel and the second photosensitive pixel, so that the first photosensitive pixel and the second photosensitive pixel respectively output, according to the pixel control signal, a first photosensitive value and a second photosensitive value corresponding to the first position and the second position; wherein the time for the pixel control signal to travel from the first node to the first photosensitive pixel is the same as the time for the pixel control signal to travel from the second node to the second photosensitive pixel, the pixel control signal reaches the first node at a first time and reaches the second node at a second time, and the second time is later than the first time; a time delay detection module, including: a first time delay detection circuit having a first input terminal and a second input terminal respectively coupled to the first node and the second node, for determining the time difference between the first time and the second time; and a processing unit for generating the first depth information and the second depth information based on the first photosensitive value, the second photosensitive value, and the time difference.

An embodiment of the application discloses a three-dimensional image sensing module, including: the aforementioned three-dimensional image sensor; and the light emitting module.

An embodiment of the application discloses a handheld device, including: a display panel; and the aforementioned three-dimensional image sensing module.

The three-dimensional image sensor disclosed in this application includes a time delay detection module. Since the time delay detection module can determine the time difference between the arrival of the pixel control signal at different nodes, relatively accurate depth information for the target can be generated based on that time difference.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of an embodiment in which the three-dimensional image sensing module of the application performs time-of-flight sensing on a target to generate a three-dimensional image.

FIG. 2 is a circuit diagram of the photosensitive pixel array and the time delay detection module of FIG. 1.

FIG. 3 is a schematic diagram illustrating a sensing operation of the photosensitive pixel array of FIG. 2.

FIG. 4 is a schematic diagram of another sensing operation relative to FIG. 3.

FIG. 5 is a schematic diagram of signal timings related to the first photosensitive pixel and the second photosensitive pixel of FIG. 3 to FIG. 4.

FIG. 6 is a circuit diagram of a time delay detection module according to another embodiment of the application.

FIG. 7 is a schematic diagram of signal timings related to the first photosensitive pixel, the second photosensitive pixel, the third photosensitive pixel, and the fourth photosensitive pixel of FIG. 6.

FIG. 8 is a circuit diagram of a time delay detection module according to yet another embodiment of the application.

FIG. 9 is a schematic diagram of a time-to-digital converter according to an embodiment of the application.

FIG. 10 is a schematic diagram of signal timings related to the time-to-digital converter of FIG. 9.

FIG. 11 is a schematic diagram of an embodiment in which the three-dimensional image sensing module is applied to a handheld device.
The reference numerals in the drawings are described as follows:

10                      three-dimensional image sensor
15                      three-dimensional image sensing module
20                      target
22                      first position
24                      second position
26                      third position
28                      fourth position
30                      handheld device
34                      display screen assembly
100                     light emitting module
200                     photosensitive pixel array
210                     pixel control signal transmission line
212                     pixel control signal transmission line
300                     pixel control signal generating circuit
400                     time delay detection module
402                     time delay detection circuit
500                     processing unit
600                     waveform
602                     waveform
604                     waveform
606                     waveform
608                     waveform
610                     waveform
612                     waveform
614                     waveform
616                     waveform
700                     time delay detection module
702                     first time delay detection circuit
704                     second time delay detection circuit
706                     third time delay detection circuit
800                     time delay detection module
802                     multiplexer
900                     time-to-digital converter
906                     first buffer
908                     second buffer
910                     multiplexer
912                     multiplexer
914                     flip-flop
916                     multiplexer
t1                      first time
t2                      second time
t3                      third time
t4                      fourth time
LT_S                    light
LT_P1                   reflected light
LT_P2                   reflected light
LT_P3                   reflected light
LT_P4                   reflected light
TX1                     pixel control signal
TX1_d1                  delayed pixel control signal
TX1_d2                  delayed pixel control signal
TX1_d3                  delayed pixel control signal
TX1_d4                  delayed pixel control signal
TX2                     pixel control signal
TX2_d1                  delayed pixel control signal
TX2_d2                  delayed pixel control signal
n1                      first node
n2                      second node
n3                      third node
n4                      fourth node
M1                      transistor
M2                      transistor
BL1                     data line
BL1_S                   data line
BL2                     data line
BL2_S                   data line
BL3                     data line
BL3_S                   data line
BL4                     data line
BL4_S                   data line
PX1                     first photosensitive pixel
PX2                     second photosensitive pixel
PX3                     third photosensitive pixel
PX4                     fourth photosensitive pixel
PD                      photosensor
Q1_TX1_d1               first photosensitive value
Q1_TX2_d1               photosensitive value
Q2_TX1_d2               second photosensitive value
Q2_TX2_d2               photosensitive value
Q3                      third photosensitive value
Q4                      fourth photosensitive value
IN<0>                   input signal
IN<1>                   input signal
SEL_IN<0>               selection signal
SEL_IN<1>               selection signal
CK1_INT                 signal
CK1_INT'                signal
CK1_INT''               signal
CK2_INT                 signal
CK2_INT'                signal
CK2_INT''               signal
Q                       output terminal
D                       input terminal
Q<0>                    output signal
Q<1>                    output signal
Q<2>                    output signal
Q<4>                    output signal
D1                      delay time
D0                      delay time
T_in1                   first input terminal
T_in2                   second input terminal
T_out                   output terminal
in2                     second input terminal
in1                     first input terminal
out1                    output terminal
m_out2                  second output terminal
m_out1                  first output terminal
m_in1                   first input terminal
m_in2                   second input terminal
m_in3                   third input terminal
m_in4                   fourth input terminal
in31                    first input terminal
in32                    second input terminal
out33                   output terminal
in21                    first input terminal
in22                    second input terminal
out22                   output terminal
in11                    first input terminal
in12                    second input terminal
out11                   output terminal
Detailed Description of the Embodiments

The following disclosure provides many different embodiments or examples for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, in the description below, forming a first feature on or over a second feature may include embodiments in which the first and second features are in direct contact, and may also include embodiments in which additional components are formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various embodiments. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.

Further, spatially relative terms, such as "beneath", "below", "lower", "above", "upper", and the like, may be used herein for ease of description to describe one element or feature's relationship to another element or feature as illustrated in the figures. These spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein should be interpreted accordingly.

Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains a standard deviation resulting from the respective testing method. Herein, "same" usually means that the actual value is within plus or minus 10%, 5%, 1%, or 0.5% of a particular value or range. Alternatively, the term "same" means that the actual value falls within the acceptable standard error of the mean, as considered by those of ordinary skill in the art to which this application belongs. It is understood that, other than in the experimental examples, or unless otherwise expressly stated, all ranges, amounts, values, and percentages used herein (for example, to describe amounts of material, durations, temperatures, operating conditions, ratios of amounts, and the like) are modified by "same". Accordingly, unless indicated to the contrary, the numerical parameters set forth in the present specification and the attached claims are approximations that may vary as required. At the very least, each numerical parameter should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Herein, a numerical range is expressed as from one endpoint to another endpoint or between two endpoints; unless otherwise stated, all numerical ranges recited herein include the endpoints.

When an image sensor is used to calculate time of flight, a plurality of photosensitive pixels are controlled by a control signal on a control line, but the routing distance from the control signal to each photosensitive pixel is not the same, which causes a time delay error. The three-dimensional image sensor disclosed in this application can compensate for this time delay error so as to improve the accuracy of the time-of-flight sensor; the details are described below.
FIG. 1 is a schematic diagram of an embodiment in which the three-dimensional image sensing module 15 of the application performs time-of-flight sensing on a target 20 to generate a three-dimensional image. The three-dimensional image sensing module 15 includes a three-dimensional image sensor 10 and a light emitting module 100. Referring to FIG. 1, the three-dimensional image sensor 10 is used to generate depth information for multiple positions on the target 20. For convenience of description, the three-dimensional image sensor 10 in FIG. 1 exemplarily generates first depth information, second depth information, third depth information, and fourth depth information for the first position 22, the second position 24, the third position 26, and the fourth position 28 on the target 20, respectively, but the application is not limited to these four positions.

The light emitting module 100 is used to emit light (incident light) LT_S to the target 20. The target 20 reflects the light to the photosensitive pixel array 200. In some embodiments, the light emitting module 100 includes a laser diode (LD), a light emitting diode (LED), or another light emitting unit capable of generating light.

The three-dimensional image sensor 10 includes a photosensitive pixel array 200, a pixel control signal generating circuit 300, a time delay detection module 400, and a processing unit 500. The photosensitive pixel array 200 is used to receive the reflected light LT_P1, LT_P2, LT_P3, and LT_P4 from the first position 22, the second position 24, the third position 26, and the fourth position 28, respectively. The photosensitive pixel array 200 includes a plurality of photosensitive pixels (not shown in FIG. 1). Each photosensitive pixel includes a photosensitive region and a pixel circuit, described in detail with respect to FIG. 2. The photosensitive region receives the reflected light LT_P1, LT_P2, LT_P3, and LT_P4 and forms photocharge or photocurrent. The photosensitive region then stores the charge corresponding to the photocharge or photocurrent. The pixel circuit converts the charge stored in the photosensitive region into an electrical signal and outputs the electrical signal to the processing unit 500, as described in detail with respect to FIG. 3 to FIG. 4. In some embodiments, each photosensitive pixel may include a photodiode.

The pixel control signal generating circuit 300 is coupled to the photosensitive pixel array 200 and is used to generate the pixel control signals TX1 and TX2 to activate the plurality of photosensitive pixels of the photosensitive pixel array 200. Specifically, the pixel control signal generating circuit 300 changes the potentials of the pixel control signals TX1 and TX2 to control whether the electrical signals obtained by the plurality of photosensitive pixels of the photosensitive pixel array 200 are read out. The times at which the potentials of the pixel control signals TX1 and TX2 change are controlled by the pixel control signal generating circuit 300, so those times are known, or can be regarded as preset values. In some embodiments, the pixel control signal generating circuit 300 includes a clock signal generating circuit.

The time delay detection module 400 is used to determine the time difference between the activation of the photosensitive pixels, i.e., the time difference with which the pixel control signals TX1 and TX2 arrive at each photosensitive pixel, that is, the difference in the delay the pixel control signals TX1 and TX2 incur traveling through the transmission lines to each photosensitive pixel. Since this time difference causes an error when estimating the time of flight, the time delay detection module 400 outputs the time difference to the processing unit 500, so that the processing unit 500 can compensate the time of flight obtained for each photosensitive pixel to eliminate the error, as described in detail with respect to FIG. 3 to FIG. 4. In some embodiments, the time delay detection module 400 includes a time-to-digital converter.

The processing unit 500 is used to generate the first depth information, the second depth information, the third depth information, and the fourth depth information of the target 20 based on the electrical signals and the time differences. For example, the processing unit 500 may use the four electrical signals obtained by four photosensitive pixels of the photosensitive pixel array 200 to calculate the first depth information and the uncompensated second, third, and fourth depth information, and then use the time difference corresponding to the first and second depth information, the time difference corresponding to the second and third depth information, and the time difference corresponding to the third and fourth depth information to compensate the uncompensated second, third, and fourth depth information, so as to obtain the first, second, third, and fourth depth information of the target 20. In other words, this compensation can eliminate the relative error between the pieces of depth information caused by the time differences with which the pixel control signals TX1 and TX2 arrive at each photosensitive pixel. This application does not limit the implementation of the processing unit 500. In some embodiments, the processing unit 500 may include a control unit (CU), an arithmetic logic unit (ALU), and a storage unit. The storage unit may store program code that instructs the control unit and the arithmetic logic unit to execute a flow. In some embodiments, the processing unit 500 may be implemented by an application specific integrated circuit (ASIC), or by a digital signal processor (DSP), a general purpose processor, or an application processor.

In this application, since the three-dimensional image sensor 10 includes the time delay detection module 400 capable of determining the time difference between the activation of the photosensitive pixels, the processing unit 500 can generate relatively accurate depth information based on that time difference. In contrast, if a three-dimensional image sensor does not include a module similar to the time delay detection module 400, the depth information it generates is relatively inaccurate because it lacks information about the actual time difference between the activation of the photosensitive pixels.
图2为图1的感光像素阵列200及时延检测模块400的电路图。参照图2,为了简洁仅绘示出单行感光像素,以及所述单行感光像素中的二个感光像素,即第一感光像素PX1及第二感光像素PX2。除了第一感光像素PX1及第二感光像素PX2以外,感光像素阵列200进一步包括像素控制信号传输线210及212分别用来传送像素控制信号TX1以及TX2。此外,时延检测模块400包括时延检测电路402。时延检测电路402具有第一输入端in1、第二输入端in2以及輸出端out1。
像素控制信号传输线210,具有第一节点n1和第二节点n2。第一节点n1相较于第二节点n2较靠近提供像素控制信号TX1的信号源,也就是像素控制信号TX1是从图2中的上方往下走,会先经过第一节点n1再到第二节点n2。第一节点n1和第二节点n2分别耦接至第一感光像素PX1以及第二感光像素PX2,据以传送像素控制信号TX1至第一感光像素PX1以及第二感光像素PX2以控制第一感光像素PX1以及第二感光像素PX2。在一些实施例中,像素控制信号传输线210可包括金属传输线。
由于像素控制信号传输线210的寄生电阻及/或寄生电容的缘故,像素控制信号TX1到达第一节点n1的时间不同于像素控制信号TX1到达第二节点n2的时间。举例来说,像素控制信号在第一时间t1(如图5所示)到达第一节点n1,以及在第二时间t2到达第二节点n2,其中第二时间t2晚于第一时间t1。第一感光像素PX1被激活的时间据此早于第二感光像素PX2,详细说明于图5。
需注意的是,在本申请中,像素控制信号TX1从第一节点n1到达第一感光像素PX1的时间,和像素控制信号TX1从第二节点n2到达第二感光像素PX2的时间相同。简言之,各节点与对应的感光像素之间的传输延迟都相同,因此不会在三维图像传感器10针对 目标物20上的多个位置产生深度信息时,造成多个位置之间的相对误差。上述的操作环境可透过良好的电路布局来实现。举例来说,第一节点n1与第一感光像素PX1之间的传输线的长度与第二节点n2与第二感光像素PX2之间的传输线的长度可被规划为彼此相同。
In addition, the time for TX1 to travel from n1 to the first input terminal in1 of the delay detection circuit 402 equals the time for TX1 to travel from n2 to its second input terminal in2. In short, the propagation delay between each node and the delay detection circuit 402 is identical, so no relative error is introduced among the positions on the target object 20 when the 3D image sensor 10 generates depth information for them. This operating condition can be achieved through careful circuit layout; for example, the transmission line between n1 and the first input terminal of the circuit 402 and the transmission line between n2 and its second input terminal can be laid out with equal lengths.
The pixel control signal transmission line 212 has a fifth node n5 and a sixth node n6, and similarly delivers the pixel control signal TX2 to PX1 and PX2 to control them. Specifically, TX2 is subject to the same constraints as TX1 described above; the difference is that, in order to estimate the arrival time of the light reflected from the target object 20 at each pixel, the times at which TX2 and TX1 change potential (i.e., their phases) differ. In some embodiments, the transmission line 212 may include a metal transmission line.
The first photosensitive pixel PX1 includes a photosensitive region 202 and a pixel circuit 204. The photosensitive region 202 includes a photosensor PD, which converts incident light into charge and stores it. The pixel circuit 204 includes transistors M1 and M2. Transistor M1 serves as a switch that, according to TX1, selectively outputs the charge stored by PD to the processing unit 500 through the data line BL1. Transistor M2 similarly serves as a switch that, according to TX2, selectively outputs the charge stored by PD to the processing unit 500 through the data line BL1_S.
The second photosensitive pixel PX2 likewise includes a photosensitive region 202 and a pixel circuit 204; for clarity of the drawing, these are not separately labeled for PX2. PX2 outputs its stored charge to the processing unit 500 through the data lines BL2 and BL2_S. PX2 operates in the same way as PX1, so the details are not repeated here.
The first input terminal in1 and the second input terminal in2 of the delay detection circuit 402 are coupled to n1 and n2, respectively, to determine the time difference (t2-t1) between the first time t1 and the second time t2, detailed in FIGS. 3 to 5. The output terminal out1 of the circuit 402 is coupled to the processing unit 500 to provide the time difference (t2-t1). The processing unit 500 generates the first and second depth information based on the photo-sensed value of PX1, the photo-sensed value of PX2, and the time difference (t2-t1). Specifically, after computing the first depth information and the uncompensated second depth information, the processing unit 500 uses the time difference (t2-t1) to compensate the latter. In some embodiments, the delay detection circuit 402 includes a time-to-digital converter that converts the time difference (t2-t1) into a digital signal.
FIG. 3 is a schematic diagram illustrating a sensing operation of the photosensitive pixel array 200 of FIG. 2. FIG. 4 is a schematic diagram of another sensing operation relative to FIG. 3. FIG. 5 is a schematic timing diagram of the signals involving PX1 and PX2 in FIGS. 3 and 4, and includes waveforms 600, 602, 604, 606, 608, 610, and 612.
Referring to FIG. 5, waveform 600 represents the light LT_S emitted by the light-emitting module 100 toward the target object 20; waveform 602 represents the reflected light LT_P1 traveling from the target object 20 to PX1; waveform 604 represents the delayed pixel control signal TX1_d1, i.e., TX1 as it appears at the first node n1 after the propagation delay of the transmission line from the source of TX1 to n1; waveform 606 represents the delayed pixel control signal TX2_d1, i.e., TX2 as it appears at the fifth node n5 after the propagation delay of the transmission line from the source of TX2 to n5; waveform 608 represents the reflected light LT_P2 traveling from the target object 20 to PX2, drawn with an arrival time close to that of LT_P1 only to make the adverse effect of the time difference clear, without requiring that LT_P1 arrive at PX1 at the same time that LT_P2 arrives at PX2; waveform 610 represents the delayed pixel control signal TX1_d2, i.e., TX1 as it appears at the second node n2 after the propagation delay of the transmission line from the source of TX1 to n2; and waveform 612 represents the delayed pixel control signal TX2_d2, i.e., TX2 as it appears at the sixth node n6 after the propagation delay of the transmission line from the source of TX2 to n6.
In an image sensing event, first, referring to FIG. 1 together with waveform 600 of FIGS. 3 to 5, the light-emitting module 100 emits light LT_S toward the target object 20 at time ts. Next, the pixel control signal generating circuit 300 sequentially, and with staggered timing, changes the potentials of TX1 and TX2 and transmits them through the transmission lines 210 and 212, respectively, to enable the left-half transistors and then the right-half transistors of the pixels.
<Regarding pixel control signal TX1 on transmission line 210>
<Regarding the first photosensitive pixel PX1>
As waveform 604 of FIG. 5 shows, TX1 arrives at the first node n1 at the first time t1 (where it is referred to as the delayed pixel control signal TX1_d1). Referring to FIG. 3 together with waveforms 602 and 604 of FIG. 5, the left transistor M1 of PX1 turns on in response to TX1_d1, so that PX1 outputs charge corresponding to the first photo-sensed value Q1_TX1_d1 of the first position 22.
<Regarding the second photosensitive pixel PX2>
As waveform 610 of FIG. 5 shows, TX1 arrives at the second node n2 at the second time t2 (where it is referred to as the delayed pixel control signal TX1_d2). Referring to FIG. 3 together with waveforms 608 and 610 of FIG. 5, the left transistor M1 of PX2 turns on in response to TX1_d2, so that PX2 outputs charge corresponding to the second photo-sensed value Q2_TX1_d2 of the second position 24.
<Regarding pixel control signal TX2 on transmission line 212>
<Regarding the first photosensitive pixel PX1>
As waveforms 602 and 606 of FIG. 5 show, the right transistor M2 of PX1 turns on in response to the delayed pixel control signal TX2_d1, so that PX1 outputs charge corresponding to the photo-sensed value Q1_TX2_d1 of the first position 22.
<Regarding the second photosensitive pixel PX2>
As waveforms 608 and 612 of FIG. 5 show, the right transistor M2 of PX2 turns on in response to the delayed pixel control signal TX2_d2, so that PX2 outputs charge corresponding to the photo-sensed value Q2_TX2_d2 of the second position 24.
From the ratio of the first photo-sensed value Q1_TX1_d1 to the photo-sensed value Q1_TX2_d1, both obtained from the incident light LT_P1, the processing unit 500 can infer the time at which LT_P1 arrives at PX1 and, together with the emission time of the light LT_S, produce the first time of flight. Based on the first time of flight, the processing unit 500 can determine the first depth information of the first position 22. Similarly, from the ratio of the second photo-sensed value Q2_TX1_d2 to the photo-sensed value Q2_TX2_d2, both obtained from the incident light LT_P2, the processing unit 500 can infer the time at which LT_P2 arrives at PX2 and, together with the emission time of LT_S, produce the uncompensated second time of flight, i.e., a second time of flight from which the time difference (t2-t1) has not been removed. The processing unit 500 therefore corrects the uncompensated second time of flight using the time difference (t2-t1) provided by the delay detection circuit 402, and generates the second depth information from the corrected second time of flight.
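The charge-ratio inference above can be illustrated with a simple two-tap pulsed time-of-flight model. This is a hedged sketch under assumed ideal conditions (rectangular light pulse, perfectly adjacent integration windows); the function and parameter names are invented for illustration and are not the patent's exact method.

```python
def tof_from_charge_ratio(q_tx1, q_tx2, pulse_width, t_emit=0.0, tx1_open=0.0):
    """Two-tap indirect ToF sketch: a reflected light pulse of width
    `pulse_width` straddles the TX1 and TX2 integration windows, so the
    fraction of total charge landing in the TX2 window grows linearly
    with the pulse's arrival delay.
    q_tx1, q_tx2: charges collected while TX1 / TX2 are asserted.
    Returns the estimated time of flight relative to the emission time."""
    ratio = q_tx2 / (q_tx1 + q_tx2)      # 0 -> pulse entirely in TX1 window
    arrival = tx1_open + ratio * pulse_width
    return arrival - t_emit
```

For example, if three quarters of the charge falls in the TX1 window of a 20 ns pulse, the inferred time of flight under this model is 5 ns.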
Because the second time of flight has been corrected with the time difference (t2-t1), the error introduced by the delay effect is eliminated or mitigated, and the resulting second depth information is relatively accurate. By contrast, if the second time of flight were not corrected with the time difference (t2-t1), the resulting second depth information would contain the error caused by that time difference and would be relatively inaccurate.
FIG. 6 is a circuit diagram of a delay detection module 700 according to another embodiment of the present application. The embodiment of FIG. 6 further shows the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4 of the array 200. FIG. 7 is a schematic timing diagram of the signals involving PX1, PX2, PX3, and PX4 of FIG. 6. The embodiment of FIGS. 6 and 7 is described using TX1 as an example; the operation of the delay detection module 700 on TX2 is similar to that of the delay detection module 400 of FIGS. 2 to 5 and is not repeated here.
Referring to FIG. 7, waveform 614 represents the delayed pixel control signal TX1_d3, i.e., TX1 as it appears at the third node n3 after the propagation delay of the transmission line from the source of TX1 to n3; and waveform 616 represents the delayed pixel control signal TX1_d4, i.e., TX1 as it appears at the fourth node n4 after the propagation delay of the transmission line from the source of TX1 to n4.
Referring back to FIG. 6, the delay detection module 700 is similar to the delay detection module 400 of FIG. 3, except that it includes a first delay detection circuit 702, a second delay detection circuit 704, and a third delay detection circuit 706. The first delay detection circuit 702 has a first input terminal in11, a second input terminal in12, and an output terminal out11; the second delay detection circuit 704 has a first input terminal in21, a second input terminal in22, and an output terminal out22; and the third delay detection circuit 706 has a first input terminal in31, a second input terminal in32, and an output terminal out33. In addition, the embodiment of FIG. 6 further shows the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4 of the array 200, and the third and fourth nodes n3 and n4 of the pixel control signal transmission line 210.
The third node n3 is closer than the fourth node n4 to the signal source that provides TX1; that is, TX1 travels from top to bottom in FIG. 6, passing first through n3 and then reaching n4, and both n1 and n2 are closer to the signal source than n3. The nodes n3 and n4 are coupled to PX3 and PX4, respectively, so that TX1 is delivered to PX3 and PX4 and causes them to output, according to TX1, a third photo-sensed value Q3 and a fourth photo-sensed value Q4 corresponding to the third position 26 and the fourth position 28, respectively. In detail, PX3 outputs its stored charge to the processing unit 500 through the data lines BL3 and BL3_S, and PX4 outputs its stored charge to the processing unit 500 through the data lines BL4 and BL4_S.
Because of the parasitic resistance and/or parasitic capacitance of the transmission line 210, TX1 arrives at n3 at a different time than it arrives at n4. For example, the pixel control signal arrives at n3 at a third time t3 (as shown in FIG. 7) and at n4 at a fourth time t4, where t4 is later than t3. PX3 is therefore activated earlier than PX4.
Note that, in the present application, the time for TX1 to travel from n3 to PX3 equals the time for TX1 to travel from n4 to PX4, and both equal the time for TX1 to travel from n1 to PX1 and the time for TX1 to travel from n2 to PX2. In short, the propagation delay between each node and its corresponding pixel is identical. This operating condition can be achieved through careful circuit layout.
In addition, the time for TX1 to travel from n1 to the first input terminal of the first delay detection circuit 702 equals the time for TX1 to travel from n2 to its second input terminal. Likewise, the time for TX1 to travel from n2 to the first input terminal of the second delay detection circuit 704 equals the time for TX1 to travel from n3 to its second input terminal, and the time for TX1 to travel from n3 to the first input terminal of the third delay detection circuit 706 equals the time for TX1 to travel from n4 to its second input terminal. In short, the propagation delay between each node and its corresponding delay detection circuit is identical, so no relative error is introduced among the positions on the target object 20 when the 3D image sensor 10 generates depth information for them.
In some embodiments, taking the first delay detection circuit 702 as an example, as long as the time for TX1 to travel from n1 to the first input terminal of the circuit 702 equals the time for TX1 to travel from n3 to its second input terminal, the circuit 702 can also be used to detect n1 and n3. In short, the present application does not limit a delay detection circuit to detecting only two adjacent nodes.
The first input terminal in11 and the second input terminal in12 of the first delay detection circuit 702 are coupled to n1 and n2, respectively, to determine the time difference (t2-t1) between t1 and t2; the detailed operation is similar to the embodiment of FIGS. 3 to 5. The output terminal out11 of the circuit 702 is coupled to the processing unit 500 to provide the time difference (t2-t1).
The first input terminal in21 and the second input terminal in22 of the second delay detection circuit 704 are coupled to n2 and n3, respectively, to determine the time difference (t3-t2) between t2 and t3; the detailed operation is similar to the embodiment of FIGS. 3 to 5. The output terminal out22 of the circuit 704 is coupled to the processing unit 500 to provide the time difference (t3-t2).
The first input terminal in31 and the second input terminal in32 of the third delay detection circuit 706 are coupled to n3 and n4, respectively, to determine the time difference (t4-t3) between t3 and t4; the detailed operation is similar to the embodiment of FIGS. 3 to 5. The output terminal out33 of the circuit 706 is coupled to the processing unit 500 to provide the time difference (t4-t3).
The processing unit 500 generates the first depth information based on the first photo-sensed value. In detail, from the ratio of two photo-sensed values (including the first photo-sensed value), the processing unit can infer the time at which the incident light LT_P1 arrives at PX1 and, together with the emission time of LT_S, produce the first time of flight. Based on the first time of flight, the processing unit 500 can determine the first depth information of the first position 22.
In addition, the processing unit 500 generates the second depth information based on the second photo-sensed value and the time difference (t2-t1) between t1 and t2. In detail, from the ratio of two photo-sensed values (including the second photo-sensed value) obtained from the incident light LT_P2, the processing unit 500 can infer the time at which LT_P2 arrives at PX2 and, together with the emission time of LT_S, produce the uncompensated second time of flight. The first delay detection circuit 702 then provides the time difference (t2-t1) to the processing unit 500, which corrects the uncompensated second time of flight accordingly and generates the second depth information from the corrected second time of flight.
Similarly, the processing unit 500 generates the third depth information based on the third photo-sensed value Q3, the time difference (t2-t1) between t1 and t2, and the time difference (t3-t2) between t2 and t3. In detail, from the ratio of two photo-sensed values (including Q3) obtained from the incident light LT_P3, the processing unit 500 can infer the time at which LT_P3 arrives at PX3 and, together with the emission time of LT_S, produce the uncompensated third time of flight. The delay detection circuits 702 and 704 then provide the time differences (t2-t1) and (t3-t2) to the processing unit 500, which corrects the uncompensated third time of flight accordingly and generates the third depth information from the corrected third time of flight.
Similarly, the processing unit 500 generates the fourth depth information based on the fourth photo-sensed value Q4, the time difference (t2-t1) between t1 and t2, the time difference (t3-t2) between t2 and t3, and the time difference (t4-t3) between t3 and t4. In detail, from the ratio of two photo-sensed values (including Q4) obtained from the incident light LT_P4, the processing unit 500 can infer the time at which LT_P4 arrives at PX4 and, together with the emission time of LT_S, produce the uncompensated fourth time of flight. The delay detection circuits 702, 704, and 706 then provide the time differences (t2-t1), (t3-t2), and (t4-t3) to the processing unit 500, which corrects the uncompensated fourth time of flight accordingly and generates the fourth depth information from the corrected fourth time of flight.
Because the second time of flight has been corrected with the time difference (t2-t1), the third with the time differences (t2-t1) and (t3-t2), and the fourth with the time differences (t2-t1), (t3-t2), and (t4-t3), the error introduced by the delay effect is eliminated or mitigated, and the resulting second, third, and fourth depth information is relatively accurate.
FIG. 8 is a circuit diagram of a delay detection module 800 according to yet another embodiment of the present application. Referring to FIG. 8, similarly to the embodiment of FIG. 6, the third photosensitive pixel PX3 and the fourth photosensitive pixel PX4 of the array 200 and the third and fourth nodes n3 and n4 of the transmission line 210 are further shown. The delay detection module 800 is similar to the delay detection module 400 of FIG. 2, except that it further includes a multiplexer 802. The multiplexer 802 has a first input terminal m_in1, a second input terminal m_in2, a third input terminal m_in3, and a fourth input terminal m_in4 coupled to n1, n2, n3, and n4, respectively, and further has a first output terminal m_out1 and a second output terminal m_out2 coupled to the first input terminal in1 and the second input terminal in2 of the delay detection circuit 402, respectively.
The multiplexer 802 selectively outputs, from its first output terminal m_out1 and second output terminal m_out2, the signals received at two of the input terminals m_in1, m_in2, m_in3, and m_in4.
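One way to picture the multiplexer's role is as time-sharing a single delay detection circuit across several node pairs. The sketch below is an illustrative model only; the adjacent-pair scanning order, the dictionary interface, and the names are assumptions, not details from the patent.

```python
def scan_node_delays(arrival_times, measure):
    """arrival_times: {node_name: arrival time of TX1 at that node},
    in the physical order of the nodes along the transmission line
    (dict insertion order is preserved in Python 3.7+).
    measure(t_a, t_b): stand-in for the single delay detection circuit,
    returning t_b - t_a. The multiplexer routes one pair of nodes at a
    time to that circuit; here we scan adjacent pairs (n1,n2), (n2,n3), ..."""
    nodes = list(arrival_times)
    diffs = {}
    for a, b in zip(nodes, nodes[1:]):
        diffs[(a, b)] = measure(arrival_times[a], arrival_times[b])
    return diffs
```

A single measurement circuit thus yields the full set of pairwise delays that the FIG. 6 embodiment obtains with three dedicated circuits, at the cost of sequential rather than parallel measurement.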
Note that the times for TX1 to travel from n1, n2, n3, and n4 to the multiplexer's input terminals m_in1, m_in2, m_in3, and m_in4, respectively, are identical. This operating condition can be achieved through careful circuit layout; for example, the transmission lines between n1 and m_in1, between n2 and m_in2, between n3 and m_in3, and between n4 and m_in4 can be laid out with equal lengths.
In addition, the times for TX1 to travel from the multiplexer's output terminals m_out1 and m_out2 to the input terminals in1 and in2 of the delay detection circuit 402, respectively, are identical. This operating condition can be achieved through careful circuit layout; for example, the transmission line between m_out1 and in1 and the transmission line between m_out2 and in2 can be laid out with equal lengths.
FIG. 9 is a schematic diagram of a time-to-digital converter 900 according to an embodiment of the present application. The delay detection circuit 402 of FIG. 2 and the first, second, and third delay detection circuits 702, 704, and 706 of FIG. 6 can all be implemented with the time-to-digital converter 900. Referring to FIG. 9, the converter 900 has a first input terminal T_in1, a second input terminal T_in2, and an output terminal T_out. The input terminals T_in1 and T_in2 receive the input signals IN<0> and IN<1>, respectively, and the output terminal T_out outputs the output signal OUT. For example, T_in1 and T_in2 are coupled to the first input terminal in1 and the second input terminal in2 of the delay detection circuit 402, respectively, and T_out is coupled to the processing unit 500.
The time-to-digital converter 900 includes a first delay chain 902, a second delay chain 904, two multiplexers 910 and 912, a plurality of flip-flops 914, and a multiplexer 916. This embodiment includes five flip-flops 914 (stages zero through four), but the present application is not limited thereto.
The multiplexer 910 is coupled to the first input terminal T_in1 and the second input terminal T_in2 of the converter 900 to receive the input signals IN<0> and IN<1>, and outputs one of them according to the selection signal SEL_IN<0>. Similarly, the multiplexer 912 is coupled to T_in1 and T_in2 to receive IN<0> and IN<1>, and outputs one of them according to the selection signal SEL_IN<1>, where SEL_IN<1> is the inverse of SEL_IN<0>.
The input terminal D of each flip-flop 914 is coupled to the first delay chain 902, and its clock terminal is coupled to the second delay chain 904. The output terminal Q of the zeroth-stage flip-flop 914 provides output signal Q<0> to the multiplexer 916; the first-stage flip-flop 914 provides output signal Q<1>; and, similarly, the fourth-stage flip-flop 914 provides output signal Q<4>. In some embodiments, the flip-flops 914 are D-type flip-flops.
The multiplexer 916 selects one of the output signals Q<0> to Q<4> to output according to the selection signal SEL_OUT<4:0>. In some embodiments, the multiplexer 916 is optional; in such embodiments, the output terminal Q of each flip-flop 914 is coupled directly to the processing unit 500.
The first delay chain 902 includes a plurality of first buffers 906, and the second delay chain 904 includes a plurality of second buffers 908. The delay time D0 of the first buffers 906 differs from the delay time D1 of the second buffers 908; in this embodiment, D1 is greater than D0.
FIG. 10 is a schematic timing diagram of the signals involving the time-to-digital converter 900 of FIG. 9. Referring to FIGS. 9 and 10, the multiplexer 910 outputs signal CK1_INT to the first delay chain 902, and the multiplexer 912 outputs signal CK2_INT to the second delay chain 904. The input terminal D of the zeroth-stage flip-flop 914 receives CK1_INT, and its clock terminal receives CK2_INT. By the operating principle of a flip-flop, because CK1_INT lags CK2_INT (with a phase difference TD1 between them), the zeroth-stage flip-flop 914 outputs Q<0> at logic 0.
Next, the signal CK1_INT on the first delay chain 902 is delayed by a first buffer 906 into signal CK1_INT', and the signal CK2_INT on the second delay chain 904 is delayed by a second buffer 908 into signal CK2_INT'. The input terminal D of the first-stage flip-flop 914 receives CK1_INT', and its clock terminal receives CK2_INT'. By the operating principle of a flip-flop, because CK1_INT' still lags CK2_INT' (with a phase difference TD2 between them), the first-stage flip-flop 914 outputs Q<1> at logic 0. However, because D1 is greater than D0, the phase difference TD2 is already smaller than TD1. The operation continues stage by stage until the delayed version of CK1_INT no longer lags the delayed version of CK2_INT; in this example those signals reach the input terminal D and the clock terminal of the fourth-stage flip-flop 914, whose output signal Q<4> is logic 1.
From Q<4> being logic 1, the processing unit 500 can determine that the phase difference between the input signals IN<0> and IN<1> lies between three times and four times the difference between the delay times D1 and D0. Taking the time difference (t2-t1) as an example, (t2-t1) falls in the range 3*(D1-D0) to 4*(D1-D0).
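The stage-by-stage interpretation above can be modeled in a few lines. The sketch below assumes idealized vernier behavior in which each stage closes the lag by exactly (D1-D0); it is an illustration of the principle, not the circuit itself, and the function name and interface are invented.

```python
def vernier_tdc(phase_diff, d0, d1, stages=5):
    """Vernier TDC sketch: the lagging signal on the fast chain (per-stage
    delay d0) gains (d1 - d0) on the slow chain (per-stage delay d1) at
    every stage. Returns the index k of the first stage at which the lag
    has closed (flip-flop output Q<k> = 1), or None if the lag never
    closes within `stages`. The measured phase difference then lies in
    the interval ((k-1)*(d1-d0), k*(d1-d0)]."""
    assert d1 > d0, "second-chain buffers must be slower"
    lag = phase_diff
    for k in range(stages):
        if lag <= 0:                  # CK1 no longer lags CK2 at stage k
            return k
        lag -= (d1 - d0)              # each stage closes the gap by d1 - d0
    return None
```

With d0 = 1.0, d1 = 1.2 (so D1-D0 = 0.2) and a phase difference of 0.7, i.e., between 3*(D1-D0) and 4*(D1-D0), the first logic-1 output is Q<4>, matching the example in the text.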
FIG. 11 is a schematic diagram of an embodiment in which the 3D image sensing module 15 is applied to a handheld device 30. Referring to FIG. 11, the handheld device 30 includes a display screen assembly 34 and the 3D image sensing module 15. The handheld device 30 can perform time-of-flight sensing and/or 3D image sensing for facial recognition, and may be any handheld electronic device such as a smartphone, a personal digital assistant, a handheld computer system, or a tablet computer.
The foregoing briefly sets out the features of certain embodiments of the present application so that persons of ordinary skill in the art may more fully understand the various aspects of the present disclosure. Persons skilled in the art will appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures to achieve the same purposes and/or attain the same advantages as the embodiments described herein. Persons skilled in the art should also understand that such equivalent embodiments remain within the spirit and scope of the present disclosure, and that various changes, substitutions, and alterations may be made without departing from that spirit and scope.

Claims (17)

  1. A three-dimensional image sensor, for receiving light reflected from a target object to generate first depth information and second depth information for a first position and a second position on the target object respectively, wherein the three-dimensional image sensor comprises:
    a photosensitive pixel array, comprising:
    a first photosensitive pixel;
    a second photosensitive pixel; and
    a pixel control signal transmission line, having a first node and a second node coupled to the first photosensitive pixel and the second photosensitive pixel respectively, for transmitting a pixel control signal to the first photosensitive pixel and the second photosensitive pixel, so that the first photosensitive pixel and the second photosensitive pixel output, according to the pixel control signal, a first photo-sensed value and a second photo-sensed value corresponding to the first position and the second position respectively;
    wherein a time for the pixel control signal to travel from the first node to the first photosensitive pixel is the same as a time for the pixel control signal to travel from the second node to the second photosensitive pixel, the pixel control signal arrives at the first node at a first time and at the second node at a second time, and the second time is later than the first time;
    a delay detection module, comprising:
    a first delay detection circuit, having a first input terminal and a second input terminal coupled to the first node and the second node respectively, for determining a time difference between the first time and the second time; and
    a processing unit, for generating the first depth information and the second depth information based on the first photo-sensed value, the second photo-sensed value, and the time difference.
  2. The three-dimensional image sensor of claim 1, wherein the target object reflects the light to the first photosensitive pixel and the second photosensitive pixel.
  3. The three-dimensional image sensor of claim 1, wherein a time for the pixel control signal to travel from the first node to the first input terminal of the first delay detection circuit is the same as a time for the pixel control signal to travel from the second node to the second input terminal of the first delay detection circuit.
  4. The three-dimensional image sensor of claim 3, wherein the photosensitive pixel array further comprises:
    a third photosensitive pixel and a fourth photosensitive pixel, for generating third depth information and fourth depth information for a third position and a fourth position on the target object respectively;
    wherein the pixel control signal transmission line further comprises a third node and a fourth node coupled to the third photosensitive pixel and the fourth photosensitive pixel respectively, for transmitting the pixel control signal to the third photosensitive pixel and the fourth photosensitive pixel, so that the third photosensitive pixel and the fourth photosensitive pixel output, according to the pixel control signal, a third photo-sensed value and a fourth photo-sensed value corresponding to the third position and the fourth position respectively;
    wherein a time for the pixel control signal to travel from the third node to the third photosensitive pixel is the same as a time for the pixel control signal to travel from the fourth node to the fourth photosensitive pixel, the pixel control signal arrives at the third node at a third time and at the fourth node at a fourth time, the fourth time is later than the third time, and the third time is later than the second time.
  5. The three-dimensional image sensor of claim 4, wherein the delay detection module further comprises:
    a second delay detection circuit, having a first input terminal and a second input terminal coupled to the second node and the third node respectively, for determining a time difference between the second time and the third time; and
    a third delay detection circuit, having a first input terminal and a second input terminal coupled to the third node and the fourth node respectively, for determining a time difference between the third time and the fourth time;
    wherein the processing unit further generates the third depth information and the fourth depth information based on the third photo-sensed value, the fourth photo-sensed value, the time difference between the first time and the second time, the time difference between the second time and the third time, and the time difference between the third time and the fourth time.
  6. The three-dimensional image sensor of claim 5, wherein a time for the pixel control signal to travel from the second node to the first input terminal of the second delay detection circuit is the same as a time for the pixel control signal to travel from the third node to the second input terminal of the second delay detection circuit, and a time for the pixel control signal to travel from the third node to the first input terminal of the third delay detection circuit is the same as a time for the pixel control signal to travel from the fourth node to the second input terminal of the third delay detection circuit.
  7. The three-dimensional image sensor of claim 4, wherein the delay detection module further comprises:
    a multiplexer, having a first input terminal, a second input terminal, a third input terminal, and a fourth input terminal coupled to the first node, the second node, the third node, and the fourth node respectively, the multiplexer further having a first output terminal and a second output terminal coupled to the first input terminal and the second input terminal of the first delay detection circuit respectively, wherein the multiplexer selectively outputs, from its first output terminal and second output terminal, the signals received at two of the first input terminal, the second input terminal, the third input terminal, and the fourth input terminal.
  8. The three-dimensional image sensor of claim 7, wherein times for the pixel control signal to travel from the first node, the second node, the third node, and the fourth node to the first input terminal, the second input terminal, the third input terminal, and the fourth input terminal of the multiplexer respectively are the same.
  9. The three-dimensional image sensor of claim 7, wherein times for the pixel control signal to travel from the first output terminal and the second output terminal of the multiplexer to the first input terminal and the second input terminal of the first delay detection circuit respectively are the same.
  10. The three-dimensional image sensor of claim 2, wherein the processing unit further generates a first time of flight and an uncompensated second time of flight based on a time point at which the light-emitting module emits the light, the first photo-sensed value, and the second photo-sensed value.
  11. The three-dimensional image sensor of claim 10, wherein the processing unit further corrects the uncompensated second time of flight according to the time difference, and generates the first depth information and the second depth information according to the first time of flight and the corrected second time of flight.
  12. The three-dimensional image sensor of claim 10, wherein the first delay detection circuit comprises a time-to-digital converter for converting the time difference between the first time and the second time into a digital signal.
  13. The three-dimensional image sensor of claim 12, wherein the time-to-digital converter comprises a first delay chain and a second delay chain, the first delay chain comprises a plurality of first buffers, the second delay chain comprises a plurality of second buffers, and the delay time of the first buffers differs from the delay time of the second buffers.
  14. The three-dimensional image sensor of claim 1, wherein each of the first photosensitive pixel and the second photosensitive pixel comprises:
    a photosensor, for converting incident light into charge and storing the charge; and
    a switch, for selectively outputting the charge stored by the photosensor according to the pixel control signal;
    wherein the first photosensitive pixel outputs charge corresponding to the first photo-sensed value, and the second photosensitive pixel outputs charge corresponding to the second photo-sensed value.
  15. A three-dimensional image sensing module, comprising:
    a light-emitting module, for emitting an optical signal to a target object; and
    the three-dimensional image sensor of any one of claims 1-14.
  16. The three-dimensional image sensing module of claim 15, wherein the light-emitting module comprises a laser diode or a light-emitting diode.
  17. A handheld device, comprising:
    a display panel; and
    the three-dimensional image sensing module of claim 15 or 16.
PCT/CN2019/098103 2019-07-29 2019-07-29 Three-dimensional image sensor, related three-dimensional image sensing module, and hand-held device WO2021016781A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980001339.4A CN110574364B (zh) 2019-07-29 2019-07-29 Three-dimensional image sensor, related three-dimensional image sensing module, and hand-held device
PCT/CN2019/098103 WO2021016781A1 (zh) 2019-07-29 2019-07-29 Three-dimensional image sensor, related three-dimensional image sensing module, and hand-held device
EP19919548.8A EP3799424B1 (en) 2019-07-29 2019-07-29 Three-dimensional image sensor, related three-dimensional image sensing module, and hand-held device
US17/027,586 US11828850B2 (en) 2019-07-29 2020-09-21 3D image sensor and related 3D image sensing module and hand-held device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/098103 WO2021016781A1 (zh) 2019-07-29 2019-07-29 Three-dimensional image sensor, related three-dimensional image sensing module, and hand-held device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/027,586 Continuation US11828850B2 (en) 2019-07-29 2020-09-21 3D image sensor and related 3D image sensing module and hand-held device

Publications (1)

Publication Number Publication Date
WO2021016781A1 true WO2021016781A1 (zh) 2021-02-04

Family

ID=68786102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098103 WO2021016781A1 (zh) 2019-07-29 2019-07-29 三维图像传感器以及相关三维图像传感模组及手持装置

Country Status (4)

Country Link
US (1) US11828850B2 (zh)
EP (1) EP3799424B1 (zh)
CN (1) CN110574364B (zh)
WO (1) WO2021016781A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111398979B (zh) * 2020-06-08 2020-10-16 深圳市汇顶科技股份有限公司 基于飞行时间的测距方法和相关测距系统
WO2022087776A1 (zh) * 2020-10-26 2022-05-05 深圳市汇顶科技股份有限公司 飞行时间传感器、测距系统及电子装置
WO2022087950A1 (zh) * 2020-10-29 2022-05-05 深圳市汇顶科技股份有限公司 飞行时间量测电路及其控制方法及电子装置
WO2023199804A1 (ja) * 2022-04-13 2023-10-19 ヌヴォトンテクノロジージャパン株式会社 測距装置および測距方法

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103731611A (zh) * 2012-10-12 2014-04-16 三星电子株式会社 深度传感器、图像捕获方法和图像处理系统
US20140104397A1 (en) * 2012-10-16 2014-04-17 Samsung Electronics Co., Ltd. Image sensor having pixel architecture for capturing depth image and color image
US20140184746A1 (en) * 2012-12-28 2014-07-03 Samsung Electronics Co., Ltd. Image processing method and apparauts
CN104067607A (zh) * 2012-01-13 2014-09-24 全视科技有限公司 共享飞行时间像素
US20150130904A1 (en) * 2013-11-12 2015-05-14 Samsung Electronics Co., Ltd. Depth Sensor and Method of Operating the Same
US20170064235A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Epipolar plane single-pulse indirect tof imaging for automotives

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US6522395B1 (en) * 1999-04-30 2003-02-18 Canesta, Inc. Noise reduction techniques suitable for three-dimensional information acquirable with CMOS-compatible image sensor ICS
ITBO20010759A1 (it) * 2001-12-14 2003-06-16 Datasensor Spa Dispositivo misuratore di distanza
EP2060005B1 (en) 2006-08-31 2017-11-08 Trixell Single slope analog-to-digital converter
KR101980722B1 (ko) * 2017-08-18 2019-08-28 선전 구딕스 테크놀로지 컴퍼니, 리미티드 이미지 센서 회로 및 깊이 이미지 센서 시스템
EP3508874A1 (de) * 2018-01-03 2019-07-10 Espros Photonics AG Kalibriervorrichtung für eine tof-kameravorrichtung
CN109005326B (zh) * 2018-08-30 2021-03-26 Oppo广东移动通信有限公司 成像装置及电子设备

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN104067607A (zh) * 2012-01-13 2014-09-24 全视科技有限公司 共享飞行时间像素
CN103731611A (zh) * 2012-10-12 2014-04-16 三星电子株式会社 深度传感器、图像捕获方法和图像处理系统
US20140104397A1 (en) * 2012-10-16 2014-04-17 Samsung Electronics Co., Ltd. Image sensor having pixel architecture for capturing depth image and color image
US20140184746A1 (en) * 2012-12-28 2014-07-03 Samsung Electronics Co., Ltd. Image processing method and apparauts
US20150130904A1 (en) * 2013-11-12 2015-05-14 Samsung Electronics Co., Ltd. Depth Sensor and Method of Operating the Same
US20170064235A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Epipolar plane single-pulse indirect tof imaging for automotives

Non-Patent Citations (1)

Title
See also references of EP3799424A4 *

Also Published As

Publication number Publication date
CN110574364A (zh) 2019-12-13
EP3799424A4 (en) 2021-08-04
EP3799424B1 (en) 2023-05-10
US11828850B2 (en) 2023-11-28
CN110574364B (zh) 2021-10-29
EP3799424A1 (en) 2021-03-31
US20210080588A1 (en) 2021-03-18

Similar Documents

Publication Publication Date Title
WO2021016781A1 (zh) 三维图像传感器以及相关三维图像传感模组及手持装置
JP5698527B2 (ja) 深さセンサーの深さ推定方法及びその記録媒体
US11604265B2 (en) Single SPAD array ranging system
TWI524762B (zh) 共用飛行時間像素
KR101710514B1 (ko) 깊이 센서 및 이를 이용한 거리 추정 방법
KR20160032014A (ko) 타임 오브 플라이트 시스템 구동 방법
KR20130111130A (ko) 거리 측정 장치와 이의 동작 방법
JP7100049B2 (ja) 光測距システム用シストリックプロセッサシステム
JP2008167178A (ja) 画像データ生成装置及び受光デバイス
TW202127635A (zh) 飛行時間感測系統及其中使用的圖像感測器
EP3462730B1 (en) Image sensing circuit and image depth sensing system
WO2022110947A1 (zh) 电子装置的控制方法、电子装置及计算机可读存储介质
CN109804426B (zh) 图像传感电路及图像深度传感系统
WO2021136284A1 (zh) 三维测距方法和装置
WO2019200513A1 (zh) 影像传感系统及电子装置
WO2019196049A1 (zh) 影像传感系统及电子装置
WO2023199804A1 (ja) 測距装置および測距方法
JP3420782B2 (ja) 物体計測装置
TW202005359A (zh) 影像感測系統及其多功能影像感測器
KR20130006168A (ko) 센서 및 이를 포함하는 데이터 처리 시스템

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019919548

Country of ref document: EP

Effective date: 20200923

NENP Non-entry into the national phase

Ref country code: DE