WO2020172812A1 - Imaging system, and pixel array and image sensor of imaging system - Google Patents

Imaging system, and pixel array and image sensor of imaging system

Info

Publication number
WO2020172812A1
WO2020172812A1 (application PCT/CN2019/076295)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
pixel array
signal
predetermined direction
light
Prior art date
Application number
PCT/CN2019/076295
Other languages
English (en)
French (fr)
Inventor
杨孟达
Original Assignee
深圳市汇顶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority to PCT/CN2019/076295 (WO2020172812A1)
Priority to CN201980000288.3A (patent CN110024374B)
Priority to EP19916831.1A (patent EP3820143B1)
Publication of WO2020172812A1
Priority to US17/033,767 (patent US11442170B2)

Classifications

    • H04N25/70 SSIS architectures; Circuits associated therewith
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S7/4863 Detector arrays, e.g. charge-transfer gates
    • G01S7/4914 Circuits for detection, sampling, integration or read-out of detector arrays, e.g. charge-transfer gates
    • G01T1/2971 Scanners using solid state detectors
    • H01L27/14609 Pixel-elements with integrated switching, control, storage or amplification elements
    • H01L27/14643 Photodiode arrays; MOS imagers
    • H01L27/14672 Blooming suppression
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/75 Circuitry for providing, modifying or processing image signals from the pixel array
    • H04N25/77 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
    • G01T1/2018 Scintillation-photodiode combinations
    • G01T1/2928 Static instruments for imaging the distribution of radioactivity in one or two dimensions, using solid state detectors
    • H04N25/78 Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
    • H04N3/155 Control of the image-sensor operation, e.g. image processing within the image-sensor
    • H04N5/32 Transforming X-rays

Definitions

  • The present disclosure relates to imaging technology, and in particular to a pixel array applicable to time-of-flight imaging technology, and to related image sensors and imaging systems.
  • Time-of-flight (TOF) ranging technology continuously sends optical signals from a transmitter to a target and receives the optical signals returned from the target at a receiver; by calculating the flight time of the optical signal from the transmitter to the receiver, the distance between the target and the transmitter/receiver is obtained.
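The round-trip relationship above can be made concrete with a small sketch (an illustration only, not from the patent; the function name and units are assumptions):

```python
# One-way distance from a round-trip time of flight: the optical signal
# travels to the target and back, so halve the round-trip path length.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target in meters for a round-trip flight time."""
    return C * round_trip_time_s / 2.0
```

For example, a 10 ns round trip corresponds to roughly 1.5 m.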
  • In some time-of-flight ranging implementations, a uniform surface light source is used as the light source, so that each pixel in the pixel array can receive the light signal returned from the target.
  • If the emission power of the uniform surface light source is too high, it not only increases the power consumption of the system, but also easily causes damage to human eyes when applied to face recognition.
  • One of the objectives of the present disclosure is to provide a pixel array applicable to time-of-flight imaging technology, and related image sensors and imaging systems to solve the above-mentioned problems.
  • An embodiment of the present disclosure provides a pixel array of an imaging system.
  • the pixel array includes a plurality of pixels arranged in multiple rows and multiple columns for sensing reflection signals incident on the pixel array to form a plurality of separated reflection light spots on the pixel array.
  • Each pixel of the plurality of pixels includes a photodetector and a reading circuit.
  • the light detector is used for detecting the reflected signal and correspondingly outputting a light response signal.
  • the reading circuit is coupled to the light detector for generating pixel output according to the light response signal.
  • The plurality of pixels includes a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction. The reading circuit of the first pixel is adjacent to the photodetector of the second pixel in the first predetermined direction, and is adjacent to the photodetector of the first pixel in a second predetermined direction perpendicular to the first predetermined direction.
  • An embodiment of the present disclosure provides an image sensor of an imaging system.
  • the image sensor includes a pixel array and a processing circuit.
  • The pixel array is used to sense a reflection signal incident on the pixel array to form a plurality of reflected light spots separated from each other on the pixel array, wherein the reflection signal is generated by a structured light signal sent from the imaging system being reflected by a target.
  • the pixel array includes a plurality of pixels arranged in multiple rows and multiple columns. Each pixel of the plurality of pixels includes a photodetector and a reading circuit.
  • the light detector is used for detecting the reflected signal and correspondingly outputting a light response signal.
  • the reading circuit is coupled to the light detector for generating pixel output according to the light response signal.
  • The plurality of pixels includes a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction. The reading circuit of the first pixel is adjacent to the photodetector of the second pixel in the first predetermined direction, and is adjacent to the photodetector of the first pixel in a second predetermined direction perpendicular to the first predetermined direction.
  • the processing circuit is coupled to the pixel array for detecting the flight time of the structured light signal according to the sensing result of the pixel array, and obtaining the depth information of the target according to the flight time
  • An embodiment of the present disclosure provides an imaging system.
  • the imaging system includes a light emitting unit and a pixel array.
  • the light emitting unit is used to send structured light signals.
  • the pixel array is used for sensing the reflected signal generated by the structured light signal being reflected by the target.
  • the pixel array includes a plurality of pixels arranged in multiple rows and multiple columns.
  • the reflected signal forms a plurality of reflected light spots separated from each other on the pixel array.
  • Each pixel of the plurality of pixels includes a photodetector and a reading circuit.
  • the light detector is used for detecting the reflected signal and correspondingly outputting a light response signal.
  • the reading circuit is coupled to the light detector for generating pixel output according to the light response signal.
  • The plurality of pixels includes a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction. The reading circuit of the first pixel is adjacent to the photodetector of the second pixel in the first predetermined direction, and is adjacent to the photodetector of the first pixel in a second predetermined direction perpendicular to the first predetermined direction.
  • Fig. 1 is a functional block diagram of an embodiment of the imaging system of the present disclosure.
  • FIG. 2 is a schematic diagram of an embodiment of multiple light spots formed by projecting the structured light signal shown in FIG. 1 on a cross-section.
  • FIG. 3 is a schematic diagram of an embodiment in which the reflected signal shown in FIG. 1 is incident on a plurality of reflected light spots formed on the pixel array.
  • FIG. 4 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array shown in FIG. 3 is applied to continuous wave modulation.
  • FIG. 5 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array shown in FIG. 3 is applied to pulse modulation.
  • FIG. 6 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • FIG. 7 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • FIG. 8 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • FIG. 9 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • FIG. 10 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • Fig. 1 is a functional block diagram of an embodiment of the imaging system of the present disclosure.
  • the imaging system 100 can be implemented by a three-dimensional imaging system to obtain depth information (or depth images) of surrounding objects.
  • the imaging system 100 may be a time-of-flight imaging system, which can obtain the depth information of the target 102 by measuring the distance between the target 102 and the imaging system 100.
  • the imaging system 100 may be a structured light three-dimensional imaging system, which can determine the depth information of the target 102 according to the pattern deformation of the structured light received by the receiving end.
  • the imaging system 100 is implemented as a time-of-flight imaging system to illustrate the imaging solution of the present disclosure.
  • the imaging solution of the present disclosure can be applied to other three-dimensional imaging systems that obtain depth images based on optical signals from the transmitting end and the receiving end.
  • the imaging system 100 may include (but is not limited to) a light emitting unit 110 and an image sensor 120.
  • the light emitting unit 110 is used to generate a structured light signal LS, wherein the structured light signal LS may have a predetermined pattern, so that energy can be concentrated in the predetermined pattern.
  • the light-emitting unit 110 may include a light source 112 and an optical microstructure 114.
  • the optical microstructure 114 can be used to change the travel route of the light signal LI output by the light source 112, thereby generating a structured light signal LS having the predetermined pattern.
  • the structured light signal LS is projected on the cross section AB to form multiple light spots that are separated from each other. In this way, the energy of the light signal LI can be concentrated on the light spots formed by the structured light signal LS, thereby enhancing the intensity of the light signal incident on the target 102 and reducing the influence of background noise.
  • the optical microstructure 114 may include a diffractive optical element (DOE) or a refractive optical element (ROE) for conically diffracting (or conically refracting) the optical signal LI to generate the structured light signal LS; projecting the structured light signal LS onto the cross section AB forms a plurality of light spots separated from each other.
  • the image sensor 120 is used to sense the reflection signal LR returned from the target 102 to obtain image information of the target 102, wherein the reflection signal LR is generated by the structured light signal LS reflected by the target 102.
  • the image sensor 120 includes (but is not limited to) a pixel array 122 and a processing circuit 124.
  • the pixel array 122 is used for sensing the reflected signal LR to generate a sensing result SR. It is worth noting that because the structured light signal LS can form multiple light spots on the surface of the target 102, the reflected signal LR incident on the pixel array 122 can form multiple reflected light spots on the pixel array 122 that are separated from each other.
  • the pixel array 122 has a plurality of pixels, and each reflected light spot can illuminate at least one pixel.
  • the energy of the reflected signal LR can be concentratedly provided to the pixels illuminated by the multiple reflected light spots. In this way, in the signal received by the pixel illuminated by the reflected light spot, the proportion of the background noise component can be reduced, thereby improving the signal quality of the pixel output (part of the sensing result SR) of the pixel.
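The signal-quality argument can be illustrated with a toy calculation (the numbers are invented for illustration and not from the disclosure): with a fixed received signal power and constant background per pixel, concentrating the light on fewer pixels raises each illuminated pixel's signal-to-background ratio.

```python
# Toy model: fixed total received signal power, constant background
# per pixel. Concentrating the light on fewer pixels raises the
# signal-to-background ratio of each illuminated pixel.
total_signal = 1000.0        # arbitrary units
background_per_pixel = 1.0   # arbitrary units

uniform_pixels = 10_000      # flood illumination spreads over all pixels
spot_pixels = 100            # structured light hits only spot-lit pixels

snr_uniform = (total_signal / uniform_pixels) / background_per_pixel
snr_spots = (total_signal / spot_pixels) / background_per_pixel
# Here snr_spots is 100x snr_uniform for the illuminated pixels.
```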
  • the processing circuit 124 is coupled to the pixel array 122 to detect the flight time of the structured light signal LS according to the sensing result SR of the pixel array 122, and to obtain the depth information of the target 102 according to the flight time. For example, according to the pixel outputs generated by the pixels illuminated by the multiple reflected light spots, the processing circuit 124 can obtain the depth information (depth image) of the areas illuminated by the multiple light spots that the structured light signal LS forms on the surface of the target 102.
  • the distance between the light-emitting unit 110 and each pixel in the pixel array 122 is much smaller than the distance between the light-emitting unit 110 and the target 102, and much smaller than the distance between each pixel and the target 102. Therefore, for a pixel, the flight distance of the structured light signal LS from the light-emitting unit 110 to the target 102 can be regarded as equal to the flight distance of the reflected signal LR from the target 102 back to that pixel.
  • the processing circuit 124 can measure the distance between the target 102 and the pixel according to the flight time and propagation speed of the structured light signal LS.
  • the processing circuit 124 can, according to the depth information corresponding to the pixels illuminated by the multiple reflected light spots, determine the depth information of the areas on the surface of the target 102 that are not illuminated by the light spots of the structured light signal LS, so as to construct a depth image of the target 102.
  • the processing circuit 124 can perform related signal processing, such as interpolation, based on the pixel outputs of the pixels illuminated by the multiple reflected light spots, to generate the pixel outputs of pixels that are not illuminated by any reflected light spot.
  • the pixel array 122 can adopt a smaller pixel pitch, so that the reflected light spot can illuminate a sufficient area (e.g., at least one pixel), and the resolution can be improved. In this way, the imaging system 100 can not only reduce system power consumption by concentrating the energy sent by the transmitting end, but also has good signal sensing quality and high resolution effects.
  • FIGS. 2 and 3 respectively show an embodiment of the light point distribution related to the structured light signal LS and the reflected signal LR shown in FIG. 1.
  • the multiple light spots shown in FIGS. 2 and 3 are all arranged in an array pattern of multiple rows and multiple columns. However, this is only for illustration, and the present disclosure is not limited thereto.
  • Structured light signals/reflected signals with other light spot distributions (for example, multiple light spots arranged in a polygon and separated from each other) are also feasible.
  • FIG. 2 is a schematic diagram of an embodiment of multiple light spots formed by projecting the structured light signal LS shown in FIG. 1 on the cross section AB.
  • the optical microstructure 114 can conically diffract the light signal LI to generate the structured light signal LS, so that the structured light signal LS projected on the cross section AB forms a plurality of light spots S1 arranged in M rows and N columns, where M and N are both positive integers greater than 1.
  • the energy of the light signal LI (or the energy of the structured light signal LS) can be concentrated in multiple light spots S1.
  • the distance between two adjacent light spots shown in FIG. 2 is only shown schematically, and is not a limitation of the present disclosure.
  • the distance between two adjacent light spots in the row direction and/or in the column direction can be adjusted by changing the design of the optical microstructure 114.
  • FIG. 3 is a schematic diagram of an embodiment of a plurality of reflected light spots formed by the reflected signal LR shown in FIG. 1 incident on the pixel array 122.
  • the reflection signal LR can be generated by reflecting the structured light signal LS shown in FIG. 2 by the target 102, and a plurality of reflection light spots S2 arranged in M rows and N columns are formed on the pixel array 122.
  • the pixel array 122 may have a plurality of pixels arranged in P rows and Q columns, where P is a positive integer greater than 1 and less than M, and Q is a positive integer greater than 1 and less than N.
  • At least one pixel may be arranged between two adjacent reflective light spots among the multiple reflective light spots S2.
  • the pixel array 122 may have millions of pixels (for example, 1280×960), and the light-emitting unit 110 may use a structured light signal LS with 320×240 light spots, so that the reflected signal LR forms 320×240 reflected light spots S2 on the pixel array 122.
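With the example figures above, each reflected light spot maps on average to a 4×4 block of pixels, leaving unlit pixels between adjacent spots as described earlier (the grid arithmetic below is implied by the example numbers, not stated in this form in the patent):

```python
# Average pixel pitch between adjacent reflected light spots for the
# example resolutions: 1280x960 pixels receiving 320x240 spots.
pixels_x, pixels_y = 1280, 960
spots_x, spots_y = 320, 240

pitch_x = pixels_x // spots_x  # pixels per spot period, row direction
pitch_y = pixels_y // spots_y  # pixels per spot period, column direction
# Both pitches are 4, i.e. on average one spot per 4x4 pixel neighborhood.
```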
  • the pixel array 122 can adopt a smaller pixel pitch.
  • the pixel array of the present disclosure can adopt an asymmetrical pixel arrangement that meets the design requirements of small pixel pitch and high fill factor, and can ensure that each reflected light spot illuminates the photosensitive area of at least one pixel. The fill factor refers to the ratio of the photosensitive area of a pixel to the entire pixel area.
  • each pixel of the pixel array 122 may include a photodetector and a reading circuit.
  • the reading circuit of each pixel can be adjacent to the photosensitive area (photodetector) of the pixel in the row direction and the column direction, so that each pixel can have a high fill factor.
  • the pixel PXA includes a photodetector PDA and a reading circuit RDA, and the pixel PXB includes a photodetector PDB and a reading circuit RDB.
  • the photodetector PDA is used for detecting the reflection signal LR to generate a light response signal DRA
  • the photodetector PDB is used for detecting the reflection signal LR to generate a light response signal DRB.
  • the reading circuit RDA is coupled to the photodetector PDA for generating a pixel output POA according to the light response signal DRA.
  • the reading circuit RDB is coupled to the photodetector PDB for generating a pixel output POB according to the light response signal DRB.
  • the reading circuit RDA of the pixel PXA is adjacent to the photodetector PDB of the pixel PXB along the first predetermined direction D1, and is adjacent to the photodetector PDA of the pixel PXA along the second predetermined direction D2 (for example, the column direction) perpendicular to the first predetermined direction D1.
  • the photodetector PDA may be arranged on one side (for example, the upper side) of the reading circuit RDA in the second predetermined direction D2, and the photodetector PDB may be arranged on the opposite side (for example, the lower side) of the reading circuit RDB in the second predetermined direction D2.
  • the arrangement of the photodetector PDA and the reading circuit RDA in the pixel PXA and the arrangement of the photodetector PDB and the reading circuit RDB in the pixel PXB can be reversed.
  • the pixel PXB can be regarded as a mirror image of the pixel PXA upside down.
  • the reading circuit RDA/RDB can be adjacent to the photosensitive area of the pixel in both the row direction and the column direction.
  • the length of each pixel in the second predetermined direction D2 can be greater than the length in the first predetermined direction D1, so that the photosensitive area of each pixel (corresponding to the photosensitive area of the photodetector) can be increased, thereby improving the pixel fill factor and sensitivity.
  • the length L of each pixel may be twice the width W.
  • the pixel array 122 only needs two reading circuits in a square area with a side length equal to L, whereas a comparable pixel array with square pixels requires four reading circuits (located in four pixels) in the same area. Therefore, when the pixel array 122 and the comparable pixel array are read at the same signal reading rate, the imaging solution of the present disclosure can achieve a higher frame rate.
  • Although the first predetermined direction D1 and the second predetermined direction D2 are described above as the row direction and the column direction, respectively, this is not intended to limit the present disclosure.
  • the first predetermined direction D1 and the second predetermined direction D2 may instead be the column direction and the row direction, respectively.
  • the reading circuit of each pixel may include two photoelectric reading units controlled by control signals of different phases, so that the subsequent processing circuit can obtain the phase offset information between the transmitted signal and the received signal, such as the signal information required for continuous wave modulation and/or pulse modulation.
  • the reading circuit RDA may include a first photoelectric reading unit 334 and a second photoelectric reading unit 336.
  • the first photoelectric reading unit 334 is used for selectively coupling to the photodetector PDA according to a first control signal TX1 to generate the first part of the pixel output POA of the pixel PXA.
  • the second photoelectric reading unit 336 is used for selectively coupling to the photodetector PDA according to a second control signal TX2 to generate the second part of the pixel output POA of the pixel PXA, wherein the second control signal TX2 and the first control signal TX1 can have different phases.
  • the reading circuit RDB may include a first photoelectric reading unit 344 and a second photoelectric reading unit 346.
  • the first photoelectric reading unit 344 is used for selectively coupling to the photodetector PDB according to the first control signal TX1 to generate the first part of the pixel output POB of the pixel PXB.
  • the second photoelectric reading unit 346 is used for selectively coupling to the photodetector PDB according to the second control signal TX2 to generate the second part of the pixel output POB of the pixel PXB.
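One common way such phase-shifted taps are used in continuous-wave time-of-flight (a generic sketch of the standard four-sample method, not a claim about this patent's exact processing; the function and parameter names are assumptions) is to take two exposures, with both taps shifted by 90 degrees between them, and recover the phase offset between the transmitted and received signals:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(q0: float, q180: float, q90: float, q270: float,
                 f_mod: float) -> float:
    """Depth from four demodulation samples of a continuous wave.

    q0/q180 come from one exposure with complementary taps (TX1/TX2);
    q90/q270 come from a second exposure with both taps shifted by 90
    degrees. Uses the convention Q(theta) ~ cos(phase - theta).
    """
    phase = math.atan2(q90 - q270, q0 - q180) % (2.0 * math.pi)
    # Phase maps to distance over one modulation wavelength, halved
    # for the round trip: d = c * phase / (4 * pi * f_mod).
    return C * phase / (4.0 * math.pi * f_mod)
```

For instance, samples (0, 0, 1, -1) at a 10 MHz modulation frequency correspond to a quarter-cycle phase shift, i.e. about 3.75 m.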
  • FIG. 4 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array 122 shown in FIG. 3 is applied to continuous wave modulation.
  • the pixel 430 can be used to implement the pixel PXA and the pixel PXB shown in FIG. 3.
  • the pixel 430 includes (but is not limited to) a photodetector 432, a first photoelectric reading unit 434, and a second photoelectric reading unit 436.
  • the first photoelectric reading unit 334/344 shown in FIG. 3 may be implemented by the first photoelectric reading unit 434, and/or the second photoelectric reading unit 336/346 shown in FIG. 3 may be implemented by the second photoelectric reading unit 436.
  • the photodetector 432 (for example, a photodiode) can detect the reflected signal LR to correspondingly generate a photoresponse signal PR, wherein the photoresponse signal PR can be output through at least one of the first photoelectric reading unit 434 and the second photoelectric reading unit 436.
  • the photodetector 432 can convert the received optical signal into a photocurrent signal of corresponding magnitude; that is, the photoresponse signal PR can be a current signal representing the magnitude of the optical signal, and the photoelectric reading units are used to read this photocurrent signal.
  • the first photoelectric reading unit 434 may include (but is not limited to) a first reset transistor MR1, a first transfer transistor MT1, a first output transistor MF1, and a first read transistor MW1.
  • the first reset transistor MR1 resets a first floating diffusion node FD1 according to a reset signal RST.
  • the first transfer transistor MT1 is coupled to the photodetector 432 and is turned on according to the first control signal TX1; that is, the first transfer transistor MT1 is controlled by the first control signal TX1 to connect to and disconnect from the photodetector 432.
  • the first output transistor MF1 is used to amplify the voltage signal of the first floating diffusion node FD1 to generate a first amplified signal VS1, where the first amplified signal VS1 can be used as the first part of a pixel output of the pixel 430.
  • the first read transistor MW1 is used to selectively output the first amplified signal VS1 according to a selection signal SEL.
  • the second photoelectric reading unit 436 is used for selectively coupling to the photodetector 432 according to a second control signal TX2 to generate the second part of the pixel output of the pixel 430, wherein the second control signal TX2 and the first control signal TX1 can have different phases.
  • the second photoelectric reading unit 436 may include (but is not limited to) a second reset transistor MR2, a second transfer transistor MT2, a second output transistor MF2, and a second read transistor MW2.
  • the second reset transistor MR2 resets a second floating diffusion node FD2 according to the reset signal RST.
  • the second transfer transistor MT2 is coupled to the photodetector 432 and is turned on according to the second control signal TX2; that is, the second transfer transistor MT2 is controlled by the second control signal TX2 to connect to and disconnect from the photodetector 432.
  • the second output transistor MF2 is used to amplify the voltage signal of the second floating diffusion node FD2 to generate a second amplified signal VS2, where the second amplified signal VS2 can be used as the second part of the pixel output.
  • the second reading transistor MW2 is used to selectively output the second amplified signal VS2 according to the selection signal SEL.
  • since those skilled in the art should understand the operational details of how the first photoelectric reading unit 434 and the second photoelectric reading unit 436 generate the signal information required for continuous wave modulation according to the first control signal TX1 and the second control signal TX2, respectively, further description is omitted here.
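The operational details referred to above reduce, in one common indirect time-of-flight scheme, to a phase estimate from differently gated charge samples. The following sketch is an illustration only: the four-phase sampling, the 20 MHz modulation frequency, and all variable names are assumptions of this example, not taken from the disclosure.

```python
import math

C = 299792458.0  # speed of light (m/s)

def cw_distance(q0, q90, q180, q270, f_mod):
    """Estimate distance from four phase-stepped charge samples.

    q0..q270: charge integrated while the demodulation gate is shifted
    by 0, 90, 180 and 270 degrees relative to the emitted wave.
    """
    phase = math.atan2(q90 - q270, q0 - q180)   # phase offset in (-pi, pi]
    phase %= 2.0 * math.pi                      # map to [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod)  # d = c * phi / (4*pi*f)

# Synthetic check: a target at 3 m with a 20 MHz modulation frequency.
f_mod = 20e6
d_true = 3.0
phi = 4.0 * math.pi * f_mod * d_true / C
samples = [0.5 * (1.0 + math.cos(phi - t)) for t in
           (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
print(round(cw_distance(*samples, f_mod), 6))  # ≈ 3.0
```

With only the two taps FD1/FD2 of the pixel 430, the four samples would typically be obtained over two exposures, shifting the TX1/TX2 phases between them.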
  • FIG. 5 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array 122 shown in FIG. 3 is applied to pulse modulation.
  • the pixel 530 can be used to implement the pixel PXA and the pixel PXB shown in FIG. 3.
  • the difference between the circuit structure of the pixel 530 and the circuit structure of the pixel 430 shown in FIG. 4 is that the pixel 530 also includes an anti-blooming transistor MB.
  • the anti-blooming transistor MB is controlled by a third control signal TX3 to extract the photoelectrons generated by the photodetector 432 due to receiving background light (for example, by conducting them to the supply voltage), so as not to affect the normal operation of the circuit.
  • the anti-blooming transistor MB can be turned on according to the third control signal TX3, so as to extract the photoelectrons generated by the photodetector 432 due to receiving background light, thereby enhancing the ability of the pixel 530 to resist background light.
  • since those skilled in the art should understand the operational details of how the first photoelectric reading unit 434, the second photoelectric reading unit 436, and the anti-blooming transistor MB generate the signal information required for pulse modulation according to the first control signal TX1, the second control signal TX2, and the third control signal TX3, respectively, further description is omitted here.
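The pulse-modulation signal information mentioned above can likewise be illustrated with a common two-gate scheme. This is a sketch under assumptions (the pulse width, gate timing, and names are ours; the disclosure does not spell out this math):

```python
C = 299792458.0  # speed of light (m/s)

def pulse_distance(q1, q2, t_pulse):
    """Two-gate pulsed time-of-flight estimate.

    q1: charge collected by the gate aligned with the emitted pulse
    (driven by TX1 in the text's terms); q2: charge collected by the
    immediately following gate (TX2). For a return delay td within one
    pulse width, q1 is proportional to t_pulse - td and q2 to td.
    """
    td = t_pulse * q2 / (q1 + q2)  # recovered round-trip delay
    return C * td / 2.0            # one-way distance

# Synthetic check: 100 ns pulse, 40 ns round-trip delay (~6 m target).
t_pulse, td = 100e-9, 40e-9
q1, q2 = (t_pulse - td), td
print(round(pulse_distance(q1, q2, t_pulse), 3))  # ≈ 5.996
```

The anti-blooming gate TX3 does not appear in this arithmetic; its role is only to drain background-light photoelectrons between the measurement gates.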
  • FIG. 6 is a schematic diagram of an embodiment in which the reflected light point S2 shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • each reading circuit can include two photoelectric reading units RU1 and RU2, which can be implemented by the first photoelectric reading unit 334/344 and the second photoelectric reading unit 336/346 shown in FIG. 3, respectively.
  • the reading circuit RD of the pixel PX3 is adjacent not only to the respective photodetectors PD of the pixels PX3 and PX4, but also to the respective photodetectors PD of the pixels PX2 and PX7.
  • by staggering, in the second predetermined direction D2, the respective photodetectors PD of a group of pixels disposed along the first predetermined direction D1 (for example, a row of pixels), the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2 even if the center of the spot deviates from the photosensitive area of a single pixel.
  • FIG. 7 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 7.
  • the difference between the pixel arrangement shown in FIG. 7 and the pixel arrangement shown in FIG. 6 is that the photodetector PD of each pixel shown in FIG. 7 is disposed on the same side (for example, the upper side) of the reading circuit RD of the pixel in the second predetermined direction D2.
  • the pixels PX2/PX4/PX6/PX8 can be implemented by pixels that are upside-down versions of the pixel PXB shown in FIG. 3.
  • by staggering, in the second predetermined direction D2, a group of pixels disposed along the first predetermined direction D1 (for example, a row of pixels), the reading circuit of a single pixel can still be surrounded by four photodetectors. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive area of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
  • FIG. 8 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 8.
  • the difference between the pixel arrangement shown in FIG. 8 and the pixel arrangement shown in FIG. 6 is that the photosensitive area of the photodetector PD of each pixel shown in FIG. 8 may be non-quadrilateral.
  • the photodetector PD of each pixel may have an L-shaped photosensitive area, and the corresponding reading circuit RD may be disposed at the reentrant corner of the L-shaped photosensitive area, so that the reading circuit RD is surrounded by the photosensitive area.
  • the reading circuit RD of the pixel PX3 is adjacent to the photodetector PD of the pixel PX4 along the first predetermined direction D1, and is adjacent to the photodetector PD of the pixel PX3 along the second predetermined direction D2.
  • the reading circuit RD of the pixel PX3 is also adjacent to the photodetector PD of the pixel PX3 along a third predetermined direction D3 opposite to the first predetermined direction D1. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive area of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
  • FIG. 9 is a schematic diagram of an embodiment in which the reflected light point S2 shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 9.
  • the first photoelectric reading unit RU1 and the second photoelectric reading unit RU2 of each pixel shown in FIG. 9 can be respectively disposed on one side (for example, the upper side) of the photodetector PD of the pixel and on the opposite side (for example, the lower side).
  • by staggering, in the second predetermined direction D2, a group of pixels disposed along the first predetermined direction D1, the reading circuits located at the pixel boundary (such as the second photoelectric reading unit RU2 of the pixel PX3 and the first photoelectric reading unit RU1 of the pixel PX7) can still be surrounded by a sufficient photosensitive area. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive area of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
  • FIG. 10 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels.
  • Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 10.
  • the difference between the pixel arrangement shown in FIG. 10 and the pixel arrangement shown in FIG. 6 is that the photosensitive area of the photodetector PD of each pixel shown in FIG. 10 may be non-quadrilateral, and the first photoelectric reading unit RU1 and the second photoelectric reading unit RU2 of each pixel are disposed on different sides of the photodetector PD.
  • the photodetector PD of each pixel may have a T-shaped photosensitive area; the corresponding first photoelectric reading unit RU1 may be disposed at one reentrant corner of the T-shaped photosensitive area, and the corresponding second photoelectric reading unit RU2 may be disposed at another reentrant corner of the T-shaped photosensitive area, so that the photoelectric reading units located at the pixel boundary (such as the second photoelectric reading unit RU2 of the pixel PX3 and the first photoelectric reading unit RU1 of the pixel PX7) can still be surrounded by a sufficient photosensitive area. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive area of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
  • by transmitting a structured light signal with a predetermined pattern from the transmitting end, the imaging solution disclosed in this application enables the reflected signal returning from the target object to the receiving end to form a plurality of reflected light spots on the pixel array, increasing the signal strength of the optical signal incident on the pixels and thereby reducing the influence of background noise.
  • by adopting an asymmetrical pixel arrangement structure, for example using rectangular pixels, staggering the photosensitive regions of a plurality of pixels along a predetermined direction, and/or using non-quadrilateral photosensitive regions, the imaging solution disclosed in this application can satisfy the requirements of small pixel pitch and high fill factor at the same time, and achieves high sensitivity, high resolution, and high frame rate.


Abstract

The present disclosure provides an imaging system, a pixel array, and an image sensor. The pixel array includes a plurality of pixels for sensing a reflected signal that is incident on the pixel array and forms a plurality of reflected light spots separated from one another on the pixel array. Each pixel includes a photodetector and a reading circuit. The photodetector detects the reflected signal and correspondingly outputs a photoresponse signal. The reading circuit generates a pixel output according to the photoresponse signal. The plurality of pixels include a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction. The reading circuit of the first pixel is adjacent to the photodetector of the second pixel along the first predetermined direction, and adjacent to the photodetector of the first pixel along a second predetermined direction perpendicular to the first predetermined direction. The pixel array achieves a small pixel pitch and a high fill factor.

Description

Imaging System, and Pixel Array and Image Sensor of Imaging System — Technical Field
The present disclosure relates to imaging technology, and in particular to a pixel array applicable to time-of-flight imaging technology, and to a related image sensor and imaging system.
Background
Time of flight (TOF) ranging technology continuously sends optical signals from a transmitting end to a target object and receives, at a receiving end, the optical signals returned from the target object, thereby calculating the flight time of the optical signal from the transmitting end back to the receiving end and obtaining the distance between the target object and the transmitting/receiving end. To improve resolution and capture depth information of distant target objects, time-of-flight ranging technology uses a uniform surface light source as the illumination source, so that each pixel in the pixel array can receive the optical signal returned from the target object. However, the excessively high emission power of a uniform surface light source not only increases the power consumption of the system, but is also liable to harm human eyes when applied to face recognition.
Therefore, there is a need for an innovative time-of-flight imaging solution that provides high resolution while meeting the requirement of low power consumption.
Summary
One object of the present disclosure is to provide a pixel array applicable to time-of-flight imaging technology, and a related image sensor and imaging system, to solve the above problems.
An embodiment of the present disclosure provides a pixel array of an imaging system. The pixel array includes a plurality of pixels arranged in rows and columns, for sensing a reflected signal that is incident on the pixel array and forms a plurality of reflected light spots separated from one another on the pixel array. Each of the plurality of pixels includes a photodetector and a reading circuit. The photodetector is used to detect the reflected signal and correspondingly output a photoresponse signal. The reading circuit is coupled to the photodetector and used to generate a pixel output according to the photoresponse signal. The plurality of pixels include a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction; the reading circuit of the first pixel is adjacent to the photodetector of the second pixel along the first predetermined direction, and is adjacent to the photodetector of the first pixel along a second predetermined direction perpendicular to the first predetermined direction.
An embodiment of the present disclosure provides an image sensor of an imaging system. The image sensor includes a pixel array and a processing circuit. The pixel array is used to sense a reflected signal that is incident on the pixel array and forms a plurality of reflected light spots separated from one another on the pixel array, wherein the reflected signal is generated by a target object reflecting a structured light signal sent from the imaging system. The pixel array includes a plurality of pixels arranged in rows and columns. Each of the plurality of pixels includes a photodetector and a reading circuit. The photodetector is used to detect the reflected signal and correspondingly output a photoresponse signal. The reading circuit is coupled to the photodetector and used to generate a pixel output according to the photoresponse signal. The plurality of pixels include a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction; the reading circuit of the first pixel is adjacent to the photodetector of the second pixel along the first predetermined direction, and is adjacent to the photodetector of the first pixel along a second predetermined direction perpendicular to the first predetermined direction. The processing circuit is coupled to the pixel array and used to detect the flight time of the structured light signal according to the sensing result of the pixel array, and to obtain depth information of the target object according to the flight time.
An embodiment of the present disclosure provides an imaging system. The imaging system includes a light emitting unit and a pixel array. The light emitting unit is used to send a structured light signal. The pixel array is used to sense a reflected signal generated by the structured light signal being reflected by a target object. The pixel array includes a plurality of pixels arranged in rows and columns. The reflected signal forms a plurality of reflected light spots separated from one another on the pixel array. Each of the plurality of pixels includes a photodetector and a reading circuit. The photodetector is used to detect the reflected signal and correspondingly output a photoresponse signal. The reading circuit is coupled to the photodetector and used to generate a pixel output according to the photoresponse signal. The plurality of pixels include a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction; the reading circuit of the first pixel is adjacent to the photodetector of the second pixel along the first predetermined direction, and is adjacent to the photodetector of the first pixel along a second predetermined direction perpendicular to the first predetermined direction.
Brief Description of the Drawings
FIG. 1 is a functional block diagram of an embodiment of the imaging system of the present disclosure.
FIG. 2 is a schematic diagram of an embodiment of a plurality of light spots formed by the structured light signal shown in FIG. 1 projected on a cross section.
FIG. 3 is a schematic diagram of an embodiment of a plurality of reflected light spots formed by the reflected signal shown in FIG. 1 incident on the pixel array.
FIG. 4 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array shown in FIG. 3 is applied to continuous wave modulation.
FIG. 5 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array shown in FIG. 3 is applied to pulse modulation.
FIG. 6 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
FIG. 7 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
FIG. 8 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
FIG. 9 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
FIG. 10 is a schematic diagram of an embodiment in which the reflected light spot shown in FIG. 3 is located at the boundary of two adjacent pixels.
The reference numerals are described as follows:
100                               imaging system
102                               target object
110                               light emitting unit
112                               light source
114                               optical microstructure
120                               image sensor
122                               pixel array
124                               processing circuit
334, 344, 434, RU1                first photoelectric reading unit
336, 346, 436, RU2                second photoelectric reading unit
430, 530, PXA, PXB, PX1-PX8       pixel
432, PDA, PDB, PD                 photodetector
RDA, RDB, RD                      reading circuit
MR1                               first reset transistor
MR2                               second reset transistor
MT1                               first transfer transistor
MT2                               second transfer transistor
MF1                               first output transistor
MF2                               second output transistor
MW1                               first read transistor
MW2                               second read transistor
MB                                anti-blooming transistor
FD1                               first floating diffusion node
FD2                               second floating diffusion node
AB                                cross section
BD                                pixel boundary
S1                                light spot
S2                                reflected light spot
LI                                optical signal
LS                                structured light signal
LR                                reflected signal
TX1                               first control signal
TX2                               second control signal
TX3                               third control signal
VS1                               first amplified signal
VS2                               second amplified signal
RST                               reset signal
SEL                               selection signal
L                                 length
W                                 width
DRA, DRB                          photoresponse signal
POA, POB, PO                      pixel output
D1                                first predetermined direction
D2                                second predetermined direction
D3                                third predetermined direction
Detailed Description
Certain terms are used throughout the specification and the preceding claims to refer to particular components. Those skilled in the art will appreciate that manufacturers may refer to the same component by different names. This specification and the preceding claims do not distinguish components by difference in name, but by difference in function. The term "comprise" used throughout the specification and the preceding claims is open-ended and should be interpreted as "including but not limited to". In addition, the term "coupled" herein encompasses any direct and indirect means of electrical connection. Therefore, if a first device is described as being coupled to a second device, the first device may be directly electrically connected to the second device, or indirectly electrically connected to the second device through other devices or connection means.
FIG. 1 is a functional block diagram of an embodiment of the imaging system of the present disclosure. The imaging system 100 can be implemented as a three-dimensional imaging system for obtaining depth information (or a depth image) of surrounding target objects. For example (but the present disclosure is not limited thereto), the imaging system 100 may be a time-of-flight imaging system, which can obtain depth information of a target object 102 by measuring the distance between the target object 102 and the imaging system 100. It is worth noting that, in some embodiments, the imaging system 100 may be a structured light three-dimensional imaging system, which can determine the depth information of the target object 102 according to the pattern deformation of the structured light received at the receiving end. For brevity, the imaging solution of the present disclosure is described below with the imaging system 100 implemented as a time-of-flight imaging system. However, those skilled in the art should understand that the imaging solution of the present disclosure can be applied to other three-dimensional imaging systems that obtain a depth image from the optical signals at the transmitting end and the receiving end.
The imaging system 100 may include (but is not limited to) a light emitting unit 110 and an image sensor 120. The light emitting unit 110 is used to generate a structured light signal LS, where the structured light signal LS may have a predetermined pattern so that energy can be concentrated in the predetermined pattern. The light emitting unit 110 may include a light source 112 and an optical microstructure 114. The optical microstructure 114 can be used to change the traveling path of the optical signal LI output by the light source 112, thereby generating the structured light signal LS having the predetermined pattern. In this embodiment, the structured light signal LS projected on the cross section AB can form a plurality of light spots separated from one another. In this way, the energy of the optical signal LI can be concentrated in the light spots formed by the structured light signal LS, thereby enhancing the strength of the optical signal incident on the target object 102 and reducing the influence of background noise.
For example (but the present disclosure is not limited thereto), the optical microstructure 114 may include a diffractive optical element (DOE) or a refractive optical element (ROE), for performing conical diffraction (or conical refraction) on the optical signal LI to generate the structured light signal LS, so that the structured light signal LS projected on the cross section AB can form a plurality of light spots separated from one another.
The image sensor 120 is used to sense the reflected signal LR returned from the target object 102 to obtain image information of the target object 102, where the reflected signal LR is generated by the target object 102 reflecting the structured light signal LS. In this embodiment, the image sensor 120 includes (but is not limited to) a pixel array 122 and a processing circuit 124. The pixel array 122 is used to sense the reflected signal LR to generate a sensing result SR. It is worth noting that, since the structured light signal LS can form a plurality of light spots on the surface of the target object 102, the reflected signal LR incident on the pixel array 122 can form a plurality of reflected light spots separated from one another on the pixel array 122, where the pixel array 122 has a plurality of pixels, and each reflected light spot can illuminate at least one pixel. That is, the energy of the reflected signal LR can be concentrated on the pixels illuminated by the plurality of reflected light spots. In this way, the proportion of background noise components in the signals received by the pixels illuminated by the reflected light spots can be reduced, thereby improving the signal quality of the pixel outputs (a part of the sensing result SR) of those pixels.
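As a rough arithmetic illustration of this energy-concentration argument (the pixel and spot counts reuse the 1280×960 / 320×240 example given for FIG. 3, and the one-illuminated-pixel-per-spot simplification is ours, not from the disclosure):

```python
# Same optical power spread uniformly over the array vs. concentrated
# into discrete reflected light spots:
total_pixels = 1280 * 960          # pixel count from the FIG. 3 example
spot_pixels = 320 * 240            # one illuminated pixel per spot (assumed)
gain = total_pixels / spot_pixels  # per-pixel signal boost at the spots
print(gain)  # 16.0
```

Under this simplification, each illuminated pixel receives sixteen times the signal it would under uniform illumination of the same total power, which is the sense in which the background-noise proportion drops.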
The processing circuit 124 is coupled to the pixel array 122 and used to detect the flight time of the structured light signal LS according to the sensing result SR of the pixel array 122, and to obtain depth information of the target object 102 according to the flight time. For example, the processing circuit 124 can obtain, according to the pixel outputs generated by the pixels illuminated by the plurality of reflected light spots, depth information (a depth image) of the regions illuminated by the plurality of light spots that the structured light signal LS forms on the surface of the target object 102.
It should be noted that, in this embodiment, the distance between the light emitting unit 110 and each pixel in the pixel array 122 is much smaller than the distance between the light emitting unit 110 and the target object 102, and much smaller than the distance between each pixel and the target object 102. Therefore, for a given pixel, the flight distance of the structured light signal LS from the light emitting unit 110 to the target object 102 can be regarded as equal to the flight distance of the reflected signal LR from the target object 102 back to the pixel. The processing circuit 124 can then measure the distance between the target object 102 and the pixel according to the flight time and the propagation speed of the structured light signal LS.
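The relation described in this paragraph reduces to a one-line conversion: the measured flight time covers the round trip, so the one-way distance is half of c·t (a minimal sketch; the function and variable names are ours):

```python
C = 299792458.0  # propagation speed of the optical signal (m/s, in vacuum)

def depth_from_tof(t_flight):
    """Distance to the target: the measured flight time covers the
    round trip, so the one-way distance is c * t / 2."""
    return C * t_flight / 2.0

print(round(depth_from_tof(10e-9), 3))  # ≈ 1.499 m for a 10 ns round trip
```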
In some embodiments, since the pixel outputs generated by the pixels illuminated by the plurality of reflected light spots have good signal quality, the processing circuit 124 can determine, according to the depth information corresponding to the pixels illuminated by the plurality of reflected light spots, the depth information of the regions on the surface of the target object 102 that are not illuminated by the light spots of the structured light signal LS, thereby constructing a depth image of the target object 102. For example (but the present disclosure is not limited thereto), the processing circuit 124 can perform related signal processing, such as interpolation, according to the pixel outputs of the pixels illuminated by the plurality of reflected light spots, to generate pixel outputs for the other pixels not illuminated by the plurality of reflected light spots.
In addition, in some embodiments, the pixel array 122 can adopt a small pixel pitch, so that a reflected light spot can illuminate a sufficient area (for example, at least one pixel) and resolution can be improved. In this way, the imaging system 100 can not only reduce system power consumption by concentrating the energy sent by the transmitting end, but also achieve good signal sensing quality and high resolution.
To facilitate understanding of the imaging solution of the present disclosure, FIG. 2 and FIG. 3 respectively show an implementation of the light spot distributions involved in the structured light signal LS and the reflected signal LR shown in FIG. 1. The plurality of light spots shown in FIG. 2 and FIG. 3 are both arranged in an array pattern of rows and columns. However, this is for illustration only, and the present disclosure is not limited thereto. In some embodiments, it is also feasible to implement the structured light signal LS / reflected signal LR shown in FIG. 1 with a structured light signal / reflected signal having a different light spot distribution (for example, a plurality of light spots arranged in a polygon and separated from one another).
First, please refer to FIG. 2 together with FIG. 1. FIG. 2 is a schematic diagram of an embodiment of a plurality of light spots formed by the structured light signal LS shown in FIG. 1 projected on the cross section AB. In this embodiment, the optical microstructure 114 can perform conical diffraction on the optical signal LI to generate the structured light signal LS, so that the structured light signal LS projected on the cross section AB forms a plurality of light spots S1 arranged in M rows and N columns, where M and N are both positive integers greater than 1. That is, the energy of the optical signal LI (or the energy of the structured light signal LS) can be concentrated in the plurality of light spots S1.
It should be noted that the distance between two adjacent light spots shown in FIG. 2 is only illustrative and is not a limitation of the present disclosure. In some embodiments, the distance between two adjacent light spots in the row direction and/or the distance between two adjacent light spots in the column direction can be adjusted by changing the design of the optical microstructure 114.
Please refer to FIG. 3 together with FIG. 1. FIG. 3 is a schematic diagram of an embodiment of a plurality of reflected light spots formed by the reflected signal LR shown in FIG. 1 incident on the pixel array 122. In this embodiment, the reflected signal LR can be generated by the target object 102 reflecting the structured light signal LS shown in FIG. 2, and forms a plurality of reflected light spots S2 arranged in M rows and N columns on the pixel array 122. The pixel array 122 can have a plurality of pixels arranged in P rows and Q columns, where P is a positive integer greater than M, and Q is a positive integer greater than N. For example (but the present disclosure is not limited thereto), at least one pixel may be disposed between two adjacent reflected light spots among the plurality of reflected light spots S2. As another example (but the present disclosure is not limited thereto), the pixel array 122 may have a million-level pixel count (such as 1280×960), and the light emitting unit 110 may adopt a structured light signal LS having 320×240 light spots, so that the reflected signal LR forms 320×240 reflected light spots S2 on the pixel array 122.
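From the example figures in this paragraph, the spot pitch in pixels follows directly (a sketch of the arithmetic only; the variable names are ours):

```python
# With the example figures (1280x960 pixels, 320x240 reflected spots),
# each reflected spot owns a small cell of pixels:
pixels_x, pixels_y = 1280, 960
spots_x, spots_y = 320, 240
pitch_x = pixels_x // spots_x  # pixels per spot period, row direction
pitch_y = pixels_y // spots_y  # pixels per spot period, column direction
print(pitch_x, pitch_y)  # 4 4
```

A 4×4-pixel cell per spot is consistent with the statement that at least one pixel lies between two adjacent reflected light spots.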
To enable each reflected light spot to illuminate a sufficient area on the pixel array 122, the pixel array 122 can adopt a small pixel pitch. In addition, the pixel array of the present disclosure can adopt an asymmetrical pixel arrangement, which simultaneously satisfies the design requirements of small pixel pitch and high fill factor and ensures that each reflected light spot can illuminate the photosensitive region of at least one pixel, where the fill factor refers to the proportion of the photosensitive area of a pixel to the entire pixel area.
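The fill-factor definition above is plain area arithmetic; a sketch with invented dimensions (the 2 µm × 4 µm pixel and 2 µm × 1 µm readout region below are hypothetical, chosen only to match the L = 2W shape discussed later):

```python
def fill_factor(pixel_w, pixel_l, readout_w, readout_l):
    """Photosensitive fraction of a rectangular pixel whose only
    non-photosensitive part is the readout-circuit region."""
    pixel_area = pixel_w * pixel_l
    readout_area = readout_w * readout_l
    return (pixel_area - readout_area) / pixel_area

# Hypothetical 2 um x 4 um pixel (L = 2W) with a 2 um x 1 um readout region:
print(fill_factor(2.0, 4.0, 2.0, 1.0))  # 0.75
```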
For example, each pixel of the pixel array 122 may include a photodetector and a reading circuit. Through the asymmetrical pixel arrangement, the reading circuit of each pixel can be immediately adjacent to photosensitive regions (photodetectors) in both the row direction and the column direction, so that each pixel can have a high fill factor. Taking two pixels PXA and PXB adjacent in a first predetermined direction D1 (for example, the row direction) as an example, the pixel PXA includes a photodetector PDA and a reading circuit RDA, and the pixel PXB includes a photodetector PDB and a reading circuit RDB. The photodetector PDA is used to detect the reflected signal LR to generate a photoresponse signal DRA, and the photodetector PDB is used to detect the reflected signal LR to generate a photoresponse signal DRB. In addition, the reading circuit RDA is coupled to the photodetector PDA for generating a pixel output POA according to the photoresponse signal DRA, and the reading circuit RDB is coupled to the photodetector PDB for generating a pixel output POB according to the photoresponse signal DRB.
The reading circuit RDA of the pixel PXA is adjacent to the photodetector PDB of the pixel PXB along the first predetermined direction D1, and is adjacent to the photodetector PDA of the pixel PXA along a second predetermined direction D2 (for example, the column direction) perpendicular to the first predetermined direction D1. For example, in this embodiment, the photodetector PDA can be disposed on one side (for example, the upper side) of the reading circuit RDA in the second predetermined direction D2, while the photodetector PDB can be disposed on the opposite side (for example, the lower side) of the reading circuit RDB in the second predetermined direction D2. That is, the arrangement of the photodetector PDA and the reading circuit RDA in the pixel PXA and the arrangement of the photodetector PDB and the reading circuit RDB in the pixel PXB can be inverted with respect to each other. With respect to the pixel boundary BD, the pixel PXB can be regarded as a vertically flipped mirror image of the pixel PXA. In this way, the reading circuits RDA/RDB can be immediately adjacent to photosensitive regions in both the row direction and the column direction.
In this embodiment, the length of each pixel in the second predetermined direction D2 can be greater than its length in the first predetermined direction D1, so that the photosensitive area of each pixel (corresponding to the photosensitive region of the photodetector) can be increased, thereby improving the fill factor and sensitivity of the pixel. For example (but the present disclosure is not limited thereto), the length L of each pixel can be twice its width W. Compared with another pixel array whose pixel length and width are both W (and which has the same array area as the pixel array 122), the pixel array 122 needs only two reading circuits in a square region with a side length equal to L, whereas the other pixel array needs four reading circuits (located in four pixels respectively). Therefore, when the same signal reading rate is used to read the pixel array 122 and the other pixel array, the imaging solution of the present disclosure can achieve a higher frame rate.
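The frame-rate comparison in this paragraph is a count of reading circuits per L × L region; sketched numerically under the stated L = 2W assumption (the equal per-circuit reading rate is the paragraph's own premise, not an added claim):

```python
# Reading circuits inside an L x L square region, with L = 2W:
rect_pixel_circuits = 2    # two W x 2W pixels, one reading circuit each
square_pixel_circuits = 4  # four W x W pixels, one reading circuit each

# At an equal per-circuit reading rate, the time to read the region
# scales with the circuit count, so the achievable frame rate doubles:
speedup = square_pixel_circuits / rect_pixel_circuits
print(speedup)  # 2.0
```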
It should be noted that although the first predetermined direction D1 and the second predetermined direction D2 above are the row direction and the column direction respectively, this is not intended to limit the present disclosure. In some embodiments, the first predetermined direction D1 and the second predetermined direction D2 can be the column direction and the row direction respectively.
In addition, when the pixel array 122 is applied to a time-of-flight imaging system, the reading circuit of each pixel may include two photoelectric reading units controlled by control signals of different phases, so that a subsequent processing circuit can obtain the phase offset information between the signal at the transmitting end and the signal at the receiving end, such as the signal information required for continuous wave modulation and/or pulse modulation.
In this embodiment, the reading circuit RDA may include a first photoelectric reading unit 334 and a second photoelectric reading unit 336. The first photoelectric reading unit 334 is used for selectively coupling to the photodetector PDA according to a first control signal TX1 to generate a first part of the pixel output POA of the pixel PXA. The second photoelectric reading unit 336 is used for selectively coupling to the photodetector PDA according to a second control signal TX2 to generate a second part of the pixel output POA of the pixel PXA, where the second control signal TX2 and the first control signal TX1 can have different phases. Similarly, the reading circuit RDB may include a first photoelectric reading unit 344 and a second photoelectric reading unit 346. The first photoelectric reading unit 344 is used for selectively coupling to the photodetector PDB according to the first control signal TX1 to generate a first part of the pixel output POB of the pixel PXB. The second photoelectric reading unit 346 is used for selectively coupling to the photodetector PDB according to the second control signal TX2 to generate a second part of the pixel output POB of the pixel PXB.
FIG. 4 and FIG. 5 respectively show an implementation of the reading circuit structure of a pixel when continuous wave modulation and pulse modulation are used to measure the depth information of a target object. However, this is not intended to limit the present disclosure. First, please refer to FIG. 4, which is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array 122 shown in FIG. 3 is applied to continuous wave modulation. The pixel 430 can be used to implement the pixel PXA and the pixel PXB shown in FIG. 3. In this embodiment, the pixel 430 includes (but is not limited to) a photodetector 432, a first photoelectric reading unit 434, and a second photoelectric reading unit 436, where the photodetector PDA/PDB shown in FIG. 3 can be implemented by the photodetector 432, the first photoelectric reading unit 334/344 shown in FIG. 3 can be implemented by the first photoelectric reading unit 434, and/or the second photoelectric reading unit 336/346 shown in FIG. 3 can be implemented by the second photoelectric reading unit 436.
The photodetector 432 (for example, a photodiode) can detect the reflected signal LR to correspondingly generate a photoresponse signal PR, where the photoresponse signal PR can be output through at least one of the first photoelectric reading unit 434 and the second photoelectric reading unit 436. The photodetector 432 can convert the received optical signal into a photocurrent signal of corresponding magnitude; that is, the photoresponse signal PR can be a current signal representing the magnitude of the optical signal, and the photoelectric reading units are used to read this photocurrent signal.
The first photoelectric reading unit 434 may include (but is not limited to) a first reset transistor MR1, a first transfer transistor MT1, a first output transistor MF1, and a first read transistor MW1. The first reset transistor MR1 resets a first floating diffusion node FD1 according to a reset signal RST. The first transfer transistor MT1 is coupled to the photodetector 432 and is turned on according to the first control signal TX1; that is, the first transfer transistor MT1 is controlled by the first control signal TX1 to connect to and disconnect from the photodetector 432. The first output transistor MF1 is used to amplify the voltage signal of the first floating diffusion node FD1 to generate a first amplified signal VS1, where the first amplified signal VS1 can serve as a first part of a pixel output of the pixel 430. In addition, the first read transistor MW1 is used to selectively output the first amplified signal VS1 according to a selection signal SEL.
The second photoelectric reading unit 436 is used for selectively coupling to the photodetector 432 according to a second control signal TX2 to generate a second part of the pixel output of the pixel 430, where the second control signal TX2 and the first control signal TX1 can have different phases. In this embodiment, the second photoelectric reading unit 436 may include (but is not limited to) a second reset transistor MR2, a second transfer transistor MT2, a second output transistor MF2, and a second read transistor MW2. The second reset transistor MR2 resets a second floating diffusion node FD2 according to the reset signal RST. The second transfer transistor MT2 is coupled to the photodetector 432 and is turned on according to the second control signal TX2; that is, the second transfer transistor MT2 is controlled by the second control signal TX2 to connect to and disconnect from the photodetector 432. The second output transistor MF2 is used to amplify the voltage signal of the second floating diffusion node FD2 to generate a second amplified signal VS2, where the second amplified signal VS2 can serve as a second part of the pixel output. In addition, the second read transistor MW2 is used to selectively output the second amplified signal VS2 according to the selection signal SEL.
Since those skilled in the art should understand the operational details of how the first photoelectric reading unit 434 and the second photoelectric reading unit 436 generate the signal information required for continuous wave modulation according to the first control signal TX1 and the second control signal TX2 respectively, further description is omitted here.
FIG. 5 is a schematic diagram of an embodiment of the circuit structure of each pixel when the pixel array 122 shown in FIG. 3 is applied to pulse modulation. The pixel 530 can be used to implement the pixel PXA and the pixel PXB shown in FIG. 3. The difference between the circuit structure of the pixel 530 and that of the pixel 430 shown in FIG. 4 is that the pixel 530 further includes an anti-blooming transistor MB. The anti-blooming transistor MB is controlled by a third control signal TX3 to extract the photoelectrons generated by the photodetector 432 due to receiving background light (for example, by conducting them to the supply voltage), so as not to affect the normal operation of the circuit. For example, the anti-blooming transistor MB can be turned on according to the third control signal TX3 to extract the photoelectrons generated by the photodetector 432 due to receiving background light, thereby enhancing the ability of the pixel 530 to resist background light. Since those skilled in the art should understand the operational details of how the first photoelectric reading unit 434, the second photoelectric reading unit 436, and the anti-blooming transistor MB generate the signal information required for pulse modulation according to the first control signal TX1, the second control signal TX2, and the third control signal TX3 respectively, further description is omitted here.
Please refer to FIG. 3 again. Each pixel in the pixel array 122 can be implemented with the same or a similar structure as the pixel PXA and the pixel PXB; therefore, even if the center of a single reflected light spot S2 deviates from the photosensitive region of a single pixel, a sufficient photoresponse signal can still be generated. Please refer to FIG. 6 together with FIG. 3. FIG. 6 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels. In this embodiment, for brevity, the photodetector of each pixel (i.e., one of the pixels PX1-PX8) is labeled PD, and the reading circuit of each pixel is labeled RD, where each reading circuit can include two photoelectric reading units RU1 and RU2, which can be implemented by the first photoelectric reading unit 334/344 and the second photoelectric reading unit 336/346 shown in FIG. 3 respectively.
Taking the pixel PX3 as an example, the reading circuit RD of the pixel PX3 is adjacent not only to the respective photodetectors PD of the pixels PX3 and PX4, but also to the respective photodetectors PD of the pixels PX2 and PX7. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive region of a single pixel, by staggering, in the second predetermined direction D2, the respective photodetectors PD of a group of pixels disposed along the first predetermined direction D1 (for example, a row of pixels), the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
FIG. 7 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels. Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 7. The difference between the pixel arrangement shown in FIG. 7 and that shown in FIG. 6 is that the photodetector PD of each pixel shown in FIG. 7 is disposed on the same side (for example, the upper side) of the reading circuit RD of the pixel in the second predetermined direction D2. For example, the pixels PX2/PX4/PX6/PX8 can be implemented by pixels that are upside-down versions of the pixel PXB shown in FIG. 3. In this embodiment, by staggering, in the second predetermined direction D2, a group of pixels disposed along the first predetermined direction D1 (for example, a row of pixels), the reading circuit of a single pixel can still be surrounded by four photodetectors. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive region of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
FIG. 8 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels. Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 8. The difference between the pixel arrangement shown in FIG. 8 and that shown in FIG. 6 is that the photosensitive region of the photodetector PD of each pixel shown in FIG. 8 may be non-quadrilateral. In this embodiment, the photodetector PD of each pixel may have an L-shaped photosensitive region, and the corresponding reading circuit RD may be disposed at the reentrant corner of the L-shaped photosensitive region, so that the reading circuit RD is surrounded by the photosensitive region. Taking the pixel PX3 as an example, the reading circuit RD of the pixel PX3 is adjacent to the photodetector PD of the pixel PX4 along the first predetermined direction D1, and is adjacent to the photodetector PD of the pixel PX3 along the second predetermined direction D2. In addition, the reading circuit RD of the pixel PX3 is also adjacent to the photodetector PD of the pixel PX3 along a third predetermined direction D3 opposite to the first predetermined direction D1. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive region of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
Although the first photoelectric reading unit and the second photoelectric reading unit of each pixel above are disposed on the same side of the photodetector of the pixel, the present disclosure is not limited thereto. Please refer to FIG. 9, which is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels. Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 9. Compared with the pixel arrangements shown in FIG. 6 to FIG. 8, the first photoelectric reading unit RU1 and the second photoelectric reading unit RU2 of each pixel shown in FIG. 9 can be respectively disposed on one side (for example, the upper side) of the photodetector PD of the pixel and on the opposite side (for example, the lower side). By staggering, in the second predetermined direction D2, a group of pixels disposed along the first predetermined direction D1 (for example, a row of pixels), the reading circuits located at the pixel boundary (such as the second photoelectric reading unit RU2 of the pixel PX3 and the first photoelectric reading unit RU1 of the pixel PX7) can still be surrounded by a sufficient photosensitive region. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive region of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
FIG. 10 is a schematic diagram of an embodiment in which the reflected light spot S2 shown in FIG. 3 is located at the boundary of two adjacent pixels. Each pixel of the pixel array 122 shown in FIG. 3 can be implemented by the pixel structure shown in FIG. 10. The difference between the pixel arrangement shown in FIG. 10 and that shown in FIG. 6 is that the photosensitive region of the photodetector PD of each pixel shown in FIG. 10 may be non-quadrilateral, and the first photoelectric reading unit RU1 and the second photoelectric reading unit RU2 of each pixel are disposed on different sides of the photodetector PD. In this embodiment, the photodetector PD of each pixel may have a T-shaped photosensitive region; the corresponding first photoelectric reading unit RU1 may be disposed at one reentrant corner of the T-shaped photosensitive region, and the corresponding second photoelectric reading unit RU2 may be disposed at another reentrant corner of the T-shaped photosensitive region, so that the photoelectric reading units located at the pixel boundary (such as the second photoelectric reading unit RU2 of the pixel PX3 and the first photoelectric reading unit RU1 of the pixel PX7) can still be surrounded by a sufficient photosensitive region. In this way, even if the center of the reflected light spot S2 deviates from the photosensitive region of a single pixel, the pixel array 122 can still provide a sufficient photosensitive area to detect the reflected light spot S2.
It should be noted that the above implementations of pixel arrangement are for illustrative purposes only and are not intended to limit the present disclosure. As long as the reading circuit of a pixel can be immediately adjacent to the photosensitive region of a photodetector in both the row direction and the column direction, related design variations all follow the spirit of the present disclosure and fall within its scope.
By transmitting a structured light signal with a predetermined pattern from the transmitting end, the imaging solution disclosed in the present application enables the reflected signal returning from the target object to the receiving end to form a plurality of reflected light spots on the pixel array, increasing the signal strength of the optical signal incident on the pixels and thereby reducing the influence of background noise. In addition, by adopting an asymmetrical pixel arrangement structure, for example using rectangular pixels, staggering the photosensitive regions of a plurality of consecutive pixels along a predetermined direction, and/or using non-quadrilateral photosensitive regions, the imaging solution disclosed in the present application can satisfy the requirements of small pixel pitch and high fill factor at the same time, and achieves high sensitivity, high resolution, and high frame rate.
The above are only embodiments of the present disclosure and are not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (16)

  1. A pixel array of an imaging system, comprising:
    a plurality of pixels arranged in rows and columns, for sensing a reflected signal that is incident on the pixel array and forms a plurality of reflected light spots separated from one another on the pixel array, wherein each of the plurality of pixels comprises:
    a photodetector, for detecting the reflected signal and correspondingly outputting a photoresponse signal;
    and
    a reading circuit, coupled to the photodetector, for generating a pixel output according to the photoresponse signal;
    wherein the plurality of pixels comprise a first pixel and a second pixel adjacent to the first pixel in a first predetermined direction; the reading circuit of the first pixel is adjacent to the photodetector of the second pixel along the first predetermined direction, and is adjacent to the photodetector of the first pixel along a second predetermined direction perpendicular to the first predetermined direction.
  2. The pixel array of claim 1, wherein the photodetector of the first pixel is disposed on one side of the reading circuit of the first pixel in the second predetermined direction, and the photodetector of the second pixel is disposed on the opposite side of the reading circuit of the second pixel in the second predetermined direction.
  3. The pixel array of claim 1, wherein a group of pixels disposed along the first predetermined direction among the plurality of pixels are staggered in the second predetermined direction.
  4. The pixel array of claim 1, wherein the first predetermined direction is a row direction, and the second predetermined direction is a column direction.
  5. The pixel array of claim 1, wherein the reading circuit of the first pixel is further adjacent to the photodetector of the first pixel along a third predetermined direction opposite to the first predetermined direction.
  6. The pixel array of claim 1, wherein at least one pixel is disposed between two adjacent reflected light spots among the plurality of reflected light spots.
  7. The pixel array of claim 1, wherein a length of each pixel in the second predetermined direction is greater than a length of the pixel in the first predetermined direction.
  8. The pixel array of claim 1, wherein the reading circuit of the first pixel comprises:
    a first photoelectric reading unit, for selectively coupling to the photodetector of the first pixel according to a first control signal, to generate a first part of the pixel output of the first pixel; and
    a second photoelectric reading unit, for selectively coupling to the photodetector of the first pixel according to a second control signal, to generate a second part of the pixel output of the first pixel, wherein the second control signal and the first control signal have different phases.
  9. The pixel array of claim 8, wherein the first photoelectric reading unit and the second photoelectric reading unit are disposed on the same side of the photodetector of the first pixel.
  10. The pixel array of claim 8, wherein the first photoelectric reading unit and the second photoelectric reading unit are respectively disposed on one side of the photodetector of the first pixel and on the opposite side of the photodetector.
  11. An image sensor of an imaging system, comprising:
    at least one pixel array according to any one of claims 1 to 10, for sensing a reflected signal that is incident on the pixel array and forms a plurality of reflected light spots separated from one another on the pixel array, wherein the reflected signal is generated by a target object reflecting a structured light signal sent from the imaging system; and
    a processing circuit, coupled to the pixel array, for detecting a flight time of the structured light signal according to a sensing result of the pixel array, and obtaining depth information of the target object according to the flight time.
  12. An imaging system, comprising:
    a light emitting unit, for sending a structured light signal; and
    at least one pixel array according to any one of claims 1 to 10, for sensing a reflected signal generated by the structured light signal being reflected by a target object.
  13. The imaging system of claim 12, further comprising:
    a processing circuit, coupled to the pixel array, for detecting a flight time of the structured light signal according to a sensing result of the pixel array, and obtaining depth information of the target object according to the flight time.
  14. The imaging system of claim 12, wherein the structured light signal projected on a cross section between the light emitting unit and the target object forms a plurality of light spots separated from one another.
  15. The imaging system of claim 12, wherein the reflected signal is incident on the pixel array and forms a plurality of reflected light spots separated from one another on the pixel array, each reflected light spot illuminating at least one pixel.
  16. The imaging system of claim 12, wherein the light emitting unit comprises:
    a light source, for outputting an optical signal; and
    an optical microstructure, for changing a traveling path of the optical signal to generate the structured light signal.
PCT/CN2019/076295 2019-02-27 2019-02-27 成像系统及成像系统的像素阵列和图像传感器 WO2020172812A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2019/076295 WO2020172812A1 (zh) 2019-02-27 2019-02-27 成像系统及成像系统的像素阵列和图像传感器
CN201980000288.3A CN110024374B (zh) 2019-02-27 2019-02-27 成像系统及成像系统的像素阵列和图像传感器
EP19916831.1A EP3820143B1 (en) 2019-02-27 2019-02-27 Imaging system, and pixel array and image sensor thereof
US17/033,767 US11442170B2 (en) 2019-02-27 2020-09-26 Imaging system, pixel array of imaging system and image sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/076295 WO2020172812A1 (zh) 2019-02-27 2019-02-27 成像系统及成像系统的像素阵列和图像传感器

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/033,767 Continuation US11442170B2 (en) 2019-02-27 2020-09-26 Imaging system, pixel array of imaging system and image sensor

Publications (1)

Publication Number Publication Date
WO2020172812A1 true WO2020172812A1 (zh) 2020-09-03

Family

ID=67194571

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076295 WO2020172812A1 (zh) 2019-02-27 2019-02-27 成像系统及成像系统的像素阵列和图像传感器

Country Status (4)

Country Link
US (1) US11442170B2 (zh)
EP (1) EP3820143B1 (zh)
CN (1) CN110024374B (zh)
WO (1) WO2020172812A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11442170B2 (en) 2019-02-27 2022-09-13 Shenzhen GOODIX Technology Co., Ltd. Imaging system, pixel array of imaging system and image sensor

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN113037989B (zh) * 2019-12-09 2022-11-18 华为技术有限公司 一种图像传感器、相机模组及控制方法
US11853142B2 (en) * 2020-05-28 2023-12-26 Apple Inc. Sensor-based user detection for electronic devices
CN113038041B (zh) * 2020-06-12 2022-09-13 深圳市汇顶科技股份有限公司 成像系统以及相关电子装置及成像系统的操作方法
WO2021248427A1 (zh) 2020-06-12 2021-12-16 深圳市汇顶科技股份有限公司 深度传感装置以及相关电子装置及深度传感装置的操作方法
CN112859046B (zh) * 2021-01-19 2024-01-12 Oppo广东移动通信有限公司 光接收模组、飞行时间装置及电子设备
JP2024023058A (ja) * 2022-08-08 2024-02-21 ソニーセミコンダクタソリューションズ株式会社 光検出装置および光検出システム

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106576146A (zh) * 2014-08-29 2017-04-19 松下知识产权经营株式会社 摄像装置
CN107210314A (zh) * 2015-04-14 2017-09-26 索尼公司 固态成像装置、成像系统和测距方法
JP2017229001A (ja) * 2016-06-24 2017-12-28 株式会社ニコン 撮像装置および測距装置
CN108370424A (zh) * 2015-12-16 2018-08-03 索尼公司 成像元件、驱动方法和电子设备

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
EP2133918B1 (en) * 2008-06-09 2015-01-28 Sony Corporation Solid-state imaging device, drive method thereof and electronic apparatus
CN102157529A (zh) * 2010-02-12 2011-08-17 联咏科技股份有限公司 影像传感器
CN103826538B (zh) * 2011-09-27 2016-06-22 富士胶片株式会社 放射线成像系统及其操作方法,以及放射线图像检测装置
KR20130038035A (ko) * 2011-10-07 2013-04-17 삼성전자주식회사 촬상소자
US9541386B2 (en) * 2012-03-21 2017-01-10 Semiconductor Energy Laboratory Co., Ltd. Distance measurement device and distance measurement system
EP2738812B8 (en) * 2012-11-29 2018-07-18 ams Sensors Belgium BVBA A pixel array
KR102003496B1 (ko) * 2013-03-06 2019-10-01 삼성전자주식회사 이미지 센서 및 이미지 픽업 장치
CN103561218B (zh) * 2013-10-30 2019-07-19 上海集成电路研发中心有限公司 一种高光灵敏度cmos图像传感器像素结构
KR102163728B1 (ko) * 2013-12-05 2020-10-08 삼성전자주식회사 거리영상 측정용 카메라 및 이를 이용한 거리영상 측정방법
CN104800054B (zh) * 2014-01-27 2017-01-25 光宝电子(广州)有限公司 距离侦测与指示方法及具有此侦测与指示功能的行动装置
JP6483725B2 (ja) * 2014-04-07 2019-03-13 サムスン エレクトロニクス カンパニー リミテッド 光学的イベントを感知する方法とそのための光学的イベントセンサ、及び距離測定モバイル装置
JPWO2017056347A1 (ja) * 2015-09-29 2018-08-02 パナソニック・タワージャズセミコンダクター株式会社 固体撮像装置
CN106897688B (zh) * 2017-02-21 2020-12-08 杭州易现先进科技有限公司 交互式投影装置、控制交互式投影的方法和可读存储介质
CN208548353U (zh) * 2018-08-22 2019-02-26 苏州多感科技有限公司 一种图像传感器和电子设备
WO2020057125A1 (en) 2018-09-18 2020-03-26 Shenzhen GOODIX Technology Co., Ltd. Depth information construction system, associated electronic device, and method for constructing depth information
CN110024374B (zh) 2019-02-27 2021-08-10 深圳市汇顶科技股份有限公司 成像系统及成像系统的像素阵列和图像传感器

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN106576146A (zh) * 2014-08-29 2017-04-19 松下知识产权经营株式会社 摄像装置
CN107210314A (zh) * 2015-04-14 2017-09-26 索尼公司 固态成像装置、成像系统和测距方法
CN108370424A (zh) * 2015-12-16 2018-08-03 索尼公司 成像元件、驱动方法和电子设备
JP2017229001A (ja) * 2016-06-24 2017-12-28 株式会社ニコン 撮像装置および測距装置

Non-Patent Citations (1)

Title
See also references of EP3820143A4 *

Cited By (1)

Publication number Priority date Publication date Assignee Title
US11442170B2 (en) 2019-02-27 2022-09-13 Shenzhen GOODIX Technology Co., Ltd. Imaging system, pixel array of imaging system and image sensor

Also Published As

Publication number Publication date
US20210011169A1 (en) 2021-01-14
EP3820143B1 (en) 2024-08-21
CN110024374B (zh) 2021-08-10
EP3820143A1 (en) 2021-05-12
US11442170B2 (en) 2022-09-13
CN110024374A (zh) 2019-07-16
EP3820143A4 (en) 2021-05-12

Similar Documents

Publication Publication Date Title
WO2020172812A1 (zh) 成像系统及成像系统的像素阵列和图像传感器
TWI773149B (zh) 具有電子掃描發射器陣列及同步感測器陣列之光測距裝置
US20200103507A1 (en) Lidar 2d receiver array architecture
WO2020077514A1 (zh) 一种激光雷达系统
WO2021072802A1 (zh) 一种距离测量系统及方法
TWI524762B (zh) 共用飛行時間像素
US20010046317A1 (en) Three-dimensional measurement device and three-dimensional measurement method
WO2021056668A1 (zh) 一种动态距离测量系统及方法
US20200314376A1 (en) Imaging device and image sensor
KR102324449B1 (ko) 광 검출기 어레이 및 아날로그 판독 회로가 개재된 lidar 수신기용 멀티 검출기
TW200427078A (en) Pixel sensor circuit device and method thereof
EP3602110B1 (en) Time of flight distance measurement system and method
CN110780312B (zh) 一种可调距离测量系统及方法
WO2021056667A1 (zh) 一种发射器及距离测量系统
US11686819B2 (en) Dynamic beam splitter for direct time of flight distance measurements
US11960004B2 (en) Light detector and distance measuring device
KR102578977B1 (ko) 라이다 시스템
JPS6111637A (ja) 液体センサ
CN116670535A (zh) 距离测量系统
KR20130098037A (ko) 리플렉션 메탈 층을 포함하는 백사이드 일루미네이션 이미지 센서 및 그의 광전하 생성 방법
US20240192327A1 (en) Device for distance measurement
WO2023123984A1 (zh) 光收发模组及激光雷达
CN215678765U (zh) 混合型tof传感器系统
JPH01214059A (ja) Ccd受光デバイス
CN113795768A (zh) 飞行时间装置和方法

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019916831

Country of ref document: EP

Effective date: 20200929

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916831

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE