CN117676363A - Image sensor for distance measurement and camera module including the same


Info

Publication number
CN117676363A
Authority
CN
China
Prior art keywords
signal, image sensor, generate, pixel, data
Legal status
Pending
Application number
CN202311140868.7A
Other languages
Chinese (zh)
Inventor
朴智宪
申绳澈
奇明吾
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority claimed from KR1020220137652A (KR20240035282A)
Application filed by Samsung Electronics Co Ltd
Publication of CN117676363A

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

An image sensor for distance measurement and a camera module including the image sensor are provided. The image sensor for distance measurement includes: a pixel array including a plurality of unit pixels; a readout circuit configured to read out pixel signals from the pixel array in units of subframes and generate raw data; a preprocessing circuit configured to preprocess the raw data to generate phase data; a memory configured to store phase data; a calibration circuit configured to generate correction data by performing a calibration operation on the phase data; an image signal processor configured to generate depth information using the correction data; and an output interface circuit configured to output depth data including depth information in units of depth frames.

Description

Image sensor for distance measurement and camera module including the same
The present application is based on and claims priority to Korean Patent Application No. 10-2022-0137652, filed with the Korean Intellectual Property Office on October 24, 2022, and Korean Patent Application No. 10-2022-0114470, filed with the Korean Intellectual Property Office on September 8, 2022, the disclosures of which are incorporated herein by reference in their entirety.
Technical Field
The present inventive concept relates to an image sensor, and more particularly, to an image sensor for distance measurement and a camera module including the same.
Background
A time of flight (ToF) image sensor may generate a 3D image of an object by measuring information about the distance to the object. A ToF image sensor can obtain distance information by measuring the time of flight between the emission of light toward the object and the return of the light reflected from the object. The distance information includes noise caused by various factors, and the noise therefore needs to be minimized to obtain accurate distance information.
Disclosure of Invention
The inventive concept provides an image sensor configured to output depth data including depth information for distance measurement, and a camera module including the image sensor.
According to some aspects of the inventive concept, there is provided an image sensor for distance measurement, the image sensor including: a pixel array including a plurality of unit pixels; a readout circuit configured to read out pixel signals from the pixel array in units of subframes and generate raw data; a preprocessing circuit configured to preprocess the raw data to generate phase data; a memory configured to store phase data; a calibration circuit configured to generate correction data by performing a calibration operation on the phase data; an image signal processor configured to generate depth information using the correction data; and an output interface circuit configured to output depth data including depth information in units of depth frames.
According to some aspects of the inventive concept, there is provided a camera module including: a light source unit configured to transmit an optical transmission signal to a subject; and an image sensor configured to receive a light reception signal reflected from the object, wherein the image sensor includes: a pixel array including a plurality of unit pixels; a control circuit configured to transmit a modulation signal to the light source unit and a plurality of demodulation signals to the pixel array, the modulation signal and the plurality of demodulation signals having the same modulation frequency; a readout circuit configured to read out pixel signals from the pixel array in units of subframes and generate raw data; a preprocessing circuit configured to preprocess the raw data to generate phase data; a frame memory configured to store phase data; an image signal processor configured to generate depth information based on the phase data; and an output interface circuit configured to output depth data including depth information in units of depth frames.
According to some aspects of the inventive concept, there is provided a camera module including: a light source unit configured to transmit an optical transmission signal to a subject; and an image sensor configured to receive a light reception signal reflected from the object, wherein the image sensor includes: a pixel array including a plurality of unit pixels; a control circuit configured to transmit a modulation signal to the light source unit and a plurality of demodulation signals to the pixel array, the modulation signal and the plurality of demodulation signals having the same modulation frequency; a readout circuit configured to read out pixel signals from the pixel array and generate raw data; a preprocessing circuit configured to preprocess the raw data to generate phase data; a memory configured to store phase data; a calibration circuit configured to generate correction data by performing a calibration operation on the phase data based on the calibration information; an image signal processor configured to generate depth information using the correction data; and an output interface circuit configured to output depth data including the depth information.
Drawings
Example embodiments will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a configuration diagram illustrating a system according to some example embodiments;
fig. 2 is a configuration diagram illustrating a camera module according to some example embodiments;
fig. 3A is a diagram illustrating an example structure of the unit pixel illustrated in fig. 2 according to some example embodiments;
fig. 3B is a diagram illustrating an example structure of the unit pixel illustrated in fig. 2 according to some example embodiments;
fig. 4A and 4B are block diagrams showing a schematic configuration of a system according to some example embodiments;
fig. 5 is a diagram showing calibration information stored in a memory;
fig. 6 is a diagram illustrating an operation of an image sensor according to the inventive concept, wherein a timing diagram illustrates frequencies of first to fourth photo gate signals;
fig. 7 is a diagram illustrating a shuffling operation of an image sensor according to the inventive concept;
fig. 8A is a timing chart illustrating an operation of an image sensor according to a comparative example, and fig. 8B is a diagram illustrating an operation of the image sensor according to the inventive concept;
fig. 9A to 9C are diagrams illustrating an operation of an image sensor according to the inventive concept; and
Fig. 10 and 11 are schematic diagrams illustrating an image sensor according to some example embodiments.
Detailed Description
Hereinafter, some example embodiments will be described with reference to the accompanying drawings.
Fig. 1 is a block diagram illustrating a schematic configuration of a system 10 according to some example embodiments.
Referring to fig. 1, the system 10 may include a processor 30 and a camera module 100. In some example embodiments, the camera module 100 may transmit image data including depth data. The system 10 may also include a memory module 20, the memory module 20 being connected to the processor 30 and configured to store information (such as image data including depth data received from the camera module 100).
In some example embodiments, the system 10 may be integrated into a single semiconductor chip, or the camera module 100, the processor 30, and the memory module 20 may be implemented as separate semiconductor chips, respectively. Memory module 20 may include one or more memory chips. In some example embodiments, the processor 30 may include a plurality of processing chips.
According to some example embodiments, the system 10 may be an electronic device to which a distance measurement image sensor may be applied. The system 10 may be of a portable type or of a fixed type. Examples of portable types include mobile devices, cellular telephones, smartphones, User Equipment (UE), tablet computers, digital cameras, laptop or desktop computers, electronic smart watches, machine-to-machine (M2M) communication devices, Virtual Reality (VR) devices or modules, robots, and the like. Examples of fixed types include game consoles in video game centers, interactive video terminals, automobiles, machine vision systems, industrial robots, VR devices, driver-side cameras, and the like.
The camera module 100 may include a light source unit 12 and an image sensor 14. The light source unit 12 may transmit the light transmission signal TX to the object 200. For example, the optical transmission signal TX may be a sine wave signal. The optical transmission signal TX transmitted from the light source unit 12 to the object 200 may be reflected by the object 200, and then the image sensor 14 may receive the reflected optical transmission signal TX as the optical reception signal RX. The image sensor 14 may obtain depth information, which is information about a distance to the object 200, based on time of flight (ToF). The structures of the light source unit 12 and the image sensor 14 are described below with reference to fig. 3A and 3B.
Processor 30 may include a general-purpose processor, such as a Central Processing Unit (CPU). In some example embodiments, the processor 30 may include a microcontroller, a Digital Signal Processor (DSP), a Graphics Processor (GPU), an Application Specific Integrated Circuit (ASIC) processor, or the like, in addition to the CPU. Further, the processor 30 may include two or more CPUs configured to operate in a distributed processing environment. In some example embodiments, the processor 30 may be a system on a chip (SoC) with CPU functionality and other additional functionality, or may be an Application Processor (AP) of a smart phone, tablet computer, smart watch, or the like.
The processor 30 may control the operation of the camera module 100. In some example embodiments, the system 10 may include a plurality of camera modules. In this case, the processor 30 may receive depth data from the image sensor 14 of the camera module 100, and may combine the depth data with image data received from the camera module (e.g., the camera module 100 and/or an additional camera module not shown) to generate a 3D depth image. Processor 30 may display the 3D depth image on a display screen (not shown) of system 10.
Processor 30 may be programmed with software or firmware for various processing tasks. In some example embodiments, the processor 30 may include programmable hardware logic configured to perform some or all of the functions of the processor 30. For example, the memory module 20 may store program code, a look-up table, or intermediate calculation results so that the processor 30 may perform the corresponding functions.
Examples of the memory module 20 may include Dynamic Random Access Memory (DRAM) modules (such as Synchronous DRAM (SDRAM) modules), High Bandwidth Memory (HBM) modules, and DRAM-based 3D stack (3DS) memory modules (such as Hybrid Memory Cube (HMC) modules). Alternatively, the memory module 20 may be a semiconductor-based memory device such as a Solid State Drive (SSD), DRAM, Static Random Access Memory (SRAM), Phase-change Random Access Memory (PRAM), Resistive Random Access Memory (RRAM), Conductive Bridging RAM (CBRAM), Magnetic RAM (MRAM), or Spin-Transfer Torque MRAM (STT-MRAM).
Fig. 2 is a configuration diagram illustrating a camera module 100 according to some example embodiments.
Referring to fig. 1 and 2, the camera module 100 may include a light source unit 12 and an image sensor 14 configured to measure a distance. The camera module 100 may be used to acquire depth data DDATA including depth information DI about the object 200. In some example embodiments, the depth data DDATA may be used by the processor 30 as part of a 3D user interface to allow a user of the system 10 to interact with a 3D image of the object 200 or use a 3D image of the object 200 in a portion of a game or application running on the system 10.
The light source unit 12 may include a light source driver 210 and a light source 220. The light source unit 12 may further include a lens and a diffuser configured to diffuse light generated by the light source 220.
The light source 220 may transmit the optical transmission signal TX to the object 200. The light source 220 may include a Laser Diode (LD) or a Light Emitting Diode (LED) configured to emit infrared or visible light, a Near-Infrared (NIR) laser, a point light source, a white light source combined with a monochromator, or a combination of other laser sources. For example, the light source 220 may include a Vertical Cavity Surface Emitting Laser (VCSEL). In some example embodiments, the light source 220 may output an infrared optical transmission signal TX having a wavelength of about 800 nm to about 1000 nm.
The light source driver 210 may generate a driving signal for driving the light source 220. The light source driver 210 may drive the light source 220 in response to the modulation signal MOD received from the control circuit 120. The modulation signal MOD may have at least one specified modulation frequency. For example, the control circuit 120 may generate the modulation signal MOD having the first modulation frequency F1 (e.g., refer to fig. 6) in a specific subframe, and may generate the modulation signal MOD having the second modulation frequency F2 (e.g., refer to fig. 6) in another subframe.
The image sensor 14 may receive the light reception signal RX reflected from the object 200. Image sensor 14 may measure distance or depth based on ToF.
The image sensor 14 may include a pixel array 110, a control circuit 120, a readout circuit 130, a preprocessing circuit 140, a memory 150, a calibration circuit 160, an Image Signal Processor (ISP) 170, and an output interface circuit 180. The image sensor 14 may further include a lens, and the light reception signal RX may be provided to the pixel array 110 through the lens. Furthermore, the image sensor 14 may further include: a ramp signal generator configured to supply a ramp signal to the readout circuit 130; and an Ambient Light Detector (ALD) (not shown) configured to measure the amount of ambient light and determine whether to start a binning mode.
The pixel array 110 may include a plurality of unit pixels 111. The plurality of unit pixels 111 may operate based on the ToF method. The structure of each of the plurality of unit pixels 111 is described below with reference to fig. 3A and 3B.
The pixel array 110 may convert the light reception signal RX into a corresponding electrical signal (i.e., a plurality of pixel signals PS). The pixel array 110 may generate a plurality of pixel signals PS according to control signals received from the control circuit 120. For example, the pixel array 110 may generate a plurality of pixel signals PS according to a control signal having a first modulation frequency F1 in a first subframe, and may generate a plurality of pixel signals PS according to a control signal having a second modulation frequency F2 in a second subframe. The pixel array 110 may receive a plurality of demodulation signals DEMOD from the control circuit 120 as photogate signals for controlling the transfer transistors of the unit pixels 111, respectively. The plurality of pixel signals PS may include information about a phase difference between the optical transmission signal TX and the optical reception signal RX.
The plurality of demodulation signals DEMOD may have the same frequency as the modulation signal MOD (that is, the plurality of demodulation signals DEMOD may have the modulation frequency). The demodulation signals DEMOD may include first to fourth photo gate signals PGA to PGD (for example, refer to fig. 3A) that are 90° out of phase with each other. For example, the first photo gate signal PGA may have a phase shift of 0°, the second photo gate signal PGB may have a phase shift of 90°, the third photo gate signal PGC may have a phase shift of 180°, and the fourth photo gate signal PGD may have a phase shift of 270°. That is, the first to fourth photo gate signals PGA to PGD may be separated by 90°. In some example embodiments, there may be fewer or more photogate signals, and the phase offsets may be equidistant or variable, without departing from the inventive concepts. The plurality of pixel signals PS output from the pixel array 110 may include: a first pixel signal Vout1 generated from the first photogate signal PGA (for example, refer to fig. 3A); a second pixel signal Vout2 generated from the second photogate signal PGB (for example, refer to fig. 3A); a third pixel signal Vout3 generated from the third photogate signal PGC (for example, refer to fig. 3A); and a fourth pixel signal Vout4 generated from the fourth photogate signal PGD (for example, refer to fig. 3A).
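As an illustration of the relationship between the four demodulation phases and the pixel signals, the following minimal Python sketch models each tap output for a sinusoidal light reception signal; the cosine tap model, the names, and the numeric values are assumptions introduced here, not the disclosed pixel circuit.

```python
import numpy as np

# Illustrative model (an assumption, not the disclosed pixel circuit): for a
# sinusoidal optical signal, the charge integrated under a photogate shifted
# by k * 90 degrees reduces to  offset + amplitude * cos(phi - k * 90 degrees),
# where phi is the TX-to-RX phase delay to be recovered.

def tap_outputs(phi_rad, amplitude=1.0, offset=2.0):
    """Return the four pixel signals Vout1..Vout4 for a phase delay phi_rad."""
    shifts = np.deg2rad([0.0, 90.0, 180.0, 270.0])
    return offset + amplitude * np.cos(phi_rad - shifts)

print(tap_outputs(np.deg2rad(40.0)))  # four samples of one return, 90 degrees apart
```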
The readout circuit 130 may generate the raw data RDATA based on the plurality of pixel signals PS output from the pixel array 110. For example, the readout circuit 130 may read out a plurality of pixel signals PS from the pixel array 110 in units of subframes. The readout circuit 130 may generate the raw data RDATA by performing analog-to-digital conversion on each of the plurality of pixel signals PS. For example, the readout circuit 130 may include a Correlated Double Sampling (CDS) circuit, a column counter, and a decoder. The readout circuit 130 may perform a CDS operation by comparing the plurality of pixel signals PS with the ramp signal.
The control circuit 120 may control components of the image sensor 14 and the light source driver 210 of the light source unit 12. The control circuit 120 may transmit the modulation signal MOD to the light source driver 210, and may transmit a plurality of demodulation signals DEMOD corresponding to the modulation signal MOD to the pixel array 110. The control circuit 120 may include: a photo gate driver configured to supply a plurality of demodulation signals DEMOD as photo gate signals to the pixel array 110; a row driver and decoder configured to supply row control signals to the pixel array 110; a Phase Locked Loop (PLL) circuit configured to generate an internal clock signal from a master clock signal; a timing generator configured to adjust a timing of each control signal; a transmission circuit configured to transmit a modulation signal MOD; and a main controller configured to control the operation of the image sensor 14 according to a command received from the outside of the camera module 100.
In some example embodiments, the control circuit 120 may perform a shuffle operation to change the phases of the photogate signals provided to the photogate transistors (e.g., refer to TS1 to TS4 in fig. 3A) of the unit pixels 111 according to the subframes. A mismatch between taps of the unit pixel 111 or a mismatch between the unit pixel 111 and the readout circuit 130 may be compensated for by a shuffling operation.
In some example embodiments, the control circuit 120 may operate in a binning mode based on the ambient light environment sensed by the ALD. For example, the control circuit 120 may operate in the binning mode in a low-light environment. The control circuit 120 may operate in an analog binning mode, in which the pixel array 110 and the readout circuit 130 are controlled so that in-phase pixel signals among the pixel signals output from a plurality of unit pixels 111 (for example, four unit pixels 111) are summed into one signal, which is then analog-to-digital converted. The analog binning mode may significantly increase the sensitivity to light.
Alternatively, the control circuit 120 may operate in a digital binning mode, in which the pixel signals output from a plurality of unit pixels 111 (for example, four unit pixels 111) are first analog-to-digital converted and the in-phase data are then added. The digital binning mode may significantly increase the effective Full Well Capacity (FWC).
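The following sketch, for illustration only, shows the in-phase 2×2 pixel combination that both binning modes perform; the array shape and names are assumptions introduced here. The two modes differ only in where the summation occurs, before or after analog-to-digital conversion, as noted in the comments.

```python
import numpy as np

# Hypothetical 2x2 binning of in-phase pixel values from a 4x4 patch of unit
# pixels; the shape and names are assumptions used only for illustration.
signals = np.arange(16, dtype=float).reshape(4, 4)

def bin_2x2(a):
    """Sum each non-overlapping 2x2 block of in-phase values."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Analog binning: the four pixel signals are summed first and a single A/D
# conversion is performed on the combined signal (higher light sensitivity).
# Digital binning: each pixel signal is converted first and the four digital
# codes are then added (a larger effective full well capacity).
print(bin_2x2(signals))  # 2x2 array of block sums
```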
The preprocessing circuit 140 may preprocess the raw data RDATA into a form that the ISP 170 can process efficiently. The preprocessing circuit 140 may generate the phase data PDATA by converting the raw data RDATA into a form that facilitates conversion into depth information, or by compressing the raw data RDATA.
For example, the preprocessing circuit 140 may calculate the value I by subtracting the raw data generated from the third photo gate signal PGC having a phase shift of 180° from the raw data generated from the first photo gate signal PGA having a phase shift of 0°. Further, for example, the preprocessing circuit 140 may calculate the value Q by subtracting the raw data generated from the fourth photo gate signal PGD having a phase shift of 270° from the raw data generated from the second photo gate signal PGB having a phase shift of 90°.
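A minimal sketch of this preprocessing step, under the same illustrative model as above, follows; the function name and the sample values (taken from the cosine model for a 40° phase delay) are assumptions introduced here.

```python
def preprocess_iq(a0, a90, a180, a270):
    """I = raw(PGA, 0 deg) - raw(PGC, 180 deg); Q = raw(PGB, 90 deg) - raw(PGD, 270 deg).
    The subtraction cancels the background offset common to all four samples."""
    return a0 - a180, a90 - a270

# Example four-phase samples for a 40 degree delay (offset 2.0, amplitude 1.0).
i, q = preprocess_iq(2.766, 2.643, 1.234, 1.357)
print(i, q)  # ~1.532, ~1.286
```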
The phase data preprocessed by the preprocessing circuit 140 may be stored in the memory 150. For example, the memory 150 may be implemented as a buffer. The memory 150 may include a frame memory, and phase data generated on a subframe basis through an exposure integration operation and a readout operation may be stored in the memory 150. For example, phase data PDATA generated in each of a plurality of subframes according to a shuffling operation or a modulation frequency change may be stored in the memory 150.
The calibration circuit 160 may perform a calibration operation to improve the accuracy of the depth information DI to be generated in the ISP 170. The calibration circuit 160 may generate the correction data CDATA (for example, refer to fig. 5) by performing a calibration operation on the phase data PDATA based on the calibration information CD. For example, the calibration circuit 160 may perform a calibration operation that accounts for the physical characteristics of the image sensor 14 or of the lenses included in the camera module 100, a calibration operation that accounts for the distance between the light source unit 12 and the image sensor 14, or a calibration operation that accounts for a nonlinearity error caused by the square-wave demodulation signals DEMOD.
Although fig. 2 illustrates the calibration circuit 160 performing a calibration operation on the phase data PDATA stored in the memory 150, the image sensor 14 of the inventive concept is not limited thereto. Calibration circuit 160 may receive phase data PDATA from preprocessing circuit 140 and correction data CDATA generated as a result of the calibration operation may be stored in memory 150.
ISP 170 may receive correction data CDATA from calibration circuitry 160 and generate depth information DI. However, in some example embodiments, ISP 170 may receive phase data PDATA from memory 150, and the operation of calibration circuitry 160 may be performed by ISP 170.
In some example embodiments, ISP 170 may be implemented as an embedded depth processor unit (eDPU), and the eDPU may generate depth information DI by performing operations such as phase delay calculations, lens correction, spatial filtering, temporal filtering, or data expansion. The eDPU may be configured to perform simple mathematical operations using hardwired logic, but is not limited to such. The calibration circuit 160 may also be implemented as an eDPU. The eDPU may perform shuffling and correction operations according to demodulation frequency changes.
The ISP 170 may generate depth information DI corresponding to a depth frame by using the correction data CDATA generated in each of the plurality of subframes. When data compression has been performed by the preprocessing circuit 140, the ISP 170 may perform a decompression operation to decompress the correction data CDATA.
The correction data CDATA may include information about a phase difference between the optical transmission signal TX and the optical reception signal RX. The ISP 170 may calculate a distance between the object 200 and the camera module 100 by using information on the phase difference, and may generate depth information DI. For example, as described above, when the preprocessing circuit 140 performs a preprocessing operation to calculate the values I and Q, the phase difference between the optical transmission signal TX and the optical reception signal RX may be calculated by using the values I and Q, for example, by calculating an inverse trigonometric function (e.g., arctangent) of the ratio of the values Q and I, and the distance between the object 200 and the camera module 100 may be calculated according to the phase difference. Alternatively, for example, the ISP 170 may calculate the value I and the value Q, calculate a phase difference between the optical transmission signal TX and the optical reception signal RX by using the value I and the value Q, and calculate a distance between the object 200 and the camera module 100 by using the phase difference.
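Carrying the calculation through under an assumed sinusoidal modulation, the following Python sketch recovers the phase difference as atan2(Q, I) and converts it to distance as c·φ/(4π·f_mod); the names and values are assumptions introduced here, not the disclosed implementation.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_iq(i, q, f_mod_hz):
    """Distance in meters from I/Q values at modulation frequency f_mod_hz."""
    phi = math.atan2(q, i) % (2 * math.pi)    # phase delay in [0, 2*pi)
    return C * phi / (4 * math.pi * f_mod_hz)

# The I/Q values from the sketch above at a 20 MHz modulation frequency:
# a 40 degree delay corresponds to roughly 0.83 m.
print(depth_from_iq(1.532, 1.286, 20e6))
```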
In some example embodiments, the ISP 170 may perform a crossover operation using the correction data CDATA generated from the multi-frequency modulation signal MOD having the first and second modulation frequencies F1 and F2 and from the multi-frequency demodulation signals DEMOD, thereby preventing or reducing the repeated distance phenomenon (an error in the depth information DI caused by the maximum measurement distance limitation) and making it possible to obtain depth information without the maximum measurement distance limitation. Further, in some example embodiments, the ISP 170 may use the correction data CDATA generated by a shuffling operation for each of the first and second subframes to compensate for noise caused by a process-derived mismatch between taps of the unit pixels 111 or a process-derived mismatch between the unit pixels 111 and the readout circuit 130.
The output interface circuit 180 may generate the depth data DDATA in units of depth frames by formatting the depth information DI received from the ISP 170, and may output the depth data DDATA to the outside of the camera module 100 through a channel. Because the image sensor 14 for distance measurement of the inventive concept includes the memory 150 and the ISP 170 therein, the image sensor 14 may calculate a phase difference and generate depth data DDATA including depth information DI. Accordingly, since the image sensor 14 transmits the depth data DDATA to the processor 30 outside the camera module 100, even when the bandwidth of a channel between the image sensor 14 and the processor 30 is limited, the data transmission delay can be prevented or reduced, thereby improving the quality of the depth data DDATA.
Furthermore, the calibration circuit 160 of the image sensor 14 may reduce noise that may occur in the depth data DDATA, and the ISP 170 included in the image sensor 14 may make it possible to generate high-quality depth data DDATA. The processor 30 disposed outside the image sensor 14 may be lightweight, and the power consumption of the system 10 may be reduced.
Fig. 3A is a diagram illustrating an example structure of the unit pixel 111 illustrated in fig. 2 according to some example embodiments.
The unit pixel 111 illustrated in fig. 3A may have a 4-tap structure. The 4-tap structure is a structure in which one unit pixel 111 includes four taps, where a tap is a unit component configured such that, when the unit pixel 111 generates and accumulates photo charges in response to an external light signal applied thereto, the photo charges can be transferred differentially according to phase.
An image sensor including the unit pixel 111 having the 4-tap structure (for example, the image sensor 14 shown in fig. 2) may implement a transmission method in which data are transferred through four taps with phase shifts of 0°, 90°, 180°, and 270°. For example, the unit pixel 111 may generate pixel signals referenced to the first tap of the unit pixel 111. Specifically, when the first tap of the unit pixel 111 generates the first pixel signal Vout1 with respect to the phase shift of 0°, the second tap of the unit pixel 111 may generate the second pixel signal Vout2 with respect to the phase shift of 90°, the third tap of the unit pixel 111 may generate the third pixel signal Vout3 with respect to the phase shift of 180°, and the fourth tap of the unit pixel 111 may generate the fourth pixel signal Vout4 with respect to the phase shift of 270°.
The pixel array 110 (for example, refer to fig. 2) may include a plurality of unit pixels 111 arranged in a plurality of rows and a plurality of columns. In some example embodiments, the first tap and the fourth tap of each of the plurality of unit pixels 111 may be disposed in the i-th row, and the second tap and the third tap of each of the plurality of unit pixels 111 may be disposed in the (i+1)-th row.
Referring to fig. 3A, the unit pixel 111 may include a photodiode PD, an overflow transistor OT, first to fourth transfer transistors TS1 to TS4, first to fourth storage transistors SS1 to SS4, first to fourth tap transfer transistors TXS1 to TXS4, first to fourth reset transistors RX1 to RX4, first to fourth source followers SF1 to SF4, and first to fourth selection transistors SELX1 to SELX4. In some example embodiments, at least one selected from the group consisting of the overflow transistor OT, the first to fourth storage transistors SS1 to SS4, the first to fourth tap transfer transistors TXS1 to TXS4, the first to fourth reset transistors RX1 to RX4, the first to fourth source followers SF1 to SF4, and the first to fourth selection transistors SELX1 to SELX4 may be omitted. Further, in some example embodiments, the unit pixel 111 may further include a transistor disposed between the transfer transistor (one of the first to fourth transfer transistors TS1 to TS 4) and the storage transistor (one of the first to fourth storage transistors SS1 to SS 4).
The photodiode PD may generate a photo-charge that varies according to the intensity of a light reception signal (e.g., refer to RX in fig. 2). That is, the photodiode PD may convert the light reception signal RX into an electrical signal. The photodiode PD is an example of a photoelectric conversion element, and may be one of a phototransistor, a photogate, a Pinned Photodiode (PPD), and a combination thereof.
The first to fourth transfer transistors TS1 to TS4 may transfer charges generated in the photodiode PD to the first to fourth storage transistors SS1 to SS4, respectively, according to the first to fourth photo gate signals PGA to PGD. Accordingly, the first to fourth transfer transistors TS1 to TS4 may transmit the charges generated in the photodiode PD toward the first to fourth floating diffusion nodes FD1 to FD4, respectively, according to the first to fourth photo gate signals PGA to PGD.
The first to fourth photo gate signals PGA to PGD may be included in the demodulation signals DEMOD described with reference to fig. 2, and may be signals having the same frequency and duty ratio and out of phase with each other. The first to fourth photo gate signals PGA to PGD may have a phase difference of 90° from each other. For example, when the first photo gate signal PGA has a phase shift of 0°, the second photo gate signal PGB may have a phase shift of 90°, the third photo gate signal PGC may have a phase shift of 180°, and the fourth photo gate signal PGD may have a phase shift of 270°.
The first to fourth storage transistors SS1 to SS4 may store photo charges received from the first to fourth transfer transistors TS1 to TS4, respectively, according to the first to fourth storage control signals SGA to SGD. The first to fourth tap transfer transistors TXS1 to TXS4 may transfer the photo charges respectively stored in the first to fourth storage transistors SS1 to SS4 to the first to fourth floating diffusion nodes FD1 to FD4 according to the first and second transmission control signals TG[i] and TG[i+1].
The first to fourth source followers SF1 to SF4 may amplify signals corresponding to the potentials of the photo charges accumulated in the first to fourth floating diffusion nodes FD1 to FD4 and output the amplified signals to the first to fourth selection transistors SELX1 to SELX4. The first to fourth selection transistors SELX1 to SELX4 may output the first to fourth pixel signals Vout1 to Vout4 through the column lines in response to the first and second selection control signals SEL[i] and SEL[i+1].
The unit pixel 111 may accumulate photo charges for a certain period (e.g., an integration period), and may output the first to fourth pixel signals Vout1 to Vout4 generated according to the accumulation result to the readout circuit 130 (e.g., refer to fig. 2).
The first to fourth reset transistors RX1 to RX4 may reset the first to fourth floating diffusion nodes FD1 to FD4 to the power supply voltage VDD in response to the first and second reset control signals RS[i] and RS[i+1]. The overflow transistor OT is a transistor configured to discharge overflow charge according to the overflow control signal OG. The source of the overflow transistor OT may be connected to the photodiode PD, and the power supply voltage VDD may be supplied to the drain of the overflow transistor OT.
Fig. 3B is a diagram illustrating an example structure of the unit pixel 111 illustrated in fig. 2 according to some example embodiments.
The unit pixel 111A illustrated in fig. 3B may have a 2-tap structure. The 2-tap structure is a structure in which one unit pixel 111A includes two taps, where a tap is a unit component configured such that, when the unit pixel 111A generates and accumulates photo charges in response to an external light signal applied thereto, the photo charges can be transferred differentially according to phase.
An image sensor including the unit pixel 111A having the 2-tap structure (for example, the image sensor 14 shown in fig. 2) may implement a transmission method in which data are transferred through two taps, covering phase shifts of 0°, 90°, 180°, and 270° over successive subframes. For example, when the first tap of the unit pixel 111A generates the first pixel signal Vout1 with respect to the phase shift of 0° in an even subframe, the second tap may generate the second pixel signal Vout2 with respect to the phase shift of 180° in the even subframe. When the first tap of the unit pixel 111A generates the first pixel signal Vout1 with respect to the phase shift of 90° in an odd subframe, the second tap of the unit pixel 111A may generate the second pixel signal Vout2 with respect to the phase shift of 270° in the odd subframe.
The pixel array 110 (for example, refer to fig. 2) may include a plurality of unit pixels 111A arranged in a plurality of rows and a plurality of columns. In some example embodiments, the first tap and the second tap of each of the plurality of unit pixels 111A may be arranged in the i-th row.
Referring to fig. 3B, the unit pixel 111A may include a photodiode PD, an overflow transistor OT, a first transfer transistor TS1, a second transfer transistor TS2, a first storage transistor SS1, a second storage transistor SS2, a first tap transfer transistor TXS1, a second tap transfer transistor TXS2, a first reset transistor RX1, a second reset transistor RX2, a first source follower SF1, a second source follower SF2, a first selection transistor SELX1, and a second selection transistor SELX2. In some example embodiments, at least one selected from the group consisting of the overflow transistor OT, the first storage transistor SS1, the second storage transistor SS2, the first tap transfer transistor TXS1, the second tap transfer transistor TXS2, the first reset transistor RX1, the second reset transistor RX2, the first source follower SF1, the second source follower SF2, the first selection transistor SELX1, and the second selection transistor SELX2 may be omitted. Further, in some example embodiments, the unit pixel 111A may further include a transistor disposed between the transfer transistor (one of the first transfer transistor TS1 and the second transfer transistor TS 2) and the storage transistor (one of the first storage transistor SS1 and the second storage transistor SS 2).
In even (e.g., 2nd, 4th, 6th, etc.) subframes, the first transfer transistor TS1 may transfer the charge generated in the photodiode PD to the first storage transistor SS1 according to the first photo gate signal PGA, and in odd (e.g., 1st, 3rd, 5th, etc.) subframes, the first transfer transistor TS1 may transfer the charge generated in the photodiode PD to the first storage transistor SS1 according to the second photo gate signal PGB. In even subframes, the second transfer transistor TS2 may transfer the charge generated in the photodiode PD to the second storage transistor SS2 according to the third photo gate signal PGC, and in odd subframes, the second transfer transistor TS2 may transfer the charge generated in the photodiode PD to the second storage transistor SS2 according to the fourth photo gate signal PGD. The first to fourth photo gate signals PGA to PGD may be included in the demodulation signals DEMOD described with reference to fig. 2, and may be signals having the same frequency and duty ratio and out of phase with each other. The first to fourth photo gate signals PGA to PGD may have a phase difference of 90° from each other.
In the even sub-frame, the unit pixel 111A may accumulate photo-charges for an integration time, and may output the first pixel signal Vout1 and the second pixel signal Vout2 generated according to the accumulation result to the readout circuit 130 (for example, refer to fig. 2). Further, in the odd-numbered sub-frames, the unit pixel 111A may accumulate photo-charges for an integration time, and the first pixel signal Vout1 and the second pixel signal Vout2 generated according to the accumulation result may be output to the readout circuit 130.
Fig. 4A and 4B are block diagrams showing schematic configurations of systems according to some example embodiments. Fig. 5 is a diagram showing calibration information stored in the memories 16 and 16'. According to the inventive concept, the camera modules 100a and 100b may store calibration information to be used for calibration in an internal memory. However, the inventive concept is not limited to the example embodiments shown in fig. 4A and 4B, and the camera modules 100a and 100B may receive calibration information from outside of the camera modules 100a and 100B (e.g., from the processor 30).
Referring to fig. 4A, the camera module 100a may further include a memory 16 configured to store the calibration information CD. The image sensor 14 may receive the calibration information CD from the memory 16. The image sensor 14 may perform a calibration operation based on the calibration information CD received from the memory 16 disposed outside the image sensor 14.
Referring to fig. 4B, the camera module 100B may include an image sensor 14', the image sensor 14' including a memory 16'. The memory 16' may store calibration information CD and may be different from the memory 150 shown in fig. 2. The calibration circuit of the image sensor 14' (e.g., with reference to the calibration circuit 160) may perform a calibration operation based on the calibration information CD received from the memory 16' disposed inside the image sensor 14 '.
In some example embodiments, the memories 16 and 16' may be One-Time Programmable (OTP) memories or Electrically Erasable Programmable Read-Only Memories (EEPROM). However, the memories 16 and 16' are not limited thereto, and various other types of memories may be used as the memories 16 and 16'.
Referring to fig. 2 and 5, for example, the calibration information CD stored in the memories 16 and 16' may include at least one selected from the group consisting of an intrinsic characteristic parameter, a wiggling lookup table, a Fixed Phase Pattern Noise (FPPN) lookup table, and a temperature parameter. The temperature parameter may be a calibration parameter related to the temperature environment in which the camera module 100 may operate (e.g., related to the external ambient temperature).
The intrinsic characteristic parameter may be a calibration parameter related to an intrinsic physical characteristic of the camera module 100. That is, the intrinsic characteristic parameter may be a calibration parameter related to the physical characteristics of the image sensor 14 and the light source unit 12. For example, the intrinsic characteristic parameters may include parameters for correcting errors caused by aberrations of lenses included in the camera module 100 to transmit the optical transmission signal TX and receive the optical reception signal RX; or a parameter for correcting an error caused by movement/tilting of the lens when the lens group is assembled to the camera module 100.
The wiggling lookup table may be used to correct the wiggling effect. The wiggling effect represents an error caused by harmonic components in the waveform of the optical transmission signal output from the light source unit 12 and in the waveform of the demodulation signal DEMOD. The error caused by the wiggling effect may vary according to the distance between the object and the camera module, and thus the wiggling lookup table may include information on the degree of correction as a function of that distance.
The FPPN lookup table may be used to correct errors caused by FPPN. FPPN may occur due to misalignment between the light source unit 12 and the image sensor 14. For example, the FPPN lookup table may include information on the degree of correction according to position, that is, information for correcting the phase deviation that occurs according to the position of the unit pixel 111 in the pixel array 110. For example, a calibration operation using the FPPN lookup table may be performed for an error that occurs according to the distance between the light source unit 12 and the image sensor 14, or for an error caused by the time delay that occurs when a control signal is supplied from the control circuit 120 to the pixel array 110.
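A hypothetical sketch of how such lookup tables might be applied to a provisional depth map follows; the per-pixel FPPN offset map, the distance-indexed wiggling table, and all names are assumptions introduced here, not the stored format of the calibration information CD.

```python
import numpy as np

def calibrate_depth(depth_m, fppn_lut_m, wiggling_lut_m, lut_step_m):
    """Subtract a per-pixel FPPN offset, then a distance-indexed wiggling term."""
    d = depth_m - fppn_lut_m                               # per-pixel offset map
    idx = np.clip((d / lut_step_m).astype(int), 0, wiggling_lut_m.size - 1)
    return d - wiggling_lut_m[idx]                         # distance-dependent term

depth = np.array([[1.02, 2.51], [4.98, 7.40]])   # provisional depth map, meters
fppn = np.full_like(depth, 0.02)                 # assumed constant per-pixel bias
wiggle = np.zeros(16)                            # flat table: no wiggling error
print(calibrate_depth(depth, fppn, wiggle, lut_step_m=0.5))
```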
Fig. 6 is a diagram illustrating an operation of the image sensor 14 according to the inventive concept, in which a timing diagram illustrates frequencies of the first to fourth photo gate signals.
Referring to fig. 2 and 6, the control circuit 120 may generate first to fourth photo gate signals PGA1 to PGD1 having a first modulation frequency F1 in a first subframe. Further, the control circuit 120 may generate the first to fourth photo gate signals PGA2 to PGD2 having the second modulation frequency F2 in the second subframe. The first modulation frequency F1 and the second modulation frequency F2 may be different from each other. For example, the first modulation frequency F1 may be set to 20MHz, and the second modulation frequency F2 may be set to 10MHz. Alternatively, for example, the first modulation frequency F1 may be set to 100MHz, and the second modulation frequency F2 may be set to 30MHz.
The maximum distance measurable by the image sensor 14 may be inversely proportional to the modulation frequency. For example, when the first modulation frequency F1 is 20MHz, the maximum measurement distance may be 7.5m, and when the second modulation frequency F2 is 10MHz, the maximum measurement distance may be 15m. Accordingly, because the ISP 170 generates the depth information DI through a crossover operation (based on the greatest common divisor of the modulation frequencies) using the pixel signals PS generated in the first subframe and the pixel signals PS generated in the second subframe, the repeated distance phenomenon, in which the maximum measurement distance limitation causes errors in the depth information DI, can be prevented or reduced, and the depth information DI can be obtained without being limited by the maximum measurement distance.
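The range arithmetic can be checked with a short sketch: each modulation frequency has an unambiguous range of c/(2f), and combining two frequencies extends the unambiguous range to that of their greatest common divisor. The 100 MHz / 30 MHz pair mentioned with reference to fig. 6 is used as the example; all names are assumptions introduced here.

```python
from math import gcd

C = 299_792_458.0  # speed of light, m/s

def max_range_m(f_mod_hz):
    """Maximum unambiguous distance at one modulation frequency: c / (2 * f)."""
    return C / (2 * f_mod_hz)

f1, f2 = 100_000_000, 30_000_000
print(max_range_m(f1))           # ~1.5 m at 100 MHz
print(max_range_m(f2))           # ~5.0 m at 30 MHz
print(max_range_m(gcd(f1, f2)))  # ~15 m at the 10 MHz greatest common divisor
```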
Fig. 7 is a diagram illustrating a shuffling operation of the image sensor 14 according to the inventive concept. Fig. 7 shows an example in which the unit pixel 111 of the image sensor 14 has the 4-tap structure described with reference to fig. 3A.
Referring to fig. 2, 3A and 7, in the first subframe, the image sensor 14 may supply the first photo gate signal PGA having a phase shift of 0° to the first transfer transistor TS1 of the first tap of the unit pixel 111, may supply the second photo gate signal PGB having a phase shift of 90° to the second transfer transistor TS2 of the second tap of the unit pixel 111, may supply the third photo gate signal PGC having a phase shift of 180° to the third transfer transistor TS3 of the third tap of the unit pixel 111, and may supply the fourth photo gate signal PGD having a phase shift of 270° to the fourth transfer transistor TS4 of the fourth tap of the unit pixel 111. Thus, when the first tap of the unit pixel 111 generates the first pixel signal Vout1 with respect to the phase shift of 0°, the second tap of the unit pixel 111 may generate the second pixel signal Vout2 with respect to the phase shift of 90°, the third tap of the unit pixel 111 may generate the third pixel signal Vout3 with respect to the phase shift of 180°, and the fourth tap of the unit pixel 111 may generate the fourth pixel signal Vout4 with respect to the phase shift of 270°. In the first subframe, the first raw data RDATA1' may be generated from the first to fourth pixel signals Vout1 to Vout4 respectively generated from the first to fourth photo gate signals PGA to PGD having four different phase shifts (0°, 90°, 180°, and 270°).
In a second subframe subsequent to the first subframe, the image sensor 14 may perform a shuffling operation. In the second subframe, the image sensor 14 may supply the third photo gate signal PGC having a phase shift of 180° to the first transfer transistor TS1 of the first tap of the unit pixel 111, may supply the fourth photo gate signal PGD having a phase shift of 270° to the second transfer transistor TS2 of the second tap of the unit pixel 111, may supply the first photo gate signal PGA having a phase shift of 0° to the third transfer transistor TS3 of the third tap of the unit pixel 111, and may supply the second photo gate signal PGB having a phase shift of 90° to the fourth transfer transistor TS4 of the fourth tap of the unit pixel 111. Thus, when the first tap of the unit pixel 111 generates the first pixel signal Vout1 with respect to the phase shift of 180°, the second tap of the unit pixel 111 may generate the second pixel signal Vout2 with respect to the phase shift of 270°, the third tap of the unit pixel 111 may generate the third pixel signal Vout3 with respect to the phase shift of 0°, and the fourth tap of the unit pixel 111 may generate the fourth pixel signal Vout4 with respect to the phase shift of 90°. In the second subframe, the second raw data RDATA2' may be generated from the first to fourth pixel signals Vout1 to Vout4 respectively generated from the first to fourth photo gate signals PGA to PGD having four different phase shifts (180°, 270°, 0°, and 90°).
The ISP 170 may use the first raw data RDATA1' and the second raw data RDATA2' to generate the depth information DI. Although the ISP 170 could generate separate pieces of depth information DI from the first and second raw data RDATA1' and RDATA2', respectively, the ISP 170 may instead generate one piece of depth data DDATA' (including a plurality of pieces of depth information DI) using both the first and second raw data RDATA1' and RDATA2', to compensate for a mismatch between the first to fourth taps of the unit pixel 111 or a mismatch between the unit pixel 111 and the readout circuit 130 that may occur during processing. For example, the ISP 170 may average the first and second raw data RDATA1' and RDATA2' generated through the shuffling operation, and may thereby remove errors such as a gain error of each of the first to fourth taps, an error caused by a conversion gain difference between the first to fourth floating diffusion nodes FD1 to FD4 of the first to fourth taps, an offset error of each of the first to fourth taps, and an error caused by an offset difference between the first to fourth taps.
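Numerically, the benefit of the shuffling operation can be illustrated with the sketch below; the per-tap gain/offset mismatch model and all values are assumptions introduced here, not measured characteristics of the disclosed sensor.

```python
import numpy as np

# Assumed mismatch model: tap k reports g[k] * ideal + o[k]. In the first
# subframe, tap k samples phase k * 90 degrees; after the shuffle, taps 0<->2
# and 1<->3 swap phases. Averaging both subframes gives every phase the same
# averaged gain/offset pair, so the offsets cancel in the I/Q subtraction.
g = np.array([1.00, 1.03, 0.98, 0.95])    # per-tap gains (assumed)
o = np.array([0.20, -0.10, 0.05, 0.00])   # per-tap offsets (assumed)
ideal = 2.0 + np.cos(np.deg2rad(40.0) - np.deg2rad([0, 90, 180, 270]))

shuffle = np.array([2, 3, 0, 1])          # tap k <-> phase (k + 2) % 4
sub1 = g * ideal + o                      # first subframe, in phase order
sub2 = (g * ideal[shuffle] + o)[shuffle]  # second subframe, back in phase order
avg = 0.5 * (sub1 + sub2)                 # shuffling average

def phase_deg(s):
    return np.rad2deg(np.arctan2(s[1] - s[3], s[0] - s[2]))

print(phase_deg(sub1))  # ~38.0 degrees: biased by tap mismatch
print(phase_deg(avg))   # ~40.0 degrees: offsets cancel after shuffling
```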
Fig. 8A is a timing chart showing the operation of the image sensor according to the comparative example, and fig. 8B is a diagram showing the operation of the image sensor 14 according to the inventive concept.
Referring to fig. 8A, the image sensor of the comparative example does not include an ISP therein. In the image sensor of the comparative example, the pixel array 110 may store phase information during the exposure integration time EIT, and the readout circuit 130 may generate raw data during the readout time. For example, in a first subframe, the image sensor of the comparative example may generate a first piece of raw data rdata1_n (N is a natural number) according to a first modulation frequency F1 (e.g., refer to fig. 6), and in a second subframe, the image sensor of the comparative example may generate a second piece of raw data rdata2_n according to a second modulation frequency F2 (e.g., refer to fig. 6). The output interface circuit 180 of the image sensor of the comparative example may sequentially transmit the first and second pieces of raw data rdata1_n and rdata2_n including the phase information to the processor 30 disposed outside the image sensor of the comparative example. Processor 30 may comprise an ISP.
While the image sensor of the comparative example performs operations for the nth depth data ddata_n, the processor 30 disposed outside the image sensor of the comparative example may perform an operation of generating the (n-1)th depth data ddata_(n-1). After receiving both the first and second pieces of raw data rdata1_n and rdata2_n including the phase information from the image sensor of the comparative example, the processor 30 may generate the nth depth data ddata_n using the first and second pieces of raw data rdata1_n and rdata2_n. While the processor 30 generates the nth depth data ddata_n, the image sensor of the comparative example may perform operations for the (n+1)th depth data ddata_(n+1). For example, in a first subframe, the image sensor of the comparative example may generate a first piece of raw data rdata1_(n+1) according to the first modulation frequency F1, and in a second subframe, the image sensor of the comparative example may generate a second piece of raw data rdata2_(n+1) according to the second modulation frequency F2.
Because the image sensor of the comparative example must transmit all of the raw data including the phase information (e.g., the first and second pieces of raw data rdata1_n and rdata2_n) to the processor 30, and because the bandwidth of the channel between the image sensor and the processor 30 is limited, it may take a long time to transmit the first and second pieces of raw data rdata1_n and rdata2_n. In addition, even after the image sensor of the comparative example transmits the first and second pieces of raw data rdata1_n and rdata2_n, the processor 30 takes time to generate the nth depth data ddata_n using the first and second pieces of raw data rdata1_n and rdata2_n. Accordingly, there is a time delay between the generation of the first and second pieces of raw data rdata1_n and rdata2_n by the pixel array 110 and the readout circuit 130 and the generation of the nth depth data ddata_n.
Referring to fig. 2 and 8B, the image sensor 14 of the inventive concept may include the memory 150 and the ISP 170. The pixel array 110 and the readout circuit 130 of the image sensor 14 may store phase information during an exposure integration time EIT and may generate raw data during a readout time. For example, in a first subframe, the image sensor 14 may generate a first piece of raw data rdata1_n according to the first modulation frequency F1, and in a second subframe, the image sensor 14 may generate a second piece of raw data rdata2_n according to the second modulation frequency F2, to generate an nth piece of depth data ddata_n. Alternatively, for example, in the first and second subframes, the image sensor 14 of the inventive concept may perform the shuffling operation described with reference to fig. 7 to change the phases of the demodulation signals DEMOD supplied to the unit pixels 111, thereby generating the first and second pieces of raw data rdata1_n and rdata2_n used to generate the nth piece of depth data ddata_n.
The memory 150 may store a first piece of phase data PDATA1_n obtained by preprocessing the first piece of raw data rdata1_n, and then may store a second piece of phase data PDATA2_n obtained by preprocessing the second piece of raw data rdata2_n. The ISP 170 may generate the nth piece of depth information using the first and second pieces of phase data PDATA1_n and PDATA2_n stored in the memory 150, and the output interface circuit 180 may format the nth piece of depth information and may then send it to the processor 30 as the nth piece of depth data DDATA_N.
Furthermore, in the first subframe, the image sensor 14 may generate a first piece of raw data rdata1_(n+1) according to the first modulation frequency F1, and in the second subframe, the image sensor 14 may generate a second piece of raw data rdata2_(n+1) according to the second modulation frequency F2, to generate the (n+1)th piece of depth data ddata_(n+1).
The memory 150 may store a first piece of phase data PDATA1_(n+1) obtained by preprocessing the first piece of raw data RDATA1_(n+1), and then may store a second piece of phase data PDATA2_(n+1) obtained by preprocessing the second piece of raw data RDATA2_(n+1). The ISP 170 may generate the (n+1)th piece of depth information using the first and second pieces of phase data PDATA1_(n+1) and PDATA2_(n+1) stored in the memory 150, and the output interface circuit 180 may format the (n+1)th piece of depth information and may then send it to the processor 30 as the (n+1)th piece of depth data DDATA_(N+1).
Because the image sensor 14 for distance measurement of the inventive concept includes the memory 150 and the ISP 170 therein, the image sensor 14 may calculate a phase difference and may generate depth data (e.g., ddata_n and ddata_n+1). Because the image sensor 14 transmits the depth data (e.g., ddata_n and ddata_n+1) to the processor 30 disposed outside the image sensor 14, the data transfer delay can be prevented or reduced even when the bandwidth of the channel between the image sensor 14 and the processor 30 is limited, and thus, the quality of the depth data (e.g., ddata_n and ddata_n+1) can be improved. Further, since the image sensor 14 includes the ISP 170 dedicated to the image sensor 14, the image sensor 14 can generate high-quality depth data DDATA, the processor 30 disposed outside the image sensor 14 can be lightweight, and the power consumption of the system 10 can be reduced.
Fig. 9A to 9C are diagrams illustrating an operation of an image sensor according to the inventive concept. Fig. 9A is a diagram showing an image sensor operating at a single modulation frequency and performing a shuffling operation, and fig. 9B is a diagram showing an image sensor operating at dual modulation frequencies and performing shuffling operations. Fig. 9C is a timing diagram illustrating signals generated in one subframe. The description given with reference to fig. 9A to 9C assumes an image sensor including unit pixels having a 4-tap structure, but may be similarly applied to an image sensor including unit pixels having a 2-tap structure.
Referring to fig. 2 and 9A, an Nth depth frame for generating the Nth piece of depth data may include a first subframe and a second subframe. As described with reference to fig. 7, the first piece of raw data generated in the first subframe and the second piece of raw data generated in the second subframe may be data sampled by a shuffling operation using photogate signals having different phases.
The first piece of raw data generated in the first subframe, or the first piece of phase data obtained by preprocessing the first piece of raw data, may be stored in the first memory MEM1. The second piece of raw data generated in the second subframe, or the second piece of phase data obtained by preprocessing the second piece of raw data, may be stored in the second memory MEM2. The first memory MEM1 and the second memory MEM2 may be included in the memory 150, and a high-level period in the figure may be a period in which the corresponding memory is activated (i.e., a period in which data is written to or read from the corresponding memory).
The ISP 170 may perform the shuffling operation using the first piece of phase data read from the first memory MEM1 and the second piece of phase data read from the second memory MEM2. The data from which errors have been removed by the shuffling operation may be stored again in the second memory MEM2.
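A hedged sketch of how such error removal could work is shown below: because the tap-to-photogate assignment is swapped between the two subframes, averaging the two samplings of each phase gives every phase the same effective tap response. The phase orderings follow the 4-tap swap described above; the averaging step itself is an assumption about one plausible realization.

```python
# Hedged sketch of shuffle-based error removal for a 4-tap pixel.
# Subframe 1 samples phases (0, 90, 180, 270) on taps 1..4; subframe 2
# samples (180, 270, 0, 90). Averaging aligned phase planes is one
# plausible way to cancel static per-tap mismatch.
import numpy as np

def deshuffle(pdata1, pdata2):
    """pdata1, pdata2: float arrays of shape (4, H, W), one plane per tap."""
    # Reorder subframe 2 so that plane k holds the same phase as in
    # subframe 1: its planes (180, 270, 0, 90) become (0, 90, 180, 270).
    pdata2_aligned = pdata2[[2, 3, 0, 1], ...]
    # Each phase was measured once on each of two different taps; averaging
    # gives every phase the same effective tap response, so a fixed tap
    # mismatch no longer biases the phase estimate.
    return 0.5 * (pdata1 + pdata2_aligned)
```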
In some example embodiments, the first memory MEM1 and the second memory MEM2 may be frame memories. That is, each of the first memory MEM1 and the second memory MEM2 may store all of the phase data generated in one subframe. Alternatively, in some example embodiments in which the memory size is optimized, the first piece of phase data obtained in the first subframe may be stored directly in the first memory MEM1 serving as a frame memory, the shuffling operation may be performed using the second memory MEM2 serving as a line memory, and the second piece of phase data from which errors have been removed according to the result of the shuffling operation may then be stored in the frame memory.
Referring to fig. 2 and 9B, an Nth depth frame for generating the Nth piece of depth data may include first to fourth subframes. The first piece of phase data generated in the first subframe may be stored in the first memory MEM1, and the second piece of phase data generated in the second subframe may be stored in the second memory MEM2. The third piece of phase data generated in the third subframe may be stored in the third memory MEM3, and the fourth piece of phase data generated in the fourth subframe may be stored in the fourth memory MEM4. The first to fourth memories MEM1 to MEM4 may be included in the memory 150.
The first and second pieces of phase data may be data generated according to a first modulation frequency. The ISP 170 may perform a first shuffling operation using the first piece of phase data read from the first memory MEM1 and the second piece of phase data read from the second memory MEM2. The first piece of data, from which errors have been removed according to the result of the first shuffling operation, may be stored again in the second memory MEM2.
The third and fourth pieces of phase data may be data generated according to a second modulation frequency different from the first modulation frequency. The ISP 170 may perform a second shuffling operation using the third piece of phase data read from the third memory MEM3 and the fourth piece of phase data read from the fourth memory MEM4. The second piece of data, from which errors have been removed according to the result of the second shuffling operation, may be stored again in the fourth memory MEM4.
The ISP 170 may use the first piece of data generated according to the first shuffling operation and the second piece of data generated according to the second shuffling operation to correct errors caused by the maximum-measurable-distance limit (i.e., phase wrapping). The error-corrected data may be stored again in the fourth memory MEM4.
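The maximum-measurable-distance error arises because each modulation frequency measures distance only modulo its unambiguous range c/(2f). One common way to combine the two wrapped estimates is a small search over wrap counts, sketched below; the patent does not specify its exact method, so this is illustrative only.

```python
# Hedged sketch of dual-frequency de-aliasing: each frequency measures
# distance only modulo its unambiguous range c/(2f), and a small search
# over wrap counts picks the pair of candidates that agree best.
C = 299_792_458.0  # speed of light, m/s

def unwrap_two_freq(d1, f1, d2, f2, max_range):
    """d1, d2: wrapped depth estimates (m) at modulation frequencies f1, f2 (Hz)."""
    r1, r2 = C / (2 * f1), C / (2 * f2)  # unambiguous ranges
    best, best_err = None, float("inf")
    for n1 in range(int(max_range / r1) + 1):
        c1 = d1 + n1 * r1
        for n2 in range(int(max_range / r2) + 1):
            c2 = d2 + n2 * r2
            if abs(c1 - c2) < best_err:
                best, best_err = 0.5 * (c1 + c2), abs(c1 - c2)
    return best

# Example: 100 MHz wraps every ~1.5 m and 80 MHz every ~1.87 m; a target
# at 2.0 m appears as ~0.50 m and ~0.13 m but unwraps to ~2.0 m.
print(unwrap_two_freq(0.501, 100e6, 0.126, 80e6, 7.0))
```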
Referring to fig. 3A and 9C, the first subframe may include an exposure integration time EIT and a readout time. The description of the first subframe shown in fig. 9C may also be applied to other subframes.
The modulation clock may toggle with a constant period during the exposure integration time EIT of the first subframe. The first to fourth photogate signals PGA to PGD may have the same period as the modulation clock, and may toggle with respective phase shifts of 0°, 90°, 180°, and 270°.
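The relationship between the modulation clock and the four photogate signals can be illustrated with the following sketch, which generates four square waves of the same period offset by 0°, 90°, 180°, and 270°; the discrete time base and names are assumptions for illustration only.

```python
# Illustrative generation of the four photogate waveforms: square waves
# with the modulation-clock period, led by 0/90/180/270 degrees.
import numpy as np

def photogate_signals(n_samples, period, phases_deg=(0, 90, 180, 270)):
    t = np.arange(n_samples)
    # A phase of p degrees advances the waveform by p/360 of a period.
    return {p: ((t + p * period // 360) % period < period // 2).astype(int)
            for p in phases_deg}

pg = photogate_signals(n_samples=16, period=8)
print(pg[0])    # e.g., PGA: [1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0]
print(pg[180])  # e.g., PGC: the inverse of PGA
```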
The overflow control signal OG may maintain a logic low level, the storage control signals SG (e.g., SG1 to SG4) may maintain a logic high level, and the selection control signals SEL[0] to SEL[n-1] and the transmission control signals TG[0] to TG[n-1] may maintain a logic low level. The photocharges transferred through the first to fourth transfer transistors TS1 to TS4, respectively, may be stored in the first to fourth storage transistors SS1 to SS4.
The first to fourth photogate signals PGA to PGD may maintain a logic high level during a readout time after the exposure integration time EIT in the first subframe. The overflow control signal OG may maintain a logic high level, and the storage control signals SG (e.g., SG1 to SG4) may maintain a logic low level. The selection control signals SEL[0] to SEL[n-1] and the transmission control signals TG[0] to TG[n-1] may transition to logic high levels so that the first to nth rows may be sequentially turned on.
The ramp signal Ramp may be a signal used by the readout circuit 130 (see, e.g., fig. 2) to perform a correlated double sampling (CDS) operation, and the readout circuit 130 may generate the raw data by comparing the first to fourth pixel signals Vout1 to Vout4 with the ramp signal Ramp. For example, the ramp signal Ramp may decrease or increase with a constant slope.
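A minimal sketch of single-slope conversion with CDS is shown below: a counter value is latched when the ramp crosses the pixel level, and subtracting the reset conversion from the signal conversion removes the pixel's offset. All numeric parameters are assumed for illustration.

```python
# Minimal sketch of single-slope (ramp) conversion with digital CDS.
# The counter value at which the falling ramp crosses the pixel output
# digitizes that level; the difference of the signal and reset
# conversions removes the pixel's offset. All parameters are assumed.
def ramp_convert(v_pixel, ramp_start=1.0, slope=-0.001, n_steps=1024):
    """Return the counter value at which the ramp first crosses v_pixel."""
    for count in range(n_steps):
        if ramp_start + slope * count <= v_pixel:
            return count
    return n_steps - 1

def cds_sample(v_reset, v_signal):
    """Digital CDS: the difference of the two conversions is the net signal."""
    return ramp_convert(v_signal) - ramp_convert(v_reset)

# Example: reset level 0.9 V, signal level 0.6 V -> ~300 LSB of signal.
print(cds_sample(0.9, 0.6))
```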
Fig. 10 and 11 are schematic diagrams illustrating image sensors 1000 and 1000A according to some example embodiments.
Referring to fig. 10, the image sensor 1000 may be a stacked image sensor including a first chip CP1 and a second chip CP2 stacked in a vertical direction. The image sensor 1000 may be an embodiment of the image sensor 14 described with reference to fig. 1 and 2.
The first chip CP1 may include a pixel region PR1 and a pad region PR2, and the second chip CP2 may include a peripheral circuit region PR3 and a pad region PR2'. A pixel array in which a plurality of unit pixels PX are arranged may be formed in the pixel region PR1. Each of the plurality of unit pixels PX may be the same as the unit pixel 111 described with reference to fig. 3A or the unit pixel 111A described with reference to fig. 3B.
The peripheral circuit region PR3 of the second chip CP2 may include a logic circuit block LC and may include a plurality of transistors. For example, the logic circuit block LC may include at least some of the control circuit 120, the readout circuit 130, the preprocessing circuit 140, the memory 150, the calibration circuit 160, the ISP 170, and the output interface circuit 180 described with reference to fig. 2. The peripheral circuit region PR3 may supply a constant signal to each of the plurality of unit pixels PX included in the pixel region PR1, and may read a pixel signal output from each of the plurality of unit pixels PX. In some example embodiments, a main controller, the ISP 170, and the memory 150 may be disposed in a central portion of the peripheral circuit region PR3, and a photogate driver, the readout circuit 130, the output interface circuit 180, a PLL circuit, and the like may be disposed in an outer portion of the peripheral circuit region PR3 surrounding the central portion.
The pad region PR2' of the second chip CP2 may include lower conductive pads PAD'. The number of lower conductive pads PAD' may be two or more, and the lower conductive pads PAD' may respectively correspond to upper conductive pads PAD. The lower conductive pads PAD' may be electrically connected to the upper conductive pads PAD of the first chip CP1 through via structures VS.
Referring to fig. 11, the image sensor 1000A may be a stacked image sensor including a first chip CP1, a third chip CP3, and a second chip CP2 stacked in a vertical direction. The image sensor 1000A may be an embodiment of the image sensor 14 described with reference to fig. 1 and 2.
The first chip CP1 may include a pixel region PR1 and a pad region PR2. A pixel array in which a plurality of unit pixels PX are arranged may be formed in the pixel region PR1. The second chip CP2 may include a peripheral circuit region PR3 and a pad region PR2'. The peripheral circuit region PR3 of the second chip CP2 may include a logic circuit block LC and may include a plurality of transistors. For example, the logic circuit block LC may include at least some of the control circuit 120, the readout circuit 130, the preprocessing circuit 140, the calibration circuit 160, the ISP 170, and the output interface circuit 180 described with reference to fig. 2.
The third chip CP3 may include a memory region PR4 and a pad region PR2″. The memory MEM may be formed in the memory region PR4. The memory MEM may be the same as the memory 150 described with reference to fig. 2 and may include a frame memory. Further, the memory MEM may include the memory 16' described with reference to fig. 4B.
The pad region PR2″ of the third chip CP3 may include conductive pads PAD″. The number of conductive pads PAD″ may be two or more, and the conductive pads PAD″ may be electrically connected to the upper conductive pads PAD or the lower conductive pads PAD' through via structures. The image sensor 1000A of fig. 11 may have a structure in which the first chip CP1, the third chip CP3, and the second chip CP2 are sequentially stacked, but the image sensor 1000A of the inventive concept is not limited thereto. For example, the image sensor 1000A may have a structure in which the first chip CP1, the second chip CP2, and the third chip CP3 are sequentially stacked.
When the term "about" or "substantially" is used in this specification in connection with a numerical value, it is intended that the associated numerical value includes a manufacturing or operating tolerance (e.g., ±10%) around the stated numerical value. Furthermore, when the words "generally" and "substantially" are used in connection with a geometric shape, it is intended that precision of the geometric shape is not required but that latitude for the shape is within the scope of the disclosure. Further, regardless of whether numerical values or shapes are modified by "about" or "substantially," it will be understood that such values and shapes should be construed as including a manufacturing or operating tolerance (e.g., ±10%) around the stated numerical values or shapes.
The system 10 (or other circuitry, e.g., the camera modules 100, 100a, and 100b, the processor 30, the memory module 20, the light source unit 12, the image sensors 14 and 14', the light source driver 210, the light source 220, the pixel array 110, the control circuit 120, the readout circuit 130, the preprocessing circuit 140, the memory 150, the calibration circuit 160, the image signal processor (ISP) 170, the output interface circuit 180, the memories 16 and 16', and sub-components thereof) may include hardware including logic circuits, a hardware/software combination (such as a processor executing software), or a combination thereof. For example, the processing circuitry may more particularly include, but is not limited to, a central processing unit (CPU), an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a system on chip (SoC), a programmable logic unit, a microprocessor, an application specific integrated circuit (ASIC), and the like.
While the inventive concept has been particularly shown and described with reference to some example embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the appended claims.

Claims (20)

1. An image sensor for distance measurement, the image sensor comprising:
a pixel array including a plurality of unit pixels;
a readout circuit configured to read out pixel signals from the pixel array in units of subframes and generate raw data;
a preprocessing circuit configured to preprocess the raw data to generate phase data;
a memory configured to store phase data;
a calibration circuit configured to generate correction data by performing a calibration operation on the phase data;
an image signal processor configured to generate depth information using the correction data; and
an output interface circuit configured to output depth data including the depth information in units of depth frames.
2. The image sensor of claim 1, wherein the plurality of unit pixels have a 4-tap structure, the 4-tap structure including first to fourth taps configured to generate first to fourth pixel signals from the first to fourth photogate signals, respectively.
3. The image sensor of claim 2, wherein,
in the first subframe, the first tap is configured to generate the first pixel signal from the first photogate signal, the second tap is configured to generate the second pixel signal from the second photogate signal, the third tap is configured to generate the third pixel signal from the third photogate signal, and the fourth tap is configured to generate the fourth pixel signal from the fourth photogate signal, and
in the second subframe, the first tap is configured to generate the first pixel signal from the third photogate signal, the second tap is configured to generate the second pixel signal from the fourth photogate signal, the third tap is configured to generate the third pixel signal from the first photogate signal, and the fourth tap is configured to generate the fourth pixel signal from the second photogate signal.
4. The image sensor of claim 3, wherein, with respect to the first photogate signal, the second photogate signal has a phase difference of 90°, the third photogate signal has a phase difference of 180°, and the fourth photogate signal has a phase difference of 270°.
5. The image sensor of claim 1, wherein the plurality of unit pixels have a 2-tap structure, the 2-tap structure comprising two taps configured to generate the first pixel signal and the second pixel signal from the first photogate signal and the second photogate signal, respectively.
6. The image sensor of claim 1, wherein,
in a first subframe, the pixel array is configured to generate the pixel signals from a control signal having a first modulation frequency, and
in a second subframe, the pixel array is configured to generate the pixel signals from a control signal having a second modulation frequency different from the first modulation frequency.
7. The image sensor of claim 1, wherein
the calibration circuit is configured to perform the calibration operation based on calibration information, and
the calibration information includes at least one selected from the group consisting of an intrinsic characteristic parameter related to a physical characteristic of the image sensor, a wobble lookup table related to a wobble effect, a fixed phase pattern noise lookup table related to fixed phase pattern noise, and a temperature parameter related to an external ambient temperature.
8. The image sensor of claim 7, further comprising a memory configured to store the calibration information.
9. A camera module, comprising:
a light source unit configured to transmit an optical transmission signal to an object; and
an image sensor configured to receive a light reception signal reflected from an object,
wherein the image sensor includes:
a pixel array including a plurality of unit pixels,
a control circuit configured to transmit a modulation signal to the light source unit and a plurality of demodulation signals to the pixel array, the modulation signal and the plurality of demodulation signals having the same modulation frequency,
A readout circuit configured to read out pixel signals from the pixel array in units of subframes and generate raw data,
a preprocessing circuit configured to preprocess the raw data to generate phase data,
a frame memory configured to store phase data,
an image signal processor configured to generate depth information based on the phase data, and
an output interface circuit configured to output depth data including the depth information in units of depth frames.
10. The camera module of claim 9, wherein
the plurality of unit pixels have a 4-tap structure, the 4-tap structure including first to fourth taps configured to receive first to fourth photogate signals included in the plurality of demodulation signals, and
with respect to the first photogate signal, the second photogate signal has a phase difference of 90°, the third photogate signal has a phase difference of 180°, and the fourth photogate signal has a phase difference of 270°.
11. The camera module of claim 10, wherein,
in a first subframe, the control circuit is configured to transmit the first photogate signal to the first tap, transmit the second photogate signal to the second tap, transmit the third photogate signal to the third tap, and transmit the fourth photogate signal to the fourth tap, and
in a second subframe, the control circuit is configured to transmit the third photogate signal to the first tap, transmit the fourth photogate signal to the second tap, transmit the first photogate signal to the third tap, and transmit the second photogate signal to the fourth tap.
12. The camera module of claim 11, wherein the image signal processor is configured to generate depth information corresponding to one depth frame based on a first piece of phase data generated in the first subframe and a second piece of phase data generated in the second subframe.
13. The camera module of claim 9, wherein,
in a first subframe, the control circuit is configured to transmit the plurality of demodulation signals having a first modulation frequency to the pixel array, and
in a second subframe, the control circuit is configured to transmit the plurality of demodulation signals having a second modulation frequency different from the first modulation frequency to the pixel array.
14. The camera module of claim 13, wherein the image signal processor is configured to generate depth information corresponding to one depth frame based on a first piece of phase data generated in the first subframe and a second piece of phase data generated in the second subframe.
15. The camera module of claim 9, wherein
the image sensor further includes a calibration circuit configured to generate correction data by performing a calibration operation on the phase data, and
the image signal processor is configured to generate the depth information using the correction data.
16. The camera module of claim 15, wherein
the calibration circuit is configured to perform the calibration operation based on calibration information, and
the calibration information includes at least one selected from the group consisting of an intrinsic characteristic parameter related to a physical characteristic of the image sensor, a wobble lookup table related to a wobble effect, a fixed phase pattern noise lookup table related to fixed phase pattern noise, and a temperature parameter related to an external ambient temperature.
17. The camera module of claim 16, wherein the image sensor further comprises a memory configured to store the calibration information.
18. The camera module of claim 16, further comprising a memory configured to store the calibration information,
wherein the image sensor is configured to receive the calibration information from the memory.
19. A camera module, comprising:
a light source unit configured to transmit an optical transmission signal to an object; and
An image sensor configured to receive a light reception signal reflected from an object,
wherein the image sensor includes:
a pixel array including a plurality of unit pixels,
a control circuit configured to transmit a modulation signal to the light source unit and a plurality of demodulation signals to the pixel array, the modulation signal and the plurality of demodulation signals having the same modulation frequency,
a readout circuit configured to read out pixel signals from the pixel array and generate raw data,
a preprocessing circuit configured to preprocess the raw data to generate phase data,
a memory configured to store phase data,
a calibration circuit configured to generate correction data by performing a calibration operation on the phase data based on the calibration information,
an image signal processor configured to generate depth information using the correction data, and
an output interface circuit configured to output depth data including the depth information.
20. The camera module of claim 19, wherein the camera module is configured to store the calibration information.