WO2021163824A1 - Image signal processing method, related image sensing system, and electronic device - Google Patents

Image signal processing method, related image sensing system, and electronic device

Info

Publication number
WO2021163824A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
floating diffusion
diffusion node
charge
control signal
Prior art date
Application number
PCT/CN2020/075483
Other languages
English (en)
French (fr)
Inventor
赵维民
Original Assignee
深圳市汇顶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority to PCT/CN2020/075483
Priority to CN202080002033.3A (published as CN112042180B)
Publication of WO2021163824A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/704 Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled

Definitions

  • This application relates to a signal processing method, and in particular to an image signal processing method, a related image sensing system, and an electronic device.
  • Many electronic devices provide a slow-motion video recording function. For image sensors with a high pixel count and embedded phase sensing, a readout method that bins and accumulates pixels is often used to increase sensitivity and reduce resolution.
  • In this mode there are usually two pixel operations: one needs to output the result of pixel binning, and the other needs to output individual pixels for phase focusing. Therefore, the pixels need to be read multiple times. However, the slow-motion video recording function places higher requirements on the maximum frame rate the image sensor can provide than other functions do.
  • The frame rate of the image sensor is related to the reading time of the sensing result of the pixel array. Therefore, in order to further optimize the performance of, for example, slow-motion video recording, reducing the reading time has become an important task.
  • One of the objectives of the present application is to disclose a signal processing method, in particular to an image signal processing method, and related image sensing systems and electronic devices, to solve the above-mentioned problems.
  • An embodiment of the present application discloses an image signal processing method for a pixel array. The pixel array includes a first N*N pixel group, and the first N*N pixel group includes a first pixel and a second pixel, where N is greater than 1; one of the first pixel and the second pixel is a phase detection pixel and the other is a non-phase detection pixel. The method includes: providing a first control signal to cause the first pixel to generate a first charge to a first shared floating diffusion node of the pixel array; providing pixel information of the first pixel; providing a second control signal to cause the second pixel to generate a second charge to the first shared floating diffusion node, so that the first charge and the second charge are combined and accumulated on the first shared floating diffusion node; and providing pixel information of the equivalent charge obtained after the first charge and the second charge are combined and accumulated.
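The claimed two-step readout can be sketched numerically. This is a hypothetical illustration, not the patent's implementation: the supply voltage VDD and the voltage drops VQ1 and VQ2 are made-up values, and an ideal unity-gain readout is assumed.

```python
# Hypothetical sketch of the claimed readout sequence for one pixel
# group. VDD, VQ1, VQ2 are illustrative values only.
VDD = 3.3          # assumed reset level of the shared floating diffusion node
VQ1, VQ2 = 0.4, 0.3  # assumed voltage drops caused by the two pixels' charges

# Step 1: first control signal -> first pixel transfers its charge.
fd = VDD - VQ1
first_pixel_reading = fd        # pixel information of the first pixel

# Step 2: second control signal -> second charge is combined and
# accumulated on the same floating diffusion node (no reset in between).
fd -= VQ2
combined_reading = fd           # pixel information of the equivalent charge

print(first_pixel_reading)      # VDD - VQ1
print(combined_reading)         # VDD - (VQ1 + VQ2)
```

Because the node is not reset between the two transfers, the second reading already contains the binned result, which is the key to saving readout time.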
  • An embodiment of the present application discloses an image sensing system.
  • The image sensing system includes a pixel array comprising a plurality of pixels. The pixel array includes a first N*N pixel group, which includes a first pixel and a second pixel, where N is greater than 1, and a first shared floating diffusion node selectively coupled to the first N*N pixel group. In response to a first control signal, the first pixel generates a first charge to the first shared floating diffusion node, and in response to a second control signal, the second pixel generates a second charge to the first shared floating diffusion node, so that the first charge and the second charge are combined and accumulated on the first shared floating diffusion node. The second control signal is later than the first control signal, and one of the first pixel and the second pixel is a phase detection pixel while the other is a non-phase detection pixel.
  • An embodiment of the present application discloses an electronic device.
  • The electronic device includes the aforementioned image sensing system.
  • The signal processing method disclosed in this application obtains the charge information of the phase detection pixel during the same period in which the combined accumulated charge information of adjacent pixels is obtained. After the combined accumulated charge information is obtained, no additional working time is needed to obtain the charge information of the phase detection pixel. Therefore, the reading time of the sensing result of the pixel array can be reduced.
  • FIG. 1 is a schematic block diagram of an embodiment of a pixel array of the image sensing system of this application.
  • FIG. 2 is a circuit diagram of an embodiment of a pixel circuit of the image sensing system of this application.
  • FIGS. 3A to 8A illustrate the signal combination patterns of the first embodiment.
  • FIGS. 3B to 8B correspond to FIGS. 3A to 8A, respectively, and show the operation of the pixel circuit of FIG. 2 at each time point.
  • FIGS. 9A to 12A illustrate the signal combination patterns of the second embodiment.
  • FIGS. 9B to 12B correspond to FIGS. 9A to 12A, respectively, and show the operation of the pixel circuit of FIG. 2 at each time point.
  • FIG. 13 is a schematic block diagram of another embodiment of the pixel array of this application.
  • FIG. 14A illustrates the signal combination pattern used in the third embodiment for the first 2*2 pixel group.
  • FIG. 14B illustrates the signal combination pattern used in the third embodiment for the second 2*2 pixel group.
  • FIG. 15 is a schematic diagram of an embodiment in which a chip including the pixel circuit of FIG. 2 and an image sensor including the pixel array of FIG. 1 or FIG. 13 are applied to an electronic device.
  • Embodiments may include configurations in which the first and second features are in direct contact with each other, and may also include configurations in which additional components are formed between the first and second features, so that the first and second features are not in direct contact.
  • The present disclosure may reuse reference symbols and/or labels in multiple embodiments. Such repetition is for brevity and clarity, and does not in itself indicate a relationship between the different embodiments and/or configurations discussed.
  • Spatially relative terms such as "below", "beneath", "under", "above", "over", and the like may be used here to facilitate describing the relationship of one component or feature to another component or feature as shown in the drawings.
  • These spatially relative terms cover not only the orientation shown in the figures but also the various orientations of the device in use or operation.
  • The device may be placed in other orientations (for example, rotated 90 degrees or otherwise oriented), and these spatially relative descriptions should be interpreted accordingly.
  • The pixel array of the image sensor of an electronic device is generally composed of a plurality of red pixels, a plurality of green pixels, and a plurality of blue pixels.
  • The color signals provided by the red, green, and blue pixels together create an image that can be observed by the human eye.
  • To increase resolution, the total number of pixels of the pixel array must be increased. For a given pixel array area, a larger total number of pixels means a smaller area for each single pixel, which degrades the signal-to-noise ratio.
  • The binning (merge) operation is a method of image readout. In detail, the charges generated by multiple adjacent pixels are combined and accumulated, and the combined accumulated charge is used as the charge of one equivalent pixel (equivalent to the multiple adjacent pixels).
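The binning idea described above can be illustrated with a minimal sketch. The 4x4 "charge map" below is made-up sample data; only the block-summing itself follows the text:

```python
# Minimal illustration of 2*2 binning: the charges of four adjacent
# pixels are combined into one equivalent-pixel value.
charges = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]

def bin_2x2(grid):
    """Sum each 2*2 block into a single equivalent-pixel value."""
    return [
        [grid[r][c] + grid[r][c + 1] + grid[r + 1][c] + grid[r + 1][c + 1]
         for c in range(0, len(grid[0]), 2)]
        for r in range(0, len(grid), 2)
    ]

print(bin_2x2(charges))  # [[14, 22], [46, 54]]
```

The output has half the resolution in each dimension, but each equivalent pixel carries four pixels' worth of charge, which is why binning trades resolution for sensitivity.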
  • The pixels of the pixel array are used not only to provide the information needed to create the image but also to provide the information needed for a phase difference auto focus (PDAF) operation.
  • Some of the pixels of the pixel array are configured as phase detection pixels. After the charge information provided by the phase detection pixels is obtained, the phase difference focusing operation can be completed.
  • When the electronic device is instructed, or itself determines based on the current operating environment, to perform both operations (the binning operation and the phase difference focusing operation), it performs the two operations sequentially. However, this lengthens the reading time of the sensing result of the pixel array. For example, before each of the two operations, other pre-operations must be performed, such as a pre-charging operation or re-exposure of the phase pixels; accordingly, at least two pre-operations are required. Follow-up operations may also need to be performed after each of the two operations. As a result, the reading time becomes longer.
  • In the slow-motion video recording mode, the requirements on the reading time are more stringent than in other modes, and an overly long reading time may cause the slow-motion video recording function to fail. Therefore, the reading time can be reduced by improving the flow of the above two operations. The details are as follows.
  • FIG. 1 is a block diagram of an embodiment of the pixel array 100 of the image sensing system 10 of this application.
  • The pixel array 100 includes a plurality of pixels 102 (6*8 pixels 102 in the figure, but the present application is not limited to this).
  • Each pixel 102 is equipped with a color filter (indicated by R, G, or B), so that each pixel 102 receives only light of a specific color (or a specific wavelength range).
  • Each pixel 102 is named after the type of its corresponding R, G, or B color filter: some pixels are referred to as red (R) pixels 102R, some as blue (B) pixels 102B, and others as green (G) pixels 102G.
  • However, this application is not limited to red, blue, and green; in other embodiments, the filters may also be white, yellow, or other colors.
  • The multiple pixels 102 of the pixel array 100 are divided into multiple N*N pixel groups 110, where N is greater than 1.
  • In this embodiment, N is 2: four pixels 102 constitute one 2*2 pixel group 110.
  • The charges generated by the four pixels 102 are combined and accumulated, and the combined accumulated charge is used as the charge of one 2*2 pixel group 110; the details are described in the embodiments of FIGS. 4A and 4B.
  • At least one 2*2 pixel group 110 of the pixel array 100 includes phase detection pixels 104L and 104R (shown with hatching) and non-phase detection pixels 104NL and 104NR, wherein the phase detection pixels 104L and 104R constitute a phase detection pixel pair.
  • In this embodiment, only the 2*2 pixel group 110 that includes green pixels 102G has the phase detection pixels 104L and 104R, and the remaining 2*2 pixel groups 110 do not include phase detection pixels.
  • However, this application is not limited to this.
  • In other embodiments, a 2*2 pixel group 110 including pixels 102 of other colors may have the phase detection pixels 104L and 104R.
  • The structural difference between the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR is that the phase detection pixels 104L and 104R share one elliptical microlens OML, while each non-phase detection pixel has its own microlens ML (that is, the microlens ML of a non-phase detection pixel is not shared with other pixels).
  • However, this application is not limited to this.
  • In other embodiments, the phase detection pixels can be implemented in other ways.
  • FIG. 2 is a circuit diagram of an embodiment in which the pixel circuit 200 of the image sensing system 10 of the present application is coupled to the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR of FIG. 1.
  • The pixel circuit 200 is used to read the sensing result of the pixel array 100.
  • Specifically, the pixel circuit 200 is used to read the sensing results of the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR.
  • The pixel circuit 200 includes transmission gates M1, M2, M3, and M4, a reset gate MR, a shared floating diffusion node FD, a capacitor C, a source follower MS, a selection gate ME, and a current source I.
  • The input end of the transmission gate M1 is coupled to the phase detection pixel 104L, the output end is coupled to the shared floating diffusion node FD, and the controlled end receives the control signal TG1.
  • The transmission gate M1 is used to selectively transfer the charge generated by the phase detection pixel 104L to the shared floating diffusion node FD in response to the control signal TG1.
  • In this embodiment, the transmission gate M1 is an N-type transistor. Accordingly, the drain of the transmission gate M1 is coupled to the phase detection pixel 104L, the source is coupled to the shared floating diffusion node FD, and the gate receives the control signal TG1.
  • The drain and source of the transmission gate M1 can be interchanged depending on the relative magnitudes of the voltages applied to them. In this application, for brevity, when the transmission gate M1 is turned on, its source-drain voltage is regarded as zero.
  • The control signal TG1 may be provided by other circuits (not shown) of the image sensing system 10.
  • In some embodiments, the control signal TG1 may be provided by a circuit external to the image sensing system 10.
  • The input end of the transmission gate M2 is coupled to the phase detection pixel 104R, the output end is coupled to the shared floating diffusion node FD, and the controlled end receives the control signal TG2.
  • The transmission gate M2 is used to selectively transfer the charge generated by the phase detection pixel 104R to the shared floating diffusion node FD in response to the control signal TG2.
  • In this embodiment, the transmission gate M2 is an N-type transistor. Accordingly, the drain of the transmission gate M2 is coupled to the phase detection pixel 104R, the source is coupled to the shared floating diffusion node FD, and the gate receives the control signal TG2.
  • The drain and source of the transmission gate M2 can be interchanged depending on the relative magnitudes of the voltages applied to them. In this application, for brevity, when the transmission gate M2 is turned on, its source-drain voltage is regarded as zero.
  • The control signal TG2 may be provided by other circuits of the image sensing system 10. In some embodiments, the control signal TG2 may be provided by a circuit external to the image sensing system 10.
  • The input end of the transmission gate M3 is coupled to the non-phase detection pixel 104NL, the output end is coupled to the shared floating diffusion node FD, and the controlled end receives the control signal TG3.
  • The transmission gate M3 is used to selectively transfer the charge generated by the non-phase detection pixel 104NL to the shared floating diffusion node FD in response to the control signal TG3.
  • In this embodiment, the transmission gate M3 is an N-type transistor. Accordingly, the drain of the transmission gate M3 is coupled to the non-phase detection pixel 104NL, the source is coupled to the shared floating diffusion node FD, and the gate receives the control signal TG3.
  • The drain and source of the transmission gate M3 can be interchanged depending on the relative magnitudes of the voltages applied to them. In this application, for brevity, when the transmission gate M3 is turned on, its source-drain voltage is regarded as zero.
  • The control signal TG3 may be provided by other circuits of the image sensing system 10. In some embodiments, the control signal TG3 may be provided by a circuit external to the image sensing system 10.
  • The input end of the transmission gate M4 is coupled to the non-phase detection pixel 104NR, the output end is coupled to the shared floating diffusion node FD, and the controlled end receives the control signal TG4.
  • The transmission gate M4 is used to selectively transfer the charge generated by the non-phase detection pixel 104NR to the shared floating diffusion node FD in response to the control signal TG4.
  • The shared floating diffusion node FD is selectively coupled to the 2*2 pixel group 110.
  • In this embodiment, the transmission gate M4 is an N-type transistor.
  • Accordingly, the drain of the transmission gate M4 is coupled to the non-phase detection pixel 104NR, the source is coupled to the shared floating diffusion node FD, and the gate receives the control signal TG4.
  • The drain and source of the transmission gate M4 can be interchanged depending on the relative magnitudes of the voltages applied to them. In this application, for brevity, when the transmission gate M4 is turned on, its source-drain voltage is regarded as zero.
  • The control signal TG4 may be provided by other circuits of the image sensing system 10. In some embodiments, the control signal TG4 may be provided by a circuit external to the image sensing system 10.
  • One end of the reset gate MR receives the supply voltage VDD, the other end is coupled to the shared floating diffusion node FD, and the controlled end receives the reset signal RST.
  • The reset gate MR is used to selectively reset the voltage of the shared floating diffusion node FD to the supply voltage VDD based on the reset signal RST.
  • In this embodiment, the reset gate MR is an N-type transistor; however, this application is not limited to this. In this application, for brevity, when the reset gate MR is turned on, its source-drain voltage is regarded as zero.
  • The reset signal RST may be provided by other circuits of the image sensing system 10. In some embodiments, the reset signal RST may be provided by a circuit external to the image sensing system 10.
  • One end of the capacitor C is coupled to the shared floating diffusion node FD, and the other end is coupled to the supply voltage VSS.
  • The capacitor C is used to store electrical energy.
  • In this embodiment, the supply voltage VSS is the reference ground voltage.
  • In some embodiments, the capacitor C is formed by parasitic capacitance.
  • The input terminal of the source follower MS is coupled to the shared floating diffusion node FD, and the output terminal is coupled to the selection gate ME.
  • The source follower MS is used to amplify, based on its own gain, the sensing signal received at its input terminal (that is, the voltage on the shared floating diffusion node FD), and to output the amplified sensing signal from its output terminal.
  • In this embodiment, the source follower MS is an N-type transistor.
  • Accordingly, the drain of the source follower MS receives the supply voltage VDD, the gate is coupled to the shared floating diffusion node FD, and the source is coupled to the selection gate ME.
  • The input terminal of the selection gate ME is coupled to the output terminal of the source follower MS, the output terminal outputs the output signal VOUT, and the controlled terminal receives the row selection signal RS.
  • The selection gate ME is used to selectively transfer the amplified sensing signal received at its input terminal to its output terminal based on the row selection signal RS, and to output the amplified sensing signal at the output terminal as the output signal VOUT.
  • In this embodiment, the selection gate ME is an N-type transistor.
  • Accordingly, the drain of the selection gate ME is coupled to the source follower MS, the gate receives the row selection signal RS, and the source outputs the output signal VOUT.
  • In this application, for brevity, when the selection gate ME is turned on, its drain-source voltage is regarded as zero.
  • The current source I is coupled between the source of the selection gate ME and the supply voltage VSS, and is used to provide a stable current.
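The pixel circuit described above can be summarized with a toy behavioral model (not a circuit-level simulation). The class name, method names, and numeric values below are illustrative assumptions; only the behavior mirrors the text: the reset gate pulls FD to VDD, each transfer gate subtracts its pixel's voltage drop, and the read path is idealized as a unity-gain source follower with the selection gate on.

```python
# Toy behavioral model of the pixel circuit 200. Names and values are
# illustrative; the source follower is idealized with gain 1.
class PixelCircuit:
    def __init__(self, vdd=3.3):
        self.vdd = vdd
        self.fd = vdd          # shared floating diffusion node FD

    def reset(self):
        """RST pulse: recharge capacitor C so FD returns to VDD."""
        self.fd = self.vdd

    def transfer(self, vq):
        """TGx pulse: accumulate one pixel's charge (voltage drop vq) on FD."""
        self.fd -= vq

    def read(self):
        """RS pulse: output VOUT based on the FD voltage."""
        return self.fd

pc = PixelCircuit()
pc.reset()
for vq in (0.4, 0.3, 0.2, 0.1):   # illustrative VQ1..VQ4 of the 2*2 group
    pc.transfer(vq)
print(pc.read())                  # VDD - (VQ1+VQ2+VQ3+VQ4), about 2.3
```

Transferring several pixels' charges between resets models the binning readout; transferring one pixel's charge models an individual phase-pixel readout.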
  • The image signal processing method used to read the sensing result of the pixel array 100 will be described in the following embodiments. Simply put, the image signal processing methods of the different embodiments are implemented with different signal combination patterns of the control signals TG1 to TG4.
  • FIGS. 3A to 8A illustrate the signal combination patterns of the first embodiment.
  • FIGS. 3B to 8B respectively correspond to FIGS. 3A to 8A to respectively show the operation of the pixel circuit 200 of FIG. 2 at each time point.
  • FIGS. 3A and 4A are used to illustrate the merge operation.
  • FIG. 5A is used to illustrate the pre-work after the merging operation and before the phase difference focusing operation.
  • FIGS. 6A to 8A are used to illustrate the phase difference focusing operation performed after the pre-work is completed.
  • Before the time point t1, the reset signal RST is pulled to the positive edge, and the control signals TG1 to TG4 are kept at low levels. Accordingly, referring to FIG. 3B, the transmission gates M1 to M4 remain non-conductive, and the reset gate MR is turned on when the reset signal RST is pulled to the positive edge.
  • The supply voltage VDD charges the capacitor C through the reset gate MR, so that the potential of the shared floating diffusion node FD is reset to the supply voltage VDD.
  • Then, the transmission gates M1 to M4 are turned on when the control signals TG1 to TG4 are pulled to the positive edge, respectively. Therefore, when the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR are illuminated, they respectively perform photoelectric conversion and generate charges to the shared floating diffusion node FD.
  • The charges generated by the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR are combined and accumulated on the shared floating diffusion node FD.
  • In other words, the capacitor C is discharged through the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR, so that the potential of the shared floating diffusion node FD is reduced.
  • The charge generated by the phase detection pixel 104L receiving light causes a voltage drop VQ1, the charge of the phase detection pixel 104R a voltage drop VQ2, the charge of the non-phase detection pixel 104NL a voltage drop VQ3, and the charge of the non-phase detection pixel 104NR a voltage drop VQ4. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ1+VQ2+VQ3+VQ4), which serves as the sensing signal.
  • The selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ1+VQ2+VQ3+VQ4) of the shared floating diffusion node FD.
  • Accordingly, the digital signal processor (not shown) completes the binning operation of the 2*2 pixel group 110 where the phase detection pixels 104L and 104R are located.
  • In this embodiment, the phase detection pixels 104L and 104R and the non-phase detection pixel 104NR are all used to receive green light, so the combined accumulated charge relates to green light. However, the phase detection pixels 104L and 104R do not necessarily require a green filter; they may absorb white light or light of other colors.
  • Before the phase difference focusing operation, the voltage of the shared floating diffusion node FD is first reset to avoid other noise on the shared floating diffusion node FD.
  • The operation of the pixel circuit 200 in FIG. 5B is the same as the operation in FIG. 3B, and will not be repeated here.
  • This application gives only an exemplary pre-operation. In practice, the pre-operations also include other additional operations, which take up considerable time.
  • The charge generated by the phase detection pixel 104L receiving light causes a voltage drop VQ1. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ1), which serves as the sensing signal.
  • The selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ1) of the shared floating diffusion node FD. Since the value of the supply voltage VDD is known, the digital signal processor can perform a mathematical operation on the voltage VDD-(VQ1) based on the supply voltage VDD to restore the voltage drop VQ1, which is used as the first phase detection signal.
  • Next, the voltage of the shared floating diffusion node FD is reset to clear the charge information of the phase detection pixel 104L, that is, the voltage drop VQ1 caused by the phase detection pixel 104L.
  • The operation of the pixel circuit 200 in FIG. 7B is the same as the operation in FIG. 3B, and will not be repeated here.
  • The clearing of the charge information of the phase detection pixel 104L given in this embodiment is only an example. In other embodiments of the present application, other methods may be used to clear the charge information of the phase detection pixel 104L.
  • Referring to FIG. 8A, after the time point t5 and before the time point t6, only the control signal TG2 is pulled to the positive edge.
  • The transmission gate M2 is turned on when the control signal TG2 is pulled to the positive edge. Therefore, when the phase detection pixel 104R is illuminated, it performs photoelectric conversion and generates charge to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixel 104R, so that the potential of the shared floating diffusion node FD is reduced.
  • The charge generated by the phase detection pixel 104R receiving light causes a voltage drop VQ2. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ2).
  • The selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ2) of the shared floating diffusion node FD. Since the value of the supply voltage VDD is known, the digital signal processor can perform a mathematical operation on the voltage VDD-(VQ2) based on the supply voltage VDD to restore the voltage drop VQ2, which is used as the second phase detection signal.
  • Based on the first and second phase detection signals, the digital signal processor completes the phase difference focusing operation.
  • In total, the first embodiment spends the time interval T1 between the time points t1 and t6.
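The first embodiment's full flow can be traced numerically. This is a sketch under assumed values (VDD and the VQ drops are made up), showing why three separate reset/read cycles are needed: one for the binning result and one for each phase detection pixel.

```python
# Sketch of the first embodiment's readout flow (FIGS. 3A-8A):
# a binning read, then two separate phase reads, each preceded by a
# reset. All numeric values are illustrative.
VDD = 3.3
VQ = {"TG1": 0.4, "TG2": 0.3, "TG3": 0.2, "TG4": 0.1}  # VQ1..VQ4

readings = {}

fd = VDD                      # reset before t1 (FIG. 3A)
fd -= sum(VQ.values())        # t1: TG1-TG4 pulsed together (FIG. 4A)
readings["binning"] = fd      # VDD - (VQ1+VQ2+VQ3+VQ4)

fd = VDD                      # pre-work: reset FD again (FIG. 5A)
fd -= VQ["TG1"]               # TG1 only (FIG. 6A)
readings["phase_L"] = fd      # VDD - VQ1

fd = VDD                      # clear 104L's charge info (FIG. 7A)
fd -= VQ["TG2"]               # TG2 only (FIG. 8A)
readings["phase_R"] = fd      # VDD - VQ2

# Three reset/read cycles in total -> the time interval T1 (t1..t6).
print(readings)
```

The repeated resets between reads are exactly the overhead that the second embodiment avoids by accumulating charges without resetting.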
  • FIGS. 9A to 12A illustrate the signal combination patterns of the second embodiment.
  • FIGS. 9B to 12B correspond to FIGS. 9A to 12A, respectively, and show the operation of the pixel circuit 200 of FIG. 2 at each time point.
  • The transmission gate M1 is turned on when the control signal TG1 is pulled to the positive edge. Therefore, when the phase detection pixel 104L is illuminated, it performs photoelectric conversion in response to the control signal TG1 and generates the first charge to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixel 104L, so that the potential of the shared floating diffusion node FD is lowered.
  • The first charge generated by the phase detection pixel 104L receiving light causes a voltage drop VQ1. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ1). It should be noted that the voltage value of the shared floating diffusion node FD can be obtained through circuit simulation or actual measurement.
  • The selection gate ME is turned on and provides the output voltage VOUT based on the voltage VDD-(VQ1) of the shared floating diffusion node FD. Accordingly, the pixel circuit 200 provides the pixel information of the phase detection pixel 104L.
  • When the control signal TG1 is pulled to a positive edge, the control signals TG2 to TG4 remain at low levels. Accordingly, the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR remain disconnected from the shared floating diffusion node FD. Therefore, the potential of the shared floating diffusion node FD is independent of the charges generated by the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR.
  • Next, the capacitor C is discharged through the phase detection pixel 104R, so that the potential of the shared floating diffusion node FD is reduced from the voltage VDD-(VQ1) to the voltage VDD-(VQ1+VQ2), where the second charge generated by the phase detection pixel 104R receiving light causes a voltage drop VQ2.
  • The selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ1+VQ2) of the shared floating diffusion node FD. It should be noted that the voltage value of the shared floating diffusion node FD can be obtained through circuit simulation or actual measurement.
  • From the voltages VDD-(VQ1) and VDD-(VQ1+VQ2), the voltage drop VQ2 caused by the phase detection pixel 104R can be calculated, that is, the second phase detection signal described in the embodiment of FIGS. 8A and 8B. Then, based on the calculated voltage drop VQ2 and the voltage VDD-(VQ1+VQ2), the voltage drop VQ1 caused by the phase detection pixel 104L can be calculated, that is, the first phase detection signal described in the embodiment of FIGS. 6A and 6B. In some embodiments, this calculation may be completed by a digital signal processor. Then, based on the first phase detection signal and the second phase detection signal, the digital signal processor completes the phase difference focusing operation.
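The recovery step described above is simple arithmetic on the two cumulative readings. A minimal sketch, with illustrative values for VDD and the two sampled FD voltages:

```python
# Recovering the two phase detection signals from the second
# embodiment's cumulative readings. read1 and read2 are the FD voltages
# sampled after TG1 and after TG2; numeric values are illustrative.
VDD = 3.3
read1 = 2.9              # VDD - VQ1
read2 = 2.6              # VDD - (VQ1 + VQ2)

VQ2 = read1 - read2      # second phase detection signal
VQ1 = VDD - read2 - VQ2  # first phase detection signal, as in the text

print(round(VQ1, 6), round(VQ2, 6))  # 0.4 0.3
```

Because both phase signals fall out of readings taken on the way to the binned total, no separate phase-pixel exposure or extra reset cycle is needed.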
  • When the control signal TG2 is pulled to a positive edge, the control signals TG1, TG3, and TG4 remain at low levels. Accordingly, the phase detection pixel 104L and the non-phase detection pixels 104NL and 104NR remain disconnected from the shared floating diffusion node FD. Therefore, the potential of the shared floating diffusion node FD is unaffected by the charges generated by the non-phase detection pixels 104NL and 104NR.
  • Next, the control signals TG3 and TG4 are pulled to the positive edge, and the control signals TG1 and TG2 remain at low levels.
  • The transmission gates M3 and M4 are turned on when the control signals TG3 and TG4 are pulled to the positive edge, respectively. Therefore, when the non-phase detection pixels 104NL and 104NR are illuminated, they perform photoelectric conversion and generate the third and fourth charges, respectively, to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the non-phase detection pixels 104NL and 104NR, so that the potential of the shared floating diffusion node FD is lowered.
  • The pixel circuit 200 then provides the pixel information of the equivalent charge obtained by combining and accumulating the charges of the phase detection pixel 104L, the phase detection pixel 104R, the non-phase detection pixel 104NL, and the non-phase detection pixel 104NR. Accordingly, the digital signal processor completes the binning operation of the 2*2 pixel group 110 where the phase detection pixels 104L and 104R are located.
  • the signal combination pattern provided in the second embodiment is only an example. Any pattern from which the voltage drop VQ1 caused by the first charge generated by the phase detection pixel 104L and the voltage drop VQ2 caused by the second charge generated by the phase detection pixel 104R can be obtained by solving simultaneous equations falls within the scope of this application.
  • for example, after the time point t1 and before the time point t2, the non-phase detection pixels 104NL and 104NR can be turned on while the phase detection pixels 104L and 104R are not, to establish the voltage (VDD-(VQ3+VQ4)) on the shared floating diffusion node FD. Then, after the time point t2 and before the time point t3, the phase detection pixel 104L is turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104R are not, to establish the voltage (VDD-(VQ1+VQ3+VQ4)).
  • then, after the time point t3 and before the time point t4, the phase detection pixel 104R is turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104L are not, to establish the voltage (VDD-(VQ1+VQ2+VQ3+VQ4)).
  • alternatively, after the time point t1 and before the time point t2, the phase detection pixel 104L can be turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104R are not, to establish the voltage (VDD-(VQ1)) on the shared floating diffusion node FD. Then, after the time point t2 and before the time point t3, the non-phase detection pixels 104NL and 104NR are turned on while the phase detection pixels 104L and 104R are not, to establish the voltage (VDD-(VQ1+VQ3+VQ4)).
  • then, after the time point t3 and before the time point t4, the phase detection pixel 104R is turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104L are not, to establish the voltage (VDD-(VQ1+VQ2+VQ3+VQ4)).
  • FIG. 13 is a block diagram of an embodiment of the pixel array 300 of this application.
  • the pixel array 300 is similar to the pixel array 100 described and illustrated in FIG. 1; the difference is that the phase detection pixels 104L and 104R are located in different 2*2 pixel groups 110, but the phase detection pixels 104L and 104R still share the elliptical microlens OML and constitute a phase detection pixel pair.
  • the 2*2 pixel group 110 where the phase detection pixel 104L is located is renamed as the 2*2 pixel group 310A
  • the 2*2 pixel group 110 where the phase detection pixel 104R is located is renamed as the 2*2 pixel group 310B.
  • Each of the 2*2 pixel groups 310A and 310B has a corresponding pixel circuit 200.
  • the common floating diffusion node FD selectively coupled to the 2*2 pixel group 310A is called the first common floating diffusion node
  • the common floating diffusion node FD selectively coupled to the 2*2 pixel group 310B is called the second common floating diffusion node.
  • FIG. 14A illustrates the signal combination pattern for the 2*2 pixel group 310A of the third embodiment.
  • FIG. 14B illustrates the signal combination pattern for the 2*2 pixel group 310B of the third embodiment.
  • the reset signal RST is pulled to the positive edge, and the control signals TG1 to TG4 are kept at low levels. Therefore, the voltage of the first common floating diffusion node is VDD.
  • the voltage of the first common floating diffusion node can be expressed as VDD-(VQ1), and the voltage drop VQ1 caused by the phase detection pixel 104L can be calculated as described in the embodiment of FIGS. 8A and 8B, serving as the first phase detection signal.
  • the control signal TG1 is pulled to a positive edge
  • the control signals TG2 to TG4 remain at low levels. Accordingly, the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR are disconnected from the first common floating diffusion node in response to the control signal TG1. Therefore, the voltage of the first common floating diffusion node is independent of the charges generated by the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR.
  • the voltage of the first common floating diffusion node can be expressed as VDD-(VQ1+VQ2+VQ3+VQ4), that is, the voltage described in the embodiments of FIGS. 4A and 4B.
  • the digital signal processor completes the merging operation of the 2*2 pixel group 310A where the phase detection pixel 104L is located.
  • the reset signal RST is pulled to the positive edge, and the control signals TG1 to TG4 are kept at low levels. Therefore, the voltage of the second common floating diffusion node is VDD.
  • the voltage of the second shared floating diffusion node can be expressed as VDD-(VQ2), and the voltage drop VQ2 caused by the phase detection pixel 104R can be calculated as described in the embodiment of FIGS. 8A and 8B, serving as the second phase detection signal. Then, based on the first phase detection signal and the second phase detection signal, the digital signal processor completes a phase difference focusing operation. On the other hand, when the control signal TG2 is pulled to a positive edge, the control signals TG1, TG3 and TG4 remain at low levels.
  • the phase detection pixel 104L and the non-phase detection pixels 104NL and 104NR are disconnected from the second shared floating diffusion node in response to the control signal TG2. Therefore, the voltage of the second shared floating diffusion node is independent of the charges generated by the phase detection pixel 104L and the non-phase detection pixels 104NL and 104NR.
  • the voltage of the second shared floating diffusion node can be expressed as VDD-(VQ1+VQ2+VQ3+VQ4), that is, the voltage described in the embodiments of FIGS. 4A and 4B.
  • the digital signal processor completes the merging operation of the 2*2 pixel group 310B where the phase detection pixel 104R is located.
  • with the image signal processing method of the third embodiment, performing the two operations (the merging operation and the phase difference focusing operation) takes the time interval T3 between the time points t1 and t3, which is shorter than the time interval T1. Accordingly, when the image signal processing method of the third embodiment is used to read the sensing results of the pixel array 300, the readout time can be reduced, which benefits, for example, the execution of the slow-motion video recording function.
  • since the phase detection pixels 104L and 104R are respectively arranged in different 2*2 pixel groups 310A and 310B, the voltage drops VQ1 and VQ2 can be obtained without using simultaneous equations.
  • Simultaneous equations involve mathematical operations of multiple equations, and the equations are obtained through circuit operations, so it is difficult to avoid noise in the equations.
  • the accuracy of the values of the voltage drops VQ1 and VQ2 obtained in the third embodiment is relatively high, so the accuracy of the phase difference focusing operation is relatively high.
  • FIG. 15 is a schematic diagram of an embodiment in which a chip including a pixel circuit 200 and an image sensor including a pixel array 100 or 300 are applied to an electronic device 40.
  • the electronic device 40 includes a pixel circuit 200 and a pixel array 100 or 300.
  • the electronic device 40 is any handheld electronic device such as a smart phone, a personal digital assistant, a handheld computer system, or a tablet computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Solid State Image Pick-Up Elements (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The present application discloses an image signal processing method, and a related image sensing system (10) and electronic device. The method is used for a pixel array (100). The pixel array includes an N*N pixel group (110), and the N*N pixel group includes a first pixel (104L) and a second pixel (104NL), where N is greater than 1, one of the first pixel and the second pixel is a phase detection pixel, and the other is a non-phase detection pixel. The method includes: providing a first control signal so that the first pixel generates first charge to a first shared floating diffusion node of the pixel array; providing pixel information of the first pixel; providing a second control signal so that the second pixel generates second charge to the first shared floating diffusion node, so that the first charge and the second charge merge and accumulate on the first shared floating diffusion node; and providing pixel information of the equivalent charge obtained after the first charge and the second charge merge and accumulate.

Description

Image signal processing method and related image sensing system and electronic device

Technical Field

The present application relates to a signal processing method, and in particular to an image signal processing method and a related image sensing system and electronic device.

Background

With the development and advancement of technology, image sensors provide increasingly diverse functions. For example, these functions include a slow-motion video recording function. For a high-pixel-count image sensor with embedded phase sensing, a binning (merge-and-accumulate) readout is often used to increase sensitivity at the cost of resolution. In this mode there are usually two pixel operation modes: one outputs the binned result, and the other outputs individual pixels for phase-difference focusing, so the pixels need to be read multiple times. However, the slow-motion video recording function places higher demands on the maximum frame rate that the image sensor can provide than other functions do, and the frame rate of the image sensor is related to the readout time of the sensing results of its pixel array. Therefore, to further optimize the performance of functions such as slow-motion video recording, improving the readout time has become an important task.

Summary of the Invention

One object of the present application is to disclose a signal processing method, and in particular an image signal processing method and a related image sensing system and electronic device, to solve the above problems.

An embodiment of the present application discloses an image signal processing method for a pixel array. The pixel array includes a first N*N pixel group, and the first N*N pixel group includes a first pixel and a second pixel, where N is greater than 1, one of the first pixel and the second pixel is a phase detection pixel, and the other is a non-phase detection pixel. The method includes: providing a first control signal so that the first pixel generates first charge to a first shared floating diffusion node of the pixel array; providing pixel information of the first pixel; providing a second control signal so that the second pixel generates second charge to the first shared floating diffusion node, so that the first charge and the second charge merge and accumulate on the first shared floating diffusion node; and providing pixel information of the equivalent charge obtained after the first charge and the second charge merge and accumulate.

An embodiment of the present application discloses an image sensing system. The image sensing system includes a pixel array including a plurality of pixels. The pixel array includes: a first N*N pixel group including a first pixel and a second pixel, where N is greater than 1; and a first shared floating diffusion node selectively coupled to the first N*N pixel group. In response to a first control signal, the first pixel generates first charge to the first shared floating diffusion node; and in response to a second control signal, the second pixel generates second charge to the first shared floating diffusion node, so that the first charge and the second charge merge and accumulate on the first shared floating diffusion node. The second control signal is later than the first control signal, and one of the first pixel and the second pixel is a phase detection pixel while the other is a non-phase detection pixel.

An embodiment of the present application discloses an electronic device. The electronic device includes the aforementioned image sensing system.

The signal processing method disclosed in the present application can obtain the charge information of a phase detection pixel while obtaining the merged and accumulated charge information of adjacent pixels. There is no need to add an extra working period after obtaining the merged and accumulated charge information in order to obtain the charge information of the phase detection pixel. Therefore, the readout time of the sensing results of the pixel array can be reduced.
Brief Description of the Drawings

FIG. 1 is a block diagram of an embodiment of a pixel array of the image sensing system of the present application.

FIG. 2 is a circuit diagram of an embodiment of a pixel circuit of the image sensing system of the present application.

FIGS. 3A to 8A illustrate the signal combination pattern of Embodiment 1.

FIGS. 3B to 8B correspond to FIGS. 3A to 8A, respectively, and show the operation of the pixel circuit of FIG. 2 at each time point.

FIGS. 9A to 12A illustrate the signal combination pattern of Embodiment 2.

FIGS. 9B to 12B correspond to FIGS. 9A to 12A, respectively, and show the operation of the pixel circuit of FIG. 2 at each time point.

FIG. 13 is a block diagram of another embodiment of a pixel array of the present application.

FIG. 14A illustrates the signal combination pattern of Embodiment 3 for the first 2*2 pixel group.

FIG. 14B illustrates the signal combination pattern of Embodiment 3 for the second 2*2 pixel group.

FIG. 15 is a schematic diagram of an embodiment in which a chip including the pixel circuit of FIG. 2 and an image sensor including the pixel array of FIG. 1 or FIG. 13 are applied to an electronic device.

Detailed Description
The following disclosure provides various embodiments or examples that can be used to implement different features of the present disclosure. The specific examples of components and arrangements described below are intended to simplify the present disclosure. As can be appreciated, these descriptions are merely examples and are not intended to limit the present disclosure. For example, in the following description, forming a first feature on or over a second feature may include embodiments in which the first and second features are in direct contact, and may also include embodiments in which additional components are formed between the first and second features such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference symbols and/or labels in various embodiments. Such repetition is for the purposes of simplicity and clarity and does not in itself indicate a relationship between the different embodiments and/or configurations discussed.

Furthermore, spatially relative terms, such as "beneath", "below", "lower", "above", "upper" and the like, may be used herein for ease of describing the relationship of one component or feature to another component or feature as illustrated in the figures. In addition to the orientations depicted in the figures, these spatially relative terms are intended to encompass different orientations of the device in use or operation. The device may be otherwise oriented (for example, rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein should be interpreted accordingly.

Although the numerical ranges and parameters defining the broader scope of the present application are approximations, the relevant values in the specific embodiments are presented herein as precisely as possible. However, any value inherently and unavoidably contains the standard deviation arising from the individual test method. Herein, "the same" generally means that the actual value is within plus or minus 10%, 5%, 1% or 0.5% of a particular value or range. Alternatively, the term "the same" means that the actual value falls within the acceptable standard error of the mean, as considered by those of ordinary skill in the art to which the present application pertains. It can be understood that, except for the experimental examples, or unless otherwise expressly stated, all ranges, quantities, values and percentages used herein (for example, to describe amounts of material, lengths of time, temperatures, operating conditions, quantity ratios and the like) are modified by "the same". Accordingly, unless otherwise indicated to the contrary, the numerical parameters disclosed in this specification and the appended claims are approximations and may be changed as required. At the very least, these numerical parameters should be understood as the indicated number of significant digits and the value obtained by applying ordinary rounding. Herein, a numerical range is expressed as from one endpoint to another endpoint or as between two endpoints; unless otherwise stated, all numerical ranges described herein include the endpoints.

The pixel array of an image sensor of an electronic device is generally composed of a plurality of red pixels, a plurality of green pixels and a plurality of blue pixels. The color signals provided by the red, green and blue pixels together establish an image observable by the human eye. To improve the resolution of the image, the total number of pixels of the pixel array must be increased. For a given pixel array area, the greater the total number of pixels, the smaller the area of each individual pixel of the pixel array, which degrades the signal-to-noise ratio.

Usually, to improve the signal-to-noise ratio, a binning operation is performed. Binning is an image readout method. Specifically, the charges generated by a plurality of adjacent pixels are merged and accumulated, and the merged and accumulated charge is used as the charge of one equivalent pixel (formed by the plurality of adjacent pixels).

In addition to providing the information needed to establish the image, the pixels of the pixel array can also be used to provide the information needed for a phase difference auto focus (PDAF) operation. Specifically, some of the plurality of pixels of the pixel array are configured as phase detection pixels. After the charge information provided by the phase detection pixels is obtained, the phase difference focusing operation can be completed.

When the electronic device is instructed, or determines on its own based on the current operating environment, to perform both operations (the binning operation and the phase difference focusing operation), the electronic device performs the two operations sequentially. However, this lengthens the readout time of the sensing results of the pixel array. For example, before each of the two operations is performed, other preparatory work is required, such as a precharge operation or re-exposure of the phase pixels. Accordingly, the time of at least two preparatory operations is incurred. Moreover, after each of the two operations is performed, other follow-up work may also be required. As a result, the readout time becomes longer.

In some operation modes of the electronic device, such as the slow-motion video recording function, the requirements on the readout time are stricter than in other modes. An excessively long readout time may cause the slow-motion video recording function to fail. Therefore, the readout time can be reduced by improving the flow of the two operations, the details of which are described below.
FIG. 1 is a block diagram of an embodiment of the pixel array 100 of the image sensing system 10 of the present application. Referring to FIG. 1, the pixel array 100 includes a plurality of pixels 102 (6*8 pixels 102 in the figure, but the present application is not limited thereto). Each pixel 102 is provided with a color filter (denoted by R, G, B), so that each pixel 102 can only receive light of a specific color (or a specific wavelength band). For convenience of description, the pixels 102 are named after the type of the corresponding color filter. Accordingly, some of the plurality of pixels 102 may be called red (R) pixels 102R, others may be called blue (B) pixels 102B, and still others may be called green (G) pixels 102G. However, the present application is not limited to red, blue and green. In other embodiments, white, yellow or other colors may also be used.

The plurality of pixels 102 of the pixel array 100 are divided into a plurality of N*N pixel groups 110, where N is greater than 1. In this embodiment, N is 2. That is, four pixels 102 constitute one 2*2 pixel group 110. In the binning operation, the charges generated by the four pixels 102 are merged and accumulated, and the merged and accumulated charge is used as the charge of one 2*2 pixel group 110, the details of which are described in the embodiment of FIGS. 4A and 4B.

At least one 2*2 pixel group 110 of the pixel array 100 includes phase detection pixels 104L and 104R (shown hatched) and non-phase detection pixels 104NL and 104NR, where the phase detection pixels 104L and 104R constitute a phase detection pixel pair. In this embodiment, the 2*2 pixel group 110 that includes the green pixels 102G has the phase detection pixels 104L and 104R, and the remaining 2*2 pixel groups 110 do not include the phase detection pixels 104L and 104R. However, the present application is not limited thereto. In other embodiments, a 2*2 pixel group 110 including pixels 102 of other colors may have the phase detection pixels 104L and 104R.

The structural difference between the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR is that the phase detection pixels 104L and 104R share an elliptical microlens OML, while each non-phase detection pixel has its own microlens ML (that is, a non-phase detection pixel does not share a microlens ML with any other pixel). However, the present application is not limited thereto. In other embodiments, phase detection pixels can be implemented in other ways.
FIG. 2 is a circuit diagram of an embodiment in which the pixel circuit 200 of the image sensing system 10 is coupled to the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR of FIG. 1. The pixel circuit 200 is used to read the sensing results of the pixel array 100. In this embodiment, the pixel circuit 200 is used to read the sensing results of the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR.

Referring to FIG. 2, the pixel circuit 200 includes transfer gates M1, M2, M3 and M4, a reset gate MR, a shared floating diffusion node FD, a capacitor C, a source follower MS, a selection gate ME and a current source I.

The input terminal of the transfer gate M1 is coupled to the phase detection pixel 104L, its output terminal is coupled to the shared floating diffusion node FD, and its control terminal receives the control signal TG1. The transfer gate M1 is used to selectively transfer the charge generated by the phase detection pixel 104L to the shared floating diffusion node FD in response to the control signal TG1. In this embodiment, the transfer gate M1 is a transistor, specifically an N-type transistor. Accordingly, the drain of the transfer gate M1 is coupled to the phase detection pixel 104L, its source is coupled to the shared floating diffusion node FD, and its gate receives the control signal TG1. It should be noted that the drain and source of the transfer gate M1 may be interchanged depending on the relative magnitudes of the voltages applied to them. In addition, in the present application, for brevity, when the transfer gate M1 is turned on, its source-drain voltage can be regarded as zero. In some embodiments, the control signal TG1 may be provided by another circuit (not shown) of the image sensing system 10. In some embodiments, the control signal TG1 may be provided by a circuit external to the image sensing system 10.

The input terminal of the transfer gate M2 is coupled to the phase detection pixel 104R, its output terminal is coupled to the shared floating diffusion node FD, and its control terminal receives the control signal TG2. The transfer gate M2 is used to selectively transfer the charge generated by the phase detection pixel 104R to the shared floating diffusion node FD in response to the control signal TG2. In this embodiment, the transfer gate M2 is an N-type transistor. Accordingly, its drain is coupled to the phase detection pixel 104R, its source is coupled to the shared floating diffusion node FD, and its gate receives the control signal TG2. The drain and source of the transfer gate M2 may be interchanged depending on the relative magnitudes of the voltages applied to them. For brevity, when the transfer gate M2 is turned on, its source-drain voltage can be regarded as zero. In some embodiments, the control signal TG2 may be provided by another circuit of the image sensing system 10 or by a circuit external to the image sensing system 10.

The input terminal of the transfer gate M3 is coupled to the non-phase detection pixel 104NL, its output terminal is coupled to the shared floating diffusion node FD, and its control terminal receives the control signal TG3. The transfer gate M3 is used to selectively transfer the charge generated by the non-phase detection pixel 104NL to the shared floating diffusion node FD in response to the control signal TG3. In this embodiment, the transfer gate M3 is an N-type transistor. Accordingly, its drain is coupled to the non-phase detection pixel 104NL, its source is coupled to the shared floating diffusion node FD, and its gate receives the control signal TG3. The drain and source of the transfer gate M3 may be interchanged depending on the relative magnitudes of the voltages applied to them. For brevity, when the transfer gate M3 is turned on, its source-drain voltage can be regarded as zero. In some embodiments, the control signal TG3 may be provided by another circuit of the image sensing system 10 or by a circuit external to the image sensing system 10.

The input terminal of the transfer gate M4 is coupled to the non-phase detection pixel 104NR, its output terminal is coupled to the shared floating diffusion node FD, and its control terminal receives the control signal TG4. The transfer gate M4 is used to selectively transfer the charge generated by the non-phase detection pixel 104NR to the shared floating diffusion node FD in response to the control signal TG4. In general, the shared floating diffusion node FD is selectively coupled to the 2*2 pixel group 110. In this embodiment, the transfer gate M4 is an N-type transistor. Accordingly, its drain is coupled to the non-phase detection pixel 104NR, its source is coupled to the shared floating diffusion node FD, and its gate receives the control signal TG4. The drain and source of the transfer gate M4 may be interchanged depending on the relative magnitudes of the voltages applied to them. For brevity, when the transfer gate M4 is turned on, its source-drain voltage can be regarded as zero. In some embodiments, the control signal TG4 may be provided by another circuit of the image sensing system 10 or by a circuit external to the image sensing system 10.

One terminal of the reset gate MR receives the supply voltage VDD, its other terminal is coupled to the shared floating diffusion node FD, and its control terminal receives the reset signal RST. The reset gate MR is used to selectively reset the voltage of the shared floating diffusion node FD with the supply voltage VDD based on the reset signal RST. In this embodiment, the reset gate MR is an N-type transistor, but the present application is not limited thereto. For brevity, when the reset gate MR is turned on, its source-drain voltage can be regarded as zero. In some embodiments, the reset signal RST may be provided by another circuit of the image sensing system 10 or by a circuit external to the image sensing system 10.

One terminal of the capacitor C is coupled to the shared floating diffusion node FD, and the other terminal is coupled to the supply voltage VSS. The capacitor C is used to store electrical energy. In some embodiments, the supply voltage VSS is a reference ground voltage. In some embodiments, the capacitor C is formed by parasitic capacitance.

The input terminal of the source follower MS is coupled to the shared floating diffusion node FD, and its output terminal is coupled to the selection gate ME. The source follower MS amplifies, based on its own gain, the sensing signal on the shared floating diffusion node FD received at its input terminal (that is, the voltage on the shared floating diffusion node FD), and outputs the amplified sensing signal from its output terminal. In this embodiment, the source follower MS is an N-type transistor; its drain receives the supply voltage VDD, its gate is coupled to the shared floating diffusion node FD, and its source is coupled to the selection gate ME.

The input terminal of the selection gate ME is coupled to the output terminal of the source follower MS, its output terminal outputs the output signal VOUT, and its control terminal receives the row selection signal RS. The selection gate ME is used to selectively transfer the amplified sensing signal received at its input terminal to its output terminal based on the row selection signal RS, and to output the amplified sensing signal at its output terminal as the output signal VOUT. In this embodiment, the selection gate ME is an N-type transistor; its drain is coupled to the source follower MS, its gate receives the row selection signal RS, and its source outputs the output signal VOUT. For brevity, when the selection gate ME is turned on, its drain-source voltage can be regarded as zero. The current source I is coupled between the source of the selection gate ME and the supply voltage VSS and provides a stable current.

The image signal processing methods used to read the sensing results of the pixel array 100 are described in the following embodiments. In short, the image signal processing methods of the different embodiments are realized by different signal combination patterns of the control signals TG1 to TG4.
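Before the embodiments, the circuit behavior just described can be summarized as a minimal behavioral model. This is only an illustrative sketch under the text's own idealizations (a turned-on gate drops zero volts, unity source-follower gain); the class and method names, and the mapping of photo-generated charge directly to a voltage drop VQx, are assumptions for illustration and not part of the patent:

```python
class PixelCircuitModel:
    """Behavioral sketch of the pixel circuit 200 of FIG. 2.

    Idealizations follow the text: a turned-on gate drops zero volts,
    and each illuminated pixel's charge appears on FD as a voltage
    drop VQx. The charge-to-drop mapping is assumed for illustration.
    """

    def __init__(self, vdd):
        self.vdd = vdd
        self.v_fd = vdd          # shared floating diffusion node FD

    def reset(self):
        # RST positive edge: VDD recharges C, resetting FD to VDD.
        self.v_fd = self.vdd

    def transfer(self, *drops):
        # TGx positive edge(s): the enabled pixels discharge C, so the
        # FD potential falls by the corresponding drops (merge/accumulate).
        self.v_fd -= sum(drops)

    def read(self):
        # RS high: VOUT is provided based on the FD voltage
        # (source-follower gain idealized to 1).
        return self.v_fd


# Binning with assumed drops VQ1..VQ4 = 0.2, 0.3, 0.25, 0.25 V and VDD = 3.0 V:
px = PixelCircuitModel(vdd=3.0)
px.reset()
px.transfer(0.2, 0.3, 0.25, 0.25)   # TG1..TG4 pulsed together
binned = px.read()                   # approximately VDD-(VQ1+VQ2+VQ3+VQ4)
```

The model is used in the usage example to reproduce the binned FD voltage of FIGS. 4A and 4B; the numeric values are illustrative only.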
<Embodiment 1>

FIGS. 3A to 8A illustrate the signal combination pattern of Embodiment 1. FIGS. 3B to 8B correspond to FIGS. 3A to 8A, respectively, and show the operation of the pixel circuit 200 of FIG. 2 at each time point. Specifically, FIGS. 3A and 4A describe the binning operation. FIG. 5A describes the preparatory work performed after the binning operation and before the phase difference focusing operation. FIGS. 6A to 8A describe the phase difference focusing operation performed after the preparatory work is finished.

(1) Binning operation:

Referring to FIG. 3A, before the time point t1, the reset signal RST is pulled to a positive edge while the control signals TG1 to TG4 remain at low levels. Accordingly, referring to FIG. 3B, the transfer gates M1 to M4 remain off, and the reset gate MR is turned on while the reset signal RST is pulled to the positive edge. The supply voltage VDD charges the capacitor C through the reset gate MR, so that the potential of the shared floating diffusion node FD is reset to the supply voltage VDD.

Referring to FIG. 4A, after the time point t1 and before the time point t2, the control signals TG1 to TG4 are pulled to positive edges. Referring to FIG. 4B, the transfer gates M1 to M4 are turned on while the control signals TG1 to TG4 are pulled to positive edges, respectively. Therefore, when the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR are illuminated, each of them performs photoelectric conversion to generate charge to the shared floating diffusion node FD. The charges generated by the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR merge and accumulate on the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixels 104L and 104R and the non-phase detection pixels 104NL and 104NR, so that the potential of the shared floating diffusion node FD is lowered.

Specifically, the charge generated by the phase detection pixel 104L under illumination produces a voltage drop VQ1, the charge generated by the phase detection pixel 104R produces a voltage drop VQ2, the charge generated by the non-phase detection pixel 104NL produces a voltage drop VQ3, and the charge generated by the non-phase detection pixel 104NR produces a voltage drop VQ4. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ1+VQ2+VQ3+VQ4), which serves as the sensing signal. After the discharge is completed, the selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage (VDD-(VQ1+VQ2+VQ3+VQ4)) of the shared floating diffusion node FD. A digital signal processor (not shown) completes the binning operation of the 2*2 pixel group 110 where the phase detection pixels 104L and 104R are located. In this embodiment, the phase detection pixels 104L and 104R and the non-phase detection pixel 104NR are all used to receive green light, so the merged and accumulated charge relates to green light; however, the phase detection pixels 104L and 104R do not necessarily need to use green filters and may absorb white light or light of other colors.
(2) Preparatory operation:

Referring to FIG. 5A, before the phase difference focusing operation is performed (that is, after the time point t2 and before the time point t3), the voltage of the shared floating diffusion node FD is first reset to avoid other noise remaining on the shared floating diffusion node FD. Referring to FIG. 5B, the operation of the pixel circuit 200 in FIG. 5B is the same as that in FIG. 3B and is not repeated here. The present application gives only one exemplary preparatory operation. In practice, the preparatory operation also includes other additional operations and accordingly occupies a considerable amount of time.

(3) Phase difference focusing operation:

Referring to FIG. 6A, after the time point t3 and before the time point t4, only the control signal TG1 is pulled to a positive edge. Referring to FIG. 6B, the transfer gate M1 is turned on while the control signal TG1 is pulled to the positive edge. Therefore, when the phase detection pixel 104L is illuminated, the phase detection pixel 104L performs photoelectric conversion to generate charge to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixel 104L, so that the potential of the shared floating diffusion node FD is lowered.

Specifically, the charge generated by the phase detection pixel 104L under illumination produces the voltage drop VQ1. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ1), which serves as the sensing signal. After the discharge is completed, the selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ1) of the shared floating diffusion node FD. Since the value of the supply voltage VDD is known, the digital signal processor can perform a mathematical operation on the voltage VDD-(VQ1) based on the supply voltage VDD to recover the voltage drop VQ1, which serves as the first phase detection signal.

Referring to FIG. 7A, after the time point t4 and before the time point t5, the voltage of the shared floating diffusion node FD is reset to clear the charge information of the phase detection pixel 104L, that is, the voltage drop VQ1 caused by the phase detection pixel 104L. Referring to FIG. 7B, the operation of the pixel circuit 200 in FIG. 7B is the same as that in FIG. 3B and is not repeated here. The way of clearing the charge information of the phase detection pixel 104L given in this embodiment is only an example. In other embodiments of the present application, other ways may be used to clear the charge information of the phase detection pixel 104L.

Referring to FIG. 8A, after the time point t5 and before the time point t6, only the control signal TG2 is pulled to a positive edge. Referring to FIG. 8B, the transfer gate M2 is turned on while the control signal TG2 is pulled to the positive edge. Therefore, when the phase detection pixel 104R is illuminated, the phase detection pixel 104R performs photoelectric conversion to generate charge to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixel 104R, so that the potential of the shared floating diffusion node FD is lowered.

Specifically, the charge generated by the phase detection pixel 104R under illumination produces the voltage drop VQ2. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ2). After the discharge is completed, the selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ2) of the shared floating diffusion node FD. Since the value of the supply voltage VDD is known, the digital signal processor can perform a mathematical operation on the voltage VDD-(VQ2) based on the supply voltage VDD to recover the voltage drop VQ2, which serves as the second phase detection signal.

Then, based on the first phase detection signal obtained in the embodiment of FIGS. 6A and 6B and the second phase detection signal obtained in the embodiment of FIGS. 8A and 8B, the digital signal processor completes the phase difference focusing operation. In total, performing the two operations (the binning operation and the phase difference focusing operation) takes at least the time interval T1 between the time points t1 and t6.
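The Embodiment-1 sequence just described can be summarized as a short list of (event, resulting FD voltage) steps. The following sketch uses assumed values for VDD and the voltage drops (all numbers are illustrative, not from the patent):

```python
# Embodiment-1 readout sequence with assumed values (volts):
# VDD = 3.0, VQ1 = 0.2, VQ2 = 0.3, VQ3 = VQ4 = 0.25.
VDD, VQ1, VQ2, VQ3, VQ4 = 3.0, 0.2, 0.3, 0.25, 0.25

sequence = [
    ("reset (RST)",          VDD),                                # FIG. 3A
    ("TG1..TG4: binning",    VDD - (VQ1 + VQ2 + VQ3 + VQ4)),      # FIG. 4A
    ("reset (preparation)",  VDD),                                # FIG. 5A
    ("TG1 only",             VDD - VQ1),                          # FIG. 6A
    ("reset (clear VQ1)",    VDD),                                # FIG. 7A
    ("TG2 only",             VDD - VQ2),                          # FIG. 8A
]

# With VDD known, the DSP recovers the phase detection signals:
first_phase_signal = VDD - sequence[3][1]    # recovers VQ1
second_phase_signal = VDD - sequence[5][1]   # recovers VQ2
```

Six steps (two resets between the binning readout and the two phase readouts) are what make the total interval T1 comparatively long, which Embodiments 2 and 3 shorten.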
<Embodiment 2>

FIGS. 9A to 12A illustrate the signal combination pattern of Embodiment 2. FIGS. 9B to 12B correspond to FIGS. 9A to 12A, respectively, and show the operation of the pixel circuit 200 of FIG. 2 at each time point.

Referring to FIG. 9A, before the time point t1, the reset signal RST is pulled to a positive edge while the control signals TG1 to TG4 remain at low levels. Referring to FIG. 9B, the operation of the pixel circuit 200 in FIG. 9B is the same as that in FIG. 3B and is not repeated here.

Referring to FIG. 10A, after the time point t1 and before the time point t2, only the control signal TG1 is pulled to a positive edge. Referring to FIG. 10B, the transfer gate M1 is turned on while the control signal TG1 is pulled to the positive edge. Therefore, when the phase detection pixel 104L is illuminated, the phase detection pixel 104L performs photoelectric conversion in response to the control signal TG1 to generate first charge to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixel 104L, so that the potential of the shared floating diffusion node FD is lowered.

Specifically, the first charge generated by the phase detection pixel 104L under illumination produces the voltage drop VQ1. Therefore, the voltage of the shared floating diffusion node FD can be expressed as VDD-(VQ1). It should be noted that the voltage value of the shared floating diffusion node FD can be obtained through circuit simulation or actual measurement. After the discharge is completed, the selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ1) of the shared floating diffusion node FD. Accordingly, the pixel circuit 200 provides the pixel information of the phase detection pixel 104L.

On the other hand, while the control signal TG1 is pulled to the positive edge, the control signals TG2 to TG4 remain at low levels. Accordingly, the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR are disconnected from the shared floating diffusion node FD in response to the control signal TG1. Therefore, the potential of the shared floating diffusion node FD is independent of the charges generated by the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR.

Referring to FIG. 11A, after the time point t2 and before the time point t3, only the control signal TG2 is pulled to a positive edge. The starting time point at which the control signal TG2 is pulled to the positive edge is later than that of the control signal TG1. Referring to FIG. 11B, the transfer gate M2 is turned on while the control signal TG2 is pulled to the positive edge. Therefore, when the phase detection pixel 104R is illuminated, the phase detection pixel 104R performs photoelectric conversion in response to the control signal TG2 to generate second charge to the shared floating diffusion node FD, so that the first charge and the second charge merge and accumulate on the shared floating diffusion node FD. In other words, the capacitor C is discharged through the phase detection pixel 104R, so that the potential of the shared floating diffusion node FD drops from the voltage VDD-(VQ1) to the voltage VDD-(VQ1+VQ2), where the second charge generated by the phase detection pixel 104R under illumination produces the voltage drop VQ2. After the discharge is completed, the selection gate ME is turned on, and the output voltage VOUT is provided based on the voltage VDD-(VQ1+VQ2) of the shared floating diffusion node FD. It should be noted that the voltage value of the shared floating diffusion node FD can be obtained through circuit simulation or actual measurement.

By subtracting the voltage VDD-(VQ1+VQ2) from the voltage VDD-(VQ1), the voltage drop VQ2 caused by the phase detection pixel 104R can be calculated, that is, the second phase detection signal described in the embodiment of FIGS. 8A and 8B. Then, based on the calculated voltage drop VQ2 and the voltage VDD-(VQ1+VQ2), the voltage drop VQ1 caused by the phase detection pixel 104L can be calculated, that is, the first phase detection signal described in the embodiment of FIGS. 6A and 6B. In some embodiments, such a calculation process may be completed by a digital signal processor. Then, based on the first phase detection signal and the second phase detection signal, the digital signal processor completes the phase difference focusing operation.
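The arithmetic of this step is a short subtraction chain. A minimal sketch of one equivalent way to carry it out (the function name and the numeric values are illustrative only):

```python
def phase_signals_from_readouts(vdd, v_after_tg1, v_after_tg12):
    """Recover the Embodiment-2 phase detection signals.

    v_after_tg1  = VDD - VQ1          (FD readout after the TG1 pulse)
    v_after_tg12 = VDD - (VQ1 + VQ2)  (FD readout after the TG2 pulse)
    """
    vq2 = v_after_tg1 - v_after_tg12   # second phase detection signal
    vq1 = vdd - v_after_tg1            # first phase detection signal
    return vq1, vq2


# Assumed values: VDD = 3.0 V, VQ1 = 0.2 V, VQ2 = 0.3 V
vq1, vq2 = phase_signals_from_readouts(3.0, 2.8, 2.5)
```

Because both readouts already contain VQ1, no reset is needed between them, which is why this sequence is shorter than Embodiment 1.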
On the other hand, while the control signal TG2 is pulled to the positive edge, the control signals TG1, TG3 and TG4 remain at low levels. Accordingly, the phase detection pixel 104L and the non-phase detection pixels 104NL and 104NR are disconnected from the shared floating diffusion node FD in response to the control signal TG2. Therefore, the potential of the shared floating diffusion node FD is independent of the charges generated by the non-phase detection pixels 104NL and 104NR.

Referring to FIG. 12A, after the time point t3 and before the time point t4, the control signals TG3 and TG4 are pulled to positive edges while the control signals TG1 and TG2 remain at low levels. Referring to FIG. 12B, the transfer gates M3 and M4 are turned on while the control signals TG3 and TG4 are pulled to positive edges, respectively. Therefore, when the non-phase detection pixels 104NL and 104NR are illuminated, they perform photoelectric conversion to generate third charge and fourth charge, respectively, to the shared floating diffusion node FD. In other words, the capacitor C is discharged through the non-phase detection pixels 104NL and 104NR, so that the potential of the shared floating diffusion node FD is lowered.

Specifically, the third charge generated by the non-phase detection pixel 104NL under illumination produces the voltage drop VQ3, and the fourth charge generated by the non-phase detection pixel 104NR produces the voltage drop VQ4. Therefore, the shared floating diffusion node FD drops from the voltage VDD-(VQ1+VQ2) to the voltage VDD-(VQ1+VQ2+VQ3+VQ4), that is, the voltage described in the embodiment of FIGS. 4A and 4B. In short, the pixel circuit 200 provides the pixel information of the equivalent charge obtained after the charge of the phase detection pixel 104L, the charge of the phase detection pixel 104R, the charge of the non-phase detection pixel 104NL and the charge of the non-phase detection pixel 104NR merge and accumulate. Accordingly, the digital signal processor completes the binning operation of the 2*2 pixel group 110 where the phase detection pixels 104L and 104R are located.

In total, performing the two operations (the binning operation and the phase difference focusing operation) takes the time interval T2 between the time points t1 and t4, which is shorter than the time interval T1. Accordingly, when the image signal processing method of Embodiment 2 is used to read the sensing results of the pixel array 100, the readout time can be reduced, which benefits, for example, the execution of the slow-motion video recording function.

The signal combination pattern provided in Embodiment 2 is only an example. Any pattern from which the voltage drop VQ1 caused by the first charge generated by the phase detection pixel 104L and the voltage drop VQ2 caused by the second charge generated by the phase detection pixel 104R can be obtained by solving simultaneous equations falls within the scope of the present application.

For example, after the time point t1 and before the time point t2, the non-phase detection pixels 104NL and 104NR may be turned on while the phase detection pixels 104L and 104R are not, to establish the voltage (VDD-(VQ3+VQ4)) on the shared floating diffusion node FD. Then, after the time point t2 and before the time point t3, the phase detection pixel 104L is turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104R are not, to establish the voltage (VDD-(VQ1+VQ3+VQ4)). Then, after the time point t3 and before the time point t4, the phase detection pixel 104R is turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104L are not, to establish the voltage (VDD-(VQ1+VQ2+VQ3+VQ4)).

Alternatively, after the time point t1 and before the time point t2, the phase detection pixel 104L may be turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104R are not, to establish the voltage (VDD-(VQ1)) on the shared floating diffusion node FD. Then, after the time point t2 and before the time point t3, the non-phase detection pixels 104NL and 104NR are turned on while the phase detection pixels 104L and 104R are not, to establish the voltage (VDD-(VQ1+VQ3+VQ4)). Then, after the time point t3 and before the time point t4, the phase detection pixel 104R is turned on while the non-phase detection pixels 104NL and 104NR and the phase detection pixel 104L are not, to establish the voltage (VDD-(VQ1+VQ2+VQ3+VQ4)).
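Each of these orderings yields a chain of cumulative FD voltages from which the individual drops follow by successive subtraction. A sketch of that bookkeeping (pixel names and numeric values are illustrative only; real readouts would of course come from the circuit, not a list):

```python
def recover_drops(vdd, readouts):
    """readouts: list of (pixels_enabled, fd_voltage) in time order,
    where each fd_voltage is VDD minus the drops of every pixel turned
    on so far. Each step's combined drop is the difference between
    consecutive readouts; it is uniquely attributable to one pixel
    whenever the step enables a single pixel."""
    drops = {}
    previous = vdd
    for pixels, voltage in readouts:
        step_drop = previous - voltage
        if len(pixels) == 1:
            drops[pixels[0]] = step_drop
        previous = voltage
    return drops


# First ordering above, with assumed VDD=3.0, VQ3+VQ4=0.5, VQ1=0.2, VQ2=0.3:
drops = recover_drops(3.0, [
    (("104NL", "104NR"), 2.5),   # VDD-(VQ3+VQ4)
    (("104L",), 2.3),            # VDD-(VQ1+VQ3+VQ4)
    (("104R",), 2.0),            # VDD-(VQ1+VQ2+VQ3+VQ4)
])
# drops["104L"] recovers VQ1; drops["104R"] recovers VQ2
```

The same function handles either ordering, since the subtraction chain only depends on which pixels each step enables.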
<Embodiment 3>

FIG. 13 is a block diagram of an embodiment of the pixel array 300 of the present application. Referring to FIG. 13, the pixel array 300 is similar to the pixel array 100 described and illustrated in FIG. 1; the difference is that the phase detection pixels 104L and 104R are located in different 2*2 pixel groups 110, but the phase detection pixels 104L and 104R still share the elliptical microlens OML and constitute a phase detection pixel pair. For identification, the 2*2 pixel group 110 where the phase detection pixel 104L is located is renamed the 2*2 pixel group 310A, and the 2*2 pixel group 110 where the phase detection pixel 104R is located is renamed the 2*2 pixel group 310B.

Each of the 2*2 pixel groups 310A and 310B has its own associated pixel circuit 200. The shared floating diffusion node FD selectively coupled to the 2*2 pixel group 310A is called the first shared floating diffusion node, and the shared floating diffusion node FD selectively coupled to the 2*2 pixel group 310B is called the second shared floating diffusion node.

FIG. 14A illustrates the signal combination pattern of Embodiment 3 for the 2*2 pixel group 310A. FIG. 14B illustrates the signal combination pattern of Embodiment 3 for the 2*2 pixel group 310B. The way the pixel circuit 200 operates based on the signal combination pattern of FIG. 14A and the way it operates based on the signal combination pattern of FIG. 14B are the same as described in the embodiments of FIGS. 3B to 12B, and the common parts are not repeated here.
Referring to FIG. 14A, before the time point t1, the reset signal RST is pulled to a positive edge while the control signals TG1 to TG4 remain at low levels. Therefore, the voltage of the first shared floating diffusion node is VDD.

After the time point t1 and before the time point t2, only the control signal TG1 is pulled to a positive edge. Therefore, the voltage of the first shared floating diffusion node can be expressed as VDD-(VQ1), and the voltage drop VQ1 caused by the phase detection pixel 104L can be calculated as described in the embodiment of FIGS. 8A and 8B, serving as the first phase detection signal. On the other hand, while the control signal TG1 is pulled to the positive edge, the control signals TG2 to TG4 remain at low levels. Accordingly, the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR are disconnected from the first shared floating diffusion node in response to the control signal TG1. Therefore, the voltage of the first shared floating diffusion node is independent of the charges generated by the phase detection pixel 104R and the non-phase detection pixels 104NL and 104NR.

After the time point t2 and before the time point t3, the control signals TG2 to TG4 are pulled to positive edges while the control signal TG1 remains at a low level. Therefore, the voltage of the first shared floating diffusion node can be expressed as VDD-(VQ1+VQ2+VQ3+VQ4), that is, the voltage described in the embodiment of FIGS. 4A and 4B. The digital signal processor completes the binning operation of the 2*2 pixel group 310A where the phase detection pixel 104L is located.

Referring to FIG. 14B, before the time point t1, the reset signal RST is pulled to a positive edge while the control signals TG1 to TG4 remain at low levels. Therefore, the voltage of the second shared floating diffusion node is VDD.

After the time point t1 and before the time point t2, only the control signal TG2 is pulled to a positive edge. Therefore, the voltage of the second shared floating diffusion node can be expressed as VDD-(VQ2), and the voltage drop VQ2 caused by the phase detection pixel 104R can be calculated as described in the embodiment of FIGS. 8A and 8B, serving as the second phase detection signal. Then, based on the first phase detection signal and the second phase detection signal, the digital signal processor completes the phase difference focusing operation. On the other hand, while the control signal TG2 is pulled to the positive edge, the control signals TG1, TG3 and TG4 remain at low levels. Accordingly, the phase detection pixel 104L and the non-phase detection pixels 104NL and 104NR are disconnected from the second shared floating diffusion node in response to the control signal TG2. Therefore, the voltage of the second shared floating diffusion node is independent of the charges generated by the phase detection pixel 104L and the non-phase detection pixels 104NL and 104NR.

After the time point t2 and before the time point t3, the control signals TG1, TG3 and TG4 are pulled to positive edges while the control signal TG2 remains at a low level. Therefore, the voltage of the second shared floating diffusion node can be expressed as VDD-(VQ1+VQ2+VQ3+VQ4), that is, the voltage described in the embodiment of FIGS. 4A and 4B. The digital signal processor completes the binning operation of the 2*2 pixel group 310B where the phase detection pixel 104R is located.

In total, performing the two operations (the binning operation and the phase difference focusing operation) takes the time interval T3 between the time points t1 and t3, which is shorter than the time interval T1. Accordingly, when the image signal processing method of Embodiment 3 is used to read the sensing results of the pixel array 300, the readout time can be reduced, which benefits, for example, the execution of the slow-motion video recording function.

In addition, since the phase detection pixels 104L and 104R are arranged in different 2*2 pixel groups 310A and 310B, respectively, the voltage drops VQ1 and VQ2 can be obtained without solving simultaneous equations. Simultaneous equations involve mathematical operations on multiple equations, and each equation is obtained through circuit operations, so noise in the equations is difficult to avoid. The more equations are needed to obtain a voltage drop, the more severe the noise involved, and the less accurate the calculated voltage drops VQ1 and VQ2 become. Conversely, the values of the voltage drops VQ1 and VQ2 obtained in Embodiment 3 are more accurate, so the accuracy of the phase difference focusing operation is higher.
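This noise argument can be illustrated numerically: recovering a drop from the difference of two noisy readouts (as Embodiment 2 does for VQ2) accumulates roughly the square root of 2 times the error of recovering it from a single readout against the known VDD (as in Embodiment 3). A toy simulation under an assumed Gaussian readout-noise model (the noise model, function name and all numbers are illustrative assumptions, not measurements from the patent):

```python
import random
import statistics

def compare_noise(sigma=0.01, n=20000, seed=0):
    """Standard deviation of the recovered drop: single-readout
    recovery (one noisy term) vs recovery by differencing two noisy
    readouts (two noisy terms)."""
    rng = random.Random(seed)
    vdd, vq = 3.0, 0.2
    single, differenced = [], []
    for _ in range(n):
        v1 = vdd - vq + rng.gauss(0.0, sigma)   # readout of VDD-VQ
        v0 = vdd + rng.gauss(0.0, sigma)        # readout of the reset level
        single.append(vdd - v1)                 # VDD known exactly
        differenced.append(v0 - v1)             # two noisy readouts
    return statistics.pstdev(single), statistics.pstdev(differenced)

s_single, s_diff = compare_noise()
# s_diff / s_single comes out close to sqrt(2), i.e. about 1.41
```

This matches the text's point: the more noisy equations a drop must be assembled from, the less accurate the result, which is why the per-group readout of Embodiment 3 is more accurate.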
FIG. 15 is a schematic diagram of an embodiment in which a chip including the pixel circuit 200 and an image sensor including the pixel array 100 or 300 are applied to an electronic device 40. Referring to FIG. 15, the electronic device 40 includes the pixel circuit 200 and the pixel array 100 or 300. The electronic device 40 is, for example, any handheld electronic device such as a smartphone, a personal digital assistant, a handheld computer system or a tablet computer.

The foregoing briefly sets forth the features of certain embodiments of the present application so that those of ordinary skill in the art to which the present application pertains can more fully understand the various aspects of the present disclosure. Those of ordinary skill in the art will appreciate that they can readily use the present disclosure as a basis for designing or modifying other processes and structures to achieve the same purposes and/or attain the same advantages as the embodiments described herein. Those of ordinary skill in the art should also understand that such equivalent embodiments still fall within the spirit and scope of the present disclosure, and that various changes, substitutions and alterations can be made without departing from the spirit and scope of the present disclosure.

Claims (20)

  1. An image signal processing method for a pixel array, the pixel array comprising a first N*N pixel group, the first N*N pixel group comprising a first pixel and a second pixel, wherein N is greater than 1, one of the first pixel and the second pixel is a phase detection pixel, and the other is a non-phase detection pixel, the method comprising:
    providing a first control signal so that the first pixel generates first charge to a first shared floating diffusion node of the pixel array;
    providing pixel information of the first pixel;
    providing a second control signal so that the second pixel generates second charge to the first shared floating diffusion node, so that the first charge and the second charge merge and accumulate on the first shared floating diffusion node; and
    providing pixel information of an equivalent charge obtained after the first charge and the second charge merge and accumulate.
  2. The method of claim 1, wherein the first N*N pixel group further comprises a third pixel, the third pixel is a phase detection pixel, the third pixel and said one of the first pixel and the second pixel constitute a phase pixel pair, and the third pixel shares an elliptical microlens with said one of the first pixel and the second pixel.
  3. The method of claim 2, further comprising:
    providing a third control signal so that the third pixel generates third charge to the first shared floating diffusion node, so that the third charge and the first charge merge and accumulate on the first shared floating diffusion node.
  4. The method of claim 3, wherein after the third charge is generated to the first shared floating diffusion node, the second control signal is provided so that the first charge, the second charge and the third charge merge and accumulate on the first shared floating diffusion node.
  5. The method of claim 1, wherein the pixel array further comprises a second N*N pixel group comprising a third pixel, the third pixel being a phase detection pixel, the third pixel and said one of the first pixel and the second pixel constituting a phase pixel pair, and the third pixel sharing an elliptical microlens with said one of the first pixel and the second pixel; and the pixel array further comprises a second shared floating diffusion node selectively coupled to the second N*N pixel group.
  6. The method of claim 5, wherein the operation of providing the first control signal further comprises:
    providing the first control signal so that the pixels in the second N*N pixel group other than the third pixel are disconnected from the second shared floating diffusion node in response to the first control signal.
  7. The method of claim 6, wherein the operation of providing the first control signal further comprises:
    providing the first control signal so that the pixels in the first N*N pixel group other than the first pixel are disconnected from the first shared floating diffusion node in response to the first control signal.
  8. The method of claim 1, wherein the first N*N pixel group is adjacent to the second N*N pixel group.
  9. The method of claim 1, further comprising:
    providing a first transfer gate coupled between the first pixel and the first shared floating diffusion node, the first transfer gate being configured to receive the first control signal and to selectively connect the first pixel and the first shared floating diffusion node based on the first control signal; and
    providing a second transfer gate coupled between the second pixel and the first shared floating diffusion node, the second transfer gate being configured to receive the second control signal and to selectively connect the second pixel and the first shared floating diffusion node based on the second control signal.
  10. The method of claim 9, further comprising:
    providing a reset gate coupled to the first shared floating diffusion node, the reset gate being configured to receive a supply voltage and to selectively reset the voltage of the first shared floating diffusion node based on the supply voltage.
  11. The method of claim 10, wherein before the first control signal is provided so that the first pixel generates the first charge to the first shared floating diffusion node, the reset gate is used to reset the voltage of the first shared floating diffusion node based on the supply voltage.
  12. An image sensing system, comprising:
    a pixel array comprising a plurality of pixels, the pixel array comprising:
    a first N*N pixel group comprising a first pixel and a second pixel, wherein N is greater than 1; and
    a first shared floating diffusion node selectively coupled to the first N*N pixel group;
    wherein in response to a first control signal, the first pixel generates first charge to the first shared floating diffusion node, and in response to a second control signal, the second pixel generates second charge to the first shared floating diffusion node, so that the first charge and the second charge merge and accumulate on the first shared floating diffusion node, wherein the second control signal is later than the first control signal, and one of the first pixel and the second pixel is a phase detection pixel while the other is a non-phase detection pixel.
  13. The image sensing system of claim 12, wherein the first N*N pixel group further comprises a third pixel, the third pixel is a phase detection pixel, the third pixel and said one of the first pixel and the second pixel constitute a phase pixel pair, and the third pixel shares an elliptical microlens with said one of the first pixel and the second pixel.
  14. The image sensing system of claim 13, wherein in response to a third control signal, the third pixel generates third charge to the first shared floating diffusion node, so that the third charge and the first charge merge and accumulate on the first shared floating diffusion node.
  15. The image sensing system of claim 14, wherein the third control signal is earlier than the second control signal.
  16. The image sensing system of claim 15, wherein in response to the second control signal, the second pixel generates the second charge to the first shared floating diffusion node, so that the first charge, the second charge and the third charge merge and accumulate on the first shared floating diffusion node.
  17. The image sensing system of claim 12, wherein the pixel array further comprises:
    a second N*N pixel group comprising a third pixel, the third pixel being a phase detection pixel, the third pixel and said one of the first pixel and the second pixel constituting a phase pixel pair, and the third pixel sharing an elliptical microlens with said one of the first pixel and the second pixel; and
    a second shared floating diffusion node selectively coupled to the second N*N pixel group.
  18. The image sensing system of claim 17, wherein in response to the first control signal, the third pixel generates third charge to the second shared floating diffusion node, and the pixels in the second N*N pixel group other than the third pixel are disconnected from the second shared floating diffusion node.
  19. The image sensing system of claim 18, wherein in response to the first control signal, the first pixel generates the first charge to the first shared floating diffusion node, and the pixels in the first N*N pixel group other than the first pixel are disconnected from the first shared floating diffusion node.
  20. An electronic device, characterized in that the electronic device comprises:
    the image sensing system of any one of claims 12 to 19.
PCT/CN2020/075483 2020-02-17 2020-02-17 图像信号处理方法以及相关图像传感系统及电子装置 WO2021163824A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/075483 WO2021163824A1 (zh) 2020-02-17 2020-02-17 图像信号处理方法以及相关图像传感系统及电子装置
CN202080002033.3A CN112042180B (zh) 2020-02-17 2020-02-17 图像信号处理方法以及相关图像传感系统及电子装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/075483 WO2021163824A1 (zh) 2020-02-17 2020-02-17 图像信号处理方法以及相关图像传感系统及电子装置

Publications (1)

Publication Number Publication Date
WO2021163824A1 true WO2021163824A1 (zh) 2021-08-26

Family

ID=73572902

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/075483 WO2021163824A1 (zh) 2020-02-17 2020-02-17 图像信号处理方法以及相关图像传感系统及电子装置

Country Status (2)

Country Link
CN (1) CN112042180B (zh)
WO (1) WO2021163824A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140071180A1 (en) * 2012-09-10 2014-03-13 The Regents Of The University Of Michigan Method and apparatus for suppressing background light in time of flight sensor
CN105611124A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及电子装置
CN206759600U (zh) * 2016-06-23 2017-12-15 半导体元件工业有限责任公司 成像系统
CN109257534A (zh) * 2017-07-12 2019-01-22 奥林巴斯株式会社 摄像元件、摄像装置、记录介质、摄像方法
CN109302559A (zh) * 2017-07-24 2019-02-01 格科微电子(上海)有限公司 实现cmos图像传感器于像素合成模式下的相位对焦方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6987562B2 (ja) * 2017-07-28 2022-01-05 キヤノン株式会社 固体撮像素子
JP6947590B2 (ja) * 2017-09-08 2021-10-13 オリンパス株式会社 撮像装置、撮像装置の制御方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140071180A1 (en) * 2012-09-10 2014-03-13 The Regents Of The University Of Michigan Method and apparatus for suppressing background light in time of flight sensor
CN105611124A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及电子装置
CN206759600U (zh) * 2016-06-23 2017-12-15 半导体元件工业有限责任公司 成像系统
CN109257534A (zh) * 2017-07-12 2019-01-22 奥林巴斯株式会社 摄像元件、摄像装置、记录介质、摄像方法
CN109302559A (zh) * 2017-07-24 2019-02-01 格科微电子(上海)有限公司 实现cmos图像传感器于像素合成模式下的相位对焦方法

Also Published As

Publication number Publication date
CN112042180B (zh) 2023-05-02
CN112042180A (zh) 2020-12-04

Similar Documents

Publication Publication Date Title
US10263022B2 (en) RGBZ pixel unit cell with first and second Z transfer gates
US10170514B2 (en) Image sensor
CN105723700B (zh) 具有恒定电压偏置的光电二极管的像素电路和相关的成像方法
CN105681697A (zh) 用于改善行码区的非线性的图像传感器及包括其的装置
CN101189863B (zh) 耦合电容匹配的共享放大器像素
US10313610B2 (en) Image sensors with dynamic pixel binning
CN105308747A (zh) 分离栅极有条件重置的图像传感器
CN105144699A (zh) 阈值监测的有条件重置的图像传感器
CN103369269A (zh) 双源极跟随器像素单元架构
US20210014435A1 (en) Method of correcting dynamic vision sensor (dvs) events and image sensor performing the same
US11323639B2 (en) Image sensor and operation method thereof
CN103716559B (zh) 像素单元读出装置及方法、像素阵列读出装置及方法
JP7080913B2 (ja) デジタルピクセルを含むイメージセンサ
CN103067676A (zh) 高动态图像传感器及其有源像素
TWI634469B (zh) 光感測電路
US20140103190A1 (en) Binary image sensor and image sensing method
US10573682B2 (en) Pixel array included in image sensor and image sensor including the same
WO2021163824A1 (zh) 图像信号处理方法以及相关图像传感系统及电子装置
US11503229B2 (en) Image sensor and imaging device including the same
TW202203443A (zh) 動態視覺感測器
US20200112699A1 (en) Image sensor and semiconductor structure
Hirsch et al. Realization and opto-electronic characterization of linear self-reset pixel cells for a high dynamic CMOS image sensor
CN112866590B (zh) 一种减小图像传感器时序电路误差值的方法
US9894288B2 (en) Image forming method for forming a high-resolution image, and a related image forming apparatus and image forming program
CN103873791A (zh) 像素单元读出电路及其方法、像素阵列读出电路及其方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20920437

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20920437

Country of ref document: EP

Kind code of ref document: A1