CN115152198B - Image sensor and related electronic device - Google Patents

Image sensor and related electronic device

Info

Publication number
CN115152198B
CN115152198B (application CN202080033580.8A)
Authority
CN
China
Prior art keywords
pixel
sub
pixels
image sensor
floating diffusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202080033580.8A
Other languages
Chinese (zh)
Other versions
CN115152198A (en)
Inventor
赵维民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Goodix Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Goodix Technology Co Ltd
Publication of CN115152198A
Application granted granted Critical
Publication of CN115152198B

Classifications

    All classifications fall under H (Electricity), H04 (Electric communication technique), H04N (Pictorial communication, e.g. television):
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/12 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H04N 25/46 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise

Abstract

The application discloses an image sensor and a related electronic device. The image sensor is coupled to an image processor that creates an image based on sensing data provided by the image sensor. The image sensor comprises a pixel array comprising: a first floating diffusion node; and a first pixel (110_1) including a first sub-pixel (r) and a second sub-pixel (c) that share the first floating diffusion node, wherein the first sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, the second sub-pixel is not one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, and the first floating diffusion node is disposed between the first sub-pixel and the second sub-pixel.

Description

Image sensor and related electronic device
Technical Field
The present disclosure relates to sensors, and more particularly, to an image sensor and an electronic device.
Background
Image sensors have been mass produced and are in wide use. The performance of an image sensor is generally evaluated at least in terms of its light sensing capability in low light and its signal-to-noise ratio. Thus, an innovative design is needed that improves the light sensing capability of an image sensor in low light while also improving its signal-to-noise ratio.
Disclosure of Invention
An object of the present application is to disclose a sensor, and more particularly an image sensor and a related electronic device, that solves the above-mentioned problems.
An embodiment of the present application discloses an image sensor coupled to an image processor, the image processor creating an image based on sensing data provided by the image sensor. The image sensor comprises a pixel array comprising: a first floating diffusion node; and a first pixel comprising a first sub-pixel and a second sub-pixel that share the first floating diffusion node, wherein the first sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, the second sub-pixel is not one of the red sub-pixel, the green sub-pixel and the blue sub-pixel, and the first floating diffusion node is disposed between the first sub-pixel and the second sub-pixel.
An embodiment of the application discloses an electronic device. The electronic device comprises the image processor and the image sensor.
The image sensor disclosed by the application further improves on the quad Bayer pattern so as to reduce the difference in light sensing capability among the pixels, allowing a longer exposure time and thereby improving the signal-to-noise ratio of the red, green or blue data. In addition, since each pixel has a sub-pixel with strong light sensing capability, for example a white sub-pixel, the light sensing capability of the image sensor in low light is also improved.
Drawings
Fig. 1 is a schematic diagram of an embodiment of an image sensor of the present application.
Fig. 2 is a schematic diagram of an embodiment of a pixel group of the present application.
Fig. 3A is a schematic diagram of another embodiment of a pixel set according to the present application.
Fig. 3B is a schematic diagram of another embodiment of a pixel set according to the present application.
Fig. 4 is a schematic diagram of still another embodiment of a pixel set according to the present application.
Fig. 5 is a schematic diagram of still another embodiment of a pixel set of the present application.
Fig. 6A is a timing diagram of an embodiment of the signals of the present application.
FIG. 6B is a timing diagram of another embodiment of the signals of the present application.
Fig. 7 is a timing diagram of yet another embodiment of the signals of the present application.
FIG. 8A is a timing diagram of yet another embodiment of the signals of the present application.
Fig. 8B is a timing diagram of yet another embodiment of the signals of the present application.
Fig. 9 is a timing diagram of yet another embodiment of the signals of the present application.
FIG. 10 is a timing diagram of yet another embodiment of the signals of the present application.
Fig. 11 is a schematic diagram of an embodiment of the image processor and the image sensor shown in fig. 1 applied to an electronic device.
Detailed Description
The following disclosure provides various embodiments or examples that can be used to implement the various features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. It is to be understood that these descriptions are merely exemplary and are not intended to limit the present disclosure. For example, in the following description, forming a first feature on or over a second feature may include certain embodiments in which the first and second features are in direct contact with each other; and may include embodiments in which additional components are formed between the first and second features such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. Such reuse is for brevity and clarity purposes and does not itself represent a relationship between the different embodiments and/or configurations discussed.
Moreover, spatially relative terms, such as "under," "below," "lower," "upper," and the like, may be used herein to facilitate a description of the relationship between one element or feature to another element or feature as illustrated in the figures. These spatially relative terms are intended to encompass a variety of different orientations of the device in use or operation in addition to the orientation depicted in the figures. The device may be placed in other orientations (e.g., rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein interpreted accordingly.
Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain standard deviations arising from its respective testing measurement. As used herein, "the same" generally means that an actual value is within plus or minus 10%, 5%, 1%, or 0.5% of a particular value or range. Alternatively, the term means that an actual value falls within an acceptable standard deviation of the mean, as determined by persons of ordinary skill in the art to which the present application pertains. Unless otherwise specifically indicated, all ranges, amounts, values and percentages used herein (e.g., to describe amounts of materials, lengths of time, temperatures, operating conditions, ratios of amounts, and the like) are to be understood as modified by the term "the same". Accordingly, unless indicated to the contrary, the numerical parameters set forth in the present specification and the attached claims are approximations that may vary depending upon the desired properties. At a minimum, each numerical parameter should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Herein, a numerical range is expressed as extending from one endpoint to another or as lying between two endpoints; unless otherwise indicated, all numerical ranges recited herein include their endpoints.
A conventional pixel group is often formed in a Bayer pattern, for example one red pixel (R), two green pixels (G) and one blue pixel (B), abbreviated RGGB.
In addition, to enhance light sensing capability under low light, at least one pixel in the Bayer pattern has recently been replaced by a clear pixel C (clear pixel), from which other improved patterns for implementing the pixel group are derived. The clear pixel C has a higher light sensing capability than each pixel in the Bayer pattern; for example, the clear pixel C may be a white pixel (W) or a yellow pixel (Y). A derived improved pattern may be composed of one red pixel, two yellow pixels (Y) and one blue pixel, abbreviated RYYB, or, for example, one red pixel, one green pixel, one white pixel (W) and one blue pixel, abbreviated RGWB.
In addition to changing the pattern of the pixel group as described above, each pixel in the pixel group has recently been further divided into a plurality of identical sub-pixels sharing a floating diffusion node, for example four sub-pixels, so as to enhance flexibility of use by letting the user choose between high resolution and a high signal-to-noise ratio.
As described above, since the clear pixel C has a strong light sensing capability, the overall exposure time of the above-mentioned improved patterns is limited in order to avoid overexposing the clear pixel C. However, the light sensing capability of the red, green or blue pixel is about one half or less of that of the clear pixel C, so if the red, green or blue pixel is exposed according to the exposure time of the clear pixel C, the amount of red, green or blue data that the pixel group can obtain is greatly reduced, which results in a poor signal-to-noise ratio for the red, green or blue data.
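For illustration only, the following sketch makes this limitation concrete by computing how full each color sub-pixel's charge well gets when the shared exposure is capped by a white sub-pixel. The relative light sensing values (blue = 1) are those quoted in the embodiments below; FULL_WELL, PHOTON_FLUX and the function name are illustrative assumptions, not part of the disclosure.

```python
# Sketch: exposure time capped by the most sensitive (white) subpixel.
# Relative light sensing values (blue = 1) are from this description;
# FULL_WELL and PHOTON_FLUX are assumed, illustrative numbers.
FULL_WELL = 10000.0    # electrons a subpixel well can hold (assumed)
PHOTON_FLUX = 1000.0   # electrons/second collected by a blue subpixel (assumed)

sensitivity = {"r": 1.12, "g": 1.40, "b": 1.00, "w": 3.08}

def saturation_time(color: str) -> float:
    """Exposure time at which this subpixel fills its well."""
    return FULL_WELL / (PHOTON_FLUX * sensitivity[color])

# A shared exposure must end before the white subpixel saturates.
t_shared = saturation_time("w")   # about 3.25 s with the assumed numbers
for color in "rgb":
    fill = PHOTON_FLUX * sensitivity[color] * t_shared / FULL_WELL
    print(f"{color}: well {fill:.0%} full at the white-limited exposure time")
# Prints roughly r: 36%, g: 45%, b: 32% -> weak color signal, poor SNR.
```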
In order to solve the above problems, the present application distributes the plurality of clear sub-pixels of a clear pixel C among the red pixel R, green pixel G and blue pixel B of the same pixel group, so as to reduce the difference in light sensing capability between the pixels of the pixel group, thereby improving the light sensing capability under low light while increasing the amount of red, green or blue data.
Fig. 1 is a schematic diagram of an embodiment of an image sensor 100 of the present application coupled to an image processor 50, wherein the image processor 50 creates an image based on sensing data provided by the image sensor 100. Referring to fig. 1, the image sensor 100 includes a pixel array (not shown), which comprises a plurality of pixels 110, and a control signal generating circuit 120. For simplicity of illustration, only one pixel 110 is shown in fig. 1.
The pixel 110 includes a plurality of sub-pixels 112 and a pixel circuit 114, wherein the plurality of sub-pixels 112 includes sub-pixels 112_1 to 112_n, where n is a positive integer greater than 1. In the present embodiment, n = 4 is taken as an example; that is, the pixel 110 includes four sub-pixels 112. The sub-pixels 112_1 to 112_4 share the same floating diffusion node FD, and in the circuit layout the floating diffusion node FD may be located between the sub-pixels 112_1 to 112_4. That is, the pixel 110 has a four-shared (four shared) pixel structure.
In some embodiments, the sub-pixel 112 includes a photodiode PD and a microlens 160 (as shown in fig. 2). The microlens 160 focuses light entering the sub-pixel 112 onto the photodiode PD, which converts the optical signal into an electrical signal. The photodiode PD may be a photodiode with electrons as the main carrier or a photodiode with holes as the main carrier. Furthermore, it is noted that the photodiode PD is intended to cover essentially any type of photon or light detecting element, such as a photogate or other photosensitive region.
The pixel circuit 114 is selectively coupled to the sub-pixels 112_1 to 112_4 and selectively outputs to the image processor 50 a column output signal related to the charges generated by the sub-pixels 112_1 to 112_4; that is, the column output signal carries the sensing data provided by the image sensor 100. The pixel circuit 114 includes transfer gates 116_1, 116_2, 116_3 and 116_4, a reset gate 117, a source follower 118, a row selection gate 119, and a capacitor 120.
The transfer gates 116_1, 116_2, 116_3 and 116_4 are controlled by control signals TG1, TG2, TG3 and TG4 provided by the control signal generating circuit 120, respectively, to selectively transfer the charges generated by the corresponding sub-pixels 112_1 to 112_4 to the floating diffusion node FD. The electric energy carried by the transferred charge is stored in the capacitor 120, which is coupled between the floating diffusion node FD and a reference voltage 190, establishing an initial sensing voltage at the floating diffusion node FD. The initial sensing voltage is amplified by the source follower 118, which is coupled to the reference voltage VDD, and the amplified sensing voltage is output at the drain of the source follower 118. The row selection gate 119 is controlled by a row selection signal SEL supplied by a row selector (not shown) to selectively output the amplified sensing voltage to the image processor 50 as the column output signal, and the image processor 50 builds an image based on the column output signals. The reset gate 117 is controlled by a reset signal RST supplied by the control signal generating circuit 120 to selectively clear the charge on the floating diffusion node FD using the reference voltage VDD.
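For illustration only, the readout chain just described can be summarized with a small behavioral model in Python. This is a sketch under simplifying assumptions: unity source-follower gain, an assumed VDD, and an assumed floating diffusion capacitance; the class and method names and all numeric values are illustrative and not part of the disclosure.

```python
# Behavioral sketch of the four-shared pixel of fig. 1 (assumptions noted above).
VDD = 2.8        # reference voltage, volts (assumed value)
C_FD = 2e-15     # floating diffusion capacitance, farads (assumed value)
Q_E = 1.602e-19  # elementary charge, coulombs

class SharedPixel:
    def __init__(self):
        self.pd = [0, 0, 0, 0]  # electrons held by subpixels 112_1..112_4
        self.v_fd = VDD         # voltage at floating diffusion node FD

    def reset(self):
        """RST high: reset gate 117 clears FD back to VDD."""
        self.v_fd = VDD

    def transfer(self, i: int):
        """TG(i+1) high: transfer gate moves the subpixel's charge onto FD."""
        self.v_fd -= self.pd[i] * Q_E / C_FD  # voltage drop VQ(i+1)
        self.pd[i] = 0

    def read(self) -> float:
        """SEL high: source follower 118 drives the column output."""
        return self.v_fd  # unity gain assumed for simplicity

p = SharedPixel()
p.pd = [5000, 5200, 5100, 14000]  # example electron counts after exposure
p.reset()
p.transfer(3)     # TG4: clear subpixel's charge to FD
v_t3 = p.read()   # column output based on (VDD - VQ4), as in fig. 6A below
```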
A plurality of pixels 110 may constitute a single pixel group 140 (as shown in fig. 2), and the pixel array may be formed by repeatedly arranging such pixel groups. In the present embodiment, a single pixel group 140 is composed of four pixels 110 (i.e., 110_1, 110_2, 110_3 and 110_4 as shown in fig. 2).
The pixel group 140 of the present application has a specific pattern such that the difference in light sensing capability between any two pixels 110 of the same pixel group 140 is smaller than that between any two pixels of a comparison pixel group having the high light sensing pattern RGWB; the overall exposure time of the pixel group 140 can therefore be longer than that of the comparison pixel group, increasing the amount of red, green or blue data. In addition, the light sensing capability of the pixel 110 is superior to that of the red pixel R, the green pixel G or the blue pixel B, so an improvement in light sensing capability under low light is also achieved.
The specific pattern of the pixel group 140 is illustrated by the embodiments of figs. 2, 3A, 3B, 4 and 5, wherein the number of clear sub-pixels c (shown in fig. 2) per pixel 110 in fig. 2 is smaller than that in figs. 3A and 3B, which in turn is smaller than that in fig. 4. The specific pattern of fig. 5 can be applied to phase detection autofocus, as described in detail below.
The pattern design of the pixel group 140 determines the choice of signal combination. For different pattern designs of the pixel group 140, different signal combinations can be composed by adjusting the relative timing of the trigger potentials of the control signals TG1, TG2, TG3 and TG4 and the reset signal RST, and the pixel circuit 114 provides column output signals with different physical meanings in response to different signal combinations. Specifically, the pattern design of either of figs. 2 and 4 may be operated using the signal combination of fig. 6A, 6B, 8A or 8B; the pattern design of fig. 3B may be operated using the signal combination of fig. 7 or 9; and the pattern design of fig. 5 may be operated using the signal combination of fig. 10. The signal combination for the pattern design of fig. 3A is similar to that of fig. 7 or 9 and is not repeated here.
In general, in the present application, the column output signals from the pixel circuits 114 can provide the standard red, green and blue data required by the Bayer pattern, as well as contour information for enhancing the image generated by the image processor 50. Note that in figs. 6A to 10 the sub-pixels 112_1 to 112_4 are continuously exposed over the entire time axis.
Fig. 2 is a schematic diagram of an embodiment of a pixel group 140 according to the present application. Referring to fig. 2, to distinguish sub-pixels from the red pixel R, green pixel G, blue pixel B, clear pixel C, white pixel W and yellow pixel Y, the red, green, blue, clear, white and yellow sub-pixels are labeled r, g, b, c, w and y, respectively.
The red sub-pixel r, green sub-pixel g and blue sub-pixel b are used in the same manner as the red pixel R, green pixel G and blue pixel B, respectively, to provide the standard red, green and blue data required by the Bayer pattern.
In order to increase the light sensing capability under low light, a clear sub-pixel c is disposed in the pixel group 140. The clear sub-pixel c is not one of the red sub-pixel r, the green sub-pixel g and the blue sub-pixel b. The clear sub-pixel c can absorb at least two of red, green and blue light; accordingly, its light sensing capability is better than that of the red sub-pixel r, the green sub-pixel g and the blue sub-pixel b, so the contour information of the image generated by the image processor 50 can be enhanced. In some embodiments, the clear sub-pixel c may be a white sub-pixel w, a yellow sub-pixel y, a cyan sub-pixel or a magenta sub-pixel.
In order to improve the signal-to-noise ratio by reducing the imbalance in light sensing capability among the pixels 110_1, 110_2, 110_3 and 110_4, a clear sub-pixel c is further provided in each of the pixels 110_1 to 110_4, and the pixels 110_1 to 110_4 include the same number of clear sub-pixels c. In this embodiment, each of the pixels 110_1 to 110_4 includes one clear sub-pixel c.
Taking the pixel 110_1 as an example: its upper-left, upper-right and lower-left corners are red sub-pixels r, and its lower-right corner is a clear sub-pixel c; these correspond to the sub-pixels 112_1, 112_2, 112_3 and 112_4 of fig. 1, respectively. The remaining pixels 110_2, 110_3 and 110_4 are arranged analogously. Accordingly, the pixel 110_1 may be referred to as a modified red pixel R' where appropriate. Similarly, the pixels 110_2 and 110_3 may be referred to as modified green pixels G', and the pixel 110_4 as a modified blue pixel B'. In the pixel group 140 of fig. 2, the two modified green pixels G' are disposed diagonally, and the modified red pixel R' and the modified blue pixel B' are disposed diagonally.
To better show that the pattern of the pixel group 140 of fig. 2 reduces the imbalance in light sensing capability among the pixels 110_1, 110_2, 110_3 and 110_4, the light sensing capability is further quantified in Tables 1 and 2.
Tables 1 and 2 exemplarily illustrate the difference in light sensing balance between the present embodiment and the existing high light sensing pattern RGWB, wherein, to align with the structure of the pixels 110, the pixels of the comparison pixel group having the high light sensing pattern RGWB are also implemented using the four-shared pixel structure. For example, in the high light sensing pattern RGWB, the red pixel R is composed of four red sub-pixels r, and the green pixel G, blue pixel B and white pixel W are composed analogously.
For ease of understanding, the light sensing value of the blue sub-pixel b is taken as the reference and set to 1. Relative to the blue sub-pixel b, the red sub-pixel r has a light sensing value of 1.12, the green sub-pixel g a value of 1.4, the yellow sub-pixel y a value of 2, and the white sub-pixel w a value of 3.08.
Table 1 exemplarily illustrates the average light sensing values of the pixel group 140 of fig. 2, computed from the sub-pixel light sensing values given above (the clear sub-pixel c is taken as a white sub-pixel w, consistent with the stated values).

Table 1

Pixel        Sub-pixel composition    Average light sensing value
110_1 (R')   r, r, r, w               (1.12 + 1.12 + 1.12 + 3.08) / 4 = 1.61
110_2 (G')   g, g, g, w               (1.4 + 1.4 + 1.4 + 3.08) / 4 = 1.82
110_3 (G')   g, g, g, w               (1.4 + 1.4 + 1.4 + 3.08) / 4 = 1.82
110_4 (B')   b, b, b, w               (1 + 1 + 1 + 3.08) / 4 = 1.52
As can be seen from Table 1, the absolute value of the difference in average light sensing value is largest between pixels 110_3 and 110_4, i.e., 0.3, and smallest between pixels 110_2 and 110_3, i.e., 0.
Table 2 exemplarily illustrates the average light sensing values of a comparison pixel group having the high light sensing pattern RGWB. As described above, each pixel of the comparison pixel group is composed of sub-pixels of the same color; therefore, the average light sensing values of the red pixel R, green pixel G, blue pixel B and white pixel W equal the light sensing values of the red sub-pixel r, green sub-pixel g, blue sub-pixel b and white sub-pixel w, respectively.

Table 2

Pixel   Sub-pixel composition   Average light sensing value
R       r, r, r, r              1.12
G       g, g, g, g              1.4
B       b, b, b, b              1
W       w, w, w, w              3.08
As can be seen from Table 2, the maximum and minimum absolute values of the difference between any two average light sensing values are 2.08 and 0.12, respectively.
Comparing Tables 1 and 2, the maximum absolute difference between average light sensing values in the pixel group 140 is smaller than that in the comparison pixel group with the high light sensing pattern RGWB, which means the imbalance in light sensing capability of the pixel group 140 is reduced. It also means that no particular pixel (e.g., the white pixel W) in the pixel group 140 has a light sensing capability significantly greater than the other pixels, e.g., about two times greater or more. Therefore, the limitation originally imposed by the white pixel W can be relaxed to increase the exposure time, thereby improving the signal-to-noise ratio of the red, green or blue data.
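For illustration only, the comparison of Tables 1 and 2 can be reproduced directly from the sub-pixel light sensing values quoted above; the following sketch does so (the variable and function names are illustrative).

```python
# Sketch reproducing the imbalance comparison of Tables 1 and 2.
from itertools import combinations

s = {"r": 1.12, "g": 1.40, "b": 1.00, "w": 3.08}  # values quoted above

def avg(subpixels: str) -> float:
    """Average light sensing value of a four-shared pixel."""
    return sum(s[x] for x in subpixels) / len(subpixels)

# Pixel group 140 of fig. 2: three color subpixels plus one white subpixel.
group_140 = [avg("rrrw"), avg("gggw"), avg("gggw"), avg("bbbw")]
# Comparison RGWB group: each pixel made of four same-color subpixels.
group_rgwb = [avg("rrrr"), avg("gggg"), avg("wwww"), avg("bbbb")]

def max_imbalance(group) -> float:
    """Largest absolute difference between any two average values."""
    return max(abs(a - b) for a, b in combinations(group, 2))

print(round(max_imbalance(group_140), 2))   # 0.3  (Table 1)
print(round(max_imbalance(group_rgwb), 2))  # 2.08 (Table 2)
```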
Fig. 3A is a schematic diagram of another embodiment of a pixel group 140 according to the present application. Referring to fig. 3A, the pixel group 140 of fig. 3A is similar to the pixel group 140 of fig. 2, except that each single pixel 110 of fig. 3A has two clear sub-pixels c, arranged diagonally. In this embodiment, the clear sub-pixel c is a white sub-pixel w. Table 3 exemplarily illustrates the average light sensing values of the pixel group 140 of fig. 3A, computed as above.

Table 3

Pixel        Sub-pixel composition    Average light sensing value
110_1 (R')   r, r, w, w               (1.12 + 1.12 + 3.08 + 3.08) / 4 = 2.1
110_2 (G')   g, g, w, w               (1.4 + 1.4 + 3.08 + 3.08) / 4 = 2.24
110_3 (G')   g, g, w, w               (1.4 + 1.4 + 3.08 + 3.08) / 4 = 2.24
110_4 (B')   b, b, w, w               (1 + 1 + 3.08 + 3.08) / 4 = 2.04
As can be seen from Table 3, the absolute value of the difference in average light sensing value is largest between pixels 110_3 and 110_4, i.e., 0.2. Comparing Tables 2 and 3, the maximum absolute difference between average light sensing values of the pixel group 140 of fig. 3A is smaller than that of the comparison pixel group with the high light sensing pattern RGWB. Therefore, the limitation originally imposed by the white pixel W can be relaxed to increase the exposure time, thereby improving the signal-to-noise ratio of the red, green or blue data.
In the present embodiment, the two clear sub-pixels c are disposed at the upper left and lower right of each pixel 110. However, the present disclosure is not limited thereto. In some embodiments, the two clear sub-pixels c may instead be disposed at the upper right and lower left of each pixel 110. In other embodiments, the pixel group 140 may combine these two arrangements of clear sub-pixels c in any manner.
Fig. 3B is a schematic diagram of another embodiment of a pixel group 140 according to the present application. Referring to fig. 3B, the pixel group 140 of fig. 3B is similar to the pixel group 140 of fig. 3A, except that the two clear sub-pixels c of fig. 3B are disposed adjacently. In some embodiments, the pixel group 140 may combine the arrangement of clear sub-pixels c of fig. 3B with that of fig. 3A in any manner.
Fig. 4 is a schematic diagram of still another embodiment of a pixel group 140 according to the present application. Referring to fig. 4, the pixel group 140 of fig. 4 is similar to the pixel group 140 of fig. 3B, except that each single pixel 110 of fig. 4 has three clear sub-pixels c. In this embodiment, the clear sub-pixel c is a white sub-pixel w.
Table 4 exemplarily illustrates the average light sensing values of the pixel group 140 of fig. 4, computed as above.

Table 4

Pixel        Sub-pixel composition    Average light sensing value
110_1 (R')   r, w, w, w               (1.12 + 3.08 + 3.08 + 3.08) / 4 = 2.59
110_2 (G')   g, w, w, w               (1.4 + 3.08 + 3.08 + 3.08) / 4 = 2.66
110_3 (G')   g, w, w, w               (1.4 + 3.08 + 3.08 + 3.08) / 4 = 2.66
110_4 (B')   b, w, w, w               (1 + 3.08 + 3.08 + 3.08) / 4 = 2.56
As can be seen from Table 4, the absolute value of the difference in average light sensing value is largest between pixels 110_3 and 110_4, i.e., 0.1. Comparing Tables 2 and 4, the maximum absolute difference between average light sensing values of the pixel group 140 of fig. 4 is smaller than that of the comparison pixel group with the high light sensing pattern RGWB. Therefore, the limitation originally imposed by the white pixel W can be relaxed to increase the exposure time, thereby improving the signal-to-noise ratio of the red, green or blue data.
In the present embodiment, the red, green or blue sub-pixel r, g or b of each pixel 110 is disposed in the lower-right corner. However, the present disclosure is not limited thereto; the red, green or blue sub-pixel r, g or b may instead be disposed at the upper left, lower left or upper right of each pixel 110. Furthermore, the red, green or blue sub-pixels r, g or b need not occupy the same position in every pixel 110; for example, the red sub-pixel r may be disposed at the upper-left corner of the pixel 110_1 while the green sub-pixel g is disposed at the upper-right corner of the pixel 110_2.
Fig. 5 is a schematic diagram of still another embodiment of a pixel group 140 according to the present application. Referring to fig. 5, the pixel group 140 of fig. 5 is similar to the pixel group 140 of fig. 2, except that the clear sub-pixel c of the pixel 110_1 and the green sub-pixel g of the pixel 110_2 further form a phase pixel pair: the clear sub-pixel c of the pixel 110_1 shares an elliptical microlens 162 with the green sub-pixel g of the pixel 110_2.
In other embodiments, the phase pixel pair may be formed from any two sub-pixels of the pixels 110. In addition, in embodiments in which the number of clear sub-pixels c is two or three, phase pixel pairs may be arranged by analogy.
The embodiments of figs. 6A to 10 illustrate in detail how the pixel groups 140 of figs. 2 to 5 provide the standard red, green and blue data required by the Bayer pattern, as well as the contour information of the image generated by the image processor 50. For brevity, in the embodiments of figs. 6A to 9, the pixel 110_1 of the pixel group 140 is taken as an example; the remaining pixels 110_2 to 110_4 operate in the same manner.
Referring to fig. 6A, the signals shown in fig. 6A are used to control the pixel group 140 of fig. 2. Referring to figs. 2 and 6A, initially, before the time point t1, the control signals TG1 to TG4 and the reset signal RST are pulled to the high level. In the present embodiment, the transfer gates 116_1, 116_2, 116_3 and 116_4 and the reset gate 117 are positive-edge-triggered elements. Accordingly, the transfer gates 116_1 to 116_4 and the reset gate 117 are turned on to reset the sub-pixels 112_1, 112_2, 112_3 and 112_4 and the floating diffusion node FD. At time point t1, the voltage of the floating diffusion node FD is the reference voltage VDD.
Between time points t1 and t2, the reset signal RST is pulled high again to reset the floating diffusion node FD once more. At time point t2, the voltage of the floating diffusion node FD is the reference voltage VDD.
Between time points t2 and t3, the control signal generating circuit 120 generates the control signal TG4 according to the exposure time T1 of the lower-right clear sub-pixel c (corresponding to sub-pixel 112_4 of fig. 1) and pulls the control signal TG4 high, so that the charge generated by the lower-right clear sub-pixel c is transferred to the floating diffusion node FD. At time point t3, the voltage of the floating diffusion node FD has dropped from the reference voltage VDD to the voltage (VDD - VQ4), where VQ4 is the voltage drop caused by the charge generated by the clear sub-pixel c. At this time, the row selection gate 119 is turned on, for example in response to the high level of the row selection signal SEL, and the pixel circuit 114 outputs a column output signal based on the voltage (VDD - VQ4) for enhancing the contour information of the image generated by the image processor 50. Note that the charge generated by the clear sub-pixel c remains on the floating diffusion node FD without being cleared.
Between time points t3 and t4, the control signal generating circuit 120 generates the control signals TG1 to TG3 according to the exposure time T2 of the red sub-pixels r and pulls the control signals TG1 to TG3 to the high level, so that the charges generated by the upper-left, upper-right and lower-left red sub-pixels r (corresponding to the sub-pixels 112_1 to 112_3 of fig. 1) are transferred to the floating diffusion node FD and combined and accumulated with the charge generated by the clear sub-pixel c. The voltage of the floating diffusion node FD drops again, to [VDD - (VQ1+VQ2+VQ3+VQ4)]. At this time, the row selection gate 119 is turned on, and the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ1+VQ2+VQ3+VQ4)].
For example, the image processor 50 subtracts the column output signal at time point t4 from the column output signal at time point t3, whereby the combined and accumulated charge data of the three red sub-pixels r (i.e., VQ1+VQ2+VQ3) can be calculated as the standard red data required by the Bayer pattern.
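For illustration only, the following sketch shows the arithmetic of the fig. 6A scheme numerically; the VQ values are assumed example voltage drops, not measured data.

```python
# Sketch of the fig. 6A readout arithmetic with assumed voltage drops.
VDD = 2.8
VQ1, VQ2, VQ3, VQ4 = 0.40, 0.42, 0.41, 1.12  # assumed, volts

v_t3 = VDD - VQ4                      # column output after TG4 (clear subpixel)
v_t4 = VDD - (VQ1 + VQ2 + VQ3 + VQ4)  # after TG1-TG3, binned on FD

contour_data = VDD - v_t3  # = VQ4: contour information for the image processor
red_data = v_t3 - v_t4     # = VQ1 + VQ2 + VQ3: standard red data
```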
In addition, the exposure time T2 is greater than the overall exposure time available to the comparison pixel group having the high light sensing pattern RGWB. Therefore, the standard red, green and blue data collected by the pixel group 140 are more abundant than those of the comparison pixel group, so the red, green and blue data have a better signal-to-noise ratio.
In the embodiment of fig. 6A, the column output signal for the charge of the clear sub-pixel c is output first. However, the present disclosure is not limited thereto. In the embodiment of fig. 6B, the column output signal for the combined and accumulated charge of the three red sub-pixels r is output first. The principle of operation of the signals of fig. 6B is otherwise the same as that of fig. 6A and is not repeated here.
Referring to fig. 7, the signals shown in fig. 7 are used to control the pixel group 140 of fig. 3B. Incidentally, the pixel groups 140 of figs. 3A and 3B have the same number of clear sub-pixels c, so the pixel group of fig. 3A can be operated in a similar manner as long as the times at which the control signals TG1 to TG4 are pulled to the high level are adjusted accordingly; the details are omitted here.
At time point t3, based on the charges generated by the lower-left and lower-right clear sub-pixels c, the voltage of the floating diffusion node FD has dropped from the reference voltage VDD to the voltage [VDD - (VQ3+VQ4)], where VQ3 and VQ4 are the voltage drops caused by the charges generated by the two clear sub-pixels c, respectively. At this time, the pixel circuit 114 outputs a column output signal representing the combined and accumulated charge data of the two clear sub-pixels c.
At time point t4, the charges generated by exposure of the upper-left and upper-right red sub-pixels r have been transferred to the floating diffusion node FD and combined and accumulated with the charges generated by the two clear sub-pixels c. The voltage of the floating diffusion node FD thus drops to [VDD - (VQ1+VQ2+VQ3+VQ4)], where VQ1 and VQ2 are the voltage drops caused by the charges generated by the two red sub-pixels r, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ1+VQ2+VQ3+VQ4)].
For example, by having the image processor 50 take the difference between the column output signals at time points t3 and t4, the combined and accumulated charge data of the two red sub-pixels r (i.e., VQ1+VQ2) can be calculated.
In the embodiment of fig. 7, the column output signal for the charge of the clear sub-pixels c is output first. However, the present disclosure is not limited thereto; in some embodiments, the column output signal for the charge of the red sub-pixels r may be output first.
In the embodiments of figs. 8A, 8B and 9, charges are not combined and accumulated across readouts. Therefore, no mathematical operation is needed to separate the charge data of, for example, the red sub-pixel r and the clear sub-pixel c.
Referring to fig. 8A, the signals shown in fig. 8A are used to control the pixel group 140 of fig. 2. Referring to figs. 2 and 8A, the operation before time point t3 is the same as in fig. 6A and is not repeated here. Between time points t3 and t4, the reset signal RST is pulled high, turning on the reset gate 117, and the charge of the clear sub-pixel c collected at the floating diffusion node FD in response to the control signal TG4 is cleared. The floating diffusion node FD is thus reset from the voltage (VDD - VQ4) to the reference voltage VDD.
Between time points t4 and t5, the control signal generating circuit 120 generates the control signals TG1 to TG3 according to the exposure time T2 of the upper-left, upper-right and lower-left red sub-pixels r, and pulls the control signals TG1 to TG3 to the high level, so that the charges generated by the three red sub-pixels r are transferred to the floating diffusion node FD and accumulated. The voltage of the floating diffusion node FD drops from the reference voltage VDD to [VDD - (VQ1+VQ2+VQ3)]. At this time, the row selection gate 119 is turned on, and the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ1+VQ2+VQ3)].
Briefly, in the embodiment of fig. 8A, the column output signal for the charge of the clear sub-pixel c is output first. However, the present disclosure is not limited thereto; for example, in the embodiment of fig. 8B, the column output signal for the charge of the red sub-pixels r is output first. The principle of operation of the signals of fig. 8B is otherwise the same as that of fig. 8A and is not repeated here.
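For illustration only, and in contrast to the fig. 6A scheme, the following sketch shows the arithmetic of the fig. 8A scheme: because the reset clears the floating diffusion node FD between the two readouts, each column output already isolates one quantity and no subtraction between readouts is required. The VQ values are again assumed examples.

```python
# Sketch of the fig. 8A readout, with a reset between the two readouts.
VDD = 2.8
VQ1, VQ2, VQ3, VQ4 = 0.40, 0.42, 0.41, 1.12  # assumed, volts

v_t3 = VDD - VQ4                # clear subpixel read first
# RST between t3 and t4 restores FD to VDD.
v_t5 = VDD - (VQ1 + VQ2 + VQ3)  # three red subpixels, read after the reset

contour_data = VDD - v_t3  # = VQ4, obtained directly
red_data = VDD - v_t5      # = VQ1 + VQ2 + VQ3, no subtraction of readouts needed
```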
Referring to fig. 9, the signals shown in fig. 9 are used to control the pixel group 140 of fig. 3B. At time point t3, based on the combined and accumulated charges of the lower-left and lower-right clear sub-pixels c, the voltage of the floating diffusion node FD has dropped from the reference voltage VDD to the voltage [VDD - (VQ3+VQ4)], where VQ3 and VQ4 are the voltage drops caused by the charges generated by the two clear sub-pixels c, respectively. At this time, the pixel circuit 114 outputs a column output signal representing the combined and accumulated charges of the two clear sub-pixels c. At time point t4, the combined and accumulated charge of the two clear sub-pixels c at the floating diffusion node FD is cleared, and the floating diffusion node FD is reset to the reference voltage VDD. At time point t5, based on the combined and accumulated charges of the two red sub-pixels r, the voltage of the floating diffusion node FD has dropped from the reference voltage VDD to the voltage [VDD - (VQ1+VQ2)], where VQ1 and VQ2 are the voltage drops caused by the charges generated by the two red sub-pixels r, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ1+VQ2)].
Briefly, in the embodiment of fig. 9, the column output signal for the charge of the clear sub-pixels c is output first. However, the present disclosure is not limited thereto; in some embodiments, the column output signal for the charge of the red sub-pixels r may be output first.
Referring to fig. 10, the signals shown in fig. 10 are used to control the pixel group 140 of fig. 5. In fig. 5, the clear sub-pixel c and the green sub-pixel g constituting the phase detection pixel pair are located in two different pixels 110_1 and 110_2. Accordingly, at least the two pixels 110_1 and 110_2 need to operate according to the signals of fig. 10 to obtain the two pieces of phase detection data required by the phase detection autofocus function, as described below.
For the pixel 110_1, referring to figs. 5 and 10, the operation before time point t3 is the same as in fig. 6A and is not repeated here. Between time points t3 and t4, the control signal generating circuit 120 generates the control signal TG3 according to the exposure time T2 of the lower-left red sub-pixel r and pulls the control signal TG3 to the high level, so that the charge generated by that red sub-pixel r is combined and accumulated with the charge collected at the floating diffusion node FD in response to the control signal TG4. At time point t4, based on the combined and accumulated charges of the clear sub-pixel c and the red sub-pixel r, the voltage of the floating diffusion node FD has dropped from the voltage (VDD - VQ4) to the voltage [VDD - (VQ3+VQ4)], where VQ3 and VQ4 are the voltage drops caused by the charges generated by the red sub-pixel r and the clear sub-pixel c, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ3+VQ4)].
Between time points t4 and t5, the control signal generating circuit 120 generates the control signals TG1 and TG2 according to the exposure time T3 of the upper-left and upper-right red sub-pixels r, and pulls the control signals TG1 and TG2 to the high level, so that the charges generated by the upper-left and upper-right red sub-pixels r are combined and accumulated with the charge already on the floating diffusion node FD. At time point t5, as a result, the voltage of the floating diffusion node FD has dropped from the voltage [VDD - (VQ3+VQ4)] to the voltage [VDD - (VQ1+VQ2+VQ3+VQ4)], where VQ1 and VQ2 are the voltage drops caused by the charges generated by the upper-left and upper-right red sub-pixels r, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ1+VQ2+VQ3+VQ4)]. It should be noted that the difference between the exposure times T2 and T3 is exaggerated in fig. 10; in some embodiments it is insignificant, which means VQ1 and VQ2 are similar to VQ3.
For example, the image processor 50 can determine the charge data of the first phase detection pixel (i.e., the clear sub-pixel c of the pixel 110_1) based on the column output signal at time point t3. The image processor 50 then performs mathematical operations on the column output signals at time points t3, t4 and t5 to calculate the combined and accumulated charge data of the three red sub-pixels r.
For the pixel 110_2, referring to figs. 5 and 10, the operation before time point t3 is the same as in fig. 6A and is not repeated here. It should be noted that VQ4 at time point t3 is the voltage drop caused by the charge generated by the clear sub-pixel c, but the clear sub-pixel c of the pixel 110_2 is not used as a phase detection pixel, so VQ4 cannot serve as charge data of a phase detection pixel. Between time points t3 and t4, the control signal TG3 is pulled high to combine and accumulate the charge generated by the lower-left green sub-pixel g (i.e., the second phase detection pixel) with the charge of the clear sub-pixel c collected at the floating diffusion node FD in response to the control signal TG4. Accordingly, the pixel circuit 114 outputs a column output signal based on the voltage [VDD - (VQ3+VQ4)], where VQ3 and VQ4 are the voltage drops caused by the charges generated by the lower-left green sub-pixel g and the clear sub-pixel c, respectively. The operation of the pixel 110_2 at the remaining time points is the same as that of the pixel 110_1. The image processor 50 then performs mathematical operations on the column output signals at time points t3, t4 and t5 to calculate the combined and accumulated charge data of the three green sub-pixels g.
In addition, for example, the image processor 50 performs a mathematical operation on the column output signals at time points t3 and t4 to calculate the charge data of the lower-left green sub-pixel g, thereby obtaining the charge data of the second phase detection pixel. The image processor 50 can then complete the phase detection autofocus function based on the charge data of the clear sub-pixel c at the lower right of the pixel 110_1 and of the green sub-pixel g at the lower left of the pixel 110_2, as sketched below.
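For illustration only, the following sketch shows numerically how the two pieces of phase detection data are recovered from the fig. 10 readouts. The column output voltages are assumed examples, and the final pairing step is only indicative, since the actual autofocus computation performed by the image processor 50 is outside the scope of this description.

```python
# Sketch of phase-data extraction for the fig. 10 scheme (assumed voltages).
VDD = 2.8

# Pixel 110_1: its clear subpixel c is the first phase detection pixel.
v1_t3, v1_t4, v1_t5 = 1.70, 1.28, 0.44  # assumed column outputs
phase_data_1 = VDD - v1_t3              # VQ4 of 110_1 (clear subpixel c)
red_data = v1_t3 - v1_t5                # VQ1+VQ2+VQ3 of 110_1's red subpixels

# Pixel 110_2: its lower-left green subpixel g is the second phase detection
# pixel; its clear subpixel (read at t3) is not usable as phase data.
v2_t3, v2_t4, v2_t5 = 1.70, 1.26, 0.40  # assumed column outputs
phase_data_2 = v2_t3 - v2_t4            # VQ3 of 110_2 (green subpixel g)
green_data = v2_t3 - v2_t5              # VQ1+VQ2+VQ3 of 110_2's green subpixels

# The image processor compares the pair to drive phase detection autofocus.
phase_pair = (phase_data_1, phase_data_2)
```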
The signals shown in fig. 10 are for illustration only. For the various pixel group patterns, any signal scheme that obtains the charge data of the phase detection pixels through combining, accumulating and mathematical operations falls within the scope of the present application.
Fig. 11 is a schematic diagram of an embodiment in which the image processor 50 and the image sensor 100 shown in fig. 1 are applied to an electronic device 60. Referring to fig. 11, the electronic device 60 may be any electronic device such as a smartphone, a personal digital assistant, a handheld computer system, or a tablet computer.
The foregoing description briefly sets forth features of certain embodiments of the present disclosure to provide a more thorough understanding of the various aspects of the present disclosure to those skilled in the art to which the present disclosure pertains. It will be appreciated by those skilled in the art that the present disclosure may be readily utilized as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments described herein. Those skilled in the art should understand that they can make various changes, substitutions and alterations herein without departing from the spirit and scope of the disclosure.

Claims (14)

1. An image sensor coupled to an image processor that creates an image based on sensed data provided by the image sensor, the image sensor comprising:
a pixel array comprising:
a first floating diffusion node;
a first pixel composed of a first sub-pixel and a second sub-pixel, wherein the first sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, the second sub-pixel is not one of the red sub-pixel, the green sub-pixel and the blue sub-pixel, the first floating diffusion node is disposed between the first sub-pixel and the second sub-pixel, and the second sub-pixel serves as a first phase detection pixel of a phase pixel pair;
a second floating diffusion node;
a second pixel composed of a third sub-pixel and a fourth sub-pixel, wherein the third sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, the color of the third sub-pixel is different from that of the first sub-pixel, the fourth sub-pixel is not one of the red sub-pixel, the green sub-pixel and the blue sub-pixel, and the third sub-pixel serves as a second phase detection pixel of the phase pixel pair;
a microlens shared by the second subpixel and the third subpixel; and
a control circuit shared by the first sub-pixel and the second sub-pixel, wherein the control circuit is configured to:
generating a first control signal according to a first exposure time to control the charge of one of the first sub-pixel and the second sub-pixel to be output to the first floating diffusion node; and generating a second control signal according to a second exposure time to control the charge of the other of the first sub-pixel and the second sub-pixel to be output to the first floating diffusion node and to be combined and accumulated with the charge collected at the first floating diffusion node in response to the first control signal; or
generating a first control signal according to the first exposure time to control the charge of one of the first sub-pixel and the second sub-pixel to be output to the first floating diffusion node; generating a reset signal to clear, from the first floating diffusion node, the charge collected in response to the first control signal; and generating a second control signal according to the second exposure time to control the charge of the other of the first sub-pixel and the second sub-pixel to be output to the first floating diffusion node;
wherein the first exposure time and the second exposure time are determined according to the first light sensing capability of the first sub-pixel and the second light sensing capability of the second sub-pixel.
2. The image sensor of claim 1, wherein the second sub-pixel and the fourth sub-pixel have the same light sensing capability.
3. The image sensor of claim 2, wherein the second sub-pixel and the fourth sub-pixel have a higher light sensing capability than any of the red sub-pixel, the green sub-pixel and the blue sub-pixel.
4. The image sensor of claim 3, wherein the first pixel has a first average light sensing value and the second pixel has a second average light sensing value, and an absolute value of a difference between the first average light sensing value and the second average light sensing value is less than an absolute value of a difference between light sensing values of any two of the red sub-pixel, the green sub-pixel, and the blue sub-pixel.
5. The image sensor of claim 2, wherein the second subpixel and the fourth subpixel are both white subpixels.
6. The image sensor of claim 1, wherein the number of second sub-pixels is the same as the number of fourth sub-pixels.
7. The image sensor of claim 1, wherein the first pixel comprises three first sub-pixels and one second sub-pixel arranged in a 2 x 2 array, and the second pixel comprises three third sub-pixels and one fourth sub-pixel arranged in a 2 x 2 array, wherein the microlens is shared by the second sub-pixel and one of the third sub-pixels.
8. The image sensor of claim 1, wherein the first pixel comprises two first sub-pixels and two second sub-pixels arranged in a 2 x 2 array, and the second pixel comprises two third sub-pixels and two fourth sub-pixels arranged in a 2 x 2 array, wherein the microlens is shared by one of the second sub-pixels and one of the third sub-pixels.
9. The image sensor of claim 8, wherein the two first sub-pixels are disposed adjacently or diagonally, and the two third sub-pixels are disposed adjacently or diagonally.
10. The image sensor of claim 1, wherein the first pixel comprises one first sub-pixel and three second sub-pixels arranged in a 2 x 2 array, and the second pixel comprises one third sub-pixel and three fourth sub-pixels arranged in a 2 x 2 array, wherein the microlens is shared by one of the second sub-pixels and the third sub-pixel.
11. The image sensor of claim 1, wherein the second sub-pixel and the fourth sub-pixel are used to enhance the contour information of the image.
12. The image sensor of claim 1, wherein the pixel array further comprises:
a third floating diffusion node;
a third pixel comprising a fifth sub-pixel and a sixth sub-pixel sharing the third floating diffusion node, wherein the fifth sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, and the color of the fifth sub-pixel is different from that of the first sub-pixel and the third sub-pixel; and
a fourth floating diffusion node;
a fourth pixel including a seventh sub-pixel and an eighth sub-pixel sharing the fourth floating diffusion node, wherein the seventh sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, and the color of the seventh sub-pixel is different from that of the first sub-pixel and the third sub-pixel and the same as that of the fifth sub-pixel.
13. The image sensor of claim 12, wherein the seventh subpixel and the fifth subpixel are green subpixels.
14. An electronic device, comprising:
the image sensor of any one of claims 1-13; and
the image processor.
CN202080033580.8A 2020-07-14 2020-07-14 Image sensor and related electronic device Active CN115152198B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/101880 WO2022011547A1 (en) 2020-07-14 2020-07-14 Image sensor and related electronic device

Publications (2)

Publication Number Publication Date
CN115152198A CN115152198A (en) 2022-10-04
CN115152198B (en) 2024-02-09

Family

ID=79554395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080033580.8A Active CN115152198B (en) 2020-07-14 2020-07-14 Image sensor and related electronic device

Country Status (2)

Country Link
CN (1) CN115152198B (en)
WO (1) WO2022011547A1 (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014126903A (en) * 2012-12-25 2014-07-07 Toshiba Corp Image processing apparatus, image processing method, and program
CN105578079B (en) * 2015-12-18 2017-11-17 广东欧珀移动通信有限公司 Imaging sensor and picture quality regulation method, imaging device and method and mobile terminal
CN105430362B (en) * 2015-12-18 2017-09-19 广东欧珀移动通信有限公司 Imaging method, imaging device and electronic installation
CN105578080B (en) * 2015-12-18 2019-02-05 Oppo广东移动通信有限公司 Imaging method, imaging device and electronic device
CN105578006B (en) * 2015-12-18 2018-02-13 广东欧珀移动通信有限公司 Imaging method, imaging device and electronic installation
CN105578078B (en) * 2015-12-18 2018-01-19 广东欧珀移动通信有限公司 Imaging sensor, imaging device, mobile terminal and imaging method
CN105430361B (en) * 2015-12-18 2018-03-20 广东欧珀移动通信有限公司 Imaging method, imaging sensor, imaging device and electronic installation
CN105611125B (en) * 2015-12-18 2018-04-10 广东欧珀移动通信有限公司 Imaging method, imaging device and electronic installation
CN105592303B (en) * 2015-12-18 2018-09-11 广东欧珀移动通信有限公司 imaging method, imaging device and electronic device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104272727A (en) * 2012-05-14 2015-01-07 索尼公司 Imaging device and imaging method, electronic apparatus, as well as program
CN104780321A (en) * 2014-01-10 2015-07-15 全视科技有限公司 Method for capturing image data, HDR imaging system for use and pixel
CN105611257A (en) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 Imaging method, image sensor, imaging device and electronic device

Also Published As

Publication number Publication date
CN115152198A (en) 2022-10-04
WO2022011547A1 (en) 2022-01-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant