CN115152198A - Image sensor and related electronic device


Info

Publication number
CN115152198A
Authority
CN
China
Prior art keywords
pixel
sub
subpixel
image sensor
subpixels
Prior art date
Legal status
Granted
Application number
CN202080033580.8A
Other languages
Chinese (zh)
Other versions
CN115152198B (en)
Inventor
赵维民
Current Assignee
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Goodix Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Goodix Technology Co Ltd filed Critical Shenzhen Goodix Technology Co Ltd
Publication of CN115152198A publication Critical patent/CN115152198A/en
Application granted granted Critical
Publication of CN115152198B publication Critical patent/CN115152198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N 23/12 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/46 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

The application discloses an image sensor and a related electronic device. The image sensor is coupled to an image processor, and the image processor creates an image based on sensing data provided by the image sensor. The image sensor includes a pixel array comprising: a first floating diffusion node; and a first pixel (110_1) including a first sub-pixel (r) and a second sub-pixel (c) that share the first floating diffusion node, the first sub-pixel being one of a red sub-pixel, a green sub-pixel, and a blue sub-pixel, the second sub-pixel being none of a red sub-pixel, a green sub-pixel, and a blue sub-pixel, and the first floating diffusion node being disposed between the first sub-pixel and the second sub-pixel.

Description

Image sensor and related electronic device
Technical Field
The present disclosure relates to sensors, and more particularly, to an image sensor and an electronic device using the same.
Background
Image sensors are mass-produced and widely used. In evaluating the performance of an image sensor, at least its sensitivity under low light and its signal-to-noise ratio are usually considered. An innovative design is therefore needed that improves the light sensing capability of the image sensor under low light while also improving its signal-to-noise ratio.
Disclosure of Invention
An object of the present application is to disclose a sensor, more particularly an image sensor and a related electronic device, so as to solve the above problems.
An embodiment of the present application discloses an image sensor coupled to an image processor, the image processor creating an image based on sensing data provided by the image sensor, the image sensor comprising: a pixel array comprising: a first floating diffusion node; and a first pixel including a first sub-pixel and a second sub-pixel sharing the first floating diffusion node, wherein the first sub-pixel is one of a red sub-pixel, a green sub-pixel and a blue sub-pixel, the second sub-pixel is not one of the red sub-pixel, the green sub-pixel and the blue sub-pixel, and the first floating diffusion node is disposed between the first sub-pixel and the second sub-pixel.
An embodiment of the present application discloses an electronic device. The electronic device comprises the image processor and the image sensor.
The image sensor disclosed in the present application improves on the quad Bayer (Tetracell) pattern to reduce the difference in light sensitivity among the pixels, which allows a longer exposure time and thereby improves the signal-to-noise ratio of the red, green, or blue data. In addition, each of the pixels has a sub-pixel with strong light sensing capability, such as a white sub-pixel, so the light sensing capability of the image sensor under low light is also improved.
Drawings
Fig. 1 is a schematic diagram of an embodiment of an image sensor of the present application.
Fig. 2 is a schematic diagram of an embodiment of a pixel group according to the present application.
Fig. 3A is a schematic diagram of another embodiment of a pixel group according to the present application.
Fig. 3B is a schematic diagram of a pixel group according to another embodiment of the present application.
Fig. 4 is a schematic diagram of a pixel group according to still another embodiment of the present application.
Fig. 5 is a schematic diagram of a pixel group according to still another embodiment of the present application.
FIG. 6A is a timing diagram of an embodiment of signals of the present application.
FIG. 6B is a timing diagram of another embodiment of signals of the present application.
FIG. 7 is a timing diagram of yet another embodiment of signals of the present application.
FIG. 8A is a timing diagram of still another embodiment of signals of the present application.
FIG. 8B is a timing diagram of still another embodiment of signals of the present application.
FIG. 9 is a timing diagram of yet another embodiment of signals of the present application.
FIG. 10 is a timing diagram of yet another embodiment of signals of the present application.
FIG. 11 is a diagram illustrating an embodiment of an electronic device employing the image processor and the image sensor shown in FIG. 1.
Detailed Description
The following disclosure provides various embodiments or illustrations that can be used to implement various features of the disclosure. The embodiments of components and arrangements described below serve to simplify the present disclosure. It is to be understood that such descriptions are merely illustrative and are not intended to limit the present disclosure. For example, in the description that follows, forming a first feature on or over a second feature may include certain embodiments in which the first and second features are in direct contact with each other; and may also include embodiments in which additional elements are formed between the first and second features described above, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or characters in the various embodiments. Such reuse is for brevity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Moreover, spatially relative terms, such as "under," "below," "over," "above," and the like, may be used herein to facilitate describing a relationship between one element or feature relative to another element or feature as illustrated in the figures. These spatially relative terms are intended to encompass a variety of different orientations of the device in use or operation in addition to the orientation depicted in the figures. The device may be otherwise oriented (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
Although the numerical ranges and parameters setting forth the broad scope of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain standard deviations found in its respective testing measurement. As used herein, "substantially the same" generally means that the actual value is within 10%, 5%, 1%, or 0.5% of a particular value or range. Alternatively, the term means that the actual value falls within the acceptable standard error of the mean, as considered by those of ordinary skill in the art to which this application pertains. It is understood that all ranges, amounts, values, and percentages used herein (e.g., to describe amounts of materials, lengths of time, temperatures, operating conditions, quantitative ratios, and the like) are "substantially the same" unless otherwise specifically indicated. Accordingly, unless indicated to the contrary, the numerical parameters set forth in the specification and attached claims are approximations that may vary depending upon the desired properties sought. At the very least, each numerical parameter should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Herein, numerical ranges are expressed from one endpoint to the other or between the two endpoints; unless otherwise indicated, all numerical ranges set forth herein include the endpoints.
Conventional pixel groups are mostly arranged in a Bayer pattern, which includes, for example, one red pixel (R), two green pixels (G), and one blue pixel (B), abbreviated RGGB.
In addition, to improve light sensitivity under low light, at least one pixel in the Bayer pattern has recently been replaced by a clear pixel C, from which other improved patterns are derived for the pixel group. The photosensitivity of the clear pixel C is higher than that of each pixel in the Bayer pattern; for example, the clear pixel C may be a white pixel (W) or a yellow pixel (Y). A derived improved pattern may consist of one red pixel, two yellow pixels, and one blue pixel, abbreviated RYYB, or, for example, of one red pixel, one green pixel, one white pixel, and one blue pixel, abbreviated RGWB.
Besides changing the pattern of the pixel group, each pixel in the pixel group has recently been divided into a plurality of same-color sub-pixels sharing a floating diffusion node, such as four sub-pixels, to give the user more flexibility in choosing between high resolution and a high signal-to-noise ratio.
As mentioned above, due to the strong light sensitivity of the clear pixel C, the overall exposure time of these improved patterns is limited in order to avoid over-exposing the clear pixel C. However, the light sensing capability of the red, green, or blue pixel is only about half that of the clear pixel C or less, so if the red, green, or blue pixel is exposed according to the exposure time of the clear pixel C, the red, green, or blue data obtained by the pixel group is greatly reduced, and its signal-to-noise ratio is poor.
In order to solve the above problem, the clear subpixels that would otherwise form a clear pixel C are scattered among the red pixel R, the green pixel G, and the blue pixel B of the same pixel group to reduce the difference in light sensing capability between the pixels of the group, so that the light sensing capability under low light is improved and the amount of red, green, or blue data is increased.
Fig. 1 is a schematic diagram of an embodiment of an image sensor 100 coupled to an image processor 50 according to the present application, wherein the image processor 50 creates an image based on sensing data provided by the image sensor 100. Referring to fig. 1, the image sensor 100 includes a pixel array (not shown) composed of a plurality of pixels 110, and a control signal generating circuit 120. For simplicity, only one pixel 110 is illustrated in fig. 1.
The pixel 110 includes a plurality of sub-pixels 112 and a pixel circuit 114. The plurality of sub-pixels 112 includes sub-pixels 112_1 to 112_n, where n is a positive integer greater than 1; in the present embodiment, n = 4 is used for illustration, that is, the pixel 110 includes four sub-pixels 112. The sub-pixels 112_1 to 112_4 share the same floating diffusion node FD, and in the circuit layout the floating diffusion node FD may be located between the sub-pixels 112_1 to 112_4. That is, the pixel 110 has a four-shared pixel structure.
In some embodiments, the sub-pixel 112 includes a photodiode PD and a microlens 160 (as shown in fig. 2). The microlens 160 is used to focus light entering the sub-pixel 112 onto the photodiode PD, which is used to convert an optical signal into an electrical signal. The photodiode PD may be a photodiode with electrons as a dominant carrier or a photodiode with holes as a dominant carrier. Further, it is noted that the photodiode PD is intended to encompass substantially any type of photonic or light detecting element, such as a photogate or other photosensitive region.
The pixel circuit 114 is configured to be selectively coupled to the sub-pixels 112_1 to 112_4 and to selectively output a column output signal related to the charges generated by the sub-pixels 112_1 to 112_4 to the image processor 50; that is, the column output signal carries the sensing data provided by the image sensor 100. The pixel circuit 114 includes transfer gates 116_1, 116_2, 116_3, and 116_4, a reset gate 117, a source follower 118, a row select gate 119, and a capacitor 120.
The transfer gates 116_1, 116_2, 116_3, and 116_4 are controlled by control signals TG1, TG2, TG3, and TG4, respectively, provided by the control signal generating circuit 120, to selectively transfer the charges generated by the respective sub-pixels 112_1 to 112_4 to the floating diffusion node FD. The electric energy due to the charge is stored in the capacitor 120, which is coupled between the floating diffusion node FD and the reference voltage 190, establishing the initial sensing voltage at the floating diffusion node FD. The initial sensing voltage is amplified by the source follower 118 coupled to the reference voltage VDD, and the amplified sensing voltage is output by the source follower 118. The row select gate 119 is controlled by a row select signal SEL provided by a row selector (not shown) to selectively output the amplified sensing voltage as the column output signal to the image processor 50, and the image processor 50 builds an image based on the column output signals. The reset gate 117 is controlled by a reset signal RST provided by the control signal generating circuit 120 to selectively clear the charges on the floating diffusion node FD using the reference voltage VDD.
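To make the readout chain concrete, the following is a minimal behavioral sketch of the four-shared pixel described above; it is illustrative only, and the class name, the value of VDD, and the charge numbers are assumptions rather than anything specified by the present application:

```python
# Behavioral sketch of the four-shared pixel of Fig. 1 (illustrative only;
# VDD and all charge values are assumed numbers, not from the application).

VDD = 2.8  # reference/reset voltage (assumed)

class FourSharedPixel:
    """Models sub-pixels 112_1 to 112_4 sharing one floating diffusion node FD."""

    def __init__(self):
        self.fd_voltage = 0.0    # voltage on the shared floating diffusion node FD
        self.charge = [0.0] * 4  # charge of each sub-pixel, expressed directly as
                                 # the FD voltage drop (VQ1 to VQ4) it would cause

    def reset(self):
        """Reset gate 117 (RST): restore FD to the reference voltage VDD."""
        self.fd_voltage = VDD

    def transfer(self, indices):
        """Transfer gates 116_1 to 116_4 (TG1 to TG4): move the selected
        sub-pixels' charge onto FD. Charge already on FD is kept, so
        successive transfers combine and accumulate on the shared node."""
        for i in indices:
            self.fd_voltage -= self.charge[i]
            self.charge[i] = 0.0

    def read(self):
        """Source follower 118 + row select gate 119 (SEL): column output."""
        return self.fd_voltage

# Readout ordering of Fig. 6A: clear sub-pixel first, then the three color
# sub-pixels binned on top of it (see the timing discussion below).
px = FourSharedPixel()
px.charge = [0.10, 0.10, 0.10, 0.25]  # three red sub-pixels r and one clear c
px.reset()
px.transfer([3])        # TG4: clear sub-pixel c
s_t3 = px.read()        # VDD - VQ4
px.transfer([0, 1, 2])  # TG1 to TG3: three red sub-pixels r
s_t4 = px.read()        # VDD - (VQ1 + VQ2 + VQ3 + VQ4)
```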
A plurality of pixels 110 may constitute a single pixel group 140 (shown in fig. 2), and the pixel array may be formed by repeatedly arranging such pixel groups. In the present embodiment, a single pixel group 140 is formed by four pixels 110 (i.e., 110_1, 110_2, 110_3, and 110_4, as shown in fig. 2).
The pixel group 140 of the present invention has a specific pattern such that the difference between the light sensing capabilities of any two pixels 110 in the same pixel group 140 is smaller than that between any two pixels of a comparison pixel group having the high light sensing pattern RGWB; therefore, the overall exposure time of the pixel group 140 can be longer than that of the comparison pixel group, increasing the amount of red, green, or blue data. In addition, the light sensing capability of the pixel 110 is superior to that of the red pixel R, the green pixel G, or the blue pixel B, so the light sensing capability under low light is also improved.
The specific pattern of the pixel group 140 is illustrated in the embodiments of figs. 2, 3A, 3B, 4, and 5. The number of clear subpixels c per pixel 110 (shown in fig. 2) is smaller in fig. 2 than in figs. 3A and 3B, which in turn is smaller than in fig. 4. The specific pattern of fig. 5 can be applied in phase detection autofocus, as described in detail below.
The pattern design of the pixel group 140 determines the selection of the signal combinations described below. For different pixel groups 140, different signal combinations can be formed by adjusting the relative timing of the trigger potentials of the control signals TG1, TG2, TG3, and TG4 and the reset signal RST, and the pixel circuits 114 provide column output signals with different physical meanings in response to different signal combinations. Specifically, the pattern designs of figs. 2 and 4 can be operated using the signals of fig. 6A, 6B, 8A, or 8B, the pattern design of fig. 3B can be operated using the signals of fig. 7 or 9, and the pattern design of fig. 5 can be operated using the signals of fig. 10. The signal combinations for the pattern design of fig. 3A are similar to those of fig. 7 or 9 and are not repeated herein.
In summary, in the present invention, the column output signals output by the pixel circuits 114 can provide the standard red, green, and blue data required by the Bayer pattern, and can also be used to enhance the contour information of the image generated by the image processor 50. Note that the sub-pixels 112_1 to 112_4 are continuously exposed over the entire time axis in figs. 6A to 10.
Fig. 2 is a schematic diagram of an embodiment of a pixel group 140 according to the present application. Referring to fig. 2, to distinguish sub-pixels from the red pixel R, green pixel G, blue pixel B, clear pixel C, white pixel W, and yellow pixel Y, a red sub-pixel is denoted by r, a green sub-pixel by g, a blue sub-pixel by b, a clear sub-pixel by c, a white sub-pixel by w, and a yellow sub-pixel by y.
The red, green, and blue sub-pixels r, g, and b provide the standard red, green, and blue data required by the Bayer pattern, just as the red, green, and blue pixels R, G, and B do.
In order to increase the light sensing capability under low light, clear subpixels c are provided in the pixel group 140. The clear sub-pixel c is not one of the red sub-pixel r, the green sub-pixel g, and the blue sub-pixel b: it can absorb at least two of the red, green, and blue lights, so its light sensing capability is better than that of the red, green, and blue sub-pixels, and it can be used to improve the contour information of the image generated by the image processor 50. In some embodiments, the clear subpixel c may be a white subpixel w, a yellow subpixel y, a cyan subpixel, or a magenta subpixel.
In order to improve the signal-to-noise ratio by reducing the imbalance of the light sensing capability among the pixels 110_1, 110_2, 110_3, and 110_4, a clear subpixel c is provided in each of the pixels 110_1 to 110_4, and the pixels 110_1 to 110_4 include the same number of clear subpixels c. In the present embodiment, each of the pixels 110_1 to 110_4 includes one clear subpixel c.
Taking the pixel 110_1 as an example, its upper left, upper right, and lower left corners are red subpixels r, and its lower right corner is the clear subpixel c; these correspond to the subpixels 112_1, 112_2, 112_3, and 112_4 of fig. 1, respectively. The remaining pixels 110_2, 110_3, and 110_4 follow the same convention. Accordingly, pixel 110_1 may be referred to as a modified red pixel R'. Similarly, pixels 110_2 and 110_3 may be referred to as modified green pixels G', and pixel 110_4 as a modified blue pixel B'. In the pixel group 140 of fig. 2, the two modified green pixels G' are arranged diagonally, and the modified red pixel R' and the modified blue pixel B' are arranged diagonally.
To better understand how the pattern of the pixel group 140 of fig. 2 reduces the imbalance of the light sensing capability among the pixels 110_1, 110_2, 110_3, and 110_4, the light sensing capability is quantified in Table 1 and Table 2.
Tables 1 and 2 illustrate the difference in the balance of photosensitivity between the present embodiment and the conventional high photosensitive pattern RGWB. To align with the structure of the pixel 110, the pixels of the comparison pixel group having the high photosensitive pattern RGWB are also implemented using a four-shared pixel structure: in the high photosensitive pattern RGWB, the red pixel R is composed of four red sub-pixels r, and the green pixel G, the blue pixel B, and the white pixel W are composed analogously.
For ease of understanding, the light sensing capability of the blue subpixel b is taken as the reference, and its photosensitive value is set to 1. Relative to the blue subpixel b, the photosensitive value of the red subpixel r is 1.12, that of the green subpixel g is 1.4, that of the yellow subpixel y is 2, and that of the white subpixel w is 3.08.
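The average photosensitive values tabulated below follow directly from these per-sub-pixel numbers. As a quick arithmetic check, a short sketch using only the values quoted above (the function name is ours, not the application's):

```python
# Per-sub-pixel photosensitive values quoted above (blue normalized to 1).
SENSITIVITY = {'r': 1.12, 'g': 1.4, 'b': 1.0, 'y': 2.0, 'w': 3.08}

def average_sensitivity(subpixels):
    """Average photosensitive value of a 2x2 pixel, e.g. 'rrrw'."""
    return sum(SENSITIVITY[s] for s in subpixels) / len(subpixels)

# Pattern of Fig. 2: three color sub-pixels plus one white sub-pixel per pixel.
print(round(average_sensitivity('rrrw'), 2))  # 1.61 -> pixel 110_1 (R')
print(round(average_sensitivity('gggw'), 2))  # 1.82 -> pixels 110_2, 110_3 (G')
print(round(average_sensitivity('bbbw'), 2))  # 1.52 -> pixel 110_4 (B')
```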
Table 1 illustrates the average photosensitive values of the pixel group 140 of fig. 2.
Pixel        Sub-pixel composition   Average photosensitive value
110_1 (R')   r, r, r, w              (3 × 1.12 + 3.08) / 4 = 1.61
110_2 (G')   g, g, g, w              (3 × 1.4 + 3.08) / 4 = 1.82
110_3 (G')   g, g, g, w              1.82
110_4 (B')   b, b, b, w              (3 × 1 + 3.08) / 4 = 1.52

Table 1
It can be observed from Table 1 that the absolute value of the difference in average photosensitive values is largest between pixels 110_3 and 110_4, i.e., 0.3, and smallest between pixels 110_2 and 110_3, i.e., 0.
Table 2 illustrates the average photosensitive values of the comparison pixel group having the high photosensitive pattern RGWB. As described above, each pixel of the comparison pixel group is composed of sub-pixels of the same color; therefore, the average photosensitive values of the red pixel R, the green pixel G, the blue pixel B, and the white pixel W equal the photosensitive values of the red subpixel r, the green subpixel g, the blue subpixel b, and the white subpixel w, respectively.
Pixel   Sub-pixel composition   Average photosensitive value
R       r, r, r, r              1.12
G       g, g, g, g              1.4
B       b, b, b, b              1
W       w, w, w, w              3.08

Table 2
It can be observed from Table 2 that the maximum and minimum absolute values of the difference between any two average photosensitive values are 2.08 and 0.12, respectively.
Comparing Table 1 and Table 2 shows that the maximum absolute difference in average photosensitive values of the pixel group 140 is smaller than that of the comparison pixel group having the high photosensitive pattern RGWB, meaning that the imbalance of light sensing capability within the pixel group 140 is reduced. It also means that the pixel group 140 has no specific pixel (such as the white pixel W) whose light sensing capability is significantly larger than the others', for example, more than about twice as large. Therefore, the limitation caused by the white pixel W is removed and the exposure time can be increased, improving the signal-to-noise ratio of the red, green, or blue data.
Fig. 3A is a schematic diagram of another embodiment of a pixel group 140 according to the present application. Referring to fig. 3A, the pixel group 140 of fig. 3A is similar to that of fig. 2, the difference being that each pixel 110 of fig. 3A has two clear subpixels c, arranged diagonally. In the present embodiment, the clear subpixel c is a white subpixel w. Table 3 illustrates the average photosensitive values of the pixel group 140 of fig. 3A.
Pixel        Sub-pixel composition   Average photosensitive value
110_1 (R')   r, r, w, w              (2 × 1.12 + 2 × 3.08) / 4 = 2.1
110_2 (G')   g, g, w, w              (2 × 1.4 + 2 × 3.08) / 4 = 2.24
110_3 (G')   g, g, w, w              2.24
110_4 (B')   b, b, w, w              (2 × 1 + 2 × 3.08) / 4 = 2.04

Table 3
It can be observed from Table 3 that the absolute value of the difference in average photosensitive values is largest between pixels 110_3 and 110_4, i.e., 0.2. Comparing Table 2 and Table 3 shows that the maximum absolute difference in average photosensitive values of the pixel group 140 of fig. 3A is also smaller than that of the comparison pixel group having the high photosensitive pattern RGWB. Therefore, the limitation caused by the white pixel W is removed and the exposure time can be increased, improving the signal-to-noise ratio of the red, green, or blue data.
In the present embodiment, two clear sub-pixels c are respectively disposed at the upper left and lower right of each pixel 110. However, the present disclosure is not limited thereto. In some embodiments, two clear sub-pixels c may be respectively disposed at the upper right and lower left of each pixel 110. In other embodiments, the pixel group 140 may be formed by combining the two aforementioned ways of setting the clear sub-pixel c in any arrangement.
Fig. 3B is a schematic diagram of another embodiment of a pixel group 140 according to the present application. Referring to fig. 3B, the pixel group 140 of fig. 3B is similar to the pixel group 140 of fig. 3A, except that two clear subpixels c of fig. 3B are adjacently disposed. In some embodiments, the pixel group 140 may be formed by arbitrarily arranging and combining the arrangement of the clear subpixel c of fig. 3B and the arrangement of the clear subpixel c of fig. 3A.
Fig. 4 is a schematic diagram of a pixel group 140 according to still another embodiment of the present disclosure. Referring to fig. 4, the pixel group 140 of fig. 4 is similar to that of fig. 3B, the difference being that each pixel 110 of fig. 4 has three clear subpixels c. In the present embodiment, the clear subpixel c is a white subpixel w.
Table 4 illustrates the average photosensitive values of the pixel group 140 of fig. 4.
Pixel        Sub-pixel composition   Average photosensitive value
110_1 (R')   r, w, w, w              (1.12 + 3 × 3.08) / 4 = 2.59
110_2 (G')   g, w, w, w              (1.4 + 3 × 3.08) / 4 = 2.66
110_3 (G')   g, w, w, w              2.66
110_4 (B')   b, w, w, w              (1 + 3 × 3.08) / 4 = 2.56

Table 4
It can be observed from Table 4 that the absolute value of the difference in average photosensitive values is largest between pixels 110_3 and 110_4, i.e., 0.1. Comparing Table 2 and Table 4 shows that the maximum absolute difference in average photosensitive values of the pixel group 140 of fig. 4 is smaller than that of the comparison pixel group having the high photosensitive pattern RGWB. Therefore, the limitation caused by the white pixel W is removed and the exposure time can be increased, improving the signal-to-noise ratio of the red, green, or blue data.
In the present embodiment, the red sub-pixel r, the green sub-pixel g, or the blue sub-pixel b is disposed at the lower right corner of each pixel 110. However, the disclosure is not limited thereto: the color sub-pixel may instead be disposed at the upper left, lower left, or upper right corner, and different pixels 110 may place it at different positions. For example, the red subpixel r may be disposed in the upper left corner of pixel 110_1 while the green subpixel g is disposed in the upper right corner of pixel 110_2.
Fig. 5 is a schematic diagram of yet another embodiment of a pixel group 140 according to the present application. Referring to fig. 5, the pixel group 140 of fig. 5 is similar to that of fig. 2, the difference being that the clear subpixel c of pixel 110_1 further forms a phase pixel pair with the green subpixel g of pixel 110_2, sharing the elliptical microlens 162 with it.
In other embodiments, a phase pixel pair may be formed by any two types of sub-pixels of the pixels 110. In addition, in the embodiments where the number of clear subpixels c is two or three, phase pixel pairs may be arranged in the same way.
The embodiments of figs. 6A to 10 illustrate how the pixel group 140 of figs. 2 to 5 provides the standard red, green, and blue data required by the Bayer pattern, and how the contour information of the image generated by the image processor 50 is enhanced. For simplicity, in the embodiments of figs. 6A to 9, the pixel 110_1 of the pixel group 140 is taken as an example; the remaining pixels 110_2 to 110_4 operate in the same manner.
Referring to fig. 6A, the signals illustrated in fig. 6A are used to control the pixel group 140 of fig. 2. Referring to figs. 2 and 6A, before the time point t1 the control signals TG1 to TG4 and the reset signal RST are pulled to a high level. In the present embodiment, the transfer gates 116_1, 116_2, 116_3, and 116_4 and the reset gate 117 are positive-edge-triggered elements; they are therefore turned on to reset the subpixels 112_1, 112_2, 112_3, and 112_4 and the floating diffusion node FD. At time point t1, the voltage of the floating diffusion node FD is the reference voltage VDD.
Between time points t1 and t2, the reset signal RST is pulled high again, resetting the floating diffusion node FD once more. At time point t2, the voltage of the floating diffusion node FD is the reference voltage VDD.
Between time points t2 and t3, the control signal generating circuit 120 generates the control signal TG4 according to the exposure time T1 of the clear subpixel c at the lower right corner (corresponding to the subpixel 112_4 in fig. 1) and pulls TG4 to a high level, so that the charges generated by that clear subpixel c are transferred to the floating diffusion node FD. At time point t3, the voltage of the floating diffusion node FD drops from the reference voltage VDD to a voltage (VDD − VQ4), where VQ4 is the voltage drop caused by the charges generated by the clear subpixel c. At this time, the row select gate 119 is turned on in response to, for example, the high level of the row select signal SEL, and the pixel circuit 114 outputs a column output signal based on the voltage (VDD − VQ4); this signal is used to enhance the contour information of the image generated by the image processor 50. Note that the charge generated by the clear subpixel c remains on the floating diffusion node FD and is not cleared.
Between time points t3 and t4, the control signal generating circuit 120 generates the control signals TG1 to TG3 according to the exposure time T2 of the red sub-pixels r and pulls them to a high level, so that the charges generated by the three red sub-pixels r at the upper left, upper right, and lower left (corresponding to the sub-pixels 112_1 to 112_3 in fig. 1) are transferred to the floating diffusion node FD and accumulated together with the charges generated by the clear sub-pixel c. The voltage at the floating diffusion node FD drops to VDD − (VQ1 + VQ2 + VQ3 + VQ4). At this time, the row select gate 119 is turned on, and the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ1 + VQ2 + VQ3 + VQ4)].
For example, the image processor 50 subtracts the column output signal at time point t4 from the column output signal at time point t3, yielding the combined accumulated charge data of the three red sub-pixels r (i.e., VQ1 + VQ2 + VQ3) as the standard red data required by the Bayer pattern.
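In terms of the sketch accompanying fig. 1 (with its assumed, illustrative values), this subtraction is simply:

```python
# Sketch of the image processor's arithmetic for Fig. 6A, reusing s_t3 and
# s_t4 from the FourSharedPixel example above (illustrative values).
clear_data = VDD - s_t3  # VQ4: used to enhance contour information
red_data = s_t3 - s_t4   # VQ1 + VQ2 + VQ3: the standard red data
```

The same two-sample subtraction recovers the binned color data of figs. 3A/3B under the signals of fig. 7, where each binned group holds two sub-pixels instead of one and three.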
In addition, the exposure time T2 is longer than the overall exposure time usable by the comparison pixel group having the high photosensitive pattern RGWB. Therefore, the standard red, green, and blue data collected by the pixel group 140 are more abundant than those of the comparison pixel group, giving the red, green, and blue data better signal-to-noise ratios.
In the embodiment of fig. 6A, the column output signal relating to the charge of the clear subpixel c is output first. However, the present disclosure is not limited thereto. In the embodiment of fig. 6B, the column output signals for the combined accumulated charges of the three red subpixels r are output first. The principle of operation according to the signals of fig. 6B is the same as that of fig. 6A, and is not described herein again.
Referring to fig. 7, the signals shown in fig. 7 are used to control the pixel group 140 of fig. 3B. Incidentally, the pixel groups 140 of figs. 3A and 3B have the same number of clear subpixels c, so they can be operated in a similar manner as long as the times at which the control signals TG1 to TG4 are pulled high are adjusted accordingly; details are not repeated.
At time point t3, the voltage of the floating diffusion node FD has decreased from the reference voltage VDD to the voltage VDD − (VQ3 + VQ4) based on the charges generated by the two clear subpixels c at the bottom left and bottom right, where VQ3 and VQ4 are the voltage drops caused by the charges generated by the two clear subpixels c, respectively. At this time, the pixel circuit 114 outputs a column output signal reflecting the combined accumulated charge data of the two clear subpixels c.
At time point t4, the charges generated by the exposure of the two red subpixels r at the upper left and upper right are transferred to the floating diffusion node FD and accumulated together with the charges generated by the two clear subpixels c. The voltage of the floating diffusion node FD then drops to the voltage [VDD − (VQ1 + VQ2 + VQ3 + VQ4)], where VQ1 and VQ2 are the voltage drops caused by the charges generated by the two red sub-pixels r, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ1 + VQ2 + VQ3 + VQ4)].
For example, by subtracting the column output signal at time point t4 from that at time point t3, the image processor 50 can calculate the combined accumulated charge data of the two red subpixels r (i.e., VQ1 + VQ2).
In the embodiment of fig. 7, the column output signal relating to the charge of the clear subpixel c is output first. However, the present disclosure is not limited thereto. In some embodiments, the column output signal for the charge of the red sub-pixel r may also be output first.
In the embodiments of figs. 8A, 8B, and 9, the charges of the color sub-pixels and the clear sub-pixels are not combined and accumulated together. Therefore, no mathematical operation is needed to separate the charge data of, for example, the red sub-pixel r and the clear sub-pixel c.
Referring to fig. 8A, the signals illustrated in fig. 8A are used to control the pixel group 140 of fig. 2. Referring to figs. 2 and 8A together, the operation before time point t3 is the same as in fig. 6A and is not repeated. Between time points t3 and t4, the reset signal RST is pulled high, turning on the reset gate 117 and clearing from the floating diffusion node FD the charges of the clear sub-pixel c that were collected in response to the control signal TG4. The floating diffusion node FD is thereby reset from the voltage (VDD − VQ4) to the reference voltage VDD.
Between time points t4 and t5, the control signal generating circuit 120 generates the control signals TG1 to TG3 according to the exposure time T2 of the red sub-pixels r at the upper left, upper right, and lower left, and pulls them to a high level, so that the charges generated by the three red sub-pixels r are transferred to the floating diffusion node FD and accumulated. The voltage of the floating diffusion node FD drops from the reference voltage VDD to [VDD − (VQ1 + VQ2 + VQ3)]. At this time, the row select gate 119 is turned on, and the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ1 + VQ2 + VQ3)].
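Reusing the FourSharedPixel sketch from fig. 1 (illustrative values again), the reset between the two transfers makes each column output correspond to one charge packet directly, so no subtraction is needed:

```python
# Fig. 8A ordering (sketch): read the clear sub-pixel, reset FD, then read
# the binned red sub-pixels; no mathematical operation is required.
px = FourSharedPixel()
px.charge = [0.10, 0.10, 0.10, 0.25]
px.reset()
px.transfer([3])              # TG4: clear sub-pixel c
clear_data = VDD - px.read()  # VQ4 directly (contour information)
px.reset()                    # RST pulse between t3 and t4 clears FD
px.transfer([0, 1, 2])        # TG1 to TG3: three red sub-pixels r
red_data = VDD - px.read()    # VQ1 + VQ2 + VQ3 directly (standard red data)
```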
Briefly, in the embodiment of fig. 8A, the column output signal related to the charge of the clear sub-pixel c is output first. However, the present disclosure is not limited thereto. For example, in the embodiment of fig. 8B, the column output signal related to the charge of the red sub-pixel r is outputted first. The principle of operation according to the signals of fig. 8B is the same as that of fig. 8A, and is not described herein again.
Referring to fig. 9, the signals shown in fig. 9 are used to control the pixel group 140 of fig. 3B. At time point t3, the voltage of the floating diffusion node FD has decreased from the reference voltage VDD to the voltage VDD − (VQ3 + VQ4) based on the combined accumulated charges of the two clear subpixels c at the bottom left and bottom right, where VQ3 and VQ4 are the voltage drops caused by the charges generated by the two clear subpixels c, respectively; at this time, the pixel circuit 114 outputs a column output signal reflecting the combined accumulated charges of the two clear subpixels c. At time point t4, the combined accumulated charges of the two clear subpixels c on the floating diffusion node FD are cleared, and the floating diffusion node FD is reset to the reference voltage VDD. At time point t5, the voltage of the floating diffusion node FD has decreased from the reference voltage VDD to the voltage VDD − (VQ1 + VQ2) based on the combined accumulated charges of the two red subpixels r, where VQ1 and VQ2 are the voltage drops caused by the charges generated by the two red subpixels r, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ1 + VQ2)].
Briefly, in the embodiment of fig. 9, the column output signal related to the charge of the clear sub-pixel c is output first. However, the present disclosure is not limited thereto. In some embodiments, the column output signal regarding the charge of the red sub-pixel r may also be output first.
Referring to fig. 10, the signals shown in fig. 10 are used to control the pixel group 140 of fig. 5. In fig. 5, the clear subpixel c and the green subpixel g constituting the phase detection pixel pair are located in two different pixels, 110_1 and 110_2. Accordingly, at least the two pixels 110_1 and 110_2 need to be operated according to the signals of fig. 10 to obtain the two sets of phase detection data required by the phase detection autofocus function, which are described below.
As for the pixel 110_1, referring to figs. 5 and 10, the operation before time point t3 is the same as in fig. 6A and is not repeated. Between time points t3 and t4, the control signal generating circuit 120 generates the control signal TG3 according to the exposure time T2 of the red sub-pixel r at the lower left corner and pulls TG3 to a high level, so that the charges generated by that red sub-pixel r are combined and accumulated with the charges of the clear sub-pixel c collected by the floating diffusion node FD in response to the control signal TG4. At time point t4, the voltage of the floating diffusion node FD has decreased from the voltage (VDD − VQ4) to the voltage VDD − (VQ3 + VQ4) based on the combined accumulated charges of the clear subpixel c and the red subpixel r, where VQ3 and VQ4 are the voltage drops caused by the charges generated by the red subpixel r and the clear subpixel c, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ3 + VQ4)].
Between time points t4 and t5, the control signal generating circuit 120 generates the control signals TG1 and TG2 according to the exposure time T3 of the red sub-pixels r at the upper left and upper right corners and pulls them to a high level, so that the charges generated by those red sub-pixels r are combined and accumulated with the charges already on the floating diffusion node FD. At time point t5, based on this further accumulation, the voltage of the floating diffusion node FD has decreased from the voltage [VDD − (VQ3 + VQ4)] to the voltage [VDD − (VQ1 + VQ2 + VQ3 + VQ4)], where VQ1 and VQ2 are the voltage drops caused by the charges generated by the red subpixels r at the upper left and upper right corners, respectively. At this time, the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ1 + VQ2 + VQ3 + VQ4)]. It is noted that the difference between the exposure times T2 and T3 is exaggerated in fig. 10; in some embodiments the difference is not significant, which means that VQ1 and VQ2 approximate VQ3.
For example, the image processor 50 can determine the charge data of the first phase detection pixel (i.e., the clear subpixel c of the pixel 110_1) based on the column output signal at time point t3. The image processor 50 then performs mathematical operations on the column output signals at time points t3, t4, and t5 to calculate the combined accumulated charge data of the three red subpixels r.
As for the pixel 110_2, referring to figs. 5 and 10, the operation before time point t3 is again the same as in fig. 6A. It should be noted that VQ4 at time point t3 is the voltage drop caused by the charge generated by the clear subpixel c of pixel 110_2, but this clear subpixel is not used as a phase detection pixel, so VQ4 cannot serve as phase detection data. Between time points t3 and t4, the control signal TG3 is pulled high, so that the charges generated by the green sub-pixel g at the lower left corner (i.e., the second phase detection pixel) are combined and accumulated with the charges collected by the floating diffusion node FD in response to the control signal TG4. Accordingly, the pixel circuit 114 outputs a column output signal based on the voltage [VDD − (VQ3 + VQ4)], where VQ3 and VQ4 are the voltage drops caused by the charges generated by the green sub-pixel g at the lower left corner and the clear sub-pixel c, respectively. The operation of pixel 110_2 at the remaining time points is the same as that of pixel 110_1; the image processor 50 then performs mathematical operations on the column output signals at time points t3, t4, and t5 to calculate the combined accumulated charge data of the three green sub-pixels g.
For example, the image processor 50 can calculate the charge data of the green sub-pixel g at the lower left corner (i.e., the second phase detection pixel) by a mathematical operation on the column output signals at time points t3 and t4. The image processor 50 can then complete the phase detection autofocus function based on the charge data of the clear sub-pixel c at the lower right corner of pixel 110_1 and of the green sub-pixel g at the lower left corner of pixel 110_2.
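Collecting the arithmetic for the phase pair in one place, a sketch (the function names are ours; s_t3, s_t4, and s_t5 denote one pixel's column outputs at t3, t4, and t5, and VDD is the assumed reference voltage from the earlier sketch):

```python
# Sketch: charge data recovered from the Fig. 10 readouts (illustrative).

def first_phase_pixel(s_t3):
    """Pixel 110_1: the clear sub-pixel c at its lower right corner is
    itself the first phase detection pixel."""
    return VDD - s_t3  # VQ4

def second_phase_pixel(s_t3, s_t4):
    """Pixel 110_2: the green sub-pixel g at its lower left corner is the
    second phase detection pixel; its own clear sub-pixel's contribution
    (VQ4) cancels in the difference."""
    return s_t3 - s_t4  # VQ3

def binned_color_data(s_t3, s_t5):
    """Either pixel: combined accumulated charge of its three same-color
    sub-pixels (red for 110_1, green for 110_2)."""
    return s_t3 - s_t5  # VQ1 + VQ2 + VQ3
```

The image processor 50 then compares the outputs of first_phase_pixel and second_phase_pixel as the two phase samples for autofocus.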
The signals shown in fig. 10 are merely provided as an example. As for the various pixel group patterns, any signal that obtains charge data of the phase detection pixels by combining accumulation and mathematical operation falls within the scope of the present application.
Fig. 11 is a schematic diagram of an embodiment of an electronic device 60 employing the image processor 50 and the image sensor 100 shown in fig. 1. Referring to fig. 11, the electronic device 60 may be any electronic device such as a smart phone, a personal digital assistant, a handheld computer system, or a tablet computer.
The foregoing description has set forth briefly the features of certain embodiments of the present application so that those skilled in the art may more fully appreciate the various aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should understand that they can make various changes, substitutions and alterations herein without departing from the spirit and scope of the present disclosure.

Claims (19)

  1. An image sensor coupled to an image processor, the image processor creating an image based on sensing data provided by the image sensor, the image sensor comprising:
    a pixel array comprising:
    a first floating diffusion node; and
    a first pixel including a first sub-pixel and a second sub-pixel sharing the first floating diffusion node, wherein the first sub-pixel is one of a red sub-pixel, a green sub-pixel, and a blue sub-pixel, the second sub-pixel is not one of a red sub-pixel, a green sub-pixel, and a blue sub-pixel, and the first floating diffusion node is disposed between the first sub-pixel and the second sub-pixel.
  2. The image sensor of claim 1, wherein the pixel array further comprises:
    a second floating diffusion node; and
    a second pixel including a third subpixel and a fourth subpixel sharing the second floating diffusion node, wherein the third subpixel is one of a red subpixel, a green subpixel, and a blue subpixel, the third subpixel has a color different from that of the first subpixel, and the fourth subpixel is not one of a red subpixel, a green subpixel, and a blue subpixel.
  3. The image sensor of claim 2, wherein the second subpixel and the fourth subpixel have the same photosensitivity.
  4. The image sensor of claim 3, wherein the photosensitivity of the second subpixel and the fourth subpixel is better than that of a red subpixel, a green subpixel, or a blue subpixel.
  5. The image sensor of claim 4, wherein the first pixel has a first average photo-sensitivity value, the second pixel has a second average photo-sensitivity value, and an absolute value of a difference between the first average photo-sensitivity value and the second average photo-sensitivity value is less than an absolute value of a difference in photo-sensitivity values between any two of the red, green, and blue sub-pixels.
  6. The image sensor of claim 3, wherein the second subpixel and the fourth subpixel are both white subpixels.
  7. The image sensor of claim 1, wherein the number of the second subpixels is the same as the number of the fourth subpixels.
  8. The image sensor of claim 1, wherein said first pixel comprises three of said first subpixels and one of said second subpixels disposed in a 2×2 arrangement, and said second pixel comprises three of said third subpixels and one of said fourth subpixels disposed in a 2×2 arrangement.
  9. The image sensor of claim 1, wherein said first pixel comprises two of said first subpixels and two of said second subpixels disposed in a 2×2 arrangement, and said second pixel comprises two of said third subpixels and two of said fourth subpixels disposed in a 2×2 arrangement.
  10. The image sensor of claim 9, wherein two of said first subpixels are disposed adjacently or diagonally, and two of said third subpixels are disposed adjacently or diagonally.
  11. The image sensor of claim 1, wherein said first pixel comprises one said first subpixel and three said second subpixels disposed in a 2×2 arrangement, and said second pixel comprises one said third subpixel and three said fourth subpixels disposed in a 2×2 arrangement.
  12. The image sensor of claim 1, further comprising:
    a control circuit shared by the first subpixel and the second subpixel, wherein the control circuit is to:
    generating a first control signal to control the charge output of one of the first sub-pixel and the second sub-pixel to the first floating diffusion node according to a first exposure time; and
    generating a second control signal according to a second exposure time to control the charge of the other of the first sub-pixel and the second sub-pixel to be output to the first floating diffusion node, so as to be combined and accumulated with the charge collected by the first floating diffusion node in response to the first control signal.
  13. The image sensor of claim 1, further comprising:
    a control circuit shared by the first subpixel and the second subpixel, wherein the control circuit is to:
    generating a first control signal to control the charge output of one of the first sub-pixel and the second sub-pixel to the first floating diffusion node according to a first exposure time;
    generating a reset signal to clear from the first floating diffusion node the charge collected in response to the first control signal; and
    generating a second control signal according to a second exposure time to control the charge of the other of the first sub-pixel and the second sub-pixel to be output to the first floating diffusion node.
  14. The image sensor of any of claims 12 or 13, wherein the first exposure time and the second exposure time are determined according to the first average photo-sensitivity value and the second average photo-sensitivity value.
  15. The image sensor of claim 1, wherein the second sub-pixel and the fourth sub-pixel are used to boost contour information of the image.
  16. The image sensor of claim 1, wherein the pixel array further comprises:
    a third floating diffusion node;
    a third pixel including a fifth subpixel and a sixth subpixel, sharing the third floating diffusion node, the fifth subpixel being one of a red subpixel, a green subpixel, and a blue subpixel, the fifth subpixel having a color different from the first subpixel and the third subpixel; and
    a fourth floating diffusion node;
    a fourth pixel including a seventh sub-pixel and an eighth sub-pixel sharing the fourth floating diffusion node, the seventh sub-pixel being one of a red sub-pixel, a green sub-pixel, and a blue sub-pixel, the seventh sub-pixel having a color different from the first sub-pixel and the third sub-pixel and being identical to the fifth sub-pixel.
  17. The image sensor of claim 16, wherein the seventh subpixel and the fifth subpixel are green subpixels.
  18. The image sensor of claim 1, wherein the second sub-pixel serves as a first phase detection pixel in a phase pixel pair, and the third sub-pixel serves as a second phase detection pixel in the phase pixel pair.
  19. An electronic device, comprising:
    the image processor; and
    the image sensor of any one of claims 1-18.
CN202080033580.8A 2020-07-14 2020-07-14 Image sensor and related electronic device Active CN115152198B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/101880 WO2022011547A1 (en) 2020-07-14 2020-07-14 Image sensor and related electronic device

Publications (2)

Publication Number Publication Date
CN115152198A 2022-10-04
CN115152198B 2024-02-09

Family

ID=79554395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080033580.8A Active CN115152198B (en) 2020-07-14 2020-07-14 Image sensor and related electronic device

Country Status (2)

Country Link
CN (1) CN115152198B (en)
WO (1) WO2022011547A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104272727A (en) * 2012-05-14 2015-01-07 索尼公司 Imaging device and imaging method, electronic apparatus, as well as program
CN104780321A (en) * 2014-01-10 2015-07-15 全视科技有限公司 Method for capturing image data, HDR imaging system for use and pixel
CN105611257A (en) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 Imaging method, image sensor, imaging device and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014126903A (en) * 2012-12-25 2014-07-07 Toshiba Corp Image processing apparatus, image processing method, and program
CN105578080B (en) * 2015-12-18 2019-02-05 Oppo广东移动通信有限公司 Imaging method, imaging device and electronic device
CN105592303B (en) * 2015-12-18 2018-09-11 广东欧珀移动通信有限公司 imaging method, imaging device and electronic device
CN105611125B (en) * 2015-12-18 2018-04-10 广东欧珀移动通信有限公司 Imaging method, imaging device and electronic installation
CN105430361B (en) * 2015-12-18 2018-03-20 广东欧珀移动通信有限公司 Imaging method, imaging sensor, imaging device and electronic installation
CN105578078B (en) * 2015-12-18 2018-01-19 广东欧珀移动通信有限公司 Imaging sensor, imaging device, mobile terminal and imaging method
CN105578079B (en) * 2015-12-18 2017-11-17 广东欧珀移动通信有限公司 Imaging sensor and picture quality regulation method, imaging device and method and mobile terminal
CN105578006B (en) * 2015-12-18 2018-02-13 广东欧珀移动通信有限公司 Imaging method, imaging device and electronic installation
CN105430362B (en) * 2015-12-18 2017-09-19 广东欧珀移动通信有限公司 Imaging method, imaging device and electronic installation

Also Published As

Publication number Publication date
CN115152198B (en) 2024-02-09
WO2022011547A1 (en) 2022-01-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant