WO2015045375A1 - Imaging Element and Imaging Device - Google Patents
Imaging Element and Imaging Device
- Publication number: WO2015045375A1
- Authority: WIPO (PCT)
- Prior art keywords: pixel, pixels, light receiving, signal, receiving region
Classifications
- H01L27/14641 — Electronic components shared by two or more pixel-elements, e.g. one amplifier shared by two pixel elements
- H01L27/14627 — Microlenses (optical elements or arrangements associated with the device)
- H01L27/1464 — Back illuminated imager structures
- H04N23/672 — Focus control based on the phase difference signals from the electronic image sensor
- H04N25/134 — Colour filter arrays [CFA] based on three different wavelength filter elements
- H04N25/46 — Extracting pixel data by combining or binning pixels
- H04N25/704 — Pixels specially adapted for focusing, e.g. phase difference pixel sets
- H04N25/778 — Pixel circuitry comprising amplifiers shared between a plurality of pixels
Definitions
- the present invention relates to an imaging element and an imaging apparatus.
- Patent Document 1 Japanese Patent Application Laid-Open No. 2011-77770
- When focus detection pixels are arranged discretely, focus detection accuracy is lower than when they are arranged continuously.
- Moreover, the resulting pixel arrangement differs from a predetermined arrangement such as the Bayer arrangement, and in the conventional technique, converting it to such an arrangement by interpolation processing or the like makes the calculation complicated.
- According to a first aspect of the present invention, an imaging element is provided that includes: two first pixels that are arranged successively in a first direction and detect light of a first color; two second pixels that are arranged successively in a second direction intersecting the first direction, are adjacent to the two first pixels, and detect light of a second color; a plurality of first light receiving regions arranged in the first pixels and divided in the first direction to receive the light of the first color; and a plurality of second light receiving regions arranged in the second pixels and divided in the second direction to receive the light of the second color.
- According to a second aspect, an imaging element is provided that includes a plurality of first pixels corresponding to a first color and a plurality of other pixels corresponding to a color different from the first color, wherein at least some of the plurality of first pixels and of the plurality of other pixels each have two separated light receiving regions.
- According to a third aspect, an imaging apparatus including the imaging element of the first or second aspect is provided.
- FIG. 1 is a diagram showing an overview of an image sensor 100 according to one embodiment. FIG. 2A shows an example of the first pixel 202-1, and FIG. 2B shows an example of the second pixel 202-2 and the third pixel 202-3.
- FIG. 3 is a diagram showing an example of a light receiving unit 200.
- FIGS. 4 and 6 are diagrams showing examples of the array conversion processing in the signal processing unit 210, and FIGS. 5, 7, and 8 show arrangement examples of the conversion pixels 203.
- FIG. 9 shows another example of the light receiving unit 200, and FIGS. 10A to 10D explain other processing examples of the signal processing unit 210.
- FIGS. 11A and 11B are a perspective view and a plan view of the microlens 101.
- FIG. 12 shows another processing example of the signal processing unit 210.
- Further drawings show configuration examples of the light receiving unit 200, an arrangement example including the positioning of the transfer transistor TX and the charge detection unit, a cross section of the image sensor 100, a block diagram of part of the functions of the signal processing unit 210, the relationship between lens characteristics and output signals, and a configuration example of an imaging device 500 according to one embodiment.
- FIG. 1 is a diagram showing an outline of an image sensor 100 according to one embodiment.
- the image sensor 100 includes a light receiving unit 200 in which a plurality of pixels 202 are arranged, and a signal processing unit 210 that processes a signal from the light receiving unit 200.
- Each of the plurality of pixels 202 has a light receiving element such as a photodiode, and accumulates electric charges according to the amount of received light.
- the signal processing unit 210 of this example reads a signal corresponding to the amount of charge accumulated in each pixel 202 and performs a predetermined process.
- the plurality of pixels 202 in this example are arranged in a matrix. That is, the plurality of pixels 202 are arranged along a plurality of rows and a plurality of columns.
- the row direction is illustrated as the x-axis direction
- the column direction is illustrated as the y-axis direction.
- the row direction is an example of the first direction
- the column direction is an example of the second direction.
- the plurality of pixels 202 includes a plurality of first pixels 202-1, a plurality of second pixels 202-2, and a plurality of third pixels 202-3.
- the first pixel 202-1 is a pixel corresponding to the color filter of the first color
- the second pixel 202-2 is a pixel corresponding to the color filter of the second color
- the third pixel 202-3 is a pixel corresponding to the color filter of the third color.
- the first color is green
- the second color is blue
- the third color is red.
- the planar shape of each pixel 202 is a quadrangle, and each side of the pixel 202 is inclined 45 degrees with respect to the first direction and the second direction. In a more specific example, the planar shape of each pixel 202 is a square.
- the plurality of first pixels 202-1 are arranged along both the row direction and the column direction.
- the first pixels 202-1 are arranged so that their vertices are adjacent to each other. With such an arrangement, a region surrounded by four neighboring first pixels 202-1 is formed.
- the second pixel 202-2 and the third pixel 202-3 are provided in a region surrounded by the four first pixels 202-1.
- the shape of each pixel 202 is the same.
- the second pixels 202-2 are arranged along the column direction.
- the third pixel 202-3 is also arranged along the column direction.
- the columns of the second pixels 202-2 and the columns of the third pixels 202-3 are alternately arranged in the row direction. Further, the column of the second pixel 202-2 and the column of the third pixel 202-3 are arranged so as to be shifted by half a pixel in the column direction with respect to the column of the first pixel 202-1.
- FIG. 2A is a diagram illustrating an example of the first pixel 202-1. At least a part of the first pixel 202-1 has two light receiving regions 214 separated from each other. The first light receiving region 214a and the second light receiving region 214b of the first pixel 202-1 are arranged side by side in the row direction. In this example, the two light receiving regions 214 are defined by dividing the region of the first pixel 202-1 into two equal parts by a straight line extending in the column direction. In this example, the straight line is a diagonal line of the first pixel 202-1. An element isolation portion is provided between the light receiving regions 214 so that charges generated according to incident light do not move between the light receiving regions 214. In FIG. 2A, the microlens 101 provided corresponding to the first pixel 202-1 is indicated by a dotted line. The two light receiving regions 214 in this example are provided at different positions in the row direction with respect to the common microlens 101.
- a plurality of first pixels 202-1 having two light receiving regions 214 are arranged adjacent to each other in the row direction.
- the signal processing unit 210 calculates an image plane phase difference in the row direction between the signal from the first light receiving region 214a of the first pixel 202-1 and the signal from the second light receiving region 214b arranged adjacent to each other in the row direction. By detecting, it functions as a focus detection unit that detects the focus state. Since the first pixels 202-1 for detecting the image plane phase difference are arranged adjacent to each other in the row direction, the image plane phase difference in the row direction can be detected with high accuracy. In addition, the light use efficiency can be improved as compared with the method of detecting the image plane phase difference using light shielding.
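As an illustrative sketch of such row-direction phase difference detection (the function name, the sum-of-absolute-differences matching metric, and the search range are assumptions; the patent does not specify a correlation method), the two sub-signal sequences from the first and second light receiving regions could be aligned as follows:

```python
import numpy as np

def detect_phase_shift(left, right, max_shift=8):
    """Estimate the image-plane phase difference (in samples) between the
    sequence of first light receiving region outputs (left) and second
    light receiving region outputs (right) along a run of focus detection
    pixels, by minimizing the sum of absolute differences over shifts."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    best_shift, best_score = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Compare the overlapping region after shifting `left` by s samples.
        if s >= 0:
            a, b = left[s:], right[:len(right) - s]
        else:
            a, b = left[:len(left) + s], right[-s:]
        score = float(np.mean(np.abs(a - b)))
        if score < best_score:
            best_score, best_shift = score, s
    return best_shift
```

A defocused image displaces the left-region and right-region signal sequences in opposite directions under the shared microlens, so the recovered shift indicates the focus state.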
- FIG. 2B is a diagram illustrating an example of the second pixel 202-2 and the third pixel 202-3. At least a part of the second pixel 202-2 and the third pixel 202-3 has two light receiving regions 214 separated from each other.
- the first light receiving region 214a and the second light receiving region 214b of the second pixel 202-2 and the third pixel 202-3 are arranged side by side in the column direction.
- the two light receiving regions 214 are defined by dividing the region of the second pixel 202-2 or the third pixel 202-3 into two equal parts by a straight line extending in the row direction.
- the two light receiving regions 214 of the second pixel 202-2 and the third pixel 202-3 are provided at different positions in the column direction with respect to the common microlens 101.
- a plurality of second pixels 202-2 or third pixels 202-3 having two light receiving regions 214 are arranged adjacent to each other in the column direction.
- the signal processing unit 210 functions as a focus detection unit that detects the focus state by detecting the image plane phase difference in the column direction between the signal from the first light receiving region 214a and the signal from the second light receiving region 214b of second pixels 202-2 or third pixels 202-3 arranged adjacent to each other in the column direction. Since the second pixels 202-2 or third pixels 202-3 for detecting the image plane phase difference are arranged adjacent to each other in the column direction, the image plane phase difference in the column direction can be detected with high accuracy.
- the light use efficiency can be improved as compared with the method of detecting the image plane phase difference using light shielding.
- FIG. 3 is a diagram illustrating an example of the light receiving unit 200.
- all the pixels 202 have two light receiving regions 214.
- the boundary of the light receiving region 214 in each pixel 202 is indicated by a dotted line.
- image data is generated using the outputs of all the pixels 202, and at least some of the outputs of the pixels 202 are used for image plane phase difference detection.
- the signal processing unit 210 can use the pixel 202 at an arbitrary position as the image plane phase difference detection pixel 202.
- the signal processing unit 210 may change the pixel 202 used for image plane phase difference detection at any time.
- the signal processing unit 210 may use the pixel 202 that captures an image of a specific subject as the image plane phase difference detection pixel 202.
- when the subject moves, the signal processing unit 210 may select the image plane phase difference detection pixels 202 so as to follow the change. Further, all the pixels 202 may be used for detecting the image plane phase difference while also being used for generating the image signal. In this example, since light shielding is not used for image plane phase difference detection, the utilization efficiency of incident light does not decrease even if all the pixels 202 are usable for image plane phase difference detection.
- the signal processing unit 210 also functions as an array conversion unit that converts image data based on each pixel signal from the light receiving unit 200 into image data of a predetermined pixel array such as a Bayer array. When array conversion is performed, the signal processing unit 210 adds signals from the two light receiving regions 214 of each pixel 202 to obtain a pixel signal from each pixel 202.
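A minimal sketch of this step with made-up sample values (the array names are illustrative only): the per-pixel sum of the two region outputs feeds array conversion and image generation, while the unsummed sub-signals remain available for phase difference detection.

```python
import numpy as np

# Hypothetical 2x2 patch of pixels; each pixel yields two sub-signals.
a_plane = np.array([[3, 5], [2, 7]])   # outputs of the first light receiving regions
b_plane = np.array([[4, 5], [3, 6]])   # outputs of the second light receiving regions

# Pixel signals used for array conversion / image data: per-pixel sums.
image = a_plane + b_plane

# The same sub-signals, unsummed, are what phase difference detection uses,
# so no incident light is sacrificed to a light-shielded AF pixel.
af_pair = (a_plane, b_plane)
```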
- FIG. 4 is a diagram illustrating an example of array conversion processing in the signal processing unit 210.
- let the columns of the plurality of pixels 202 be numbered m, m + 1, m + 2, ..., m + k, ..., and the rows be numbered n, n + 1, n + 2, ..., n + l, ..., where k and l are integers.
- a process of generating a converted pixel signal of the first converted pixel 203-1 after the array conversion from the pixel signal of the first pixel 202-1 will be described.
- the first pixels 202-1 of this example are arranged in columns where k is 0 or even and rows where l is 0 or even.
- the plurality of first pixels 202-1 include three or more first pixels 202-1 arranged in succession in the first direction.
- three first pixels 202-1 are arranged at positions (m, n + 2), (m + 2, n + 2), and (m + 4, n + 2).
- the plurality of second pixels 202-2 (corresponding to the pixels "B" in FIG. 4) include two second pixels 202-2 that are arranged successively in the second direction intersecting the first direction and are adjacent to two of the three first pixels 202-1 described above.
- for example, each of the second pixels 202-2 arranged at the positions (m + 3, n + 1) and (m + 3, n + 3) is adjacent to the two first pixels 202-1 arranged at the positions (m + 2, n + 2) and (m + 4, n + 2).
- the plurality of third pixels 202-3 include two third pixels 202-3 that are arranged successively in a third direction intersecting the first direction and are adjacent to two of the three first pixels 202-1 described above.
- the second direction and the third direction are parallel; they denote the same direction at different locations.
- the second direction is a direction from the position (m + 3, n + 1) to the position (m + 3, n + 3)
- the third direction is a direction from the position (m + 1, n + 1) to the position (m + 1, n + 3).
- at least one of the two first pixels 202-1 adjacent to the two third pixels 202-3 differs from the two first pixels 202-1 adjacent to the two second pixels 202-2.
- for example, each of the two third pixels 202-3 arranged at the positions (m + 1, n + 1) and (m + 1, n + 3) is adjacent to the two first pixels 202-1 arranged at the positions (m, n + 2) and (m + 2, n + 2).
- the signal processing unit 210 adds the pixel signals of two first pixels 202-1 adjacent in the row direction to generate a converted pixel signal of a first conversion pixel 203-1 virtually arranged between the two first pixels 202-1.
- two first pixels 202-1 to which pixel signals are added are connected by a double arrow.
- the signal processing unit 210 groups the first pixels 202-1 in each row so as to form a pair of two adjacent first pixels 202-1.
- the signal processing unit 210 adds the pixel signals of the two first pixels 202-1 paired to generate a converted pixel signal of the first converted pixel 203-1.
- the first pixels 202-1 in each row are grouped so that the positions in the row direction of the first conversion pixels 203-1 are alternately different for each row of the first pixels 202-1.
- in some rows, the first pixels 202-1 at the column positions (m, m + 2), (m + 4, m + 6), (m + 8, m + 10), ... are grouped into pairs.
- in the other rows, the first pixels 202-1 at the column positions (m + 2, m + 4), (m + 6, m + 8), (m + 10, m + 12), ... are grouped into pairs.
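The staggered pairing described above can be sketched as follows, numbering columns with m = 0 for concreteness (the function and parameter names are hypothetical):

```python
def pair_first_pixels(cols, signals, stagger):
    """Pair adjacent first (green) pixels in one row and place each
    converted pixel at the column midway between the pair. `cols` are the
    column numbers of the first pixels in the row (spaced two apart);
    `stagger` alternates per row so that converted pixels in adjacent
    green rows do not share column positions."""
    start = 1 if stagger else 0
    converted = []
    for i in range(start, len(cols) - 1, 2):
        mid_col = (cols[i] + cols[i + 1]) // 2
        converted.append((mid_col, signals[i] + signals[i + 1]))
    return converted
```

With columns 0, 2, 4, 6, 8, 10 this yields converted pixels at columns 1, 5, 9 in unstaggered rows and at columns 3, 7 in staggered rows, matching the m+1, m+5, m+9 versus m+3, m+7, m+11 pattern in the text.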
- FIG. 5 is a diagram showing an example of the arrangement of the first conversion pixels 203-1.
- the first conversion pixels 203-1 are arranged as shown in FIG. 5 by the conversion processing described in FIG. 4. That is, the positions of the first conversion pixels 203-1 in the row direction alternate for each row of the first conversion pixels 203-1.
- in some rows, the first conversion pixels 203-1 are arranged at the column positions m + 1, m + 5, and m + 9; in the adjacent rows, at the column positions m + 3, m + 7, and m + 11.
- FIG. 6 is a diagram illustrating an example of array conversion processing in the signal processing unit 210.
- the second pixel 202-2 and the third pixel 202-3 in this example are arranged in columns in which k is an odd number.
- the second pixels 202-2 are arranged in columns of m + 3, m + 7, m + 11,.
- the third pixels 202-3 are arranged in columns of m + 1, m + 5, m + 9,.
- the signal processing unit 210 adds the pixel signals of two second pixels 202-2 adjacent in the column direction to generate a converted pixel signal of a second conversion pixel 203-2 virtually arranged between them. Similarly, the signal processing unit 210 adds the pixel signals of two third pixels 202-3 adjacent in the column direction to generate a converted pixel signal of a third conversion pixel 203-3 virtually arranged between them. In FIG. 6, the two pixels 202 whose pixel signals are added are connected by a double arrow.
- the pairs of second pixels 202-2 and the pairs of third pixels 202-3 whose pixel signals are added are selected so that the double arrows connecting them do not overlap the double arrows connecting the two first pixels 202-1 described in FIG. 4. That is, the pairs are selected so that the positions of the first conversion pixel 203-1, the second conversion pixel 203-2, and the third conversion pixel 203-3 do not overlap.
- in this example, the second pixels 202-2 at the row positions (n + 3, n + 5), (n + 7, n + 9), (n + 11, n + 13) are grouped into pairs.
- the third pixels 202-3 at the row positions (n + 1, n + 3), (n + 5, n + 7), (n + 9, n + 11) are grouped into pairs.
- FIG. 7 is a diagram showing an arrangement example of the second conversion pixel 203-2 and the third conversion pixel 203-3.
- the second conversion pixel 203-2 and the third conversion pixel 203-3 are arranged as shown in FIG. Specifically, in the m + 3, m + 7, and m + 11 columns, the second conversion pixel 203-2 is arranged at the row positions of n + 4, n + 8, and n + 12.
- in the m + 1, m + 5, and m + 9 columns, the third conversion pixels 203-3 are arranged at the row positions n + 2, n + 6, and n + 10.
- FIG. 8 is a diagram showing an arrangement example of the first conversion pixel 203-1, the second conversion pixel 203-2, and the third conversion pixel 203-3.
- the array illustrated in FIG. 8 is the overlap of the arrays of conversion pixels 203 illustrated in FIGS. 5 and 7. Through the processing described in FIGS. 4 to 7, the signal processing unit 210 can acquire Bayer array image data as shown in FIG. 8.
- the image plane phase difference detection pixels can be continuously arranged in the row direction and the column direction, so that the detection accuracy of the image plane phase difference can be improved. Then, Bayer array image data can be acquired by a simple calculation of adding pixel signals of adjacent pixels 202. Further, since light shielding is not used for image plane phase difference detection, the light utilization efficiency can be improved.
- FIG. 9 is a diagram illustrating another example of the light receiving unit 200.
- some of the first pixels 202-1, some of the second pixels 202-2, and some of the third pixels 202-3 each have two light receiving regions 214.
- the first pixel 202-1 having the two light receiving regions 214 is continuously arranged in the row direction.
- the second pixel 202-2 having the two light receiving regions 214 is continuously arranged in the column direction.
- the third pixel 202-3 having the two light receiving regions 214 is continuously arranged in the column direction.
- Other configurations are the same as those of the light receiving unit 200 described with reference to FIGS.
- the pixels for detecting the image plane phase difference can be continuously arranged in the row direction and the column direction, so that the detection accuracy of the image plane phase difference can be improved. Then, Bayer array image data can be acquired simply by adding pixel signals of adjacent pixels 202. Further, since light shielding is not used for image plane phase difference detection, the light utilization efficiency can be improved.
- FIG. 10A to FIG. 10D are diagrams for explaining other processing examples of the signal processing unit 210.
- the signal processing unit 210 of this example generates first to fourth converted pixel signals whose positions in the row direction are shifted as converted pixel signals for the first pixel 202-1.
- FIG. 10A is a diagram illustrating an example of generating the first converted pixel signal G1.
- the processing in this example is the same as the processing described in FIG. 4. That is, the signal processing unit 210 adds, for each first pixel 202-1, the output signals of the first light receiving region 214a and the second light receiving region 214b of that pixel to generate the first pixel signal S1. Then, the first converted pixel signal G1 is generated by adding the first pixel signals S1 of two adjacent first pixels 202-1.
- the first converted pixel signal G1 is a virtual converted pixel signal at the column positions m + 1, m + 5, ....
- FIG. 10B is a diagram illustrating an example of generating the second conversion pixel signal G2.
- the second conversion pixel signal G2 is a signal of a conversion pixel at a position different from the first conversion pixel signal G1.
- for each first pixel 202-1, the output signal of the first light receiving region 214a of that pixel and the output signal of the second light receiving region 214b of the adjacent first pixel 202-1 are added to generate the second pixel signal S2.
- the signal processing unit 210 adds the adjacent second pixel signals S2 to generate the second converted pixel signal G2.
- the second converted pixel signal G2 is a virtual converted pixel signal at the column positions m + 2, m + 6, ....
- FIG. 10C is a diagram illustrating an example of generating the third conversion pixel signal G3.
- the third conversion pixel signal G3 is a signal of a conversion pixel at a position different from the first conversion pixel signal G1 and the second conversion pixel signal G2.
- the third pixel signal S3 is generated by the same processing as the first pixel signal S1.
- the signal processing unit 210 adds the adjacent third pixel signals S3 to generate a third converted pixel signal G3.
- the third converted pixel signal G3 is a virtual converted pixel signal at the column positions m + 3, m + 7, ....
- FIG. 10D is a diagram illustrating an example of generating the fourth conversion pixel signal G4.
- the fourth conversion pixel signal G4 is a signal of a conversion pixel at a position different from the first conversion pixel signal G1, the second conversion pixel signal G2, and the third conversion pixel signal G3.
- the fourth pixel signal S4 is generated by the same processing as the second pixel signal S2.
- the signal processing unit 210 adds the adjacent fourth pixel signals S4 to generate a fourth converted pixel signal G4.
- the fourth converted pixel signal G4 is a virtual converted pixel signal at the column positions m, m + 4, ....
- the signal processing unit 210 can generate a plurality of types of converted pixel signals G1 to G4 having different positions.
- the signal processing unit 210 may use the plurality of types of converted pixel signals as image data of one frame, or as image data of different frames. That is, images based on the plurality of types of converted pixel signals may be displayed substantially simultaneously, or may be displayed at different frame timings.
- the signal processing unit 210 may generate the above-described plurality of types of converted pixel signals from pixel signals captured at substantially the same time, or from pixel signals acquired at different imaging timings. Such processing can improve the spatial resolution of the image data.
- the first pixel 202-1 has been described as an example, but a plurality of types of converted pixel signals can be generated by the same processing for the second pixels 202-2 and the third pixels 202-3.
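The generation of the four signal families in FIGS. 10A to 10D might be sketched as follows for a single row of first pixels (which neighbor contributes to the straddling sum S2, and the even/odd pairing offsets, are assumptions made for illustration):

```python
def converted_signal_families(a, b):
    """a[i] and b[i] are the outputs of the first and second light
    receiving regions of the i-th first pixel along a row. S1 sums the
    two regions within one pixel; S2 sums regions straddling two adjacent
    pixels, shifting the effective position by one region. Pairing the
    S-signals at even or odd offsets then yields four families of
    converted pixel signals at mutually shifted positions."""
    s1 = [a[i] + b[i] for i in range(len(a))]            # within-pixel sums
    s2 = [a[i] + b[i + 1] for i in range(len(a) - 1)]    # straddling sums
    g1 = [s1[i] + s1[i + 1] for i in range(0, len(s1) - 1, 2)]
    g2 = [s2[i] + s2[i + 1] for i in range(0, len(s2) - 1, 2)]
    g3 = [s1[i] + s1[i + 1] for i in range(1, len(s1) - 1, 2)]
    g4 = [s2[i] + s2[i + 1] for i in range(1, len(s2) - 1, 2)]
    return g1, g2, g3, g4
```

Each family samples the same row at positions offset by one light receiving region, which is how the scheme raises spatial resolution without extra pixels.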
- FIG. 11A and FIG. 11B are diagrams showing a structure example of the microlens 101.
- FIG. 11A is a perspective view of the microlens 101, in which curved grid lines indicate curved surfaces and straight grid lines indicate planes.
- FIG. 11B is a diagram illustrating a planar shape of the microlens 101.
- the microlens 101 has a shape obtained by cutting off four sides of a spherical lens. This allows a spherical lens with a large diameter to be used, increasing the effective aperture of the microlens 101. Further, by matching the positions of the four sides of the microlens 101 with the positions of the four sides of the pixel 202, the microlenses 101 can be tiled efficiently.
- FIG. 12 is a diagram illustrating another processing example of the signal processing unit 210.
- the signal processing unit 210 of this example selects, in units of rows, the pixels 202 from which the output signals of the light receiving regions 214 are read out.
- the signal processing unit 210 simultaneously reads output signals from the pixels 202 belonging to the selected row. In this case, the read timing of the output signal is different for each row, and the charge accumulation time is different for each row.
- the signal processing unit 210 of this example compensates for the difference in the charge accumulation time by correcting the output signal of the first light receiving region 214a using the output signal of the second light receiving region 214b of each pixel 202.
- all the pixels 202 have two light receiving regions 214.
- the charge accumulation time of the first light receiving region 214a of the pixels 202 belonging to the first row is indicated by a1
- the charge accumulation time of the second light receiving region 214b is indicated by b1.
- the charge accumulation time of the first light receiving region 214a of the pixels 202 belonging to the second row is denoted by a2
- the charge accumulation time of the second light receiving region 214b is denoted by b2.
- ADC in FIG. 12 indicates the time required for analog-to-digital conversion of the output signal of each light receiving region 214.
- the signal processing unit 210 delays the reset timing B of the second light receiving region 214b for each pixel 202 with respect to the reset timing A for resetting the charge accumulated in the first light receiving region 214a. For this reason, the light receiving unit 200 has a reset line that independently controls the reset timing of the first light receiving region 214a and the second light receiving region 214b of each pixel 202.
- the reset timing A and the reset timing B are common to all the pixels 202.
- the signal processing unit 210 simultaneously reads out an output signal corresponding to the charge amount accumulated in the first light receiving region 214a and the second light receiving region 214b for each pixel 202. For this reason, the light receiving unit 200 has a readout line that transmits the output signals of the first light receiving region 214a and the second light receiving region 214b of each pixel 202 in parallel.
- the signal processing unit 210 includes a processing circuit that processes output signals from the first light receiving region 214a and the second light receiving region 214b of each pixel 202 in parallel.
- the signal processing unit 210 subtracts the value of the output signal of the second light receiving region 214b from the value of the output signal of the first light receiving region 214a for each pixel 202 to generate the pixel signal of that pixel 202. In this way, for all the pixels 202, a pixel signal corresponding to the common charge accumulation interval from the reset timing A to the reset timing B can be generated. By such processing, pixel signals equivalent to those of a global shutter can be generated in a pseudo manner from output signals read by rolling readout.
- the signal processing unit 210 also functions as a global shutter processing unit that performs the processing described with reference to FIG. 12.
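The pseudo-global-shutter subtraction above can be sketched as follows. This is an illustrative model under assumed timings, not the patent's implementation: it treats each region's output as proportional to its accumulation time, so the per-pixel difference depends only on the common interval from reset A to reset B, regardless of the row-dependent readout time.

```python
def pseudo_global_shutter(out_a, out_b):
    """Per pixel, subtract the second region's output (accumulating from the
    delayed reset B until readout) from the first region's output
    (accumulating from reset A until readout); the difference corresponds
    to the interval A..B, which is common to every row."""
    return [a - b for a, b in zip(out_a, out_b)]

# Assumed constant flux and rolling (row-dependent) readout times.
flux = 3.0
reset_a, reset_b = 0.0, 2.0
readout_times = [5.0, 6.0]  # rows 1 and 2 are read out at different times
out_a = [flux * (t - reset_a) for t in readout_times]  # [15.0, 18.0]
out_b = [flux * (t - reset_b) for t in readout_times]  # [9.0, 12.0]
signals = pseudo_global_shutter(out_a, out_b)          # [6.0, 6.0] for both rows
```

Note how the two rows yield identical signals despite being read at different times: the row-dependent portion of the accumulation cancels in the subtraction, which is the point of the scheme.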
- FIG. 13 is a diagram illustrating a configuration example of the light receiving unit 200. Although only the configuration related to one pixel 202 is shown in FIG. 13, the light receiving unit 200 has the same configuration for all the pixels 202. As described above, the light receiving unit 200 includes the reset line 221-1 that controls the reset timing of the first light receiving region 214a and the reset line 221-2 that controls the reset timing of the second light receiving region 214b. The reset line 221-1 and the reset line 221-2 are provided for each row of the pixels 202. Pixels 202 included in the same row are connected to a common reset line 221-1 and reset line 221-2.
- the light receiving unit 200 includes a read line 224-1 for reading the output signal of the first light receiving region 214a and a read line 224-2 for reading the output signal of the second light receiving region 214b.
- the readout line 224-1 and the readout line 224-2 are provided for each column of the pixels 202. Pixels 202 included in the same column are connected to a common readout line 224-1 and readout line 224-2.
- the readout line 224 transmits each output signal to the signal processing unit 210.
- the signal processing unit 210 selects a row from which an output signal is read based on the row selection signal SEL. Further, the signal processing unit 210 selects the light receiving region 214 to which the output signal is to be transferred, based on the transfer signals Tx1 and Tx2.
- for each pixel 202, the signal processing unit 210 functions as a readout unit that reads out output signals corresponding to the amounts of charge accumulated in the first light receiving region 214a and the second light receiving region 214b simultaneously, and independently for each light receiving region. Further, the signal processing unit 210 can generate pixel signals equivalent to those of a global shutter in a pseudo manner from output signals read by rolling readout. Note that the signal processing unit 210 may perform the array conversion process described in FIGS. 1 to 10D using the pixel signals obtained by the processing described in FIGS. 11A, 11B, and 12.
- instead of adding the output signals of the first light receiving region 214a and the second light receiving region 214b, the signal processing unit 210 may generate a pixel signal by subtracting the output signal of the second light receiving region 214b from the output signal of the first light receiving region 214a.
- FIG. 14 is a diagram illustrating another configuration example of the light receiving unit 200.
- the global shutter process described in FIGS. 12 and 13 is not performed.
- each light receiving region 214 is a photodiode.
- the reset transistor R, the source follower transistor SF, and the selection transistor S are provided in common for the four photodiodes.
- a reset transistor R and the like are provided in common for four photodiodes included in the region 240.
- a transfer transistor TX is provided for each photodiode.
- the four photodiodes are included in different pixels 202, respectively.
- four photodiodes sharing the reset transistor R and the like are included in two first pixels 202-1 and two second pixels 202-2.
- the transfer transistor TX switches whether to transfer the charge accumulated in the photodiode to the charge detection unit.
- the charge detection unit is, for example, a capacitor connected between the wiring and the reference potential (not shown).
- the charge detection unit is also shared by the four photodiodes.
- the reset transistor R switches whether to reset the charge transferred to the charge detection unit.
- the source follower transistor SF outputs an output signal corresponding to the charge accumulated in the charge detection unit.
- the selection transistor S switches whether to output an output signal to the readout line 224.
- FIG. 15 is a diagram illustrating an arrangement example of the transfer transistor TX and the charge detection unit in the example illustrated in FIG.
- the pixel 202 and the transistor are provided in different layers. Therefore, the pixel 202 and the transistor can be stacked.
- the charge detection unit, the reset transistor R, and the like are shared by the four photodiodes PD.
- Each photodiode PD is provided with a transfer transistor TX.
- the gate electrode of the transfer transistor TX is indicated by a hatched portion.
- the four photodiodes are included in the two first pixels 202-1 and in either the two second pixels 202-2 or the two third pixels 202-3. Since the direction in which the pixel is divided differs between the first pixel 202-1 and the second pixel 202-2 or third pixel 202-3, a region surrounded by the four transfer transistors TX is formed. This region functions as the charge detection unit. Although the reset transistor R and the like are omitted in FIG. 15, they are also shared by the four photodiodes, as shown in FIG. 14.
- FIG. 16 is a diagram illustrating an example of a cross section of the image sensor 100.
- the back-illuminated image sensor 100 is shown, but the image sensor 100 is not limited to the back-illuminated image sensor.
- the image sensor 100 of this example includes an imaging chip 113 that outputs a signal corresponding to incident light, a signal processing chip 111 that processes the signal from the imaging chip 113, and a memory chip 112 that stores image data processed by the signal processing chip 111.
- the imaging chip 113, the signal processing chip 111, and the memory chip 112 are stacked, and are electrically connected to each other by a conductive bump 109 such as Cu.
- incident light is incident mainly in the direction indicated by the white arrow.
- in the imaging chip 113, the surface on the side where incident light is incident is referred to as the back surface.
- An example of the imaging chip 113 is a back-illuminated MOS image sensor.
- the imaging chip 113 corresponds to the light receiving unit 200.
- the PD (photodiode) layer 106 is disposed on the back side of the wiring layer 108.
- the PD layer 106 includes a plurality of PD sections 104 that are two-dimensionally arranged and accumulate charges corresponding to incident light, and transistors 105 that are provided corresponding to the PD sections 104.
- One PD unit 104 is provided for one pixel 202. That is, the PD unit 104 has a first light receiving region 214a and a second light receiving region 214b.
- a color filter 102 is provided on the incident light incident side of the PD layer 106 via a passivation film 103.
- the color filter 102 has a plurality of types that transmit different wavelength regions, and has a specific arrangement corresponding to each of the PD units 104.
- a set of the color filter 102, the PD unit 104, and the plurality of transistors 105 forms one pixel. By controlling on / off of the plurality of transistors 105, the reading timing of each light receiving region 214, the light receiving start timing (reset timing), and the like are controlled.
- a microlens 101 is provided on the incident light incident side of the color filter 102 corresponding to each pixel.
- the microlens 101 condenses incident light toward the corresponding PD unit 104.
- the wiring layer 108 includes a wiring 107 that transmits a signal from the PD layer 106 to the signal processing chip 111.
- the wiring 107 corresponds to, for example, the readout line 224 illustrated in FIG.
- the gate electrodes of the transistors illustrated in FIGS. 13 and 14 may be formed in the wiring layer 108.
- alternatively, the transistors illustrated in FIGS. 13 and 14 may be formed in the signal processing chip 111. In this case, the wiring 107 corresponds to a wiring that connects the PD layer 106 and each transistor.
- the wiring 107 may be multilayer, and a passive element and an active element may be provided.
- the signal processing chip 111 in this example includes a signal processing unit 210.
- a plurality of bumps 109 are arranged on the surface of the wiring layer 108.
- the plurality of bumps 109 are aligned with the plurality of bumps 109 provided on the opposing surface of the signal processing chip 111, and by pressurizing the imaging chip 113 and the signal processing chip 111, the aligned bumps 109 are joined and electrically connected.
- a plurality of bumps 109 are arranged on the mutually facing surfaces of the signal processing chip 111 and the memory chip 112.
- the bumps 109 are aligned with each other, and the signal processing chip 111 and the memory chip 112 are pressurized, so that the aligned bumps 109 are joined and electrically connected.
- the bonding between the bumps 109 is not limited to Cu bump bonding by solid phase diffusion, and micro bump bonding by solder melting may be employed. Further, about one bump 109 may be provided for one unit block described later. Therefore, the size of the bump 109 may be larger than the pitch of the PD unit 104. Further, a bump larger than the bump 109 corresponding to the imaging region may be provided in a peripheral region other than the imaging region where the pixels are arranged.
- the signal processing chip 111 has TSVs (through-silicon vias) 110 that connect circuits provided on the front and back surfaces to each other.
- the TSV 110 is preferably provided in the peripheral area.
- the TSV 110 may also be provided in the peripheral area of the imaging chip 113 and the memory chip 112.
- FIG. 17 is a block diagram showing a part of the function of the signal processing unit 210.
- the signal processing unit 210 of this example includes a correction unit 260 and a lookup table 270. As described with reference to FIGS. 1 to 16, the signal processing unit 210 adds or subtracts the output signals of the two light receiving regions 214 in each pixel 202. However, the output signals of the two light receiving regions 214 may vary depending on the characteristics of the lens through which the light incident on the image sensor has passed.
- the ratio between the output value of the first light receiving region 214a and the output value of the second light receiving region 214b in each pixel 202 varies.
- the EPD (exit pupil distance) value indicates the distance from the image plane (the surface of the image sensor 100) to the exit pupil of the lens.
- the F value is a value obtained by dividing the focal length of the lens by the effective aperture.
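As a worked example of that definition, in illustrative Python (the lens values are assumed, not from the patent):

```python
def f_number(focal_length_mm, effective_aperture_mm):
    """F value = focal length of the lens / effective aperture diameter."""
    return focal_length_mm / effective_aperture_mm

# A 50 mm lens with a 25 mm effective aperture is an f/2.0 lens.
print(f_number(50.0, 25.0))  # 2.0
```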
- the lookup table 270 stores a table in which correction values for correcting the output values of the respective light receiving regions 214 are associated with lens characteristic values such as EPD values and F values. A table of lens characteristic values and correction values may be set for each position of the pixel 202.
- the correction unit 260 receives the lens data of the lens through which the light incident on the imaging element has passed from the imaging device, and receives the output signal from the light receiving unit 200.
- the imaging device may detect the lens characteristics from the identification information of the lens unit being used.
- the imaging device may detect the lens characteristics based on an operation of the imaging device by a user or the like.
- the correction unit 260 further receives information indicating the position of the pixel 202 of the output signal. The position information may be generated by the signal processing unit 210 based on the row selection signal SEL or the like.
- the correction unit 260 extracts a correction value corresponding to the lens data from the lookup table 270.
- the correction value may be different for each light receiving region 214.
- the correction unit 260 generates a correction signal by correcting the output signals of the two light receiving regions 214 using the extracted correction value.
- the signal processing unit 210 generates a pixel signal using the correction signal.
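The table-driven correction can be sketched as follows. The table keys, gain values, and function names here are hypothetical; the patent only specifies that correction values are associated with lens characteristic values such as the EPD value and the F value (and possibly the pixel position).

```python
# Hypothetical lookup table: (EPD value, F value) -> per-region gains.
LOOKUP_TABLE = {
    (100.0, 2.8): (0.75, 1.25),  # gain for region 214a, gain for region 214b
    (100.0, 5.6): (0.80, 1.20),
}

def correct_outputs(out_a, out_b, epd_value, f_value):
    """Scale the outputs of the two light receiving regions by the gains
    stored for the given lens characteristics."""
    gain_a, gain_b = LOOKUP_TABLE[(epd_value, f_value)]
    return out_a * gain_a, out_b * gain_b

# A pixel whose two regions read unevenly because of the lens is rebalanced
# before the pixel signal is formed from the corrected pair.
corrected = correct_outputs(12.0, 8.0, 100.0, 2.8)  # (9.0, 10.0)
```

In a fuller sketch the table would also be indexed by pixel position, as the text notes, since the imbalance between the two regions grows with distance from the optical axis.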
- FIG. 18 is a diagram for explaining the relationship between lens characteristics and output signals.
- the horizontal axis indicates the distance of the pixel 202 to the optical axis
- the vertical axis indicates the magnitude of the output signal of the light receiving region 214 in each pixel 202.
- the output signals of the two light receiving regions 214 are indicated by solid lines and dotted lines.
- the microlens 101 in the image sensor 100 is arranged to be shifted with respect to the pixel 202 in accordance with the position of the pixel 202 with respect to the optical axis.
- for a lens having a specific EPD value, the light spot falls at the center of the pixel 202 regardless of the position of the pixel 202.
- the EPD value at which the light spot is at the center of the pixel 202 regardless of the position of the pixel 202 is referred to as EPD just.
- when the EPD value of the lens deviates from EPD just, the light spot shifts from the center of the pixel 202 depending on the position of the pixel 202. Since the pixel 202 is divided into two light receiving regions 214 along its center line, if the light spot deviates from the center of the pixel 202, a difference arises between the magnitudes of the output signals of the two light receiving regions 214. For example, at a position away from the optical axis, most of the light spot falls in one light receiving region 214, so that its output signal becomes very large, whereas the output signal of the other light receiving region 214 becomes very small.
- depending on the F value, the spot diameter of the light on the image plane changes. For example, when the F value is small, the spot diameter is large, and the difference between the magnitudes of the output signals of the two light receiving regions 214 becomes small. On the other hand, at a position away from the optical axis, the light spot protrudes outside the region of the pixel 202, and the magnitude of the output signal of the pixel 202 as a whole decreases.
- the magnitudes of the output signals of the two light receiving regions 214 vary according to lens characteristics such as EPD value and F value.
- therefore, a table in which lens characteristic values are associated with correction values for correcting this variation is provided in advance.
- the table can be created by changing the lens characteristics and actually detecting the output signal. With such a configuration, a pixel signal can be generated with higher accuracy.
- FIG. 19 is a block diagram illustrating a configuration example of the imaging apparatus 500 according to an embodiment.
- the imaging apparatus 500 includes a photographic lens 520 as a photographic optical system, and the photographic lens 520 guides a subject luminous flux incident along the optical axis OA to the imaging element 100.
- the photographing lens 520 may be an interchangeable lens that can be attached to and detached from the imaging apparatus 500.
- the imaging apparatus 500 mainly includes an imaging device 100, a system control unit 501, a drive unit 502, a photometry unit 503, a work memory 504, a recording unit 505, a display unit 506, and a drive unit 514.
- the photographing lens 520 is composed of a plurality of optical lens groups, and forms an image of a subject light flux from the scene in the vicinity of its focal plane.
- in FIG. 19, the photographing lens 520 is representatively shown as a single virtual lens disposed in the vicinity of its pupil.
- the driving unit 514 drives the photographing lens 520. More specifically, the drive unit 514 moves the optical lens group of the photographing lens 520 to change the focus position, and also drives the iris diaphragm in the photographing lens 520 to control the amount of subject light flux incident on the image sensor 100.
- the driving unit 502 is a control circuit that executes charge accumulation control such as timing control and region control of the image sensor 100 in accordance with an instruction from the system control unit 501.
- the driving unit 502 operates the light receiving unit 200 and the signal processing unit 210 of the image sensor 100 as described with reference to FIGS. Further, the operation unit 508 receives an instruction from the photographer through a release button or the like.
- the image sensor 100 is the same as the image sensor 100 described with reference to FIGS.
- the image sensor 100 delivers the pixel signal to the image processing unit 511 of the system control unit 501.
- the image processing unit 511 performs various image processing using the work memory 504 as a work space, and generates image data. For example, when generating image data in JPEG file format, a compression process is executed after generating a color video signal from a signal obtained by the Bayer array.
- the image processing unit 511 may include a signal processing unit 210. In this case, the image sensor 100 may not have the signal processing unit 210.
- the generated image data is recorded in the recording unit 505, converted into a display signal, and displayed on the display unit 506 for a preset time.
- the photometric unit 503 detects the luminance distribution of the scene prior to a series of shooting sequences for generating image data.
- the photometry unit 503 includes, for example, an AE sensor having about 1 million pixels.
- the calculation unit 512 of the system control unit 501 receives the output of the photometry unit 503 and calculates the luminance for each area of the scene.
- the calculation unit 512 determines the shutter speed, aperture value, and ISO sensitivity according to the calculated luminance distribution.
- the image sensor 100 may also serve as the photometry unit 503.
- the arithmetic unit 512 also executes various arithmetic operations for operating the imaging device 500.
- a part or all of the drive unit 502 may be mounted on the signal processing chip 111 of the image sensor 100.
- a part of the system control unit 501 may be mounted on the signal processing chip 111 of the image sensor 100.
Abstract
Description
Patent Document 1: Japanese Patent Application Publication No. 2011-77770 (JP 2011-77770 A)
Claims (18)
- 1. An image sensor comprising: two first pixels arranged consecutively in a first direction, which detect light of a first color; two second pixels arranged consecutively in a second direction intersecting the first direction, adjacent to the two first pixels, which detect light of a second color; a plurality of first light receiving regions disposed in the first pixels and divided in the first direction, which receive the light of the first color; and a plurality of second light receiving regions disposed in the second pixels and divided in the second direction, which receive the light of the second color.
- 2. The image sensor according to claim 1, wherein the first direction and the second direction are orthogonal to each other.
- 3. The image sensor according to claim 1 or 2, further comprising a focus detection unit that detects a focus state from an output signal from the first pixel and an output signal from the second pixel.
- 4. The image sensor according to any one of claims 1 to 3, comprising: an imaging unit in which the first pixels and the second pixels are arranged; and a signal processing unit stacked with the imaging unit, which processes signals from the imaging unit.
- 5. The image sensor according to any one of claims 1 to 4, comprising a plurality of the first pixels, and further comprising: two third pixels arranged consecutively in a third direction intersecting the first direction, adjacent to two first pixels among the plurality of first pixels, which detect light of a third color; and a plurality of third light receiving regions disposed in the third pixels and divided in the third direction, which receive the light of the third color.
- 6. The image sensor according to claim 5, wherein the second direction and the third direction are parallel to each other.
- 7. An image sensor comprising: a plurality of first pixels arrayed along a first direction and a second direction, which correspond to a first color; and a plurality of other pixels each provided in a region surrounded by four neighboring first pixels, which correspond to colors different from the first color, wherein at least some of the plurality of first pixels and the plurality of other pixels have two separated light receiving regions.
- 8. The image sensor according to claim 7, further comprising a focus detection unit that detects a focus state of the image sensor based on output signals from the respective light receiving regions of the pixels having the two light receiving regions.
- 9. The image sensor according to claim 7 or 8, wherein the plurality of other pixels include: a plurality of second pixels arrayed along the second direction, which correspond to a second color; and a plurality of third pixels arrayed along the second direction, which correspond to a third color; columns of the second pixels and columns of the third pixels are arranged alternately in the first direction; and the image sensor further comprises an array conversion unit that adds pixel signals of two first pixels adjacent in the first direction to generate a first converted pixel signal, adds pixel signals of two second pixels adjacent in the second direction to generate a second converted pixel signal, and adds pixel signals of two third pixels adjacent in the second direction to generate a third converted pixel signal.
- 10. The image sensor according to claim 9, wherein at least some of the first pixels have a first light receiving region and a second light receiving region arrayed side by side in the first direction, and at least some of the second pixels and the third pixels have a first light receiving region and a second light receiving region arrayed side by side in the second direction.
- 11. The image sensor according to claim 10, wherein all the pixels have the two light receiving regions.
- 12. The image sensor according to claim 11, wherein, for each of the pixels, the array conversion unit generates a first pixel signal by adding the output signal of the first light receiving region of the pixel and the output signal of the second light receiving region of the pixel, and generates a second pixel signal by adding the output signal of the first light receiving region of the pixel and the output signal of the second light receiving region of a pixel adjacent to the first light receiving region of the pixel.
- 13. The image sensor according to any one of claims 7 to 12, further comprising a global shutter processing unit that, for each pixel, delays the reset timing of the second light receiving region of the two light receiving regions relative to the reset timing at which the charge accumulated in the first light receiving region is reset, simultaneously reads out output signals corresponding to the amounts of charge accumulated in the first light receiving region and the second light receiving region, and subtracts the value of the output signal of the second light receiving region from the value of the output signal of the first light receiving region to generate a pixel signal of the pixel.
- 14. The image sensor according to any one of claims 7 to 12, further comprising a readout unit that, for each pixel, reads out output signals corresponding to the amounts of charge accumulated in the two light receiving regions simultaneously, and independently for each light receiving region.
- 15. The image sensor according to any one of claims 7 to 14, wherein the planar shape of each of the pixels is a quadrangle, and each side of the pixel is inclined at 45 degrees with respect to the first direction and the second direction.
- 16. The image sensor according to any one of claims 7 to 15, further comprising a correction unit that corrects the values of the output signals output from the two light receiving regions based on lens data indicating characteristics of a lens through which light incident on the image sensor has passed.
- 17. The image sensor according to any one of claims 7 to 16, comprising: an imaging chip in which each of the pixels is formed; and a signal processing chip stacked with the imaging chip, which processes signals from the imaging chip.
- 18. An imaging device comprising the image sensor according to any one of claims 1 to 17.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010314167.0A CN111479066B (zh) | 2013-09-26 | 2014-09-24 | Imaging element and imaging device |
EP14848227.6A EP3051811A4 (en) | 2013-09-26 | 2014-09-24 | Image pickup element and image pickup device |
CN201480060477.7A CN105684436B (zh) | 2013-09-26 | 2014-09-24 | 摄像元件以及摄像装置 |
JP2015538908A JP6561836B2 (ja) | 2013-09-26 | 2014-09-24 | Imaging element and imaging device |
US15/080,180 US20160286104A1 (en) | 2013-09-26 | 2016-03-24 | Imaging element and imaging device |
US17/370,353 US20210335876A1 (en) | 2013-09-26 | 2021-07-08 | Imaging element and imaging device having pixels each with multiple photoelectric converters |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013199712 | 2013-09-26 | ||
JP2013-199712 | 2013-09-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/080,180 Continuation US20160286104A1 (en) | 2013-09-26 | 2016-03-24 | Imaging element and imaging device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015045375A1 true WO2015045375A1 (ja) | 2015-04-02 |
Family
ID=52742547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/004885 WO2015045375A1 (ja) | 2013-09-26 | 2014-09-24 | Imaging element and imaging device |
Country Status (5)
Country | Link |
---|---|
US (2) | US20160286104A1 (ja) |
EP (1) | EP3051811A4 (ja) |
JP (4) | JP6561836B2 (ja) |
CN (3) | CN105684436B (ja) |
WO (1) | WO2015045375A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106385573A (zh) * | 2016-09-06 | 2017-02-08 | Nubia Technology Co., Ltd. | Picture processing method and terminal |
WO2017057279A1 (ja) * | 2015-09-30 | 2017-04-06 | Nikon Corporation | Imaging device, image processing device, and display device |
WO2017057280A1 (ja) * | 2015-09-30 | 2017-04-06 | Nikon Corporation | Imaging device and subject detection device |
JP2017103603A (ja) * | 2015-12-01 | 2017-06-08 | Nikon Corporation | Imaging element, imaging device, and imaging method |
JP2020171055A (ja) * | 2020-07-06 | 2020-10-15 | Nikon Corporation | Imaging element |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105684436B (zh) | 2013-09-26 | 2020-05-01 | Nikon Corporation | Imaging element and imaging device |
JP6537838B2 (ja) * | 2015-01-30 | 2019-07-03 | Renesas Electronics Corporation | Imaging element |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001083407A (ja) * | 1999-09-13 | 2001-03-30 | Canon Inc | Imaging device |
WO2006129460A1 (ja) * | 2005-06-03 | 2006-12-07 | Konica Minolta Holdings, Inc. | Imaging device |
WO2008032820A1 (fr) * | 2006-09-14 | 2008-03-20 | Nikon Corporation | Imaging element and imaging device |
JP2009031682A (ja) * | 2007-07-30 | 2009-02-12 | Olympus Corp | Imaging system and image signal processing program |
JP2011077770A (ja) | 2009-09-30 | 2011-04-14 | Fujifilm Corp | Control device for solid-state electronic imaging device and operation control method thereof |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4532800B2 (ja) * | 2001-11-08 | 2010-08-25 | Canon Inc. | Imaging device and system |
JP4349232B2 (ja) * | 2004-07-30 | 2009-10-21 | Sony Corporation | Semiconductor module and MOS solid-state imaging device |
JP4691930B2 (ja) * | 2004-09-10 | 2011-06-01 | Sony Corporation | Physical information acquisition method, physical information acquisition device, semiconductor device for physical quantity distribution detection, program, and imaging module |
JP2006121650A (ja) * | 2004-09-24 | 2006-05-11 | Fuji Photo Film Co., Ltd. | Solid-state imaging device |
JP2007065330A (ja) * | 2005-08-31 | 2007-03-15 | Canon Inc. | Camera |
JP4599258B2 (ja) * | 2005-09-16 | 2010-12-15 | FUJIFILM Corporation | Solid-state imaging element |
JP4808557B2 (ja) * | 2006-07-04 | 2011-11-02 | Hamamatsu Photonics K.K. | Solid-state imaging device |
JP5106092B2 (ja) * | 2007-12-26 | 2012-12-26 | Panasonic Corporation | Solid-state imaging device and camera |
JP5026951B2 (ja) * | 2007-12-26 | 2012-09-19 | Olympus Imaging Corp. | Drive device for imaging element, drive method for imaging element, imaging device, and imaging element |
JP5552214B2 (ja) * | 2008-03-11 | 2014-07-16 | Canon Inc. | Focus detection device |
JP5276374B2 (ja) * | 2008-07-25 | 2013-08-28 | Canon Inc. | Focus detection device |
JP5149143B2 (ja) * | 2008-12-24 | 2013-02-20 | Sharp Corporation | Solid-state imaging element, manufacturing method thereof, and electronic information device |
JP5029624B2 (ja) * | 2009-01-15 | 2012-09-19 | Sony Corporation | Solid-state imaging device and electronic apparatus |
JP5215262B2 (ja) * | 2009-02-03 | 2013-06-19 | Olympus Imaging Corp. | Imaging device |
JP2010219437A (ja) * | 2009-03-18 | 2010-09-30 | Canon Inc. | Solid-state imaging device |
JP5332822B2 (ja) * | 2009-03-31 | 2013-11-06 | Sony Corporation | Solid-state imaging element and imaging device |
JP5517514B2 (ja) * | 2009-07-16 | 2014-06-11 | Canon Inc. | Imaging device and control method thereof |
JP5476832B2 (ja) * | 2009-07-23 | 2014-04-23 | Sony Corporation | Solid-state imaging device and camera |
JP5232118B2 (ja) * | 2009-09-30 | 2013-07-10 | FUJIFILM Corporation | Imaging device and electronic camera |
JP4547462B1 (ja) * | 2009-11-16 | 2010-09-22 | Acutelogic Corporation | Imaging element, drive device for imaging element, drive method for imaging element, image processing device, program, and imaging device |
JP2011176715A (ja) * | 2010-02-25 | 2011-09-08 | Nikon Corporation | Back-illuminated imaging element and imaging device |
JP5644177B2 (ja) * | 2010-05-07 | 2014-12-24 | Sony Corporation | Solid-state imaging device, manufacturing method thereof, and electronic apparatus |
JP5764884B2 (ja) * | 2010-08-16 | 2015-08-19 | Sony Corporation | Imaging element and imaging device |
JP5803095B2 (ja) * | 2010-12-10 | 2015-11-04 | Sony Corporation | Imaging element and imaging device |
JP5664270B2 (ja) * | 2011-01-21 | 2015-02-04 | Sony Corporation | Imaging element and imaging device |
KR20140030183A (ko) * | 2011-04-14 | 2014-03-11 | Nikon Corporation | Image processing device and image processing program |
JP5907668B2 (ja) * | 2011-04-27 | 2016-04-26 | Olympus Corporation | Imaging device and imaging element |
CN102809877B (zh) * | 2011-05-31 | 2016-05-25 | Nikon Corporation | Lens barrel and camera body |
JP2014158062A (ja) * | 2011-06-06 | 2014-08-28 | FUJIFILM Corporation | Imaging element for capturing stereoscopic and planar moving images, and imaging device equipped with the imaging element |
JP5791571B2 (ja) * | 2011-08-02 | 2015-10-07 | Canon Inc. | Imaging element and imaging device |
JP5907595B2 (ja) * | 2011-09-27 | 2016-04-26 | Canon Inc. | Imaging device |
WO2013047160A1 (ja) * | 2011-09-28 | 2013-04-04 | FUJIFILM Corporation | Solid-state imaging element, imaging device, and focus control method |
JP2013143729A (ja) * | 2012-01-12 | 2013-07-22 | Sony Corporation | Imaging element, imaging device, electronic apparatus, and imaging method |
JP2013145779A (ja) * | 2012-01-13 | 2013-07-25 | Sony Corporation | Solid-state imaging device and electronic apparatus |
JP2013157883A (ja) * | 2012-01-31 | 2013-08-15 | Sony Corporation | Solid-state imaging element and camera system |
JP5860168B2 (ja) * | 2012-12-21 | 2016-02-16 | FUJIFILM Corporation | Solid-state imaging device |
CN105684436B (zh) * | 2013-09-26 | 2020-05-01 | Nikon Corporation | Imaging element and imaging device |
- 2014-09-24: CN application CN201480060477.7A filed; granted as CN105684436B (active)
- 2014-09-24: WO application PCT/JP2014/004885 filed (WO2015045375A1)
- 2014-09-24: CN application CN202010314167.0A filed; granted as CN111479066B (active)
- 2014-09-24: EP application EP14848227.6A filed (EP3051811A4); withdrawn
- 2014-09-24: CN application CN202010315657.2A filed (CN111479067A); pending
- 2014-09-24: JP application JP2015538908A filed; granted as JP6561836B2 (active)
- 2016-03-24: US application US15/080,180 filed (US20160286104A1); abandoned
- 2019-07-25: JP application JP2019136632A filed; granted as JP7047822B2 (active)
- 2021-07-08: US application US17/370,353 filed (US20210335876A1); pending
- 2022-03-18: JP application JP2022044741A filed; granted as JP7435648B2 (active)
- 2024-02-06: JP application JP2024016730A filed (JP2024050824A); pending
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112714252A (zh) * | 2015-09-30 | 2021-04-27 | Nikon Corporation | Imaging device |
WO2017057279A1 (ja) * | 2015-09-30 | 2017-04-06 | Nikon Corporation | Imaging device, image processing device, and display device |
WO2017057280A1 (ja) * | 2015-09-30 | 2017-04-06 | Nikon Corporation | Imaging device and subject detection device |
US11716545B2 (en) | 2015-09-30 | 2023-08-01 | Nikon Corporation | Image capturing device, image processing device and display device for setting different exposure conditions |
JPWO2017057279A1 (ja) * | 2015-09-30 | 2018-07-19 | Nikon Corporation | Imaging device, image processing device, and display device |
JPWO2017057280A1 (ja) * | 2015-09-30 | 2018-08-16 | Nikon Corporation | Imaging device and subject detection device |
CN112714252B (zh) * | 2015-09-30 | 2023-04-07 | Nikon Corporation | Imaging device |
US10810716B2 (en) | 2015-09-30 | 2020-10-20 | Nikon Corporation | Image capturing device, image processing device and display device for setting different exposure conditions |
JP2020184766A (ja) * | 2015-09-30 | 2020-11-12 | Nikon Corporation | Imaging device, image processing device, and display device |
JP2017103603A (ja) * | 2015-12-01 | 2017-06-08 | Nikon Corporation | Imaging element, imaging device, and imaging method |
CN106385573A (zh) * | 2016-09-06 | 2017-02-08 | Nubia Technology Co., Ltd. | Picture processing method and terminal |
JP7247975B2 (ja) | 2020-07-06 | 2023-03-29 | Nikon Corporation | Imaging element and imaging device |
JP2020171055A (ja) * | 2020-07-06 | 2020-10-15 | Nikon Corporation | Imaging element |
Also Published As
Publication number | Publication date |
---|---|
CN105684436A (zh) | 2016-06-15 |
US20210335876A1 (en) | 2021-10-28 |
CN105684436B (zh) | 2020-05-01 |
JP7047822B2 (ja) | 2022-04-05 |
EP3051811A1 (en) | 2016-08-03 |
JP6561836B2 (ja) | 2019-08-21 |
JPWO2015045375A1 (ja) | 2017-03-09 |
JP2019205194A (ja) | 2019-11-28 |
JP2022078354A (ja) | 2022-05-24 |
US20160286104A1 (en) | 2016-09-29 |
JP7435648B2 (ja) | 2024-02-21 |
CN111479067A (zh) | 2020-07-31 |
CN111479066B (zh) | 2022-11-18 |
CN111479066A (zh) | 2020-07-31 |
JP2024050824A (ja) | 2024-04-10 |
EP3051811A4 (en) | 2017-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6848941B2 (ja) | Imaging element and imaging device | |
JP7435648B2 (ja) | Imaging element and imaging device | |
JP6343870B2 (ja) | Imaging element and imaging device | |
JP2014179911A (ja) | Imaging device | |
JP6368999B2 (ja) | Imaging element and imaging device | |
WO2015080163A1 (ja) | Electronic apparatus, imaging device, and imaging element | |
JP2017152658A (ja) | Imaging element and imaging device | |
JP6136103B2 (ja) | Imaging device, imaging element, and readout method | |
JP6056572B2 (ja) | Imaging device | |
JP2015041838A (ja) | Imaging element and imaging device | |
JP6680310B2 (ja) | Imaging device | |
JP2017108281A (ja) | Imaging element and imaging device | |
JP6767336B2 (ja) | Imaging element and imaging device | |
JP2014230242A (ja) | Imaging element and imaging device | |
JP6597769B2 (ja) | Imaging element and imaging device | |
JP2018201207A (ja) | Imaging element and imaging device | |
JP6825665B2 (ja) | Imaging element and imaging device | |
JP6610648B2 (ja) | Imaging device | |
JP6268782B2 (ja) | Imaging element and imaging device | |
WO2022050134A1 (ja) | Solid-state imaging device and electronic apparatus | |
JP6767306B2 (ja) | Imaging device | |
JP2023104965A (ja) | Imaging element and imaging device | |
JP2019083550A (ja) | Electronic apparatus | |
JP2014200055A (ja) | Imaging element and imaging device | |
JP2015023380A (ja) | Imaging element and imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14848227; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015538908; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| REEP | Request for entry into the european phase | Ref document number: 2014848227; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 2014848227; Country of ref document: EP |