CN117296311A - Imaging device and control method thereof - Google Patents
- Publication number
- CN117296311A (application CN202280001299.5A)
- Authority
- CN
- China
- Legal status
- Pending
Classifications
- H04N25/60 — Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/618 — Noise processing for random or high-frequency noise
- H04N25/134 — Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements
- H04N23/60 — Control of cameras or camera modules
- H04N23/672 — Focus control based on the phase difference signals
- H04N23/81 — Camera processing pipelines for suppressing or minimising disturbance in the image signal generation
- H04N25/46 — Extracting pixel data from image sensors by combining or binning pixels
- H04N9/12 — Picture reproducers
Abstract
An imaging device (100) includes: a data acquisition unit (20) that acquires first partial combined data (p1) based on a first pixel group formed by binning at least one pixel within a unit pixel group composed of a plurality of grouped pixels; an analysis unit (30) that analyzes the frequency characteristics of the video signal of the region formed by the unit pixel group, based on the cross-correlation between the first partial combined data (p1) and second partial combined data (p2) of a second pixel group formed by the pixels of the unit pixel group other than the first pixel group; and a moire removal unit (40) that removes moire generated in the region formed by the unit pixel group, based on the analysis result.
Description
The present disclosure relates to an imaging device and a control method thereof.
In general, imaging devices such as cameras are required to offer ever-higher performance, such as improved image quality and richer functionality, and the image sensors (such as CMOS sensors) mounted in them have been designed in a variety of ways to meet that demand.
For example, a technique for realizing a High Dynamic Range (HDR) image by processing a plurality of pixels of an image sensor in units of groups is disclosed (for example, refer to patent document 1).
Prior art literature
Patent literature
Patent document 1: U.S. patent application publication No. 2021/0385389.
Disclosure of Invention
Problems to be solved by the invention
However, the image sensor described in patent document 1 does not take into account the moire that can arise in a captured image, and as a result moire may appear in the captured image.
Accordingly, an object of the present disclosure is to provide an imaging device that appropriately removes moire and a control method thereof.
Solution for solving the problem
An imaging device according to an aspect of the present disclosure includes: a data acquisition unit that acquires first partial combination data based on a first pixel group that is configured by combining at least one pixel among a unit pixel group configured by a plurality of pixels that are grouped; an analysis unit that analyzes frequency characteristics of video signals of a region configured by the unit pixel groups based on cross-correlation between the first partial combination data and second partial combination data based on a second pixel group configured by pixels other than the first pixel group in the unit pixel groups; and a moire removing unit for removing moire generated in the region constituted by the unit pixel group based on the analysis result.
In the above aspect, the moire removing unit may remove moire by removing a high frequency component of the video signal by a low pass filter.
In the above aspect, the moire removing unit may remove moire based on an image signal of a region constituted by a unit pixel group in the vicinity of the unit pixel group.
In the above aspect, the data acquisition unit may acquire all the merged data based on all the pixels constituting the unit pixel group, and subtract the first partial merged data from all the merged data, thereby acquiring the second partial merged data.
In the above aspect, each photodiode constituted corresponding to a plurality of pixels may also be connected to a common floating diffusion region.
In the above aspect, the floating diffusion region may be switched between a plurality of charge-voltage conversion gains, the data acquisition unit may acquire first partial combination data at a low conversion gain among the plurality of charge-voltage conversion gains, the analysis unit may analyze frequency characteristics of the image signal of the region constituted by the unit pixel group based on a cross correlation between the first partial combination data and the second partial combination data at the low conversion gain, and the moire removing unit may remove moire generated in the region constituted by the unit pixel group based on an analysis result at the low conversion gain.
In the above aspect, each of the plurality of pixels may further be composed of two or more sub-pixels, and the data acquisition unit may acquire, from among the two or more sub-pixels, first sub-portion combination data based on a first sub-pixel group formed by combining at least one sub-pixel, and second sub-portion combination data based on a second sub-pixel group formed by the sub-pixels other than the first sub-pixel group, the first and second sub-portion combination data being used for phase-difference autofocus.
In the above aspect, each of the plurality of pixels may further include two or more sub-pixels, any one of the plurality of pixels may include a mask pixel that masks at least one or more sub-pixels of the two or more sub-pixels, and the data acquisition unit may acquire sub-portion combination data based on sub-pixels other than the mask pixel among the pixels including the mask pixel, the sub-portion combination data being used for phase difference auto focusing.
An imaging device according to another aspect of the present disclosure includes: a data acquisition unit that acquires first partial combination data based on a first pixel group formed by combining at least one pixel within a unit pixel group composed of a plurality of grouped pixels; an analysis unit that analyzes the frequency characteristics of the video signal of the region formed by the unit pixel group, based on the cross-correlation between the first partial combination data and second partial combination data based on a second pixel group formed by the pixels of the unit pixel group other than the first pixel group; and an image generation unit that restores the high-frequency components in the region formed by the unit pixel group based on the analysis result and generates an image.
A control method of an aspect of the present disclosure is executed by a processor included in a photographing apparatus, and includes: a data acquisition step of acquiring first partial combination data based on a first pixel group constituted by combining at least one pixel among unit pixel groups constituted by a plurality of pixels being grouped; an analysis step of analyzing frequency characteristics of an image signal of a region constituted by the unit pixel groups based on cross-correlation between the first partial combination data and second partial combination data based on a second pixel group constituted by pixels other than the first pixel group in the unit pixel groups; and a moire removing step of removing moire fringes generated in the region constituted by the unit pixel group based on the analysis result.
Effects of the invention
According to the present disclosure, it is possible to provide an imaging device that appropriately removes moire and a control method thereof.
Fig. 1 is a schematic diagram for explaining the configuration of an image sensor 10 of a first embodiment of the present disclosure.
Fig. 2 is a diagram for explaining the combination used in the image sensor 10 of the first embodiment of the present disclosure.
Fig. 3 is a diagram for explaining partial merging used in the image sensor 10 of the first embodiment of the present disclosure.
Fig. 4 is a block diagram for explaining functions and data flows of the photographing apparatus 100 of the first embodiment of the present disclosure.
Fig. 5 is a diagram schematically showing a circuit configuration concerning a signal flow for explaining an example of combination in 4 (2×2) pixels.
Fig. 6 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2×2) pixels shown in fig. 5.
Fig. 7 is a flowchart showing a flow of processing of the control method M100 performed by the photographing apparatus 100 of the first embodiment of the present disclosure.
Fig. 8 is a diagram showing another partial combination (another specific example 1) used in the image sensor 10 of the first embodiment of the present disclosure.
Fig. 9 is a diagram showing another partial combination (another specific example 2) used in the image sensor 10 of the first embodiment of the present disclosure.
Fig. 10 is a diagram showing another partial combination (another specific example 3) used in the image sensor 10 of the first embodiment of the present disclosure.
Fig. 11 is a diagram schematically showing a circuit configuration of a signal flow for explaining an example of an operation of combining the dual conversion gain and the full-pixel imaging plane phase difference AF in 4 (2×2) pixels.
Fig. 12 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2×2) pixels shown in fig. 11.
Fig. 13 is a schematic diagram showing an image sensor provided with dedicated pixels for acquiring signals for phase difference AF.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. The embodiments described below are merely examples for implementing the present disclosure, and are not intended to limit the present disclosure. In order to facilitate understanding of the description, the same reference numerals are given to the same components as much as possible in the drawings, and duplicate descriptions may be omitted.
<First embodiment>
[ concerning image sensor ]
Fig. 1 is a schematic diagram for explaining the configuration of an image sensor 10 of a first embodiment of the present disclosure. As shown in fig. 1, an image sensor 10 is typically a CMOS image sensor or the like, and includes a control circuit 1, a plurality of pixel groups 2 arranged two-dimensionally, a signal line 3, a reading circuit 4, and a digital signal processing unit (DSP) 5.
Here, the pixel group 2 is formed by grouping 4 (2×2) pixels into one pixel group (unit pixel group), but the present invention is not limited thereto, and for example, 3 (3×1) pixels, 8 (4×2) pixels, 9 (3×3) pixels, 16 (4×4) pixels, and the like may be formed as one unit pixel group.
The control circuit 1 drives the plurality of pixel groups 2 of the image sensor 10, controls reading of data based on the optical signals accumulated in the plurality of pixel groups 2, and outputs the data to the outside of the image sensor 10.
The plurality of pixel groups 2 are arranged two-dimensionally; in accordance with control signals from the control circuit 1 and control signals generated by the pixel groups 2 themselves, they accumulate the light incident on the image sensor 10 and are read out as data (electric signals) based on those optical signals.
The electric signals read from the plurality of pixel groups 2 are transmitted to the reading circuit 4 via the signal lines 3 (typically, column signal lines parallel to the column direction), and the electric signals are analog-to-digital converted.
A digital signal processing section (DSP) 5 processes the digital signal produced by the analog-to-digital conversion in the reading circuit 4. The processed digital signal is then transferred via a data bus to a processor, memory, or the like of the imaging device.
The configuration is not limited to this; for example, the image sensor 10 may omit the DSP 5, with a processor at a subsequent stage providing the DSP instead. Alternatively, part of the digital signal processing in the image processing may be shared between the DSP 5 of the image sensor 10 and a DSP included in a subsequent-stage processor or the like. In other words, the present disclosure does not restrict the DSP to any particular location.
[ about merging ]
Fig. 2 is a diagram for explaining the combination used in the image sensor 10 of the first embodiment of the present disclosure. In fig. 2, as an example, in the color pixel arrangement of the single-plate bayer arrangement, each color is composed of 4 (2×2) pixels.
If each pixel is treated as an independent pixel and its data is read individually, a high-resolution image with a high sampling frequency is obtained. On the other hand, as shown in fig. 2, by binning 4 pixels into one pixel group (unit pixel group) and reading the data of those 4 pixels together, it is possible to achieve a high SNR from the larger number of signal electrons, high sensitivity from the effectively larger pixel size, a high frame rate from the smaller number of pixels, and low power consumption from the reduced readout.
That is, binning trades resolution against these other properties. Specifically, when each pixel is treated as an independent pixel and data is read from all pixels, the sampling frequency at readout is fs (all-pixel readout: "full readout mode"). In contrast, when 4 pixels are binned into one pixel group (unit pixel group) and their data is read together, the sampling frequency drops to fs/2 ("binning mode").
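The trade-off above can be illustrated with a minimal NumPy sketch (illustrative only; on the actual sensor, binning is performed in the charge domain at the floating diffusion, not in software):

```python
import numpy as np

def bin_2x2(pixels: np.ndarray) -> np.ndarray:
    """Sum each 2x2 unit pixel group into one value. Summing the four
    signals raises the signal-electron count (higher SNR) while halving
    the sampling frequency in each direction (fs -> fs/2)."""
    h, w = pixels.shape
    return pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

full = np.arange(16, dtype=np.int64).reshape(4, 4)  # 4x4 "full readout mode"
binned = bin_2x2(full)                              # 2x2 "binning mode"
print(binned.shape)  # (2, 2): one quarter as many samples
```

The same reshape-and-reduce pattern extends to the other group sizes mentioned above (3×1, 4×2, 3×3, 4×4) by changing the block dimensions.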
Fig. 3 is a diagram for explaining partial merging used in the image sensor 10 of the first embodiment of the present disclosure. In fig. 3, as an example, bayer arrangement made up of green (G), red (R), blue (B), and green (G) is taken as one bayer unit ("one bayer unit") arranged in a matrix.
Here, one bayer unit ("one bayer unit") is constituted by 4 (2×2) unit pixel groups of G, R, B, G, but not limited thereto, and may be constituted by, for example, 9 (3×3) unit pixel groups, 16 (4×4) unit pixel groups, or the like.
In the even-numbered line group, as indicated by a numeral "1", for example, in a unit pixel group of 4 (2×2) pixels constituted by G, two pixels (first pixel group) of the left half portion are partially combined, and data (first partial combined data) is read. Next, all of 4 (2×2) pixels formed of G are combined, and data (all combined data) is read.
In the odd-numbered line group, as indicated by a numeral "1", for example, in a unit pixel group of 4 (2×2) pixels composed of G, two pixels (first pixel group) of the upper half are partially combined, and data (first partial combined data) is read. Next, all of 4 (2×2) pixels formed of G are combined, and data (all combined data) is read.
In addition, in the unit pixel group of 4 (2×2) pixels configured by G, data of the right half and the lower half may be generated based on a difference between all the merged data read in all the merging and the partial merged data read in the partial merging (second pixel group, second partial merged data).
Here, a part of the unit pixel groups of 4 (2×2) pixels including G is specifically described as an example, but the same processing is performed for the unit pixel groups of 4 (2×2) pixels including G, the unit pixel groups of 4 (2×2) pixels including R, and the unit pixel groups of 4 (2×2) pixels including B.
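The relationship between the all-merged data and the two partial data sets can be sketched as follows (the pixel values and the left-half/right-half split are illustrative, following the even-row grouping of fig. 3):

```python
import numpy as np

# Hypothetical 2x2 unit pixel group at one G site (values are illustrative).
unit = np.array([[12, 7],
                 [9, 14]])

a1 = unit.sum()        # all-merged data: all 4 pixels binned
p1 = unit[:, 0].sum()  # first partial merged data: left-half pair
p2 = a1 - p1           # second partial merged data recovered as the difference
print(p1, p2)          # left half vs. right half of the same group
```

Reading only a1 and p1 from the sensor and deriving p2 by subtraction avoids a third readout per unit pixel group.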
[ concerning Moire fringe removal ]
Next, the process of removing moire (aliasing) using the partial merged data and the all-merged data output from the image sensor 10 will be described. Moire is a form of noise generated in an image, and moire removal in this specification naturally also covers the removal of aliasing artifacts of the same kind.
Fig. 4 is a block diagram for explaining functions and data flows of the photographing apparatus 100 of the first embodiment of the present disclosure. As shown in fig. 4, the imaging device 100 includes an image sensor 10, a data acquisition unit 20, an analysis unit 30, and a moire removal unit 40. The optical system, memory, and the like are not shown or described in detail here, but the imaging device 100 also includes the functions and components of a general imaging device. The imaging device of the present disclosure is applicable to terminals with an image-capturing function, such as digital cameras, smartphones, tablets, and notebook computers.
The image sensor 10 is the image sensor described with reference to fig. 1 to 3, and as shown in fig. 4, the first partial merged data p1 and the entire merged data a1 are read from the image sensor 10.
Here, the entire combined data a1, the first partial combined data p1, and the second partial combined data p2 may be read and generated by the above-described procedure described using fig. 3, for example.
Specifically, the first partial merged data p1 is data based on a first pixel group formed by binning at least one pixel within a unit pixel group composed of a plurality of grouped pixels. In fig. 3, the first partial merged data p1 corresponds to the data read from the two pixels indicated by the numeral "1" in the unit pixel group.
The all-combined data a1 is data based on all pixels in a unit pixel group composed of a plurality of pixels that are grouped. In fig. 3, the total merged data a1 corresponds to data read from 4 (2×2) pixels in the unit pixel group.
Then, the first partial merged data p1 is subtracted from the entire merged data a1, thereby generating second partial merged data p2 based on the difference thereof.
The analysis unit 30 analyzes the frequency characteristics of the video signal of the region constituted by the unit pixel group based on the cross-correlation between the first partial combination data p1 and the second partial combination data p2.
For example, the analysis unit 30 calculates a cross-correlation between the first partial merged data p1 and the second partial merged data p2, and determines that the region including the unit pixel group includes a large amount of high-frequency components when the cross-correlation is small (equal to or smaller than a predetermined threshold).
Moreover, moire tends to occur periodically; taking this characteristic into account, the analysis section 30 can estimate in which unit-pixel-group region of the image sensor 10 moire is generated.
For example, in the example shown in fig. 3, in the even-numbered line groups the analysis section 30 calculates the cross-correlation between the first partial merged data based on the two left-half pixels and the second partial merged data based on the two right-half pixels of the unit pixel group. In the odd-numbered line groups, the analysis section 30 calculates the cross-correlation between the first partial merged data based on the two upper-half pixels and the second partial merged data based on the two lower-half pixels of the unit pixel group. That is, the analysis unit 30 analyzes the frequency characteristics of the video signal in the vertical and horizontal directions within each unit pixel group, with the merge grouping alternating between vertical and horizontal from line to line. Depending on how the moire occurs, it may therefore be impossible to identify every region (unit pixel group) in which moire is generated; however, as described above, by assuming that moire occurs periodically (with a certain length and period) in a fringe pattern, it is possible to estimate in which regions (unit pixel groups) moire is generated.
The threshold used to determine, from the cross-correlation between the first partial merged data p1 and the second partial merged data p2, that a region contains many high-frequency components may be set or changed in advance according to, for example, the type and performance of the imaging device (including the lens, image sensor, and so on), the subject, the surrounding environment, and other shooting conditions. An appropriate threshold may also be set by learning with AI (Artificial Intelligence). For example, the first partial merged data p1, the second partial merged data p2, and the all-merged data a1 may be used as training data, and AI may determine whether moire is generated.
As described above, the analysis method of the analysis unit 30 is not particularly limited, and various analysis methods can be used to analyze the frequency characteristics of the video signal of the region composed of the unit pixel group, and detect the region containing a large amount of high frequency components, the region where moire is generated, and the like.
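One possible form of the analysis step can be sketched as follows. The normalized cross-correlation metric, the function name `flag_high_frequency`, and the threshold of 0.5 are all assumptions for illustration; the patent does not specify a concrete correlation formula or threshold value:

```python
import numpy as np

def flag_high_frequency(p1: np.ndarray, p2: np.ndarray,
                        threshold: float = 0.5) -> bool:
    """Flag a region as rich in high-frequency components (moire-prone)
    when the normalized cross-correlation between the first and second
    partial merged data falls at or below the threshold."""
    p1 = p1.astype(float) - p1.mean()
    p2 = p2.astype(float) - p2.mean()
    denom = np.sqrt((p1 ** 2).sum() * (p2 ** 2).sum())
    if denom == 0:
        return False  # flat region: no high-frequency content
    corr = float((p1 * p2).sum() / denom)
    return corr <= threshold

# Half-group signals that track each other -> low frequency, not flagged.
smooth = flag_high_frequency(np.array([10, 11, 12, 13]),
                             np.array([10, 11, 12, 13]))
# Half-group signals that anti-correlate -> likely aliasing, flagged.
striped = flag_high_frequency(np.array([10, 2, 10, 2]),
                              np.array([2, 10, 2, 10]))
print(smooth, striped)  # False True
```

When the two halves of each unit pixel group disagree strongly, the scene varies within the group at a rate above the binned sampling frequency, which is exactly the condition under which aliasing can appear.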
The moire removing unit 40 removes moire generated in the region constituted by the unit pixel group based on the analysis result of the analyzing unit 30.
For example, the moire removing unit 40 may remove the high frequency component of the video signal (for example, the total combined data a1 of the unit pixel group) in the region including the large amount of the high frequency component by a low pass filter.
The moire removing unit 40 may also remove moire based on the image signal of a nearby region in which no moire is generated (for example, the all-merged data a1 of another unit pixel group). Here, a nearby moire-free region means, for example, a region without moire that is adjacent to the moire-affected region (above, below, left, right, or diagonally on the extensions of its diagonals) and surrounds it. That is, the moire removing unit 40 interpolates the moire-affected region from the image signals of the other regions, thereby generating an image free of moire.
In addition, beyond moire removal, an image generation unit (not shown) that appropriately restores high-frequency components may be provided instead of, or in addition to, the moire removal unit 40, for a region (unit pixel group) that the analysis unit 30 determines to contain high-frequency components requiring processing. Typically, the image generation unit interpolates the region (unit pixel group) judged to need image processing from the surrounding video signal, thereby appropriately restoring the high-frequency components; the restoration may also be performed appropriately using AI.
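The neighborhood-interpolation variant described above can be sketched as follows (a crude software illustration: the boolean mask, the 8-neighborhood, and plain mean interpolation are assumptions for illustration, not the patent's concrete implementation; a low-pass filter over flagged regions is the alternative mentioned above):

```python
import numpy as np

def remove_moire(binned: np.ndarray, moire_mask: np.ndarray) -> np.ndarray:
    """Replace each unit-pixel-group value flagged in `moire_mask` with the
    mean of its non-flagged neighbors (up/down/left/right and diagonal),
    i.e. interpolation from surrounding moire-free regions."""
    out = binned.astype(float).copy()
    h, w = binned.shape
    for y, x in zip(*np.nonzero(moire_mask)):
        neigh = [binned[ny, nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2))
                 if (ny, nx) != (y, x) and not moire_mask[ny, nx]]
        if neigh:
            out[y, x] = float(np.mean(neigh))
    return out

img = np.full((3, 3), 100.0)
img[1, 1] = 255.0                    # centre group flagged as aliased
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
print(remove_moire(img, mask)[1, 1])  # 100.0: filled from its neighbours
```

The `img` array here stands for the per-group all-merged data a1; in a real pipeline the mask would come from the analysis unit's cross-correlation result.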
In the example shown in fig. 4, the data acquisition unit 20 acquires the second partial merged data p2 by subtracting the first partial merged data p1 from the entire merged data a1, and the analysis unit 30 calculates the cross-correlation between the first partial merged data and the second partial merged data p2, but is not limited thereto. For example, the analysis unit 30 may analyze the frequency characteristics of the video signal of the region constituted by the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2, based on the first partial merged data p1 and the entire merged data a 1.
The data acquisition section 20 reads the first partial merged data p1 and the entire merged data a1 from the image sensor 10, but is not limited thereto. For example, the first partial merge data p1 and the second partial merge data p2 may also be read. In this case, the analysis section 30 may analyze the frequency characteristics of the video signal of the region constituted by the unit pixel group based on the cross-correlation between the first partial combination data p1 and the second partial combination data p2 read from the image sensor 10.
Based on the analysis result of the analysis unit 30, the moire removing unit 40 and/or the image generation unit typically generates an appropriate image, including moire removal, from the all-merged data a1 for regions containing many high-frequency components, but an image may also be generated based on the first partial merged data p1 and the second partial merged data p2.
The moire removing unit 40 and/or the image generating unit may perform demosaicing processing on all the combined data a1 or the first partial combined data p1 and the second partial combined data p2, and then generate an appropriate image.
[ Circuit configuration for pixels in image sensor ]
Next, a concrete method of binning a unit pixel group in the image sensor is described; the specific configuration and operation of a unit pixel group are explained in further detail below.
Fig. 5 is a diagram schematically showing a circuit configuration concerning a signal flow for explaining an example of combination in 4 (2×2) pixels. As shown in fig. 5, 4 (2×2) pixels correspond to 4 photodiodes (PD 1 to PD 4), and are constituted by a Floating Diffusion (FD), a source follower amplifier (SF), a reset transistor (RES), transfer transistors (TX 1 to TX 4), and a selection transistor (SEL) connected thereto.
The 4 photodiodes (PD 1 to PD 4) are connected to a common floating diffusion region (FD). On each column of the two-dimensional array of pixel groups, the output of the source follower amplifier (SF) is connected via the selection transistor (SEL) to a common output line (corresponding to the signal line 3 of fig. 1), to which are connected a constant current source (I) serving as the load of the source follower amplifier (SF), a voltage gain conversion unit (not shown), and an analog-to-digital converter (ADC).
Also, the digital signal (data) converted by the analog-to-digital converter (ADC) is held in the line memory 1 or the line memory 2.
Fig. 6 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2×2) pixels shown in fig. 5.
At time t1, the reset transistor (RES) and the transfer transistors (TX 1 to TX 4) are turned on, and the photodiodes (PD 1 to PD 4) are reset.
After a predetermined accumulation period has elapsed, the process of reading data from the pixels constituting the unit pixel group starts: first, at time t2, the reset transistor (RES) is turned off and the selection transistor (SEL) is turned on. The floating-diffusion reset level is then analog-to-digital converted with a predetermined voltage gain and held in line memory 1 (FD reset noise).
At time t3, in the transfer transistors (TX 1 to TX 4), for example, the transfer transistors (TX 1 to TX 2) are turned on for partial combination, thereby transferring signals from the photodiodes (PD 1 to PD 2) to the floating diffusion region (FD). Then, the value is analog-digital converted with a predetermined voltage gain, and is held in the line memory 2 (partially merged data).
At time t4, the value held in the line memory 1 is subtracted from the value held in the line memory 2, and the result is output and transferred to an Image Signal Processor (ISP) or a frame memory at a subsequent stage. Thereby, data (noise removed/partially combined data) from which the reset noise of the Floating Diffusion (FD) is removed, which is called correlated double sampling, can be acquired. This corresponds to the first partial merged data p1 of fig. 4.
At time t5, the transfer transistors (TX1 to TX4) are all turned on for full merging, transferring the signals from the photodiodes (PD1 to PD4) to the floating diffusion (FD). The FD potential is then analog-to-digital converted with a predetermined voltage gain and held in line memory 2 (full merged data).
Here, the output of the partial merged data held in line memory 2 completes before the analog-to-digital conversion of the full merged data completes; if the output of the partial merged data has not yet completed, it is preferable to provide a further line memory or the like to hold the full merged data.
Since the data held in line memory 1 can be reused as the floating diffusion (FD) reset noise for the full merge, at time t6 the value held in line memory 1 is subtracted from the value held in line memory 2 and the result is output. In this way, full merged data from which the reset noise of the floating diffusion (FD) has been removed (noise-removed full merged data) can be acquired. This corresponds to the full merged data a1 of Fig. 4.
In this way, the first partial merged data p1 and the full merged data a1 are extracted from each unit pixel group of the image sensor 10.
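The correlated double sampling arithmetic of times t2 to t6 can be sketched as follows; the digital values are purely illustrative and not taken from the disclosure:

```python
import numpy as np

# Illustrative ADC output values (arbitrary digital numbers)
line_memory_1 = np.array([103.0])  # t2: FD reset noise
line_memory_2 = np.array([358.0])  # t3: partial merged data (PD1 + PD2)

# t4: correlated double sampling -> noise-removed partial merged data p1
p1 = line_memory_2 - line_memory_1

# t5: full merge (PD1 to PD4) overwrites line memory 2
line_memory_2 = np.array([612.0])

# t6: the same reset sample is reused -> noise-removed full merged data a1
a1 = line_memory_2 - line_memory_1

print(p1[0], a1[0])  # 255.0 509.0
```

The same reset sample serves both subtractions because the floating diffusion is not reset between the partial and full merges.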
[ about control method ]
Next, a control method for generating an image while removing moire using the merged data will be described in detail.
Fig. 7 is a flowchart showing the flow of processing of the control method M100 performed by the imaging device 100 of the first embodiment of the present disclosure. As shown in Fig. 7, the control method M100 includes steps S10 to S50, and each step is executed by a processor included in the imaging device 100.
In step S10, the data acquisition unit 20 acquires the first partial merged data based on the first pixel group of the unit pixel group (data acquisition step). As a specific example, as shown in Figs. 3 and 4, the data acquisition unit 20 merges the two pixels indicated by the numeral "1" within each 4 (2×2)-pixel unit pixel group of the image sensor 10 and reads the data (first partial merged data p1).
In step S20, the analysis unit 30 analyzes the frequency characteristics of the video signal of the region constituted by the unit pixel group, based on the cross-correlation between the first partial merged data acquired in step S10 and second partial merged data based on a second pixel group constituted by the pixels of the unit pixel group other than the first pixel group (analysis step). As a specific example, as shown in Figs. 3 and 4, the data acquisition unit 20 merges all pixels of the 4 (2×2)-pixel unit pixel group of the image sensor 10, reads the data (full merged data a1), and subtracts the first partial merged data p1 to obtain the second partial merged data p2. The analysis unit 30 then calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2 and analyzes the frequency characteristics of the video signal of the region constituted by the unit pixel group.
In step S30, the analysis unit 30 determines whether the region including the unit pixel group is a region to be processed, that is, a region that contains many high-frequency components and requires processing of them. Specifically, the analysis unit 30 makes this determination based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2 calculated in step S20. If the correlation is small, the region contains many high-frequency components and is determined to be a region to be processed (YES in step S30); if the correlation is large, the region is determined not to be a region to be processed (NO in step S30).
In step S40 (YES in step S30), the moire removing unit 40 removes the moire generated in the region constituted by the unit pixel group and generates an image (moire removing step). Specifically, the moire removing unit 40 removes the high-frequency components of the region with a low-pass filter, or interpolates the region based on the video signals of other regions, thereby removing the moire while generating the image.
In step S50 (NO in step S30), the image generation unit generates an appropriate image for the region constituted by the unit pixel group based on the full merged data a1.
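Putting steps S10 to S50 together, the control flow can be sketched as below. This is a minimal sketch: the correlation threshold and the 3×3 box filter are illustrative assumptions, not values specified in the disclosure.

```python
import numpy as np

def control_method_m100(p1, a1, threshold=0.9):
    """Sketch of steps S10-S50 for a grid of unit pixel groups.

    p1 : first partial merged data per unit pixel group (2-D array)
    a1 : full merged data per unit pixel group (2-D array)
    threshold : hypothetical correlation threshold (not in the disclosure)
    """
    # S10/S20: the second partial merged data is the remainder of the full merge
    p2 = a1 - p1

    # S20: cross-correlation between p1 and p2 over the region
    corr = np.corrcoef(p1.ravel(), p2.ravel())[0, 1]

    # S30: small correlation -> many high-frequency components -> moire risk
    if corr < threshold:
        # S40: crude low-pass filter (3x3 box blur) standing in for
        # the moire removing unit 40
        k = np.ones((3, 3)) / 9.0
        padded = np.pad(a1, 1, mode="edge")
        out = np.zeros_like(a1, dtype=float)
        for i in range(a1.shape[0]):
            for j in range(a1.shape[1]):
                out[i, j] = (padded[i:i + 3, j:j + 3] * k).sum()
        return out

    # S50: low-frequency region, use the full merged data as-is
    return a1.astype(float)
```

For example, a smooth gradient (p1 and p2 strongly correlated) passes through unchanged, while strongly anti-correlated p1 and p2 trigger the low-pass branch.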
As described above, according to the imaging device 100 and the control method M100 of the first embodiment of the present disclosure, the data acquisition unit 20 acquires the first partial merged data p1 based on the first pixel group of the unit pixel group, the analysis unit 30 analyzes the frequency characteristics of the video signal of the region constituted by the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2, and the moire removing unit 40 removes the moire generated in that region based on the analysis result. As a result, an image can be generated while the moire is properly removed.
[ Other specific examples of grouping (partial merging) of unit pixel groups ]
In the present embodiment, as shown in Fig. 3, 4 (2×2) pixels are grouped into a unit pixel group, the two pixels of the left half or of the upper half are merged, and the read data is taken as the first partial merged data; however, partial merging is not limited to this. Other specific examples of partial merging are described below.
(another embodiment 1)
Fig. 8 is a diagram showing another partial merging (specific example 1) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in Fig. 8, Bayer cells each composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, as in Fig. 3.
In the even-numbered row groups, as indicated by the numeral "1", the two pixels at the upper left and lower right of the unit pixel group (first pixel group) are partially merged, and the data is read (first partial merged data).
In the odd-numbered row groups, as indicated by the numeral "1", the two pixels at the upper right and lower left of the unit pixel group (first pixel group) are partially merged, and the data is read (first partial merged data).
In this way, the pixels arranged on a diagonal of the unit pixel group are partially merged. The rest of the processing is the same as that described with reference to Fig. 3.
(another embodiment 2)
Fig. 9 is a diagram showing another partial merging (specific example 2) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in Fig. 9, Bayer cells each composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, as in Fig. 3.
In both the even-numbered and odd-numbered row groups, as indicated by the numeral "1", the three pixels at the upper right, lower right, and lower left of the unit pixel group (first pixel group) are partially merged, and the data is read (first partial merged data).
In this way, three of the four pixels of the unit pixel group are partially merged. The rest of the processing is the same as that described with reference to Fig. 3.
In the example of Fig. 9, the pixels of the unit pixel group are grouped and partially merged in an asymmetric manner. The analysis unit 30 can therefore analyze the frequencies of the video signal in both the vertical and horizontal directions more appropriately within the unit pixel group, based on the cross-correlation between the first partial merged data (the first pixel group indicated by the numeral "1") and the second partial merged data (the second pixel group consisting of the remaining pixels). That is, the analysis unit 30 can more reliably detect that the region including the unit pixel group contains many high-frequency components and that moire is generated.
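The advantage of the asymmetric grouping can be illustrated numerically. In this sketch the pixel values, the flattened index order (UL, UR, LL, LR), and the grouping indices are illustrative assumptions:

```python
import numpy as np

def split_sums(block, first_idx):
    """Sum the first pixel group and the complementary second group of a 2x2 block."""
    mask = np.zeros(4, dtype=bool)
    mask[list(first_idx)] = True
    flat = block.ravel()  # index order: UL, UR, LL, LR
    return flat[mask].sum(), flat[~mask].sum()

vertical_stripes = np.array([[0.0, 10.0], [0.0, 10.0]])    # varies left-right
horizontal_stripes = np.array([[0.0, 0.0], [10.0, 10.0]])  # varies top-bottom

# Symmetric left-half grouping {UL, LL}: blind to top-bottom variation
print(split_sums(horizontal_stripes, (0, 2)))  # (10.0, 10.0) -> groups agree

# Asymmetric grouping {UR, LL, LR} of Fig. 9: sensitive in both directions
print(split_sums(vertical_stripes, (1, 2, 3)))    # (20.0, 0.0)
print(split_sums(horizontal_stripes, (1, 2, 3)))  # (20.0, 0.0)
```

With the symmetric left-half grouping, both groups sum to the same value for a purely top-bottom variation, so the cross-correlation stays high and the high-frequency content goes undetected; the asymmetric grouping of Fig. 9 responds in both directions.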
(another embodiment 3)
Fig. 10 is a diagram showing another partial merging (specific example 3) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in Fig. 10, Bayer cells each composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, as in Fig. 3.
In the even-numbered row groups, as indicated by the numeral "1", the two pixels of the left half of the unit pixel group (first pixel group) are partially merged and the data is read (first partial merged data); in addition, the upper-right pixel is either added to the first pixel group or merged on its own, and its data is read (additional partial merged data).
In the odd-numbered row groups, as indicated by the numeral "1", the two pixels of the upper half of the unit pixel group (first pixel group) are partially merged and the data is read (first partial merged data); in addition, the lower-left pixel is either added to the first pixel group or merged on its own, and its data is read (additional partial merged data).
In this way, within the unit pixel group, the first pixel group is partially merged, and a different pixel group (the first pixel group plus another pixel, or another pixel on its own) is also partially merged. The whole unit pixel group is then fully merged, and the full merged data is read.
In the example of Fig. 10, partial merged data is acquired for a plurality of regions formed by pixel groups with different centers of gravity, so subtracting each partial merged data from the full merged data also yields a plurality of second partial merged data. Based on the first and second partial merged data obtained from each of these merges, the analysis unit 30 can more reliably detect that the region of the unit pixel group contains many high-frequency components and that moire is generated.
As shown here, partial merging can be performed in various ways, and is not limited to these examples. The partially merged pixels within the unit pixel group may be set regularly or randomly. The partially merged pixels may also be set according to, for example, the type and performance of the imaging device including the lens and image sensor, the subject, the surrounding environment, and other imaging conditions, so that the analysis unit 30 can appropriately detect a unit pixel group (region) that contains many high-frequency components and generates moire.
As described above, one unit pixel group is not limited to 4 (2×2) pixels and may be constituted by, for example, 3 (3×1) pixels, 8 (4×2) pixels, 9 (3×3) pixels, or 16 (4×4) pixels, and one Bayer unit is not limited to 4 (2×2) unit pixel groups and may be constituted by, for example, 9 (3×3) or 16 (4×4) unit pixel groups. How to set the partially merged pixels may be determined as appropriate, or may be determined using AI.
< second embodiment >
Next, as an image sensor according to a second embodiment of the present disclosure, a specific method of combining dual conversion gain (DCG) with an all-pixel imaging-plane phase-difference AF (autofocus) operation will be described. The basic configuration of the image sensor of the present embodiment is the same as that of the image sensor 10 of the first embodiment, and in-pixel merging is performed in the same manner as in the first embodiment. Here, the specific configuration and operation of an image sensor that combines dual conversion gain with all-pixel imaging-plane phase-difference AF will be described in detail.
Fig. 11 is a diagram schematically showing a circuit configuration and signal flow for explaining an example of an operation combining dual conversion gain and all-pixel imaging-plane phase-difference AF in 4 (2×2) pixels. As shown in Fig. 11, each of the four photodiodes (PD1 to PD4) shown in Fig. 5 is divided into two for all-pixel imaging-plane phase-difference AF, giving sub-photodiodes (PD1L/PD1R to PD4L/PD4R), and the transfer transistors (TX1 to TX4) correspondingly become transfer transistors (TX1L/TX1R to TX4L/TX4R) (L: left, R: right).
In this circuit, a Floating Diffusion (FD), a source follower amplifier (SF), a reset transistor (RES), a switching transistor (X), and a selection transistor (SEL) are arranged.
In addition, for dual conversion gain, an additional load capacitance that can be electrically switched in by the switching transistor (X) is added to the pixel. The charge-to-voltage conversion gain can be made smaller by increasing the load capacitance of the floating diffusion (FD), and larger by decreasing it.
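The relation between load capacitance and conversion gain can be illustrated numerically; the capacitance values below are hypothetical and not taken from the disclosure:

```python
# Conversion gain of the floating diffusion: CG = q / C_FD (volts per electron)
Q_E = 1.602e-19  # elementary charge [C]

def conversion_gain_uV_per_e(c_fd_farads):
    """Charge-to-voltage conversion gain in microvolts per electron."""
    return Q_E / c_fd_farads * 1e6

c_hcg = 1.0e-15          # FD capacitance alone (switching transistor X off)
c_lcg = c_hcg + 4.0e-15  # FD plus the additional load capacitance (X on)

print(round(conversion_gain_uV_per_e(c_hcg), 1))  # 160.2 uV/e (HCG)
print(round(conversion_gain_uV_per_e(c_lcg), 1))  # 32.0 uV/e (LCG)
```

Switching the extra capacitance in or out thus trades sensitivity (HCG) against full-well capacity and dynamic range (LCG).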
Fig. 12 is a diagram for explaining the operation of each element of the circuit configuration of the 4 (2×2) pixels shown in Fig. 11. Each of the 4 (2×2) pixels is constituted by sub-pixels divided into two (L: left, R: right). Here, each pixel is composed of two sub-pixels, but the present disclosure is not limited to this; each pixel may be composed of, for example, three or more sub-pixels.
At time t1, the reset transistor (RES), the switching transistor (X), and the transfer transistors (TX1L/TX1R to TX4L/TX4R) are turned on, and the sub-photodiodes (PD1L/PD1R to PD4L/PD4R) are reset.
After a predetermined accumulation period for accumulating charge has elapsed, reading of data from the pixels constituting the unit pixel group starts. First, at time t2, the reset transistor (RES) is turned off, and the switching transistor (X) and the selection transistor (SEL) are turned on. Then, in the state in which the charge-to-voltage conversion gain of the floating diffusion (FD) is lowered (LCG), the FD reset noise is analog-to-digital converted and stored in line memory 1 (LCG FD reset noise).
At time t3, the switching transistor (X) is turned off, and in the state in which the charge-to-voltage conversion gain of the floating diffusion (FD) is raised (HCG), the FD reset noise is AD-converted and stored in line memory 2 (HCG FD reset noise).
At time t4, the transfer transistors (TX1L) and (TX2L) are turned on, and the left-side partial merged data for imaging-plane phase-difference AF in the HCG state is acquired, AD-converted, and stored in line memory 3 (HCG phase-difference AF L partial merged data).
Further, by subtracting the HCG FD reset noise stored in line memory 2 from the HCG phase-difference AF L partial merged data stored in line memory 3, the phase-difference AF L partial merged data in the HCG state with the reset noise removed (noise-removed HCG phase-difference AF L partial merged data) can be acquired.
At time t5, the transfer transistors (TX1L·TX1R) and (TX2L·TX2R) are turned on, and the partial merged data in the HCG state is acquired, AD-converted, and stored in line memory 3 (HCG partial merged data).
By subtracting the HCG FD reset noise stored in line memory 2 from the HCG partial merged data stored in line memory 3, the partial merged data in the HCG state with the reset noise removed (noise-removed HCG partial merged data) can be acquired.
Further, by subtracting the noise-removed HCG phase-difference AF L partial merged data from the noise-removed HCG partial merged data, the noise-removed HCG phase-difference AF R partial merged data can be acquired.
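The subtraction that yields the right-side phase-difference data without a separate readout can be sketched as follows (illustrative digital values):

```python
import numpy as np

# Noise-removed HCG data for one row pair of the unit pixel group
# (illustrative digital values)
hcg_partial = np.array([500.0, 480.0])  # TX1L+TX1R+TX2L+TX2R merged
hcg_af_left = np.array([240.0, 250.0])  # TX1L+TX2L merged (left sub-pixels)

# The right-side phase-difference data need not be read out separately:
hcg_af_right = hcg_partial - hcg_af_left
print(hcg_af_right)  # [260. 230.]
```

Because the merged L+R readout already contains both halves, only the L half needs its own conversion, saving one AD conversion per pixel pair.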
At time t6, the switching transistor (X) is turned on, and in the state in which the charge-to-voltage conversion gain of the floating diffusion (FD) is small (LCG), the transfer transistors (TX1L·TX1R) and (TX2L·TX2R) are turned on, so that the partial merged data in the LCG state is acquired, AD-converted, and stored in line memory 3 (LCG partial merged data).
Next, by subtracting the LCG FD reset noise stored in line memory 1 from the LCG partial merged data stored in line memory 3, the partial merged data in the LCG state with the reset noise removed (noise-removed LCG partial merged data) can be acquired.
At time t7, the transfer transistors (TX1L·TX1R to TX4L·TX4R) are turned on, and the full merged data in the LCG state is acquired, AD-converted, and stored in line memory 3 (LCG full merged data).
At time t8, the LCG FD reset noise stored in line memory 1 is subtracted from the LCG full merged data stored in line memory 3, whereby the full merged data in the LCG state with the reset noise removed (noise-removed LCG full merged data) can be acquired.
In this way, in the HCG state, the phase-difference AF L partial merged data and the partial merged data (corresponding to the first partial merged data p1) are extracted from the image sensor 10, and in the LCG state, the partial merged data (corresponding to the first partial merged data p1) and the full merged data (corresponding to the full merged data a1) are extracted. Further, as described above, the R partial merged data for phase-difference AF in the HCG state can be acquired by computation.
As described above, according to an imaging device equipped with the image sensor of the second embodiment of the present disclosure and its control method, the partial merged data (corresponding to the first partial merged data p1) and the full merged data (corresponding to the full merged data a1) are extracted in the LCG state, so that, as in the first embodiment of the present disclosure, an image can be generated while the moire is properly removed. By properly removing the moire from the high-SNR data in the LCG state, moire generation in fine image detail can be suppressed.
In the present embodiment, the full merged data is not read in the HCG state; however, if the transfer transistors (TX1L·TX1R to TX4L·TX4R) are turned on in the HCG state and the resulting full merged data is AD-converted, HCG full merged data may also be acquired. Since switching the transistors and performing AD conversion place a load on the processor mounted in the imaging device 100, reducing the number of transistor switching and AD conversion operations suppresses increases in the processor load and power consumption.
In the present embodiment, the phase-difference AF partial merged data can be acquired in the HCG state. Phase-difference AF data needs a high SNR, so acquiring it in the noise-resistant HCG state is effective.
In the present embodiment, phase-difference AF merged data cannot be acquired in the LCG state; however, a signal for phase-difference AF may be acquired by providing dedicated pixels in the image sensor, by masking part of a pixel, or by using a (2×1) on-chip microlens structure.
Fig. 13 is a schematic diagram showing an image sensor provided with dedicated pixels for acquiring signals for phase-difference AF. As shown in Fig. 13, among the plurality of pixels arranged in the image sensor, dedicated pixels are provided in which, for example, the left half (L region) or the right half (R region) is masked. Here, the dedicated pixel is divided into left and right halves, but this is not limiting; for example, it may be divided vertically into two, or into three or more parts, and masked so that a phase signal for phase-difference AF can be appropriately acquired.
In a dedicated pixel, if the phase-difference signal is acquired optically from the unmasked area in the LCG state, LCG phase-difference AF data can be acquired.
The embodiments described above are for ease of understanding the present disclosure and are not intended to be limiting of the present disclosure. The elements and their arrangement, materials, conditions, shapes, sizes, and the like in the embodiments are not limited to those exemplified, and may be appropriately changed. In addition, the structures shown in the different embodiments may be partially replaced or combined.
Description of reference numerals:
1 … control circuit, 2 … pixel group, 3 … signal line, 4 … reading circuit, 5 … digital signal processing unit (DSP), 10 … image sensor, 20 … data acquisition unit, 30 … analysis unit, 40 … moire removing unit, 100 … imaging device, M100 … control method, S10 to S50 … steps of the control method M100
Claims (11)
- 1. An imaging device, comprising:
a data acquisition unit that acquires first partial merged data based on a first pixel group constituted by merging at least one pixel of a unit pixel group constituted by a plurality of grouped pixels;
an analysis unit that analyzes a frequency characteristic of a video signal of a region constituted by the unit pixel group, based on a cross-correlation between the first partial merged data and second partial merged data based on a second pixel group constituted by the pixels of the unit pixel group other than the first pixel group; and
a moire removing unit that removes, based on the analysis result, moire generated in an image generated from the video signal of the region constituted by the unit pixel group.
- 2. The imaging device according to claim 1, wherein the moire removing unit removes the moire by removing high-frequency components of the video signal with a low-pass filter.
- 3. The imaging device according to claim 1, wherein the moire removing unit removes the moire based on a video signal of a region constituted by a unit pixel group in the vicinity of the unit pixel group.
- 4. The imaging device according to claim 1, wherein the data acquisition unit acquires full merged data based on all the pixels constituting the unit pixel group, and the second partial merged data is acquired by subtracting the first partial merged data from the full merged data.
- 5. The imaging device according to claim 1, wherein the photodiodes formed corresponding to the plurality of pixels are connected to a common floating diffusion region.
- 6. The imaging device according to claim 5, wherein the floating diffusion region is capable of switching among a plurality of charge-to-voltage conversion gains, the data acquisition unit acquires the first partial merged data at a low conversion gain among the plurality of charge-to-voltage conversion gains, the analysis unit analyzes the frequency characteristic of the video signal of the region constituted by the unit pixel group based on the cross-correlation between the first partial merged data and the second partial merged data at the low conversion gain, and the moire removing unit removes, based on the analysis result at the low conversion gain, the moire generated in the region constituted by the unit pixel group.
- 7. The imaging device according to claim 1, wherein each of the plurality of pixels is further composed of two or more sub-pixels, the data acquisition unit acquires first sub-partial merged data based on a first sub-pixel group constituted by merging at least one of the two or more sub-pixels and second sub-partial merged data based on a second sub-pixel group constituted by the remaining sub-pixels, and the first sub-partial merged data and the second sub-partial merged data are used for phase-difference autofocus.
- 8. The imaging device according to claim 1, wherein each of the plurality of pixels is further composed of two or more sub-pixels, at least one of the plurality of pixels is a masked pixel in which at least one of the two or more sub-pixels is masked, the data acquisition unit acquires sub-partial merged data based on the sub-pixels other than the masked sub-pixels in the pixel including the mask, and the sub-partial merged data is used for phase-difference autofocus.
- 9. An imaging device, comprising:
a data acquisition unit that acquires first partial merged data based on a first pixel group constituted by merging at least one pixel of a unit pixel group constituted by a plurality of grouped pixels;
an analysis unit that analyzes a frequency characteristic of a video signal of a region constituted by the unit pixel group, based on a cross-correlation between the first partial merged data and second partial merged data based on a second pixel group constituted by the pixels of the unit pixel group other than the first pixel group; and
an image generation unit that restores high-frequency components in the region constituted by the unit pixel group based on the analysis result and generates an image.
- 10. A control method executed by a processor included in an imaging device, comprising:
a data acquisition step of acquiring first partial merged data based on a first pixel group constituted by merging at least one pixel of a unit pixel group constituted by a plurality of grouped pixels;
an analysis step of analyzing a frequency characteristic of a video signal of a region constituted by the unit pixel group, based on a cross-correlation between the first partial merged data and second partial merged data based on a second pixel group constituted by the pixels of the unit pixel group other than the first pixel group; and
a moire removing step of removing, based on the analysis result, moire generated in the region constituted by the unit pixel group.
- 11. A terminal equipped with the imaging device according to any one of claims 1 to 9.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/089097 WO2023206030A1 (en) | 2022-04-25 | 2022-04-25 | Shooting devices and control method therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117296311A true CN117296311A (en) | 2023-12-26 |
Family
ID=88516442
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202280001299.5A Pending CN117296311A (en) | 2022-04-25 | 2022-04-25 | Imaging device and control method thereof |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240244346A1 (en) |
JP (1) | JP2024521588A (en) |
CN (1) | CN117296311A (en) |
WO (1) | WO2023206030A1 (en) |
Also Published As
Publication number | Publication date |
---|---|
WO2023206030A1 (en) | 2023-11-02 |
US20240244346A1 (en) | 2024-07-18 |
JP2024521588A (en) | 2024-06-04 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |