WO2023206030A1 - Photographing device and control method thereof - Google Patents
Photographing device and control method thereof
- Publication number: WO2023206030A1
- Application: PCT/CN2022/089097 (CN2022089097W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel group
- pixels
- unit
- data
- merged data
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/618—Noise processing, e.g. detecting, correcting, reducing or removing noise for random or high-frequency noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/46—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
Definitions
- the present disclosure relates to a photographing device and a control method thereof.
- in recent years, image sensors such as CMOS sensors installed in imaging devices have been designed in various ways, for example to achieve high dynamic range (HDR) imaging.
- Patent Document 1: U.S. Patent Application Publication No. 2021/0385389.
- an object of the present disclosure is to provide an imaging device that appropriately removes moiré fringes and a control method thereof.
- a photographing device according to one aspect of the present disclosure includes: a data acquisition unit that acquires first partial merged data based on a first pixel group, the first pixel group consisting of at least one pixel in a unit pixel group composed of a plurality of grouped pixels; an analysis unit that analyzes the frequency characteristics of the image signal in a region composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels in the unit pixel group other than the first pixel group; and a moiré fringe removal unit that removes moiré fringes generated in the region composed of the unit pixel group based on the analysis result.
- the moiré fringe removal unit may remove the high-frequency components of the image signal through a low-pass filter, thereby removing the moiré fringes.
- the moiré fringe removal unit may remove moiré fringes based on the image signal of a region composed of a unit pixel group in the vicinity of the unit pixel group.
- the data acquisition unit may acquire the entire merged data based on all pixels constituting the unit pixel group, and acquire the second partial merged data by subtracting the first partial merged data from the entire merged data.
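The subtraction that yields the second partial merged data can be sketched as follows; a minimal NumPy illustration in which all array values are hypothetical, not taken from the patent:

```python
import numpy as np

# Hypothetical per-unit-pixel-group values: a1 is the merged readout of
# all four pixels in a 2x2 unit pixel group, p1 is the merged readout
# of the first pixel group (e.g. one half of the group).
a1 = np.array([[400.0, 412.0], [396.0, 405.0]])   # entire merged data
p1 = np.array([[198.0, 210.0], [199.0, 200.0]])   # first partial merged data

# The second partial merged data follows by subtraction, so the second
# pixel group never has to be read out separately.
p2 = a1 - p1
```

Because only the full merge and one partial merge are read from the sensor, the extra readout cost over plain binning is a single additional conversion per unit pixel group.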
- the photodiodes formed corresponding to the plurality of pixels may be connected to a common floating diffusion region.
- the floating diffusion region may be switchable between multiple charge-voltage conversion gains.
- the data acquisition unit may acquire the first partial merged data at a low conversion gain among the multiple charge-voltage conversion gains.
- the analysis unit may analyze the cross-correlation between the first partial merged data and the second partial merged data at the low conversion gain, and thereby analyze the frequency characteristics of the image signal in the region composed of the unit pixel group.
- the moiré fringe removal unit may remove the moiré fringes generated in the region composed of the unit pixel group based on the analysis result at the low conversion gain.
- each of the plurality of pixels may be further composed of two or more sub-pixels.
- the data acquisition unit may acquire first sub-partial merged data based on a first sub-pixel group formed by merging at least one of the two or more sub-pixels, and second sub-partial merged data based on a second sub-pixel group consisting of the sub-pixels other than the first sub-pixel group.
- the first sub-partial merged data and the second sub-partial merged data may be used for phase-difference autofocus.
- each of the plurality of pixels may be further composed of two or more sub-pixels, and any one of the plurality of pixels may include at least one shielded sub-pixel among its two or more sub-pixels.
- the data acquisition unit may acquire sub-partial merged data based on the sub-pixels other than the shielded sub-pixel in a pixel including the shielded sub-pixel, and the sub-partial merged data may be used for phase-difference autofocus.
- a photographing device according to another aspect of the present disclosure includes: a data acquisition unit that acquires first partial merged data based on a first pixel group, the first pixel group consisting of at least one pixel in a unit pixel group composed of a plurality of grouped pixels; an analysis unit that analyzes the frequency characteristics of the image signal in a region composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels in the unit pixel group other than the first pixel group; and an image generation unit that restores high-frequency components in the region composed of the unit pixel group based on the analysis result and generates an image.
- a control method in one aspect of the present disclosure is executed by a processor included in a photographing device, and includes: a data acquisition step of acquiring first partial merged data based on a first pixel group, the first pixel group consisting of at least one pixel in a unit pixel group composed of a plurality of grouped pixels;
- an analysis step of analyzing the frequency characteristics of the image signal in a region composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels in the unit pixel group other than the first pixel group;
- and a moiré fringe removal step of removing moiré fringes generated in the region composed of the unit pixel group based on the analysis result.
- FIG. 1 is a schematic diagram for explaining the structure of the image sensor 10 according to the first embodiment of the present disclosure.
- FIG. 2 is a diagram for explaining binning used in the image sensor 10 according to the first embodiment of the present disclosure.
- FIG. 3 is a diagram for explaining partial binning used in the image sensor 10 according to the first embodiment of the present disclosure.
- FIG. 4 is a block diagram for explaining each function and data flow of the imaging device 100 according to the first embodiment of the present disclosure.
- FIG. 5 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of binning in 4 (2 ⁇ 2) pixels.
- FIG. 6 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2 ⁇ 2) pixels shown in FIG. 5 .
- FIG. 7 is a flowchart showing the flow of processing of the control method M100 executed by the imaging device 100 according to the first embodiment of the present disclosure.
- FIG. 8 is a diagram showing another partial binning (another specific example 1) used in the image sensor 10 of the first embodiment of the present disclosure.
- FIG. 9 is a diagram showing another partial binning (another specific example 2) used in the image sensor 10 of the first embodiment of the present disclosure.
- FIG. 10 is a diagram showing another partial binning (another specific example 3) used in the image sensor 10 of the first embodiment of the present disclosure.
- FIG. 11 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of operating the dual conversion gain and the phase difference AF of the entire pixel imaging plane in 4 (2 ⁇ 2) pixels.
- FIG. 12 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2 ⁇ 2) pixels shown in FIG. 11 .
- FIG. 13 is a schematic diagram showing an image sensor provided with dedicated pixels for acquiring signals for phase difference AF.
- FIG. 1 is a schematic diagram for explaining the structure of the image sensor 10 according to the first embodiment of the present disclosure.
- the image sensor 10 is typically a CMOS image sensor or the like, and includes a control circuit 1 , a plurality of two-dimensionally arranged pixel groups 2 , a signal line 3 , a reading circuit 4 , and a digital signal processing unit (DSP) 5 .
- the pixel group 2 is formed into one pixel group (unit pixel group) by grouping 4 (2 ⁇ 2) pixels, but the invention is not limited to this.
- for example, 3 (3 × 1) pixels, 8 (4 × 2) pixels, 9 (3 × 3) pixels, or 16 (4 × 4) pixels may be used as a unit pixel group.
- the control circuit 1 drives the plurality of pixel groups 2 of the image sensor 10 , controls the reading of data based on the light signals accumulated in the plurality of pixel groups 2 , and outputs the data to the outside of the image sensor 10 .
- the plurality of pixel groups 2 are arranged two-dimensionally; based on the control signal from the control circuit 1 and control signals generated by the pixel group 2 itself, the optical signal arriving at the image sensor 10 is accumulated, and data (an electrical signal) based on the optical signal is read out.
- the electrical signals read from the plurality of pixel groups 2 are transmitted to the reading circuit 4 via the signal lines 3 (typically, column signal lines parallel to the column direction), and the electrical signals are converted from analog to digital.
- the digital signal processing unit (DSP) 5 processes the digital signal converted from analog to digital by the reading circuit 4 . Then, the processed digital signal is transmitted to the processor or memory of the photographing device via the data bus.
- the DSP 5 is not limited to this configuration.
- the image sensor 10 may not include the DSP 5 and a subsequent-stage processor may include the DSP.
- a part of the digital signal processing in the image processing may be processed by the DSP 5 of the image sensor 10 and the DSP included in the subsequent processor or the like.
- the location of the DSP in the present disclosure is not limited to the specified location.
- FIG. 2 is a diagram for explaining binning used in the image sensor 10 according to the first embodiment of the present disclosure.
- each color is composed of 4 (2 ⁇ 2) pixels.
- if each pixel is treated as an independent pixel and the data from each pixel is read out, a high-resolution image based on a high sampling frequency can be obtained.
- as shown in Figure 2, by merging four pixels into one pixel group (unit pixel group) and reading the data of the four pixels together, it is possible to achieve a high SNR based on a larger number of signal electrons, high sensitivity based on the larger effective pixel size, a high frame rate based on the smaller number of read pixels, and low power consumption based on the reduced readout.
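The merging of Fig. 2 can be sketched for a single-colour pixel plane as follows; summation (rather than averaging) and the input values are illustrative assumptions:

```python
import numpy as np

def bin_2x2(frame):
    """Merge each 2x2 block of same-colour pixels into one value by
    summation, quartering the pixel count (the binning of Fig. 2).
    The choice of summing rather than averaging is illustrative."""
    h, w = frame.shape
    # Split the frame into 2x2 blocks and sum within each block.
    return frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A hypothetical 4x4 single-colour plane binned down to 2x2.
binned = bin_2x2(np.arange(16, dtype=float).reshape(4, 4))
```

Each output value carries roughly four times the signal electrons of a single pixel, which is the basis for the SNR and frame-rate benefits described above.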
- FIG. 3 is a diagram for explaining partial binning used in the image sensor 10 according to the first embodiment of the present disclosure.
- a Bayer arrangement consisting of green (G), red (R), blue (B), and green (G) is used as a Bayer unit ("one bayer unit"), and the Bayer units are arranged in a matrix.
- one Bayer unit ("one bayer unit") is composed of four (2 × 2) unit pixel groups of G, R, B, and G, but it is not limited to this. For example, it may also be composed of 9 (3 × 3) unit pixel groups or 16 (4 × 4) unit pixel groups.
- in the odd-numbered row groups, as indicated by the number "1", the two pixels in the upper half of each unit pixel group are partially merged, and the data (the first partial merged data) is read.
- in addition, all 4 (2 × 2) pixels of the unit pixel group composed of G are merged, and the data (the entire merged data) is read.
- the data of the right half or the lower half can then be generated as the difference between the entire merged data read by the full merge and the partial merged data read by the partial merge (the second pixel group, the second partial merged data).
- here, the unit pixel group of 4 (2 × 2) pixels composed of G is used as an example for detailed description; the same process is also performed for the other unit pixel groups of 4 (2 × 2) pixels composed of G, the unit pixel groups composed of R, and the unit pixel groups composed of B.
- moiré is a type of noise produced in images; moiré removal in this specification also covers the removal of similar aliasing, which can naturally be processed in the same way.
- FIG. 4 is a block diagram for explaining each function and data flow of the imaging device 100 according to the first embodiment of the present disclosure.
- the imaging device 100 includes an image sensor 10 , a data acquisition unit 20 , an analysis unit 30 , and a moiré removal unit 40 .
- the imaging device 100 is also equipped with functions or components that are generally included in imaging devices.
- the imaging device of the present disclosure is applicable to digital cameras and terminals such as smartphones, tablets, and notebook computers equipped with imaging functions.
- the image sensor 10 is the image sensor described above using FIGS. 1 to 3 . As shown in FIG. 4 , the first partial merged data p1 and the entire merged data a1 are read from the image sensor 10 .
- the entire merged data a1, the first partial merged data p1, and the second partial merged data p2 can be read and generated by, for example, the process explained above using FIG. 3 .
- the first partial merged data p1 is data based on a first pixel group consisting of at least one pixel in a unit pixel group composed of a plurality of grouped pixels.
- the first partial merged data p1 corresponds to data read from two pixels represented by the number “1” in the unit pixel group.
- the entire merged data a1 is data based on all pixels in a unit pixel group composed of a plurality of grouped pixels.
- the entire merged data a1 corresponds to data read from 4 (2 ⁇ 2) pixels in the unit pixel group.
- the first partial merged data p1 is subtracted from the entire merged data a1, thereby generating the second partial merged data p2 based on the difference.
- the analysis unit 30 analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2.
- the analysis unit 30 calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2. If the cross-correlation is small (less than a predetermined threshold), it is determined that the area composed of the unit pixel group includes a large number of high frequency components.
- the analysis unit 30 can estimate in which area of the unit pixel group the moiré fringes occur in the image sensor 10 .
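As an illustration of this determination, the correlation test might look like the following sketch; the use of the Pearson correlation coefficient and the 0.5 threshold are assumptions, since the patent does not fix a specific correlation measure or threshold value:

```python
import numpy as np

def high_frequency_region(p1, p2, threshold=0.5):
    """Flag a region as containing many high-frequency components when
    the cross-correlation between the two partial merged data sets is
    small (below a predetermined threshold)."""
    p1 = np.asarray(p1, dtype=float).ravel()
    p2 = np.asarray(p2, dtype=float).ravel()
    # Normalized cross-correlation (correlation coefficient) of the two
    # partial merged data sequences over the analysed region.
    corr = np.corrcoef(p1, p2)[0, 1]
    return corr < threshold

# Smooth region: the two half-group signals track each other closely.
smooth = high_frequency_region([10, 11, 12, 13], [10, 11, 12, 13])
# High-frequency region: the halves are anti-correlated, as when a fine
# pattern near the sampling limit falls across the pixel group.
busy = high_frequency_region([10, 30, 10, 30], [30, 10, 30, 10])
```

The intuition: when the scene varies slowly, both halves of a unit pixel group see nearly the same light and their merged values correlate strongly; detail finer than the group spacing drives the halves apart.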
- in the even-numbered row groups, the analysis unit 30 calculates the cross-correlation between the first partial merged data based on the two pixels in the left half of the unit pixel group and the second partial merged data based on the two pixels in the right half; in the odd-numbered row groups, the analysis unit 30 calculates the cross-correlation between the first partial merged data based on the two pixels in the upper half and the second partial merged data based on the two pixels in the lower half.
- in this way, the analysis unit 30 analyzes the frequency characteristics of the image signal in the vertical and horizontal directions, with the binning groups set alternately line by line between the two directions. Depending on how the moiré fringes occur, it may therefore not be possible to determine every area (unit pixel group) in which they occur. However, since moiré fringes form a striped pattern with an assumed periodicity (a predetermined length and period), it is possible to estimate in which areas (unit pixel groups) moiré fringes are generated.
- the threshold used when calculating the cross-correlation between the first partial merged data p1 and the second partial merged data p2 to determine that a region contains many high-frequency components may be preset or changed based on, for example, the type and performance of the imaging device (including the lens, image sensor, etc.) and the shooting situation, such as the subject and the surrounding environment. An appropriate threshold may also be set by AI (artificial intelligence) learning. Furthermore, the first partial merged data p1, the second partial merged data p2, the entire merged data a1, etc. may be used as supervision data, and AI may be used to determine whether moiré fringes are generated.
- the analysis method of the analysis unit 30 is not particularly limited; various analysis methods can be used to analyze the frequency characteristics of the image signal in the area composed of the unit pixel group and to detect areas containing many high-frequency components, areas where moiré fringes occur, and so on.
- the moiré removal unit 40 removes moiré that occurs in a region composed of unit pixel groups based on the analysis result of the analysis unit 30 .
- the moiré removal unit 40 may use a low-pass filter to remove the high-frequency components of the image signal in a region containing many high-frequency components (for example, the entire merged data a1 of the unit pixel group).
- the moiré removal unit 40 may also remove moiré based on the image signal of a region in which moiré does not occur near the moiré-producing region (for example, the entire merged data a1 of another unit pixel group).
- a region in which moiré does not occur near the region where it occurs means, for example, a region adjacent to the moiré-producing region (above, below, to the left or right of, or diagonal to it) or a region surrounding it.
- the moiré removal unit 40 generates an image without moiré by interpolating the area where moiré occurs based on the image signal of another area.
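The interpolation from nearby moiré-free regions can be sketched as follows; the 3×3 neighbourhood and the mean-replacement rule are illustrative choices, not specified by the patent:

```python
import numpy as np

def remove_moire(image, mask):
    """Replace each flagged unit-pixel-group value with the mean of its
    non-flagged neighbours, i.e. interpolation from nearby regions in
    which moire does not occur."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y, x in zip(*np.nonzero(mask)):
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        neigh = image[ys, xs][~mask[ys, xs]]
        if neigh.size:                   # clean neighbours exist
            out[y, x] = neigh.mean()
    return out

# One unit pixel group (centre) flagged as moire-producing.
img = np.array([[1., 1., 1.], [1., 9., 1.], [1., 1., 1.]])
mask = np.zeros_like(img, dtype=bool)
mask[1, 1] = True
cleaned = remove_moire(img, mask)
```

Here `image` holds one value per unit pixel group (e.g. the entire merged data a1) and `mask` marks the groups the analysis flagged, so unflagged groups pass through unchanged.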
- the image generation unit can interpolate an area (unit pixel group) determined to require image processing, based on image signals in its vicinity, to appropriately restore high-frequency components; AI may also be used for this restoration.
- the data acquisition unit 20 obtains the second partial merged data p2 by subtracting the first partial merged data p1 from the entire merged data a1, and the analysis unit 30 calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2.
- however, the analysis unit 30 may instead analyze the frequency characteristics of the image signal of the area composed of the unit pixel group based on the first partial merged data p1 and the entire merged data a1, rather than on the cross-correlation between p1 and p2.
- the data acquisition unit 20 reads the first partial merged data p1 and the entire merged data a1 from the image sensor 10, but is not limited thereto.
- for example, the first partial merged data p1 and the second partial merged data p2 may be read instead.
- the analysis unit 30 may analyze the frequency characteristics of the image signal of the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2 read from the image sensor 10 .
- the moiré removal unit 40 and/or the image generation unit typically generates an appropriate image, including moiré removal, based on the entire merged data a1 for a region containing many high-frequency components, but the image may also be generated based on the first partial merged data p1 and the second partial merged data p2.
- the moiré removal unit 40 and/or the image generation unit may generate an appropriate image after performing demosaic processing on the entire merged data a1 or the first partial merged data p1 and the second partial merged data p2.
- FIG. 5 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of binning in 4 (2 ⁇ 2) pixels.
- 4 (2 × 2) pixels correspond to four photodiodes (PD1 to PD4), which are connected to a floating diffusion (FD), a source follower amplifier (SF), a reset transistor (RES), transfer transistors (TX1 to TX4), and a selection transistor (SEL).
- the four photodiodes are connected to the common floating diffusion (FD).
- the output of the source follower amplifier (SF) is connected, via the selection transistor (SEL), to a common output line (equivalent to the signal line 3 in FIG. 1) shared by a column in which a plurality of pixel groups are two-dimensionally arranged, and operates as a source follower.
- the digital signal (data) converted by the analog-to-digital converter (ADC) is held in the line memory 1 or the line memory 2 .
- FIG. 6 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2 ⁇ 2) pixels shown in FIG. 5 .
- the reset transistor (RES) and the transfer transistor (TX1 to TX4) are turned on, and the photodiodes (PD1 to PD4) are reset.
- a process of reading data from the pixels constituting the unit pixel group is started. First, at time t2, the reset transistor (RES) is turned off and the selection transistor (SEL) is turned on. Next, the value is converted from analog to digital with a predetermined voltage gain and stored in the line memory 1 (FD reset noise).
- among the transfer transistors (TX1 to TX4), for example, the transfer transistors (TX1 and TX2) are turned on, thereby transferring the signals from the photodiodes (PD1 and PD2) to the floating diffusion (FD).
- the value is converted from analog to digital with a predetermined voltage gain and stored in the line memory 2 (partially merged data).
- the value held by the line memory 1 is subtracted from the value held by the line memory 2, and the result is output and transmitted to a subsequent-stage image signal processor (ISP) or frame memory.
- in this way, data from which the reset noise of the floating diffusion (FD) has been removed by so-called correlated double sampling (noise-removed partially merged data) can be acquired; this corresponds to the first partial merged data p1 in Figure 4.
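The correlated double sampling described above reduces to subtracting the value held in line memory 1 from the value held in line memory 2; a minimal sketch with hypothetical digital codes:

```python
# Correlated double sampling (CDS): the digitized FD reset level held
# in line memory 1 is subtracted from the digitized signal level held
# in line memory 2, cancelling the floating-diffusion reset noise.
# The numeric values are illustrative only.
line_memory_1 = 52    # FD reset level (reset noise sample)
line_memory_2 = 460   # signal level after the TX1/TX2 transfer

noise_removed_partial_merged_data = line_memory_2 - line_memory_1
```

Because the same reset-noise sample appears in both readings, the subtraction removes it exactly, which is why the two conversions must bracket a single reset of the FD.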
- next, the transfer transistors (TX1 to TX4) are turned on, thereby transferring the signals from the photodiodes (PD1 to PD4) to the floating diffusion (FD).
- the value is converted from analog to digital with a predetermined voltage gain and stored in the line memory 2 (the entire merged data).
- the first partial merged data p1 and the entire merged data a1 are extracted from each unit pixel group of the image sensor 10 .
- FIG. 7 is a flowchart showing the flow of processing of the control method M100 executed by the imaging device 100 according to the first embodiment of the present disclosure. As shown in FIG. 7 , the control method M100 includes steps S10 to S50, and each step is executed by the processor included in the imaging device 100.
- step S10 the data acquisition section 20 acquires the first partial merged data based on the first pixel group among the unit pixel groups (data acquisition step).
- for example, the data acquisition unit 20 merges the two pixels represented by the number "1" in a unit pixel group of 4 (2 × 2) pixels of the image sensor 10 and reads the data (the first partial merged data p1).
- in step S20, the analysis unit 30 analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data acquired in step S10 and the second partial merged data based on the second pixel group composed of the pixels in the unit pixel group other than the first pixel group (analysis step).
- for example, the data acquisition unit 20 merges all pixels in the unit pixel group of 4 (2 × 2) pixels of the image sensor 10 and reads the data (the entire merged data a1), and obtains the second partial merged data p2 by subtracting the first partial merged data p1 from it.
- the analysis unit 30 calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2, and analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group.
- step S30 the analysis unit 30 determines whether the area constituted by the unit pixel group is a processing target area that contains many high-frequency components and requires processing of the high-frequency components.
- for example, the analysis unit 30 determines whether the area composed of the unit pixel group is a processing target area for high-frequency components based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2 calculated in step S20. If the cross-correlation is small, the area contains many high-frequency components and is determined to be a processing target area (Yes in step S30); if the cross-correlation is large, the area is determined not to be a processing target area (No in step S30).
- step S40 the moire removal unit 40 removes moiré that occurs in a region composed of unit pixel groups and generates an image (moiré removal step).
- for example, the moiré removal unit 40 uses a low-pass filter to remove high-frequency components in the area composed of the unit pixel group, or performs interpolation based on the image signal of another area, thereby removing moiré while generating an image.
- step S50 the image generation unit generates an appropriate image based on the entire merged data a1 for the area composed of the unit pixel group.
- as described above, in the imaging device 100 and the control method M100 of the present embodiment, the data acquisition unit 20 acquires the first partial merged data p1 based on the first pixel group in the unit pixel group.
- the analysis unit 30 analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2.
- the moiré fringe removal unit 40 then removes the moiré fringes generated in the area composed of the unit pixel group. As a result, an image can be generated while moiré fringes are appropriately removed.
- FIG. 8 is a diagram showing another partial binning (another specific example 1) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in FIG. 8 , Bayer cells composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, similarly to FIG. 3 .
- the two pixels at the upper left and lower right are partially merged, and the data (the first partial merged data) is read.
- alternatively, the two pixels at the upper right and lower left are partially merged, and the data (the first partial merged data) is read.
- FIG. 9 is a diagram showing another partial binning (another specific example 2) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in FIG. 9 , Bayer cells composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, similarly to FIG. 3 .
- the three pixels at the upper right, lower right, and lower left are partially merged, and the data (the first partial merged data) is read.
- the analysis section 30 analyzes the data based on the relationship between the first partial merged data (the first pixel group represented by the number “1”) and the second partial merged data (the second pixel group other than the first pixel group in the unit pixel group).
- Cross-correlation enables more appropriate analysis of the vertical and horizontal frequencies of the image signal in the unit pixel group. That is, the analysis unit 30 can analyze more appropriately that the area composed of the unit pixel group contains many high-frequency components and that moiré fringes have occurred.
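The cross-correlation analysis the analysis unit 30 performs can be sketched as follows. This is a minimal illustration, not the patent's implementation: the pixel values, the `normalized_correlation` helper, and the flat/striped sample regions are all made up for the example.

```python
def partial_merge(pixels, indices):
    """Bin (sum) the selected pixels of a 2x2 unit pixel group [UL, UR, LL, LR]."""
    return sum(pixels[i] for i in indices)

def normalized_correlation(xs, ys):
    """Normalized cross-correlation between two sequences of merged data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def analyze(groups):
    """Correlate first (left-half) and second (derived right-half) partial merged data."""
    p1s = [partial_merge(g, (0, 2)) for g in groups]   # first partial merged data
    a1s = [sum(g) for g in groups]                     # entire merged data
    p2s = [a - p for a, p in zip(a1s, p1s)]            # second partial merged data
    return normalized_correlation(p1s, p2s)

# A flat region: left and right halves track each other -> correlation near 1.
flat = [[10, 10, 10, 10], [12, 12, 12, 12], [9, 9, 9, 9], [11, 11, 11, 11]]
# A vertically striped high-frequency region: halves move oppositely -> strongly
# negative correlation, so this area would be flagged as a moiré candidate.
striped = [[10, 0, 10, 0], [0, 12, 0, 12], [9, 0, 9, 0], [0, 11, 0, 11]]
```

A threshold on this correlation value then decides whether the area is treated as a high-frequency processing target.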
- FIG. 10 is a diagram showing another partial binning (another specific example 3) used in the image sensor 10 of the first embodiment of the present disclosure.
- Bayer cells composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, similarly to FIG. 3 .
- the two pixels in the left half (the first pixel group) are partially merged and the data is read (the first partial merged data); in addition, the upper-right pixel is either added to the first pixel group or partially merged separately, and the data is read (additional partial merged data).
- the two pixels in the upper half (the first pixel group) are partially merged and the data is read (the first partial merged data); in addition, the lower-left pixel is either added to the first pixel group or partially merged separately, and the data is read (additional partial merged data).
- the first pixel group is partially merged, and then a different pixel group (the first pixel group plus another pixel, or another pixel alone) is partially merged. Next, the whole unit pixel group is merged, and the entire merged data is read.
- a plurality of partial merged data are acquired for areas composed of pixel groups with different centroids. Therefore, by subtracting each partial merged data from the entire merged data, a plurality of second partial merged data can also be acquired. Based on the first and second partial merged data obtained in these various combinations, the analysis unit 30 can more appropriately analyze whether the area composed of the unit pixel group contains many high-frequency components and whether moiré fringes have occurred.
- the partially merged pixels in the unit pixel group can be set regularly or randomly.
- the analysis unit 30 may set the partially merged pixels in the unit pixel group according to the type and performance of the imaging device including the lens, image sensor, etc., the subject, the surrounding environment, and other shooting conditions, so as to be able to appropriately identify the unit pixel groups (areas) that contain many high-frequency components and produce moiré fringes.
- one unit pixel group is not limited to consisting of 4 (2×2) pixels.
- it may also consist of 3 (3×1) pixels, 8 (4×2) pixels, 9 (3×3) pixels, 16 (4×4) pixels, or the like.
- a Bayer unit (“one bayer unit”) is not limited to consisting of 4 (2×2) unit pixel groups; for example, it may also be composed of 9 (3×3) unit pixel groups, 16 (4×4) unit pixel groups, or the like.
- how to set the partially merged pixels may be determined as appropriate, and AI may also be used for the determination.
- FIG. 11 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of operating the dual conversion gain and the phase difference AF of the entire pixel imaging plane in 4 (2 ⁇ 2) pixels.
- each of the four photodiodes (PD1 to PD4) shown in FIG. 5 is divided into two for use in all-pixel imaging-plane phase difference AF, becoming sub-photodiodes (PD1L/PD1R to PD4L/PD4R), and the transfer transistors (TX1 to TX4) correspondingly become transfer transistors (TX1L/TX1R to TX4L/TX4R) (L: left, R: right).
- this circuit is provided with a floating diffusion (FD), a source follower amplifier (SF), a reset transistor (RES), a switching transistor (X), and a selection transistor (SEL).
- FD floating diffusion
- SF source follower amplifier
- RES reset transistor
- X switching transistor
- SEL selection transistor
- for dual conversion gain, an additional load capacitance that can be electrically switched in via the switching transistor (X) is added to the pixel.
- by increasing the load capacitance of the floating diffusion (FD), the charge-to-voltage conversion gain can be set smaller, and by decreasing the load capacitance, the charge-to-voltage conversion gain can be set larger.
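The relation between FD capacitance and conversion gain can be illustrated numerically. The capacitance values below are arbitrary examples chosen for the sketch, not values from the patent.

```python
E_CHARGE = 1.602e-19  # electron charge in coulombs

def conversion_gain_uV_per_e(c_fd_farads):
    """Charge-to-voltage conversion gain (microvolts per electron) of an FD capacitance."""
    return E_CHARGE / c_fd_farads * 1e6

hcg = conversion_gain_uV_per_e(1.0e-15)            # switching transistor (X) off: FD alone
lcg = conversion_gain_uV_per_e(1.0e-15 + 4.0e-15)  # X on: extra load capacitance added
```

With these illustrative values the HCG state yields about 160 µV/e and the LCG state about 32 µV/e — a larger capacitance gives a smaller gain, exactly the trade the switching transistor (X) toggles.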
- FIG. 12 is a diagram for explaining the operation of each element of the circuit configuration of 4 (2 ⁇ 2) pixels shown in FIG. 11 .
- each of the 4 (2 ⁇ 2) pixels is composed of sub-pixels divided into two (L: left, R: right).
- although each pixel here is composed of two sub-pixels, it is not limited to this; for example, it may be composed of three or more sub-pixels.
- the reset transistor (RES), the switching transistor (X), and the transfer transistor (TX1L/TX1R to TX4L/TX4R) are turned on, and the sub photodiodes (PD1L/PD1R to PD4L/PD4R) are reset.
- the process of reading data from the pixels constituting the unit pixel group is started.
- the reset transistor (RES) is turned off, and the switching transistor (X) and the selection transistor (SEL) are turned on.
- then, in the state (LCG) in which the charge-to-voltage conversion gain of the floating diffusion (FD) is small, the FD reset noise is converted from analog to digital and stored in line memory 1 (LCG / FD reset noise).
- the switching transistor (X) is turned off, and in a state (HCG) in which the charge-to-voltage conversion gain of the floating diffusion (FD) becomes large, the FD reset noise is AD converted and stored in the line memory 2 (HCG/ FD reset noise).
- the transfer transistor (TX1L) and the transfer transistor (TX2L) are turned on, and the left-side partial merged data for imaging-plane phase difference AF in the HCG state is acquired, AD converted, and stored in line memory 3 (HCG / L partial merged data for phase difference AF).
- the transfer transistors (TX1L·TX1R) and (TX2L·TX2R) are turned on, and the partial merged data in the HCG state is acquired, AD converted, and stored in line memory 3 (HCG / partial merged data).
- the switching transistor (X) is turned on, and in the state (LCG) in which the charge-to-voltage conversion gain of the floating diffusion (FD) is small, the transfer transistors (TX1L·TX1R) and (TX2L·TX2R) are turned on; the partial merged data in the LCG state is acquired, AD converted, and stored in line memory 3 (LCG / partial merged data).
- the transfer transistors (TX1L·TX1R to TX4L·TX4R) are turned on, and the entire merged data in the LCG state is acquired, AD converted, and stored in line memory 3 (LCG / entire merged data).
- in the HCG state, the L partial merged data for phase difference AF and the partial merged data (corresponding to the first partial merged data p1) are taken out from the image sensor 10.
- in the LCG state, the partial merged data (corresponding to the first partial merged data p1) and the entire merged data (corresponding to the entire merged data a1) are taken out.
- according to the imaging device equipped with the image sensor of the second embodiment of the present disclosure and its control method, the partial merged data (corresponding to the first partial merged data p1) and the entire merged data (corresponding to the entire merged data a1) are taken out in the LCG state; therefore, as in the first embodiment of the present disclosure, it is possible to generate an image while appropriately removing moiré fringes.
- by appropriately removing moiré fringes from the high-SNR data in the LCG state, the occurrence of moiré fringes in fine images can be suppressed.
- partial merged data for phase difference AF in the HCG state can be acquired.
- data for phase difference AF requires a high SNR, so being able to acquire it in the noise-resistant HCG state is very effective.
- in the LCG state, the merged data for phase difference AF cannot be acquired; however, by providing dedicated pixels in the image sensor and shielding part of those pixels, or by using a (2×1) on-chip microlens structure, signals for phase difference AF can also be acquired.
- FIG. 13 is a schematic diagram showing an image sensor provided with dedicated pixels for acquiring signals for phase difference AF.
- dedicated pixels are provided among the plurality of pixels arranged in the image sensor, and each dedicated pixel is shielded in, for example, the left half (L area) or the right half (R area).
- here the dedicated pixel is divided into left and right halves, but it is not limited to this; for example, it may be divided vertically into two or more parts for shielding, so that the phase signal for phase difference AF can be appropriately acquired.
- if the phase difference signal is optically acquired from the unshielded area in the LCG state, LCG phase difference AF data can be acquired.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Studio Devices (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
An imaging device (100) includes: a data acquisition unit (20) that acquires first partial merged data (p1) based on a first pixel group which, within a unit pixel group made up of a plurality of grouped pixels, is formed by merging at least one pixel; an analysis unit (30) that analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data (p1) and second partial merged data (p2) based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and a moiré removal unit (40) that removes, based on the analysis result, the moiré fringes produced in the area composed of the unit pixel group.
Description
The present disclosure relates to an imaging device and a control method thereof.
In general, imaging devices such as cameras are required to deliver ever higher image quality and functionality, and the image sensors mounted in such devices, for example CMOS sensors, have been designed in various ways.
For example, a technique has been disclosed that realizes high-dynamic-range (HDR) images by processing a plurality of pixels of an image sensor in groups (see, for example, Patent Document 1).
Prior art documents
Patent documents
Patent Document 1: Specification of US Patent Application Publication No. 2021/0385389.
Summary of the invention
Problem to be solved by the invention
However, the image sensor described in Patent Document 1 does not take into account the moiré fringes produced in captured images, and as a result, moiré fringes may appear in the captured image.
Accordingly, an object of the present disclosure is to provide an imaging device that appropriately removes moiré fringes, and a control method thereof.
Means for solving the problem
An imaging device according to one aspect of the present disclosure includes: a data acquisition unit that acquires first partial merged data based on a first pixel group which, within a unit pixel group made up of a plurality of grouped pixels, is formed by merging at least one pixel; an analysis unit that analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and a moiré removal unit that removes, based on the analysis result, the moiré fringes produced in the area composed of the unit pixel group.
In the above aspect, the moiré removal unit may remove the moiré fringes by removing the high-frequency components of the image signal with a low-pass filter.
In the above aspect, the moiré removal unit may remove the moiré fringes based on the image signal of an area composed of a unit pixel group in the vicinity of the unit pixel group.
In the above aspect, the data acquisition unit may acquire entire merged data based on all the pixels constituting the unit pixel group, and acquire the second partial merged data by subtracting the first partial merged data from the entire merged data.
In the above aspect, the photodiodes corresponding to the plurality of pixels may be connected to a common floating diffusion.
In the above aspect, the floating diffusion may be switchable between a plurality of charge-to-voltage conversion gains; the data acquisition unit acquires the first partial merged data at a low conversion gain among the plurality of charge-to-voltage conversion gains; the analysis unit analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data and the second partial merged data at the low conversion gain; and the moiré removal unit removes the moiré fringes produced in the area composed of the unit pixel group based on the analysis result at the low conversion gain.
In the above aspect, each of the plurality of pixels may further be composed of two or more sub-pixels, and the data acquisition unit may acquire, among the two or more sub-pixels, first sub-partial merged data based on a first sub-pixel group formed by merging at least one sub-pixel, and second sub-partial merged data based on a second sub-pixel group composed of the sub-pixels other than the first sub-pixel group, the first sub-partial merged data and the second sub-partial merged data being used for phase difference autofocus.
In the above aspect, each of the plurality of pixels may further be composed of two or more sub-pixels; any one of the plurality of pixels may include a shielded pixel in which at least one of the two or more sub-pixels is shielded; and the data acquisition unit may acquire, in the pixel including the shielded pixel, sub-partial merged data based on the sub-pixels other than the shielded pixel, the sub-partial merged data being used for phase difference autofocus.
An imaging device according to another aspect of the present disclosure includes: a data acquisition unit that acquires first partial merged data based on a first pixel group which, within a unit pixel group made up of a plurality of grouped pixels, is formed by merging at least one pixel; an analysis unit that analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and an image generation unit that, based on the analysis result, restores the high-frequency components in the area composed of the unit pixel group and generates an image.
A control method according to one aspect of the present disclosure is executed by a processor included in an imaging device and includes: a data acquisition step of acquiring first partial merged data based on a first pixel group which, within a unit pixel group made up of a plurality of grouped pixels, is formed by merging at least one pixel; an analysis step of analyzing the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and a moiré removal step of removing, based on the analysis result, the moiré fringes produced in the area composed of the unit pixel group.
Effect of the invention
According to the present disclosure, an imaging device that appropriately removes moiré fringes, and a control method thereof, can be provided.
FIG. 1 is a schematic diagram for explaining the configuration of the image sensor 10 of the first embodiment of the present disclosure.
FIG. 2 is a diagram for explaining the binning used in the image sensor 10 of the first embodiment of the present disclosure.
FIG. 3 is a diagram for explaining the partial binning used in the image sensor 10 of the first embodiment of the present disclosure.
FIG. 4 is a block diagram for explaining the functions and data flow of the imaging device 100 of the first embodiment of the present disclosure.
FIG. 5 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of binning in 4 (2×2) pixels.
FIG. 6 is a diagram for explaining the operation of each element of the circuit configuration of the 4 (2×2) pixels shown in FIG. 5.
FIG. 7 is a flowchart showing the flow of processing of the control method M100 executed by the imaging device 100 of the first embodiment of the present disclosure.
FIG. 8 is a diagram showing another partial binning (another specific example 1) used in the image sensor 10 of the first embodiment of the present disclosure.
FIG. 9 is a diagram showing another partial binning (another specific example 2) used in the image sensor 10 of the first embodiment of the present disclosure.
FIG. 10 is a diagram showing another partial binning (another specific example 3) used in the image sensor 10 of the first embodiment of the present disclosure.
FIG. 11 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of operating the dual conversion gain and the all-pixel imaging-plane phase difference AF in combination in 4 (2×2) pixels.
FIG. 12 is a diagram for explaining the operation of each element of the circuit configuration of the 4 (2×2) pixels shown in FIG. 11.
FIG. 13 is a schematic diagram showing an image sensor provided with dedicated pixels for acquiring signals for phase difference AF.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the drawings. Each embodiment described below is merely a specific example for carrying out the present disclosure and is not to be interpreted as limiting the present disclosure. To facilitate understanding, the same reference numerals are given to the same components in the drawings as far as possible, and duplicate descriptions may be omitted.
<First embodiment>
[Image sensor]
FIG. 1 is a schematic diagram for explaining the configuration of the image sensor 10 of the first embodiment of the present disclosure. As shown in FIG. 1, the image sensor 10 is typically a CMOS image sensor or the like, and includes a control circuit 1, a plurality of two-dimensionally arranged pixel groups 2, signal lines 3, a readout circuit 4, and a digital signal processor (DSP) 5.
Here, 4 (2×2) pixels are grouped so that a pixel group 2 forms one pixel group (unit pixel group); however, the grouping is not limited to this, and, for example, 3 (3×1) pixels, 8 (4×2) pixels, 9 (3×3) pixels, 16 (4×4) pixels, or the like may also form one unit pixel group.
The control circuit 1 drives the plurality of pixel groups 2 of the image sensor 10, controls the reading of data based on the light signals accumulated in the pixel groups 2, and outputs the data to the outside of the image sensor 10.
The plurality of pixel groups 2 are arranged two-dimensionally; based on control signals from the control circuit 1 and control signals generated by the pixel groups 2 themselves, they accumulate the light signals brought to the image sensor 10, which are then read out as data (electrical signals) based on those light signals.
The electrical signals read from the plurality of pixel groups 2 are transmitted via the signal lines 3 (typically, column signal lines parallel to the column direction) to the readout circuit 4, where they are analog-to-digital converted.
The digital signal processor (DSP) 5 processes the digital signals analog-to-digital converted by the readout circuit 4. The processed digital signals are then transmitted via a data bus to a processor, memory, or the like of the imaging device.
The DSP 5 is not limited to this arrangement; for example, the image sensor 10 may omit the DSP 5 and a subsequent processor may have a DSP. Alternatively, part of the digital signal processing in image processing may be handled by the DSP 5 of the image sensor 10 and part by a DSP included in a subsequent processor or the like. In other words, the position of the DSP in the present disclosure is not limited to a specific location.
[Binning]
FIG. 2 is a diagram for explaining the binning used in the image sensor 10 of the first embodiment of the present disclosure. In FIG. 2, as an example, in a single-chip Bayer color pixel arrangement, each color is composed of 4 (2×2) pixels.
If each pixel is treated as an independent pixel and data is read from each pixel, a high-resolution image based on a high sampling frequency can be obtained. On the other hand, as shown in FIG. 2, by binning 4 pixels into one pixel group (unit pixel group) and reading the data from those 4 pixels, a high SNR based on a high signal electron count, high sensitivity based on a wide pixel size, a high frame rate based on a small number of pixels, and low power consumption based on reduced readout can be realized.
That is, with binning, resolution trades off against these other properties. Specifically, when each pixel is treated independently and data is read from all pixels, let the sampling frequency at readout be fs (full readout mode: “full readout mode”). In contrast, when 4 pixels are binned into one pixel group (unit pixel group) and the data is read from those 4 pixels, the sampling frequency at readout drops to fs/2 (binning mode: “binning mode”).
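The halved sampling frequency is what lets high spatial frequencies fold back as moiré. A minimal sketch of that folding, with frequencies in cycles per pixel chosen arbitrarily for illustration:

```python
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency of a sinusoid of f_signal after sampling at f_sample."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# Full readout mode: fs = 1.0 resolves a 0.45-cycle pattern correctly.
full = aliased_frequency(0.45, 1.0)
# Binning mode: fs/2 = 0.5 folds the same pattern down to a spurious 0.05 cycles,
# which appears in the image as a coarse moiré stripe.
binned = aliased_frequency(0.45, 0.5)
```

The partial binning described next is what gives the device enough information to detect and correct this folding after the fact.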
FIG. 3 is a diagram for explaining the partial binning used in the image sensor 10 of the first embodiment of the present disclosure. In FIG. 3, as an example, a Bayer arrangement composed of green (G), red (R), blue (B), and green (G) is taken as one Bayer unit (“one bayer unit”), and the Bayer units are arranged in a matrix.
Here, one Bayer unit (“one bayer unit”) is composed of the 4 (2×2) unit pixel groups G, R, B, and G; however, it is not limited to this and may, for example, be composed of 9 (3×3) unit pixel groups, 16 (4×4) unit pixel groups, or the like.
In the even row groups, as indicated by the number “1”, for example, in a unit pixel group of 4 (2×2) G pixels, the two pixels of the left half (first pixel group) are partially merged and the data is read (first partial merged data). Next, all 4 (2×2) G pixels are merged and the data is read (entire merged data).
In the odd row groups, as indicated by the number “1”, for example, in a unit pixel group of 4 (2×2) G pixels, the two pixels of the upper half (first pixel group) are partially merged and the data is read (first partial merged data). Next, all 4 (2×2) G pixels are merged and the data is read (entire merged data).
In a unit pixel group of 4 (2×2) G pixels, the data of the right half or the lower half can be generated from the difference between the entire merged data read in full binning and the partial merged data read in partial binning (second pixel group, second partial merged data).
Here, part of a unit pixel group of 4 (2×2) G pixels has been described as a specific example, but the same processing is performed for the other unit pixel groups of 4 (2×2) G pixels, the unit pixel groups of 4 (2×2) R pixels, and the unit pixel groups of 4 (2×2) B pixels.
[Moiré removal]
The processing for removing moiré fringes (aliasing) using the partial merged data and the entire merged data output from the image sensor 10 is described below. Moiré is a kind of noise produced in an image; moiré removal in this specification also covers the removal of similar aliasing, which can naturally be handled as well.
FIG. 4 is a block diagram for explaining the functions and data flow of the imaging device 100 of the first embodiment of the present disclosure. As shown in FIG. 4, the imaging device 100 includes the image sensor 10, a data acquisition unit 20, an analysis unit 30, and a moiré removal unit 40. Although an optical system, memory, and the like are not shown here and their detailed description is omitted, the imaging device 100 also has the functions and components that imaging devices generally have. The imaging device of the present disclosure is applicable to digital cameras and to terminals equipped with an imaging function, such as smartphones, tablets, and laptop computers.
The image sensor 10 is the image sensor described above with reference to FIGS. 1 to 3; as shown in FIG. 4, the first partial merged data p1 and the entire merged data a1 are read from the image sensor 10.
Here, the entire merged data a1, the first partial merged data p1, and the second partial merged data p2 can be read and generated, for example, by the procedure described above with reference to FIG. 3.
Specifically, the first partial merged data p1 is data based on a first pixel group formed by merging at least one pixel in a unit pixel group made up of a plurality of grouped pixels. In FIG. 3, the first partial merged data p1 corresponds to the data read from the two pixels indicated by the number “1” in the unit pixel group.
The entire merged data a1 is data based on all the pixels of a unit pixel group made up of a plurality of grouped pixels. In FIG. 3, the entire merged data a1 corresponds to the data read from the 4 (2×2) pixels of the unit pixel group.
Then, the first partial merged data p1 is subtracted from the entire merged data a1, and the second partial merged data p2 is generated from the difference.
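The subtraction can be sketched with made-up pixel values for one 2×2 G unit pixel group (the charges and the left-half grouping are illustrative only):

```python
# Hypothetical charges of one 2x2 unit pixel group, row-major: [UL, UR, LL, LR].
unit_pixel_group = [8, 3, 9, 2]

first_pixel_group = (0, 2)                                # left-half pixels are binned
p1 = sum(unit_pixel_group[i] for i in first_pixel_group)  # first partial merged data (read out)
a1 = sum(unit_pixel_group)                                # entire merged data (read out)
p2 = a1 - p1                                              # second partial merged data (derived)
```

Only p1 and a1 ever cross the sensor interface; p2 (here the right-half sum 3 + 2 = 5) is reconstructed afterwards by subtraction, so no extra readout is needed.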
The analysis unit 30 analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2.
For example, the analysis unit 30 calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2, and when the cross-correlation is small (at or below a prescribed threshold), determines that the area composed of the unit pixel group contains many high-frequency components.
Moreover, since moiré fringes are highly likely to occur periodically, the analysis unit 30 can, taking this characteristic into account, estimate in which areas composed of unit pixel groups of the image sensor 10 moiré fringes have occurred.
For example, in the example shown in FIG. 3, in the even row groups the analysis unit 30 calculates the cross-correlation between the first partial merged data based on the two pixels of the left half of the unit pixel group and the second partial merged data based on the two pixels of the right half. In the odd row groups, the analysis unit 30 calculates the cross-correlation between the first partial merged data based on the two pixels of the upper half of the unit pixel group and the second partial merged data based on the two pixels of the lower half. That is, the analysis unit 30 analyzes the frequency characteristics of the image signal in the vertical and horizontal directions within the unit pixel groups, with the binning grouping set alternately in the vertical and horizontal directions for each row. Therefore, depending on how the moiré fringes occur, it may not be possible to identify every area (unit pixel group) in which moiré fringes occur; however, as described above, by assuming that moiré fringes occur periodically (with a prescribed length and period) in a stripe pattern, it is possible to estimate in which areas (unit pixel groups) moiré fringes have occurred.
The threshold used, together with the cross-correlation between the first partial merged data p1 and the second partial merged data p2, to determine that many high-frequency components are contained can be preset or changed according to, for example, the type and performance of the imaging device including the lens and image sensor, the subject, the surrounding environment, and other shooting conditions. An appropriate threshold may also be set by learning using AI (Artificial Intelligence). Furthermore, for example, the first partial merged data p1, the second partial merged data p2, the entire merged data a1, and the like may be used as supervised data, and AI may be used to determine whether moiré fringes have occurred.
Thus, the analysis method of the analysis unit 30 is not particularly limited; various analysis methods can be used to analyze the frequency characteristics of the image signal in the area composed of the unit pixel group and to detect areas containing many high-frequency components, areas where moiré fringes have occurred, and so on.
The moiré removal unit 40 removes the moiré fringes produced in the area composed of the unit pixel group based on the analysis result of the analysis unit 30.
For example, the moiré removal unit 40 can use a low-pass filter to remove the high-frequency components of the image signal of an area containing many high-frequency components (for example, the entire merged data a1 of that unit pixel group).
The moiré removal unit 40 may also remove moiré fringes based on the image signal of a moiré-free area near the moiré-producing area (for example, the entire merged data a1 of another unit pixel group). Here, a moiré-free area near the moiré-producing area refers, for example, to an area adjacent to the moiré-producing area (above, below, left, right, or diagonally on the extensions of the diagonals) and surrounding it in which no moiré fringes occur. That is, the moiré removal unit 40 interpolates the moiré-producing area from the image signals of other areas, thereby generating a moiré-free image.
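The low-pass option can be sketched with a simple moving average; the kernel width and the sample signal are illustrative stand-ins for whatever filter the moiré removal unit 40 actually applies.

```python
def low_pass(signal, width=3):
    """Moving-average low-pass filter over a 1-D image signal."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]  # clipped at the borders
        out.append(sum(window) / len(window))
    return out

# An alternating, moiré-like high-frequency signal is strongly attenuated:
moire_like = [0, 10, 0, 10, 0, 10]
smoothed = low_pass(moire_like)
```

The peak-to-peak swing of the alternating pattern drops from 10 to about a third of that, which is the sense in which the filter "removes" the high-frequency component carrying the fringes.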
Not limited to the occurrence of moiré fringes, for areas (unit pixel groups) determined by the analysis unit 30 to contain high-frequency components that require processing, an image generation unit (not shown) that appropriately restores the high-frequency components may be provided instead of, or in addition to, the moiré removal unit 40. Typically, the image generation unit can interpolate the area (unit pixel group) determined to need image processing from the image signals of nearby areas, thereby appropriately restoring the high-frequency components; AI may also be used for this restoration.
In the example shown in FIG. 4, the data acquisition unit 20 acquires the second partial merged data p2 by subtracting the first partial merged data p1 from the entire merged data a1, and the analysis unit 30 calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2; however, the configuration is not limited to this. For example, the analysis unit 30 may, starting from the first partial merged data p1 and the entire merged data a1, analyze the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2.
The data acquisition unit 20 reads the first partial merged data p1 and the entire merged data a1 from the image sensor 10, but is not limited to this. For example, the first partial merged data p1 and the second partial merged data p2 may be read. In that case, the analysis unit 30 can analyze the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2 read from the image sensor 10.
The moiré removal unit 40 and/or the image generation unit, based on the analysis result of the analysis unit 30, typically generate an appropriate image for areas containing many high-frequency components based on the entire merged data a1, including removing the moiré fringes; however, the image may also be generated based on the first partial merged data p1 and the second partial merged data p2.
Furthermore, the moiré removal unit 40 and/or the image generation unit may generate an appropriate image after demosaicing the entire merged data a1, or the first partial merged data p1 and the second partial merged data p2.
[Circuit configuration of each pixel in the image sensor]
A specific method of binning a unit pixel group in the image sensor is described. Here, the specific configuration and operation of a unit pixel group in the image sensor are described in further detail.
FIG. 5 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of binning in 4 (2×2) pixels. As shown in FIG. 5, the 4 (2×2) pixels correspond to 4 photodiodes (PD1 to PD4) and are composed of the floating diffusion (FD) connected to them, a source follower amplifier (SF), a reset transistor (RES), transfer transistors (TX1 to TX4), and a selection transistor (SEL).
The 4 photodiodes (PD1 to PD4) are connected to a common floating diffusion (FD). The output of the source follower amplifier (SF) is connected via the selection transistor (SEL) to a common output line (corresponding to the signal line 3 in FIG. 1) on the column in which a plurality of pixel groups are two-dimensionally arranged, and is further connected to a constant current source (I) serving as the load of the source follower amplifier (SF), a voltage gain conversion unit (not shown), and an analog-to-digital converter (ADC).
The digital signal (data) converted by the analog-to-digital converter (ADC) is held in line memory 1 or line memory 2.
FIG. 6 is a diagram for explaining the operation of each element of the circuit configuration of the 4 (2×2) pixels shown in FIG. 5.
At time t1, the reset transistor (RES) and the transfer transistors (TX1 to TX4) are turned on, and the photodiodes (PD1 to PD4) are reset.
Thereafter, after a prescribed accumulation period for accumulating data has elapsed, the process of reading data from the pixels constituting the unit pixel group is started. First, at time t2, the reset transistor (RES) is turned off and the selection transistor (SEL) is turned on. The value is then analog-to-digital converted at a prescribed voltage gain and held in line memory 1 (FD reset noise).
At time t3, for partial binning, among the transfer transistors (TX1 to TX4), for example, the transfer transistors (TX1 and TX2) are turned on, so that the signals from the photodiodes (PD1 and PD2) are transferred to the floating diffusion (FD). The value is then analog-to-digital converted at a prescribed voltage gain and held in line memory 2 (partial merged data).
At time t4, the value held in line memory 1 is subtracted from the value held in line memory 2, and the result is output and transferred to a subsequent image signal processor (ISP) or frame memory. In this way, data from which the reset noise of the floating diffusion (FD) has been removed, known as correlated double sampling, can be acquired (noise-removed partial merged data). This corresponds to the first partial merged data p1 in FIG. 4.
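The correlated double sampling at time t4 is just a subtraction of the two line memories; the numbers below are illustrative ADC codes, not real sensor values.

```python
fd_reset_noise = 7                  # AD-converted FD reset level, sampled at time t2
line_memory_1 = fd_reset_noise      # holds the reset sample

signal_charge = 150                 # charge transferred from PD1 + PD2 at time t3
line_memory_2 = signal_charge + fd_reset_noise  # signal rides on the same reset offset

# Time t4: subtracting the reset sample cancels the common reset noise,
# leaving the noise-removed partial merged data (first partial merged data p1).
p1_noise_removed = line_memory_2 - line_memory_1
```

Because both samples share the identical FD reset offset, the subtraction cancels it exactly; only offsets common to the two samples are removed this way.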
At time t5, for full binning, the transfer transistors (TX1 to TX4) are turned on, so that the signals from the photodiodes (PD1 to PD4) are transferred to the floating diffusion (FD). The value is then analog-to-digital converted at a prescribed voltage gain and held in line memory 2 (entire merged data).
Here, it is assumed that the output of the partial merged data held in line memory 2 is completed before the analog-to-digital conversion of the entire merged data ends; if the output of the partial merged data has not yet finished, it is preferable to provide another line memory or the like for holding the entire merged data.
Since the data held in line memory 1 can be used as the reset noise of the fully binned floating diffusion (FD), at time t6 the value held in line memory 1 is subtracted from the value held in line memory 2 and the result is output. In this way, entire merged data from which the reset noise of the floating diffusion (FD) has been removed can be acquired (noise-removed entire merged data). This corresponds to the entire merged data a1 in FIG. 4.
In this way, the first partial merged data p1 and the entire merged data a1 are taken out from each unit pixel group of the image sensor 10.
[Control method]
Next, the control method for generating an image while removing moiré fringes using the merged data is described in detail.
FIG. 7 is a flowchart showing the flow of processing of the control method M100 executed by the imaging device 100 of the first embodiment of the present disclosure. As shown in FIG. 7, the control method M100 includes steps S10 to S50, each of which is executed by a processor included in the imaging device 100.
In step S10, the data acquisition unit 20 acquires the first partial merged data based on the first pixel group in the unit pixel group (data acquisition step). As a specific example, as shown in FIGS. 3 and 4, the data acquisition unit 20 partially merges the two pixels indicated by the number “1” in the unit pixel group of 4 (2×2) pixels, and reads the data from the image sensor 10 (first partial merged data p1).
In step S20, the analysis unit 30 analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data acquired in step S10 and second partial merged data based on the second pixel group composed of the pixels of the unit pixel group other than the first pixel group (analysis step). As a specific example, as shown in FIGS. 3 and 4, the data acquisition unit 20 fully merges all the pixels of the unit pixel group of 4 (2×2) pixels, reads the data from the image sensor 10 (entire merged data a1), and acquires the second partial merged data p2 by subtracting the first partial merged data p1. The analysis unit 30 then calculates the cross-correlation between the first partial merged data p1 and the second partial merged data p2 and analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group.
In step S30, the analysis unit 30 determines whether the area composed of the unit pixel group is a processing target area that contains many high-frequency components and requires processing of those components. As a specific example, the analysis unit 30 determines, based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2 calculated in step S20, whether the area composed of the unit pixel group is a high-frequency processing target area. When the cross-correlation is small, the area is determined to be a high-frequency processing target area because it contains many high-frequency components (YES in step S30); when the cross-correlation is large, the area is determined not to be a high-frequency processing target area (NO in step S30).
In step S40 (YES in step S30), the moiré removal unit 40 removes the moiré fringes produced in the area composed of the unit pixel group while generating an image (moiré removal step). As a specific example, the moiré removal unit 40 removes the high-frequency components of the area composed of the unit pixel group using a low-pass filter, or interpolates based on the image signals of other areas, thereby generating an image while removing moiré.
In step S50 (NO in step S30), the image generation unit generates an appropriate image for the area composed of the unit pixel group based on the entire merged data a1.
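Steps S10 to S50 can be condensed into a short control-flow sketch. The correlation function and threshold are placeholders supplied by the caller, and the returned strings merely label which branch (S40 or S50) would run; none of these names come from the patent.

```python
def control_method_m100(p1, a1, correlate, threshold):
    """Sketch of control method M100: acquire (S10), analyze (S20), branch (S30)."""
    p2 = a1 - p1                  # S20: second partial merged data by subtraction
    corr = correlate(p1, p2)      # S20: cross-correlation frequency analysis
    if corr <= threshold:         # S30: small correlation -> many high frequencies
        return "S40: remove moire, then generate image"
    return "S50: generate image from entire merged data a1"

# Stub correlators stand in for a real per-region analysis:
smooth_area = control_method_m100(10, 20, lambda a, b: 1.0, 0.5)
moire_area = control_method_m100(10, 20, lambda a, b: 0.1, 0.5)
```

A real implementation would run this per unit pixel group over arrays of merged data rather than per-scalar, but the branch structure is the same.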
As described above, according to the imaging device 100 and the control method M100 of the first embodiment of the present disclosure, the data acquisition unit 20 acquires the first partial merged data p1 based on the first pixel group in the unit pixel group; the analysis unit 30 analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data p1 and the second partial merged data p2; and the moiré removal unit 40 removes the moiré fringes produced in the area composed of the unit pixel group based on the analysis result. As a result, an image can be generated while appropriately removing moiré fringes.
[Other specific examples of grouping (partial binning) of unit pixel groups]
In the present embodiment, as shown in FIG. 3, 4 (2×2) pixels are grouped into a unit pixel group, and the two pixels of the left half or the two pixels of the upper half are partially merged and the data is read as the first partial merged data; however, partial binning is not limited to this. Other specific examples of partial binning are described below.
(Another specific example 1)
FIG. 8 is a diagram showing another partial binning (another specific example 1) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in FIG. 8, Bayer units composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, as in FIG. 3.
In the even row groups, as indicated by the number “1”, the upper-left and lower-right pixels (first pixel group) of the unit pixel group are partially merged, and the data is read (first partial merged data).
In the odd row groups, as indicated by the number “1”, the upper-right and lower-left pixels (first pixel group) of the unit pixel group are partially merged, and the data is read (first partial merged data).
In this way, the pixels arranged on the diagonals of the unit pixel group are partially merged. The rest is the same as the processing described with reference to FIG. 3.
(Another specific example 2)
FIG. 9 is a diagram showing another partial binning (another specific example 2) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in FIG. 9, Bayer units composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, as in FIG. 3.
In both the even row groups and the odd row groups, as indicated by the number “1”, the three pixels at the upper right, lower right, and lower left of the unit pixel group (first pixel group) are partially merged, and the data is read (first partial merged data).
In this way, three of the four pixels of the unit pixel group are partially merged. The rest is the same as the processing described with reference to FIG. 3.
In the example of FIG. 9, the plurality of pixels in the unit pixel group are grouped asymmetrically and partially merged. Thus, based on the cross-correlation between the first partial merged data (based on the first pixel group indicated by the number “1”) and the second partial merged data (based on the second pixel group of the unit pixel group other than the first pixel group), the analysis unit 30 can more appropriately analyze the vertical and horizontal frequencies of the image signal in the unit pixel group. That is, the analysis unit 30 can more appropriately determine that the area composed of the unit pixel group contains many high-frequency components and that moiré fringes have occurred.
(Another specific example 3)
FIG. 10 is a diagram showing another partial binning (another specific example 3) used in the image sensor 10 of the first embodiment of the present disclosure. As shown in FIG. 10, Bayer units composed of green (G), red (R), blue (B), and green (G) are arranged in a matrix, as in FIG. 3.
In the even row groups, as indicated by the number “1”, the two pixels of the left half of the unit pixel group (first pixel group) are partially merged and the data is read (first partial merged data); in addition, the upper-right pixel is either added to the first pixel group or partially merged separately, and the data is read (additional partial merged data).
In the odd row groups, as indicated by the number “1”, the two pixels of the upper half of the unit pixel group (first pixel group) are partially merged and the data is read (first partial merged data); in addition, the lower-left pixel is either added to the first pixel group or partially merged separately, and the data is read (additional partial merged data).
In this way, in the unit pixel group, the first pixel group is partially merged, and then a different pixel group (the first pixel group plus another pixel, or another pixel alone) is partially merged. Next, the whole unit pixel group is merged, and the entire merged data is read.
In the example of FIG. 10, partial merged data is acquired for a plurality of areas composed of pixel groups with different centroids; therefore, by subtracting each partial merged data from the entire merged data, a plurality of second partial merged data can also be acquired. Based on the first and second partial merged data obtained in these various combinations, the analysis unit 30 can more appropriately analyze whether the area composed of the unit pixel group contains many high-frequency components and whether moiré fringes have occurred.
As shown here, there are various forms of partial binning, and it is not limited to these. The pixels to be partially merged in a unit pixel group may be set regularly or randomly. The analysis unit 30 can, for example, set the pixels to be partially merged in a unit pixel group according to the type and performance of the imaging device including the lens and image sensor, the subject, the surrounding environment, and other shooting conditions, so as to be able to appropriately identify the unit pixel groups (areas) that contain many high-frequency components and produce moiré fringes.
As described above, one unit pixel group is not limited to 4 (2×2) pixels and may, for example, be composed of 3 (3×1) pixels, 8 (4×2) pixels, 9 (3×3) pixels, 16 (4×4) pixels, or the like; likewise, one Bayer unit (“one bayer unit”) is not limited to 4 (2×2) unit pixel groups and may, for example, be composed of 9 (3×3) unit pixel groups, 16 (4×4) unit pixel groups, or the like. How to set the pixels to be partially merged may be determined as appropriate, and AI may also be used for the determination.
<Second embodiment>
Next, as the image sensor of the second embodiment of the present disclosure, a specific method of operating dual conversion gain (DCG) and all-pixel imaging-plane phase difference AF (autofocus) in combination is described. The basic configuration of the image sensor of this embodiment is the same as that of the image sensor 10 of the first embodiment, and binning in the pixels is also performed in the same manner as in the first embodiment. Here, regarding the pixels of the image sensor, the specific configuration and operation for combining dual conversion gain and all-pixel imaging-plane phase difference AF are described in detail.
FIG. 11 is a diagram schematically showing a circuit configuration related to a signal flow for explaining an example of operating the dual conversion gain and the all-pixel imaging-plane phase difference AF in combination in 4 (2×2) pixels. As shown in FIG. 11, each of the 4 photodiodes (PD1 to PD4) shown in FIG. 5 is divided into two for use in all-pixel imaging-plane phase difference AF, becoming sub-photodiodes (PD1L/PD1R to PD4L/PD4R), and the transfer transistors (TX1 to TX4) correspondingly become transfer transistors (TX1L/TX1R to TX4L/TX4R) (L: left, R: right).
This circuit is provided with a floating diffusion (FD), a source follower amplifier (SF), a reset transistor (RES), a switching transistor (X), and a selection transistor (SEL).
Furthermore, for dual conversion gain, an additional load capacitance that can be electrically switched in via the switching transistor (X) is added to the pixel. By increasing the load capacitance of the floating diffusion (FD), the charge-to-voltage conversion gain can be set smaller; by reducing the load capacitance, the charge-to-voltage conversion gain can be set larger.
FIG. 12 is a diagram for explaining the operation of each element of the circuit configuration of the 4 (2×2) pixels shown in FIG. 11. Each of the 4 (2×2) pixels is composed of sub-pixels divided into two (L: left, R: right). Although each pixel here is composed of two sub-pixels, it is not limited to this and may, for example, be composed of three or more sub-pixels.
At time t1, the reset transistor (RES), the switching transistor (X), and the transfer transistors (TX1L/TX1R to TX4L/TX4R) are turned on, and the sub-photodiodes (PD1L/PD1R to PD4L/PD4R) are reset.
Thereafter, after a prescribed accumulation period for accumulating data has elapsed, the process of reading data from the pixels constituting the unit pixel group is started. First, at time t2, the reset transistor (RES) is turned off, and the switching transistor (X) and the selection transistor (SEL) are turned on. Then, in the state (LCG) in which the charge-to-voltage conversion gain of the floating diffusion (FD) is small, the FD reset noise is analog-to-digital converted and stored in line memory 1 (LCG / FD reset noise).
At time t3, the switching transistor (X) is turned off, and in the state (HCG) in which the charge-to-voltage conversion gain of the floating diffusion (FD) is large, the FD reset noise is AD converted and stored in line memory 2 (HCG / FD reset noise).
At time t4, the transfer transistor (TX1L) and the transfer transistor (TX2L) are turned on, and the left-side partial merged data for imaging-plane phase difference AF in the HCG state is acquired, AD converted, and stored in line memory 3 (HCG / L partial merged data for phase difference AF).
By subtracting the HCG / FD reset noise held in line memory 2 from the HCG / L partial merged data for phase difference AF held in line memory 3, the L partial merged data for phase difference AF in the HCG state with the reset noise removed can be acquired (noise-removed HCG L partial merged data for phase difference AF).
At time t5, the transfer transistors (TX1L·TX1R) and (TX2L·TX2R) are turned on, and the partial merged data in the HCG state is acquired, AD converted, and stored in line memory 3 (HCG / partial merged data).
By subtracting the HCG / FD reset noise held in line memory 2 from the HCG / partial merged data held in line memory 3, the partial merged data in the HCG state with the reset noise removed can be acquired (noise-removed HCG partial merged data).
By subtracting the noise-removed HCG L partial merged data for phase difference AF from the noise-removed HCG partial merged data, the noise-removed HCG R partial merged data for phase difference AF can be acquired.
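The derivation of the R-side AF data is again a single subtraction; the ADC codes below are illustrative only.

```python
# Noise-removed HCG readouts (illustrative values):
hcg_partial = 180    # L + R sub-pixels of pixels 1 and 2 (time t5 readout)
hcg_af_left = 95     # L sub-pixels only (time t4 readout, for phase difference AF)

# The R-side phase difference AF data never has to be read out separately;
# it falls out of the two readouts already taken:
hcg_af_right = hcg_partial - hcg_af_left
```

Comparing the L-side and the derived R-side values across the image then gives the phase shift used to drive autofocus, at the cost of one subtraction instead of an extra readout and AD conversion.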
At time t6, the switching transistor (X) is turned on, and in the state (LCG) in which the charge-to-voltage conversion gain of the floating diffusion (FD) is small, the transfer transistors (TX1L·TX1R) and (TX2L·TX2R) are turned on; the partial merged data in the LCG state is acquired, AD converted, and stored in line memory 3 (LCG / partial merged data).
Then, by subtracting the LCG / FD reset noise held in line memory 1 from the LCG / partial merged data held in line memory 3, the partial merged data in the LCG state with the reset noise removed can be acquired (noise-removed LCG partial merged data).
At time t7, the transfer transistors (TX1L·TX1R to TX4L·TX4R) are turned on, and the entire merged data in the LCG state is acquired, AD converted, and stored in line memory 3 (LCG / entire merged data).
At time t8, by subtracting the LCG / FD reset noise held in line memory 1 from the LCG / entire merged data held in line memory 3, the entire merged data in the LCG state with the reset noise removed can be acquired (noise-removed LCG entire merged data).
In this way, in the HCG state, the L partial merged data for phase difference AF and the partial merged data (corresponding to the first partial merged data p1) are taken out from the image sensor 10, and in the LCG state, the partial merged data (corresponding to the first partial merged data p1) and the entire merged data (corresponding to the entire merged data a1) are taken out. As described above, in the HCG state, the R partial merged data for phase difference AF can be obtained by calculation.
As described above, according to the imaging device equipped with the image sensor of the second embodiment of the present disclosure and its control method, the partial merged data (corresponding to the first partial merged data p1) and the entire merged data (corresponding to the entire merged data a1) are taken out in the LCG state; therefore, as in the first embodiment of the present disclosure, an image can be generated while appropriately removing moiré fringes. By appropriately removing moiré fringes from the high-SNR data in the LCG state, the occurrence of moiré fringes in fine images can be suppressed.
In the present embodiment, the entire merged data is not read in the HCG state; however, if the transfer transistors (TX1L·TX1R to TX4L·TX4R) are turned on in the HCG state and the entire merged data in the HCG state is acquired and AD converted, the HCG entire merged data can also be acquired. Since transistor switching and AD conversion processing place a load on the processor mounted in the imaging device 100, reducing the number of transistor switchings and AD conversions here can suppress the increase in load and power consumption of the processor mounted in the imaging device 100.
In the present embodiment, the partial merged data for phase difference AF in the HCG state can be acquired. Data for phase difference AF requires a high SNR, so being able to acquire it in the noise-resistant HCG state is very effective.
In the present embodiment, the merged data for phase difference AF cannot be acquired in the LCG state; however, by providing dedicated pixels in the image sensor and shielding part of those pixels, or by using a (2×1) on-chip microlens structure, signals for phase difference AF can also be acquired.
FIG. 13 is a schematic diagram showing an image sensor provided with dedicated pixels for acquiring signals for phase difference AF. As shown in FIG. 13, dedicated pixels are provided among the plurality of pixels arranged in the image sensor, and each dedicated pixel is shielded in, for example, the left half (L area) or the right half (R area). Here, the dedicated pixel is divided into left and right halves, but it is not limited to this; for example, it may be divided vertically into two or more parts for shielding, so that the phase signal for phase difference AF can be appropriately acquired.
In the dedicated pixels, if the phase difference signal is optically acquired from the unshielded area in the LCG state, LCG phase difference AF data can be acquired.
The embodiments described above are intended to facilitate understanding of the present disclosure and are not to be interpreted as limiting the present disclosure. The elements of the embodiments and their arrangements, materials, conditions, shapes, dimensions, and the like are not limited to those exemplified and may be changed as appropriate. Configurations shown in different embodiments may be partially replaced or combined.
Description of reference signs:
1 ... control circuit, 2 ... pixel group, 3 ... signal line, 4 ... readout circuit, 5 ... digital signal processor (DSP), 10 ... image sensor, 20 ... data acquisition unit, 30 ... analysis unit, 40 ... moiré removal unit, 100 ... imaging device, M100 ... control method, S10 to S50 ... steps of control method M100
Claims (11)
- An imaging device, comprising: a data acquisition unit that acquires first partial merged data based on a first pixel group, the first pixel group being formed by merging at least one pixel in a unit pixel group made up of a plurality of grouped pixels; an analysis unit that analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and a moiré removal unit that, based on the analysis result, removes the moiré fringes produced in an image, the image being generated based on the image signal of the area composed of the unit pixel group.
- The imaging device according to claim 1, wherein the moiré removal unit removes the moiré fringes by removing the high-frequency components of the image signal with a low-pass filter.
- The imaging device according to claim 1, wherein the moiré removal unit removes the moiré fringes based on the image signal of an area composed of a unit pixel group in the vicinity of the unit pixel group.
- The imaging device according to claim 1, wherein the data acquisition unit acquires entire merged data based on all the pixels constituting the unit pixel group, and acquires the second partial merged data by subtracting the first partial merged data from the entire merged data.
- The imaging device according to claim 1, wherein the photodiodes corresponding to the plurality of pixels are connected to a common floating diffusion.
- The imaging device according to claim 5, wherein the floating diffusion is switchable between a plurality of charge-to-voltage conversion gains, the data acquisition unit acquires the first partial merged data at a low conversion gain among the plurality of charge-to-voltage conversion gains, the analysis unit analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group based on the cross-correlation between the first partial merged data and the second partial merged data at the low conversion gain, and the moiré removal unit removes the moiré fringes produced in the area composed of the unit pixel group based on the analysis result at the low conversion gain.
- The imaging device according to claim 1, wherein each of the plurality of pixels is further composed of two or more sub-pixels, and the data acquisition unit acquires, among the two or more sub-pixels, first sub-partial merged data based on a first sub-pixel group formed by merging at least one sub-pixel, and second sub-partial merged data based on a second sub-pixel group composed of the sub-pixels other than the first sub-pixel group, the first sub-partial merged data and the second sub-partial merged data being used for phase difference autofocus.
- The imaging device according to claim 1, wherein each of the plurality of pixels is further composed of two or more sub-pixels, any one of the plurality of pixels includes a shielded pixel in which at least one of the two or more sub-pixels is shielded, and the data acquisition unit acquires, in the pixel including the shielded pixel, sub-partial merged data based on the sub-pixels other than the shielded pixel, the sub-partial merged data being used for phase difference autofocus.
- An imaging device, comprising: a data acquisition unit that acquires first partial merged data based on a first pixel group, the first pixel group being formed by merging at least one pixel in a unit pixel group made up of a plurality of grouped pixels; an analysis unit that analyzes the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and an image generation unit that, based on the analysis result, restores the high-frequency components in the area composed of the unit pixel group and generates an image.
- A control method executed by a processor included in an imaging device, comprising: a data acquisition step of acquiring first partial merged data based on a first pixel group, the first pixel group being formed by merging at least one pixel in a unit pixel group made up of a plurality of grouped pixels; an analysis step of analyzing the frequency characteristics of the image signal in the area composed of the unit pixel group, based on the cross-correlation between the first partial merged data and second partial merged data based on a second pixel group composed of the pixels of the unit pixel group other than the first pixel group; and a moiré removal step of removing, based on the analysis result, the moiré fringes produced in the area composed of the unit pixel group.
- A terminal equipped with the imaging device according to any one of claims 1 to 9.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022529399A JP2024521588A (ja) | 2022-04-25 | 2022-04-25 | 撮像装置およびその制御方法 |
CN202280001299.5A CN117296311A (zh) | 2022-04-25 | 2022-04-25 | 拍摄装置及其控制方法 |
US18/289,545 US20240244346A1 (en) | 2022-04-25 | 2022-04-25 | Photographing device and control method thereof |
PCT/CN2022/089097 WO2023206030A1 (zh) | 2022-04-25 | 2022-04-25 | 拍摄装置及其控制方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/089097 WO2023206030A1 (zh) | 2022-04-25 | 2022-04-25 | 拍摄装置及其控制方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023206030A1 true WO2023206030A1 (zh) | 2023-11-02 |
Family
ID=88516442
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/089097 WO2023206030A1 (zh) | 2022-04-25 | 2022-04-25 | 拍摄装置及其控制方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240244346A1 (zh) |
JP (1) | JP2024521588A (zh) |
CN (1) | CN117296311A (zh) |
WO (1) | WO2023206030A1 (zh) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000338000A (ja) * | 1999-03-23 | 2000-12-08 | Hitachi Ltd | 電子ディスプレイ装置の画素欠陥検査方法、および、電子ディスプレイ装置の製造方法 |
JP2002369083A (ja) * | 2001-06-07 | 2002-12-20 | Olympus Optical Co Ltd | 撮像装置 |
US20030002747A1 (en) * | 2001-07-02 | 2003-01-02 | Jasc Software,Inc | Moire correction in images |
JP2013106883A (ja) * | 2011-11-24 | 2013-06-06 | Fujifilm Corp | 放射線撮影装置及び画像処理方法 |
WO2013088878A1 (ja) * | 2011-12-13 | 2013-06-20 | 富士フイルム株式会社 | 放射線撮影方法及び装置 |
CN103686103A (zh) * | 2013-12-31 | 2014-03-26 | 上海集成电路研发中心有限公司 | 具有合并和分裂模式的图像传感器、像素单元 |
CN111869203A (zh) * | 2017-12-30 | 2020-10-30 | 张家港康得新光电材料有限公司 | 用于减少自动立体显示器上的莫尔图案的方法 |
US20210385389A1 (en) | 2020-06-04 | 2021-12-09 | Samsung Electronics Co., Ltd. | Image sensor, electronic device, and operating method of image sensor |
-
2022
- 2022-04-25 JP JP2022529399A patent/JP2024521588A/ja active Pending
- 2022-04-25 US US18/289,545 patent/US20240244346A1/en active Pending
- 2022-04-25 WO PCT/CN2022/089097 patent/WO2023206030A1/zh active Application Filing
- 2022-04-25 CN CN202280001299.5A patent/CN117296311A/zh active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000338000A (ja) * | 1999-03-23 | 2000-12-08 | Hitachi Ltd | 電子ディスプレイ装置の画素欠陥検査方法、および、電子ディスプレイ装置の製造方法 |
JP2002369083A (ja) * | 2001-06-07 | 2002-12-20 | Olympus Optical Co Ltd | 撮像装置 |
US20030002747A1 (en) * | 2001-07-02 | 2003-01-02 | Jasc Software,Inc | Moire correction in images |
JP2013106883A (ja) * | 2011-11-24 | 2013-06-06 | Fujifilm Corp | 放射線撮影装置及び画像処理方法 |
WO2013088878A1 (ja) * | 2011-12-13 | 2013-06-20 | 富士フイルム株式会社 | 放射線撮影方法及び装置 |
CN103686103A (zh) * | 2013-12-31 | 2014-03-26 | 上海集成电路研发中心有限公司 | 具有合并和分裂模式的图像传感器、像素单元 |
CN111869203A (zh) * | 2017-12-30 | 2020-10-30 | 张家港康得新光电材料有限公司 | 用于减少自动立体显示器上的莫尔图案的方法 |
US20210385389A1 (en) | 2020-06-04 | 2021-12-09 | Samsung Electronics Co., Ltd. | Image sensor, electronic device, and operating method of image sensor |
Also Published As
Publication number | Publication date |
---|---|
CN117296311A (zh) | 2023-12-26 |
US20240244346A1 (en) | 2024-07-18 |
JP2024521588A (ja) | 2024-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6239975B2 (ja) | 固体撮像装置及びそれを用いた撮像システム | |
US10403672B2 (en) | Solid-state imaging device with pixels having first and second photoelectric conversion units, driving method therefor, and electronic apparatus | |
EP3093819B1 (en) | Imaging apparatus, imaging system, and signal processing method | |
JP5351156B2 (ja) | 画像センサの複数の構成要素の読み出し | |
US8350940B2 (en) | Image sensors and color filter arrays for charge summing and interlaced readout modes | |
US9060134B2 (en) | Imaging apparatus, image sensor, imaging control method, and program | |
US8953075B2 (en) | CMOS image sensors implementing full frame digital correlated double sampling with global shutter | |
US8780238B2 (en) | Systems and methods for binning pixels | |
KR102612718B1 (ko) | 복수의 슈퍼-픽셀(super-pixel)들을 포함하는 이미지 센서 | |
JP6229652B2 (ja) | 撮像装置および撮像方法、電子機器、並びにプログラム | |
US9036052B2 (en) | Image pickup apparatus that uses pixels different in sensitivity, method of controlling the same, and storage medium | |
KR20120049801A (ko) | 고체 촬상 소자 및 카메라 시스템 | |
JP2007274589A (ja) | 固体撮像装置 | |
JP2012231333A (ja) | 撮像装置及びその制御方法、プログラム | |
US20140232914A1 (en) | Solid-state image pickup apparatus capable of realizing image pickup with wide dynamic range and at high frame rate, control method therefor, and storage medium | |
US7760959B2 (en) | Imaging apparatus and imaging system | |
JPWO2018198766A1 (ja) | 固体撮像装置および電子機器 | |
US8970721B2 (en) | Imaging device, solid-state imaging element, image generation method, and program | |
US8582006B2 (en) | Pixel arrangement for extended dynamic range imaging | |
JP6478600B2 (ja) | 撮像装置およびその制御方法 | |
JP2016213740A (ja) | 撮像装置及び撮像システム | |
JP2016090785A (ja) | 撮像装置及びその制御方法 | |
WO2023206030A1 (zh) | 拍摄装置及其控制方法 | |
JP6004656B2 (ja) | 撮像装置、その制御方法、および制御プログラム | |
JP6705054B2 (ja) | 撮像装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 202280001299.5 Country of ref document: CN Ref document number: 2022529399 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18289545 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22938887 Country of ref document: EP Kind code of ref document: A1 |