WO2016167140A1 - Image-capturing device, image-capturing method, and program - Google Patents

Image-capturing device, image-capturing method, and program

Info

Publication number
WO2016167140A1
Authority
WO
WIPO (PCT)
Prior art keywords
short
long
pixels
exposure
image
Application number
PCT/JP2016/060897
Other languages
French (fr)
Japanese (ja)
Inventor
Ryosuke Furukawa (古川 亮介)
Original Assignee
Sony Semiconductor Solutions Corporation (ソニーセミコンダクタソリューションズ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2016167140A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/75 Circuitry for compensating brightness variation in the scene by influencing optical camera components
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range

Definitions

  • The present technology relates to an imaging device, an imaging method, and a program, and in particular to an imaging device, an imaging method, and a program capable of imaging with an expanded dynamic range.
  • CCD (Charge Coupled Device)
  • CMOS (Complementary Metal Oxide Semiconductor)
  • For a CMOS image sensor, known methods of expanding the dynamic range include adjusting the exposure time by operating the electronic shutter at high speed, capturing a plurality of frames at high speed, and giving the photoelectric conversion characteristic of the light-receiving unit a logarithmic response.
  • It has also been proposed to increase the dynamic range by arranging pixels having different exposure times or different sensitivities in a local area of a predetermined size and combining their signals (see, for example, Patent Document 1).
  • In that approach, a gain given by the set exposure time ratio or a sensitivity ratio calculated in advance is applied to the pixel signal with the lower signal amount, and this value and the pixel signal with the higher signal amount are synthesized in fixed proportions.
  • An HDR image can be obtained by synthesizing a plurality of differently exposed images obtained through multiple exposures and shutter operations with the above method.
  • With this method, however, the image may break down in the area of a moving object.
  • Patent Document 1 therefore proposes suppressing the blurring of a moving object by selecting an optimum combination ratio for the region at each pixel position.
  • The present technology has been made in view of such a situation, and suppresses the generation of false colors while enabling photographing with a wide dynamic range.
  • In an imaging device according to one aspect of the present technology, when 2 × 2 pixels having the same spectral sensitivity are defined as one block, two of the 2 × 2 pixels in the block are long-time exposure pixels and the other two are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction. The device includes a processing unit that processes signals from the pixels arranged on the imaging surface in units of the blocks.
  • The processing unit generates a long-time exposure image by adding the signals from the long-time exposure pixels in the one block, and generates a short-time exposure image by adding the signals from the short-time exposure pixels.
  • The aliasing component detection unit can detect the aliasing component by determining whether the difference between the long-time exposure image and the short-time exposure image, and the saturation of each of the long-time exposure image and the short-time exposure image, satisfy predetermined conditions.
  • Specifically, the aliasing component detection unit can detect the aliasing component by determining whether the following first to fourth conditions are satisfied.
  • First condition: there is a difference between the long-time exposure image and the short-time exposure image.
  • Second condition: there is a large difference in saturation between the long-time exposure image and the short-time exposure image.
  • Third condition: the signal with the larger saturation has a green or magenta color.
  • Fourth condition: when the generated signal is subtracted from a signal having no aliasing component, the difference appears with the same amplitude in opposite directions in the G pixel and the R pixel, or in the G pixel and the B pixel.
  • The short-time exposure image can be an exposure-corrected image.
  • The long-time exposure image and the short-time exposure image can be converted to a predetermined color space to obtain their saturation.
  • For a pixel in which the aliasing component detection unit detects an aliasing component, the composition ratio can be set so as to use a larger proportion of whichever of the long-time exposure image or the short-time exposure image is determined to contain no aliasing component.
  • For a pixel in which the moving object detection unit detects a moving object but the aliasing component detection unit determines the cause to be an aliasing component rather than a moving object, the composition ratio can be set so as to use a larger proportion of the long-time exposure image or the short-time exposure image in which no aliasing component is generated.
  • In the imaging method according to one aspect of the present technology, when 2 × 2 pixels having the same spectral sensitivity are defined as one block, two of the 2 × 2 pixels in the block are long-time exposure pixels, the other two are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction; an imaging device provided with a processing unit that processes signals from the pixels arranged on the imaging surface in units of the blocks carries out the following steps.
  • The processing unit generates a long-time exposure image by adding the signals from the long-time exposure pixels in the one block and generates a short-time exposure image by adding the signals from the short-time exposure pixels; the generated long-time exposure image and short-time exposure image are synthesized at a predetermined composition ratio; a moving object is detected from the difference between the long-time exposure image and the short-time exposure image; and an aliasing component is detected from the long-time exposure image and the short-time exposure image. The composition ratio is set from the detection result of the moving object in the moving object detection unit and the detection result of the aliasing component in the aliasing component detection unit.
  • The program according to one aspect of the present technology is for an imaging device in which, when 2 × 2 pixels having the same spectral sensitivity are defined as one block, two of the 2 × 2 pixels in the block are long-time exposure pixels, the other two are short-time exposure pixels, pixels having the same exposure time are arranged in an oblique direction, and a processing unit processes signals from the pixels arranged on the imaging surface in units of the blocks.
  • The program causes the device to execute a process including: generating a long-time exposure image by adding the signals from the long-time exposure pixels in the one block; generating a short-time exposure image by adding the signals from the short-time exposure pixels; synthesizing the generated long-time exposure image and short-time exposure image at a predetermined composition ratio; detecting a moving object from the difference between the long-time exposure image and the short-time exposure image; and detecting an aliasing component from the long-time exposure image and the short-time exposure image. The composition ratio is set based on the detection result of the moving object in the moving object detection unit and the detection result of the aliasing component in the aliasing component detection unit.
  • In one aspect of the present technology, when 2 × 2 pixels having the same spectral sensitivity are defined as one block, two of the 2 × 2 pixels in the block are long-time exposure pixels, the other two are short-time exposure pixels, pixels having the same exposure time are arranged in an oblique direction, and signals from the pixels arranged on the imaging surface are processed in units of the blocks.
  • In this processing, a long-time exposure image is generated by adding the signals from the long-time exposure pixels in one block, and a short-time exposure image is generated by adding the signals from the short-time exposure pixels.
  • The long-time exposure image and the short-time exposure image are synthesized at a predetermined composition ratio, a moving object is detected from the difference between the long-time exposure image and the short-time exposure image, and an aliasing component is detected from the long-time exposure image and the short-time exposure image.
  • The synthesis ratio is set based on the detection result of the moving object in the moving object detection unit and the detection result of the aliasing component in the aliasing component detection unit.
  • FIG. 1 is a diagram illustrating a configuration of an embodiment of an imaging apparatus to which the present technology is applied.
  • The imaging unit includes an imaging element 102 configured by, for example, a CMOS image sensor, and outputs image data obtained by photoelectric conversion.
  • the output image data is input to the image processing unit 103.
  • the output image of the image sensor 102 is a so-called mosaic image in which any pixel value of RGB is set for each pixel.
  • The image processing unit 103 performs demosaic processing that sets all the R, G, and B pixel values for each pixel, processing that generates a high dynamic range (HDR) image by synthesizing a long-time exposure image and a short-time exposure image as described later, blur correction processing, and the like.
  • the output of the image processing unit 103 is input to the signal processing unit 104.
  • the signal processing unit 104 performs signal processing in a general camera, such as white balance (WB) adjustment and gamma correction, and generates an output image 120.
  • The output image 120 is stored in a storage unit (not shown) or output to a display unit (not shown).
  • the control unit 105 outputs a control signal to each unit according to a program stored in a memory (not shown), for example, and controls various processes.
  • each rectangle schematically represents a pixel.
  • Each rectangle has a symbol indicating the type of color filter (color light output from each pixel). For example, “R” is assigned to R (Red) pixels, “G” is assigned to G (Green) pixels, and “B” is assigned to B (Blue) pixels. The same applies to the following description.
  • The arrangement of the R pixels, G pixels, and B pixels is repeated in units of 4 × 4 pixels in the vertical and horizontal directions.
  • The 2 × 2 four pixels at the upper left are all R pixels.
  • These 2 × 2 R pixels are defined as an R block.
  • All four of the 2 × 2 pixels adjacent to the right of the R block are G pixels (referred to as a G block).
  • The 2 × 2 four pixels below the R block are likewise all G pixels (a G block), and the 2 × 2 four pixels diagonally below and to the right of the R block are all B pixels (referred to as a B block). In this way, each 2 × 2 block of four pixels has a single color, and an R block, a G block, a G block, and a B block, each of four pixels, are arranged in the 4 × 4 pixel region.
  • That is, the R pixels, G pixels, and B pixels are arranged in 4 × 4 units composed of four blocks of four pixels each.
  • Hereinafter, such a pixel arrangement is referred to as a four-divided Bayer RGB arrangement as appropriate.
  • In the example described here, RGB pixels are arranged, but a configuration including W (White) pixels is also possible.
  • present technology can be applied to a combination of cyan, magenta, and yellow instead of RGB.
  • When W pixels are included, the W pixel functions as a pixel whose spectral sensitivity covers the entire color range, and the R pixel, the G pixel, and the B pixel function as pixels whose spectral sensitivities are characteristic of their respective colors.
  • That is, the present technology can also be applied to an image sensor in which pixels of four types of spectral sensitivity, including a spectral sensitivity covering the entire color range, are arranged on the imaging surface.
  • Four blocks are included in one unit composed of 4 × 4 pixels, and two of them are G blocks; one of the two G blocks may be a W block in which W pixels are arranged.
  • the four pixels included in one block have the same color, but two types of exposure times are set.
  • Four pixels included in one block are set as long exposure pixels L or short exposure pixels S, respectively.
  • The relationship between the exposure times is: long-time exposure L > short-time exposure S.
  • the pixel arrangement focusing on the exposure time will be described.
  • the R pixels located at the upper left and lower right in the R block are long-time exposure pixels L.
  • the R pixel set as the long-time exposure pixel L is described as an RL pixel.
  • Similarly, G pixels and B pixels set as the long-time exposure pixel L are described as GL pixels and BL pixels, respectively.
  • the R pixels located at the upper right and lower left in the R block are the short-time exposure pixels S.
  • the R pixel set as the short-time exposure pixel S is described as an RS pixel.
  • Similarly, G pixels and B pixels set as the short-time exposure pixel S are described as GS pixels and BS pixels, respectively.
  • The G pixels located at the upper left and lower right in the G block are GL pixels set as long-time exposure pixels L, and the G pixels located at the upper right and lower left are GS pixels set as short-time exposure pixels S.
  • The B pixels located at the upper left and lower right in the B block are BL pixels set as long-time exposure pixels L, and the B pixels located at the upper right and lower left are BS pixels set as short-time exposure pixels S.
  • In short, in the pixel arrangement to which the present technology is applied, pixels of the same color are arranged in blocks of 2 × 2 pixels, and each of the four same-color pixels in one block is set either as a pixel photographed with long-time exposure or as a pixel photographed with short-time exposure.
  • Here, one block is described as an M × M block in which the numbers of pixels in the vertical and horizontal directions are the same, but the present technology can also be applied to an M × N block in which the numbers of pixels in the vertical and horizontal directions differ.
  • the long exposure pixel L and the short exposure pixel S are arranged according to the number of pixels in one block.
  • the arrangement of pixels with different exposure times shown in FIG. 2 is an example, and other arrangements may be used.
  • the long-time exposure pixels L are arranged at the upper left and lower right, but other arrangements such as the long-time exposure pixels L at the upper right and lower left may be used.
  • Although the arrangement of the pixels having different exposure times has been described as the same in the R block, the G block, and the B block, the arrangement may differ for each color.
  • For example, the pixels with different exposure times in the R block, the G block, and the B block may or may not be arranged such that the long-time exposure pixel L in the R block and the long-time exposure pixel L in the G block to its right are adjacent to each other; the arrangement may be the same or different between blocks. A sketch of the basic arrangement of FIG. 2 follows.
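To make the layout concrete, here is a minimal sketch, not taken from the patent itself, of the four-divided Bayer color pattern and the oblique long/short exposure assignment described above; all array and function names are illustrative.

```python
import numpy as np

# One 4x4 tile of the four-divided Bayer array: R block (top-left),
# G blocks (top-right and bottom-left), B block (bottom-right).
COLOR_TILE = np.array([
    ["R", "R", "G", "G"],
    ["R", "R", "G", "G"],
    ["G", "G", "B", "B"],
    ["G", "G", "B", "B"],
])

# Within every 2x2 block, the upper-left and lower-right pixels are
# long-time exposure ("L") and the upper-right and lower-left pixels
# are short-time exposure ("S"), i.e. equal exposures lie on a diagonal.
EXPOSURE_TILE = np.array([
    ["L", "S", "L", "S"],
    ["S", "L", "S", "L"],
    ["L", "S", "L", "S"],
    ["S", "L", "S", "L"],
])

def build_mosaic(height, width):
    """Tile the 4x4 pattern over a sensor whose sides are multiples of 4."""
    colors = np.tile(COLOR_TILE, (height // 4, width // 4))
    exposures = np.tile(EXPOSURE_TILE, (height // 4, width // 4))
    return colors, exposures

colors, exposures = build_mosaic(8, 8)
print(colors[:2, :4])     # [['R' 'R' 'G' 'G'] ['R' 'R' 'G' 'G']]
print(exposures[:2, :4])  # [['L' 'S' 'L' 'S'] ['S' 'L' 'S' 'L']]
```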
  • In this way, long-time exposure pixels and short-time exposure pixels are set within one photographed image, and an HDR image is generated by a synthesis process (blend) between these pixels.
  • This exposure time control is performed under the control of the control unit 105.
  • Fig. 3 shows an example of the timing of exposure time for each pixel.
  • the long exposure pixel L is subjected to a long exposure process.
  • the short exposure pixel S is subjected to a short exposure process.
  • the exposure start timings of the short-time exposure pixels S and the long-time exposure pixels L do not match, but the exposure times are controlled so that the exposure end timings match.
  • a process performed by the image processing unit 103 that processes a signal from the image sensor 102 in which the short-time exposure pixels S and the long-time exposure pixels L are arranged as described above will be described.
  • First, an outline of the processing performed by the image processing unit 103 is given; details are described later. Among the processing described later is a process for suppressing the occurrence of false color due to the aliasing component.
  • FIG. 4 shows a processing example when the exposure time is changed in an oblique direction in a four-divided Bayer RGB array.
  • FIG. 4 shows the following three data.
  • (4a) Imaging data is the imaging data of the imaging element, showing an image taken with the exposure time changed in the oblique direction of the four-divided Bayer array.
  • the white portion indicates the long-time exposure pixel L
  • the dark gray portion indicates the short-time exposure pixel S.
  • RL00 is the long-time exposure pixel L of the R pixel at the coordinate position (0, 0).
  • GL20 is the long-time exposure pixel L of the G pixel at the coordinate position (2, 0).
  • the coordinates are shown in a format such as GSxy, GLxy, etc. by applying coordinates (x, y) where x is the vertical downward direction and y is the horizontal right direction.
  • the long exposure pixels L and the short exposure pixels S are alternately set in an oblique direction.
  • (4a) Imaging data indicates a 4 × 6 pixel area.
  • (4b) Intermediate data indicates the intermediate data generated based on the (4a) imaging data.
  • In step S1 (STEP 1), 12 pieces of intermediate pixel data are calculated based on the 4 × 6 (4a) imaging data.
  • (4c) Output data indicates the output data generated based on the 12 pieces of (4b) intermediate pixel data. This output data is generated as a wide dynamic range image.
  • Step 1: The process of generating (4b) intermediate data from (4a) imaging data in step S1 is performed by the following diagonal addition of a plurality of pixel values.
  • The pixel value RLA00 and the pixel value RSA00 calculated from the R block shown in (4b) of FIG. 4 are obtained by applying the pixel values of a plurality of pixels included in the (4a) imaging data to the following diagonal addition equations:
  • RLA00 = (RL00 + RL11) / 2
  • RSA00 = (RS01 + RS10) / 2 … (1)
  • the average value of the RL pixels arranged in the diagonal direction in the R block is calculated, and the average value of the RS pixels is calculated, so that intermediate data is generated.
  • Alternatively, the sum of the pixel values may be used as it is, without averaging.
  • In the other blocks as well, intermediate data is generated by calculating the average of the pixel values of the long-time exposure pixels L and the average of the pixel values of the short-time exposure pixels S arranged in the oblique direction.
  • Writing DLA for the average pixel value of the long-time exposure pixels L in one block, DSA for the average pixel value of the short-time exposure pixels S in one block, DL and Dl for the pixel values of the two long-time exposure pixels L in the block, and DS and Ds for the pixel values of the two short-time exposure pixels S in the block, the diagonal addition can be written generally as:
  • DLA = (DL + Dl) / 2
  • DSA = (DS + Ds) / 2 … (2)
  • A code sketch of this diagonal addition follows.
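The diagonal addition of step S1 could be implemented as in the following sketch of equations (1) and (2); the vectorized block decomposition and the function name are implementation assumptions, not part of the patent text.

```python
import numpy as np

def diagonal_add(raw):
    """Per-block diagonal averaging of a same-color 2x2 block mosaic.

    raw: (H, W) array whose 2x2 blocks each hold one color, with
    long-exposure pixels at the upper-left/lower-right positions and
    short-exposure pixels at the upper-right/lower-left positions.
    Returns (DLA, DSA): the long and short averages per block, equation (2).
    """
    h, w = raw.shape
    # Reorder to (block_row, block_col, row_in_block, col_in_block).
    blocks = raw.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    dla = (blocks[..., 0, 0] + blocks[..., 1, 1]) / 2.0  # DLA = (DL + Dl) / 2
    dsa = (blocks[..., 0, 1] + blocks[..., 1, 0]) / 2.0  # DSA = (DS + Ds) / 2
    return dla, dsa

raw = np.arange(16, dtype=np.float64).reshape(4, 4)
dla, dsa = diagonal_add(raw)   # each result is 2x2: one value per block
print(dla)
print(dsa)
```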
  • Step 2: The process of generating (4c) output data from the (4b) intermediate data in step S2 is performed by the following blend processing of pixel values included in the intermediate data.
  • The pixel value R00 of the R00 pixel shown in (4c) of FIG. 4 is calculated according to the following equation (3), to which the pixel values of a plurality of pixels included in the (4b) intermediate data and the blend coefficient α are applied:
  • R00 = (1 − α) × RSA00 × Gain + α × RLA00 … (3)
  • Gain: the gain by which the pixel value of the short-time exposure pixel is multiplied (the exposure ratio between the long-time exposure pixel and the short-time exposure pixel)
  • α: the blend coefficient between the pixel value of the long-time exposure pixel and the pixel value of the short-time exposure pixel
  • Output data for the G pixels and B pixels is likewise generated in accordance with a calculation formula using a gain and a blend coefficient.
  • the R pixel, the G pixel, and the B pixel have different sensitivities, and therefore, for example, different values may be used for the R pixel, the G pixel, and the B pixel for the gain and the blend coefficient.
  • Expressing equation (3) in a form common to the R pixel, the G pixel, and the B pixel gives the following equation (4):
  • DH = (1 − α) × DS + α × DL … (4)
  • DH represents the pixel value of a given pixel in the HDR image.
  • DS corresponds to RSA00 × Gain in equation (3), and DL corresponds to RLA00 (see the sketch below).
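A minimal sketch of the blend of equations (3) and (4); the default gain of 16 matches the exposure ratio example used later in the text, and all names are illustrative.

```python
import numpy as np

def blend_hdr(dla, dsa, alpha, gain=16.0):
    """Equation (3): DH = (1 - alpha) * DSA * gain + alpha * DLA.

    In the notation of equation (4), DS already includes the gain.
    alpha = 1.0 uses only the long-exposure value; 0.0 only the short one.
    """
    return (1.0 - alpha) * dsa * gain + alpha * dla

dla = np.array([800.0, 1020.0])    # long-exposure block averages
dsa = np.array([50.0, 200.0])      # short-exposure block averages
alpha = np.array([1.0, 0.0])       # per-pixel blend coefficients
print(blend_hdr(dla, dsa, alpha))  # [800. 3200.]
```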
  • FIG. 5A shows an example of the blend processing configuration.
  • blur information is measured based on both the short-time exposure image and the long-time exposure image
  • the blend coefficient is calculated based on the measured blur information and both the short-time exposure image and the long-time exposure image.
  • “Blur” can be defined as a shift between the pixel values at corresponding pixel positions of the long-time exposure image and the short-time exposure image after correction based on the exposure ratio, and “blur information” can be defined as an index value indicating the degree of blur corresponding to the amount of this shift.
  • “Blur information” is acquired for each pixel from the captured image, and an HDR image is generated by executing the blend processing of the short-time exposure image and the long-time exposure image with a blend coefficient determined based on the acquired blur information.
  • The blend coefficient α that takes blur into consideration is calculated according to equation (5), which uses, for example, the following functions and parameters:
  • max(a, b): a function returning the maximum value of a and b
  • min(a, b): a function returning the minimum value of a and b
  • k1 and k0: parameters, described in detail later
  • M = (ΔL − ΔS)² … (6)
  • ΔL and ΔS have the following values:
  • ΔL: the ideal pixel value of the exposure-corrected long-time exposure image obtained when there is no influence of noise (corresponding to DL in equation (4))
  • ΔS: the ideal pixel value of the exposure-corrected short-time exposure image obtained when there is no influence of noise (corresponding to DS in equation (4))
  • The blend processing is performed by determining such a blend coefficient α.
  • The coefficient is set so that α approaches 0 where blur is large, preferentially outputting a pixel value based on the short-time exposure image, which has little blur; where blur is small, α takes a value equivalent to that of the conventional method, and a pixel value corresponding to the predetermined blend coefficient is generated.
  • As a result, an HDR image is obtained that has little blur in moving-subject areas and a good S/N ratio from dark areas to bright areas.
  • Moreover, the amount of calculation for the blend coefficient is small and can be processed at high speed, so the method can be applied, for example, to HDR image generation for moving images.
  • Equation (6) is an expression for calculating a value M indicating the magnitude of blur:
  • M = (ΔL − ΔS)² … (6)
  • ΔL: pixel value of the exposure-corrected long-time exposure image
  • ΔS: pixel value of the exposure-corrected short-time exposure image
  • With an exposure ratio of 16, for example, the exposure-corrected long-time exposure image ΔL is obtained by multiplying the pixel value of the long-time exposure pixel L by 1, and the exposure-corrected short-time exposure image ΔS is obtained by multiplying the pixel value of the short-time exposure pixel S by 16.
  • The exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS calculated in this way are matched in brightness, and as long as there is no influence of noise they have substantially the same signal value.
  • Blur detection uses this property: in equation (6), if the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS are the same, the value is 0, whereas when a difference arises between them, for example because a moving subject was captured, some non-zero value is calculated (see the sketch below).
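The blur measure of equation (6) under the 16:1 exposure ratio described above could be sketched as follows; the array contents are made-up illustrative values.

```python
import numpy as np

def blur_measure(long_img, short_img, exposure_ratio=16.0):
    """M = (dL - dS)^2, equation (6), after brightness matching."""
    dl = long_img * 1.0                # exposure-corrected long image
    ds = short_img * exposure_ratio    # exposure-corrected short image
    return (dl - ds) ** 2

long_img = np.array([160.0, 320.0])
short_img = np.array([10.0, 15.0])    # second pixel mismatched, e.g. motion
print(blur_measure(long_img, short_img))  # [0. 6400.]
```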
  • In the (4a) imaging data shown in FIG. 4, consider the BL22 pixel.
  • The BL22 pixel is adjacent to the GS21 pixel on its left, the RL11 pixel at its upper left, and the GS12 pixel above it.
  • The BL22 pixel is therefore affected by these GS21, RL11, and GS12 pixels; that is, the BL22 pixel is easily affected by adjacent pixels of different colors.
  • The BL33 pixel, in contrast, is adjacent to the BS32 pixel on its left, the BL22 pixel at its upper left, and the BS23 pixel above it.
  • The BL33 pixel is affected by these BS32, BL22, and BS23 pixels.
  • However, the adjacent pixels affecting the BL33 pixel have the same color, so the influence is small.
  • Although the BL22 pixel and the BL33 pixel are both long-time exposure pixels L in the same B block, they differ in whether their adjacent pixels are the same color or different colors, and a difference may therefore occur in their signal values; this is considered to be the influence of color mixture due to oblique light components.
  • Such an influence of color mixture due to oblique light components occurs not only in the long-time exposure pixels L of the B block used in the above example, but in other pixels as well.
  • When the influence of color mixture due to oblique light appears in a short-time exposure pixel S, the influence is amplified, because conversion to the exposure-corrected short-time exposure image ΔS multiplies the signal by the exposure ratio, for example a value of 16.
  • For this reason, in the present technology, long-time exposure pixels L and short-time exposure pixels S are set in one block, with the long-time exposure pixels L arranged along one diagonal and the short-time exposure pixels S arranged along the other.
  • With this arrangement, the influence of color mixture can be suppressed by performing the calculation shown in equation (2).
  • The HDR image is then generated by performing the calculation shown in equation (4) using the signals in which the influence of color mixture is suppressed.
  • The blend coefficient α in equation (4) is set as a coefficient that suppresses the occurrence of blur, as described above.
  • However, because the signal from the long-time exposure pixels L and the signal from the short-time exposure pixels S are each produced by addition along a different diagonal, their frequency characteristics in the oblique direction differ, and a difference therefore arises between the pixel value of the long-time exposure pixel L and the exposure-ratio-corrected pixel value of the short-time exposure pixel S even in regions containing oblique high frequencies.
  • An example will be described in which an image including a high-frequency signal is captured with the imaging apparatus 100 (FIG. 1) having the pixel arrangement shown in FIG. 2 (FIG. 4). In such a case, a difference image like that shown in FIG. 6 may be acquired.
  • For a still subject, the difference between the long-time exposure image and the short-time exposure image is 0, so the difference image should represent 0 everywhere, for example the black portions in FIG. 6.
  • However, the image shown in FIG. 6 contains white portions (hereinafter referred to as the aliasing signal), which indicates that there are places where a difference occurs between the long-time exposure image and the short-time exposure image.
  • The difference between the long-time exposure image and the short-time exposure image can be calculated according to equation (6) above, and, as described above, whether there is an influence (blur) due to a moving subject can be determined from the result: if there is a difference between the long-time exposure image and the short-time exposure image, the pixel (region) can be determined to be affected by a moving subject, and if there is no difference, it can be determined not to be affected.
  • A region determined by this processing to be influenced by a moving subject is treated as an area where blur is likely to occur.
  • However, if blur reduction preferentially uses the short-time exposure signal, the SN ratio may be lowered, because the signal of the short-time exposure pixel S is more likely to contain a noise component than the signal of the long-time exposure pixel L.
  • the processing for suppressing blur and the processing for suppressing the aliasing signal due to the high frequency signal are not the same processing but different processing.
  • High-frequency information in the oblique direction is therefore detected, and where the detected signal is strong, a difference between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS is judged to be caused by the high frequency rather than by a moving object; by reducing the weight of the long/short difference used for moving object detection there, erroneous moving-object detection is suppressed.
  • In addition, whichever of the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS has the smaller aliasing signal is selectively weighted more strongly in the synthesis.
  • In this way, the aliasing signal is reduced.
  • the Applicant has obtained an analysis result that the following four conditions are satisfied for a pixel (region) that is affected by a high-frequency component in an oblique direction.
  • the influence of the oblique high-frequency component is described as “the influence of the aliasing component”.
  • First condition: there is a difference between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS.
  • Second condition: there is a large difference in saturation between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS.
  • Third condition: the signal with the larger saturation has a green or magenta color.
  • Fourth condition: when the generated signal is subtracted from the signal in which no aliasing occurs, the difference appears with the same amplitude in opposite directions in G and R, or in G and B.
  • The first condition is that, as described above, a difference arises between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS even when a still subject is captured.
  • When this difference is imaged, an image such as that shown in FIG. 6 is obtained.
  • The second condition means that, when the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS are acquired separately and the same area is compared, an area affected by the aliasing component shows a large difference in saturation between ΔL and ΔS.
  • In an unaffected area, the hue and the saturation of the two images are the same; in an affected area, the hue differs greatly and a large difference in saturation arises.
  • This is because the aliasing signal is generated in only one of the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS; therefore, when the two images are compared, a difference arises between ΔL and ΔS in any area where the aliasing signal is present.
  • The third condition is that the signal with the larger saturation has a green or magenta color.
  • That is, an area affected by the aliasing signal appears in the image with a green or magenta color.
  • FIG. 7 shows pixels (signals) in a region affected by the aliasing signal.
  • A of FIG. 7 shows the pixel arrangement, which is basically the same as that shown in FIG. 2 but drawn tilted so that the pixels added along a diagonal line up horizontally.
  • B of FIG. 7 represents the result of the same-color diagonal addition.
  • C of FIG. 7 represents the addition result for the long-time exposure pixels L (the long-accumulation addition result).
  • D of FIG. 7 represents the addition result for the short-time exposure pixels S (the short-accumulation addition result).
  • E of FIG. 7 represents the difference between the long-accumulation addition result and the short-accumulation addition result.
  • F of FIG. 7 represents the result of correcting the difference shown in E of FIG. 7 for the sensitivity ratio.
  • As described above, the long-accumulation value for one block is calculated by adding the pixel values of the long-time exposure pixels L arranged along the diagonal within the block, and the short-accumulation value is likewise calculated by adding the pixel values of the short-time exposure pixels S.
  • C of FIG. 7 and D of FIG. 7 illustrate the G pixel and the R pixel.
  • C of FIG. 7 represents the diagonally added G pixel and R pixel of the long-time exposure pixels L, and D of FIG. 7 represents the diagonally added G pixel and R pixel of the short-time exposure pixels S.
  • E of FIG. 7 shows the difference (G − g) between the G pixel of the long-accumulation addition result in C of FIG. 7 and the G pixel of the short-accumulation addition result in D of FIG. 7, as well as the corresponding difference (R − r) between the R pixels.
  • Because the R pixel has lower sensitivity than the G pixel, its signal needs to be multiplied by a predetermined gain; such processing is performed as white balance adjustment in a general imaging apparatus.
  • F of FIG. 7 shows the signal intensity obtained by multiplying the R pixel difference (R − r) shown in E of FIG. 7 by a predetermined gain to match the sensitivities.
  • As can be seen, the sensitivity-adjusted R pixel difference (R − r) and the G pixel difference (G − g) have substantially the same signal intensity in opposite directions.
  • The R pixel has been described as an example, but the same applies to the B pixel: within a region affected by the aliasing signal, the sensitivity-adjusted B pixel difference (B − b) and the G pixel difference (G − g) have substantially the same signal intensity in opposite directions.
  • This is the fourth condition: when the generated signal is subtracted from the signal in which no aliasing occurs, the difference appears in G and R, or in G and B, with the same amplitude in opposite directions. Natural signals are unlikely to satisfy the fourth condition.
  • A pixel (region) that satisfies all of the first to fourth conditions is determined to be a pixel affected by the aliasing component (see the sketch below).
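As an illustration only, a first-to-fourth condition check for a single pixel might look like the sketch below. The thresholds, the HSV saturation measure, and the green/magenta test are assumptions, since the text does not specify them.

```python
import colorsys
import numpy as np

def saturation(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[1]   # HSV saturation in [0, 1]

def green_or_magenta(rgb):
    r, g, b = rgb
    return (g > r and g > b) or (g < r and g < b)

def aliasing_detected(dl_rgb, ds_rgb, diff_th=8.0, sat_th=0.2, amp_tol=0.3):
    """Return True when all four aliasing conditions hold for one pixel."""
    diff = np.asarray(dl_rgb, float) - np.asarray(ds_rgb, float)
    cond1 = np.abs(diff).max() > diff_th                  # long/short differ
    sat_l, sat_s = saturation(dl_rgb), saturation(ds_rgb)
    cond2 = abs(sat_l - sat_s) > sat_th                   # saturations differ
    high = dl_rgb if sat_l > sat_s else ds_rgb
    cond3 = green_or_magenta(high)                        # green/magenta cast
    # Condition 4: G difference opposes R (or B) with similar amplitude.
    dr, dg, db = diff
    def opposed(a, b):
        return a * b < 0 and abs(abs(a) - abs(b)) <= amp_tol * max(abs(a), abs(b))
    cond4 = opposed(dg, dr) or opposed(dg, db)
    return bool(cond1 and cond2 and cond3 and cond4)

print(aliasing_detected((100, 220, 100), (150, 160, 150)))  # True
```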
  • FIG. 8 is a diagram illustrating a configuration of the HDR image generation unit 200.
  • The HDR image generation unit 200 illustrated in FIG. 8 includes an RGB interpolation signal generation unit 211, an exposure correction unit 212, an SN-maximizing synthesis ratio calculation unit 213, an aliasing-reduction synthesis ratio calculation unit 214, a blur-reduction synthesis ratio calculation unit 215, a long-accumulation-saturation-considering synthesis ratio calculation unit 216, a long/short synthesis processing unit 217, an aliasing component detection unit 218, a moving object detection unit 219, and a noise reduction processing unit 220.
  • the HDR image generation unit 200 receives a signal from the long exposure pixel L and a signal from the short exposure pixel S from the image sensor 102.
  • the input signal is a signal after the diagonal addition of the same color and is a signal that has been subjected to the color mixture reduction process.
  • The RGB interpolation signal generation unit 211 interpolates the signals of the long-time-exposed R pixels, G pixels, and B pixels to all pixel positions, generating a long-time exposure image composed of R pixels, a long-time exposure image composed of G pixels, and a long-time exposure image composed of B pixels.
  • Similarly, the RGB interpolation signal generation unit 211 interpolates the signals of the short-time-exposed R pixels, G pixels, and B pixels to all pixel positions, generating a short-time exposure image composed of R pixels, a short-time exposure image composed of G pixels, and a short-time exposure image composed of B pixels. Each of these generated images is supplied to the exposure correction unit 212.
  • the exposure correction unit 212 performs correction to absorb the difference in sensitivity between the R pixel, the G pixel, and the B pixel. As described above, since the G pixel has higher sensitivity than the R pixel and the B pixel, exposure correction is performed by multiplying the R pixel signal and the B pixel signal by respective predetermined gains.
  • The exposure correction unit 212 outputs a signal for the long-time exposure pixels L (hereinafter, the long-time exposure signal) and a signal for the short-time exposure pixels S (hereinafter, the short-time exposure signal).
  • Specifically, a long-time exposure signal for the R pixels, a long-time exposure signal for the G pixels, a long-time exposure signal for the B pixels, a short-time exposure signal for the R pixels, a short-time exposure signal for the G pixels, and a short-time exposure signal for the B pixels are output.
  • The R, G, and B long-time exposure signals and the R, G, and B short-time exposure signals from the exposure correction unit 212 are each supplied to the SN-maximizing synthesis ratio calculation unit 213, the aliasing component detection unit 218, and the moving object detection unit 219. The long-time exposure signal from the exposure correction unit 212 is also supplied to the long-accumulation-saturation-considering synthesis ratio calculation unit 216 and the long/short synthesis processing unit 217, and the short-time exposure signal from the exposure correction unit 212 is also supplied to the noise reduction processing unit 220.
  • The SN-maximizing synthesis ratio calculation unit 213 calculates the synthesis ratio that maximizes the SN ratio, and supplies this SN-maximizing synthesis ratio to the aliasing-reduction synthesis ratio calculation unit 214.
  • The aliasing-reduction synthesis ratio calculation unit 214 corrects the SN-maximizing synthesis ratio based on the aliasing component information from the aliasing component detection unit 218.
  • The configuration and processing of the aliasing component detection unit 218 will be described later with reference to FIG. 9.
  • The aliasing component information is information obtained by determining whether the first to fourth conditions described above are satisfied, and indicates whether the pixel is influenced by the aliasing signal.
  • If the aliasing signal is generated on the long-time exposure signal side, it is not generated on the short-time exposure signal side; conversely, if the aliasing signal is generated on the short-time exposure signal side, it is not generated on the long-time exposure signal side.
  • Using this property, the aliasing-reduction synthesis ratio calculation unit 214 calculates the synthesis ratio as follows:
  • A = SN-maximizing synthesis ratio × (1.0 − short-accumulation aliasing component) + 1.0 × short-accumulation aliasing component
  • OUT = A × (1.0 − long-accumulation aliasing component) + 0.0 × long-accumulation aliasing component
  • The short-accumulation aliasing component and the long-accumulation aliasing component are values supplied from the aliasing component detection unit 218 as the aliasing component information.
  • In other words, the aliasing component is used to selectively bring the SN-maximizing synthesis ratio close to 1.0 (100% long-time exposure signal) or 0.0 (100% short-time exposure signal), as sketched below.
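Rewriting the two correction formulas above as code gives the following sketch, where a ratio of 1.0 means 100% long-time exposure signal and 0.0 means 100% short-time exposure signal; the function and parameter names are illustrative.

```python
def aliasing_reduced_ratio(sn_max_ratio, short_alias, long_alias):
    """Correct the SN-maximizing ratio with 0-to-1 aliasing strengths.

    Aliasing in the short signal pulls the ratio toward 1.0 (use long);
    aliasing in the long signal pulls it toward 0.0 (use short).
    """
    a = sn_max_ratio * (1.0 - short_alias) + 1.0 * short_alias
    out = a * (1.0 - long_alias) + 0.0 * long_alias
    return out

print(aliasing_reduced_ratio(0.5, short_alias=1.0, long_alias=0.0))  # 1.0
print(aliasing_reduced_ratio(0.5, short_alias=0.0, long_alias=1.0))  # 0.0
print(aliasing_reduced_ratio(0.5, short_alias=0.0, long_alias=0.0))  # 0.5
```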
  • The aliasing-reduction synthesis ratio calculated in this way is supplied to the blur-reduction synthesis ratio calculation unit 215.
  • The blur-reduction synthesis ratio calculation unit 215 calculates a synthesis ratio for suppressing blur, as described above in <About occurrence of blur>; specifically, the blend coefficient α is calculated as described above.
  • Using the moving object detection information supplied from the moving object detection unit 219, the blur-reduction synthesis ratio calculation unit 215 selectively brings the aliasing-reduction synthesis ratio supplied from the aliasing-reduction synthesis ratio calculation unit 214 close to 1.0 (100% long-time exposure signal) or 0.0 (100% short-time exposure signal).
  • The blur-reduction synthesis ratio calculation unit 215 is supplied from the moving object detection unit 219 with moving object detection information indicating whether a moving object is detected, that is, whether a pixel is likely to produce blur.
  • The moving object detection unit 219 is in turn supplied with the aliasing component information from the aliasing component detection unit 218.
  • For a pixel in which an aliasing component is generated, the moving object detection unit 219 treats what would otherwise be detected as a moving object as being an aliasing component rather than a moving object.
  • In that case, information indicating that no moving object is detected is supplied to the blur-reduction synthesis ratio calculation unit 215, which can therefore avoid executing the blur-reduction processing for pixels in which the aliasing component is generated.
  • The aliasing component information output from the aliasing component detection unit 218 may be, for example, binary (0 or 1) information indicating whether an aliasing component is generated, or information with a value from 0 to 1 representing the likelihood that an aliasing component is generated.
  • Similarly, the moving object detection information output from the moving object detection unit 219 may be binary (0 or 1) information indicating whether blur has occurred due to the influence of a moving object, or information with a value from 0 to 1 representing the likelihood that it has.
  • the processing up to this point suppresses color mixing, suppresses false colors that may occur due to the effects of aliasing components, and suppresses blurs that may occur due to effects of moving objects.
  • The blur-reduction synthesis ratio from the blur-reduction synthesis ratio calculation unit 215 is supplied to the long-accumulation-saturation-considering synthesis ratio calculation unit 216.
  • The long-accumulation-saturation-considering synthesis ratio calculation unit 216 refers to the long-time exposure signal supplied from the exposure correction unit 212 and determines whether the long-time exposure pixel L is saturated.
  • The pixel value (signal) of a saturated pixel is not used; for saturated pixels, the supplied blur-reduction synthesis ratio is converted to a ratio that uses the pixel value (signal) of the short-time exposure pixel S.
  • That is, for saturated pixels the long-accumulation-saturation-considering synthesis ratio calculation unit 216 outputs a ratio of 0.0 (100% short-time exposure signal) to the subsequent long/short synthesis processing unit 217 as the total synthesis ratio, and for pixels that are not saturated it outputs the input blur-reduction synthesis ratio unchanged as the total synthesis ratio.
  • The total synthesis ratio is thus a ratio that maximizes the SN ratio in flat areas and reduces the weight of the affected signal in areas where the aliasing component occurs.
  • the long / short synthesis processing unit 217 is supplied with the long time exposure signal from the exposure correction unit 212 and the short time exposure signal via the noise reduction processing unit 220.
  • the short exposure signal is supplied to the long / short combination processing unit 217 after noise is reduced by the noise reduction processing unit 220.
  • The long/short synthesis processing unit 217 synthesizes the supplied long-time exposure signal and short-time exposure signal based on the total synthesis ratio from the long-accumulation-saturation-considering synthesis ratio calculation unit 216 (a sketch of this final step follows below).
  • the signal synthesized in this way is output as an HDR image signal.
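The end of the pipeline could be sketched as follows, assuming a saturation threshold and treating the noise-reduced short-time exposure signal as already exposure-corrected; both assumptions are illustrative, not taken from the patent.

```python
import numpy as np

def synthesize(long_sig, short_sig_nr, blur_reduced_ratio, sat_level=1023.0):
    """Blend long/short signals with the total synthesis ratio.

    Wherever the long-exposure pixel is saturated, the ratio is forced
    to 0.0 (100% short-time exposure signal); elsewhere the supplied
    blur-reduced ratio is used unchanged.
    """
    total_ratio = np.where(long_sig >= sat_level, 0.0, blur_reduced_ratio)
    return total_ratio * long_sig + (1.0 - total_ratio) * short_sig_nr

long_sig = np.array([400.0, 1023.0])        # second pixel saturated
short_sig_nr = np.array([410.0, 1600.0])    # noise-reduced, exposure-corrected
print(synthesize(long_sig, short_sig_nr, blur_reduced_ratio=0.8))
# [402. 1600.]
```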
  • The configuration of the aliasing component detection unit 218 of the HDR image generation unit 200 is shown in FIG. 9.
  • the aliasing component detection unit 218 detects the aliasing component by determining whether or not the first to fourth conditions are satisfied.
  • The aliasing component detection unit 218 includes a strong saturation generation region detection unit 251, a local color ratio calculation unit 252, a per-color long/short difference calculation unit 253, a per-color long/short difference normalization unit 254, and a normalized amplitude intensity similarity calculation unit 255.
  • The aliasing component detection unit 218 is supplied with the long-time exposure signal and the short-time exposure signal from the exposure correction unit 212 (FIG. 8). Since the long-time exposure signal and the short-time exposure signal each include an R signal, a G signal, and a B signal, six color signals are supplied to the aliasing component detection unit 218.
  • The strong saturation generation region detection unit 251 mainly determines whether the region (pixel) satisfies the second condition and the third condition.
  • To do so, the strong saturation generation region detection unit 251 performs conversion to a color space, detection of a saturation difference, and detection of a specific color.
  • The strong saturation generation region detection unit 251 converts the supplied long-time exposure signal and short-time exposure signal each to a color space, for example a Lab color space, and obtains the saturation and color difference to determine whether the color is a specific color.
  • It determines whether the second condition, namely that there is a large difference in saturation between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS, is satisfied.
  • It also determines whether the third condition, namely that the signal with the larger saturation has a green or magenta color, is satisfied.
  • This determination result (determination result E) is supplied to the local color ratio calculation unit 252 and the normalized amplitude intensity similarity calculation unit 255.
  • The per-color long/short difference calculation unit 253 calculates, for each of the R signal, the G signal, and the B signal, the difference between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS.
  • To normalize, the per-color long/short difference calculation unit 253 divides the calculated difference by the signal with the lower saturation.
  • The signal with the lower saturation is whichever of the long-time exposure signal and the short-time exposure signal has the smaller saturation; as described above, the signal with the lower saturation can be determined to be the side on which no aliasing component appears. That is, the long-time exposure signal or the short-time exposure signal free of the aliasing component is selected here, and the long/short difference is normalized by it.
  • In this way, the per-color long/short difference calculation unit 253 obtains, for each of the R, G, and B signals, the difference between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS, divides it by the lower-saturation R, G, or B signal to generate a normalized long/short difference signal, and outputs this to the subsequent per-color long/short difference normalization unit 254.
  • If the long/short difference signal has a value, the first condition, namely that a difference occurs between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS, is satisfied.
  • If the long/short difference signal does not have a value, it can be determined that the first condition is not satisfied; in that case the long/short difference signal is treated as 0 in the subsequent processing, and as a result the aliasing component information becomes a determination that no aliasing component is detected.
  • Note that the difference between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS is affected by noise, so a difference may be calculated even when there is essentially none.
  • A predetermined threshold may therefore be provided, and the first condition may be considered satisfied only when the difference between the exposure-corrected long-time exposure image ΔL and the exposure-corrected short-time exposure image ΔS is equal to or larger than the threshold.
  • The local color ratio calculation unit 252 obtains a local color ratio using the signal with the lower saturation.
  • The local color ratio calculation unit 252 refers to the determination result E from the strong saturation generation region detection unit 251 to decide which signal has the lower saturation and, based on that decision, selects the signal of the supplied exposure-corrected long-time exposure image ΔL or the signal of the exposure-corrected short-time exposure image ΔS, and obtains the local color ratio.
  • The obtained local color ratios for R, G, and B are supplied to the per-color long/short difference normalization unit 254.
  • The per-color long/short difference normalization unit 254 multiplies each of the R, G, and B difference signals from the per-color long/short difference calculation unit 253 by the corresponding R, G, or B local color ratio from the local color ratio calculation unit 252.
  • This aligns the RGB color levels, generating a normalized R-component long/short difference signal, a normalized G-component long/short difference signal, and a normalized B-component long/short difference signal.
  • The per-color long/short difference normalization unit 254 unifies the R and B values by selecting whichever of the normalized R-component and B-component long/short difference signals has the larger value, and supplies the selected R-component or B-component long/short difference signal to the normalized amplitude intensity similarity calculation unit 255.
  • The per-color long/short difference normalization unit 254 also supplies the normalized G-component long/short difference signal to the normalized amplitude intensity similarity calculation unit 255.
  • The normalized amplitude intensity similarity calculation unit 255 evaluates the amplitude similarity and the sign directions of the supplied normalized G-component long/short difference signal and the normalized R (or B) component long/short difference signal, and determines whether the two signals have the same amplitude in opposite directions; this is the determination described with reference to FIG. 7, namely whether the fourth condition is satisfied.
  • In other words, the normalized amplitude intensity similarity calculation unit 255 determines whether the fourth condition is satisfied.
  • The normalized amplitude intensity similarity calculation unit 255 is also supplied with the determination result E from the strong saturation generation region detection unit 251.
  • This determination result E is information that includes whether the second condition and the third condition are satisfied.
  • When the normalized amplitude intensity similarity calculation unit 255 determines that the fourth condition is satisfied and the supplied determination result E indicates that the second and third conditions are also satisfied, it makes the final determination that the region (pixel) being processed is a region where an aliasing component is generated, and supplies this determination result as the aliasing component information to the aliasing-reduction synthesis ratio calculation unit 214 and the moving object detection unit 219 (FIG. 8). A sketch of the normalization and similarity test follows.
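A sketch of the normalization and the fourth-condition similarity test performed inside the detector; the tolerance value and the choice of the larger R/B component are illustrative assumptions.

```python
import numpy as np

def normalized_diffs(dl_rgb, ds_rgb, long_has_lower_sat):
    """Divide the per-color long/short difference by the lower-saturation
    (aliasing-free) signal, as the per-color difference units do."""
    ref = np.asarray(dl_rgb if long_has_lower_sat else ds_rgb, float)
    diff = np.asarray(dl_rgb, float) - np.asarray(ds_rgb, float)
    return diff / np.maximum(ref, 1e-6)

def same_amplitude_opposed(norm_g, norm_rb, tol=0.3):
    """Fourth condition: similar amplitude, opposite sign."""
    return bool(norm_g * norm_rb < 0
                and abs(abs(norm_g) - abs(norm_rb)) <= tol * abs(norm_g))

nd = normalized_diffs((100, 220, 100), (150, 160, 150), long_has_lower_sat=False)
rb = nd[[0, 2]][np.argmax(np.abs(nd[[0, 2]]))]  # larger of the R/B components
print(same_amplitude_opposed(nd[1], rb))        # True
```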
  • As described above, the aliasing component information output from the aliasing component detection unit 218 may be, for example, binary (0 or 1) information indicating whether an aliasing component is generated, or information with a value from 0 to 1 representing the probability that one is generated.
  • When it is determined at some stage that a condition is not satisfied, the processing for the region (pixel) being processed may be terminated.
  • In that case, information indicating that there is no aliasing component may be output as the aliasing component information.
  • The processing in the local color ratio calculation unit 252, the per-color long/short difference calculation unit 253, and the per-color long/short difference normalization unit 254 then need not be performed, and the normalized amplitude intensity similarity calculation unit 255 may simply output information indicating that there is no aliasing component.
  • In this way, the aliasing component detection unit 218 can detect a region where an aliasing component is generated.
  • Because regions where the aliasing component is generated can be detected, it is possible, in regions where a difference arises between the long-time exposure signal and the exposure-ratio-corrected short-time exposure signal, to weaken the difference signal wherever the detection strength of the aliasing component is high; such processing prevents erroneous detection of moving objects.
  • A combining ratio that maximizes the S/N ratio is obtained as the starting point; in a region where aliasing occurs (in the four-divided Bayer array, a right-diagonal or left-diagonal high-frequency pattern), a combining ratio that reduces its strength is adopted, and in a region with moving-object blur, a combining ratio that reduces the blur is adopted, so that an optimum combining ratio can be calculated. A sketch of this selection follows.
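A minimal sketch of this selection, assuming the S/N-maximizing ratio, the aliasing information, and a normalized blur measure are available per pixel. The linear pull toward a "safe" ratio (defaulting to 0, which favors the short-exposure image under the DH = (1 − α)·DS + α·DL convention used in the description) is an illustrative assumption.

```python
def final_combining_ratio(alpha_sn, aliasing_info, blur_info, alpha_safe=0.0):
    """Start from the S/N-maximizing ratio alpha_sn and, where either
    the aliasing detector or the blur measure responds (both assumed
    normalized to 0..1), pull the ratio toward alpha_safe, the ratio
    favoring the image judged free of the artifact."""
    w = max(float(aliasing_info), float(blur_info))  # artifact strength
    return (1.0 - w) * float(alpha_sn) + w * alpha_safe
```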
  • As described above, by using an imaging element with a four-divided Bayer array, in which adjacent 2×2 pixels share the same color filter and the 2×2 blocks are themselves arranged in a Bayer pattern, effects such as alleviation of color mixing, reduction of moving-object blur, suppression of aliasing signals caused by high-frequency content, S/N optimization, and handling of saturated long-time exposure signals can be obtained.
  • FIG. 10 is a diagram illustrating usage examples of the above-described imaging device and of electronic apparatuses including the imaging device.
  • The imaging device described above can be used, as follows, in various cases of sensing light such as visible light, infrared light, ultraviolet light, and X-rays.
  • Devices that take images for viewing, such as digital cameras and mobile devices with camera functions
  • Devices used for traffic, such as in-vehicle sensors that image the rear, surroundings, and interior of a vehicle, surveillance cameras that monitor traveling vehicles and roads, and ranging sensors that measure the distance between vehicles
  • Devices used for home appliances such as TVs, refrigerators, and air conditioners, which image a user's gesture and operate the appliance according to the gesture
  • Devices used for medical and health care, such as endoscopes and devices that perform angiography by receiving infrared light
  • Devices used for security, such as surveillance cameras for crime prevention and cameras for personal authentication
  • Devices used for beauty care, such as skin measuring instruments that image the skin and microscopes that image the scalp
  • Devices used for sports, such as action cameras and wearable cameras for sports applications
  • Devices used for agriculture, such as cameras for monitoring the condition of fields and crops
  • The series of processes described above can be executed by hardware or can be executed by software.
  • When the series of processes is executed by software, a program constituting the software is installed in a computer.
  • Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions when various programs are installed.
  • FIG. 11 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
  • In the computer, a CPU (Central Processing Unit) 301, a ROM (Read Only Memory) 302, and a RAM (Random Access Memory) 303 are connected to one another via a bus 304.
  • An input / output interface 305 is further connected to the bus 304.
  • An input unit 306, an output unit 307, a storage unit 308, a communication unit 309, and a drive 310 are connected to the input / output interface 305.
  • The input unit 306 includes a keyboard, a mouse, a microphone, and the like.
  • The output unit 307 includes a display, a speaker, and the like.
  • The storage unit 308 includes a hard disk, a nonvolatile memory, and the like.
  • The communication unit 309 includes a network interface and the like.
  • The drive 310 drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 301 loads the program stored in the storage unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executes it, whereby the above-described series of processing is performed.
  • The program executed by the computer (CPU 301) can be provided by being recorded on a removable medium 311 such as a package medium, for example.
  • The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • In the computer, the program can be installed in the storage unit 308 via the input/output interface 305 by attaching the removable medium 311 to the drive 310. The program can also be received by the communication unit 309 via a wired or wireless transmission medium and installed in the storage unit 308. Alternatively, the program can be installed in the ROM 302 or the storage unit 308 in advance.
  • The program executed by the computer may be a program that is processed in time series in the order described in this specification, or a program that is processed in parallel or at necessary timing, such as when a call is made.
  • In this specification, a system represents an entire apparatus composed of a plurality of apparatuses.
  • The present technology can also be configured as follows.
(1) An imaging device including a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction, the processing unit including:
a generation unit that generates a long-time exposure image by adding signals from the long-time exposure pixels in the block and generates a short-time exposure image by adding signals from the short-time exposure pixels;
a combining unit that combines the long-time exposure image and the short-time exposure image generated by the generation unit at a predetermined combining ratio;
a moving object detection unit that detects a moving object from a difference between the long-time exposure image and the short-time exposure image; and
an aliasing component detection unit that detects an aliasing component from the long-time exposure image and the short-time exposure image,
wherein the combining ratio is set from the detection result of the moving object in the moving object detection unit and the detection result of the aliasing component in the aliasing component detection unit.
(2) The imaging device according to (1), wherein the aliasing component detection unit detects the aliasing component by determining whether a difference between the long-time exposure image and the short-time exposure image and the saturation of each of the long-time exposure image and the short-time exposure image satisfy predetermined conditions.
(3) The imaging device according to (1) or (2), wherein the aliasing component detection unit detects the aliasing component by determining whether the following first to fourth conditions are satisfied.
First condition: there is a difference between the long-time exposure image and the short-time exposure image.
Second condition: there is a difference in saturation between the long-time exposure image and the short-time exposure image.
Third condition: the signal with the larger saturation has a green or magenta color.
Fourth condition: when the generated signal is subtracted from the signal having no aliasing component, the difference appears with the same amplitude in opposite directions between the G pixel and the R pixel, or between the G pixel and the B pixel.
(4) The imaging device according to any one of (1) to (3), wherein the short-time exposure image is an exposure-corrected image.
(5) The imaging device according to any one of (1) to (4), wherein the long-time exposure image and the short-time exposure image are transferred to a predetermined color space to obtain the saturation.
(6) The imaging device according to any one of (1) to (5), wherein, for a pixel in which the aliasing component is detected by the aliasing component detection unit, the combining ratio is a ratio that makes greater use of whichever of the long-time exposure image and the short-time exposure image is determined to have no aliasing component.
(7) The imaging device according to any one of (1) to (6), wherein, for a pixel in which a moving object is detected by the moving object detection unit but an aliasing component is detected by the aliasing component detection unit, the combining ratio is a ratio that makes greater use of whichever of the long-time exposure image and the short-time exposure image is determined to have no aliasing component.
(8) An imaging method of an imaging device including a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction, the method including steps in which the processing unit:
generates a long-time exposure image by adding signals from the long-time exposure pixels in the block and generates a short-time exposure image by adding signals from the short-time exposure pixels;
combines the generated long-time exposure image and short-time exposure image at a predetermined combining ratio;
detects a moving object from a difference between the long-time exposure image and the short-time exposure image; and
detects an aliasing component from the long-time exposure image and the short-time exposure image,
wherein the combining ratio is set from the detection result of the moving object and the detection result of the aliasing component.
(9) A program for causing a computer of an imaging device, which includes a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction, to execute processing including steps of:
generating a long-time exposure image by adding signals from the long-time exposure pixels in the block and generating a short-time exposure image by adding signals from the short-time exposure pixels;
combining the generated long-time exposure image and short-time exposure image at a predetermined combining ratio;
detecting a moving object from a difference between the long-time exposure image and the short-time exposure image; and
detecting an aliasing component from the long-time exposure image and the short-time exposure image,
wherein the combining ratio is set from the detection result of the moving object and the detection result of the aliasing component.
  • 100 imaging apparatus, 101 optical lens, 102 imaging element, 103 image processing unit, 104 signal processing unit, 105 control unit, 200 HDR image generation unit, 211 RGB interpolation signal generation unit, 212 exposure correction unit, 213 S/N maximization combining ratio calculation unit, 214 aliasing reduction combining ratio calculation unit, 215 blur reduction combining ratio calculation unit, 216 long-exposure saturation consideration combining ratio calculation unit, 217 long/short combining processing unit, 218 aliasing component detection unit, 219 moving object detection unit, 220 noise reduction processing unit, 251 strong saturation generation region detection unit, 252 local color ratio calculation unit, 253 each-color long/short difference calculation unit, 254 each-color long/short difference normalization unit, 255 normalized amplitude intensity similarity calculation unit


Abstract

The present technology pertains to an image-capturing device, an image-capturing method, and a program configured to make it possible to improve image quality. The device is provided with a processor such that, when a 2×2 array of pixels having the same spectral sensitivity is defined as one block, two pixels within the block are long-exposure pixels, two pixels are short-exposure pixels, pixels having the same exposure time are disposed diagonally, and signals from pixels disposed on the image-capturing surface are processed in block units. The processor: adds signals from the long-exposure pixels in the block to generate a long-exposure image; adds signals from the short-exposure pixels to generate a short-exposure image; synthesizes the resulting long-exposure image and short-exposure image at a prescribed synthesis ratio; detects a moving body based on the difference between the long-exposure image and the short-exposure image; and detects an aliasing (folding) component based on the long-exposure image and the short-exposure image. The synthesis ratio is set based on the result of detecting the moving body and the result of detecting the aliasing component.

Description

Imaging apparatus, imaging method, and program
The present technology relates to an imaging apparatus, an imaging method, and a program, and more particularly to an imaging apparatus, an imaging method, and a program capable of imaging with an expanded dynamic range.
In recent years, CCD (Charge Coupled Device) image sensors and amplification-type image sensors, known as solid-state imaging devices suitable for applications such as video cameras and digital still cameras, have seen increasing pixel counts at high sensitivity and finer pixel sizes as image sizes shrink. Meanwhile, solid-state imaging devices such as CCD image sensors and CMOS (Complementary Metal Oxide Semiconductor) image sensors tend to be used in diverse environments, indoors and outdoors, in daytime and at night, and therefore often require an electronic shutter operation or the like that adjusts the exposure time by controlling the charge accumulation period of the photoelectric conversion element in response to changes in ambient light, so as to keep the sensitivity at an optimum value.
In CMOS image sensors, known methods of expanding the dynamic range include adjusting the exposure time by operating the electronic shutter at high speed, capturing and superimposing a plurality of frames at high speed, and giving the photoelectric conversion characteristic of the light receiving unit a logarithmic response.
When processing a single frame of image data that mixes different exposure times, analog gains, or sensitivities, it has been proposed to expand the dynamic range by combining, within a local area of a predetermined size, pixels having one exposure time or sensitivity with pixels having the other exposure time or sensitivity (see, for example, Patent Document 1). To expand the dynamic range, a gain determined from the set exposure time ratio or a sensitivity ratio calculated in advance is applied to the pixel signal with the lower signal amount, and this value is combined with the pixel signal with the higher signal amount at a fixed ratio.
Usually, an HDR image can be obtained by combining, with the above method, a plurality of differently exposed images obtained through multiple exposures and shutter operations. With this approach, however, the image may break down in regions containing a moving object.
Patent Document 2 proposes, building on the method of Patent Document 1, arranging pixels with different exposure times or sensitivities two-dimensionally and periodically, aligning the readout timing between the differently exposed pixels, and selecting, at each pixel position, the combining ratio best suited to that region, thereby suppressing blur of moving objects, among other effects.
Patent Document 1: JP 2013-66145 A
Patent Document 2: JP 2013-66142 A
With the technology described above, an image with an extended dynamic range can be obtained, contributing to higher image quality.
However, when an image containing high-frequency signals is captured, false colors due to aliasing components may occur, and it is desirable to suppress the occurrence of such false colors.
The present technology has been made in view of such circumstances, and makes it possible to suppress the occurrence of false colors and the like while performing imaging with a wide dynamic range.
An imaging device according to one aspect of the present technology includes a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction. The processing unit includes: a generation unit that generates a long-time exposure image by adding signals from the long-time exposure pixels in the block and generates a short-time exposure image by adding signals from the short-time exposure pixels; a combining unit that combines the long-time exposure image and the short-time exposure image generated by the generation unit at a predetermined combining ratio; a moving object detection unit that detects a moving object from the difference between the long-time exposure image and the short-time exposure image; and an aliasing component detection unit that detects an aliasing component from the long-time exposure image and the short-time exposure image. The combining ratio is set from the detection result of the moving object in the moving object detection unit and the detection result of the aliasing component in the aliasing component detection unit.
The aliasing component detection unit may detect the aliasing component by determining whether the difference between the long-time exposure image and the short-time exposure image and the saturation of each of the two images satisfy predetermined conditions.
The aliasing component detection unit may detect the aliasing component by determining whether the following first to fourth conditions are satisfied.
First condition: there is a difference between the long-time exposure image and the short-time exposure image.
Second condition: there is a difference in saturation between the long-time exposure image and the short-time exposure image.
Third condition: the signal with the larger saturation has a green or magenta color.
Fourth condition: when the generated signal is subtracted from the signal having no aliasing component, the difference appears with the same amplitude in opposite directions between the G pixel and the R pixel, or between the G pixel and the B pixel.
The short-time exposure image may be an exposure-corrected image.
The long-time exposure image and the short-time exposure image may be transferred to a predetermined color space to obtain the saturation.
For a pixel in which the aliasing component detection unit has detected an aliasing component, the combining ratio may be a ratio that makes greater use of whichever of the long-time exposure image and the short-time exposure image is determined to have no aliasing component.
For a pixel in which the moving object detection unit has detected a moving object but the aliasing component detection unit has detected an aliasing component, the combining ratio may be a ratio that makes greater use of whichever of the long-time exposure image and the short-time exposure image is determined to have no aliasing component.
An imaging method according to one aspect of the present technology is an imaging method of an imaging device including a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction. In the method, the processing unit generates a long-time exposure image by adding signals from the long-time exposure pixels in the block, generates a short-time exposure image by adding signals from the short-time exposure pixels, combines the generated long-time exposure image and short-time exposure image at a predetermined combining ratio, detects a moving object from the difference between the long-time exposure image and the short-time exposure image, and detects an aliasing component from the long-time exposure image and the short-time exposure image. The combining ratio is set from the detection result of the moving object and the detection result of the aliasing component.
A program according to one aspect of the present technology causes a computer of an imaging device, which includes a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, and pixels having the same exposure time are arranged in an oblique direction, to execute processing including steps of: generating a long-time exposure image by adding signals from the long-time exposure pixels in the block and generating a short-time exposure image by adding signals from the short-time exposure pixels; combining the generated long-time exposure image and short-time exposure image at a predetermined combining ratio; detecting a moving object from the difference between the long-time exposure image and the short-time exposure image; and detecting an aliasing component from the long-time exposure image and the short-time exposure image. The combining ratio is set from the detection result of the moving object and the detection result of the aliasing component.
In the imaging device, imaging method, and program according to one aspect of the present technology, when 2×2 pixels having the same spectral sensitivity form one block, two of the 2×2 pixels in the block are long-time exposure pixels, two pixels are short-time exposure pixels, pixels having the same exposure time are arranged in an oblique direction, and signals from the pixels arranged on the imaging surface are processed in units of blocks. In this processing, a long-time exposure image is generated by adding signals from the long-time exposure pixels in the block, a short-time exposure image is generated by adding signals from the short-time exposure pixels, the generated long-time exposure image and short-time exposure image are combined at a predetermined combining ratio, a moving object is detected from the difference between the long-time exposure image and the short-time exposure image, and an aliasing component is detected from the long-time exposure image and the short-time exposure image. The combining ratio is set from the detection result of the moving object in the moving object detection unit and the detection result of the aliasing component in the aliasing component detection unit.
According to one aspect of the present technology, the occurrence of false colors and the like can be suppressed, and imaging with a wide dynamic range can be performed.
Note that the effects described here are not necessarily limited, and any of the effects described in the present disclosure may be obtained.
FIG. 1 is a diagram illustrating the configuration of an embodiment of an imaging apparatus to which the present technology is applied.
FIG. 2 is a diagram for describing the pixel arrangement.
FIG. 3 is a diagram for describing control of the exposure time.
FIG. 4 is a diagram for describing generation of an HDR image.
FIG. 5 is a diagram for describing suppression of blur.
FIG. 6 is an image showing an example of an aliasing signal.
FIG. 7 is a diagram for describing the fourth condition.
FIG. 8 is a diagram for describing the configuration of the HDR image generation unit.
FIG. 9 is a diagram for describing the configuration of the aliasing component detection unit.
FIG. 10 is a diagram for describing usage examples of the imaging apparatus.
FIG. 11 is a diagram for describing a recording medium.
Hereinafter, modes for carrying out the present technology (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. Configuration of the imaging apparatus
2. Pixel arrangement
3. Processing of the image processing unit
4. Occurrence of blur
5. Occurrence of false color due to aliasing components
6. Configuration of the HDR image generation unit
7. Configuration of the aliasing component detection unit
8. Usage examples of the imaging apparatus
9. Recording medium
<Configuration of the imaging apparatus>
FIG. 1 is a diagram illustrating the configuration of an embodiment of an imaging apparatus to which the present technology is applied. In the imaging apparatus 100 illustrated in FIG. 1, light incident through the optical lens 101 enters the imaging element 102, which is constituted by an imaging unit such as a CMOS image sensor, and image data obtained by photoelectric conversion is output. The output image data is input to the image processing unit 103.
The output image of the imaging element 102 is a so-called mosaic image in which one of the RGB pixel values is set for each pixel. The image processing unit 103 performs demosaic processing that sets all RGB pixel values for each pixel, generation of a wide dynamic range (HDR: High Dynamic Range) image based on combining processing of a long-time exposure image and a short-time exposure image described later, blur correction processing, and the like.
The output of the image processing unit 103 is input to the signal processing unit 104. The signal processing unit 104 executes signal processing of a general camera, such as white balance (WB) adjustment and gamma correction, and generates an output image 120. The output image 120 is stored in a storage unit (not shown) or output to a display unit (not shown).
The control unit 105 outputs control signals to each unit in accordance with a program stored in a memory (not shown), for example, and controls various kinds of processing.
<Pixel arrangement>
The arrangement of pixels in an imaging element to which the present technology is applied will be described below with reference to FIG. 2. In FIG. 2, each rectangle schematically represents a pixel, and each rectangle contains a symbol indicating the type of color filter (the color light each pixel outputs). For example, R (red) pixels are labeled "R", G (green) pixels are labeled "G", and B (blue) pixels are labeled "B". The same notation is used in the following description.
In the pixel arrangement shown in FIG. 2, the arrangement of R pixels, G pixels, and B pixels repeats in units of 4×4 pixels (vertical × horizontal). In the example shown in FIG. 2, the 2×2 four pixels at the upper left are all R pixels; these four R pixels form an R block. The 2×2 four pixels to the right of this four-pixel R block are all G pixels (a G block).
The 2×2 four pixels below the R block are also all G pixels (a G block), and the 2×2 four pixels diagonally to the lower right of the R block are all B pixels (a B block). In this way, each set of 2×2 four pixels has a single color, and the four-pixel R block, G block, G block, and B block are arranged within the 4×4 pixel region.
Thus, in the pixel array shown in FIG. 2, R pixels, G pixels, G pixels, and B pixels are arranged in 4×4 units each containing four pixels of each color. Hereinafter, such a pixel arrangement is referred to as a four-divided Bayer RGB array.
Although the description here continues with an example in which RGB pixels are arranged, a configuration including W (white) pixels is also possible, and the present technology can also be applied to a combination of cyan, magenta, and yellow instead of RGB.
When W pixels are included, the W pixel functions as a pixel with panchromatic spectral sensitivity, and the R, G, and B pixels function as pixels with spectral sensitivities characteristic of their respective colors. The present technology can also be applied to an imaging element (image sensor) in which pixels of four types of spectral sensitivity, including a panchromatic spectral sensitivity, are arranged on the imaging surface.
In the pixel arrangement to which the present technology is applied, four blocks are contained in one 4×4 unit, two of which are G blocks. One of these two G blocks may be replaced with a W block in which W pixels are arranged.
In an image sensor arranged as shown in FIG. 2, the four pixels in one block have the same color, but two exposure times are set: each of the four pixels in a block is set as either a long-time exposure pixel L or a short-time exposure pixel S. The relationship between the exposure times is as follows.
Long-time exposure L > short-time exposure S
Referring again to FIG. 2, the pixel arrangement will be described with attention to exposure time. Consider the four pixels of the R block located at the upper left. The R pixels located at the upper left and lower right within the R block are long-time exposure pixels L. Hereinafter, an R pixel set as a long-time exposure pixel L is written as an RL pixel; similarly, G and B pixels set as long-time exposure pixels L are written as GL and BL pixels, respectively.
The R pixels located at the upper right and lower left within the R block are short-time exposure pixels S. Hereinafter, an R pixel set as a short-time exposure pixel S is written as an RS pixel; similarly, G and B pixels set as short-time exposure pixels S are written as GS and BS pixels, respectively.
This arrangement is the same for blocks of the other colors. For example, the G pixels located at the upper left and lower right within a G block are GL pixels set as long-time exposure pixels L, and the G pixels located at the upper right and lower left are GS pixels set as short-time exposure pixels S.
Similarly, the B pixels located at the upper left and lower right within a B block are BL pixels set as long-time exposure pixels L, and the B pixels located at the upper right and lower left are BS pixels set as short-time exposure pixels S.
In this way, in the pixel arrangement to which the present technology is applied, 2×2 four pixels of the same color are arranged as one block, and the four same-color pixels in each block are set as pixels that capture with long-time exposure and pixels that capture with short-time exposure.
Although the description here continues with 2×2 pixels as one block, this is not intended to limit the number of pixels in a block to four; any plural number falls within the scope of the present technology. For example, 3×3 pixels may form one block, so that nine pixels are contained in a block.
Here, one block is described as M×M (vertical × horizontal), with the same number of pixels in the vertical and horizontal directions, but the present technology can also be applied when the block is M×N, with different numbers of pixels in the vertical and horizontal directions. The long-time exposure pixels L and short-time exposure pixels S are arranged according to the number of pixels in one block.
The arrangement of pixels with different exposure times shown in FIG. 2 is one example, and other arrangements may be used. For example, in FIG. 2 the long-time exposure pixels L are placed at the upper left and lower right, but they may instead be placed at the upper right and lower left, among other arrangements.
Although the arrangement of pixels with different exposure times has been described as identical for the R, G, and B blocks, it may differ for each color. For example, the long-time exposure pixel L of an R block and the long-time exposure pixel L of the G block adjacent to its right may be placed next to each other; the arrangements of differently exposed pixels within the R, G, and B blocks may be the same or different. A sketch of the default layout follows.
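As a rough illustration of the default layout of FIG. 2 (long-exposure pixels on the upper-left/lower-right diagonal of each 2×2 block), the following sketch maps a pixel coordinate to its color and exposure. The function name is an assumption; the coordinate convention (x vertical downward, y horizontal rightward) follows the one used later for FIG. 4.

```python
def pixel_attributes(x, y):
    """Color and exposure of pixel (x, y) in the four-divided Bayer array:
    2x2 same-color blocks arranged in a Bayer pattern, with the
    long-exposure pixels on the block diagonal (top-left, bottom-right)."""
    bx, by = (x // 2) % 2, (y // 2) % 2   # block position within the 4x4 unit
    color = {(0, 0): "R", (0, 1): "G", (1, 0): "G", (1, 1): "B"}[(bx, by)]
    long_exposure = (x % 2) == (y % 2)    # diagonal of the 2x2 block
    return color, "L" if long_exposure else "S"
```

For example, pixel_attributes(2, 0) returns ("G", "L"), matching the GL20 pixel of FIG. 4.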
As described above, in the imaging apparatus 100 to which the present technology is applied, long-time exposure pixels and short-time exposure pixels are set in pixel units within one captured image, and an HDR image is generated by combining (blending) these pixels. This exposure time control is performed under the control of the control unit 105.
FIG. 3 shows an example of the exposure timing of each pixel. The long-time exposure pixels L undergo long exposure processing, and the short-time exposure pixels S undergo short exposure processing. As shown in FIG. 3, the exposure start timings of the short-time exposure pixels S and the long-time exposure pixels L do not coincide, but the exposure times are controlled so that the exposure end timings coincide.
<Processing of the image processing unit>
Processing performed by the image processing unit 103, which processes signals from the imaging element 102 in which the short-time exposure pixels S and long-time exposure pixels L are arranged as described above, will now be described. Here, an outline of the processing performed by the image processing unit 103 is given; the details, including the processing that suppresses the occurrence of false colors due to aliasing components, are described later.
FIG. 4 shows a processing example in which the exposure time is varied in the oblique direction in the four-divided Bayer RGB array. FIG. 4 shows the following three kinds of data.
(4a) Imaging data
(4b) Intermediate data
(4c) Output data
(4a) The imaging data is the data captured by the imaging element, showing an image taken when the exposure times are assigned within the array as described above. In FIG. 4, the white portions indicate long-time exposure pixels L and the dark gray portions indicate short-time exposure pixels S. For example, RL00 is the long-time exposure R pixel at coordinate position (0, 0), and GL20 is the long-time exposure G pixel at coordinate position (2, 0). Coordinates (x, y) are given with x in the vertical downward direction and y in the horizontal rightward direction, in the form GSxy, GLxy, and so on.
As described with reference to FIG. 2, in the present embodiment the long-time exposure pixels L and the short-time exposure pixels S are set alternately in the oblique direction.
(4a) The imaging data represents a 4×6 pixel region.
(4b) The intermediate data is generated based on the 4×6 (4a) imaging data.
In this case, in step S1 (STEP 1), 12 intermediate pixel data values are calculated based on the 4×6 (4a) imaging data.
(4c) The output data is generated based on the 12 (4b) intermediate pixel data values. This output data is the output generated as a wide dynamic range image.
(Step 1)
The generation of the (4b) intermediate data from the (4a) imaging data in step S1 is performed by diagonal addition of a plurality of pixel values as follows. For example, the pixel value RLA00 and the pixel value RSA00 calculated from the R block shown in FIG. 4 (4b) are computed by diagonal addition applying the pixel values of a plurality of pixels contained in the (4a) imaging data, according to the following equations:
RLA00 = (RL00 + RL11) / 2
RSA00 = (RS01 + RS10) / 2   (1)
In this way, the average of the RL pixels arranged diagonally within the R block and the average of the RS pixels are calculated to generate the intermediate data. Instead of taking the average, the sum of the pixel values may be used as-is.
Similarly, for the G blocks and the B block, intermediate data is generated by calculating the average pixel value of the diagonally arranged long-time exposure pixels L and the average pixel value of the short-time exposure pixels S.
Expressing equation (1) in a form common to the R, G, and B pixels gives the following equation (2):
DLA = (DL + Dl) / 2
DSA = (DS + Ds) / 2   (2)
In equation (2), DLA is the average pixel value of the long-time exposure pixels L in one block, and DSA is the average pixel value of the short-time exposure pixels S in one block. DL is the pixel value of one of the two long-time exposure pixels L in the block and Dl is that of the other; likewise, DS is the pixel value of one of the two short-time exposure pixels S and Ds is that of the other.
In this way, adding together pixels of the same exposure time arranged diagonally within one block suppresses color mixture components. A sketch of this step follows.
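A minimal sketch of this diagonal averaging, assuming one 2×2 same-color block is given as a small array with the long-exposure pixels on the main diagonal (the FIG. 2 layout); the function name is illustrative.

```python
import numpy as np

def diagonal_average(block):
    """Step S1 for one 2x2 same-color block (equations (1)/(2)).
    Returns (DLA, DSA)."""
    block = np.asarray(block, dtype=float)
    dla = (block[0, 0] + block[1, 1]) / 2.0   # DLA = (DL + Dl) / 2
    dsa = (block[0, 1] + block[1, 0]) / 2.0   # DSA = (DS + Ds) / 2
    return dla, dsa
```

For example, diagonal_average([[RL00, RS01], [RS10, RL11]]) reproduces equation (1) for the R block.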
(Step 2)
The generation of the (4c) output data from the (4b) intermediate data in step S2 is performed by blending the pixel values contained in the (4b) intermediate data as follows. For example, the pixel value R00 of the R00 pixel shown in FIG. 4 (4c) is calculated according to the following equation (3), applying the pixel values of a plurality of pixels contained in the (4b) intermediate data and a blend coefficient α:
R00 = (1 − α) × RSA00 × Gain + α × RLA00   (3)
where
Gain: the gain by which the pixel value of the short-time exposure pixel is multiplied (the exposure ratio between the long-time exposure pixel and the short-time exposure pixel)
α: the blend coefficient between the pixel value of the long-time exposure pixel and the pixel value of the short-time exposure pixel.
Output data is generated for the G and B pixels in the same way, according to a calculation formula using a gain and a blend coefficient. Since the R, G, and B pixels have different sensitivities, different values of the gain and the blend coefficient may be used for the R, G, and B pixels.
Expressing equation (3) in a form common to the R, G, and B pixels gives the following equation (4):
DH = (1 − α) × DS + α × DL   (4)
In equation (4), DH is the pixel value of a given pixel in the HDR image. DS corresponds to RSA00 × Gain in equation (3), and DL corresponds to DLA in equation (2). It is also possible to multiply DLA by a predetermined gain, so that DL = DLA × Gain.
In this way, an HDR image is generated. A sketch of this blending step follows.
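A minimal sketch of the blend of equations (3)/(4), under the convention that α weights the long-exposure value; the default exposure ratio of 16 is only an example value used later in the text, and the function name is illustrative.

```python
def hdr_pixel(dsa, dla, alpha, gain=16.0):
    """Step S2: exposure-correct the short-exposure average and blend
    it with the long-exposure average.
    gain is the long/short exposure ratio; alpha is in [0, 1]."""
    ds = dsa * gain                           # exposure correction
    return (1.0 - alpha) * ds + alpha * dla   # DH = (1 - a) * DS + a * DL
```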
<Occurrence of blur>
In the wide dynamic range (HDR) image generation processing described above, blur may occur when a moving subject is photographed.
In general, when the relative positional relationship between the subject and the imaging apparatus changes during the exposure time of the image sensor, the subject is imaged across a plurality of pixels, and the fineness of the resulting image is lost. In particular, a long-time exposure image loses fineness more easily than a short-time exposure image because its exposure time is longer. Therefore, when a moving subject or camera shake occurs, even if the exposures of the short-time exposure image and the long-time exposure image are corrected to equalize their brightness, a deviation arises between the pixel values at corresponding pixel positions of the two images.
This phenomenon, in which a change in the relative positional relationship between the subject and the imaging apparatus during the exposure time causes the pixel values at corresponding pixel positions of the brightness-equalized long-time and short-time exposure images to deviate, is defined as "blur".
To suppress the occurrence of such blur, the following processing is performed. An example of the process for determining the blend coefficient α between the long-time exposure image and the short-time exposure image will be described with reference to FIG. 5. FIG. 5A shows an example of the blend processing configuration.
In the present embodiment, an example is described in which blur information is measured based on both the short-time exposure image and the long-time exposure image, a blend coefficient is determined based on the measured blur information and both images, and blend processing of the short-time exposure image and the long-time exposure image is executed by applying the determined blend coefficient.
"Blur" can be defined as the deviation of the pixel values at corresponding pixel positions between the exposure-ratio-corrected long-time exposure image and the short-time exposure image, and "blur information" can be taken as an index value indicating the degree of blur, corresponding to the amount of this pixel-value deviation.
"Blur information" is acquired for each pixel from the captured image, a blend coefficient determined based on the acquired "blur information" is applied, blend processing of the short-time exposure image and the long-time exposure image is executed, and an HDR image is generated.
In the configuration of the present disclosure, the blend coefficient α taking blur into account is calculated, for example, according to the following equation (5).
[Equation (5) is shown as an image in the original document; it gives the blend coefficient α as a function of the blur information M, using the max/min clipping functions and the parameters k1 and k0 defined below.]
In equation (5):
M: blur information (an index value indicating the degree of blur)
max(a, b): a function that returns the maximum of a and b
min(a, b): a function that returns the minimum of a and b
k1 and k0 are parameters; details are described later.
Ideally, the value M indicating the magnitude of the blur of the long-time exposure image is calculated by the following equation:
M = (μL − μS)²   (6)
where μL and μS are the following values:
μL: the ideal pixel value of the exposure-corrected long-time exposure image obtained when there is no influence of noise (corresponding to DL in equation (4))
μS: the ideal pixel value of the exposure-corrected short-time exposure image obtained when there is no influence of noise (corresponding to DS in equation (4))
In equation (6), however, μL and μS cannot actually be obtained directly because of the influence of noise. Therefore, M is obtained approximately using a group of pixels around the pixel of interest, for example, using the following equation (7).
[Equation (7) is shown as an image in the original document; it approximates the blur information M(x, y) from the exposure-corrected long- and short-time exposure images using the quantities defined below.]
In equation (7):
M(x, y): blur information at pixel position (x, y)
DL(x, y): pixel value of the exposure-corrected long-time exposure image at pixel position (x, y)
DS(x, y): pixel value of the exposure-corrected short-time exposure image at pixel position (x, y)
VL(x, y): noise variance of the exposure-corrected long-time exposure image at pixel position (x, y)
VS(x, y): noise variance of the exposure-corrected short-time exposure image at pixel position (x, y)
φ(dx, dy): weight coefficient of a low-pass filter
min(a, b): a function that computes the minimum of a and b
max(a, b): a function that computes the maximum of a and b
p: an adjustment parameter, a constant of zero or more.
When M(x, y) becomes negative as a result of the calculation of equation (7), the absolute difference (the long/short difference) is set to 0. Alternatively, the calculation of the value M indicating the magnitude of the blur of the long-time exposure image may be simplified and performed according to the following equation (8).
[Equation (8) is shown as an image in the original document; it is a simplified form of the blur measure M.]
The blend processing is performed by determining the blend coefficient α in this way. With this processing, in a region where the blur is large, α approaches 0, so pixel values based on the short-time exposure image, which has little blur, are output preferentially; in a region where the blur is small, α takes a value equivalent to the conventional method, and a pixel value corresponding to the predetermined blend coefficient is generated.
Such processing is realized, and as a result, an HDR image is obtained in which moving subject portions have little blur and stationary portions have a good S/N ratio from dark areas to bright areas. The computational cost of the above blend coefficient is not large and it can be processed at high speed, so it can also be applied, for example, to HDR image generation for moving images. A sketch of this blur-aware blending is given below.
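Since equations (5) and (7) appear only as images in the original, the following is an assumed sketch consistent with the symbol definitions above: a low-pass-filtered, noise-compensated squared long/short difference for M, and a clipped mapping from M to α using the parameters k0 and k1. The exact functional forms are assumptions, not the patent's formulas.

```python
import numpy as np
from scipy.ndimage import convolve

def blur_info(dl, ds, vl, vs, lpf, p=1.0):
    """Assumed form of equation (7): a low-pass-filtered squared
    long/short difference with a noise-variance term (scaled by p)
    subtracted, clipped at zero as the text specifies for negative
    results. dl/ds are exposure-corrected images, vl/vs per-pixel
    noise variances, lpf the weights phi(dx, dy)."""
    m = convolve((dl - ds) ** 2 - p * (vl + vs), lpf, mode="nearest")
    return np.maximum(m, 0.0)

def blur_aware_alpha(alpha0, m, k0=0.0, k1=1.0):
    """Assumed clipping form of equation (5): alpha shrinks toward 0
    (favoring the short-exposure image) as the blur measure M grows;
    k0 and k1 are the parameters named in the text."""
    return np.clip(alpha0 - (k1 * m + k0), 0.0, 1.0)
```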
<Occurrence of false color due to aliasing components>
With the processing described above, an HDR image in which the occurrence of blur is suppressed can be generated. However, when an image containing high-frequency components is captured, false colors due to aliasing components may occur. This false color caused by aliasing components is now explained.
Here, refer again to equation (6), which calculates the value M indicating the magnitude of the blur of the long-time exposure image:
M = (μL − μS)²   (6)
μL: pixel value of the exposure-corrected long-time exposure image
μS: pixel value of the exposure-corrected short-time exposure image
For example, when the exposure ratio is 16, the exposure-corrected long-time exposure value μL is calculated by multiplying the pixel value of the long-time exposure pixel L by 1, and the exposure-corrected short-time exposure value μS is calculated by multiplying the pixel value of the short-time exposure pixel S by 16. The values μL and μS calculated in this way are brightness-matched.
The exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS whose brightness has been matched in this way have substantially the same signal values as long as noise has no influence. Blur detection, as described above, exploits this property. That is, in equation (6), the result is 0 if the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS are equal, but when a moving subject or the like is captured and a difference arises between μL and μS, some nonzero value is calculated.
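The exposure correction and the blur measure of equation (6) can be sketched as follows, assuming the exposure ratio of 16 used in the example above; this is an illustrative sketch, not the patent's implementation.

import numpy as np

def exposure_correct(long_raw, short_raw, ratio=16.0):
    # Match brightness: the long image is multiplied by 1, the short by the ratio.
    mu_L = long_raw * 1.0
    mu_S = short_raw * ratio
    return mu_L, mu_S

def blur_measure(mu_L, mu_S):
    # Equation (6): M = (muL - muS)^2; zero for a static, noise-free scene.
    return (mu_L - mu_S) ** 2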
Incidentally, with the color arrangement of the color filter shown in FIG. 2, a difference may arise between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS due to the influence of adjacent pixels. Reference is now made to FIG. 4 again. In the imaging data (4a) shown in FIG. 4, attention is paid to the BL22 pixel. The BL22 pixel is adjacent to the GS21 pixel on its left, the RL11 pixel on its upper left, and the GS12 pixel above it. The BL22 pixel is therefore affected by these GS21, RL11, and GS12 pixels. That is, the BL22 pixel is easily affected by adjacent pixels of different colors.
Next, attention is paid to the BL33 pixel. The BL33 pixel is adjacent to the BS32 pixel on its left, the BL22 pixel on its upper left, and the BS23 pixel above it. The BL33 pixel is affected by these BS32, BL22, and BS23 pixels. However, since the adjacent pixels that affect the BL33 pixel are of the same color, the influence is small.
In this way, although the BL22 pixel and the BL33 pixel are both long-exposure pixels L in the same B block, they differ in whether their adjacent pixels are of the same color or of different colors, and because of that difference, a difference may arise in their signal values that exceeds the influence of noise components. This is considered to be the effect of color mixing caused by oblique light components.
Such color mixing due to oblique light components occurs not only in the long-exposure pixels L in the B block taken as the example above. When the influence of color mixing due to oblique light components appears in a short-exposure pixel S, the conversion into the exposure-corrected short-exposure image μS described above multiplies the signal by the exposure ratio, for example 16, so the influence is amplified.
When such oblique light components have an influence, the relationship that "the brightness-matched exposure-corrected long-exposure image and exposure-corrected short-exposure image have substantially the same signal values" no longer holds.
However, as described with reference to equations (1) and (2), color-mixing components can be suppressed by adding together pixels of the same exposure time arranged in the diagonal direction within one block. This is considered to be because the oblique light component, which is the main component of the color mixing, can be canceled out by adding pixels of the same exposure time arranged along the diagonal.
For example, as shown in FIG. 2 and FIG. 4, long-exposure pixels L and short-exposure pixels S are set within one block, with the long-exposure pixels L arranged along one diagonal and the short-exposure pixels S arranged along the other. By assigning the same exposure to same-color pixels along each diagonal and adding them together in this way, a signal in which the color-mixing component is suppressed can be obtained.
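A minimal sketch of this same-color diagonal addition follows, assuming a layout consistent with FIG. 2 in which each 2×2 same-color block holds the long-exposure pixels L on its top-left/bottom-right diagonal and the short-exposure pixels S on the other diagonal; the specific diagonal assignment is an assumption for illustration.

import numpy as np

def diagonal_add(raw):
    # raw: (H, W) mosaic with H and W multiples of 2; returns per-block sums
    # of the two diagonals, i.e. one long value and one short value per block.
    tl = raw[0::2, 0::2]; br = raw[1::2, 1::2]  # assumed long-exposure diagonal
    tr = raw[0::2, 1::2]; bl = raw[1::2, 0::2]  # assumed short-exposure diagonal
    long_sum = tl + br   # oblique light leaking into one L pixel is offset
    short_sum = tr + bl  # by the opposite leak into its diagonal partner
    return long_sum, short_sum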
By performing the calculation shown in equation (2) in this way, the influence of color mixing can be suppressed. Furthermore, an HDR image is generated by performing the calculation shown in equation (4) using the signals in which the influence of color mixing has been suppressed. The blend coefficient α in equation (4) is set, as described above, as a coefficient that suppresses the occurrence of blur.
An HDR image in which the influence of color mixing and blur is suppressed can therefore be obtained. However, such processing is effective when the difference between the pixel value of the long-exposure pixel L and the exposure-ratio-corrected pixel value of the short-exposure pixel S arises only from noise components or moving subjects; it may not be able to suppress the influence of the aliasing components described below.
As described above, when pixels of the same exposure time arranged in the diagonal direction are added together to reduce the influence of oblique light components, a difference in diagonal frequency characteristics arises between the signal from the long-exposure pixels L and the signal from the short-exposure pixels S. As a result, even in regions containing diagonal high frequencies, a difference arises between the pixel value of the long-exposure pixel L and the exposure-ratio-corrected pixel value of the short-exposure pixel S.
A case where an image containing high-frequency signals is captured with the imaging apparatus 100 (FIG. 1) having the pixel arrangement shown in FIG. 2 (FIG. 4) is described as an example. Although not illustrated, when, for example, a still image in which narrowly spaced concentric circles are drawn is captured and the difference between the long-exposure image and the short-exposure image is acquired as an image, an image like the one shown in FIG. 6 may be obtained.
When a still image is captured and the difference between the long-exposure image and the short-exposure image is represented as an image, that difference should be 0, so the image should be a single color representing 0, for example solid black in FIG. 6. However, the image shown in FIG. 6 contains white portions (hereinafter referred to as aliasing signals), indicating that there are portions where a difference arises between the long-exposure image and the short-exposure image.
The difference between the long-exposure image and the short-exposure image can be calculated by performing the computation of equation (6) above. As described above, the result of equation (6) makes it possible to determine whether there is an influence (blur) from a moving subject. That is, if there is a difference between the long-exposure image and the short-exposure image, the pixel (region) can be judged to be affected by a moving subject; if there is no difference, the pixel (region) can be judged to be unaffected by a moving subject.
However, as shown in FIG. 6, when a still image is captured, in other words even when an image without any moving subject is captured, a difference between the long-exposure image and the short-exposure image may appear. The aliasing signals in FIG. 6 are considered to arise because the diagonal addition used to reduce color-mixing components gives the long-exposure pixel L signal and the short-exposure pixel S signal different diagonal frequency characteristics, so that in regions containing diagonal high frequencies a difference arises between the long-exposure pixel L signal and the exposure-ratio-corrected short-exposure pixel S signal.
Thus, if there is a region where a difference between the long-exposure image and the short-exposure image appears when a still image is captured, that region is treated, under the processing described above, as a region affected by a moving subject (a region where blur is likely to occur).
The processing for suppressing the occurrence of blur was described above; stated briefly, in regions where blur may occur, using the signal of the long-exposure pixel L increases the likelihood of blur, so the signal of the short-exposure pixel S is used instead.
However, the signal of the short-exposure pixel S is more likely to contain noise components than the signal of the long-exposure pixel L, so using the short-exposure pixel S signal may lower the S/N ratio.
For this reason, when a still image is captured and a difference between the long-exposure image and the short-exposure image appears, executing the blur-suppression processing may lower the S/N ratio.
Therefore, the processing for suppressing blur and the processing for suppressing aliasing signals caused by high-frequency signals are made different processes rather than the same process. By using different processes, blur can be suppressed appropriately and aliasing signals caused by high-frequency signals can also be suppressed.
In order to perform the suppression of blur and the suppression of the influence of diagonal high-frequency components as separate processes, it is necessary to identify whether an observed difference is due to a moving object or due to diagonal high-frequency components, and to branch the processing accordingly.
Here, diagonal high-frequency information is detected, and at locations where this detection signal is high, it is judged that even if a difference appears between the exposure-corrected long-exposure image μL and the exposure-ratio-corrected short-exposure image μS, it is due to high frequencies and not caused by a moving object. By weakening the strength of the long/short difference used to detect moving-object regions at such locations, erroneous detection of moving-object regions is suppressed.
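One way this weakening could look, as a hedged sketch: attenuate the long/short difference by a diagonal high-frequency detection signal hf in [0, 1]; the linear attenuation law is an assumption for illustration, not taken from the patent.

import numpy as np

def motion_signal(mu_L, mu_S, hf):
    # Long/short difference, suppressed where high-frequency detection is strong,
    # so aliasing is not mistaken for moving-object motion.
    diff = np.abs(mu_L - mu_S)
    return diff * (1.0 - np.clip(hf, 0.0, 1.0))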
In addition, at locations where high-frequency information is detected, the synthesis ratio is controlled so that, of the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS, the signal in which the aliasing signal is reduced is selectively and strongly used in the synthesis, so that the aliasing signal is reduced.
The applicant has obtained the analysis result that the following four conditions are satisfied for pixels (regions) affected by diagonal high-frequency components. Hereinafter, "the influence of diagonal high-frequency components" is referred to as "the influence of aliasing components".
First condition: a difference arises between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS.
Second condition: a large difference in saturation arises between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS.
Third condition: the strongly saturated signal has a green or magenta color.
Fourth condition: when the affected signal is subtracted from the signal in which no aliasing occurs, the differences for G and R, or for G and B, appear in opposite directions with the same amplitude.
The first condition means that, in a situation where aliasing components occur, a difference exists between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS even when a still image is captured as described above, and that imaging this difference yields, for example, an image like the one shown in FIG. 6. Thus, for an image affected by aliasing components, the condition that a difference exists between μL and μS holds.
The second condition means that when the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS are acquired separately and the same region is compared, a region affected by aliasing components shows a large difference in saturation between μL and μS.
In regions unaffected by aliasing components, when the same region of the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS is compared, the hue is the same and the saturation is comparable. In contrast, in regions affected by aliasing components, when the same region of μL and μS is compared, the hue differs greatly and a large difference in saturation arises.
Note that in a region affected by aliasing components, when the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS are acquired separately and the same region is compared, the aliasing signal appears in only one of the two images. Therefore, when μL and μS are compared and an aliasing signal is present in one of them, a difference between μL and μS arises in the region containing that aliasing signal.
Next, the third condition is described. The third condition is that the strongly saturated signal has a green or magenta color. When the influence of aliasing components is present and the image is acquired without suppressing it, that influence appears as an image tinted green or magenta.
Even if there is a region (pixel) that satisfies the first and second conditions, if the color of that region is neither green nor magenta, it can be judged that the region is not affected by aliasing components.
The fourth condition is described with reference to FIG. 7. In FIG. 7, the capital letters R, G, and B represent the R, G, and B pixels of the long-exposure pixels L, and the lowercase letters r, g, and b represent the R, G, and B pixels of the short-exposure pixels S. FIG. 7 shows pixels (signals) within a region affected by the aliasing signal.
Part A of FIG. 7 shows the pixel arrangement; it is basically the same as the arrangement shown in FIG. 2, but is drawn tilted so that the diagonally added pixels line up horizontally. Part B of FIG. 7 shows the result of the same-color diagonal addition. Part C shows the addition result for the long-exposure pixels L (the long-accumulation addition result). Part D shows the addition result for the short-exposure pixels S (the short-accumulation addition result). Part E shows the result of computing the difference between the long-accumulation addition result and the short-accumulation addition result. Part F shows the result of correcting the difference shown in part E in consideration of the sensitivity ratio.
As shown in part B of FIG. 7, the pixel value of the long-exposure pixels L in one block is calculated by adding together the pixel values of the long-exposure pixels L arranged along the diagonal within the block, and the pixel value of the short-exposure pixels S in the block is calculated by adding together the pixel values of the short-exposure pixels S.
Parts C and D of FIG. 7 illustrate the G and R pixels. Part C represents the diagonally added G and R pixels of the long-exposure pixels L, and part D represents the diagonally added G and R pixels of the short-exposure pixels S.
Part E of FIG. 7 shows the difference (G − g) between the G pixel of the long-accumulation addition result in part C and the G pixel of the short-accumulation addition result in part D, and likewise the difference (R − r) between the R pixel of the long-accumulation addition result and the R pixel of the short-accumulation addition result.
The G pixel and the R pixel have different sensitivities, the G pixel being more sensitive than the R pixel, so to match the signal level of the R pixel to that of the G pixel, the R pixel signal must be multiplied by a predetermined gain. Such processing is performed as white balance adjustment in a typical imaging apparatus.
The signal strength obtained by multiplying the R pixel difference (R − r) shown in part E of FIG. 7 by a predetermined gain to match the sensitivities is shown in part F of FIG. 7. As shown in part F, the adjusted R pixel difference (R − r) and the G pixel difference (G − g) have substantially the same signal strength, in opposite directions.
The R pixel was taken as the example here, but the same applies to the B pixel. That is, within a region affected by the aliasing signal, the sensitivity-adjusted B pixel difference (B − b) and the G pixel difference (G − g) have substantially the same signal strength, in opposite directions.
Since this is what happens within regions affected by aliasing components, the fourth condition is obtained: "when the affected signal is subtracted from the signal in which no aliasing occurs, the differences for G and R, or for G and B, appear in opposite directions with the same amplitude". Signals in the natural world tend to satisfy this fourth condition.
A pixel (region) that satisfies all of these first to fourth conditions can be judged to be a pixel affected by aliasing components.
Thus, by determining whether the first to fourth conditions are satisfied, it is possible to determine whether a pixel is affected by aliasing components. By making this determination and also using its result, blur due to moving subjects can be suppressed and the generation of false color due to aliasing components can be suppressed.
<Configuration of the HDR image generation unit>
The HDR image generation unit, which generates an HDR image by processing the signals from the imaging element 102 (FIG. 1) in which the long-exposure pixels L and the short-exposure pixels S are arranged as shown in FIG. 2, is described next. FIG. 8 is a diagram showing the configuration of the HDR image generation unit 200.
The HDR image generation unit 200 shown in FIG. 8 includes an RGB interpolation signal generation unit 211, an exposure correction unit 212, an SN maximization composition ratio calculation unit 213, an aliasing reduction composition ratio calculation unit 214, a blur reduction composition ratio calculation unit 215, a long-accumulation saturation-aware composition ratio calculation unit 216, a long/short synthesis processing unit 217, an aliasing component detection unit 218, a moving object detection unit 219, and a noise reduction processing unit 220.
The HDR image generation unit 200 receives the signals of the long-exposure pixels L and the short-exposure pixels S from the imaging element 102. The input signals are signals after the same-color diagonal addition, that is, signals that have already undergone the color-mixing reduction processing. The RGB interpolation signal generation unit 211 interpolates the long-exposure R, G, and B signals at every pixel position to generate a long-exposure image consisting of R pixels, a long-exposure image consisting of G pixels, and a long-exposure image consisting of B pixels.
Likewise, the RGB interpolation signal generation unit 211 interpolates the short-exposure R, G, and B signals at every pixel position to generate a short-exposure image consisting of R pixels, a short-exposure image consisting of G pixels, and a short-exposure image consisting of B pixels. Each of these generated images is supplied to the exposure correction unit 212.
The exposure correction unit 212 performs correction to absorb the differences in sensitivity among the R, G, and B pixels. As described above, the G pixels are more sensitive than the R and B pixels, so exposure correction is performed by multiplying the R pixel signal and the B pixel signal by respective predetermined gains. The exposure correction unit 212 generates a signal of the long-exposure pixels L (hereinafter referred to as the long-exposure signal) and a signal of the short-exposure pixels S (hereinafter referred to as the short-exposure signal). The exposure correction unit 212 outputs the long-exposure signals of the R, G, and B pixels and the short-exposure signals of the R, G, and B pixels.
The R, G, and B long-exposure signals and the R, G, and B short-exposure signals from the exposure correction unit 212 are supplied to the SN maximization composition ratio calculation unit 213, the aliasing component detection unit 218, and the moving object detection unit 219. The long-exposure signals from the exposure correction unit 212 are also supplied to the long-accumulation saturation-aware composition ratio calculation unit 216 and the long/short synthesis processing unit 217. The short-exposure signals from the exposure correction unit 212 are also supplied to the noise reduction processing unit 220.
The SN maximization composition ratio calculation unit 213 calculates the composition ratio that maximizes the S/N ratio and supplies this SN-maximizing composition ratio to the aliasing reduction composition ratio calculation unit 214.
The aliasing reduction composition ratio calculation unit 214 corrects the SN-maximizing composition ratio based on the aliasing component information from the aliasing component detection unit 218. The configuration and processing of the aliasing component detection unit 218 are described later with reference to FIG. 9. The aliasing component information is information obtained by determining whether the first to fourth conditions described above are satisfied, and indicates whether a pixel is affected by the aliasing signal.
As described above, if an aliasing signal occurs on the long-exposure signal side, no aliasing signal occurs on the short-exposure signal side, and if an aliasing signal occurs on the short-exposure signal side, no aliasing signal occurs on the long-exposure signal side.
Accordingly, the aliasing reduction composition ratio calculation unit 214 calculates a composition ratio that lowers the proportion at which the signal on the side where the aliasing signal occurs is used, so that the signal on the side where no aliasing signal occurs is used preferentially.
Specifically, for example, the following calculation is performed in the aliasing reduction composition ratio calculation unit 214.
A = {SN-maximizing composition ratio × (1.0 − short-accumulation aliasing component)} + (1.0 × short-accumulation aliasing component)
OUT = {A × (1.0 − long-accumulation aliasing component)} + (0.0 × long-accumulation aliasing component)
In these equations, the short-accumulation aliasing component and the long-accumulation aliasing component are the information supplied from the aliasing component detection unit 218 as the aliasing component information. In this way, a calculation is performed that selectively pulls the SN-maximizing composition ratio toward 1.0 (use 100% of the long-exposure signal) or 0.0 (use 100% of the short-exposure signal) using the aliasing components.
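These two steps can be written directly as code; the following minimal sketch treats the ratios and aliasing components as per-pixel arrays in [0, 1], with 1.0 meaning "use 100% of the long-exposure signal".

import numpy as np

def aliasing_reduced_ratio(sn_ratio, short_fold, long_fold):
    # Pull toward 1.0 (all long) where the short side shows aliasing...
    a = sn_ratio * (1.0 - short_fold) + 1.0 * short_fold
    # ...then toward 0.0 (all short) where the long side shows aliasing.
    return a * (1.0 - long_fold) + 0.0 * long_fold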
The aliasing-reduced composition ratio calculated in this way is supplied to the blur reduction composition ratio calculation unit 215. The blur reduction composition ratio calculation unit 215 calculates the composition ratio that suppresses blur, as described above in <About the occurrence of blur>. Specifically, it calculates the blend coefficient α by the computation described above.
The blur reduction composition ratio calculation unit 215 performs a calculation that selectively pulls the aliasing-reduced composition ratio supplied from the aliasing reduction composition ratio calculation unit 214 toward 1.0 (use 100% of the long-exposure signal) or 0.0 (use 100% of the short-exposure signal), using the moving object detection information supplied from the moving object detection unit 219.
The blur reduction composition ratio calculation unit 215 is supplied with moving object detection information from the moving object detection unit 219, indicating whether a moving object has been detected, that is, whether a pixel is one where blur may occur. The moving object detection unit 219 is supplied with the aliasing component information from the aliasing component detection unit 218.
When the aliasing component information indicates that an aliasing component is occurring, the moving object detection unit 219 judges that, even if something was detected as a moving object, it is not a moving object but an aliasing component, and supplies information indicating that no moving object has been detected to the blur reduction composition ratio calculation unit 215. This makes it possible to control the blur reduction composition ratio calculation unit 215 so that it does not execute the blur reduction processing for pixels where aliasing components occur.
With this processing, for a pixel affected by a moving object, the processing for suppressing blur is executed, and for a pixel affected by aliasing components, the processing for suppressing false color due to aliasing components is executed.
Note that the aliasing component information output from the aliasing component detection unit 218 may be, for example, information taking the value 0 or 1 indicating whether an aliasing component is occurring, or information taking a value from 0 to 1 representing the likelihood that an aliasing component is occurring.
Similarly, the moving object detection information output from the moving object detection unit 219 may be, for example, information taking the value 0 or 1 indicating whether blur due to a moving object is occurring, or information taking a value from 0 to 1 representing the likelihood that blur is occurring.
The processing up to this point performs the suppression of color mixing, the suppression of false color that may occur due to aliasing components, and the suppression of blur that may occur due to moving objects.
The blur-reduced composition ratio from the blur reduction composition ratio calculation unit 215 is supplied to the long-accumulation saturation-aware composition ratio calculation unit 216.
The long-accumulation saturation-aware composition ratio calculation unit 216 refers to the long-exposure signal supplied from the exposure correction unit 212 and determines whether each long-exposure pixel L is saturated. The pixel value (signal) of a saturated pixel is not used; for saturated pixels, the supplied blur-reduced composition ratio is converted into a ratio that uses the pixel value (signal) of the short-exposure pixel S.
For saturated pixels, the long-accumulation saturation-aware composition ratio calculation unit 216 outputs to the subsequent long/short synthesis processing unit 217, as the total composition ratio, a ratio that uses the short-exposure signal rather than the long-exposure signal (0.0, that is, use 100% of the short-exposure signal); for pixels that are not saturated, it outputs the input blur-reduced composition ratio as the total composition ratio.
The total composition ratio is thus a composition ratio that maximizes the S/N ratio in flat portions, a composition ratio that reduces the strength of aliasing components in regions where they occur, and a composition ratio that reduces motion blur in moving-object regions.
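A minimal sketch of the saturation override follows; the full-scale threshold (here a 12-bit level) is a hypothetical sensor-dependent constant, not a value from the patent.

import numpy as np

def total_ratio(blur_reduced_ratio, long_signal, sat_level=4095.0):
    # Where the long-exposure signal has reached full scale, force the ratio
    # to 0.0 so that only the short-exposure signal is used.
    saturated = long_signal >= sat_level
    return np.where(saturated, 0.0, blur_reduced_ratio)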
The long/short synthesis processing unit 217 is supplied with the long-exposure signal from the exposure correction unit 212 and with the short-exposure signal via the noise reduction processing unit 220. The short-exposure signal is supplied to the long/short synthesis processing unit 217 after its noise has been reduced by the noise reduction processing unit 220.
The long/short synthesis processing unit 217 synthesizes the supplied long-exposure signal and short-exposure signal based on the total composition ratio from the long-accumulation saturation-aware composition ratio calculation unit 216. The signal synthesized in this way is output as the HDR image signal.
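The final synthesis reduces to a per-pixel linear mix; the following sketch assumes the ratio convention above (1.0 = 100% long, 0.0 = 100% short) and a noise-reduced short-exposure signal as input.

import numpy as np

def synthesize_hdr(long_signal, short_nr_signal, ratio):
    # Weighted mix of the long-exposure signal and the noise-reduced
    # short-exposure signal according to the total composition ratio.
    ratio = np.clip(ratio, 0.0, 1.0)
    return ratio * long_signal + (1.0 - ratio) * short_nr_signal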
<Configuration of the aliasing component detection unit>
By generating an HDR image as described above, it is possible to generate an HDR image that takes into account S/N optimization, suppression of color mixing, suppression of false color that may occur due to aliasing components, suppression of blur that may occur due to moving objects, and saturation of the long-exposure signal.
The configuration of the aliasing component detection unit 218 of the HDR image generation unit 200 is shown in FIG. 9. The aliasing component detection unit 218 detects aliasing components by determining whether the first to fourth conditions described above are satisfied.
The aliasing component detection unit 218 includes a strong saturation region detection unit 251, a local color ratio calculation unit 252, a per-color long/short difference calculation unit 253, a per-color long/short difference normalization unit 254, and a normalized amplitude intensity similarity calculation unit 255.
The aliasing component detection unit 218 is supplied with the long-exposure signals and the short-exposure signals from the exposure correction unit 212 (FIG. 8). Since the long-exposure signals and the short-exposure signals each comprise an R signal, a G signal, and a B signal, six color signals are supplied to the aliasing component detection unit 218.
The strong saturation region detection unit 251 is the part that mainly determines whether a region (pixel) satisfies the second and third conditions.
The strong saturation region detection unit 251 performs conversion to a color space, detection of the saturation difference, and detection of the specific colors. It converts the supplied long-exposure signals and short-exposure signals each into a color space, for example the Lab color space, obtains the saturation and the color difference, and determines whether the color is one of the specific colors.
Since the second condition is that "a large difference in saturation arises between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS", it is first determined whether a large difference in saturation has arisen.
Furthermore, since the third condition is that "the strongly saturated signal has a green or magenta color", it is determined whether the color of the region (pixel) judged to have a strongly saturated signal is green or magenta. This determination result (referred to as determination result E) is supplied to the local color ratio calculation unit 252 and the normalized amplitude intensity similarity calculation unit 255.
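As a rough sketch of this test for conditions 2 and 3, the following uses a simple HSV conversion in place of the Lab space mentioned above; the hue ranges taken to mean "green" and "magenta", as well as the saturation-gap threshold, are illustrative assumptions.

import colorsys

def strong_saturation_check(rgb_long, rgb_short, sat_gap_thresh=0.3):
    # rgb_long / rgb_short: exposure-corrected (R, G, B) triples scaled to [0, 1].
    hL, sL, vL = colorsys.rgb_to_hsv(*rgb_long)
    hS, sS, vS = colorsys.rgb_to_hsv(*rgb_short)
    big_sat_gap = abs(sL - sS) > sat_gap_thresh      # condition 2
    h = hL if sL > sS else hS                        # hue of the saturated side
    is_green = 0.25 < h < 0.42                       # ~120 deg in [0,1) hue units
    is_magenta = 0.75 < h < 0.92                     # ~300 deg
    return big_sat_gap and (is_green or is_magenta)  # condition 3 combined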
To determine whether the first and fourth conditions are satisfied, first, the per-color long/short difference calculation unit 253 calculates, for each of the R, G, and B signals, the difference between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS.
The per-color long/short difference calculation unit 253 also divides the calculated difference by the signal with the lower saturation in order to normalize it. The signal with the lower saturation is the one that has the lower saturation when the saturations of the long-exposure signal and the short-exposure signal are compared. As noted above, the signal with the lower saturation can be judged to be the side on which no aliasing component appears. That is, the long-exposure or short-exposure signal in which no aliasing component appears is selected here, and the long/short difference is normalized against it.
For each of the R, G, and B signals, the per-color long/short difference calculation unit 253 obtains the difference between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS, divides each long/short difference signal by the lower-saturation R, G, or B signal to generate a normalized long/short difference signal, and outputs it to the subsequent per-color long/short difference normalization unit 254.
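A minimal sketch of this per-color difference and normalization, assuming the lower-saturation side is chosen per pixel as below; the epsilon guard is an added assumption for numerical safety.

import numpy as np

def normalized_long_short_diff(mu_L, mu_S, sat_L, sat_S, eps=1e-6):
    # Long/short difference per color channel.
    diff = mu_L - mu_S
    # The lower-saturation side is taken as the aliasing-free reference signal.
    ref = np.where(sat_L <= sat_S, mu_L, mu_S)
    return diff / (np.abs(ref) + eps)  # eps guards against division by zero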
If a long/short difference signal has a certain value, the first condition, namely that "a difference arises between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS", is satisfied. Conversely, if the long/short difference signal has no such value, it can be judged that the first condition is not satisfied; in that case the long/short difference signal is treated as 0 in the subsequent processing, so the resulting aliasing component information indicates that no aliasing component has been detected.
Note that, due to the influence of noise, a difference between the exposure-corrected long-exposure image μL and the exposure-corrected short-exposure image μS may be calculated even in a state where no real difference exists. Taking this into account, a predetermined threshold may be set, and it may be determined whether the difference between μL and μS is equal to or greater than that threshold, with the first condition judged satisfied only when it is.
The local color ratio calculation unit 252 obtains the local color ratios using the signal with the lower saturation. Referring to the determination result E from the strong saturation region detection unit 251, it identifies the signal with the lower saturation and, based on that determination, selects the supplied signal of the exposure-corrected long-exposure image μL or of the exposure-corrected short-exposure image μS and obtains the local color ratios. The obtained local color ratios for R, G, and B are supplied to the per-color long/short difference normalization unit 254.
The per-color long/short difference normalization unit 254 multiplies the R, G, and B long/short difference signals from the per-color long/short difference calculation unit 253 by the respective R, G, and B local color ratios from the local color ratio calculation unit 252, thereby aligning the RGB color levels and generating a normalized long/short difference signal for the R component, a normalized long/short difference signal for the G component, and a normalized long/short difference signal for the B component.
The per-color long/short difference normalization unit 254 unifies the R and B values by selecting whichever of the normalized R component long/short difference signal and the normalized B component long/short difference signal has the larger normalized value, and supplies the selected R or B component long/short difference signal to the normalized amplitude intensity similarity calculation unit 255. It also supplies the normalized G component long/short difference signal to the normalized amplitude intensity similarity calculation unit 255.
The normalized amplitude intensity similarity calculation unit 255 evaluates the similarity in strength and the direction of the signs of the supplied normalized G component long/short difference signal and the normalized R (or B) component long/short difference signal, and determines whether the two signals are in opposite directions and of roughly the same amplitude. This is the determination described with reference to FIG. 7, namely the determination of whether the fourth condition is satisfied.
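A hedged sketch of this similarity test follows: the normalized G difference and the normalized R (or B) difference should have opposite signs and nearly equal amplitudes; the tolerance is an assumed tuning parameter, not a value from the patent.

import numpy as np

def opposite_same_amplitude(dG, dRB, tol=0.25):
    # Condition 4: opposite directions and roughly equal amplitude.
    amp = np.maximum(np.abs(dG), np.abs(dRB)) + 1e-12
    opposite = np.sign(dG) == -np.sign(dRB)
    similar = np.abs(np.abs(dG) - np.abs(dRB)) / amp < tol
    return opposite & similar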
In this way, the normalized amplitude intensity similarity calculation unit 255 finally determines whether the fourth condition is satisfied. The normalized amplitude intensity similarity calculation unit 255 is also supplied with the determination result E from the strong saturation region detection unit 251, which is information containing the determination of whether the second and third conditions are satisfied.
When the normalized amplitude intensity similarity calculation unit 255 determines that the fourth condition is satisfied, and the supplied determination result E also indicates that the second and third conditions are satisfied, it makes the final determination that the region (pixel) being processed is a region where an aliasing component occurs, and supplies this determination result, as the aliasing component information, to the aliasing reduction composition ratio calculation unit 214 and the moving object detection unit 219 (FIG. 8).
As described above, the aliasing component information output from the aliasing component detection unit 218 may be, for example, information taking the value 0 or 1 indicating whether an aliasing component is occurring, or information taking a value from 0 to 1 representing the likelihood that an aliasing component is occurring.
Also, when the calculation result of any part within the aliasing component detection unit 218 determines that one of the first to fourth conditions is not satisfied, the processing for the region (pixel) being processed may be terminated and information that there is no aliasing component may be output as the aliasing component information.
For example, when the strong saturation region detection unit 251 determines that there is no large difference in saturation, or when no green or magenta color is detected, the processing in the local color ratio calculation unit 252, the per-color long/short difference calculation unit 253, and the per-color long/short difference normalization unit 254 may be skipped, and the normalized amplitude intensity similarity calculation unit 255 may output information that there is no aliasing component.
In this way, the aliasing component detection unit 218 can detect regions where aliasing components occur.
Being able to detect regions where aliasing components occur in this way makes it possible, even in regions where a difference arises between the long-exposure signal and the exposure-ratio-corrected short-exposure signal, to perform processing such as weakening the signal value of that difference at locations where the detected strength of the aliasing component is high. Making such processing possible also prevents erroneous detection of moving objects.
As described above, the composition ratio that maximizes the S/N ratio is obtained in advance; if a region is an aliasing region, a composition ratio that reduces its strength is adopted (in the four-division Bayer array, it suffices to select long or short for the right-diagonal and left-diagonal patterns), and if it is a moving-object region, a composition ratio that reduces blur (motion blur) is adopted, whereby the optimum composition ratio can be calculated.
Thus, according to the present technology, in an image captured while varying the exposure time with a spatial period, using an imaging element with a four-division Bayer array in which adjacent 2×2 pixels share a color filter of the same color and the 2×2 color filters are arranged in a Bayer pattern, it is possible to obtain an HDR image that mitigates the influence of color mixing, suppresses blur of moving objects, suppresses aliasing signals due to high-frequency signals, optimizes the S/N ratio, and takes the saturation of the long-exposure signal into account.
<Usage examples of the imaging apparatus>
FIG. 10 is a diagram showing usage examples of the imaging apparatus described above and of electronic devices including the imaging apparatus.
The imaging apparatus described above can be used, for example, in the following various cases in which light such as visible light, infrared light, ultraviolet light, or X-rays is sensed.
・Devices that capture images for viewing, such as digital cameras and mobile devices with camera functions
・Devices for traffic use, such as in-vehicle sensors that photograph the front, rear, surroundings, and interior of a vehicle for safe driving functions such as automatic stopping and for recognizing the driver's condition, surveillance cameras that monitor traveling vehicles and roads, and ranging sensors that measure the distance between vehicles
・Devices for home appliances such as TVs, refrigerators, and air conditioners, which photograph a user's gestures and operate the appliance according to those gestures
・Devices for medical and healthcare use, such as endoscopes and devices that perform angiography by receiving infrared light
・Devices for security use, such as surveillance cameras for crime prevention and cameras for person authentication
・Devices for beauty care use, such as skin measuring instruments that photograph the skin and microscopes that photograph the scalp
・Devices for sports use, such as action cameras and wearable cameras for sports applications
・Devices for agricultural use, such as cameras for monitoring the condition of fields and crops
<About the recording medium>
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, the programs constituting that software are installed on a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
FIG. 11 is a block diagram illustrating an example hardware configuration of a computer that executes the series of processes described above by means of a program. In the computer, a CPU (Central Processing Unit) 301, a ROM (Read Only Memory) 302, and a RAM (Random Access Memory) 303 are interconnected by a bus 304. An input/output interface 305 is further connected to the bus 304, and an input unit 306, an output unit 307, a storage unit 308, a communication unit 309, and a drive 310 are connected to the input/output interface 305.
The input unit 306 includes a keyboard, a mouse, a microphone, and the like. The output unit 307 includes a display, a speaker, and the like. The storage unit 308 includes a hard disk, a non-volatile memory, and the like. The communication unit 309 includes a network interface and the like. The drive 310 drives a removable medium 311 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory.
In the computer configured as described above, the series of processes described above is performed by, for example, the CPU 301 loading a program stored in the storage unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executing it.
The program executed by the computer (CPU 301) can be provided recorded on the removable medium 311, for example as packaged media. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer, the program can be installed in the storage unit 308 via the input/output interface 305 by loading the removable medium 311 into the drive 310. The program can also be received by the communication unit 309 via a wired or wireless transmission medium and installed in the storage unit 308. Alternatively, the program can be preinstalled in the ROM 302 or the storage unit 308.
The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at the necessary timing, such as when a call is made.
In this specification, a system refers to an entire apparatus composed of a plurality of devices.
The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
Embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
The present technology can also be configured as follows.
(1)
An imaging device comprising a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, with 2×2 pixels having the same spectral sensitivity forming one block, two of the 2×2 pixels in the block are long-exposure pixels, two are short-exposure pixels, and pixels with the same exposure time are arranged diagonally,
the processing unit comprising:
a generation unit that generates a long-exposure image by adding the signals from the long-exposure pixels in the block and generates a short-exposure image by adding the signals from the short-exposure pixels;
a combining unit that combines the long-exposure image and the short-exposure image generated by the generation unit at a predetermined combining ratio;
a moving-object detection unit that detects a moving object from the difference between the long-exposure image and the short-exposure image; and
an aliasing-component detection unit that detects an aliasing component from the long-exposure image and the short-exposure image,
wherein the combining ratio is set from the detection result of the moving-object detection unit and the detection result of the aliasing-component detection unit.
(2)
The imaging device according to (1), wherein the aliasing-component detection unit detects the aliasing component by determining whether the difference between the long-exposure image and the short-exposure image and the saturation of each of the long-exposure image and the short-exposure image satisfy predetermined conditions.
(3)
The imaging device according to (1), wherein the aliasing-component detection unit detects the aliasing component by determining whether the following first to fourth conditions are satisfied:
First condition: there is a difference between the long-exposure image and the short-exposure image.
Second condition: there is a difference in saturation between the long-exposure image and the short-exposure image.
Third condition: the signal with the greater saturation has a green or magenta color.
Fourth condition: when the signal in which the aliasing occurs is subtracted from the signal without the aliasing component, the difference appears with the same amplitude but in opposite directions in the G and R pixels or in the G and B pixels.
(4)
The imaging device according to (3), wherein the short-exposure image is an exposure-corrected image.
(5)
The imaging device according to (3), wherein the long-exposure image and the short-exposure image are mapped into a predetermined color space to obtain the saturation.
(6)
The imaging device according to any one of (1) to (5), wherein, for a pixel at which the aliasing-component detection unit detects that an aliasing component has occurred, the combining ratio is set so as to make greater use of whichever of the long-exposure image and the short-exposure image is determined to have no aliasing component.
(7)
The imaging device according to any one of (1) to (6), wherein, for a pixel at which the moving-object detection unit detects a moving object but the aliasing-component detection unit detects an aliasing component, the combining ratio is set so as to make greater use of whichever of the long-exposure image and the short-exposure image is determined to have no aliasing component.
(8)
An imaging method of an imaging device comprising a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, with 2×2 pixels having the same spectral sensitivity forming one block, two of the 2×2 pixels in the block are long-exposure pixels, two are short-exposure pixels, and pixels with the same exposure time are arranged diagonally, the method including steps in which the processing unit:
generates a long-exposure image by adding the signals from the long-exposure pixels in the block and generates a short-exposure image by adding the signals from the short-exposure pixels;
combines the generated long-exposure image and short-exposure image at a predetermined combining ratio;
detects a moving object from the difference between the long-exposure image and the short-exposure image; and
detects an aliasing component from the long-exposure image and the short-exposure image,
wherein the combining ratio is set from the result of the moving-object detection and the result of the aliasing-component detection.
(9)
A program for causing a computer to execute processing in an imaging device comprising a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, with 2×2 pixels having the same spectral sensitivity forming one block, two of the 2×2 pixels in the block are long-exposure pixels, two are short-exposure pixels, and pixels with the same exposure time are arranged diagonally, the processing including steps in which the processing unit:
generates a long-exposure image by adding the signals from the long-exposure pixels in the block and generates a short-exposure image by adding the signals from the short-exposure pixels;
combines the generated long-exposure image and short-exposure image at a predetermined combining ratio;
detects a moving object from the difference between the long-exposure image and the short-exposure image; and
detects an aliasing component from the long-exposure image and the short-exposure image,
wherein the combining ratio is set from the result of the moving-object detection and the result of the aliasing-component detection.
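As a minimal illustrative sketch only (not the disclosed implementation), the flow of configuration (1) might look as follows in NumPy, assuming a raw 4-division Bayer frame in which, within each same-color 2×2 block, the long-exposure pixels lie on the main diagonal and the short-exposure pixels on the anti-diagonal; the diagonal layout choice and the exposure_ratio value are assumptions.

    import numpy as np

    def split_long_short(raw):
        # Add (here, average) the two long pixels and the two short pixels
        # of every 2x2 same-color block; long pixels are assumed to sit on
        # the main diagonal and short pixels on the anti-diagonal.
        long_img = 0.5 * (raw[0::2, 0::2] + raw[1::2, 1::2])
        short_img = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
        return long_img, short_img

    def hdr_combine(raw, exposure_ratio=8.0):
        # exposure_ratio is a hypothetical long/short exposure-time ratio.
        long_img, short_img = split_long_short(raw)
        short_corr = short_img * exposure_ratio        # exposure correction
        # Placeholder combining weight: switch to the corrected short image
        # where the long image approaches saturation (90% of full scale).
        weight = np.where(long_img > 0.9 * raw.max(), 0.0, 1.0)
        return weight * long_img + (1.0 - weight) * short_corr

In the actual device, the weight would instead come from the combining ratio set from the moving-object and aliasing-component detection results, as in configurations (6) and (7).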
100 imaging device, 101 optical lens, 102 imaging element, 103 image processing unit, 104 signal processing unit, 105 control unit, 200 HDR image generation unit, 211 RGB interpolation signal generation unit, 212 exposure correction unit, 213 S/N-maximizing combining ratio calculation unit, 214 aliasing-reduction combining ratio calculation unit, 215 blur-reduction combining ratio calculation unit, 216 long-accumulation-saturation-aware combining ratio calculation unit, 217 long/short combining processing unit, 218 aliasing-component detection unit, 219 moving-object detection unit, 220 noise reduction processing unit, 251 strong-saturation region detection unit, 252 local color ratio calculation unit, 253 per-color long/short difference calculation unit, 254 per-color long/short difference normalization unit, 255 normalized amplitude intensity similarity calculation unit

Claims (9)

1. An imaging device comprising a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, with 2×2 pixels having the same spectral sensitivity forming one block, two of the 2×2 pixels in the block are long-exposure pixels, two are short-exposure pixels, and pixels with the same exposure time are arranged diagonally,
the processing unit comprising:
a generation unit that generates a long-exposure image by adding the signals from the long-exposure pixels in the block and generates a short-exposure image by adding the signals from the short-exposure pixels;
a combining unit that combines the long-exposure image and the short-exposure image generated by the generation unit at a predetermined combining ratio;
a moving-object detection unit that detects a moving object from the difference between the long-exposure image and the short-exposure image; and
an aliasing-component detection unit that detects an aliasing component from the long-exposure image and the short-exposure image,
wherein the combining ratio is set from the detection result of the moving-object detection unit and the detection result of the aliasing-component detection unit.
2. The imaging device according to claim 1, wherein the aliasing-component detection unit detects the aliasing component by determining whether the difference between the long-exposure image and the short-exposure image and the saturation of each of the long-exposure image and the short-exposure image satisfy predetermined conditions.
3. The imaging device according to claim 1, wherein the aliasing-component detection unit detects the aliasing component by determining whether the following first to fourth conditions are satisfied:
First condition: there is a difference between the long-exposure image and the short-exposure image.
Second condition: there is a difference in saturation between the long-exposure image and the short-exposure image.
Third condition: the signal with the greater saturation has a green or magenta color.
Fourth condition: when the signal in which the aliasing occurs is subtracted from the signal without the aliasing component, the difference appears with the same amplitude but in opposite directions in the G and R pixels or in the G and B pixels.
4. The imaging device according to claim 3, wherein the short-exposure image is an exposure-corrected image.
5. The imaging device according to claim 3, wherein the long-exposure image and the short-exposure image are mapped into a predetermined color space to obtain the saturation.
6. The imaging device according to claim 1, wherein, for a pixel at which the aliasing-component detection unit detects that an aliasing component has occurred, the combining ratio is set so as to make greater use of whichever of the long-exposure image and the short-exposure image is determined to have no aliasing component.
7. The imaging device according to claim 1, wherein, for a pixel at which the moving-object detection unit detects a moving object but the aliasing-component detection unit detects an aliasing component, the combining ratio is set so as to make greater use of whichever of the long-exposure image and the short-exposure image is determined to have no aliasing component.
8. An imaging method of an imaging device comprising a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, with 2×2 pixels having the same spectral sensitivity forming one block, two of the 2×2 pixels in the block are long-exposure pixels, two are short-exposure pixels, and pixels with the same exposure time are arranged diagonally, the method including steps in which the processing unit:
generates a long-exposure image by adding the signals from the long-exposure pixels in the block and generates a short-exposure image by adding the signals from the short-exposure pixels;
combines the generated long-exposure image and short-exposure image at a predetermined combining ratio;
detects a moving object from the difference between the long-exposure image and the short-exposure image; and
detects an aliasing component from the long-exposure image and the short-exposure image,
wherein the combining ratio is set from the result of the moving-object detection and the result of the aliasing-component detection.
9. A program for causing a computer to execute processing in an imaging device comprising a processing unit that processes signals from pixels arranged on an imaging surface in units of blocks, wherein, with 2×2 pixels having the same spectral sensitivity forming one block, two of the 2×2 pixels in the block are long-exposure pixels, two are short-exposure pixels, and pixels with the same exposure time are arranged diagonally, the processing including steps in which the processing unit:
generates a long-exposure image by adding the signals from the long-exposure pixels in the block and generates a short-exposure image by adding the signals from the short-exposure pixels;
combines the generated long-exposure image and short-exposure image at a predetermined combining ratio;
detects a moving object from the difference between the long-exposure image and the short-exposure image; and
detects an aliasing component from the long-exposure image and the short-exposure image,
wherein the combining ratio is set from the result of the moving-object detection and the result of the aliasing-component detection.
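As a non-authoritative sketch of how the first to fourth conditions of claim 3 might be tested per pixel: the thresholds, the amplitude tolerance, and the YCbCr-style saturation measure below are all assumptions (claim 5 says only "a predetermined color space"), and short_rgb is assumed to be already exposure-corrected as in claim 4.

    import numpy as np

    def saturation(rgb):
        # Chroma magnitude in a simple YCbCr-like space (assumed measure).
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        return np.hypot(b - y, r - y)

    def detect_aliasing(long_rgb, short_rgb, t_diff=0.05, t_sat=0.05):
        diff = long_rgb - short_rgb
        c1 = np.abs(diff).max(axis=-1) > t_diff        # 1: level difference
        sat_long, sat_short = saturation(long_rgb), saturation(short_rgb)
        c2 = np.abs(sat_long - sat_short) > t_sat      # 2: saturation difference
        # 3: the more saturated signal has a green or magenta cast.
        strong = np.where((sat_long > sat_short)[..., None], long_rgb, short_rgb)
        green = (strong[..., 1] > strong[..., 0]) & (strong[..., 1] > strong[..., 2])
        magenta = (strong[..., 1] < strong[..., 0]) & (strong[..., 1] < strong[..., 2])
        c3 = green | magenta
        # 4: the long/short difference has opposite sign but similar
        # amplitude in G vs. R or in G vs. B.
        opp_gr = diff[..., 1] * diff[..., 0] < 0
        opp_gb = diff[..., 1] * diff[..., 2] < 0
        sim_gr = np.isclose(np.abs(diff[..., 1]), np.abs(diff[..., 0]), rtol=0.25)
        sim_gb = np.isclose(np.abs(diff[..., 1]), np.abs(diff[..., 2]), rtol=0.25)
        c4 = (opp_gr & sim_gr) | (opp_gb & sim_gb)
        return c1 & c2 & c3 & c4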
PCT/JP2016/060897 2015-04-13 2016-04-01 Image-capturing device, image-capturing method, and program WO2016167140A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015081666A JP2016201733A (en) 2015-04-13 2015-04-13 Imaging device, imaging method and program
JP2015-081666 2015-04-13

Publications (1)

Publication Number Publication Date
WO2016167140A1 true WO2016167140A1 (en) 2016-10-20

Family

ID=57127239

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/060897 WO2016167140A1 (en) 2015-04-13 2016-04-01 Image-capturing device, image-capturing method, and program

Country Status (2)

Country Link
JP (1) JP2016201733A (en)
WO (1) WO2016167140A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113632134B (en) * 2019-04-11 2023-06-13 杜比实验室特许公司 Method, computer readable storage medium, and HDR camera for generating high dynamic range image
JP7497216B2 (en) 2020-05-29 2024-06-10 キヤノン株式会社 Image processing device and method, imaging device, program, and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013066142A (en) * 2011-08-31 2013-04-11 Sony Corp Image processing apparatus, image processing method, and program
JP2014039170A (en) * 2012-08-16 2014-02-27 Sony Corp Image processing device, image processing method, and program
JP2015033107A (en) * 2013-08-07 2015-02-16 ソニー株式会社 Image processing apparatus, image processing method, and electronic apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3917138A1 (en) * 2020-05-29 2021-12-01 Canon Kabushiki Kaisha Encoding apparatus and method, image capture apparatus, and storage medium
CN113747152A (en) * 2020-05-29 2021-12-03 佳能株式会社 Encoding apparatus and method, image capturing apparatus, and storage medium
US11483497B2 (en) 2020-05-29 2022-10-25 Canon Kabushiki Kaisha Encoding apparatus and method, image capture apparatus, and storage medium
CN113658128A (en) * 2021-08-13 2021-11-16 Oppo广东移动通信有限公司 Image blurring degree determining method, data set constructing method and deblurring method

Also Published As

Publication number Publication date
JP2016201733A (en) 2016-12-01

Similar Documents

Publication Publication Date Title
US8169491B2 (en) Apparatus and method of obtaining image and apparatus and method of processing image
CN112532855B (en) Image processing method and device
KR101263888B1 (en) Image processing apparatus and image processing method as well as computer program
CN104349018B (en) Image processing equipment, image processing method and electronic equipment
US8175378B2 (en) Method and system for noise management for spatial processing in digital image/video capture systems
JP5367640B2 (en) Imaging apparatus and imaging method
JP6312487B2 (en) Image processing apparatus, control method therefor, and program
US10560642B2 (en) Image processing device, image processing method and imaging device
JP4806476B2 (en) Image processing apparatus, image generation system, method, and program
JP5948073B2 (en) Image signal processing apparatus and image signal processing method
WO2014027511A1 (en) Image processing device, image processing method, and program
WO2016167140A1 (en) Image-capturing device, image-capturing method, and program
WO2017086155A1 (en) Image capturing device, image capturing method, and program
US20130265412A1 (en) Image processing apparatus and control method therefor
JP5414691B2 (en) Image processing apparatus and image processing method
JP2013162347A (en) Image processor, image processing method, program, and device
WO2017149854A1 (en) Signal processing apparatus, imaging apparatus, and signal processing method
JP2011171842A (en) Image processor and image processing program
US10944929B2 (en) Imaging apparatus and imaging method
JP2016111568A (en) Image blur correction control device, imaging apparatus, control method and program thereof
JP5245648B2 (en) Image processing apparatus and program
JP6800806B2 (en) Image processing equipment, image processing methods and programs
JP6700028B2 (en) Vector calculating device and vector calculating method
JP2015080157A (en) Image processing device, image processing method and program
JP7263018B2 (en) Image processing device, image processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16779931; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16779931; Country of ref document: EP; Kind code of ref document: A1)