WO2016121928A1 - Image detection device, image detection method and image capture device - Google Patents


Info

Publication number
WO2016121928A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
color difference
color
subject
Prior art date
Application number
PCT/JP2016/052660
Other languages
French (fr)
Japanese (ja)
Inventor
Kenji Matsumoto (松本 健児)
Original Assignee
Ricoh Imaging Company, Ltd. (リコーイメージング株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Imaging Company, Ltd.
Publication of WO2016121928A1 publication Critical patent/WO2016121928A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/843 Demosaicing, e.g. interpolating colour pixel values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N 23/12 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/672 Focus control based on electronic image sensor signals based on the phase difference signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/673 Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the present invention relates to an image detection device, an image detection method, and a photographing device that detect image deterioration in an image.
  • Japanese Patent Application Laid-Open No. 2011-109494 (hereinafter referred to as "Patent Document 1") describes a specific configuration of a photographing apparatus capable of detecting a false color generated in a photographed image.
  • The imaging apparatus described in Patent Document 1 photographs a subject in focus and then photographs the same subject out of focus. In the out-of-focus photographed image, the contrast of the subject is lower and high-frequency components are reduced, so the false color generated in the in-focus photographed image is also reduced. Therefore, the imaging apparatus described in Patent Document 1 calculates, for each block, a difference value between the color difference signal of the in-focus captured image and the color difference signal of the out-of-focus captured image, and detects a block having a large difference value as a block in which a false color has occurred.
  • In other words, the imaging apparatus described in Patent Document 1 detects the occurrence of false color by comparing an image of an in-focus, high-contrast subject, in which false color appears, with an image of an out-of-focus, low-contrast subject, in which the false color is reduced. However, since the false color is merely reduced in the out-of-focus captured image compared with the in-focus captured image, the difference value of the color difference signal does not necessarily become large even in a block where false color occurs. Therefore, it is difficult for the photographing apparatus described in Patent Document 1 to detect false colors with high accuracy.
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image detection apparatus, an image detection method, and a photographing apparatus capable of accurately detecting image degradation.
  • An image detection apparatus according to an embodiment of the present invention comprises: moving means that moves the position, on the light receiving surface of an image sensor having a transmission wavelength selection element with a predetermined pixel arrangement, of a subject image incident on the image sensor via a photographing optical system, by physically moving at least one of part of the optical elements in the photographing optical system and the image sensor based on the pixel arrangement of the image sensor; signal generation means that captures the subject image with the image sensor each time the position is moved and generates a color difference signal by performing predetermined signal processing; and detection means that detects image degradation occurring in the captured image based on at least a pair of color difference signals whose subject-image positions on the light receiving surface differ.
  • With this configuration, image degradation of different colors occurs in regions containing high-frequency components in the captured images taken before and after the movement, so the change between the captured images tends to be large. Therefore, image degradation can be detected with high accuracy.
  • The moving means may move the position of the subject image on the light receiving surface of the image sensor by an amount corresponding to the pixel interval so that, in the region where the image degradation occurs, the color information of at least the pair of color difference signals has a complementary color relationship.
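This complementary color relationship can be checked numerically. The sketch below is an illustration only, not part of the patent: the vertical-stripe scene, the RGGB Bayer layout, and the helper name `bayer_sample` are assumptions. It samples a stripe pattern whose pitch equals the pixel pitch through a Bayer array, before and after a one-pixel shift of the subject image:

```python
import numpy as np

def bayer_sample(scene):
    """Sample a grayscale scene through an RGGB Bayer array; return the
    mean value seen by the R, G, and B pixels respectively."""
    h, w = scene.shape
    rows, cols = np.mgrid[0:h, 0:w]
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)   # R at (even row, even col)
    g_mask = (rows % 2) != (cols % 2)            # G at the two mixed sites
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)   # B at (odd row, odd col)
    return (scene[r_mask].mean(), scene[g_mask].mean(), scene[b_mask].mean())

# Vertical stripes: bright and dark columns alternate at the pixel pitch.
stripes = np.fromfunction(lambda y, x: (x % 2 == 0).astype(float), (8, 8))

rgb_before = bayer_sample(stripes)                   # subject at phase 0
rgb_after = bayer_sample(np.roll(stripes, 1, 1))     # shifted by one pixel

print(rgb_before)  # (1.0, 0.5, 0.0) -> reddish false color
print(rgb_after)   # (0.0, 0.5, 1.0) -> bluish false color
```

The two casts sum to (1.0, 1.0, 1.0), i.e. neutral white: the false colors before and after the one-pixel shift are complementary, which is why the pair of color difference signals nearly cancels when added.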
  • The detection means may convert the address of each pixel constituting one captured image in accordance with the amount of movement of the position of the subject image so that, for at least a pair of captured images in which the position of the subject image on the light receiving surface differs, pixels having the same address receive the same subject image, and may detect image degradation for each pixel based on the color difference signal of the one captured image and the color difference signal of the other captured image.
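A minimal sketch of this address conversion follows; the function name, the use of NumPy's `roll`, and the border handling (wrapped rows/columns are simply marked invalid) are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def align_by_shift(img, shift_px):
    """Convert pixel addresses of a capture by the known sensor movement
    (dy, dx) so that pixels with equal addresses see the same subject
    point; wrapped border rows/columns are flagged as invalid."""
    dy, dx = shift_px
    aligned = np.roll(img, (dy, dx), axis=(0, 1))
    valid = np.ones(img.shape[:2], dtype=bool)
    if dy > 0:
        valid[:dy, :] = False
    elif dy < 0:
        valid[dy:, :] = False
    if dx > 0:
        valid[:, :dx] = False
    elif dx < 0:
        valid[:, dx:] = False
    return aligned, valid

scene = np.arange(25.0).reshape(5, 5)
first = scene                          # first capture
second = np.roll(scene, -1, axis=1)    # subject shifted one pixel left
aligned, valid = align_by_shift(second, (0, 1))
print(np.array_equal(first[valid], aligned[valid]))  # True
```

After the conversion, the color difference signals of the two captures can be compared pixel by pixel at identical addresses.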
  • the detection unit may calculate at least one of a difference value and an addition value of at least a pair of color difference signals and detect image deterioration based on the calculation result.
  • The detection means may calculate a difference value of at least a pair of color difference signals for each pixel, and detect a pixel whose calculated difference value is equal to or greater than a first threshold value as a pixel in a region where image degradation occurs.
  • The detection means may calculate an addition value of at least a pair of color difference signals for each pixel, and detect a pixel whose calculated addition value is equal to or less than a second threshold value as a pixel in a region where image degradation occurs.
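The two per-pixel threshold tests can be sketched as follows; the threshold values and array names are illustrative assumptions, and the difference and addition values are taken over both chroma channels:

```python
import numpy as np

def detect_degraded_pixels(cb1, cr1, cb2, cr2, t_diff=0.2, t_add=0.1):
    """Flag a pixel when the pair of color difference signals either
    differs strongly (difference value >= first threshold) or nearly
    cancels (addition value <= second threshold)."""
    diff = np.abs(cb1 - cb2) + np.abs(cr1 - cr2)   # difference value
    add = np.abs(cb1 + cb2) + np.abs(cr1 + cr2)    # addition value
    return (diff >= t_diff) | (add <= t_add)

# A false-color pixel flips sign between captures; a true color does not.
cb1 = np.array([[0.4, 0.3]]); cr1 = np.zeros((1, 2))
cb2 = np.array([[-0.4, 0.3]]); cr2 = np.zeros((1, 2))
print(detect_degraded_pixels(cb1, cr1, cb2, cr2))  # [[ True False]]
```

In this sketch the first pixel satisfies both criteria (large difference, near-zero sum from the complementary chroma), while the second pixel, whose chroma is stable between captures, is left unflagged.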
  • The signal generation means may generate a luminance signal that forms a pair with the color difference signal by performing the predetermined signal processing, and the detection means may detect image deterioration based additionally on at least a pair of luminance signals.
  • A photographing apparatus according to an embodiment of the present invention has the above-described image detection device and includes stationary-state determination means that determines whether or not the photographing apparatus is in a stationary state. When the stationary-state determination means determines that the photographing apparatus is in a stationary state, image deterioration is detected by the image detection device.
  • A photographing apparatus according to an embodiment of the present invention may have the above-described image detection device and include shake detection means that detects a shake of the photographing apparatus. In this case, the moving means corrects the image blur caused by the shake of the photographing apparatus by physically moving at least one of part of the optical elements in the photographing optical system and the image sensor based on the shake detected by the shake detection means.
  • An image detection method according to an embodiment of the present invention includes: a step of moving the position of a subject image on the light receiving surface of an image sensor having a transmission wavelength selection element with a predetermined pixel arrangement, by physically moving at least one of part of the optical elements in the photographing optical system and the image sensor based on the pixel arrangement of the image sensor; a step of capturing the subject image with the image sensor each time the position of the subject image is moved and generating a color difference signal by performing predetermined signal processing including color interpolation processing; and a step of detecting image deterioration occurring in the captured image based on at least a pair of color difference signals in which the position of the subject image on the light receiving surface differs.
  • According to embodiments of the present invention, there are provided an image detection device, an image detection method, and an imaging device that can detect image degradation with high accuracy.
  • FIG. 1 is a block diagram showing the configuration of a photographing apparatus according to an embodiment of the present invention. FIGS. 2 and 3 are diagrams schematically illustrating the configuration of an image shake correction apparatus provided in the photographing apparatus. FIG. 4 is a diagram that assists the description of LPF driving in the embodiment. FIG. 5 is a diagram showing the false color detection flow executed by the system controller in the embodiment. FIGS. 6 and 7 are diagrams relating to the subject image in the embodiment.
  • a photographing apparatus according to an embodiment of the present invention will be described with reference to the drawings.
  • a digital single lens reflex camera will be described as an embodiment of the present invention.
  • The photographing apparatus is not limited to a digital single lens reflex camera; it may be replaced with another type of apparatus having a photographing function, for example, a mirrorless single lens camera, a compact digital camera, a video camera, a camcorder, a tablet terminal, a PHS (Personal Handy-phone System), a smartphone, a feature phone, or a portable game machine.
  • FIG. 1 is a block diagram showing a configuration of the photographing apparatus 1 of the present embodiment.
  • The photographing apparatus 1 includes a system controller 100, an operation unit 102, a drive circuit 104, a photographing lens 106, a diaphragm 108, a shutter 110, an image shake correction device 112, a signal processing circuit 114, an image processing engine 116, a buffer memory 118, a card interface 120, an LCD (Liquid Crystal Display) control circuit 122, an LCD 124, a ROM (Read Only Memory) 126, a gyro sensor 128, an acceleration sensor 130, a geomagnetic sensor 132, and a GPS (Global Positioning System) sensor 134.
  • Although the photographing lens 106 has a plurality of lenses, it is shown as a single lens for convenience in FIG. 1. The direction of the optical axis AX of the photographing lens 106 is defined as the Z-axis direction, and the two axial directions orthogonal to the Z-axis direction and to each other are defined as the X-axis direction (horizontal direction) and the Y-axis direction (vertical direction), respectively.
  • the operation unit 102 includes various switches necessary for the user to operate the photographing apparatus 1, such as a power switch, a release switch, and a photographing mode switch.
  • When the user operates the power switch, power is supplied from a battery (not shown) to the various circuits of the photographing apparatus 1 through a power line.
  • The system controller 100 includes a CPU (Central Processing Unit) and a DSP (Digital Signal Processor). When power is supplied, the system controller 100 accesses the ROM 126, reads out a control program, loads it into a work area (not shown), and executes the loaded control program to control the entire photographing apparatus 1.
  • The system controller 100 drives and controls the diaphragm 108 and the shutter 110 via the drive circuit 104 so that proper exposure is obtained, based on, for example, a photometric value calculated from an image captured by a solid-state imaging device 112a (see FIG. 2, described later) or measured by an exposure meter (not shown) built into the photographing apparatus 1. More specifically, drive control of the diaphragm 108 and the shutter 110 is performed based on the AE (Automatic Exposure) function specified by the shooting mode switch, such as program AE, shutter-priority AE, or aperture-priority AE. Further, the system controller 100 performs AF (Autofocus) control together with the AE control.
  • The AF modes include a central single-point ranging mode using a single central ranging area, a multi-point ranging mode using a plurality of ranging areas, a full-screen ranging mode based on full-screen distance information, and the like.
  • the system controller 100 controls driving of the photographing lens 106 via the driving circuit 104 based on the AF result, and adjusts the focus of the photographing lens 106. Since the configuration and control of this type of AE and AF are well known, detailed description thereof is omitted here.
  • FIGS. 2 and 3 are diagrams schematically showing a configuration of the image blur correction device 112.
  • the image blur correction device 112 includes a solid-state image sensor 112a.
  • the light beam from the subject passes through the photographing lens 106, the diaphragm 108, and the shutter 110, and is received by the light receiving surface 112aa of the solid-state imaging device 112a.
  • the light receiving surface 112aa of the solid-state imaging device 112a is an XY plane including the X axis and the Y axis.
  • the solid-state imaging device 112a is a single-plate color CCD (Charge-Coupled Device) image sensor having a color filter with a Bayer-type pixel arrangement as a transmission wavelength selection device.
  • the solid-state imaging device 112a accumulates an optical image formed by each pixel on the light receiving surface 112aa as a charge corresponding to the amount of light, and generates R (Red), G (Green), and B (Blue) image signals. Output.
  • The solid-state imaging device 112a is not limited to a CCD image sensor, and may be replaced with a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor or other types of imaging devices.
  • The color filter of the solid-state imaging device 112a may be a complementary color filter, and as long as a color filter of some kind is arranged for each pixel, the filter need not have a periodic color arrangement such as the Bayer arrangement.
  • the signal processing circuit 114 performs predetermined signal processing such as clamping and demosaicing (color interpolation) on the image signal input from the solid-state imaging device 112a, and outputs the processed signal to the image processing engine 116.
  • The image processing engine 116 performs predetermined signal processing such as matrix operation, Y/C separation, and white balance on the image signal input from the signal processing circuit 114 to generate a luminance signal Y and color difference signals Cb and Cr, and compresses them in a predetermined format such as JPEG (Joint Photographic Experts Group).
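As an illustration of the matrix operation and Y/C separation, the widely used ITU-R BT.601 conversion is sketched below; the patent does not specify the exact coefficients, so these are an assumption:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 conversion from RGB (values in [0, 1]) to a
    luminance signal Y and color difference signals Cb and Cr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)   # 0.564 = 0.5 / (1 - 0.114)
    cr = 0.713 * (r - y)   # 0.713 = 0.5 / (1 - 0.299)
    return y, cb, cr

# A neutral (gray) input yields approximately zero color difference signals.
print(rgb_to_ycbcr(0.5, 0.5, 0.5))
```

False colors therefore show up as nonzero Cb/Cr values in regions that should be neutral, which is what the detection steps below operate on.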
  • the buffer memory 118 is used as a temporary storage location for processing data when the image processing engine 116 executes processing.
  • the captured image storage format is not limited to the JPEG format, and may be a RAW format in which minimal image processing (for example, clamping) is performed.
  • the memory card 200 is detachably inserted into the card slot of the card interface 120.
  • the image processing engine 116 can communicate with the memory card 200 via the card interface 120.
  • the image processing engine 116 stores the generated compressed image signal (captured image data) in the memory card 200 (or a built-in memory (not shown) provided in the image capturing apparatus 1).
  • the image processing engine 116 buffers the generated luminance signal Y and color difference signals Cb and Cr in a frame memory (not shown) in units of frames.
  • the image processing engine 116 sweeps the buffered signal from each frame memory at a predetermined timing, converts it into a video signal of a predetermined format, and outputs it to the LCD control circuit 122.
  • the LCD control circuit 122 modulates and controls the liquid crystal based on the image signal input from the image processing engine 116. Thereby, the photographed image of the subject is displayed on the display screen of the LCD 124.
  • the user can view through a display screen of the LCD 124 a real-time through image (live view) captured with appropriate brightness and focus based on AE control and AF control.
  • In accordance with a user operation, the image processing engine 116 reads the designated photographed image data from the memory card 200 or the built-in memory, converts it into an image signal of a predetermined format, and outputs it to the LCD control circuit 122.
  • the LCD control circuit 122 performs modulation control on the liquid crystal based on the image signal input from the image processing engine 116, so that a captured image of the subject is displayed on the display screen of the LCD 124.
  • the image shake correction device 112 drives a shake correction member.
  • the shake correction member is the solid-state image sensor 112a.
  • The shake correction member is not limited to the solid-state image sensor 112a; another configuration capable of shifting the incident position of the subject image on the light receiving surface 112aa, such as moving a part of the lenses included in the photographing lens 106 with respect to the optical axis AX, may be used, or a configuration combining two or more of these members and the solid-state imaging device 112a may be used.
  • To correct image shake, the image blur correction device 112 drives (vibrates) the shake correction member minutely in a plane orthogonal to the optical axis AX (that is, in the XY plane). In addition, the image blur correction device 112 drives (rotates) the shake correction member minutely in that plane so that an effect equivalent to that of an optical low-pass filter (LPF) (reduction of moiré such as false color) is obtained.
  • Hereinafter, driving the shake correction member to correct image shake is referred to as "image shake correction driving," and driving the shake correction member so as to obtain the same effect as an optical LPF is referred to as "LPF driving."
  • The gyro sensor 128 is a sensor that detects information for controlling image blur correction. Specifically, the gyro sensor 128 detects the angular velocities around two axes (around the X axis and around the Y axis) applied to the imaging apparatus 1, and outputs the detected angular velocities to the system controller 100 as a shake detection signal indicating the shake, within the XY plane, of the light receiving surface 112aa of the solid-state image sensor 112a.
  • the image blur correction device 112 includes a fixed support substrate 112b fixed to a structure such as a chassis included in the photographing device 1.
  • the fixed support substrate 112b slidably supports the movable stage 112c on which the solid-state imaging device 112a is mounted.
  • Magnets M YR, M YL, M XD, and M XU are attached on the surface of the fixed support substrate 112b facing the movable stage 112c.
  • Yokes Y YR, Y YL, Y XD, and Y XU, which are magnetic bodies, are also attached to the fixed support substrate 112b.
  • The yokes Y YR, Y YL, Y XD, and Y XU each have a shape that extends from the fixed support substrate 112b around the movable stage 112c to the position facing the magnets M YR, M YL, M XD, and M XU, and a magnetic circuit is formed with the magnets M YR, M YL, M XD, and M XU.
  • driving coils C YR , C YL , C XD , and C XU are attached to the movable stage 112c.
  • the driving coils C YR , C YL , C XD , and C XU receive a current in the magnetic field of the magnetic circuit, a driving force is generated.
  • The movable stage 112c (solid-state imaging device 112a) is minutely driven in the XY plane with respect to the fixed support substrate 112b by the generated driving force.
  • The voice coil motor made up of the magnet M YR, the yoke Y YR, and the driving coil C YR is denoted by reference numeral VCM YR, and the voice coil motor made up of the magnet M YL, the yoke Y YL, and the driving coil C YL is denoted by VCM YL. Likewise, the voice coil motor made up of the magnet M XD, the yoke Y XD, and the driving coil C XD is denoted by VCM XD, and the voice coil motor made up of the magnet M XU, the yoke Y XU, and the driving coil C XU is denoted by VCM XU.
  • Each voice coil motor VCM YR , VCM YL , VCM XD , VCM XU (drive coils C YR , C YL , C XD , C XU ) is PWM (Pulse Width Modulation) driven under the control of the system controller 100.
  • The voice coil motors VCM YR and VCM YL are arranged below the solid-state image sensor 112a, side by side with a predetermined interval in the horizontal direction (X-axis direction). The voice coil motors VCM XD and VCM XU are arranged on the side of the solid-state image sensor 112a, with a predetermined interval in the vertical direction (Y-axis direction).
  • Hall elements H YR , H YL , H XD , and H XU are attached to the positions near the driving coils C YR , C YL , C XD , and C XU on the fixed support substrate 112b.
  • The Hall elements H YR, H YL, H XD, and H XU detect the magnetic forces of the magnets M YR, M YL, M XD, and M XU, respectively, and output position detection signals indicating the position of the movable stage 112c (solid-state imaging element 112a) in the XY plane to the system controller 100.
  • The Y-axis direction position and tilt (rotation) of the movable stage 112c (solid-state imaging element 112a) are detected by the Hall elements H YR and H YL, and the X-axis direction position and tilt (rotation) are detected by the Hall elements H XD and H XU.
  • the system controller 100 includes a driver IC for a voice coil motor.
  • Based on the shake detection signal output from the gyro sensor 128 and the position detection signals output from the Hall elements H YR, H YL, H XD, and H XU, the system controller 100 calculates duty ratios so as not to disturb the balance of the currents flowing through the voice coil motors VCM YR, VCM YL, VCM XD, and VCM XU (driving coils C YR, C YL, C XD, and C XU), within a range not exceeding the rated power (allowable power) of the driver IC.
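One way this balance-preserving limit might be realized is sketched below, under the assumption that "not disturbing the balance" means scaling all four coil demands by a common factor; the function name and the 0.9 duty cap are illustrative, not from the patent:

```python
def balanced_duty_ratios(demands, max_duty=0.9):
    """Convert four coil drive demands into PWM duty ratios, scaling all
    of them by one common factor when the largest would exceed the
    allowable duty, so that their mutual balance is preserved."""
    peak = max(abs(d) for d in demands)
    scale = max_duty / peak if peak > max_duty else 1.0
    return [d * scale for d in demands]

print(balanced_duty_ratios([0.5, 0.2, -0.1, 0.0]))  # [0.5, 0.2, -0.1, 0.0]
```

When one demand exceeds the cap (e.g. 1.2), all four outputs shrink by the same ratio, so the relative forces on the movable stage, and hence its attitude, are unchanged.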
  • the system controller 100 sends a drive current to each of the voice coil motors VCM YR , VCM YL , VCM XD , and VCM XU with the calculated duty ratio to drive the solid-state imaging device 112 a with image blur correction.
  • Thereby, the image blur on the light receiving surface 112aa of the solid-state image sensor 112a is corrected while the solid-state image sensor 112a is held at a predetermined position against gravity, disturbance, and the like (in other words, the position of the solid-state imaging device 112a is adjusted so that the incident position of the subject image on the light receiving surface 112aa does not fluctuate).
  • During LPF driving, the image blur correction device 112 causes predetermined drive currents to flow through the voice coil motors VCM YR, VCM YL, VCM XD, and VCM XU so that the movable stage 112c (solid-state image sensor 112a) is driven to draw a predetermined trajectory in the XY plane during one exposure period, and the subject image is thereby made incident on a plurality of pixels having different detection colors (R, G, or B) of the solid-state image sensor 112a.
  • FIGS. 4 (a) and 4 (b) are diagrams for assisting the description of the LPF drive.
  • a plurality of pixels PIX are arranged in a matrix at a predetermined pixel pitch P.
  • each pixel PIX in the figure is given a reference (any one of R, G, and B) corresponding to the filter color arranged on the front surface.
  • FIG. 4A shows an example in which the solid-state imaging device 112a is driven so as to draw a square locus centered on the optical axis AX.
  • the square locus can be a square closed path with the pixel pitch P of the solid-state imaging element 112a as one side.
  • That is, the solid-state imaging device 112a is driven so as to trace the square path by moving alternately in the X-axis direction and the Y-axis direction in units of one pixel pitch P.
  • FIG. 4B shows an example in which the solid-state imaging device 112a is driven to draw a rotationally symmetric circular locus centered on the optical axis AX.
  • This circular locus can be, for example, a closed circular path having a radius r of √2/2 times the pixel pitch P of the solid-state image sensor 112a.
  • information on the driving locus including the pixel pitch P is held in advance in the internal memory of the system controller 100 or the ROM 126.
  • The solid-state imaging device 112a is driven to draw the predetermined square locus (or circular locus) based on the information of the drive locus. Then, the subject image is uniformly incident on the four color filters R, G, B, and G (four (two rows by two columns) pixels PIX). Thereby, an effect equivalent to that of an optical LPF can be obtained. In other words, since the subject image incident on any color filter (pixel PIX) is necessarily also incident on the surrounding color filters (pixels PIX), the same effect as when the subject image passes through an optical LPF (reduction of moiré such as false color) is obtained.
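The two loci can be generated as target positions for the stage controller; the sketch below is an illustration (step counts and function names are assumptions). Note that the circular radius √2/2 · P equals the distance from the center of a 2 × 2 pixel cell to its corners, so both paths sweep the image over the R, G, B, and G filters:

```python
import math

def square_locus(pitch, steps_per_side=8):
    """Closed square path with side length = pixel pitch, centred on the
    optical axis; each side is sampled into equally spaced targets."""
    h = pitch / 2.0
    corners = [(h, h), (-h, h), (-h, -h), (h, -h)]
    pts = []
    for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
        for i in range(steps_per_side):
            t = i / steps_per_side
            pts.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return pts

def circular_locus(pitch, steps=32):
    """Closed circular path with radius (sqrt(2)/2) * pixel pitch."""
    r = math.sqrt(2) / 2 * pitch
    return [(r * math.cos(2 * math.pi * k / steps),
             r * math.sin(2 * math.pi * k / steps)) for k in range(steps)]
```

In practice these points would be fed as position targets to the voice-coil-motor servo loop at a rate high enough to complete the closed path within one exposure period.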
  • the user can switch on / off of image blur correction driving and LPF driving by operating the operation unit 102.
  • FIG. 5 shows a false color detection flow executed by the system controller 100.
  • the false color detection flow shown in FIG. 5 is started when the release switch is pressed, for example.
  • the posture of the photographing apparatus 1 including the stationary state may be detected using information output from other sensors such as the acceleration sensor 130, the geomagnetic sensor 132, and the GPS sensor 134 instead of the gyro sensor 128.
  • sensor fusion technology may be applied so that information output from these sensors may be used in combination.
  • In that case, the photographer may be notified, for example, via the display screen of the LCD 124 that processing step S12 (photographing the first image) and subsequent steps will not be executed.
  • the processing step S12 (photographing the first image) and subsequent steps may be executed regardless of whether or not the photographing apparatus 1 is in a stationary state.
  • the photographer may be notified through the display screen of the LCD 124 that the processing step S12 (photographing the first image) and subsequent steps are executed.
  • the determination threshold value for the still state may be changed according to the shutter speed set in the photographing apparatus 1.
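For example, a stationary-state determination with a shutter-speed-dependent threshold might look like the following sketch; the threshold values, the 1/30 s reference, and the function name are assumptions for illustration:

```python
def is_stationary(angular_rates_dps, shutter_s, base_thresh_dps=0.05):
    """Judge the photographing apparatus stationary when every angular
    rate from the gyro sensor is below a threshold; the threshold is
    tightened for longer exposures, which are more blur-sensitive."""
    thresh = base_thresh_dps * min(1.0, (1.0 / 30.0) / shutter_s)
    return all(abs(w) < thresh for w in angular_rates_dps)

print(is_stationary([0.01, -0.02], shutter_s=1 / 125))  # True
print(is_stationary([0.01, -0.02], shutter_s=1.0))      # False
```

The same small residual shake is accepted at 1/125 s but rejected at 1 s, reflecting the idea that the determination threshold changes with the set shutter speed.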
  • The subject shown in FIG. 6A is an oblique stripe pattern in which light and dark appear alternately at the same pitch as the pixel pitch P of the pixels PIX of the solid-state image sensor 112a. The subject shown in FIG. 7A is a vertical stripe pattern in which light and dark appear alternately at the same pitch as the pixel pitch P.
  • Hereinafter, the 6 × 6-square subject surrounded by a thick solid line among the subjects shown in FIG. 6A is referred to as the "obliquely striped subject 6a," and the 6 × 6-square subject surrounded by a thick solid line among the subjects shown in FIG. 7A is referred to as the "vertically striped subject 7a."
  • In processing step S12, a subject image (first image) is taken with appropriate brightness and focus based on AE control and AF control. Here, it is assumed that by the AE control and AF control, both the obliquely striped subject 6a and the vertically striped subject 7a are in focus.
  • FIGS. 6B and 7B are diagrams schematically showing the oblique stripe subject 6a and the vertical stripe subject 7a captured by each pixel PIX of the solid-state image sensor 112a, and the light receiving surface 112aa of the solid-state image sensor 112a. Is a front view from the subject side.
  • each pixel PIX is assigned a code (any one of R, G, and B) corresponding to the filter color arranged on the front surface.
  • The black pixels PIX indicate that the dark portions of the striped pattern have been captured, and the white pixels PIX indicate that the bright portions of the striped pattern have been captured.
  • Pixel addresses (row and column indices) are attached to the diagrams of FIGS. 6B and 7B.
  • The subject image is formed on the solid-state imaging device 112a upside down by the imaging action of the photographic lens 106. For example, the upper left corner portion of the "obliquely striped subject 6a" forms an image as the lower right corner portion on the solid-state imaging device 112a. However, for convenience of description, the upper left corner portion of the "obliquely striped subject 6a" is described below as corresponding to the upper left corner portion of FIG. 6B.
  • In processing step S14, the subject image (second image) is captured based on the AE control and AF control settings used when the first image was captured. After the second image is captured, the photographer may be notified that the false color detection process using the first image and the second image is to be started.
  • FIGS. 6C and 7C, similarly to FIGS. 6B and 7B, schematically show the obliquely striped subject 6a and the vertically striped subject 7a captured by each pixel PIX of the solid-state imaging device 112a.
  • By processing step S13 (shift of the solid-state image sensor 112a), the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a is shifted by a distance corresponding to one pixel in the left direction on the light receiving surface 112aa.
  • the shooting conditions in this processing step S14 are other than the point that the solid-state imaging device 112a is shifted rightward by a distance of one pixel with respect to the shooting in the processing step S12 (first image shooting). Are the same.
  • the second image is substantially a photograph of a range shifted to the right by one pixel with respect to the first image.
  • in the second image, the subject as a whole therefore appears shifted to the left by a distance of one pixel with respect to the first image.
  • the first image taken in processing step S12 (first image shooting) and the second image taken in processing step S14 (second image shooting) are both subjected to the signal processing described above (clamping, demosaicing, matrix calculation, Y/C separation, white balance, etc.) and converted into a luminance signal Y and color difference signals Cb, Cr.
  • the color difference signals (Cb, Cr) of the first image captured in processing step S12 (first image capture) are referred to as “first color difference signals”, and the color difference signals (Cb, Cr) of the second image captured in processing step S14 (second image capture) are referred to as “second color difference signals”.
  • the “target pixel” refers to a pixel of each image after at least demosaic processing.
  • the occurrence of false colors is detected based on the signal difference value or signal addition value between the first color difference signal and the second color difference signal.
  • at an edge portion, the signal difference value between the first color difference signal and the second color difference signal may be large even when no false color is generated, so there is a risk of falsely detecting that a false color has occurred at the edge portion.
  • the first color difference signal and the second color difference signal used for the calculation of the signal difference value and the signal addition value are the color difference signals of the pixels that capture the same subject image.
  • demosaic processing is performed using pixels having different addresses.
  • the first color difference signal and the second color difference signal may be slightly different in color information even though they are color difference signals of pixels that capture the same subject image. Therefore, in this processing step S15, LPF processing is performed on the image signal (luminance signal Y, color difference signals Cb, Cr). By blurring the image by the LPF processing, false detection of false colors at the edge portion is suppressed, and errors in color information between the first color difference signal and the second color difference signal are suppressed.
  • the address of each pixel is converted according to the shift amount of the solid-state image sensor 112a in processing step S13 (shift of the solid-state image sensor 112a) (in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a).
  • for example, the address (c, 2) of the pixel PIXb in FIG. 6C is converted, according to the shift amount of the incident position of the subject image, into the address (d, 2) located one pixel to the right, that is, the same address as the pixel PIXa (see FIG. 6B) that captures the same subject image.
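As an illustration of this address conversion (a minimal sketch, not taken from the specification; the function name and the lettered-column convention are assumptions for illustration), each pixel of the shifted image can be remapped so that pixels capturing the same subject portion share an address:

```python
def convert_address(col: str, row: int, shift_cols: int = 1) -> tuple:
    """Remap a pixel address of the shifted (second) image by the known
    shift amount so that it matches the address of the first-image pixel
    that captured the same portion of the subject.  Columns are assumed
    to be lettered a, b, c, ... and rows numbered 1, 2, 3, ..."""
    return (chr(ord(col) + shift_cols), row)

# The example from the text: pixel PIXb at (c, 2) is converted to (d, 2),
# the address of pixel PIXa.
assert convert_address('c', 2) == ('d', 2)
```

After this remapping, the first and second color difference signals can be compared address by address.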
  • the oblique stripe subject 6a and the vertical stripe subject 7a include a high-frequency component having the same pitch as the pixel pitch P of the pixel PIX of the solid-state image sensor 112a. Therefore, if the image signals of the diagonally striped subject 6a and the vertically striped subject 7a are demosaiced in the processing step S12 (first image capturing) and the processing step S14 (second image capturing), a false color is generated.
  • the luminance of the G component pixel PIX is high and the luminance of the R and B component pixels PIX is low. Therefore, the G component is dominant in the color information of each pixel PIX after the demosaic process, and a green false color is generated in the diagonally striped subject 6a.
  • the luminance of the R and B component pixels PIX is high and the luminance of the G component pixel PIX is low, contrary to the example of FIG. 6B. Therefore, the R and B components are dominant in the color information of each pixel PIX after the demosaic process, and a purple false color is generated in the diagonally striped subject 6a.
  • the luminance of the B component pixels PIX is high and the luminance of the R component pixels PIX is low. Therefore, the color information of each pixel PIX after the demosaic processing becomes a mixed color component of B and G, and a false color of an intermediate color between blue and green (for example, a color near cyan) is generated.
  • the luminance of the R component pixels PIX is high and the luminance of the B component pixels PIX is low, contrary to the example of FIG. 7B. Therefore, the color information of each pixel PIX after the demosaic processing becomes a mixed color component of R and G, and a false color of an intermediate color between red and green (for example, a color near orange) is generated.
  • FIG. 8 shows a color space defined by two axes of Cb and Cr.
  • reference numeral 6b is a plot corresponding to the green false color generated at the target pixel in the example of FIG. 6B.
  • reference numeral 6c is a plot corresponding to the purple false color generated at the target pixel in the example of FIG. 6C.
  • reference numeral 7b is a plot corresponding to the false color of an intermediate color between blue and green generated at the target pixel in the example of FIG. 7B.
  • reference numeral 7c is a plot corresponding to the false color of an intermediate color between red and green generated at the target pixel in the example of FIG. 7C.
  • the following shows the coordinate information of each plot.
  • the origin O is the coordinates (0, 0).
  • the false color generated when a subject image having a high-frequency component comparable to the pixel pitch P of the pixels PIX of the solid-state image sensor 112a is captured changes when the incident position of the subject image on the light receiving surface 112aa is shifted.
  • a portion (pixel of interest) where a false color is generated is detected by utilizing this change caused by shifting the incident position of the subject image. More specifically, at a pixel of interest in which a high-frequency component subject image is captured, a portion in which the color information of the first color difference signal and the second color difference signal has a complementary color relationship is detected as a portion where a false color is generated.
  • the incident position of the subject image on the light receiving surface 112aa is shifted by one pixel in the left direction (horizontal pixel arrangement direction) (that is, the solid-state imaging device 112a is shifted relative to the subject image).
  • the shift direction may instead be the right direction (horizontal pixel arrangement direction), the upper or lower direction (vertical pixel arrangement direction), an upper right, lower right, upper left, or lower left diagonal direction (a direction forming 45 degrees with respect to the horizontal and vertical arrangement directions), or another direction according to the pixel arrangement, and these may also be used in combination.
  • the shift distance may be any distance other than an even number of pixels or its vicinity (for example, 1.9 to 2.1 pixels), depending on the subject to be photographed and the photographing conditions.
  • at a target pixel in which a subject image with a high-frequency component is captured (where a false color is generated), the color information of the first color difference signal and that of the second color difference signal are complementary to each other, so the occurrence of a false color can be detected.
  • the difference values (Cb_sub, Cr_sub) between the first color difference signal and the second color difference signal are calculated.
  • Cb and Cr of the first color difference signal are defined as Cb1 and Cr1, respectively.
  • Cb and Cr of the second color difference signal at the same address are defined as Cb2 and Cr2, respectively.
  • the difference values (Cb_sub, Cr_sub) are calculated by the following equations: Cb_sub = Cb1 - Cb2, Cr_sub = Cr1 - Cr2.
  • the first distance information Saturation_sub for each plot pair of FIG. 8 (plot 6b and plot 6c; plot 7b and plot 7c) is, for example, 2√(M² + N²) and √{(2M′ − α)² + (2N′ + β)²}, respectively.
  • as understood from the positional relationship of each plot pair in FIG. 8, the first distance information Saturation_sub increases as the complementary relationship between the color information of the first color difference signal and that of the second color difference signal becomes stronger, and decreases when the color information of the two signals is not in a complementary relationship (for example, similar hues). That is, the first distance information Saturation_sub is ideally zero outside a false color generation region, and becomes larger in a false color generation region where false colors are generated strongly.
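The exact equation for Saturation_sub is left to the drawings in this excerpt; under the natural reading that it is the Euclidean distance between the two color difference signals in the Cb-Cr plane, a sketch (function name is illustrative) looks like this:

```python
import math

def saturation_sub(cb1: float, cr1: float, cb2: float, cr2: float) -> float:
    """First distance information: distance in the Cb-Cr color space
    between the first and second color difference signals of a target
    pixel.  Large when the two signals are complementary (false color
    generation region), near zero for similar hues (no false color)."""
    return math.hypot(cb1 - cb2, cr1 - cr2)

# A complementary plot pair (M, N) and (-M, -N) gives 2 * sqrt(M^2 + N^2),
# consistent with the plot-pair example for FIG. 8.
assert saturation_sub(3.0, 4.0, -3.0, -4.0) == 2 * math.hypot(3.0, 4.0)
# Identical color information gives zero.
assert saturation_sub(3.0, 4.0, 3.0, 4.0) == 0.0
```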
  • provisional addition values (Cb_add, Cr_add) are calculated according to the following equations: Cb_add = Cb1 + Cb2, Cr_add = Cr1 + Cr2.
  • Cb′_add = Cb_add − Cb_mean
  • Cr′_add = Cr_add − Cr_mean
  • the second distance information Saturation_add for each plot pair of FIG. 8 (plot 6b and plot 6c; plot 7b and plot 7c) is, for example, zero and √(α² + β²), respectively.
  • the first and second color difference signals change under the influence of the light source, the exposure condition, the white balance and the like at the time of image capturing.
  • since the first and second color difference signals vary in the same way, their relative distance in the color space (i.e., the first distance information Saturation_sub) changes little.
  • the second distance information Saturation_add becomes smaller as the complementary relationship between the first and second color difference signals is stronger, and is ideally zero.
  • outside a false color generation region, the signs of the first and second color difference signals are the same, so the addition values (Cb′_add, Cr′_add) increase and the second distance information Saturation_add increases. That is, the second distance information Saturation_add is small in a false color generation region and large outside a false color generation region.
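The definitions of Cb_mean and Cr_mean are not reproduced in this excerpt; assuming they are image-wide means of the provisional addition values (so that offsets common to both exposures cancel), Saturation_add can be sketched as:

```python
import math

def saturation_add(pairs):
    """Second distance information per target pixel.  `pairs` is a list
    of (cb1, cr1, cb2, cr2) tuples.  Addition values nearly cancel for
    complementary (false color) pixels, so after subtracting the mean
    the distance is small in false color regions and large elsewhere."""
    adds = [(cb1 + cb2, cr1 + cr2) for cb1, cr1, cb2, cr2 in pairs]
    cb_mean = sum(cb for cb, _ in adds) / len(adds)
    cr_mean = sum(cr for _, cr in adds) / len(adds)
    return [math.hypot(cb - cb_mean, cr - cr_mean) for cb, cr in adds]

# Two complementary pixels (sums cancel) and two non-complementary ones:
dists = saturation_add([(3.0, 4.0, -3.0, -4.0), (5.0, -2.0, -5.0, 2.0),
                        (3.0, 4.0, 3.0, 4.0), (-3.0, -4.0, -3.0, -4.0)])
assert dists == [0.0, 0.0, 10.0, 10.0]
```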
  • the luminance signal Y of the first image taken in processing step S12 (first image shooting) is referred to as the “first luminance signal”, and the luminance signal Y of the second image taken in processing step S14 (second image shooting) is referred to as the “second luminance signal”.
  • a difference value Y diff between the first luminance signal and the second luminance signal is calculated for each target pixel having the same address.
  • condition (1) is defined as follows.
  • condition (2) is defined as follows.
  • condition (3) is defined as follows.
  • the conditions (1) and (2) are conditions for directly determining whether or not the pixel of interest is a false color generation area. Therefore, in another embodiment, when at least one of the condition (1) and the condition (2) is satisfied, the target pixel may be determined to be a false color generation region.
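The definitions of conditions (1) to (3) and the thresholds T1 to T3 are referenced but not reproduced in this excerpt. The sketch below therefore assumes plausible forms consistent with the surrounding discussion (large Saturation_sub, small Saturation_add, and a small luminance difference Y_diff at a false color pixel); all names and inequality directions are assumptions:

```python
def is_false_color_pixel(saturation_sub: float, saturation_add: float,
                         y_diff: float, t1: float, t2: float, t3: float,
                         require_all: bool = True) -> bool:
    """Per-pixel determination of a false color generation region.
    Assumed condition forms (not reproduced in the excerpt):
      (1) saturation_sub >= t1   # complementary colors -> large difference
      (2) saturation_add <= t2   # centered sums cancel -> small value
      (3) y_diff <= t3           # similar luminance at the same address
    """
    c1 = saturation_sub >= t1
    c2 = saturation_add <= t2
    c3 = y_diff <= t3
    if require_all:
        return c1 and c2 and c3
    # Alternative embodiment: satisfying (1) or (2) alone is sufficient.
    return c1 or c2

assert is_false_color_pixel(10.0, 1.0, 0.5, t1=5.0, t2=2.0, t3=1.0) is True
assert is_false_color_pixel(1.0, 10.0, 0.5, t1=5.0, t2=2.0, t3=1.0) is False
```

Lowering t1 (or raising t2, t3) raises the detection sensitivity, matching the user-adjustable sensitivity described in the text.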
  • the user can change the false color detection sensitivity by operating the operation unit 102 to change the settings of the threshold values T1 to T3.
  • the presence or absence of a false color is detected.
  • if the number of target pixels determined to be in a false color generation region in processing step S22 (determination of the false color generation region), or the ratio of such target pixels to the total number of effective pixels, is equal to or greater than a predetermined threshold, a detection result that a false color is present is obtained (S23: YES); if the number (or ratio) of such target pixels is less than the predetermined threshold, a detection result that no false color is present is obtained (S23: NO).
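The image-level decision of step S23 can be sketched as follows (the threshold and its count-versus-ratio form are described only qualitatively in the text, so the 0.1% value here is purely illustrative):

```python
def detect_false_color(region_flags, threshold_ratio: float = 0.001) -> bool:
    """Step S23 sketch: 'false color present' when the ratio of target
    pixels judged in step S22 to be in a false color generation region,
    relative to the total number of effective pixels, reaches the
    threshold."""
    ratio = sum(region_flags) / len(region_flags)
    return ratio >= threshold_ratio

assert detect_false_color([True] * 5 + [False] * 5) is True   # S23: YES
assert detect_false_color([False] * 1000) is False            # S23: NO
```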
  • This processing step S24 is executed when a detection result indicating no false color is obtained in the processing step S23 (detection of false color) (S23: NO).
  • in this processing step S24, since no false color was detected in the first image captured in processing step S12 (first image capture) or the second image captured in processing step S14 (second image capture), at least one of them is stored in the memory card 200 (or a built-in memory (not shown) provided in the photographing apparatus 1).
  • at this point, the completion of the photographing operation may be communicated to the photographer. As a result, the photographer can proceed to the next task, for example, changing the state (settings) of the photographing apparatus 1.
  • This processing step S25 is executed when a detection result indicating that there is a false color is obtained in processing step S23 (detection of false color) (S23: YES).
  • in this processing step S25, since a false color was detected in the first image captured in processing step S12 (first image capture) or the second image captured in processing step S14 (second image capture), the LPF drive is executed.
  • the driving cycle (rotation cycle) and driving amplitude (rotation radius) of the solid-state imaging device 112a are adjusted so that a stronger optical LPF effect (reduction of moire such as false colors) is obtained.
  • the subject is then imaged (the third image is captured). That is, when a false color is detected, the third image is captured. Therefore, if the photographer was notified in processing step S14 (second image capture) that the false color detection process was starting, the photographer can maintain the state (settings) of the photographing apparatus 1 until the capture of the third image is completed.
  • when a subject image containing a high-frequency component equal to or higher than the pixel pitch P is captured, shifting the incident position of the subject image on the light receiving surface 112aa of the solid-state imaging device 112a generates different false colors (false colors complementary to each other), producing a pair of images with a large difference. According to this embodiment, since a pair of images with a large difference is used for detecting false colors, false colors are detected with high accuracy.
  • Embodiments of the present invention are not limited to those described above, and various modifications are possible within the scope of the technical idea of the present invention.
  • the embodiments of the present application also include embodiments obtained by appropriately combining the embodiments explicitly described in the specification, obvious embodiments, and the like.
  • in the above, the false color is optically removed from the entire image by executing the LPF drive, but the present invention is not limited to this. False colors may instead be removed using image processing. With image processing, the false color can be removed not only from the entire image but also locally (for example, for each pixel of interest determined to be in a false color generation region). In addition, if information indicating the false color generation region is stored in association with the first image (second image), then even when false color correction is performed manually on another terminal (such as a computer), the false color generation region can easily be found based on that information. This reduces the trouble of searching the entire image for the false color generation region.
  • the incident position of the subject image on the light receiving surface 112aa is shifted by shifting the solid-state imaging device 112a itself.
  • the incident position of the subject image on the light receiving surface 112aa may instead be shifted by eccentrically driving another shake correction member (such as some of the lenses included in the photographing lens 106), by rotating around the optical axis AX a parallel plate arranged in the optical path of the photographing lens 106 at a slight inclination with respect to the optical axis AX, or by driving a variable apex-angle prism or the cover glass of the solid-state image sensor 112a (for example, the mechanism that vibrates the cover glass to shake off adhering dust).
  • the threshold values T1 to T3 are unchanged in all the determinations for each pixel in the processing step S22 (determination of false color generation region), but the present invention is not limited to this.
  • a range that can be regarded as being in focus in the image (a focus area, illustratively a range that falls within the depth of field) is obtained. Since the subject in the focus area (in-focus state) has high contrast and tends to include high-frequency components, false colors are likely to occur. On the other hand, a subject outside the in-focus area (out-of-focus state) has a low contrast and is unlikely to contain a high-frequency component, so that false colors are unlikely to occur.
  • the threshold values T1 to T3 may be changed to different values depending on whether the determination is made for a pixel inside the in-focus area obtained by the calculation or for a pixel outside the in-focus area.
  • for pixels inside the in-focus area, a threshold setting that increases the detection sensitivity (for example, setting the threshold T1 to a low value) is used.
  • for pixels outside the in-focus area, a threshold setting that lowers the detection sensitivity (for example, setting the threshold T1 to a high value) is used, or the false color detection process itself is omitted. As a result, the false color detection accuracy is further improved, and the detection speed is improved.
  • in the above, a false color is detected using a pair of images (the first image taken in processing step S12 (first image shooting) and the second image taken in processing step S14 (second image shooting)).
  • however, a false color may also be detected using three or more images.
  • a false color detection flow shown in FIG. 9 can be considered.
  • processing similar to the false color detection flow in FIG. 5 is simplified or omitted as appropriate.
  • This processing step S112 is executed when it is determined in processing step S111 (state determination) that the photographing apparatus 1 is in a stationary state (S111: YES).
  • in processing step S112, a subject image (first image) is taken with appropriate brightness and focus based on AE control and AF control.
  • a subject image (second image) is captured based on the AE control and AF control at the time of capturing the first image.
  • so that the same subject image is processed as being incident on pixels having the same address in the first image captured in processing step S112 (first image capture) and the third image captured in processing step S116 (third image capture), the address of each pixel constituting the third image is converted according to the shift amount of the solid-state image sensor 112a in processing step S115 (shift of the solid-state image sensor 112a), in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a.
  • so that the same subject image is processed as being incident on pixels having the same address in the first image captured in processing step S112 (first image capture) and the fourth image captured in processing step S118 (fourth image capture), the address of each pixel constituting the fourth image is converted according to the shift amount of the solid-state image sensor 112a in processing step S117 (shift of the solid-state image sensor 112a), in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a.
  • the color difference signals (Cb, Cr) of the first image photographed in processing step S112 (first image photographing) are referred to as “first color difference signals”, those of the second image photographed in processing step S114 (second image photographing) as “second color difference signals”, those of the third image photographed in processing step S116 (third image photographing) as “third color difference signals”, and those of the fourth image photographed in processing step S118 (fourth image photographing) as “fourth color difference signals”.
  • the luminance signal Y of the first image photographed in processing step S112 (first image photographing) is referred to as the “first luminance signal”, that of the second image photographed in processing step S114 (second image photographing) as the “second luminance signal”, that of the third image photographed in processing step S116 (third image photographing) as the “third luminance signal”, and that of the fourth image photographed in processing step S118 (fourth image photographing) as the “fourth luminance signal”.
  • This processing step S128 is executed when a detection result indicating that there is no false color is obtained in processing step S127 (detection of false color) (S127: NO).
  • in processing step S128, since no false color was detected in the first image captured in processing step S112 (first image capture), the second image captured in processing step S114 (second image capture), the third image captured in processing step S116 (third image capture), or the fourth image captured in processing step S118 (fourth image capture), at least one of them is stored in the memory card 200 (or a built-in memory (not shown) provided in the photographing apparatus 1).
  • This processing step S129 is executed when a detection result indicating that there is a false color is obtained in processing step S127 (detection of false color) (S127: YES).
  • in this processing step S129, since a false color was detected in processing step S127 (false color detection) from at least one of the first image photographed in processing step S112 (first image photographing), the second image photographed in processing step S114 (second image photographing), the third image photographed in processing step S116 (third image photographing), and the fourth image photographed in processing step S118 (fourth image photographing), LPF driving is executed.
  • the subject is imaged.
  • in the false color detection flow of FIG. 9, a change in the image (occurrence of a false color) is determined each time the solid-state imaging device 112a is shifted in a different direction, so in a region where a high-frequency component appears in the subject, the change between at least one pair of images becomes large. For this reason, false colors are detected with high accuracy regardless of the direction in which the high-frequency component appears in the subject (in the case of the vertically striped subject 7a, the left-right direction in which light and dark alternate).
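The advantage described above can be summarized in a short sketch (the pairwise detector stands for steps S15 to S23 and is supplied by the caller; all names are illustrative): shifting the sensor in several directions and combining the pairwise results makes the detection insensitive to the orientation of the subject's high-frequency component.

```python
def detect_false_color_multi(base_image, shifted_images, pair_detector) -> bool:
    """FIG. 9 sketch: compare the first image against each image captured
    after shifting the sensor in a different direction, and report a
    false color if ANY pair shows one.  A high-frequency component in
    some orientation changes strongly for at least one shift direction."""
    return any(pair_detector(base_image, img) for img in shifted_images)

# A vertically striped subject (7a) only changes under a horizontal shift;
# the multi-direction flow still detects it.
outcome = {('base', 'shift_h'): True, ('base', 'shift_v'): False}
assert detect_false_color_multi('base', ['shift_h', 'shift_v'],
                                lambda a, b: outcome[(a, b)]) is True
```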


Abstract

An image detection device comprises: a movement means that physically moves, on the basis of a pixel arrangement of an image pickup element having a transmission wavelength selection element, at least one of an optical element, which is a part of an image capture optical system, and the image pickup element, thereby moving the position of a subject image on the light reception surface of the image pickup element; a signal generation means that picks up, each time the position of the subject image is moved, the subject image taken in the image pickup element and that performs predetermined signal processings of the subject image as picked up, which include a color interpolation processing, thereby generating color difference signals; and a detection means that detects, on the basis of at least one pair of color difference signals for which the positions of the subject image on the light reception surface of the image pickup element are different from each other, an image degradation occurring in a captured image.

Description

Image detection apparatus, image detection method, and photographing apparatus
 The present invention relates to an image detection apparatus, an image detection method, and a photographing apparatus that detect image deterioration in an image.
 It is known that when a subject containing a high-frequency component comparable to or finer than the pixel pitch of an image sensor is captured, moire such as false color (color moire) occurs and the captured image deteriorates. Various techniques for removing this type of false color have therefore been proposed. For example, Japanese Patent Application Laid-Open No. 2011-109496 (hereinafter referred to as “Patent Document 1”) describes a specific configuration of a photographing apparatus capable of detecting a false color generated in a photographed image.
 The photographing apparatus described in Patent Document 1 captures a subject in an in-focus state and then captures the subject in an out-of-focus state. In the out-of-focus captured image, the contrast of the subject is lowered and high-frequency components are reduced, so the false color that occurred in the in-focus captured image is also reduced. The apparatus therefore calculates, for each block, the difference value between the color difference signal of the in-focus captured image and that of the out-of-focus captured image, and detects blocks with a large difference value as blocks in which a false color has occurred.
 As described above, the photographing apparatus of Patent Document 1 detects the occurrence of false color by comparing a high-contrast subject in which a false color occurs with a low-contrast subject in which the occurrence of false color is reduced. However, since the out-of-focus captured image only reduces, and does not eliminate, the false color compared with the in-focus captured image, the difference value of the color difference signals does not become large even in blocks where a false color occurs. It is therefore difficult for the photographing apparatus of Patent Document 1 to detect false colors with high accuracy.
 The present invention has been made in view of the above circumstances, and an object thereof is to provide an image detection apparatus, an image detection method, and a photographing apparatus capable of accurately detecting image deterioration.
 An image detection apparatus according to an embodiment of the present invention comprises: moving means for moving, relative to each other and in the direction of the light receiving surface of the image sensor, an image sensor having a predetermined pixel arrangement and a subject image incident on the image sensor via a photographing optical system; signal generating means for capturing the subject image in states where the relative position differs and generating color difference signals; and detecting means for detecting, based on the color difference signals, image deterioration occurring in the captured image.
 An image detection apparatus according to an embodiment of the present invention also comprises: moving means for moving the position of a subject image on the light receiving surface of an image sensor provided with a transmission wavelength selection element by physically moving at least one of the image sensor and a part of the optical elements in the photographing optical system based on the pixel arrangement of the image sensor; signal generating means for capturing, each time the position of the subject image is moved, the subject image taken into the image sensor and generating color difference signals by applying predetermined signal processing including color interpolation processing; and detecting means for detecting image deterioration occurring in the captured image based on at least a pair of color difference signals for which the positions of the subject image on the light receiving surface of the image sensor differ from each other.
 According to an embodiment of the present invention, by moving the position of the subject image on the light receiving surface of the image sensor, image deterioration of different colors occurs in regions containing high-frequency components in the captured images before and after the movement, so the change between the captured images tends to be large. Image deterioration can therefore be detected with high accuracy.
 In an embodiment of the present invention, the moving means may move the position of the subject image on the light receiving surface of the image sensor by an amount corresponding to the pixel interval so that the color information of the at least one pair of color difference signals has a mutually complementary color relationship in a region where image deterioration occurs.
 In an embodiment of the present invention, the moving means may move the position of the subject image on the light receiving surface of the image sensor by a distance of n pixels or (m + 0.5) pixels (where n is an odd natural number and m is 0 or an odd natural number) in a direction corresponding to the pixel arrangement.
 また、本発明の一実施形態において、検出手段は、撮像素子の受光面上での被写体像の位置が互いに異なる少なくとも一対の撮影画像について、同一の被写体像が同一アドレスの画素に入射されたものとして処理されるように、一方の撮影画像を構成する各画素のアドレスを被写体像の位置の移動量に応じて変換し、アドレスが変換された一方の撮影画像の色差信号及び他方の撮影画像の色差信号に基づいて画像劣化の検出を画素毎に行う構成としてもよい。 In one embodiment of the present invention, the detection means includes the same subject image incident on the pixel having the same address with respect to at least a pair of captured images having different positions of the subject image on the light receiving surface of the image sensor. The address of each pixel constituting one captured image is converted in accordance with the amount of movement of the position of the subject image, and the color difference signal of one captured image and the other captured image are converted. A configuration may be adopted in which image degradation is detected for each pixel based on the color difference signal.
 In one embodiment of the present invention, the detection means may be configured to compute at least one of a difference value and a sum value of the at least one pair of color difference signals, and to detect image degradation based on the computation result.
 In one embodiment of the present invention, the detection means may be configured to compute the difference value of the at least one pair of color difference signals for each pixel, and to detect a pixel whose computed difference value is equal to or greater than a first threshold as a pixel in the region where image degradation occurs.
 In one embodiment of the present invention, the detection means may be configured to compute the sum value of the at least one pair of color difference signals for each pixel, and to detect a pixel whose computed sum value is equal to or less than a second threshold as a pixel in the region where image degradation occurs.
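The two detection rules above (large pairwise difference, small pairwise sum) can be sketched together. This is a hedged illustration, not the claimed implementation: treating each (Cb, Cr) pair as a vector and using its Euclidean magnitude, as well as the threshold values, are assumptions made for the example.

```python
import numpy as np

def detect_false_color(cb1, cr1, cb2, cr2, t_diff, t_sum):
    """Flag pixels where false color (image degradation) occurs, given
    the color difference planes of two address-aligned captures.

    In a false-color region the paired color difference signals are
    roughly complementary, so per pixel their difference is large and
    their sum is small.  t_diff / t_sum play the role of the first /
    second thresholds described above.
    """
    diff = np.hypot(cb1 - cb2, cr1 - cr2)  # magnitude of the pair's difference
    summ = np.hypot(cb1 + cb2, cr1 + cr2)  # magnitude of the pair's sum
    return (diff >= t_diff) & (summ <= t_sum)

# Pixel 0: complementary colors across the two captures -> flagged.
# Pixel 1: identical neutral color in both captures -> not flagged.
cb1 = np.array([ 40.0, 0.0]); cr1 = np.array([-30.0, 0.0])
cb2 = np.array([-40.0, 0.0]); cr2 = np.array([ 30.0, 0.0])
mask = detect_false_color(cb1, cr1, cb2, cr2, t_diff=50.0, t_sum=10.0)
```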
 In one embodiment of the present invention, the signal generation means may generate, through the predetermined signal processing, a luminance signal paired with each color difference signal, and the detection means may detect image degradation based also on at least one pair of luminance signals.
 An imaging apparatus according to one embodiment of the present invention is an imaging apparatus having the above image detection device, and includes stationary-state determination means for determining whether or not the imaging apparatus is stationary; when the stationary-state determination means determines that the imaging apparatus is stationary, image degradation detection by the image detection device is performed.
 An imaging apparatus according to one embodiment of the present invention may be an imaging apparatus having the above image detection device and further including shake detection means for detecting shake of the imaging apparatus. In this case, the moving means corrects image blur caused by the shake of the imaging apparatus by physically moving at least one of a part of the optical elements in the imaging optical system and the image sensor, based on the shake of the imaging apparatus detected by the shake detection means.
 An image detection method according to one embodiment of the present invention includes: a step of moving the position of a subject image on the light-receiving surface of an image sensor equipped with transmission wavelength selection elements, by physically moving at least one of a part of the optical elements in the imaging optical system and the image sensor based on the pixel arrangement of the image sensor; a step of capturing the subject image incident on the image sensor each time the position of the subject image is moved, and generating color difference signals by applying predetermined signal processing including color interpolation processing; and a step of detecting image degradation occurring in the captured images based on at least a pair of color difference signals whose subject-image positions on the light-receiving surface of the image sensor differ from each other.
 According to an embodiment of the present invention, an image detection device, an image detection method, and an imaging apparatus capable of detecting image degradation with high accuracy are provided.
A block diagram showing the configuration of the imaging apparatus of an embodiment of the present invention.
A diagram schematically showing the configuration of the image shake correction device provided in the imaging apparatus of an embodiment of the present invention.
A diagram schematically showing the configuration of the image shake correction device provided in the imaging apparatus of an embodiment of the present invention.
A diagram assisting the explanation of LPF driving in an embodiment of the present invention.
A diagram showing a false color detection flow executed by the system controller in an embodiment of the present invention.
A diagram relating to a subject photographed in an embodiment of the present invention.
A diagram relating to a subject photographed in an embodiment of the present invention.
A diagram showing color difference signals plotted in a color space in an embodiment of the present invention.
A diagram showing a false color detection flow executed by the system controller in another embodiment.
 An imaging apparatus according to embodiments of the present invention will now be described with reference to the drawings. In the following, a digital single-lens reflex camera is described as one embodiment of the present invention. The imaging apparatus is not limited to a digital single-lens reflex camera and may be replaced with another type of apparatus having an imaging function, such as a mirrorless interchangeable-lens camera, a compact digital camera, a video camera, a camcorder, a tablet terminal, a PHS (Personal Handy-phone System), a smartphone, a feature phone, or a portable game machine.
[Overall configuration of the imaging apparatus 1]
 FIG. 1 is a block diagram showing the configuration of the imaging apparatus 1 of the present embodiment. As shown in FIG. 1, the imaging apparatus 1 includes a system controller 100, an operation unit 102, a drive circuit 104, a photographing lens 106, a diaphragm 108, a shutter 110, an image shake correction device 112, a signal processing circuit 114, an image processing engine 116, a buffer memory 118, a card interface 120, an LCD (Liquid Crystal Display) control circuit 122, an LCD 124, a ROM (Read Only Memory) 126, a gyro sensor 128, an acceleration sensor 130, a geomagnetic sensor 132, and a GPS (Global Positioning System) sensor 134. Although the photographing lens 106 actually consists of a plurality of lenses, it is shown as a single lens in FIG. 1 for convenience. The direction of the optical axis AX of the photographing lens 106 is defined as the Z-axis direction, and the two axes orthogonal to the Z-axis direction and to each other are defined as the X-axis direction (horizontal direction) and the Y-axis direction (vertical direction), respectively.
 The operation unit 102 includes various switches necessary for the user to operate the imaging apparatus 1, such as a power switch, a release switch, and a shooting mode switch. When the user operates the power switch, power is supplied from a battery (not shown) to the various circuits of the imaging apparatus 1 through a power line.
 The system controller 100 includes a CPU (Central Processing Unit) and a DSP (Digital Signal Processor). After power is supplied, the system controller 100 accesses the ROM 126, reads out a control program, loads it into a work area (not shown), and controls the entire imaging apparatus 1 by executing the loaded control program.
 When the release switch is operated, the system controller 100 drives and controls the diaphragm 108 and the shutter 110 via the drive circuit 104 so that proper exposure is obtained, based on, for example, a photometric value calculated from an image captured by a solid-state image sensor 112a (see FIG. 2 described later) or a photometric value measured by an exposure meter (not shown) built into the imaging apparatus 1. More specifically, the drive control of the diaphragm 108 and the shutter 110 is performed based on the AE (Automatic Exposure) function designated by the shooting mode switch, such as program AE, shutter-priority AE, or aperture-priority AE. The system controller 100 also performs AF (Autofocus) control together with the AE control. For the AF control, an active method, a phase difference detection method, an image-plane phase difference detection method, a contrast detection method, or the like is applied. AF modes include a central single-point ranging mode using a single central ranging area, a multi-point ranging mode using a plurality of ranging areas, a full-screen ranging mode based on distance information over the entire screen, and the like. The system controller 100 drives and controls the photographing lens 106 via the drive circuit 104 based on the AF result, and adjusts the focus of the photographing lens 106. Since the configuration and control of this type of AE and AF are well known, a detailed description is omitted here.
 FIGS. 2 and 3 are diagrams schematically showing the configuration of the image shake correction device 112. As shown in FIGS. 2 and 3, the image shake correction device 112 includes a solid-state image sensor 112a. The light flux from the subject passes through the photographing lens 106, the diaphragm 108, and the shutter 110, and is received by the light-receiving surface 112aa of the solid-state image sensor 112a. The light-receiving surface 112aa of the solid-state image sensor 112a is an XY plane containing the X axis and the Y axis. The solid-state image sensor 112a is a single-plate color CCD (Charge Coupled Device) image sensor having, as transmission wavelength selection elements, color filters in a Bayer pixel arrangement. The solid-state image sensor 112a accumulates the optical image formed on each pixel of the light-receiving surface 112aa as a charge corresponding to the amount of light, and generates and outputs R (Red), G (Green), and B (Blue) image signals. The solid-state image sensor 112a is not limited to a CCD image sensor, and may be replaced with a CMOS (Complementary Metal Oxide Semiconductor) image sensor or another type of imaging device. The solid-state image sensor 112a may also be one equipped with complementary-color filters, and the filters need not have a periodic color arrangement such as the Bayer arrangement, as long as some color filter is arranged for each pixel.
 The signal processing circuit 114 applies predetermined signal processing such as clamping and demosaicing (color interpolation) to the image signals input from the solid-state image sensor 112a, and outputs the result to the image processing engine 116. The image processing engine 116 applies predetermined signal processing such as matrix operation, Y/C separation, and white balance to the image signals input from the signal processing circuit 114 to generate a luminance signal Y and color difference signals Cb and Cr, and compresses them in a predetermined format such as JPEG (Joint Photographic Experts Group). The buffer memory 118 is used as a temporary storage location for processing data when the image processing engine 116 executes processing. The storage format of captured images is not limited to the JPEG format, and may be a RAW format in which only minimal image processing (for example, clamping) is performed.
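The Y/C separation step mentioned above can be illustrated with a standard RGB-to-YCbCr conversion. The ITU-R BT.601 full-range coefficients below are an assumption made for illustration; the actual matrix operation used by the image processing engine 116 is not specified in the text.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one demosaiced RGB pixel into a luminance signal Y and
    color difference signals Cb and Cr.

    Sketch using the common ITU-R BT.601 full-range coefficients (an
    assumption; the engine's actual matrix is not given).
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# A neutral gray pixel carries luminance but no color information,
# so both color difference signals are (numerically close to) zero.
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
```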
 A memory card 200 is removably inserted into the card slot of the card interface 120.
 The image processing engine 116 can communicate with the memory card 200 via the card interface 120. The image processing engine 116 stores the generated compressed image signal (captured image data) in the memory card 200 (or in a built-in memory, not shown, provided in the imaging apparatus 1).
 The image processing engine 116 also buffers the generated luminance signal Y and color difference signals Cb and Cr in frame memories (not shown) in units of frames. The image processing engine 116 sweeps the buffered signals out of each frame memory at predetermined timing, converts them into a video signal of a predetermined format, and outputs it to the LCD control circuit 122. The LCD control circuit 122 modulates and controls the liquid crystal based on the image signal input from the image processing engine 116. As a result, the captured image of the subject is displayed on the display screen of the LCD 124. The user can view, through the display screen of the LCD 124, a real-time through image (live view) captured with appropriate brightness and focus based on the AE control and the AF control.
 When the user performs a playback operation of a captured image, the image processing engine 116 reads the captured image data designated by the operation from the memory card 200 or the built-in memory, converts it into an image signal of a predetermined format, and outputs it to the LCD control circuit 122. The LCD control circuit 122 modulates and controls the liquid crystal based on the image signal input from the image processing engine 116, so that the captured image of the subject is displayed on the display screen of the LCD 124.
[Description of the driving of the shake correction member]
 The image shake correction device 112 drives a shake correction member. In the present embodiment, the shake correction member is the solid-state image sensor 112a. The shake correction member is not limited to the solid-state image sensor 112a; it may be another component, such as a part of the lenses included in the photographing lens 106, that can shift the incident position of the subject image on the light-receiving surface 112aa of the solid-state image sensor 112a by being physically moved with respect to the optical axis AX, or a combination of two or more of these members including the solid-state image sensor 112a.
 The image shake correction device 112 not only minutely drives (vibrates) the shake correction member in a plane orthogonal to the optical axis AX (that is, in the XY plane) in order to correct image shake, but also minutely drives (minutely rotates) the shake correction member in a plane orthogonal to the optical axis AX so as to obtain an optical low-pass filter (LPF) effect (reduction of moiré such as false color) by blurring the subject image by one pixel pitch. Hereinafter, for convenience of description, driving the shake correction member for image shake correction is referred to as "image shake correction driving", and driving the shake correction member so as to obtain the same effect as an optical LPF is referred to as "LPF driving".
(Description of image shake correction driving)
 The gyro sensor 128 is a sensor that detects information for controlling image shake correction. Specifically, the gyro sensor 128 detects angular velocities about two axes (about the X axis and about the Y axis) applied to the imaging apparatus 1, and outputs the detected angular velocities about the two axes to the system controller 100 as a shake detection signal indicating the shake in the XY plane (in other words, in the light-receiving surface 112aa of the solid-state image sensor 112a).
 As shown in FIGS. 2 and 3, the image shake correction device 112 includes a fixed support substrate 112b fixed to a structure such as a chassis of the imaging apparatus 1. The fixed support substrate 112b slidably supports a movable stage 112c on which the solid-state image sensor 112a is mounted.
 Magnets MYR, MYL, MXD, and MXU are attached to the surface of the fixed support substrate 112b facing the movable stage 112c. Yokes YYR, YYL, YXD, and YXU, which are magnetic bodies, are also attached to the fixed support substrate 112b. The yokes YYR, YYL, YXD, and YXU each have a shape extending from the fixed support substrate 112b around the movable stage 112c to a position facing the magnets MYR, MYL, MXD, and MXU, and form magnetic circuits with the magnets MYR, MYL, MXD, and MXU. Driving coils CYR, CYL, CXD, and CXU are attached to the movable stage 112c. When the driving coils CYR, CYL, CXD, and CXU receive a current within the magnetic fields of the magnetic circuits, a driving force is generated. The generated driving force minutely drives the movable stage 112c (solid-state image sensor 112a) in the XY plane with respect to the fixed support substrate 112b.
 Each corresponding set of magnet, yoke, and driving coil constitutes a voice coil motor. Hereinafter, for convenience, the voice coil motor consisting of the magnet MYR, the yoke YYR, and the driving coil CYR is denoted VCMYR; the voice coil motor consisting of the magnet MYL, the yoke YYL, and the driving coil CYL is denoted VCMYL; the voice coil motor consisting of the magnet MXD, the yoke YXD, and the driving coil CXD is denoted VCMXD; and the voice coil motor consisting of the magnet MXU, the yoke YXU, and the driving coil CXU is denoted VCMXU.
 The voice coil motors VCMYR, VCMYL, VCMXD, and VCMXU (driving coils CYR, CYL, CXD, and CXU) are PWM (Pulse Width Modulation) driven under the control of the system controller 100. The voice coil motors VCMYR and VCMYL are arranged below the solid-state image sensor 112a, side by side at a predetermined interval in the horizontal direction (X-axis direction), and the voice coil motors VCMXD and VCMXU are arranged beside the solid-state image sensor 112a, side by side at a predetermined interval in the vertical direction (Y-axis direction).
 Hall elements HYR, HYL, HXD, and HXU are attached to the fixed support substrate 112b at positions near the driving coils CYR, CYL, CXD, and CXU, respectively. The Hall elements HYR, HYL, HXD, and HXU detect the magnetic forces of the magnets MYR, MYL, MXD, and MXU, respectively, and output position detection signals indicating the position of the movable stage 112c (solid-state image sensor 112a) in the XY plane to the system controller 100. Specifically, the Y-axis position and tilt (rotation) of the movable stage 112c (solid-state image sensor 112a) are detected by the Hall elements HYR and HYL, and its X-axis position and tilt (rotation) are detected by the Hall elements HXD and HXU.
 The system controller 100 incorporates a driver IC for the voice coil motors. Based on the shake detection signal output from the gyro sensor 128 and the position detection signals output from the Hall elements HYR, HYL, HXD, and HXU, the system controller 100 calculates duty ratios so as not to upset the balance of the currents flowing through the voice coil motors VCMYR, VCMYL, VCMXD, and VCMXU (driving coils CYR, CYL, CXD, and CXU), within a range that does not exceed the rated power (allowable power) of the driver IC. The system controller 100 passes drive currents through the voice coil motors VCMYR, VCMYL, VCMXD, and VCMXU at the calculated duty ratios, and performs image shake correction driving of the solid-state image sensor 112a. As a result, image shake on the light-receiving surface 112aa of the solid-state image sensor 112a is corrected while the solid-state image sensor 112a is held at a specified position against gravity, disturbance, and the like (in other words, the position of the solid-state image sensor 112a is adjusted so that the incident position of the subject image on the light-receiving surface 112aa does not fluctuate).
(Description of LPF driving)
 Next, LPF driving will be described. In the present embodiment, the image shake correction device 112 passes predetermined drive currents through the voice coil motors VCMYR, VCMYL, VCMXD, and VCMXU to drive the movable stage 112c (solid-state image sensor 112a) so as to trace a predetermined trajectory in the XY plane during a single exposure period, causing the subject image to be incident on a plurality of pixels of the solid-state image sensor 112a having different detection colors (R, G, or B). In this way, the same effect as an optical LPF is obtained.
 FIGS. 4(a) and 4(b) are diagrams assisting the explanation of LPF driving. As shown in these figures, a plurality of pixels PIX are arranged in a matrix at a predetermined pixel pitch P on the light-receiving surface 112aa of the solid-state image sensor 112a. For convenience of explanation, each pixel PIX in the figures is labeled with a symbol (one of R, G, and B) corresponding to the filter color arranged in front of it.
 FIG. 4(a) shows an example in which the solid-state image sensor 112a is driven so as to trace a square trajectory centered on the optical axis AX. This square trajectory can be, for example, a closed square path whose side is the pixel pitch P of the solid-state image sensor 112a. In the example of FIG. 4(a), the solid-state image sensor 112a is driven alternately in the X-axis direction and the Y-axis direction in units of one pixel pitch P so as to follow the square path.
 FIG. 4(b) shows an example in which the solid-state image sensor 112a is driven so as to trace a rotationally symmetric circular trajectory centered on the optical axis AX. This circular trajectory can be, for example, a closed circular path whose radius r is √2/2 times the pixel pitch P of the solid-state image sensor 112a.
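The two drive trajectories can be sketched as generators of XY target positions for the stage controller. This is a hedged sketch: the side length P for the square and the radius (√2/2)·P for the circle come from the text, while the sample counts and function names are hypothetical.

```python
import math

def square_path(p, n_per_side=16):
    """Closed square path of side p (one pixel pitch) centered on the
    optical axis, traversed by moving alternately along X and Y as in
    FIG. 4(a).  Returns a list of (x, y) target positions."""
    h = p / 2.0
    corners = [(h, h), (-h, h), (-h, -h), (h, -h), (h, h)]
    points = []
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        for i in range(n_per_side):
            t = i / n_per_side
            points.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return points

def circular_path(p, n=64):
    """Closed circular path of radius (sqrt(2)/2) * p centered on the
    optical axis, as in FIG. 4(b)."""
    r = p * math.sqrt(2.0) / 2.0
    return [(r * math.cos(2.0 * math.pi * i / n),
             r * math.sin(2.0 * math.pi * i / n)) for i in range(n)]
```

The circle's radius equals the distance from the center of a 2 × 2 pixel block to each of its four pixel centers, which is consistent with the square path sweeping the same four-pixel neighborhood.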
 Information on the drive trajectory, including the pixel pitch P, is held in advance in the internal memory of the system controller 100 or in the ROM 126.
 As illustrated in FIG. 4(a) (or FIG. 4(b)), when the solid-state image sensor 112a is driven during the exposure period so as to trace the predetermined square trajectory (or circular trajectory) based on the drive trajectory information, the subject image is incident evenly on four color filters R, G, B, and G (four pixels PIX in two rows and two columns). An effect equivalent to that of an optical LPF is thereby obtained. That is, since the subject image incident on any given color filter (pixel PIX) is necessarily also incident on the surrounding color filters (pixels PIX), the same effect as when the subject image passes through an optical LPF (reduction of moiré such as false color) is obtained.
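The averaging effect described above can be modelled numerically: spreading each point of the subject image evenly over a 2 × 2 block of pixels is equivalent to a 2 × 2 box filter. This is an illustrative model of the LPF effect only, not the device's actual optics.

```python
import numpy as np

def lpf_effect(subject):
    """Model of the LPF effect of the trajectory drive: during the
    exposure every point of the subject image is spread evenly over a
    2 x 2 block of pixels, i.e. a 2 x 2 box average."""
    s = np.asarray(subject, dtype=float)
    return 0.25 * (s[:-1, :-1] + s[:-1, 1:] + s[1:, :-1] + s[1:, 1:])

# A pattern alternating at the pixel pitch (the highest spatial
# frequency, a typical source of false color) is flattened to its
# mean value: the moire-producing component is removed.
checker = np.indices((8, 8)).sum(axis=0) % 2
smoothed = lpf_effect(checker)
```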
 The user can switch image shake correction driving and LPF driving on and off individually by operating the operation unit 102.
[Description of false color detection]
 Next, a method for detecting false color, a kind of moiré that occurs in captured images, in the present embodiment will be described. FIG. 5 shows a false color detection flow executed by the system controller 100. The false color detection flow shown in FIG. 5 is started, for example, when the release switch is pressed.
[S11 in FIG. 5 (state determination)]
 In this processing step S11, it is determined whether or not the imaging apparatus 1 is stationary. Illustratively, the apparatus is determined to be stationary when the amplitude of signal components at or above a certain frequency in the shake detection signal input from the gyro sensor 128 remains within a certain threshold continuously for a certain period. A typical stationary state of the imaging apparatus 1 is a state in which the imaging apparatus 1 is fixed to a tripod. The posture of the imaging apparatus 1, including the stationary state, may be detected using information output from other sensors, such as the acceleration sensor 130, the geomagnetic sensor 132, and the GPS sensor 134, instead of the gyro sensor 128. To improve detection accuracy, sensor fusion techniques may be applied, for example, so that the information output from these sensors is used in combination.
 When the photographing apparatus 1 is not stationary, such as during hand-held shooting at a slow shutter speed, false colors are unlikely to occur in the first place because camera shake (or subject blur) blurs the subject. Therefore, in this embodiment, processing step S12 (capturing the first image) and the subsequent steps are executed to detect false colors in the captured image only when the photographing apparatus 1 is determined to be stationary, that is, in a state in which false colors are likely to occur (S11: YES). When the photographing apparatus 1 is not stationary, step S12 and the subsequent steps are not executed, which reduces the processing load on the system controller 100. The photographer may be notified, for example, via the display screen of the LCD 124, that step S12 and the subsequent steps will not be executed. When detection of false colors in the captured image is given priority, step S12 and the subsequent steps may be executed regardless of whether the photographing apparatus 1 is stationary; in this case, the photographer may likewise be notified via the display screen of the LCD 124 that these steps will be executed. The threshold used for the stationary-state determination may also be changed according to the shutter speed set on the photographing apparatus 1.
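The stationary-state test of S11 can be sketched as follows. The difference-based high-pass filter, the window of samples, and the threshold value are assumptions for illustration; the text above only requires that signal components at or above a certain frequency in the shake detection signal stay within a threshold for a certain period.

```python
# Illustrative sketch of the S11 stationary-state test. The simple
# difference-based high-pass filter and the threshold value are
# assumptions, not the patent's actual implementation.

def is_stationary(shake_samples, threshold=0.05):
    """Return True when the high-frequency content of the gyro shake
    signal stays within `threshold` over the sampled period."""
    # Successive differences act as a crude high-pass filter: slow
    # drift is suppressed, fast shake components remain.
    high_freq = [b - a for a, b in zip(shake_samples, shake_samples[1:])]
    return all(abs(v) <= threshold for v in high_freq)

# Tripod-like signal (slow drift only) versus hand-held shake.
print(is_stationary([0.00, 0.01, 0.02, 0.03, 0.04]))  # True
print(is_stationary([0.0, 0.2, -0.2, 0.3, -0.1]))     # False
```

In the device, the equivalent test would run on the shake detection signal of the gyro sensor 128, with the threshold adjusted according to the set shutter speed as noted above.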
[S12 in FIG. 5 (capturing the first image)]
 FIGS. 6(a) and 7(a) are enlarged views of different parts of the subject to be photographed. The subject shown in FIG. 6(a) has an oblique stripe pattern in which light and dark alternate at the same pitch as the pixel pitch P of the pixels PIX of the solid-state image sensor 112a, and the subject shown in FIG. 7(a) has a vertical stripe pattern in which light and dark alternate at the same pitch as P. For convenience of explanation, the 6 × 6 block of the subject enclosed by the thick solid line in FIG. 6(a) is referred to as the "oblique stripe subject 6a", and the 6 × 6 block enclosed by the thick solid line in FIG. 7(a) is referred to as the "vertical stripe subject 7a".
 In processing step S12, a subject image (first image) is captured with appropriate brightness and focus based on AE control and AF control. Here, it is assumed that both the oblique stripe subject 6a and the vertical stripe subject 7a are in focus.
 FIGS. 6(b) and 7(b) schematically show the oblique stripe subject 6a and the vertical stripe subject 7a as captured by each pixel PIX of the solid-state image sensor 112a; each is a front view of the light receiving surface 112aa of the solid-state image sensor 112a as seen from the subject side. In FIGS. 6(b) and 7(b), as in FIG. 4, each pixel PIX is labeled with the symbol (one of R, G, and B) corresponding to the color of the filter arranged in front of it. In each figure, a black pixel PIX indicates that it captured a dark portion of the stripe pattern, and a white pixel PIX indicates that it captured a bright portion. For convenience of explanation, pixel addresses (numerals 1 to 8 and symbols イ to チ) are attached to FIGS. 6(b) and 7(b). Strictly speaking, owing to the imaging action of the photographic lens 106, the subject is imaged on the solid-state image sensor 112a inverted both vertically and horizontally. As an example, the upper left corner of the oblique stripe subject 6a is imaged at the lower right corner on the solid-state image sensor 112a. In this embodiment, however, to avoid complicating the explanation, the upper left corner of the oblique stripe subject 6a is described as corresponding to the upper left corner of FIG. 6(b).
[S13 in FIG. 5 (shift of the solid-state image sensor 112a)]
 In processing step S13, the movable stage 112c is driven, and the solid-state image sensor 112a is shifted rightward (toward the arrowhead of the X axis in FIG. 6) by a distance of one pixel.
[S14 in FIG. 5 (capturing the second image)]
 In processing step S14 as well, a subject image (second image) is captured based on the AE control and AF control used when the first image was captured. After the second image has been captured, the photographer may be notified that false color detection processing using the first and second images will start.
 FIGS. 6(c) and 7(c) are views similar to FIGS. 6(b) and 7(b), respectively, schematically showing the oblique stripe subject 6a and the vertical stripe subject 7a as captured by each pixel PIX of the solid-state image sensor 112a. As shown in FIGS. 6(b), 6(c), 7(b), and 7(c), the incident position of the subject image on the light receiving surface 112aa shifts (leftward on the light receiving surface 112aa by a distance of one pixel) in accordance with the shift amount of the solid-state image sensor 112a in processing step S13 (shift of the solid-state image sensor 112a).
 Additionally, the shooting conditions in processing step S14 are identical to those in processing step S12 (capturing the first image) except that the solid-state image sensor 112a has been shifted rightward by a distance of one pixel. The second image is therefore substantially a photograph of a range shifted one pixel to the right relative to the first image. As can be seen from FIGS. 6(b), 6(c), 7(b), and 7(c), in the second image the subject appears shifted leftward as a whole by a distance of one pixel relative to the first image.
 The image signals of both the first image captured in processing step S12 (capturing the first image) and the second image captured in processing step S14 (capturing the second image) undergo the signal processing described above (clamping, demosaicing, matrix operations, Y/C separation, white balance, etc.) and are converted into a luminance signal Y and color difference signals Cb and Cr. Hereinafter, for convenience of explanation, the color difference signals (Cb, Cr) of the first image are referred to as the "first color difference signals", and the color difference signals (Cb, Cr) of the second image are referred to as the "second color difference signals". A "target pixel" refers to a pixel of each image at least after demosaic processing.
[S15 in FIG. 5 (electrical LPF processing)]
 In this embodiment, as will be described in detail later, the occurrence of a false color is detected based on the difference and addition values of the first and second color difference signals. At a high-contrast edge, however, the difference between the first and second color difference signals may become large even when no false color has occurred, and a false color may be erroneously detected at the edge. Furthermore, as also described later, although the first and second color difference signals used in the difference and addition calculations are the color difference signals of pixels capturing the same part of the subject image, they are demosaiced using pixels at different addresses because the solid-state image sensor 112a was shifted in processing step S13 (shift of the solid-state image sensor 112a). The first and second color difference signals may therefore carry slightly different color information even though they correspond to the same part of the subject image. In processing step S15, LPF processing is therefore applied to the image signals (luminance signal Y and color difference signals Cb, Cr). Blurring the image with the LPF suppresses erroneous false color detection at edges and reduces the color-information error between the first and second color difference signals.
[S16 in FIG. 5 (address conversion)]
 In this embodiment, a false color generation region, that is, a region of the captured image in which a false color occurs, is detected on a per-pixel basis. To improve the false color detection accuracy, it is desirable to calculate the difference and addition values between target pixels on which the same part of the subject image is incident. Therefore, in processing step S16, the address of each pixel of the second image captured in processing step S14 (capturing the second image) is converted in accordance with the shift amount of the solid-state image sensor 112a in processing step S13 (in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa), so that the same part of the subject image is processed as if it were incident on the pixel with the same address.
 As an example, the address (ハ, 2) of the pixel PIXb in FIG. 6(c) is converted to the address (ニ, 2), that is, the address of the pixel located one pixel to the right in accordance with the shift amount of the incident position of the subject image, which is the address of the pixel PIXa (see FIG. 6(b)) that captures the same part of the subject image as the pixel PIXb.
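The address conversion of S16 can be sketched as follows; the function name and the use of None padding for addresses with no counterpart are assumptions for illustration.

```python
# Sketch of the S16 address conversion. Shifting the sensor one pixel
# to the right (S13) makes the subject appear one pixel to the left in
# the second image, so converting each address one pixel to the right
# re-aligns the second image with the first. The None padding for
# pixels with no counterpart is an illustrative assumption.

def convert_addresses(image, shift_x=1):
    """Shift each row of a 2D pixel list right by shift_x addresses."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - shift_x):
            out[y][x + shift_x] = image[y][x]
    return out

second = [[10, 20, 30],
          [40, 50, 60]]
print(convert_addresses(second))  # [[None, 10, 20], [None, 40, 50]]
```

After this conversion, the per-pixel comparisons in the following steps can be made between values at the same address in the first and second images.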
[S17 in FIG. 5 (calculation of the difference values of the color difference signals)]
 The oblique stripe subject 6a and the vertical stripe subject 7a contain high-frequency components at the same pitch as the pixel pitch P of the pixels PIX of the solid-state image sensor 112a. Therefore, when the image signals of the oblique stripe subject 6a and the vertical stripe subject 7a captured in processing steps S12 (capturing the first image) and S14 (capturing the second image) are demosaiced, false colors occur.
 Specifically, in the example of FIG. 6(b), the luminance at the G pixels PIX is high and the luminance at the R and B pixels PIX is low. The G component therefore dominates the color information of each pixel PIX after demosaic processing, and a green false color occurs in the oblique stripe subject 6a. In the example of FIG. 6(c), conversely, the luminance at the R and B pixels PIX is high and the luminance at the G pixels PIX is low. The R and B components therefore dominate the color information of each pixel PIX after demosaic processing, and a purple false color occurs in the oblique stripe subject 6a.
 In the example of FIG. 7(b), the luminance at the B pixels PIX is high and the luminance at the R pixels PIX is low. The color information of each pixel PIX after demosaic processing is therefore a mixture of B and G components, and a false color intermediate between blue and green (for example, a color near cyan) occurs. In the example of FIG. 7(c), conversely, the luminance at the R pixels PIX is high and the luminance at the B pixels PIX is low. The color information of each pixel PIX after demosaic processing is therefore a mixture of R and G components, and a false color intermediate between red and green (a color near orange) occurs.
 FIG. 8 shows the color space defined by the two axes Cb and Cr. In FIG. 8, plot 6b corresponds to the green false color that occurs at the target pixel in the example of FIG. 6(b); plot 6c corresponds to the purple false color in the example of FIG. 6(c); plot 7b corresponds to the blue-green intermediate false color in the example of FIG. 7(b); and plot 7c corresponds to the red-green intermediate false color in the example of FIG. 7(c). The coordinates of each plot are as follows, where the origin O is at (0, 0) and M, N, M′, N′, α, and β are all positive numbers.
Plot 6b: (Cb, Cr) = (-M, -N)
Plot 6c: (Cb, Cr) = (M, N)
Plot 7b: (Cb, Cr) = (M′, -N′)
Plot 7c: (Cb, Cr) = (-M′+α, N′+β)
 As shown in FIG. 8, this embodiment detects the locations (target pixels) where false colors occur by exploiting the fact that the false color itself, generated when a subject image containing high-frequency components comparable to the pixel pitch P of the pixels PIX of the solid-state image sensor 112a is captured, changes when the incident position of the subject image on the light receiving surface 112aa is shifted. More specifically, among the target pixels capturing high-frequency components of the subject image, those at which the color information of the first color difference signals and that of the second color difference signals are complementary to each other are judged to be locations where a false color occurs.
 In this embodiment, to obtain the above complementary color relationship, the incident position of the subject image on the light receiving surface 112aa is shifted leftward (along the horizontal pixel arrangement direction) by one pixel (the solid-state image sensor 112a is shifted one pixel to the right relative to the subject image), but the present invention is not limited to this. The shift direction may, for example, be rightward (horizontal pixel arrangement direction), upward or downward (vertical pixel arrangement direction), or any of the diagonal directions toward the upper right, lower right, upper left, or lower left (directions at 45 degrees to the horizontal and vertical arrangement directions), or any other direction suited to the pixel arrangement, and these may be used in combination. The shift distance may be another odd number of pixels, such as 3 or 5 pixels, or a half pixel or a half pixel plus an odd number of pixels (for example, 1.5 or 2.5 pixels); that is, either n pixels or (m + 0.5) pixels (where n is an odd natural number and m is zero or an odd natural number), and it can be selected according to the accuracy of the shift drive mechanism. Depending on the subject and the shooting conditions, the shift distance may be any distance other than an even number of pixels and its vicinity (for example, other than 1.9 to 2.1 pixels). Even when the incident position of the subject image on the light receiving surface 112aa is shifted by these amounts (direction and distance), the color information of the first and second color difference signals becomes complementary at the target pixels that capture high-frequency components of the subject image (where false colors occur), so the occurrence of a false color can be detected.
 In processing step S17, after the address conversion in processing step S16 (the same applies hereinafter), the difference values (Cbsub, Crsub) between the first and second color difference signals are calculated for each pair of target pixels with the same address. Specifically, with the Cb and Cr of the first color difference signals defined as Cb1 and Cr1, and the Cb and Cr of the second color difference signals at the same address defined as Cb2 and Cr2, the difference values (Cbsub, Crsub) are calculated by the following equations.
Cbsub = Cb1 - Cb2
Crsub = Cr1 - Cr2
[S18 in FIG. 5 (calculation of the first distance information)]
 In processing step S18, the distance in the color space between the first and second color difference signals (first distance information Saturation_sub) is calculated for each pair of target pixels with the same address by the following equation.
Saturation_sub = √(Cbsub² + Crsub²)
 For the plot pairs in FIG. 8 (plot 6b with plot 6c, and plot 7b with plot 7c), the first distance information Saturation_sub is 2√(M² + N²) and √{(2M′-α)² + (2N′+β)²}, respectively.
 As can be seen from the positional relationship of each plot pair in FIG. 8, the stronger the complementary relationship between the color information of the first and second color difference signals, the larger the first distance information Saturation_sub; the less complementary the color information (for example, when the hues are similar), the smaller Saturation_sub. That is, Saturation_sub is ideally zero outside false color generation regions, and becomes larger the more strongly a false color occurs in a false color generation region.
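The per-pixel calculations of S17 and S18 can be sketched as follows; the function name is an assumption, and the formulas are those given above.

```python
import math

# Sketch of S17 and S18: the color-difference subtraction and the first
# distance information for one pair of target pixels with the same
# address (function name is illustrative).

def first_distance(cb1, cr1, cb2, cr2):
    cb_sub = cb1 - cb2                 # Cbsub = Cb1 - Cb2
    cr_sub = cr1 - cr2                 # Crsub = Cr1 - Cr2
    return math.hypot(cb_sub, cr_sub)  # Saturation_sub

# Complementary pair (plots 6b and 6c with M = 3, N = 4):
# Saturation_sub = 2 * sqrt(M**2 + N**2) = 10, a strong indication.
print(first_distance(-3.0, -4.0, 3.0, 4.0))  # 10.0

# Similar hues: Saturation_sub stays near zero.
print(first_distance(2.0, 1.0, 2.1, 1.1))
```

The complementary pair produces a large distance and the similar-hue pair a small one, matching the behavior described above.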
[S19 in FIG. 5 (calculation of the addition values of the color difference signals)]
 In processing step S19, the addition values (Cb′add, Cr′add) of the first and second color difference signals are calculated for each pair of target pixels with the same address.
 Specifically, provisional addition values (Cbadd, Cradd) are first calculated by the following equations.
Cbadd = Cb1 + Cb2
Cradd = Cr1 + Cr2
 Next, the average values (Cbmean, Crmean) of the provisional addition values are calculated by the following equations.
Cbmean = Cbadd / 2
Crmean = Cradd / 2
 Next, the addition values (Cb′add, Cr′add) are calculated by the following equations.
Cb′add = Cbadd - Cbmean
Cr′add = Cradd - Crmean
[S20 in FIG. 5 (calculation of the second distance information)]
 In processing step S20, the second distance information Saturation_add in the color space is calculated for each pair of target pixels with the same address, based on the addition values (Cb′add, Cr′add), by the following equation.
Saturation_add = √(Cb′add² + Cr′add²)
 For the plot pairs in FIG. 8 (plot 6b with plot 6c, and plot 7b with plot 7c), the second distance information Saturation_add is zero and √(α² + β²), respectively.
 Here, the first and second color difference signals both change under the influence of the light source, the exposure conditions, the white balance, and so on at the time of image capture. Since both sets of signals change in the same way, however, their relative distance in the color space (that is, the first distance information Saturation_sub) changes little.
 The second distance information Saturation_add, on the other hand, changes greatly when the first and second color difference signals shift relative to the position of the origin O of the color space under these influences. Therefore, in processing step S19 (calculation of the addition values of the color difference signals), the second distance information Saturation_add is not computed directly from the provisional addition values (Cbadd, Cradd); instead, the addition values (Cb′add, Cr′add) are calculated so as to cancel or reduce the positional change relative to the origin O caused by these influences (that is, to bring the midpoint between the first and second color difference signals, which has moved away from the origin O, back toward the origin O).
 As can be seen from the positional relationship of each plot pair in FIG. 8, when the color information of the first and second color difference signals is complementary, the signs are opposite, so the addition values (Cb′add, Cr′add) become small and the second distance information Saturation_add becomes small; the stronger the complementary relationship, the smaller Saturation_add, ideally reaching zero. Conversely, when the color information of the first and second color difference signals is not complementary (for example, when the hues are similar), the signs are the same, so the addition values (Cb′add, Cr′add) become large and Saturation_add becomes large. That is, Saturation_add is large outside false color generation regions and small within them.
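A sketch of S19 and S20 follows. It reads (Cbmean, Crmean) as an average of the provisional addition values over all target pixels, which cancels a global offset caused by the light source, exposure, or white balance; this reading, which reproduces the worked example Saturation_add = √(α² + β²) for plots 7b and 7c when the image-wide offset is zero, is an assumption, as are the names used.

```python
import math

# Sketch of S19 and S20. ASSUMPTION: (Cbmean, Crmean) is taken as the
# average of the provisional addition values over all target pixels, so
# that a global offset from light source / exposure / white balance is
# cancelled before the second distance is computed.

def second_distance(pairs):
    """pairs: list of ((Cb1, Cr1), (Cb2, Cr2)) per target pixel.
    Returns the list of Saturation_add values."""
    cb_add = [a[0] + b[0] for a, b in pairs]  # Cbadd = Cb1 + Cb2
    cr_add = [a[1] + b[1] for a, b in pairs]  # Cradd = Cr1 + Cr2
    cb_mean = sum(cb_add) / len(cb_add)       # global offset estimate
    cr_mean = sum(cr_add) / len(cr_add)
    return [math.hypot(cb - cb_mean, cr - cr_mean)  # from Cb'add, Cr'add
            for cb, cr in zip(cb_add, cr_add)]

pairs = [((-3.0, -4.0), (3.0, 4.0)),   # complementary: sums cancel
         ((-1.0, 2.0), (1.0, -2.0)),   # complementary: sums cancel
         ((2.0, 1.0), (2.0, 1.0))]     # same hue: sums reinforce
vals = second_distance(pairs)
# Saturation_add is small for the complementary pairs, large otherwise.
```

As described above, the complementary pairs yield small second distances and the same-hue pair a large one, regardless of any common offset added to all signals.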
[S21 in FIG. 5 (calculation of the difference value of the luminance signals)]
 For convenience of explanation, the luminance signal Y of the first image captured in processing step S12 is referred to as the "first luminance signal", and the luminance signal Y of the second image captured in processing step S14 as the "second luminance signal". In processing step S21, the difference value Ydiff between the first and second luminance signals is calculated for each pair of target pixels with the same address. Specifically, with the first luminance signal defined as Y1 and the second luminance signal as Y2, the difference value Ydiff is calculated by the following equation.
Ydiff = |Y1 - Y2|
[S22 in FIG. 5 (determination of false color generation regions)]
 In processing step S22, whether or not each target pixel (per address) belongs to a false color generation region is determined. Specifically, a pixel is determined to be in a false color generation region when all of the following conditions (1) to (3) are satisfied.
・Condition (1)
 As described above, the larger the first distance information Saturation_sub, the stronger the complementary relationship between the color information of the first and second color difference signals, and hence the more likely the target pixel is in a false color generation region. Condition (1) is therefore defined as follows.
Condition (1): Saturation_sub ≥ threshold T1
・Condition (2)
 As described above, the smaller the second distance information Saturation_add, the stronger the complementary relationship between the color information of the first and second color difference signals, and hence the more likely the target pixel is in a false color generation region. Condition (2) is therefore defined as follows.
Condition (2): Saturation_add ≤ threshold T2
 Consider a case in which a large image blur occurs in only one of the first image captured in processing step S12 and the second image captured in processing step S14. In this case, the difference between the first and second color difference signals becomes large, and a false color may be erroneously detected. Condition (3) is therefore defined as follows.
Condition (3): Ydiff ≤ threshold T3
 That is, when the difference value Ydiff is larger than the threshold T3, false color detection is not performed for the target pixel because of the risk of erroneous detection described above (the target pixel is treated as not belonging to a false color generation region).
Conditions (1) and (2) directly determine whether the target pixel is in a false color generation region. In another embodiment, therefore, the target pixel may be determined to be in a false color generation region when at least one of conditions (1) and (2) is satisfied.
The user can also change the false color detection sensitivity by operating the operation unit 102 to change the settings of the thresholds T1 to T3.
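The per-pixel determination built from conditions (1) to (3) can be sketched in code. This is a minimal sketch, assuming that Saturation_sub and Saturation_add are the magnitudes of the per-pixel difference and sum of the (Cb, Cr) vectors of the two images, that a pixel is flagged only when all three conditions hold, and that the threshold values shown are illustrative placeholders, not values from the specification:

```python
import numpy as np

def false_color_region_mask(cb1, cr1, cb2, cr2, y1, y2,
                            t1=40.0, t2=20.0, t3=16.0):
    """Sketch of the per-pixel judgment of processing step S22.

    cb1/cr1, cb2/cr2: first and second color difference signals;
    y1, y2: first and second luminance signals; all float arrays of
    identical shape, already address-converted so that identical
    indices see the same subject.  t1-t3 are placeholder thresholds.
    """
    # Assumed definitions of the two distance measures: magnitude of
    # the difference and of the sum of the color difference vectors.
    saturation_sub = np.hypot(cb1 - cb2, cr1 - cr2)  # first distance information
    saturation_add = np.hypot(cb1 + cb2, cr1 + cr2)  # second distance information
    y_diff = np.abs(y1 - y2)

    cond1 = saturation_sub >= t1  # condition (1): strong complementary difference
    cond2 = saturation_add <= t2  # condition (2): the two colors nearly cancel
    cond3 = y_diff <= t3          # condition (3): guard against image blur
    return cond1 & cond2 & cond3
```

For complementary false colors the two color difference vectors point in opposite directions, so their difference is large (condition (1)) while their sum is close to the origin (condition (2)); condition (3) suppresses pixels whose luminance changed too much between the captures.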
[S23 in FIG. 5 (Detection of False Color)]
In this processing step S23, the presence or absence of a false color is detected. Illustratively, when the number of target pixels determined in processing step S22 (determination of the false color generation region) to be in a false color generation region (or the ratio of such target pixels to the total number of effective pixels) is equal to or greater than a predetermined threshold, the detection result is that a false color is present (S23: YES); when that number (or ratio) is less than the predetermined threshold, the detection result is that no false color is present (S23: NO).
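The frame-level decision of step S23 reduces the per-pixel mask to a yes/no result. A minimal sketch, using the ratio variant described above with a placeholder threshold:

```python
import numpy as np

def detect_false_color(region_mask, ratio_threshold=0.001):
    """Sketch of processing step S23: report a false color when the
    fraction of pixels judged to be in a false color generation
    region reaches a threshold.  The threshold value is a
    placeholder, not one from the specification."""
    ratio = np.count_nonzero(region_mask) / region_mask.size
    return bool(ratio >= ratio_threshold)
```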
[S24 in FIG. 5 (Storing Captured Images)]
This processing step S24 is executed when a detection result of no false color is obtained in processing step S23 (detection of false color) (S23: NO). In this processing step S24, since no false color was detected in the first image captured in processing step S12 (capturing the first image) or the second image captured in processing step S14 (capturing the second image), at least one of them is stored in the memory card 200 (or a built-in memory, not shown, provided in the photographing apparatus 1). At this point, the photographer may be notified that the shooting operation has been completed. In particular, when the photographer has been notified in processing step S14 (capturing the second image) that false color detection processing using the first and second images is starting, this notification tells the photographer that the shooting operation is complete. The photographer can then proceed to the next task, for example changing the state (settings) of the photographing apparatus 1.
[S25 in FIG. 5 (Imaging under LPF Driving)]
This processing step S25 is executed when a detection result of false color present is obtained in processing step S23 (detection of false color) (S23: YES). In this processing step S25, since a false color was detected in the first image captured in processing step S12 (capturing the first image) and the second image captured in processing step S14 (capturing the second image), LPF driving is executed. If imaging under LPF driving has already been performed, the driving period (rotation period) and driving amplitude (rotation radius) of the solid-state image sensor 112a are adjusted so that a stronger optical LPF effect (reduction of moire such as false colors) is obtained. That is, the shooting conditions are changed to ones more advantageous for reducing false colors. The subject is then imaged (a third image is captured). In other words, when a false color is detected, a third image is captured. Therefore, when the photographer has been notified in processing step S14 (capturing the second image) that false color detection processing is starting, the photographer can keep the photographing apparatus 1 in the same state (settings) until capture of the third image is complete.
According to the present embodiment, when a subject image containing high-frequency components at or above roughly the pixel pitch P is captured, shifting the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a causes different false colors (false colors in a mutually complementary relationship) to appear, producing a pair of images with a large difference. Because images with a large difference are used for false color detection, false colors are detected with high accuracy.
The foregoing is a description of exemplary embodiments of the present invention. Embodiments of the present invention are not limited to those described above, and various modifications are possible within the scope of the technical idea of the present invention. For example, appropriate combinations of the embodiments explicitly illustrated in the specification and obvious embodiments are also included in the embodiments of the present application.
In the above embodiment, false colors are removed optically from the entire image by executing LPF driving, but the present invention is not limited to this. False colors may instead be removed by image processing. With image processing, false colors can be removed not only from the entire image but also locally (for example, for each target pixel determined to be in a false color generation region). Furthermore, if information indicating the false color generation regions is stored in association with the first image (second image), then even when false color correction is performed manually on another terminal (such as a computer), the operator can easily locate the false color generation regions based on that information. This reduces the effort of searching the entire image for false color generation regions.
In the above embodiment, the incident position of the subject image on the light receiving surface 112aa is shifted by shifting the solid-state image sensor 112a itself, but the present invention is not limited to this. For example, the incident position of the subject image on the light receiving surface 112aa may be shifted by eccentrically driving another shake correction member (such as some of the lenses included in the photographing lens 106), by rotating about the optical axis AX a parallel plate placed in the optical path of the photographing lens 106 at a slight tilt from perpendicular to the optical axis AX, or by driving a variable apex angle prism or the cover glass of the solid-state image sensor 112a (one equipped with a vibration function for removing dust and the like adhering to the cover glass).
In the above embodiment, the thresholds T1 to T3 are unchanged across all the determinations for each pixel in processing step S22 (determination of the false color generation region), but the present invention is not limited to this.
For example, AF control performed by the system controller 100 yields a range within the image that can be regarded as in focus (the in-focus area; illustratively, a range falling within the depth of field). A subject in the in-focus area (in-focus state) has high contrast and readily contains high-frequency components, so false colors are likely to occur there. Conversely, a subject outside the in-focus area (out-of-focus state) has low contrast and rarely contains high-frequency components, so false colors are unlikely to occur. Accordingly, in processing step S22 (determination of the false color generation region), the thresholds T1 to T3 may be changed to different values depending on whether the determination is for a pixel inside or outside the computed in-focus area. Illustratively, since false colors are likely to occur (and be conspicuous) at pixels in the in-focus area, the thresholds are set for high detection sensitivity (for example, threshold T1 is set to a low value); since false colors are unlikely to occur (or be conspicuous) at other pixels, the thresholds are set for low detection sensitivity (for example, threshold T1 is set to a high value), or the false color detection processing itself is omitted. As a result, the false color detection accuracy is further improved, or the detection speed is improved.
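The focus-dependent threshold variant described above can be sketched as a per-pixel threshold map. A minimal sketch, assuming a boolean in-focus mask is available from AF control and using placeholder threshold values:

```python
import numpy as np

def threshold_map(in_focus_mask, t_focus=30.0, t_defocus=80.0):
    """Sketch of the variable-threshold variant: use a lower
    Saturation_sub threshold T1 (higher detection sensitivity)
    inside the in-focus area and a higher one (lower sensitivity)
    elsewhere.  The concrete values are placeholders."""
    return np.where(in_focus_mask, t_focus, t_defocus)
```

The resulting array can replace the scalar T1 in the per-pixel comparison, so that pixels outside the in-focus area need a much stronger complementary difference before being flagged.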
In the above embodiment, false colors are detected using a pair of images (the first image captured in processing step S12 (capturing the first image) and the second image captured in processing step S14 (capturing the second image)); in another embodiment, however, false colors may be detected using three or more images.
For example, for the vertically striped subject 7a, shifting the solid-state image sensor 112a in the light/dark direction of the stripes (leftward) in processing step S13 (shift of the solid-state image sensor 112a), as in the above embodiment, readily yields false color images in a complementary relationship. However, when the solid-state image sensor 112a is shifted upward, downward, or diagonally with respect to the vertically striped subject 7a, the high-luminance color components hardly change before and after the shift, and false color images in a complementary relationship become difficult to obtain. Therefore, to further improve the accuracy of false color detection (from another viewpoint, to prevent missed detections), the false color detection flow shown in FIG. 9 can be considered. In the description of the false color detection flow of FIG. 9, processing similar to that of the false color detection flow of FIG. 5 is simplified or omitted as appropriate.
[S111 in FIG. 9 (State Determination)]
In this processing step S111, it is determined whether or not the photographing apparatus 1 is stationary.
[S112 in FIG. 9 (Capturing the First Image)]
This processing step S112 is executed when it is determined in processing step S111 (state determination) that the photographing apparatus 1 is stationary (S111: YES). In this processing step S112, a subject image (first image) is captured with appropriate brightness and focus based on AE control and AF control.
[S113 in FIG. 9 (Shift of the Solid-State Image Sensor 112a)]
In this processing step S113, the movable stage 112c is driven, and the solid-state image sensor 112a is shifted rightward by a distance of one pixel.
[S114 in FIG. 9 (Capturing the Second Image)]
In this processing step S114, a subject image (second image) is captured based on the AE control and AF control used when capturing the first image.
[S115 in FIG. 9 (Shift of the Solid-State Image Sensor 112a)]
In this processing step S115, the solid-state image sensor 112a is shifted upward by a distance of one pixel relative to its position when the first image was captured in processing step S112 (capturing the first image).
[S116 in FIG. 9 (Capturing the Third Image)]
In this processing step S116, a subject image (third image) is captured based on the AE control and AF control used when capturing the first image.
[S117 in FIG. 9 (Shift of the Solid-State Image Sensor 112a)]
In this processing step S117, the solid-state image sensor 112a is shifted diagonally to the lower left by a distance of one pixel relative to its position when the first image was captured in processing step S112 (capturing the first image).
[S118 in FIG. 9 (Capturing the Fourth Image)]
In this processing step S118, a subject image (fourth image) is captured based on the AE control and AF control used when capturing the first image.
[S119 in FIG. 9 (Electrical LPF Processing)]
In this processing step S119, LPF processing is applied to the image signals (luminance signal Y and color difference signals Cb, Cr) in order to suppress erroneous detection of false colors and the like.
[S120 in FIG. 9 (Address Conversion)]
In this processing step S120, the address of each pixel constituting the second image is converted in accordance with the shift amount of the solid-state image sensor 112a in processing step S113 (shift of the solid-state image sensor 112a) (in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a), so that, for the first image captured in processing step S112 (capturing the first image) and the second image captured in processing step S114 (capturing the second image), identical subject images are processed as having been incident on pixels with identical addresses.
Likewise, in this processing step S120, the address of each pixel constituting the third image is converted in accordance with the shift amount of the solid-state image sensor 112a in processing step S115 (shift of the solid-state image sensor 112a) (in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a), so that, for the first image captured in processing step S112 (capturing the first image) and the third image captured in processing step S116 (capturing the third image), identical subject images are processed as having been incident on pixels with identical addresses.
Likewise, in this processing step S120, the address of each pixel constituting the fourth image is converted in accordance with the shift amount of the solid-state image sensor 112a in processing step S117 (shift of the solid-state image sensor 112a) (in other words, the shift amount of the incident position of the subject image on the light receiving surface 112aa of the solid-state image sensor 112a), so that, for the first image captured in processing step S112 (capturing the first image) and the fourth image captured in processing step S118 (capturing the fourth image), identical subject images are processed as having been incident on pixels with identical addresses.
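The address conversion of step S120 amounts to re-indexing each shifted image so the same subject falls on the same pixel address as in the first image. A minimal sketch, assuming whole-pixel sensor shifts and modeling the re-addressing with np.roll (the sign convention and the handling of the wrapped border are assumptions; a real implementation would crop or mask the border region):

```python
import numpy as np

def align_to_first(image, shift_px):
    """Sketch of the address conversion of processing step S120:
    re-index an image captured after a sensor shift so that
    identical subject images land on identical pixel addresses.

    shift_px: (dy, dx) sensor shift in whole pixels relative to the
    first capture.  Integer shifts make the conversion a pure
    re-addressing with no interpolation.
    """
    dy, dx = shift_px
    # Undo the shift; np.roll wraps the border, which a real
    # implementation would discard.
    return np.roll(image, shift=(-dy, -dx), axis=(0, 1))
```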
[S121 in FIG. 9 (Calculation of Color Difference Signal Difference Values)]
For convenience of explanation, the color difference signals (Cb, Cr) of the first image captured in processing step S112 (capturing the first image) are referred to as the "first color difference signal", those of the second image captured in processing step S114 (capturing the second image) as the "second color difference signal", those of the third image captured in processing step S116 (capturing the third image) as the "third color difference signal", and those of the fourth image captured in processing step S118 (capturing the fourth image) as the "fourth color difference signal". In this processing step S121, the difference values (Cbsub, Crsub) are calculated for each pixel with the same address, for each of the following pairs: the first and second color difference signals, the first and third color difference signals, and the first and fourth color difference signals.
[S122 in FIG. 9 (Calculation of the First Distance Information)]
In this processing step S122, the first distance information Saturation_sub is calculated for each target pixel with the same address, for each of the following pairs: the first and second color difference signals, the first and third color difference signals, and the first and fourth color difference signals.
[S123 in FIG. 9 (Calculation of Color Difference Signal Addition Values)]
In this processing step S123, the addition values (Cb'add, Cr'add) are calculated for each target pixel with the same address, for each of the following pairs: the first and second color difference signals, the first and third color difference signals, and the first and fourth color difference signals.
[S124 in FIG. 9 (Calculation of the Second Distance Information)]
In this processing step S124, the second distance information Saturation_add is calculated for each target pixel with the same address, for each of the following pairs: the first and second color difference signals, the first and third color difference signals, and the first and fourth color difference signals.
[S125 in FIG. 9 (Calculation of Luminance Signal Difference Values)]
For convenience of explanation, the luminance signal Y of the first image captured in processing step S112 (capturing the first image) is referred to as the "first luminance signal", that of the second image captured in processing step S114 (capturing the second image) as the "second luminance signal", that of the third image captured in processing step S116 (capturing the third image) as the "third luminance signal", and that of the fourth image captured in processing step S118 (capturing the fourth image) as the "fourth luminance signal". In this processing step S125, the difference value Ydiff is calculated for each target pixel with the same address, for each of the following pairs: the first and second luminance signals, the first and third luminance signals, and the first and fourth luminance signals.
[S126 in FIG. 9 (Determination of the False Color Generation Region)]
In this processing step S126, it is determined for each target pixel with the same address whether the pixel is in a false color generation region. Specifically, the pixel is determined to be in a false color generation region when at least one of the following conditions (1') to (3') is satisfied.
・Condition (1')
For the first and second color difference signals (the positional relationship in which the solid-state image sensor 112a is displaced by one pixel in the left-right direction between the two captures), the first distance information Saturation_sub is equal to or greater than threshold T1', the second distance information Saturation_add is equal to or less than threshold T2', and the difference value Ydiff is equal to or less than threshold T3'.
・Condition (2')
For the first and third color difference signals (the positional relationship in which the solid-state image sensor 112a is displaced by one pixel in the up-down direction between the two captures), the first distance information Saturation_sub is equal to or greater than threshold T1', the second distance information Saturation_add is equal to or less than threshold T2', and the difference value Ydiff is equal to or less than threshold T3'.
・Condition (3')
For the first and fourth color difference signals (the positional relationship in which the solid-state image sensor 112a is displaced by one pixel in the diagonal direction between the two captures), the first distance information Saturation_sub is equal to or greater than threshold T1', the second distance information Saturation_add is equal to or less than threshold T2', and the difference value Ydiff is equal to or less than threshold T3'.
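The per-direction condition triples of S126 combine with a logical OR: a pixel is flagged if any of the three shift directions satisfies all three criteria. A minimal sketch, using the same assumed definitions of Saturation_sub and Saturation_add as before and placeholder thresholds T1', T2', T3':

```python
import numpy as np

def multi_direction_mask(first, shifted_images,
                         thresholds=(40.0, 20.0, 16.0)):
    """Sketch of processing step S126: evaluate the condition
    triple (Saturation_sub >= T1', Saturation_add <= T2',
    Ydiff <= T3') against each shifted capture and flag a pixel
    when any direction satisfies all three.

    Each image is a dict of float arrays 'cb', 'cr', 'y', already
    address-converted.  Signal definitions and threshold values are
    assumptions, not values from the specification."""
    t1, t2, t3 = thresholds
    mask = np.zeros(first['y'].shape, dtype=bool)
    for img in shifted_images:  # second, third, and fourth images
        sub = np.hypot(first['cb'] - img['cb'], first['cr'] - img['cr'])
        add = np.hypot(first['cb'] + img['cb'], first['cr'] + img['cr'])
        ydiff = np.abs(first['y'] - img['y'])
        mask |= (sub >= t1) & (add <= t2) & (ydiff <= t3)
    return mask
```

Because each direction is tested independently, a striped pattern that produces complementary false colors under only one shift direction is still caught, which is the stated purpose of the FIG. 9 flow.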
[S127 in FIG. 9 (Detection of False Color)]
In this processing step S127, the presence or absence of a false color is detected.
[S128 in FIG. 9 (Storing Captured Images)]
This processing step S128 is executed when a detection result of no false color is obtained in processing step S127 (detection of false color) (S127: NO). In this processing step S128, since no false color was detected in the first image captured in processing step S112 (capturing the first image), the second image captured in processing step S114 (capturing the second image), the third image captured in processing step S116 (capturing the third image), or the fourth image captured in processing step S118 (capturing the fourth image), at least one of them is stored in the memory card 200 (or a built-in memory, not shown, provided in the photographing apparatus 1).
[S129 in FIG. 9 (Imaging under LPF Driving)]
This processing step S129 is executed when a detection result of false color present is obtained in processing step S127 (detection of false color) (S127: YES). In this processing step S129, since a false color was detected in at least one of the first image captured in processing step S112 (capturing the first image), the second image captured in processing step S114 (capturing the second image), the third image captured in processing step S116 (capturing the third image), and the fourth image captured in processing step S118 (capturing the fourth image), LPF driving is executed and the subject is imaged.
According to the false color detection flow of FIG. 9, changes in the image (occurrence of false colors) are evaluated for shifts of the solid-state image sensor 112a in several different directions, so in regions of the subject where high-frequency components appear, the change between at least one pair of images becomes large. Consequently, false colors are detected with high accuracy regardless of the direction in which high-frequency components appear in the subject (for the vertically striped subject 7a, the left-right direction in which light and dark alternate).

Claims (12)

  1.  An image detection device comprising:
      moving means for moving, relative to each other in the direction of the light receiving surface of an image sensor having a predetermined pixel arrangement, a subject image incident on the image sensor via a photographing optical system and the image sensor;
      signal generating means for capturing the subject image in states in which the relative position differs, and generating color difference signals; and
      detecting means for detecting, based on the color difference signals, image degradation occurring in a captured image.
  2.  An image detection device comprising:
      moving means for moving the position of a subject image on the light receiving surface of an image sensor provided with transmission wavelength selection elements, by physically moving at least one of the image sensor and some optical element in the photographing optical system based on the pixel arrangement of the image sensor;
      signal generating means for capturing, each time the position of the subject image is moved, the subject image taken in by the image sensor and applying predetermined signal processing including color interpolation processing to generate color difference signals; and
      detecting means for detecting image degradation occurring in a captured image based on at least a pair of color difference signals for which the positions of the subject image on the light receiving surface of the image sensor differ from each other.
  3.  The image detection device according to claim 2, wherein the moving means moves the position of the subject image on the light receiving surface of the image sensor by an amount corresponding to the pixel interval, such that the color information carried by the at least one pair of color difference signals is in a mutually complementary relationship in a generation region where image degradation occurs.
  4.  The image detection device according to claim 3, wherein the moving means moves the position of the subject image on the light receiving surface of the image sensor in a direction corresponding to the pixel arrangement by a distance of n pixels or (m+0.5) pixels (where n is an odd natural number, and m is 0 or an odd natural number).
  5.  The image detection device according to any one of claims 2 to 4, wherein,
      for at least a pair of captured images in which the positions of the subject image on the light receiving surface of the image sensor differ from each other, the detecting means converts the address of each pixel constituting one of the captured images in accordance with the amount of movement of the position of the subject image, so that the same subject image is processed as having been incident on pixels of the same address, and
      performs detection of image degradation pixel by pixel, based on the color difference signal of the address-converted captured image and the color difference signal of the other captured image.
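A hedged sketch of the address conversion described above: the second captured image is remapped back by the known subject-image displacement so that the same scene point lands at the same pixel address in both frames. The cyclic remapping via `numpy.roll` is an illustrative simplification; a real sensor crop would discard the wrapped border pixels.

```python
import numpy as np

def align(image_b, shift_x, shift_y):
    """Undo a (shift_x, shift_y) pixel displacement by remapping pixel addresses."""
    return np.roll(image_b, shift=(-shift_y, -shift_x), axis=(0, 1))

a = np.arange(16).reshape(4, 4)                 # reference capture
b = np.roll(a, shift=(0, 1), axis=(0, 1))       # capture after a 1-pixel horizontal shift
aligned = align(b, shift_x=1, shift_y=0)
print(np.array_equal(a, aligned))  # True
```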
  6.  The image detection device according to any one of claims 2 to 5, wherein
      the detecting means calculates at least one of a difference value and a sum value of the at least one pair of color difference signals, and
      performs detection of image degradation based on the calculation result.
  7.  The image detection device according to claim 5, wherein
      the detecting means calculates a difference value of the at least one pair of color difference signals for each pixel, and
      detects a pixel whose calculated difference value is equal to or greater than a first threshold as a pixel in a region where image degradation occurs.
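A sketch of the per-pixel difference test: for a genuine subject color, the color difference signal of the two aligned captures should roughly agree, so a large per-pixel difference flags degradation. The signal name (`cr`) and the threshold value are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def detect_by_difference(cr_a, cr_b, threshold):
    """Flag pixels where the pair of color difference signals disagree strongly."""
    return np.abs(cr_a - cr_b) >= threshold

cr_a = np.array([0.0, 0.1, 0.8])
cr_b = np.array([0.0, 0.1, -0.8])   # last pixel flips sign between captures
print(detect_by_difference(cr_a, cr_b, threshold=0.5))  # [False False  True]
```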
  8.  The image detection device according to claim 5, wherein
      the detecting means calculates a sum value of the at least one pair of color difference signals for each pixel, and
      detects a pixel whose calculated sum value is equal to or less than a second threshold as a pixel in a region where image degradation occurs.
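A complementary sketch for the sum test: when the pair of color difference signals are complementary in a degraded region, they cancel, so the magnitude of their per-pixel sum falls at or below a threshold. As above, the names and threshold are illustrative assumptions.

```python
import numpy as np

def detect_by_sum(cr_a, cr_b, threshold):
    """Flag pixels where the pair of color difference signals cancel out."""
    return np.abs(cr_a + cr_b) <= threshold

cr_a = np.array([0.4, 0.6, 0.5])
cr_b = np.array([0.4, 0.6, -0.5])   # last pixel is complementary between captures
print(detect_by_sum(cr_a, cr_b, threshold=0.1))  # [False False  True]
```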
  9.  The image detection device according to any one of claims 6 to 8, wherein
      the signal generating means generates, by performing the predetermined signal processing, a luminance signal paired with the color difference signal, and
      the detecting means performs detection of image degradation based also on the at least one pair of luminance signals.
  10.  A photographing apparatus comprising the image detection device according to any one of claims 1 to 9, the photographing apparatus comprising:
      stationary state determining means for determining whether or not the photographing apparatus is in a stationary state,
      wherein, when the stationary state determining means determines that the photographing apparatus is in a stationary state, detection of image degradation by the image detection device is performed.
  11.  A photographing apparatus comprising the image detection device according to any one of claims 1 to 9, or the photographing apparatus according to claim 10, further comprising:
      shake detection means for detecting shake of the photographing apparatus,
      wherein the moving means corrects image shake caused by the shake of the photographing apparatus by physically moving at least one of a part of the optical elements in the photographing optical system and the image sensor, based on the shake of the photographing apparatus detected by the shake detection means.
  12.  An image detection method comprising:
      a step of moving the position of a subject image on the light receiving surface of an image sensor with a transmission wavelength selection element, by physically moving at least one of a part of the optical elements in a photographing optical system and the image sensor, based on the pixel arrangement of the image sensor;
      a step of, each time the position of the subject image is moved, capturing the subject image incident on the image sensor and performing predetermined signal processing including color interpolation processing to generate a color difference signal; and
      a step of detecting image degradation occurring in a captured image, based on at least a pair of color difference signals for which the positions of the subject image on the light receiving surface of the image sensor differ from each other.
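The claimed method steps can be sketched end to end as follows. This is an illustrative pipeline under stated assumptions: `capture` and `to_color_difference` are hypothetical stand-ins for the sensor readout and the color-interpolation signal processing that the claims leave abstract, and the difference test of claim 7 is used as the detection criterion.

```python
import numpy as np

def detect_degradation(capture, to_color_difference, shift_pixels=1, threshold=0.5):
    """Shift, capture, generate color difference signals, realign, and threshold."""
    frame_a = capture(shift=0)                    # subject image at reference position
    frame_b = capture(shift=shift_pixels)         # subject image shifted by n pixels
    cd_a = to_color_difference(frame_a)
    cd_b = to_color_difference(frame_b)
    cd_b = np.roll(cd_b, -shift_pixels, axis=1)   # address conversion: realign frame_b
    return np.abs(cd_a - cd_b) >= threshold       # degraded where the signals disagree
```

With a synthetic scene and an identity color-difference transform, the mask is empty, as no degradation is simulated:

```python
scene = np.zeros((2, 4))
mask = detect_degradation(lambda shift: np.roll(scene, shift, axis=1), lambda f: f)
print(mask.any())  # False
```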
PCT/JP2016/052660 2015-01-30 2016-01-29 Image detection device, image detection method and image capture device WO2016121928A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015016341A JP6501106B2 (en) 2015-01-30 2015-01-30 Image detection apparatus, image detection method and imaging apparatus
JP2015-016341 2015-01-30

Publications (1)

Publication Number Publication Date
WO2016121928A1 true WO2016121928A1 (en) 2016-08-04

Family

ID=56543532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/052660 WO2016121928A1 (en) 2015-01-30 2016-01-29 Image detection device, image detection method and image capture device

Country Status (2)

Country Link
JP (1) JP6501106B2 (en)
WO (1) WO2016121928A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05244608A (en) * 1991-08-12 1993-09-21 Olympus Optical Co Ltd Color smear reduction device
JPH0974524A (en) * 1995-07-05 1997-03-18 Sharp Corp Image input device
JPH10191135A (en) * 1996-12-27 1998-07-21 Canon Inc Image pickup device and image synthesizer
JP2000092350A (en) * 1998-09-10 2000-03-31 Hitachi Ltd Method and device for image pickup of object with periodic contrast pattern and check method and device using them
JP2011109496A (en) * 2009-11-19 2011-06-02 Nikon Corp Imaging apparatus
JP2014209689A (en) * 2013-04-16 2014-11-06 リコーイメージング株式会社 Imaging device and method

Also Published As

Publication number Publication date
JP2016143938A (en) 2016-08-08
JP6501106B2 (en) 2019-04-17

Similar Documents

Publication Publication Date Title
US11394881B2 (en) Image processing apparatus, image processing method, storage medium, system, and electronic apparatus
JP6486656B2 (en) Imaging device
JP5502205B2 (en) Stereo imaging device and stereo imaging method
US8593531B2 (en) Imaging device, image processing method, and computer program
JP2012226213A (en) Imaging apparatus and control method therefor
JP2011223562A (en) Imaging apparatus
JP6841933B2 (en) Image pickup device, finder device, control method of image pickup device, control method of finder device, control program of image pickup device and control program of finder device
US11012633B2 (en) Image capturing apparatus, image capturing method, and image processing apparatus
JP5595505B2 (en) Stereo imaging device and stereo imaging method
JP6606838B2 (en) Imaging apparatus and imaging method
JP7080118B2 (en) Image pickup device and its control method, shooting lens, program, storage medium
JP6579369B2 (en) Image processing apparatus, image processing method, and image processing program
JP6525139B2 (en) Image detection apparatus, image detection method and imaging apparatus
JP6501106B2 (en) Image detection apparatus, image detection method and imaging apparatus
JP2004038114A (en) Auto-focus camera
JP2014142497A (en) Imaging apparatus and method for controlling the same
JP6548141B2 (en) Image detection apparatus, image detection method and image detection program
JP2008245236A (en) Imaging apparatus and defective pixel correcting method
JP6508609B2 (en) Imaging apparatus and control method therefor
JP6489350B2 (en) Image blur correction apparatus and image blur correction method in the image blur correction apparatus
JP2018148500A (en) Imaging system, image determination program, image imaging device, image data structure
JP6460395B2 (en) Imaging apparatus and display control method in imaging apparatus
JP2019153918A (en) Display control apparatus and method, and imaging apparatus
JP2017200028A (en) Image deterioration detector
JP2017200029A (en) Image deterioration detector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16743531

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16743531

Country of ref document: EP

Kind code of ref document: A1