WO2017191717A1 - Control device and imaging device - Google Patents

Control device and imaging device

Info

Publication number
WO2017191717A1
WO2017191717A1 (PCT/JP2017/012907)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
data
low
variable optical
image
Prior art date
Application number
PCT/JP2017/012907
Other languages
French (fr)
Japanese (ja)
Inventor
Shohei Noguchi (翔平 野口)
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation (ソニー株式会社)
Priority to JP2018515407A (national phase publication JPWO2017191717A1)
Priority to US16/092,503 (granted as US10972710B2)
Publication of WO2017191717A1

Classifications

    • H04N 9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • G02F 1/13 Devices or arrangements for the control of the intensity, phase, polarisation or colour of light, based on liquid crystals, e.g. single liquid crystal display cells
    • G02F 1/13306 Circuit arrangements or driving methods for the control of single liquid crystal cells
    • G02F 1/13363 Birefringent elements, e.g. for optical compensation
    • G02F 1/137 Liquid crystal cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
    • G03B 11/00 Filters or other obturators specially adapted for photographic purposes
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/71 Circuitry for evaluating the brightness variation
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 25/13 Arrangement of colour filter arrays [CFA] characterised by the spectral characteristics of the filter elements
    • G02F 1/13439 Electrodes characterised by their electrical, optical, physical properties; materials therefor; methods of making
    • G02F 1/1396 Liquid crystal selectively controlled between a twisted state and a non-twisted state, e.g. TN-LC cell
    • G02F 2413/02 Compensation plates: number of plates being 2
    • G02F 2413/05 Compensation plates: single plate on one side of the LC cell
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics

Definitions

  • The present disclosure relates to a control device and an imaging device.
  • In an imaging device, a low-pass filter may be placed between the lens and the imager.
  • False color, one of the artifacts, can be reduced by adjusting the effect of the low-pass filter.
  • As a low-pass filter for reducing moire, for example, the optical low-pass filter described in Patent Document 1 below can be used.
  • The control device includes a control unit that sets a setting value of the low-pass characteristic of the low-pass filter based on a change in resolution and a change in false color across a plurality of image data captured while the low-pass characteristic of the low-pass filter is changed.
  • The imaging apparatus includes an imaging element that generates image data from light incident via a low-pass filter.
  • The imaging apparatus further includes a control unit that sets a setting value of the low-pass characteristic of the low-pass filter based on a change in resolution and a change in false color across a plurality of image data captured while the low-pass characteristic of the low-pass filter is changed.
  • In the control device and the imaging apparatus, the setting value of the low-pass characteristic of the low-pass filter is thus set based on the change in resolution and the change in false color across the plurality of image data captured while the low-pass characteristic is changed.
  • As a result, a setting value of the low-pass filter suitable for obtaining image data whose image quality is balanced according to the user's purpose is obtained.
  • Therefore, with the control device and the imaging device of the embodiment of the present disclosure, whenever the low-pass filter is used, an image with balanced image quality can be acquired.
  • Note that the effects of the present disclosure are not necessarily limited to those described here, and may be any of the effects described in this specification.
  • FIG. 5 is a diagram illustrating an example of a polarization conversion efficiency curve (VT curve) of the liquid crystal layer in FIG. 4.
  • FIGS. 6A to 6C are diagrams illustrating an example of the effect of the variable optical LPF.
  • FIG. 7 is a diagram illustrating an example of the MTF (Modulation Transfer Function) in FIGS. 6A to 6C.
  • Further drawings illustrate an example of a schematic configuration of the image processing unit in FIG. 1, examples of the imaging procedure in the imaging device of FIG. 1, examples of evaluation data, and an example of a histogram.
  • FIG. 1 illustrates an example of a schematic configuration of an imaging apparatus 1 according to an embodiment of the present disclosure.
  • The imaging device 1 includes, for example, an imaging optical system 10, a lens driving unit 20, an LPF driving unit 30, an imaging element 40, and an image processing unit 50.
  • The imaging apparatus 1 further includes, for example, a display panel 60, a memory unit 70, a control unit 80, and an operation unit 90.
  • The imaging optical system 10 includes, for example, a variable optical LPF (Low Pass Filter) 11 and a lens 12.
  • The lens 12 forms an optical subject image on the image sensor 40.
  • The lens 12 has a plurality of lenses and is driven by the lens driving unit 20 so that at least one lens can be moved. Thereby, the lens 12 can perform optical focus adjustment and zoom adjustment.
  • The variable optical LPF 11 removes high spatial frequency components contained in light, and is driven by the LPF driving unit 30 to change its cutoff frequency fc, which is one of its low-pass characteristics.
  • The imaging optical system 10 may be configured integrally with the imaging device 40 or separately from the imaging device 40.
  • Likewise, the variable optical LPF 11 in the imaging optical system 10 may be configured integrally with the imaging device 40 or separately from it. A specific configuration of the variable optical LPF 11 and a modulation method of the cutoff frequency fc will be described in detail later.
  • The lens driving unit 20 drives at least one lens of the lens 12 for optical zoom magnification, focus adjustment, and the like in accordance with instructions from the control unit 80.
  • The LPF driving unit 30 adjusts the effect of the variable optical LPF 11 by changing the low-pass characteristic (cutoff frequency fc) of the variable optical LPF 11 in accordance with an instruction from the control unit 80.
  • Here, the "effect of the variable optical LPF 11" refers to a reduction in components having spatial frequencies higher than the Nyquist frequency contained in light.
  • The LPF driving unit 30 adjusts the cutoff frequency fc of the variable optical LPF 11 by applying a predetermined voltage V (of constant frequency) between the electrodes of the variable optical LPF 11.
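The patent does not specify how the LPF driving unit maps a requested cutoff frequency fc to a drive voltage V; one plausible sketch is inverting a measured (voltage, cutoff) calibration table by linear interpolation. The function name and table values below are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: invert a (voltage, cutoff) calibration table by
# linear interpolation so the driver can pick the voltage V that yields
# a requested cutoff frequency fc. Values are made up for illustration.

def voltage_for_cutoff(fc_target, calib):
    """calib: list of (voltage, cutoff) pairs, sorted by voltage, with the
    cutoff increasing monotonically with voltage."""
    if not (calib[0][1] <= fc_target <= calib[-1][1]):
        raise ValueError("requested cutoff outside calibrated range")
    for (v0, f0), (v1, f1) in zip(calib, calib[1:]):
        if f0 <= fc_target <= f1:
            # linear interpolation between neighbouring calibration points
            return v0 + (fc_target - f0) / (f1 - f0) * (v1 - v0)

calib = [(2.0, 30.0), (3.0, 60.0), (4.0, 120.0)]  # (volts, cycles/mm)
print(voltage_for_cutoff(45.0, calib))  # 2.5
```

A real driver would also account for the non-linear VT curve of the liquid crystal layer described later; a denser calibration table absorbs that non-linearity.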
  • The imaging element 40 generates image data by converting, through photoelectric conversion, a subject image formed on its light receiving surface 40A via the lens 12 and the variable optical LPF 11 into an electrical signal.
  • The imaging element 40 is configured by, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor.
  • The imaging element 40 has a light receiving surface 40A in which a plurality of photoelectric conversion elements 40B are two-dimensionally arranged at predetermined intervals, for example, as shown in FIG. 2.
  • The image sensor 40 further includes, for example, a color filter array 40C on the light receiving surface 40A.
  • FIG. 2 illustrates a state in which the color filter array 40C has a Bayer array, in which 2 × 2 units of R, G, G, and B filters are arranged in a matrix.
  • The color filter array 40C may have an array different from the Bayer array.
  • The image sensor 40 generates image data based on light incident via the variable optical LPF 11.
  • Specifically, the image sensor 40 generates color image data by spatially sampling light incident through the lens 12 and the variable optical LPF 11.
  • The image data has a color signal of each color included in the color filter array 40C for each pixel.
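The spatial sampling through a Bayer color filter array can be sketched as follows; the helper name and toy image are assumptions for illustration, not part of the patent.

```python
# Toy sketch of Bayer (RGGB) sampling: each pixel of the mosaic keeps only
# the color channel selected by the filter element above it, as in the
# 2x2 R G / G B units of the array shown in FIG. 2.

def bayer_sample(rgb):
    """rgb[y][x] = (R, G, B); returns mosaic[y][x] = one color sample."""
    # channel index per position within a 2x2 Bayer unit: R G / G B
    pattern = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}
    return [[rgb[y][x][pattern[(y % 2, x % 2)]]
             for x in range(len(rgb[0]))]
            for y in range(len(rgb))]

img = [[(10, 20, 30), (11, 21, 31)],
       [(12, 22, 32), (13, 23, 33)]]
print(bayer_sample(img))  # [[10, 21], [22, 33]]
```

This is exactly the sub-Nyquist color sampling that produces moire and false color, which the variable optical LPF is there to suppress.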
  • The image processing unit 50 performs image processing such as white balance, demosaicing, gradation conversion, color conversion, and noise reduction on the image data generated by the image sensor 40.
  • The image processing unit 50 also performs processing such as converting image data into display data suitable for display on the display panel 60 and converting image data into data suitable for recording in the memory unit 70.
  • The image processing in the image processing unit 50 will be described in detail later.
  • The display panel 60 is configured by, for example, a liquid crystal panel.
  • The display panel 60 displays the display data input from the image processing unit 50.
  • The memory unit 70 can store captured image data and various programs.
  • The memory unit 70 is configured by a nonvolatile memory, for example, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory, or a resistance change type memory.
  • The memory unit 70 may be an external memory that can be attached to and detached from the imaging apparatus 1.
  • The memory unit 70 stores various data generated by the image processing unit 50 and various data input from the operation unit 90.
  • FIG. 3 shows an example of data stored in the memory unit 70.
  • Examples of the data stored in the memory unit 70 include image data 71, saturation data 72, evaluation data 73, and a set value 74, as shown in FIG. 3.
  • The image data 71, the saturation data 72, the evaluation data 73, and the set value 74 will be described in detail later.
  • The operation unit 90 receives instructions from the user, and includes, for example, operation buttons, a shutter button, an operation dial, a keyboard, and a touch panel.
  • The control unit 80 is a processor that controls the lens driving unit 20, the LPF driving unit 30, the image sensor 40, the image processing unit 50, the display panel 60, and the memory unit 70.
  • The control unit 80 controls the lens driving unit 20 to adjust the optical zoom magnification and focus of the lens 12.
  • The control unit 80 adjusts the effect (cutoff frequency fc) of the variable optical LPF 11 by controlling the LPF driving unit 30.
  • The control unit 80 further drives the image sensor 40 so that the image sensor 40 generates image data and outputs the generated image data to the image processing unit 50.
  • The control unit 80 controls the image processing unit 50 to cause it to perform the above-described image processing, and outputs various data obtained as a result of that image processing to the memory unit 70 and the display panel 60.
  • The control unit 80 also controls the lens driving unit 20, the LPF driving unit 30, the image sensor 40, the image processing unit 50, and the display panel 60 in accordance with various data input from the operation unit 90, and stores various data input from the operation unit 90 in the memory unit 70.
  • FIG. 4 illustrates an example of a schematic configuration of the variable optical LPF 11.
  • The variable optical LPF 11 removes high spatial frequency components contained in light.
  • The variable optical LPF 11 is driven by the LPF driving unit 30 to change its effect (cutoff frequency fc).
  • The variable optical LPF 11 is configured to change the cutoff frequency fc by, for example, a peak value modulation method, which will be described in detail later.
  • The variable optical LPF 11 includes a pair of birefringent plates 111 and 115 having birefringence, and a liquid crystal layer 113 disposed between the pair of birefringent plates 111 and 115.
  • The variable optical LPF 11 further includes electrodes 112 and 114 that apply an electric field to the liquid crystal layer 113.
  • The variable optical LPF 11 may also include, for example, an alignment film that regulates the alignment of the liquid crystal layer 113.
  • The electrodes 112 and 114 are arranged to face each other with the liquid crystal layer 113 interposed therebetween. Each of the electrodes 112 and 114 is composed of one sheet-like electrode. Note that at least one of the electrodes 112 and 114 may instead be composed of a plurality of partial electrodes.
  • The electrodes 112 and 114 are translucent conductive films such as ITO (Indium Tin Oxide), for example.
  • The electrodes 112 and 114 may also be, for example, a light-transmitting inorganic conductive film, a light-transmitting organic conductive film, or a light-transmitting metal oxide film.
  • The birefringent plate 111 is disposed on the light incident side of the variable optical LPF 11; for example, the outer surface of the birefringent plate 111 is a light incident surface 110A.
  • The incident light L1 is light that enters the light incident surface 110A from the subject side.
  • The birefringent plate 111 is disposed so that the optical axis of the incident light L1 is parallel to the normal of the birefringent plate 111 (or of the light incident surface 110A).
  • The birefringent plate 115 is disposed on the light exit side of the variable optical LPF 11; for example, the outer surface of the birefringent plate 115 is a light exit surface 110B.
  • The transmitted light L2 of the variable optical LPF 11 is the light emitted to the outside from the light exit surface 110B.
  • The birefringent plate 111, the electrode 112, the liquid crystal layer 113, the electrode 114, and the birefringent plate 115 are stacked in this order from the light incident side.
  • The birefringent plates 111 and 115 are birefringent and have a uniaxial crystal structure.
  • The birefringent plates 111 and 115 have a function of separating incident light into p-polarized and s-polarized components using birefringence.
  • The birefringent plates 111 and 115 are made of, for example, quartz, calcite, or lithium niobate.
  • The image separation directions of the birefringent plates 111 and 115 are opposite to each other.
  • The optical axis AX1 of the birefringent plate 111 and the optical axis AX2 of the birefringent plate 115 intersect each other in a plane parallel to the normal of the light incident surface 110A.
  • The angle formed by the optical axis AX1 and the optical axis AX2 is, for example, 90°. Further, the optical axes AX1 and AX2 obliquely intersect the normal of the light incident surface 110A.
  • The angle formed by the optical axis AX1 and the normal of the light incident surface 110A is, for example, less than 90° counterclockwise with respect to that normal, for example, 45°.
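As a hedged aside (this is standard uniaxial-crystal optics, not a formula taken from the patent): for a plate of thickness $t$ whose optic axis makes 45° with the surface normal, the lateral walk-off of the extraordinary ray, i.e. the separation width, is approximately

```latex
d \approx t \,\frac{n_e^{2} - n_o^{2}}{n_e^{2} + n_o^{2}}
```

For quartz ($n_o \approx 1.544$, $n_e \approx 1.553$ in the visible), this gives $d \approx 0.006\,t$, i.e. roughly 6 µm of separation per millimetre of plate thickness, which is why the 45° orientation is a common choice for optical low-pass filter plates.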
  • FIG. 5 shows an example of the polarization conversion efficiency curve (VT curve) of the liquid crystal layer 113.
  • The horizontal axis is the voltage V (of constant frequency) applied between the electrodes 112 and 114.
  • The vertical axis represents the polarization conversion efficiency T.
  • The polarization conversion efficiency T is obtained by dividing the phase difference given to linearly polarized light by 90 degrees and multiplying the result by 100.
  • A polarization conversion efficiency T of 0% means that no phase difference is given to linearly polarized light; that is, linearly polarized light passes through the medium without changing its polarization direction.
  • A polarization conversion efficiency T of 100% means that a phase difference of 90 degrees is given to linearly polarized light; that is, p-polarized light is converted into s-polarized light, or s-polarized light into p-polarized light, as it passes through the medium.
  • A polarization conversion efficiency T of 50% means that a phase difference of 45 degrees is given to linearly polarized light; that is, p-polarized light or s-polarized light is converted into circularly polarized light as it passes through the medium.
  • The liquid crystal layer 113 controls polarization based on the electric field generated by the voltage between the electrodes 112 and 114.
  • When the voltage V1 is applied between the electrodes 112 and 114, the polarization conversion efficiency T becomes T2.
  • When the voltage V2 (V1 < V2) is applied between the electrodes 112 and 114, the polarization conversion efficiency T becomes T1.
  • For example, T2 is 100% and T1 is 0%.
  • When the voltage V3 is applied between the electrodes 112 and 114, the polarization conversion efficiency T becomes T3.
  • T3 is a value larger than 0% and smaller than 100%.
  • For example, the voltage V3 is the voltage at which T3 is 50%.
  • The voltage V1 is a voltage equal to or lower than the voltage at the falling position of the polarization conversion efficiency curve.
  • The voltage V2 is a voltage equal to or higher than the voltage at the rising position of the polarization conversion efficiency curve; specifically, it refers to a voltage in the section where the polarization conversion efficiency is saturated near the minimum value of the polarization conversion efficiency curve.
  • The voltage V3 is a voltage (intermediate voltage) between the voltage at the falling position and the voltage at the rising position of the polarization conversion efficiency curve.
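The V1/V2/V3 relationship above can be illustrated with a toy VT-curve model; the cosine fall-off and the specific voltage values are assumptions for illustration, not the patent's actual curve.

```python
import math

# Toy VT curve: T falls from 100% (at or below V1) to 0% (at or above V2);
# a bisection search then finds the intermediate voltage V3 where T = 50%.

def T_model(v, v1=1.0, v2=5.0):
    if v <= v1:
        return 100.0   # saturated at the maximum (T2)
    if v >= v2:
        return 0.0     # saturated near the minimum (T1)
    return 50.0 * (1.0 + math.cos(math.pi * (v - v1) / (v2 - v1)))

def find_v3(target=50.0, v1=1.0, v2=5.0, tol=1e-9):
    lo, hi = v1, v2    # T is monotonically decreasing on [v1, v2]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if T_model(mid, v1, v2) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(find_v3(), 6))  # 3.0 for this symmetric toy curve
```

A real driver would search a measured VT curve the same way, since the curve shape varies with temperature and liquid crystal material.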
  • In this way, the liquid crystal layer 113 controls polarization in accordance with the applied voltage.
  • Examples of a liquid crystal having the polarization conversion efficiency curve described above include a TN (Twisted Nematic) liquid crystal.
  • The TN liquid crystal is composed of a chiral nematic liquid crystal and has an optical rotatory power that rotates the polarization direction of light passing through it along with the twist of the nematic liquid crystal.
  • FIGS. 6A, 6B, and 6C illustrate an example of the action of the variable optical LPF 11.
  • In FIG. 6A, the voltage V between the electrodes 112 and 114 is the voltage V1.
  • In FIG. 6B, the voltage V between the electrodes 112 and 114 is the voltage V2.
  • In FIG. 6C, the voltage V between the electrodes 112 and 114 is the voltage V3.
  • the p-polarized component included in the incident light L1 vibrates in a direction orthogonal to the vibration direction of the s-polarized light, so it travels obliquely in the birefringent plate 111 under the influence of birefringence, is refracted at a position on the back surface of the birefringent plate 111 shifted by the separation width d1, and is emitted from that back surface. Accordingly, the birefringent plate 111 separates the incident light L1 into p-polarized transmitted light and s-polarized transmitted light with the separation width d1.
  • when the p-polarized light separated by the birefringent plate 111 is incident on the liquid crystal layer 113 having a polarization conversion efficiency of T2, the p-polarized light is converted into s-polarized light, travels straight in the liquid crystal layer 113, and is emitted from its back surface.
  • similarly, when the s-polarized light separated by the birefringent plate 111 is incident on the liquid crystal layer 113 having a polarization conversion efficiency of T2, the s-polarized light is converted into p-polarized light, travels straight in the liquid crystal layer 113, and is emitted from its back surface.
  • the liquid crystal layer 113 therefore performs ps conversion on the p-polarized light and the s-polarized light separated by the birefringent plate 111 while keeping the separation width constant.
  • the separation width of the s-polarized light and p-polarized light changes depending on the birefringence of the birefringent plate 115.
  • the polarization component that vibrates perpendicularly to the optical axis AX2 of the birefringent plate 115 is s-polarized light; the s-polarized light travels straight in the birefringent plate 115 without being affected by birefringence and is emitted from the back surface of the birefringent plate 115.
  • since the p-polarized light vibrates in a direction orthogonal to the vibration direction of the s-polarized light, it is affected by the birefringence of the birefringent plate 115 and proceeds obliquely in the direction opposite to the image separation direction of the birefringent plate 111; the p-polarized light is refracted at a position on the back surface of the birefringent plate 115 shifted by the separation width d2 and is emitted from that back surface.
  • the birefringent plate 115 separates the s-polarized light and p-polarized light transmitted through the liquid crystal layer 113 into s-polarized transmitted light L2 and p-polarized transmitted light L2 with a separation width (d1 + d2).
  • when the polarization conversion efficiency is T1, the p-polarized light and the s-polarized light separated by the birefringent plate 111 pass through the liquid crystal layer 113 without ps conversion and enter the birefringent plate 115.
  • the s-polarized light travels straight in the birefringent plate 115 without being affected by birefringence and is emitted from the back surface of the birefringent plate 115, while the p-polarized light is affected by the birefringence of the birefringent plate 115, proceeds obliquely in the direction opposite to the image separation direction of the birefringent plate 111, and is emitted at a position shifted by the separation width d2.
  • the birefringent plate 115 therefore separates the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 into s-polarized transmitted light L2 and p-polarized transmitted light L2 with a separation width (d1 - d2).
  • when d1 = d2, the s-polarized transmitted light L2 and the p-polarized transmitted light L2 are emitted from the same location on the back surface of the birefringent plate 115; in this case, the birefringent plate 115 emits light in which the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 are combined with each other.
  • when the circularly polarized light emitted from the liquid crystal layer 113 enters the birefringent plate 115, it is separated into p-polarized light and s-polarized light with a separation width d2 due to the birefringence of the birefringent plate 115.
  • the polarization component that vibrates perpendicularly to the optical axis AX2 of the birefringent plate 115 is s-polarized light, and it travels straight in the birefringent plate 115 without being affected by birefringence and is emitted from the back surface of the birefringent plate 115.
  • since the p-polarized light vibrates in a direction orthogonal to the vibration direction of the s-polarized light, it is affected by the birefringence of the birefringent plate 115, proceeds obliquely in the direction opposite to the image separation direction of the birefringent plate 111, is refracted at a position on the back surface of the birefringent plate 115 shifted by the separation width d2, and is emitted from that back surface.
  • the birefringent plate 115 separates the circularly polarized light converted from the p-polarized light by the liquid crystal layer 113 and the circularly polarized light converted from the s-polarized light by the liquid crystal layer 113 into s-polarized transmitted light L2 and p-polarized transmitted light L2, each with a separation width d2.
  • the p-polarized light separated from the circularly polarized light converted from the p-polarized light in the liquid crystal layer 113 and the s-polarized light separated from the circularly polarized light converted from the s-polarized light in the liquid crystal layer 113 are emitted from the same location on the back surface of the birefringent plate 115.
  • circularly polarized transmitted light L2 is emitted from the back surface of the birefringent plate 115.
  • the birefringent plate 115 separates the two circularly polarized lights emitted from the liquid crystal layer 113 into p-polarized transmitted light L2 and s-polarized transmitted light L2 with a separation width (d2 + d2).
  • at a position between the p-polarized transmitted light L2 and the s-polarized transmitted light L2, the p-polarized light and the s-polarized light that were once separated are combined with each other.
  • when the voltage V2 is applied between the electrodes 112 and 114, the variable optical LPF 11 generates one peak p1 in the point image intensity distribution of the transmitted light of the variable optical LPF 11.
  • the peak p1 is formed by one transmitted light L2 emitted from the birefringent plate 115.
  • the variable optical LPF 11 causes two peaks p2 and p3 in the point image intensity distribution of the transmitted light of the variable optical LPF 11 when the voltage V1 is applied between the electrodes 112 and 114.
  • the two peaks p2 and p3 are formed by the two transmitted lights L2 emitted from the birefringent plate 115.
  • the three peaks p1, p2, and p3 are formed by the three transmitted lights L2 emitted from the birefringent plate 115.
  • the variable optical LPF 11 generates four peaks p1, p2, p3, and p4 in the point image intensity distribution of the transmitted light when the voltage V3 is applied between the electrodes 112 and 114 and d1 ≠ d2.
  • the four peaks p1, p2, p3, and p4 are formed by the four transmitted lights L2 emitted from the birefringent plate 115.
  • the variable optical LPF 11 produces three peaks p1 to p3 or four peaks p1 to p4 in the point image intensity distribution of the transmitted light when the voltage V3 is applied between the electrodes 112 and 114.
  • when the magnitude of the voltage V3 changes, the values of the three peaks p1 to p3 or the four peaks p1 to p4 change. That is, in the variable optical LPF 11, when the voltage V3 applied between the electrodes 112 and 114 changes, the point image intensity distribution of the transmitted light changes.
  • variable optical LPF 11 changes the point image intensity distribution of the transmitted light by changing the magnitude of the voltage V applied between the electrodes 112 and 114.
  • the peak values (peak heights) of the three peaks p1 to p3 and the peak values (peak heights) of the four peaks p1 to p4 depend on the magnitude of the voltage V applied between the electrodes 112 and 114.
  • the peak positions of the three peaks p1 to p3 and the peak positions of the four peaks p1 to p4 are determined by the separation widths d1 and d2.
  • the separation widths d1 and d2 are constant regardless of the magnitude of the voltage V3 applied between the electrodes 112 and 114. Therefore, the peak positions of the three peaks p1 to p3 and the peak positions of the four peaks p1 to p4 are constant regardless of the magnitude of the voltage V3 applied between the electrodes 112 and 114.
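The three cases above (one peak at V2, two peaks at V1, three or four peaks at V3) can be summarized in a small model of the point image intensity distribution. It assumes an idealized cell in which a fraction T of each ray is ps-converted and the remainder passes unchanged; the function name and the unit-intensity normalization are illustrative, not from the embodiment.

```python
from collections import defaultdict

def point_image_peaks(T, d1, d2):
    """Peak positions -> weights of the transmitted point image intensity
    distribution, for conversion efficiency T (0..1) and separation widths
    d1 (birefringent plate 111) and d2 (birefringent plate 115, which
    separates in the opposite direction)."""
    peaks = defaultdict(float)
    # ray that left plate 111 as p-polarized light, displaced by +d1
    peaks[d1] += 0.5 * T             # converted to s: goes straight in plate 115
    peaks[d1 - d2] += 0.5 * (1 - T)  # still p: displaced by -d2 in plate 115
    # ray that left plate 111 as s-polarized light, at position 0
    peaks[-d2] += 0.5 * T            # converted to p: displaced by -d2
    peaks[0.0] += 0.5 * (1 - T)      # still s: goes straight
    return {x: w for x, w in peaks.items() if w > 0}
```

Consistent with the description: T = 0% gives one peak, T = 100% gives two peaks, an intermediate T gives three peaks when d1 = d2 and four peaks when d1 ≠ d2, and changing T changes only the peak heights, not the peak positions.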
  • FIG. 7 shows an example of the MTF (Modulation Transfer Function) of FIGS. 6A to 6C.
  • the horizontal axis is the spatial frequency
  • the vertical axis is the normalized contrast.
  • the MTF in FIG. 6B matches the MTF of a lens (for example, the lens 13 or the like) disposed in front of the variable optical LPF 11.
  • the cutoff frequency fc1 of the MTF in FIG. 6A is smaller than the cutoff frequency fc2 of the MTF in FIG. 6C.
  • in FIG. 6C, the separation width is equal to the separation width in FIG. 6A, but the number of peaks is larger than the number of peaks in FIG. 6A, and the distance between peaks is narrower than the distance between peaks in FIG. 6A. Therefore, the light beam separation effect in FIG. 6C is weaker than the light beam separation effect in FIG. 6A, so the MTF cutoff frequency fc2 in FIG. 6C is larger than the MTF cutoff frequency fc1 in FIG. 6A.
  • by changing the magnitude of the voltage V applied between the electrodes 112 and 114, the variable optical LPF 11 can set the cutoff frequency fc to an arbitrary value equal to or higher than the cutoff frequency at which the light beam separation effect is maximized.
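A rough way to see how the cutoff frequency follows the peak structure is to take the MTF as the normalized magnitude of the Fourier transform of the point image distribution (a sum of weighted delta peaks). The 5% contrast threshold used to read off the cutoff, and the example peak sets, are illustrative choices rather than the embodiment's definitions.

```python
import numpy as np

def mtf(peaks, freqs):
    """Normalized |Fourier transform| of a point image distribution given as
    a {position: weight} dictionary of delta peaks."""
    xs = np.array(list(peaks.keys()))
    ws = np.array(list(peaks.values()))
    m = np.abs(np.array([np.sum(ws * np.exp(-2j * np.pi * f * xs)) for f in freqs]))
    return m / m[0]  # contrast normalized to 1 at zero spatial frequency

def cutoff(peaks, freqs, level=0.05):
    """First spatial frequency where the normalized contrast falls below level."""
    return freqs[np.argmax(mtf(peaks, freqs) < level)]

freqs = np.linspace(0.0, 0.6, 6001)
# FIG. 6A-like case: two beams at +/-1 (strongest light beam separation effect)
fc1 = cutoff({+1.0: 0.5, -1.0: 0.5}, freqs)
# FIG. 6C-like case: three beams with the same outer width (weaker effect)
fc2 = cutoff({+1.0: 0.25, 0.0: 0.5, -1.0: 0.25}, freqs)
```

As the description states, the two-beam case yields the lower cutoff frequency (fc1 < fc2), even though the outer separation width is the same.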
  • the memory unit 70 stores the image data 71, the saturation data 72, the evaluation data 73, and the set value 74.
  • the image data 71 is a plurality of image data obtained by the imaging device 40, and includes, for example, image data I 1 described later, a plurality of image data I 2 described later, and image data I described later.
  • the image data I1 is image data having no false colors or few false colors. Such image data is obtained, for example, when the effect of the variable optical LPF 11 is at or near its maximum, that is, when the cutoff frequency fc of the variable optical LPF 11 is at or near its minimum.
  • the plurality of image data I 2 is image data obtained when a setting value different from the setting value used to obtain the image data I 1 is set in the variable optical LPF 11.
  • the image data I is image data acquired by the image sensor 40 when an appropriate setting value (setting value 74) is set for the variable optical LPF 11.
  • the setting value 74 is a setting value of the variable optical LPF 11 that matches the user's purpose, and is a setting value that is set for the variable optical LPF 11 when obtaining the image data I.
  • the set value 74 is obtained by executing image processing in the image processing unit 50.
  • the saturation data 72 is data relating to saturation obtained from the image data I 1 .
  • the saturation data 72 is, for example, two-dimensional data associated with each predetermined unit dot of the image data I 1 .
  • the evaluation data 73 is data for evaluating the resolution and the false color.
  • the resolution is an index indicating how fine a detail can be identified. "High resolution" means that a captured image is fine enough for finer details to be identified. "Low resolution" means that the captured image lacks fineness and looks blurred. "Degraded resolution" means that the captured image has lost its original fineness and is more blurred than the original captured image.
  • when a captured image including high spatial frequencies is passed through a low-pass filter, the high-frequency components of the captured image are reduced and the resolution of the captured image is lowered (degraded). That is, degradation in resolution corresponds to a decrease in spatial frequency.
  • a false color is a phenomenon in which a color not present in the original subject appears in an image.
  • this phenomenon occurs because each color is spatially sampled, so that when a captured image includes high-frequency components exceeding the Nyquist frequency, aliasing (folding of high-frequency components into the low-frequency region) occurs. Therefore, when the high-frequency components exceeding the Nyquist frequency included in the captured image are reduced by a low-pass filter, the resolution of the captured image is lowered (degraded), while the occurrence of false colors in the captured image is suppressed.
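The folding described above can be reproduced with a one-dimensional sampling sketch; the sampling rate and the frequencies are arbitrary example values.

```python
import numpy as np

fs = 8.0               # sampling rate; the Nyquist frequency is fs / 2 = 4.0
t = np.arange(32) / fs

f_high = 5.0           # component above the Nyquist frequency
f_alias = fs - f_high  # folds back into the low-frequency region at 3.0

samples_high = np.cos(2 * np.pi * f_high * t)
samples_alias = np.cos(2 * np.pi * f_alias * t)
# After sampling, the two are indistinguishable: the 5.0 component masquerades
# as a 3.0 component. In a color sensor this folded energy differs per color
# channel, which is what appears as false color.
```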
  • D is evaluation data for resolution and false color.
  • f(I1, I2) is a function for deriving evaluation data (evaluation data D1) regarding a change (degradation) in resolution between the two image data I1 and I2 obtained by the image sensor 40. A larger value of the evaluation data D1 means a lower resolution.
  • the evaluation data D1 corresponds to a specific example of “first evaluation data” of the present disclosure.
  • f (I 1 , I 2 ) is a function for deriving the evaluation data D1 based on the spatial frequencies of the two image data I 1 and I 2 .
  • f (I 1 , I 2 ) is a function for deriving the evaluation data D1 based on the difference between the frequency spectrum of the image data I 1 and the frequency spectrum of the image data I 2 , for example.
  • for example, f(I1, I2) is a function that extracts, from the difference between the frequency spectrum of the image data I1 and the frequency spectrum of the image data I2, the power at the frequency at which the effect of the variable optical LPF 11 is greatest, and derives the evaluation data D1 based on the extracted power.
  • Evaluation data D1 is derived from the first term on the right side of Equation (1).
  • C is saturation data 72.
  • ΣC·G(I1, I2) is a function for deriving evaluation data (evaluation data D2) regarding a change in false color between the two image data I1 and I2, based on the gradation data of the two image data I1 and I2.
  • a larger value of ΣC·G(I1, I2) means that a false color is generated in a wider range.
  • the evaluation data D2 corresponds to a specific example of “second evaluation data” of the present disclosure.
  • ΣC·G(I1, I2) is, for example, a function for deriving the evaluation data D2 based on the difference between the gradation data of the image data I1 and the gradation data of the image data I2.
  • ΣC·G(I1, I2) is, for example, a function for deriving the evaluation data D2 based on the difference between the gradation data of the image data I1 and the gradation data of the image data I2, and on C (the saturation data 72) obtained from the image data I1.
  • ΣC·G(I1, I2) is, for example, a function that derives, as the evaluation data D2, the sum of the difference between the gradation data of the image data I1 and the gradation data of the image data I2 multiplied by C. From the second term on the right side of Equation (1), evaluation data D2 that takes the saturation of the subject into consideration is derived.
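From the surrounding description, Equation (1) appears to have the form D = f(I1, I2) + ΣC·G(I1, I2). The sketch below is one illustrative reading of it on single-channel toy data: D1 as a frequency-spectrum difference and D2 as a saturation-weighted gradation difference. The exact forms of f and G are not given in this passage, so the function bodies here are assumptions.

```python
import numpy as np

def evaluation_d1(i1, i2):
    """Resolution-change term: difference between the frequency spectra of
    I1 and I2 (a larger value means more resolution lost)."""
    s1 = np.abs(np.fft.fft2(i1))
    s2 = np.abs(np.fft.fft2(i2))
    return float(np.sum(np.abs(s1 - s2)))

def evaluation_d2(i1, i2, c):
    """False-color-change term: gradation difference weighted by the
    saturation map C derived from I1."""
    return float(np.sum(c * np.abs(i1 - i2)))

def evaluation_d(i1, i2, c):
    """Evaluation data D = D1 + D2 for one candidate LPF setting."""
    return evaluation_d1(i1, i2) + evaluation_d2(i1, i2, c)

# toy data: a checkerboard "reference" I1 and a fully low-passed (flat) I2
i1 = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
i2 = np.full((8, 8), i1.mean())
c = np.ones((8, 8))  # uniform saturation map standing in for saturation data
```

A real implementation would operate per color channel of the image data; identical inputs yield D = 0, and any spectral or gradation difference increases D.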
  • FIG. 8 is a graph illustrating evaluation data D1 obtained from the first term on the right side of Equation (1) and evaluation data D2 obtained from the second term on the right side of Equation (1).
  • from the evaluation data D1 obtained from the first term on the right side of Equation (1), it can be seen that as the effect of the variable optical LPF 11 becomes stronger, the resolution at first deteriorates abruptly and thereafter the change in resolution gradually becomes gentler.
  • from the evaluation data D2 obtained from the second term on the right side of Equation (1), it can be seen that the false color range gradually decreases as the effect of the variable optical LPF 11 increases.
  • FIG. 9 illustrates an example of functional blocks of the image processing unit 50.
  • the image processing unit 50 performs predetermined processing on the image data output from the image sensor 40.
  • for example, the image processing unit 50 sets the set value 74 for the effect (or low-pass characteristic) of the variable optical LPF 11 on the basis of a change in resolution and a change in false color across a plurality of pieces of captured image data obtained while the effect (or low-pass characteristic) of the variable optical LPF 11 is changed.
  • An example of the low-pass characteristic is a cutoff frequency fc.
  • the image processing unit 50 includes, for example, a preprocessing circuit 51, an image processing circuit 52, a display processing circuit 53, a compression / decompression circuit 54, and a memory control circuit 55.
  • the pre-processing circuit 51 performs optical correction processing such as shading correction on the image data output from the image sensor 40.
  • the image processing circuit 52 performs various processes described later on the corrected image data output from the preprocessing circuit 51.
  • the image processing circuit 52 further outputs, for example, image data acquired from the image sensor 40 to the display processing circuit 53.
  • the image processing circuit 52 further outputs, for example, image data acquired from the image sensor 40 to the compression / decompression circuit 54.
  • the image processing circuit 52 will be described later in detail.
  • the display processing circuit 53 generates an image signal to be displayed on the display panel 60 from the image data received from the image processing circuit 52, and sends the image signal to the display panel 60.
  • the compression / decompression circuit 54 performs compression encoding processing on the still image data received from the image processing circuit 52 by a still image encoding method such as JPEG (Joint Photographic Experts Group).
  • the compression / decompression circuit 54 performs compression encoding processing on the moving image data received from the image processing circuit 52 by a moving image encoding method such as MPEG (Moving Picture Experts Group).
  • the memory control circuit 55 controls writing and reading of data with respect to the memory unit 70.
  • FIG. 10 illustrates an example of an imaging procedure in the imaging apparatus 1.
  • the control unit 80 prepares for operation (step S101).
  • the operation preparation refers to preparations required when the image data I is output from the image sensor 40, for example, setting AF (autofocus) conditions and the like.
  • the control unit 80 instructs the lens driving unit 20 and the LPF driving unit 30 to prepare for operation such as AF.
  • the lens driving unit 20 performs operation preparation for the lens 12 before outputting the image data I 1 in accordance with an instruction from the control unit 80.
  • the lens driving unit 20 sets a focus condition of the lens 12 to a predetermined value.
  • for example, the control unit 80 causes the lens driving unit 20 to perform operation preparation such as AF in a state in which the variable optical LPF 11 is not made to act optically.
  • the LPF driving unit 30 performs operation preparation for the variable optical LPF 11 before outputting the image data I 1 in accordance with an instruction from the control unit 80.
  • the LPF driving unit 30 applies a voltage V2 between the electrodes 112 and 114, for example.
  • the polarization conversion efficiency T of the variable optical LPF 11 is T1.
  • the control unit 80 generates a change instruction for changing the effect of the variable optical LPF 11 at a pitch larger than the set minimum resolution of the effect of the variable optical LPF 11, and gives the change instruction to the LPF driving unit 30. That is, the control unit 80 generates a change instruction for changing the cutoff frequency fc of the variable optical LPF 11 at a pitch larger than the set minimum resolution of the cutoff frequency fc, and outputs the change instruction to the LPF driving unit 30. Then, for example, the LPF driving unit 30 gradually changes the cutoff frequency fc of the variable optical LPF 11 by changing the voltage V (of constant frequency) applied between the electrodes of the variable optical LPF 11 at a pitch larger than the set minimum resolution.
  • the control unit 80 generates an imaging instruction synchronized with a change in the effect (cut-off frequency fc) of the variable optical LPF 11, and gives the imaging instruction to the imaging element 40. Then, the imaging device 40 performs imaging in synchronization with a change in the effect of the variable optical LPF 11 (change in the cutoff frequency fc) in accordance with an instruction from the control unit 80. As a result, the imaging device 40 generates a plurality of imaging data different from each other in the effect (cutoff frequency fc) of the variable optical LPF 11 and outputs the imaging data to the image processing unit 50.
  • the control unit 80 instructs the LPF driving unit 30 to maximize or substantially maximize the effect of the variable optical LPF 11 within the range in which the effect of the variable optical LPF 11 can be changed (step S102).
  • that is, the control unit 80 instructs the LPF driving unit 30 so that the cutoff frequency fc of the variable optical LPF 11 is minimized or nearly minimized within the changeable range of the cutoff frequency fc of the variable optical LPF 11.
  • the LPF driving unit 30 maximizes or substantially maximizes the effect of the variable optical LPF 11 within the range in which the effect of the variable optical LPF 11 can be changed in accordance with an instruction from the control unit 80.
  • the LPF driving unit 30 minimizes or substantially minimizes the cutoff frequency fc of the variable optical LPF 11 within a changeable range of the cutoff frequency fc of the variable optical LPF 11.
  • the LPF driving unit 30 applies a voltage V1 between the electrodes 112 and 114, or a voltage slightly larger than the voltage V1.
  • the polarization conversion efficiency T of the variable optical LPF 11 becomes T2 (maximum) or a value close to T2.
  • next, the control unit 80 instructs the image sensor 40 to acquire the image data I1 (step S103). Specifically, the control unit 80 instructs the image sensor 40 to acquire the image data I1 when the effect of the variable optical LPF 11 is at or near the maximum of its changeable range. That is, the control unit 80 instructs the image sensor 40 to acquire the image data I1 when the cutoff frequency fc of the variable optical LPF 11 is at or near the minimum of its changeable range.
  • the image sensor 40 acquires the color image data I1 by discretely sampling, on the light receiving surface 40A, the light incident through the variable optical LPF 11 whose effect is at or near the maximum.
  • that is, the image sensor 40 acquires the color image data I1 by discretely sampling, on the light receiving surface 40A, the light incident through the variable optical LPF 11 whose cutoff frequency fc is at or near the minimum.
  • in other words, the image sensor 40 acquires the color image data I1 by discretely sampling, on the light receiving surface 40A, the light incident through the variable optical LPF 11 whose polarization conversion efficiency T is at or near the maximum.
  • the image data I1 is image data generated by the image sensor 40 when the effect of the variable optical LPF 11 is at or near the maximum of its changeable range. That is, the image data I1 is image data generated by the image sensor 40 when the cutoff frequency fc of the variable optical LPF 11 is at or near the minimum of its changeable range.
  • the image data I 1 corresponds to a specific example of “first image data” of the present disclosure.
  • the image sensor 40 outputs the acquired image data I 1 to the image processing unit 50.
  • the image processing unit 50 analyzes the acquired image data I1 to derive data relating to saturation (saturation data 72) in the image data I1 (step S104).
  • the control unit 80 instructs the LPF driving unit 30 to change the effect of the variable optical LPF 11 (step S105). Specifically, the control unit 80 instructs the LPF driving unit 30 so that the effect of the variable optical LPF 11 is smaller than the previous effect of the variable optical LPF 11. That is, the control unit 80 instructs the LPF driving unit 30 so that the cutoff frequency fc of the variable optical LPF 11 is higher than the previous cutoff frequency fc of the variable optical LPF 11. Then, the LPF driving unit 30 makes the effect of the variable optical LPF 11 smaller than the previous effect of the variable optical LPF 11 in accordance with an instruction from the control unit 80.
  • the LPF driving unit 30 sets the cutoff frequency fc of the variable optical LPF 11 to be higher than the previous cutoff frequency fc of the variable optical LPF 11.
  • the LPF driving unit 30 applies, as the voltage V3 between the electrodes 112 and 114, a voltage higher than the previous voltage.
  • the polarization conversion efficiency T of the variable optical LPF 11 is a magnitude between T2 and T1, and is smaller than the previous time.
  • the control unit 80 instructs the image sensor 40 to acquire the image data I 2 (step S106).
  • the image data I 2 corresponds to a specific example of “second image data” of the present disclosure.
  • the control unit 80 instructs the image sensor 40 to acquire the image data I2 when a setting value different from the setting value used to obtain the image data I1 is set in the variable optical LPF 11.
  • the image sensor 40 acquires the color image data I2 by spatially sampling, on the light receiving surface 40A, the light incident through the variable optical LPF 11 whose effect is smaller than its previous effect.
  • that is, the image sensor 40 acquires the color image data I2 by spatially sampling, on the light receiving surface 40A, the light incident through the variable optical LPF 11 whose cutoff frequency fc is higher than its previous cutoff frequency fc.
  • in other words, the image sensor 40 acquires the color image data I2 by spatially sampling, on the light receiving surface 40A, the light incident through the variable optical LPF 11 whose polarization conversion efficiency T is between T2 and T1.
  • the image sensor 40 outputs the acquired image data I 2 to the image processing unit 50.
  • next, the control unit 80 instructs the image processing unit 50 to derive an appropriate low-pass characteristic (set value 74). Then, the image processing unit 50 derives an appropriate low-pass characteristic (set value 74) based on a change in resolution and a change in false color in the plurality of image data (image data I1 and image data I2) obtained by the image sensor 40.
  • the image processing unit 50 first derives two evaluation data D1 and D2 based on a plurality of image data (image data I 1 and image data I 2 ) (step S107).
  • the image processing unit 50 derives evaluation data D1 for a change in resolution based on the spatial frequency of the plurality of image data (image data I 1 and image data I 2 ). For example, the image processing unit 50 derives the evaluation data D1 based on the difference between the frequency spectrum of the image data I 1 and the frequency spectrum of the image data I 2 . The image processing unit 50 derives the evaluation data D1 by, for example, applying the image data I 1 and the image data I 2 to the first term on the right side of Expression (1).
  • next, the image processing unit 50 derives evaluation data D2 for a change in false color based on the gradation data of the plurality of image data (image data I1 and image data I2). For example, the image processing unit 50 derives the evaluation data D2 based on the difference between the gradation data of the image data I1 and the gradation data of the image data I2. The image processing unit 50 derives the evaluation data D2 based on, for example, that difference and the saturation data 72 obtained from the image data I1. The image processing unit 50 derives the evaluation data D2 by, for example, applying the image data I1, the image data I2, and the saturation data 72 to the second term on the right side of Equation (1).
  • the image processing unit 50 determines appropriateness of the effect of the variable optical LPF 11 based on the obtained two evaluation data D1 and D2 (step S108). That is, the image processing unit 50 determines whether or not the obtained two evaluation data D1 and D2 satisfy a desired standard according to the purpose.
  • the “desired reference according to the purpose” varies depending on the shooting mode, the subject, the scene, and the like.
  • for example, the image processing unit 50 determines whether the total value (evaluation data D) of the two evaluation data D1 and D2 newly obtained this time is the minimum value in the region where the effect (cutoff frequency fc) of the variable optical LPF 11 can be controlled.
  • when the image processing unit 50 determines, for example, that the total value (evaluation data D) of the two evaluation data D1 and D2 newly obtained this time is the minimum value in the region where the effect of the variable optical LPF 11 can be controlled, the low-pass characteristic (setting value) of the variable optical LPF 11 at that time is set as the set value 74.
  • the image processing unit 50 stores the set value 74 in the memory unit 70.
  • when it is determined that the effect of the variable optical LPF 11 is not appropriate, the image processing unit 50 returns to step S105, changes the effect (cutoff frequency fc) of the variable optical LPF 11, and sequentially performs the steps from step S106 onward. In this way, by repeating steps S105 to S108, the image processing unit 50 derives the set value 74 of the variable optical LPF 11 based on the two derived evaluation data D1 and D2. That is, in accordance with a preparation instruction from the user (for example, half-pressing the shutter button), the image processing unit 50 derives the set value 74 of the variable optical LPF 11 based on a plurality of pieces of image data (image data I1 and a plurality of image data I2) captured with different low-pass characteristics (cutoff frequencies fc). The image processing unit 50 notifies the control unit 80 that the set value 74 has been obtained.
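The loop of steps S105 to S108 can be condensed into a small search sketch. Here `capture` and `evaluate` are hypothetical stand-ins for the image sensor 40 and Equation (1), and the candidate settings are illustrative; the real procedure drives the LPF via the LPF driving unit 30 between captures.

```python
def find_set_value(capture, evaluate, settings):
    """Return the LPF setting whose captured image minimizes evaluation data D.

    settings[0] is the maximum-effect setting used for the reference image I1;
    the remaining entries are candidate settings with progressively weaker
    low-pass effect (higher cutoff frequency fc).
    """
    i1 = capture(settings[0])       # S103: reference image I1 at maximum LPF effect
    best_setting, best_d = None, float("inf")
    for s in settings[1:]:          # S105: weaken the LPF effect step by step
        i2 = capture(s)             # S106: image data I2 at the new setting
        d = evaluate(i1, i2)        # S107: evaluation data D = D1 + D2
        if d < best_d:              # S108: keep the best setting so far
            best_setting, best_d = s, d
    return best_setting             # stored as set value 74
```

With toy stand-ins (settings as plain numbers and an evaluation minimized at 2), the search returns the setting closest to that minimum.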
  • when the control unit 80 detects an imaging instruction (for example, pressing of the shutter button) from the user after the notification that the setting value 74 has been obtained is input from the image processing unit 50, it instructs the LPF driving unit 30 to set the variable optical LPF 11 to the setting value 74. Specifically, when detecting the imaging instruction, the control unit 80 reads the setting value 74 from the memory unit 70, outputs the read setting value 74 to the LPF driving unit 30, and instructs the LPF driving unit 30 to set the variable optical LPF 11 to the set value 74. The LPF driving unit 30 then sets the variable optical LPF 11 to the set value 74 in accordance with the instruction from the control unit 80.
  • the control unit 80 further instructs the image sensor 40 to acquire the image data I.
  • the image sensor 40 acquires the image data I in accordance with an instruction from the control unit 80 (step S109). That is, the image sensor 40 acquires color image data I by discretely sampling, at the light receiving surface 40A, the light incident through the variable optical LPF 11 set to the set value 74.
  • the image sensor 40 outputs the acquired image data I to the image processing unit 50.
  • the image processing unit 50 performs predetermined processing on the image data I, and then outputs the processed image data I to the memory unit 70 and to the display unit 60.
  • the memory unit 70 stores the image data I input from the image processing unit 50, and the display unit 60 displays the image data I input from the image processing unit 50 (step S110).
  • the above-described preparatory operation may be performed manually by the user.
  • in the imaging apparatus 1, when the focus condition of the lens 13 is changed while the image data I 2 are being acquired successively with the effect of the variable optical LPF 11 being changed, the procedure may be redone from the acquisition of the image data I 1 .
  • the instruction from the user may be performed by a method other than “half-pressing the shutter button”.
  • the above-described instruction from the user may be performed by pressing a button (an operation button other than the shutter button) attached to the main body of the imaging apparatus 1 after the focus condition of the lens 13 is set.
  • the above-described instruction from the user may be performed by turning an operation dial attached to the main body of the imaging apparatus 1 after the focus condition of the lens 13 is set.
  • the “desired reference according to the purpose” varies depending on the shooting mode, the subject, the scene, and the like.
  • “desired criteria according to purpose” will be exemplified below.
  • when there is a range (range α) in which the change of the evaluation data D2 relative to the effect (cutoff frequency fc) of the variable optical LPF 11 is relatively gentle, the control unit 80 may set, as the set value 74, the effect (cutoff frequency fc) of the variable optical LPF 11 at which the evaluation data D2 becomes smallest within the range α (white circle in FIG. 8). In such a case, resolution degradation can be reduced without a large increase in false color.
  • the second term (evaluation data D2) on the right side of the equation (1) may have a stepped profile as shown in FIG. 11, for example, depending on the characteristics of the subject and the lens 13.
  • FIG. 11 shows an example of the evaluation data D1 and D2.
  • the control unit 80 may divide the controllable region of the effect (cutoff frequency fc) of the variable optical LPF 11 into a plurality of regions R, and instruct the LPF driving unit 30 to sequentially change the effect (cutoff frequency fc) of the variable optical LPF 11 to a value set for each divided region R.
  • the LPF driving unit 30 sequentially sets the effect (cutoff frequency fc) of the variable optical LPF 11 to the value set for each region R in accordance with the instruction from the control unit 80.
  • each region R has a width larger than the minimum settable resolution of the effect (cutoff frequency fc) of the variable optical LPF 11. Accordingly, the LPF driving unit 30 sequentially changes the effect (cutoff frequency fc) of the variable optical LPF 11 at a pitch larger than that minimum settable resolution.
  • the control unit 80 further instructs the imaging element 40 to perform imaging in synchronization with the setting of the effect (cutoff frequency fc) of the variable optical LPF 11. The imaging element 40 then acquires the image data I 1 and the plurality of image data I 2 , and outputs the acquired image data I 1 and the plurality of image data I 2 to the image processing unit 50.
  • the image processing unit 50 derives two evaluation data D1 and D2 based on the input image data I 1 and the plurality of image data I 2 .
  • the image processing unit 50 may further set, as the set value 74, the setting value of the variable optical LPF 11 at which the false color starts to increase (for example, k in FIG. 11) within the region R (for example, the region R1) where the total value (evaluation data D) of the two evaluation data D1 and D2 is minimum. In this case, the number of times the effect (cutoff frequency fc) of the variable optical LPF 11 is actually changed can be reduced as compared with the above embodiment. As a result, the processing speed is increased.
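The coarse-then-fine search over regions R can be sketched as follows. This is a hedged illustration under stated assumptions: `evaluate` is a hypothetical callable returning the pair (D1, D2) for a given cutoff frequency, and the choice of region midpoints as representatives is an assumption, not taken from the text.

```python
# Illustrative sketch: the controllable cutoff range is divided into regions R,
# D = D1 + D2 is evaluated once per region, and the search then refines only
# inside the region with the minimal total, reducing the number of times the
# LPF effect is actually changed.

def coarse_then_fine(fc_min, fc_max, n_regions, fine_steps, evaluate):
    width = (fc_max - fc_min) / n_regions
    # One representative cutoff per region R (its midpoint).
    coarse = [fc_min + width * (i + 0.5) for i in range(n_regions)]
    totals = [sum(evaluate(fc)) for fc in coarse]   # D = D1 + D2 per region
    best = totals.index(min(totals))                # region R with minimal D
    lo = fc_min + width * best                      # refine only inside that region
    fine = [lo + width * (j + 0.5) / fine_steps for j in range(fine_steps)]
    return min(fine, key=lambda fc: sum(evaluate(fc)))
```

Only `n_regions + fine_steps` evaluations are needed, instead of one per settable step of the full range.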
  • the image processing unit 50 may set the setting value 74 based on a change in false color in a face area included in the plurality of image data (image data I 1 and image data I 2 ).
  • FIG. 12 illustrates an example of an imaging procedure when a person whose face is detected has hair.
  • the image processing unit 50 may perform face detection on the image data I 1 when acquiring the image data I 1 from the image sensor 40 (FIG. 12, step S111). Further, when a face is detected as a result of the face detection in step S111, the image processing unit 50 may detect whether the person whose face is detected has hair (FIG. 12, step S112).
  • the image processing unit 50 may perform threshold processing on the evaluation data D2, for example.
  • the image processing unit 50 may determine, as the setting value 74, the setting value that most weakens the effect of the variable optical LPF 11 within the range where the evaluation data D2 falls below a predetermined threshold Th1 (FIG. 13).
  • FIG. 13 shows an example of the evaluation data D1 and D2 together with various threshold values.
  • the predetermined threshold value Th1 is a threshold value for face detection, and is a value suitable for excluding false colors generated in the hair of a person who is a subject, for example.
  • the image processing unit 50 may perform threshold processing not only on the evaluation data D2 but also on the evaluation data D1. In that case, the control unit 80 may determine, as the setting value 74, the setting value that makes the effect of the variable optical LPF 11 the weakest within the range satisfying both thresholds (FIG. 13).
  • when the person whose face is detected has hair, the image processing unit 50 may determine the setting value 74 in this way; otherwise, the image processing unit 50 may determine the set value 74 by the same method as described above.
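The threshold processing above can be sketched as follows; a minimal sketch assuming the sweep results are available as (fc, D1, D2) triples, which is an illustrative data layout rather than the device's internal representation.

```python
# Illustrative sketch: among the settings whose false-color evaluation data D2
# falls below the face-detection threshold Th1 (and, optionally, whose D1 falls
# below a second threshold), pick the setting that weakens the LPF effect the
# most, i.e. the largest cutoff frequency fc.

def weakest_lpf_below_threshold(samples, th1, th2=None):
    """samples: list of (fc, d1, d2) tuples measured while sweeping the LPF effect."""
    ok = [(fc, d1, d2) for fc, d1, d2 in samples
          if d2 < th1 and (th2 is None or d1 < th2)]
    if not ok:
        return None                      # no setting satisfies the criterion
    return max(fc for fc, _, _ in ok)    # weakest effect = largest cutoff fc
```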
  • FIG. 14 illustrates an example of an imaging procedure in the macro imaging mode.
  • when the photographing mode is the macro photographing mode, the user often tries to photograph a nearby object, and generation of a false color on the object to be photographed is often not acceptable.
  • in this case, it is preferable that the image processing unit 50 sets, as the set value 74, the effect (cutoff frequency fc) of the variable optical LPF 11 that was used when the one image data I 2 selected by the user from the plurality of image data I 2 was captured.
  • the image processing unit 50 determines in step S108 whether, for example, the evaluation data D2 is below a predetermined threshold Th3.
  • the predetermined threshold value Th3 is a threshold value for the macro shooting mode, and is a value suitable for excluding false colors generated in an object that is a subject in the macro shooting mode, for example.
  • when the evaluation data D2 falls below the predetermined threshold Th3, the image processing unit 50 acquires the set value of the variable optical LPF 11 corresponding to the evaluation data D2 at that time as an appropriate value candidate 35a, and stores the image data I 2 corresponding to the appropriate value candidate 35a in the memory unit 70 together with the appropriate value candidate 35a (FIG. 14, step S113). If the evaluation data D2 regarding the change in the false color does not fall below the predetermined threshold Th3, the image processing unit 50 returns to step S105.
  • at this time, the image processing unit 50 may perform threshold processing not only on the evaluation data D2 but also on the evaluation data D1. For example, the image processing unit 50 may determine whether the evaluation data D2 is below the predetermined threshold Th3 and whether the evaluation data D1 is below a predetermined threshold Th4 (FIG. 13). In that case, when the evaluation data D2 is lower than the predetermined threshold Th3 and the evaluation data D1 is lower than the predetermined threshold Th4, the image processing unit 50 may acquire the set value of the variable optical LPF 11 corresponding to the evaluation data D2 as the appropriate value candidate 35a. When the evaluation data D2 does not fall below the predetermined threshold Th3, or when the evaluation data D1 does not fall below the predetermined threshold Th4, the image processing unit 50 may return to step S105.
  • the image processing unit 50 determines whether or not the effect of the variable optical LPF 11 has been changed (FIG. 14, step S114).
  • the image processing unit 50 repeatedly executes steps S105 to S108 and S113 until the change of the effect of the variable optical LPF 11 is completed. That is, the image processing unit 50 acquires a value corresponding to the evaluation data D2 at that time as the appropriate value candidate 35a every time the evaluation data regarding the change in false color falls below the predetermined threshold Th3.
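The macro-mode candidate collection above can be sketched as follows; the (setting, D1, D2, image) tuple layout is an illustrative assumption for the sweep results, not the device's internal format.

```python
# Illustrative sketch of the macro-mode loop: every time the false-color
# evaluation data D2 falls below the macro threshold Th3 (and, optionally,
# D1 below Th4), the current LPF setting is kept as an appropriate-value
# candidate 35a together with its image data I2, for the user to choose later.

def collect_candidates(samples, th3, th4=None):
    """samples: iterable of (setting, d1, d2, image) tuples from the sweep."""
    candidates = []
    for setting, d1, d2, image in samples:
        if d2 < th3 and (th4 is None or d1 < th4):
            candidates.append((setting, image))  # appropriate value candidate 35a
    return candidates
```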
  • when the change of the effect of the variable optical LPF 11 is completed, the image processing unit 50 requests the user to select one image data I 2 from the plurality of image data I 2 corresponding to the appropriate value candidates 35a.
  • specifically, the image processing unit 50 outputs one image data I 2 from among the plurality of image data I 2 corresponding to the appropriate value candidates 35a to the display unit 60, and the display unit 60 displays the image data I 2 input from the image processing unit 50 (FIG. 14, step S115). The image processing unit 50 sequentially outputs the plurality of image data I 2 corresponding to the appropriate value candidates 35a to the display unit 60: every time a display instruction from the user (for example, depression of an operation button attached to the main body of the imaging apparatus 1 or rotation of an operation dial attached to the main body of the imaging apparatus 1) is detected, the image processing unit 50 switches the image data I 2 displayed on the display unit 60 to another image data I 2 .
  • the display unit 60 replaces the image data I 2 being displayed every time new image data I 2 is input from the image processing unit 50.
  • the image processing unit 50 may output a numerical value (for example, a pixel-converted area of the false color generation region) and a histogram together with the image data I 2 to the display unit 60. Then, the display unit 60 displays numerical values and histograms input from the image processing unit 50.
  • the histogram may be one that takes saturation (for example, Cb and Cr in the YCbCr space) on the vertical and horizontal axes and shades each region according to the number of pixels in which a false color is generated.
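Such a shaded (Cb, Cr) histogram can be sketched as follows; the bin count and the 8-bit value range are illustrative assumptions, not values from the text.

```python
# Hypothetical sketch of the saturation histogram described above: a 2-D
# histogram over (Cb, Cr) counting only the pixels flagged as false color,
# so each cell can be shaded according to how many false-color pixels fall
# into it.

def false_color_histogram(pixels, flags, bins=4):
    """pixels: list of (cb, cr) values in [0, 256); flags: matching false-color booleans."""
    step = 256 // bins
    hist = [[0] * bins for _ in range(bins)]
    for (cb, cr), is_false in zip(pixels, flags):
        if is_false:
            hist[min(cb // step, bins - 1)][min(cr // step, bins - 1)] += 1
    return hist
```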
  • when the image processing unit 50 detects a selection instruction from the user, it determines, as the set value 74, the appropriate value candidate 35a corresponding to the image data I 2 displayed on the display unit 60 at that time. In this way, the image processing unit 50 causes the user to select the appropriate image data I 2 (that is, an appropriate value of the variable optical LPF 11) (FIG. 14, step S116).
  • alternatively, the image processing unit 50 may determine, as the setting value 74, the setting value that most weakens the effect of the variable optical LPF 11 (the setting value at which the cutoff frequency fc is maximized) within the range where the evaluation data D2 falls below the predetermined threshold Th3.
  • at that time, the image processing unit 50 may perform threshold processing not only on the evaluation data D2 but also on the evaluation data D1. In that case, the image processing unit 50 may determine, as the set value 74, the setting value that most weakens the effect of the variable optical LPF 11 (the setting value at which the cutoff frequency fc is maximized) within the range where the evaluation data D2 is below the predetermined threshold Th3 and the evaluation data D1 is below the predetermined threshold Th4.
  • in the false color processing mode, the image processing unit 50 may detect the false color generation position after step S101 is performed. If a false color has occurred over a fairly large area of the screen, or at the center of the screen, any remaining false color is likely to stand out. Conversely, if false colors are present only at the four corners of the screen, erasing them completely may degrade the resolution unnecessarily.
  • it is therefore preferable that the image processing unit 50 adjusts the effect of the variable optical LPF 11 according to the generation position of the false color. Specifically, it is preferable that the image processing unit 50 sets the set value 74 based on the change in resolution and the change in false color at the false color positions included in the plurality of image data I 1 , I 3 .
  • FIG. 16 shows an example of an imaging procedure in the false color processing mode.
  • the image processing unit 50 detects a false color occurrence position after steps S101 to S104 are performed.
  • the control unit 80 minimizes or substantially minimizes the effect of the variable optical LPF 11 (step S117).
  • the control unit 80 instructs the LPF driving unit 30 so that the effect of the variable optical LPF 11 is minimized or substantially minimized (so that the cutoff frequency fc is maximized or substantially maximized).
  • the LPF driving unit 30 minimizes or substantially minimizes the effect of the variable optical LPF 11 in accordance with an instruction from the control unit 80.
  • the LPF drive unit 30 maximizes or substantially maximizes the cutoff frequency fc in accordance with an instruction from the control unit 80.
  • the LPF driving unit 30 applies a voltage V2 or a voltage slightly smaller than the voltage V2 between the electrodes 112 and 114.
  • the polarization conversion efficiency T of the variable optical LPF 11 is T1 (minimum) or almost T1.
  • the image processing unit 50 acquires the image data I 3 when the effect of the variable optical LPF 11 is minimized or substantially minimized (step S118).
  • the control unit 80 instructs the imaging device 40 to perform imaging when the effect of the variable optical LPF 11 is minimized or almost minimized. That is, the control unit 80 instructs the imaging device 40 to perform imaging when the cutoff frequency fc is maximum or almost maximum.
  • that is, the image pickup device 40 acquires the color image data I 3 by discretely sampling, at the light receiving surface 40A, the light incident through the variable optical LPF 11 whose effect is minimized or substantially minimized.
  • in other words, the image sensor 40 acquires the color image data I 3 by discretely sampling, at the light receiving surface 40A, the light incident via the variable optical LPF 11 having the maximum or almost maximum cutoff frequency fc.
  • equivalently, the image sensor 40 acquires the color image data I 3 by discretely sampling, at the light receiving surface 40A, the light incident through the variable optical LPF 11 having the minimum or substantially minimum polarization conversion efficiency T.
  • the image sensor 40 outputs the acquired image data I 3 to the image processing unit 50.
  • the image processing unit 50 acquires the image data I 3 .
  • the image processing unit 50 detects the position of the false color included in the image data I 3 (step S119). Specifically, the image processing unit 50 detects the position of a false color included in the image data I 3 using the image data I 1 , the image data I 3 , and the saturation data 33. For example, the image processing unit 50 calculates the difference between the gradations of the two image data I 1 and I 3 for each dot, and generates a false color determination image by multiplying the resulting difference image by C. Next, the image processing unit 50 determines whether or not each dot value of the false color determination image exceeds a predetermined threshold Th5.
  • the image processing unit 50 acquires the coordinates of the dots exceeding the predetermined threshold Th5, and stores the acquired coordinates in the memory unit 70 as the false color generation position (false color generation position 35) included in the image data I 3 (FIG. 17).
  • FIG. 17 shows an example of data stored in the memory unit 70.
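The detection in step S119 can be sketched as follows; a hedged reconstruction assuming simple list-based grayscale gradation images, with `c` and `th5` as free parameters (the text does not give their values).

```python
# Illustrative sketch of step S119: the per-dot gradation difference between
# image data I1 (LPF effect maximized, few false colors) and I3 (LPF effect
# minimized) is scaled by a constant C to form a false-color determination
# image; dots exceeding the threshold Th5 are recorded as the false color
# generation positions 35.

def detect_false_color_positions(i1, i3, c, th5):
    """i1, i3: equal-sized 2-D lists of gradation values. Returns (x, y) coordinates."""
    positions = []
    for y, (row1, row3) in enumerate(zip(i1, i3)):
        for x, (g1, g3) in enumerate(zip(row1, row3)):
            if abs(g1 - g3) * c > th5:       # false color determination image dot
                positions.append((x, y))     # stored as generation position 35
    return positions
```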
  • the image processing unit 50 may use the image data I 2 instead of the image data I 3 . Specifically, the image processing unit 50 may detect a false color position in the image data I 2 . The image processing unit 50 may detect a false color position in the image data I 2 and I 3 .
  • in step S108, the image processing unit 50 determines whether or not the evaluation data D2 satisfies a desired criterion corresponding to the false color generation position (false color generation position 35).
  • when the false color generation position 35 is the screen center or a large area including the screen center, the image processing unit 50 may, for example, multiply the evaluation data D2 by a correction coefficient α (FIG. 18).
  • when the false color generation position 35 is at the four corners of the screen, the image processing unit 50 may multiply the evaluation data D2 by a correction coefficient β (FIG. 18).
  • the correction coefficient β is smaller than the correction coefficient α.
  • FIG. 18 shows an example of the false color occurrence positions to which the correction coefficients α and β are applied. In FIG. 18, the correction coefficients α and β are represented in correspondence with the false color occurrence positions.
  • in step S108, the image processing unit 50 may determine whether or not the evaluation data D2 is equal to or less than a threshold Th6 when the false color generation position 35 is the center of the screen (FIG. 19).
  • similarly, the image processing unit 50 may determine whether or not the evaluation data D2 is equal to or less than a threshold Th7 when the false color generation position 35 is at the four corners of the screen (FIG. 19).
  • the threshold Th6 is, for example, a value smaller than the threshold Th7.
  • FIG. 19 illustrates an example of the false color occurrence positions to which the thresholds Th6 and Th7 are applied. In FIG. 19, the thresholds Th6 and Th7 are shown corresponding to the occurrence positions of the false colors.
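The position-dependent criterion can be sketched as follows; a minimal sketch assuming the false color position has already been classified as "center" or "corner", which is a simplification of FIG. 19.

```python
# Illustrative sketch: evaluation data D2 is compared against a stricter
# threshold (Th6) when the false color lies at the screen center and a looser
# one (Th7) when it lies at the four corners, since center false colors are
# more conspicuous.

def d2_satisfies_criterion(d2, position, th6, th7):
    """position: 'center' or 'corner'; Th6 < Th7, so the center is judged strictly."""
    threshold = th6 if position == "center" else th7
    return d2 <= threshold
```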
  • when the evaluation data D2 satisfies the criterion, the image processing unit 50 sets the value corresponding to the evaluation data D2 at that time as the set value 74 and stores it in the memory unit 70.
  • the image processing unit 50 may detect a position where resolution degradation occurs in the image data I 3 instead of the false color or together with the false color. At this time, the image processing unit 50 may use the image data I 2 instead of the image data I 3 . Specifically, the image processing unit 50 may detect a position where resolution degradation occurs in the image data I 2 instead of the false color or together with the false color, or may detect such positions in both the image data I 2 and I 3 . In step S108, the image processing unit 50 determines whether or not the evaluation data D1 satisfies a desired criterion according to the position where the resolution degradation occurs (resolution degradation position).
  • the image processing unit 50 may, for example, multiply the evaluation data D1 by a correction coefficient γ when the resolution degradation position is the screen center.
  • the image processing unit 50 may multiply the evaluation data D1 by a correction coefficient δ when the resolution degradation position is at the four corners of the screen.
  • the correction coefficient δ is smaller than the correction coefficient γ.
  • alternatively, the image processing unit 50 may determine, for example, whether or not the evaluation data D1 is equal to or less than a threshold Th8 when the resolution degradation position is the screen center, and whether or not the evaluation data D1 is equal to or less than a threshold Th9 when the resolution degradation position is at the four corners of the screen.
  • the threshold value Th8 is smaller than the threshold value Th9, for example.
  • control unit 80 sets the value corresponding to the evaluation data D1 at that time as the set value 74 and stores it in the memory unit 70.
  • a false color, which is one of the artifacts, can be reduced by adjusting the effect of the low-pass filter.
  • however, the resolution decreases in exchange for the reduction of the false colors.
  • conversely, if the effect of the low-pass filter is adjusted so that the resolution does not decrease excessively, the effect of reducing the false colors is reduced.
  • that is, in a low-pass filter there is a trade-off between resolution and false color reduction. For this reason, when a low-pass filter is set by focusing on only one of the resolution and the false color, an image with an excessively strong low-pass effect may be selected, and the resolution may deteriorate more than necessary.
  • in the present embodiment, by contrast, the evaluation data D1 and the evaluation data D2 are derived based on a plurality of image data (image data I 1 and a plurality of image data I 2 ) acquired while changing the effect of the variable optical LPF 11. Further, the set value 74 of the variable optical LPF 11 is derived based on the derived evaluation data D1 and evaluation data D2. Thereby, it is possible to obtain image data I in which the image quality is balanced according to the purpose of the user.
  • moreover, since the evaluation data D1 is generated based on the difference between the frequency spectra of the two image data I 1 and I 2 , and the evaluation data D2 is generated based on the difference between the gradations of the two image data I 1 and I 2 , it is possible to obtain the image data I in which the image quality is balanced with higher accuracy.
  • furthermore, image data I in which the image quality is balanced can be obtained by a simple method.
  • in the modification in which the area where the effect of the variable optical LPF 11 can be controlled is divided into a plurality of areas R and the evaluation data D1 and the evaluation data D2 are derived for each divided area R, the number of times the effect of the variable optical LPF 11 is actually changed can be reduced as compared with the above embodiment. As a result, it is possible to obtain image data I with a balanced image quality while increasing the processing speed.
  • in the case where face detection is used, when the evaluation data D2 falls below the face detection threshold Th1, the value that makes the effect of the variable optical LPF 11 the weakest is set as the set value 74 of the variable optical LPF 11.
  • thereby, image data I with few false colors in the hair of the subject, and with a balanced image quality, can be obtained.
  • when the shooting mode is the macro shooting mode, every time the evaluation data D2 falls below the threshold, the value corresponding to the evaluation data D2 at that time is stored as an appropriate value candidate, and the user can select one image data I 2 from the plurality of image data I 2 corresponding to the appropriate value candidates 35a. As a result, it is possible to obtain image data I with a balanced image quality according to the purpose of the user.
  • in the false color processing mode, the value obtained when the evaluation data D2 satisfies the criterion corresponding to the false color position can be set as the set value 74 of the variable optical LPF 11.
  • moreover, since the evaluation data D2 is derived based on the difference between the gradation data of the two image data I 1 and I 2 and the saturation data 33 obtained from the image data I 1 , it is possible to obtain the image data I in which the image quality is balanced with higher accuracy.
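The derivation of D2 just described can be sketched as follows; a minimal sketch assuming flat per-dot lists, and the saturation-weighted sum is an illustrative assumption since the text does not specify the exact weighting.

```python
# Illustrative sketch: evaluation data D2 is derived from the per-dot
# gradation difference between image data I1 and I2, weighted by the
# saturation data 33 obtained from I1, so that differences in highly
# saturated (false-color-prone) areas count more.

def evaluation_d2(i1, i2, saturation):
    """i1, i2, saturation: equal-length flat lists of per-dot values."""
    return sum(abs(g1 - g2) * s for g1, g2, s in zip(i1, i2, saturation))
```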
  • the image data I 1 is generated by the image sensor 40 when the effect of the variable optical LPF 11 is maximized or substantially maximized.
  • accordingly, the evaluation data D1 and the evaluation data D2 are derived based on the image data I 1 having no false color or almost no false color and the image data I 2 having the false color, so that image data I with a balanced image quality can be obtained.
  • furthermore, since the variable optical LPF 11 is set while the shutter button is half-depressed, a setting instruction for the variable optical LPF 11 can be issued together with an AF (autofocus) condition or an iris 14 condition setting instruction. As a result, it is possible to obtain image data I with a balanced image quality without increasing the work load on the user.
  • in the above embodiment, the image processing unit 50 maximizes or substantially maximizes the effect of the variable optical LPF 11 in step S102.
  • however, the image processing unit 50 may set the effect of the variable optical LPF 11 to an arbitrary value in step S102.
  • in that case, the image processing unit 50 may derive the saturation data 33 after performing a process of erasing the false color on the areas of the acquired image data I 1 where a false color may occur. In such a case, it is possible to shorten the time required to obtain an appropriate set value for the variable optical LPF 11 compared to when the effect of the variable optical LPF 11 is maximized or substantially maximized.
  • the variable optical LPF 11 is configured to change the cutoff frequency fc by voltage control.
  • the control unit 80 may change the cutoff frequency fc of the variable optical LPF 11 by frequency control.
  • in the above embodiment, the imaging apparatus 1 may use, instead of the variable optical LPF 11, an optical LPF that changes the cutoff frequency fc in accordance with the magnitude of physically applied vibration. That is, the variable optical LPF according to the present disclosure may be configured so that the cutoff frequency fc can be adjusted by a voltage change, a frequency change, or a vibration amplitude change.
  • control unit 80 automatically changes the effect of the variable optical LPF 11.
  • the user may manually change the effect of the variable optical LPF 11.
  • FIG. 20 shows an example of an imaging procedure in this modification.
  • the image processing unit 50 determines whether or not the change of the effect of the variable optical LPF 11 is completed after steps S101 to S107 are performed (step S120). Steps S105 to S107 are repeatedly executed until the change of the effect of the variable optical LPF 11 is completed. When the change of the effect of the variable optical LPF 11 is completed, the image processing unit 50 derives the appropriate value candidates k (k1, k2, k3, ...) of the variable optical LPF 11 corresponding to the positions indicated by the white circles in FIG. 21 (step S121).
  • FIG. 21 shows an example of the evaluation data D1 and D2.
  • specifically, the image processing unit 50 sets, as the appropriate value candidates k (k1, k2, k3, ...) of the variable optical LPF 11, the values corresponding to the position of the end portion of each range (the end portion where the effect of the variable optical LPF 11 is smaller).
  • each appropriate value candidate k (k1, k2, k3, ...) corresponds to the set value of the variable optical LPF 11 when the effect of the variable optical LPF 11 is weakened (the cutoff frequency fc is increased) within the range in which the evaluation data D2 does not increase.
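Deriving the candidates k from a stepped D2 profile can be sketched as follows; a hedged sketch assuming the sweep is available as two parallel lists, with ascending cutoff frequencies.

```python
# Illustrative sketch: within each flat run of the stepped D2 profile, the
# candidate k is the sample at the run's end where the LPF effect is weakest
# (largest cutoff fc), i.e. the cutoff can be raised that far without the
# false-color evaluation data D2 increasing.

def candidates_from_steps(fcs, d2s):
    """fcs: ascending cutoff frequencies; d2s: matching D2 values (step-shaped)."""
    candidates = []
    for i in range(len(fcs)):
        last = i == len(fcs) - 1
        if last or d2s[i + 1] != d2s[i]:     # end of a flat run of D2
            candidates.append(fcs[i])        # weakest-effect end of the run
    return candidates
```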
  • the image processing unit 50 requests the user to select one image data I 2 from among the plurality of image data I 2 corresponding to the appropriate value candidates k (k1, k2, k3, ...). Specifically, the image processing unit 50 outputs one image data I 2 from among the plurality of image data I 2 corresponding to the appropriate value candidates k to the display unit 60. Then, the display unit 60 displays the image data I 2 input from the image processing unit 50 (step S122). The image processing unit 50 sequentially outputs the plurality of image data I 2 corresponding to the appropriate value candidates k to the display unit 60.
  • every time a display instruction from the user (for example, depression of an operation button attached to the main body of the imaging apparatus 1 or rotation of an operation dial attached to the main body of the imaging apparatus 1) is detected, the image processing unit 50 switches the image data I 2 displayed on the display unit 60 to the image data I 2 corresponding to another appropriate value candidate k.
  • the display unit 60 replaces the image data I 2 being displayed every time new image data I 2 is input from the image processing unit 50.
  • when the image processing unit 50 detects a selection instruction from the user, it determines, as the set value 74, the appropriate value candidate k corresponding to the image data I 2 displayed on the display unit 60 at that time. In this way, the image processing unit 50 causes the user to select the appropriate image data I 2 (that is, the value appropriate for the user) (step S123).
  • in this way, the image processing unit 50 can set, as the set value of the variable optical LPF 11, one set value k among the plurality of set values k. Thereby, the trouble of manual setting by the user can be reduced. Furthermore, it is possible to obtain image data I having the balance between the resolution and the false color intended by the user.
  • the image processing unit 50 may generate, in each image data I 2 , data in which the positions of the false colors included in the plurality of image data I 2 are emphasized (specifically, enlarged), and output it to the display unit 60.
  • the “emphasis” refers to distinguishing a target region from other regions (for example, making it more conspicuous than other regions).
  • For example, when the set value of the variable optical LPF 11 transitions to a value (set value k) corresponding to the white circle in FIG. 21 in response to a user request, the image processing unit 50 may detect a change (or occurrence) of a false color based on a plurality of image data I2 photographed with different low-pass characteristics (cutoff frequencies fc). The image processing unit 50 may further generate image data I2 in which the region (region 61) where the false color has changed (or occurred) due to the transition is emphasized (specifically, enlarged) (see FIG. 22), and output it to the display unit 60.
  • FIG. 22 shows a state in which a part of the image data I2 is enlarged. The region 61 corresponds to a region displayed at a magnification larger than that of the other regions of the image data I2.
  • In this case, the display unit 60 highlights (specifically, displays at an enlarged scale) the region (region 61) of the image data I2 where the false color has changed (or occurred) due to the transition. Even when the display unit 60 is small and the change in false color is difficult to recognize visually, the user can intuitively know that the false color has changed and where it has occurred. As a result, the effect of the variable optical LPF 11 can be easily adjusted.
  • The image processing unit 50 does not need to enlarge every location in the image data I2 where the false color has changed (or occurred). For example, the image processing unit 50 may enlarge only the location (region 61) where the evaluation data D2 is largest in the image data I2 and output the result to the display unit 60.
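As a rough illustration of enlarging only the location where the evaluation data D2 is largest, the following sketch locates the block with the maximum D2 in a per-block evaluation map and upscales it with nearest-neighbour enlargement. The function names and the list-of-lists data layout are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch: pick the block whose false-color evaluation data D2 is
# largest and produce an enlarged view of it (nearest-neighbour upscaling).

def find_max_d2_block(d2_map):
    """Return (row, col) of the block with the largest evaluation data D2."""
    best = max((v, r, c)
               for r, row in enumerate(d2_map)
               for c, v in enumerate(row))
    return best[1], best[2]

def enlarge(region, scale=2):
    """Nearest-neighbour upscale of a 2D pixel region by an integer factor."""
    out = []
    for row in region:
        scaled_row = [p for p in row for _ in range(scale)]
        out.extend([scaled_row[:] for _ in range(scale)])
    return out

d2_map = [[0.1, 0.5],
          [0.9, 0.2]]                     # per-block evaluation data D2
r, c = find_max_d2_block(d2_map)          # block with the most false color
zoomed = enlarge([[10, 20], [30, 40]])    # enlarged region for display
```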
  • Alternatively, the image processing unit 50 may generate, for each image data I2, an image in which the position of a false color included in the plurality of image data I2 is emphasized in a different manner, and output it to the display unit 60. For example, when the set value of the variable optical LPF 11 transitions to a value (set value k) corresponding to the white circle in FIG. 21, the image processing unit 50 may detect a change (or occurrence) of a false color based on a plurality of image data I2 photographed with different low-pass characteristics (cutoff frequencies fc). The image processing unit 50 may further generate image data I2 in which the region (region 62) where the false color has changed (or occurred) due to the transition is emphasized (specifically, zebra-processed) (see FIG. 23), and output it to the display unit 60.
  • FIG. 23 shows a state in which a part of the image data I2 is zebra-processed. FIG. 23 illustrates a state in which the region 62 is distributed from the center to the lower end of the image data I2.
  • In this case, the display unit 60 highlights (specifically, zebra-displays) the region (region 62) of the image data I2 where the false color has changed (or occurred). Even when the display unit 60 is small and the false color is difficult to recognize visually, the user can intuitively know that the false color has changed and where it has occurred. As a result, the effect of the variable optical LPF 11 can be easily adjusted.
  • The image processing unit 50 does not need to perform zebra processing on every location in the image data I2 where the false color has changed (or occurred). For example, the image processing unit 50 may zebra-process only the location (region 62) where the evaluation data D2 is largest in the image data I2 and output the result to the display unit 60.
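The zebra display mentioned above is commonly an overlay of diagonal stripes on the flagged region. The following sketch is illustrative only; the stripe period and the boolean-mask representation are assumptions, not from the disclosure.

```python
# Hypothetical sketch of zebra processing: overlay diagonal stripes on the
# pixels flagged in a mask (e.g. the region 62 where D2 is largest).

def zebra(image, mask, stripe_value=255, period=4):
    out = [row[:] for row in image]   # copy so the original stays untouched
    for r, mask_row in enumerate(mask):
        for c, flagged in enumerate(mask_row):
            # Diagonal stripes: mark roughly half the pixels along r + c.
            if flagged and (r + c) % period < period // 2:
                out[r][c] = stripe_value
    return out

image = [[0, 0, 0, 0]]
mask = [[True, True, True, True]]
striped = zebra(image, mask)
```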
  • The image processing unit 50 may also generate, for each image data I2, an image in which the position where resolution degradation has occurred in the plurality of image data I2 is emphasized (specifically, highlighted), and output it to the display unit 60. For example, when the set value of the variable optical LPF 11 transitions to a value (set value k) corresponding to the white circle in FIG. 21 in response to a user request, the image processing unit 50 may detect a change in resolution based on a plurality of image data I2 photographed with different low-pass characteristics (cutoff frequencies fc). The image processing unit 50 may further generate, for example, image data I2 in which the edge (portion 63) of the region whose resolution has changed due to the transition is emphasized (specifically, highlighted) (see FIG. 24).
  • FIG. 24 illustrates a state in which a part (portion 63) of the image data I2 is emphasized. FIG. 24 illustrates a state in which the portion 63 is composed of a plurality of line segments.
  • In this case, the display unit 60 highlights the edge (portion 63) of the region of the image data I2 whose resolution has changed. Even when the display unit 60 is small and the resolution degradation is difficult to recognize visually, the user can intuitively know where the resolution degradation has occurred. As a result, the effect of the variable optical LPF 11 can be easily adjusted.
  • The image processing unit 50 does not need to highlight every edge of the regions whose resolution has changed in the image data I2. For example, the image processing unit 50 may highlight only the location (portion 63) where the evaluation data D1 is largest in the image data I2 and output the result to the display unit 60.
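The edge highlighting of the location where the evaluation data D1 is largest can be pictured as drawing the outline of that region over a copy of the image. A minimal sketch, assuming a rectangular region and hypothetical names:

```python
# Hypothetical sketch: highlight the edge (portion 63) of a rectangular
# region (r0, c0, r1, c1) by drawing its outline in a highlight value.

def highlight_edges(image, region, highlight=255):
    out = [row[:] for row in image]
    r0, c0, r1, c1 = region
    for c in range(c0, c1 + 1):       # top and bottom edges
        out[r0][c] = highlight
        out[r1][c] = highlight
    for r in range(r0, r1 + 1):       # left and right edges
        out[r][c0] = highlight
        out[r][c1] = highlight
    return out

image = [[0] * 4 for _ in range(4)]
marked = highlight_edges(image, (1, 1, 2, 2))
```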
  • The image processing unit 50 may apply a plurality of types of emphasis to each image data I2. For example, the image processing unit 50 may generate an image in which both the position of a false color included in the plurality of image data I2 and the position where resolution degradation has occurred are emphasized, and output it to the display unit 60.
  • For example, when the set value of the variable optical LPF 11 transitions to a value (set value k) corresponding to the white circle in FIG. 21, the image processing unit 50 may detect the changes in false color and resolution based on a plurality of image data I2 photographed with different low-pass characteristics (cutoff frequencies fc). The image processing unit 50 may then generate image data I2 in which the region 61 where the false color has changed (or occurred) due to the transition is emphasized (specifically, enlarged) and the edge (portion 63) of the region whose resolution has changed due to the transition is highlighted, and output it to the display unit 60.
  • At this time, the image processing unit 50 may make the display color of the region 61 and the display color of the portion 63 different from each other. In this case, when the set value of the variable optical LPF 11 transitions to a value (set value k) corresponding to the white circle in FIG. 21 in response to a user request, the display unit 60 emphasizes (specifically, enlarges) the region 61 of the image data I2 where the false color has changed (or occurred) and highlights the edge (portion 63) of the region whose resolution has changed.
  • Similarly, the image processing unit 50 may generate image data I2 in which the region 62 where the false color has changed (or occurred) due to the transition is emphasized (specifically, zebra-processed) and the edge (portion 63) of the region whose resolution has changed is highlighted, and output it to the display unit 60. In this case, the display unit 60 emphasizes (specifically, zebra-displays) the region 62 of the image data I2 where the false color has changed (or occurred) and highlights the edge (portion 63) of the region whose resolution has changed.
  • When the image processing unit 50 enlarges the region 61 of the image data I2 or zebra-processes the region 62 of the image data I2, the image processing unit 50 must specify the locations where the false color occurs in advance. Therefore, for example, as shown in FIG. 25, the image processing unit 50 needs to perform steps S117 to S119 before performing step S105 in the series of steps described earlier. FIG. 25 illustrates an example of the imaging procedure. Since performing steps S117 to S119 allows the image processing unit 50 to specify the positions where the false color occurs, the region 61 can be enlarged and the region 62 can be zebra-processed.
  • In step S117, the effect (set value) of the variable optical LPF 11 is automatically set based on the evaluation data D, which is the sum of the value obtained from the first term on the right side of equation (1) (evaluation data D1) and the value obtained from the second term on the right side of equation (1) (evaluation data D2). However, the image processing unit 50 may instead automatically set the effect (set value) of the variable optical LPF 11 based on only one of the evaluation data D1 and the evaluation data D2. In such a case, the effect (set value) of the variable optical LPF 11 can be adjusted according to various purposes.
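The automatic selection described above amounts to choosing, among the candidate set values, the one that minimizes the chosen evaluation data. The sketch below is illustrative (the tuple layout and function name are assumptions); it supports using D = D1 + D2, D1 alone, or D2 alone:

```python
# Hypothetical sketch: choose the set value of the variable optical LPF that
# minimizes the evaluation data. candidates holds (set_value, D1, D2) tuples.

def choose_set_value(candidates, use_d1=True, use_d2=True):
    def score(entry):
        _, d1, d2 = entry
        return (d1 if use_d1 else 0.0) + (d2 if use_d2 else 0.0)
    return min(candidates, key=score)[0]

candidates = [("k1", 0.2, 0.8),   # sharp, but with false color
              ("k2", 0.5, 0.1),   # softer, little false color
              ("k3", 0.4, 0.4)]

best_overall = choose_set_value(candidates)                   # minimizes D1 + D2
best_resolution = choose_set_value(candidates, use_d2=False)  # minimizes D1 only
```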
  • A low-pass filter having a fixed cutoff frequency may be provided instead of the variable optical LPF 11, or the variable optical LPF 11 may be omitted. In that case, the imaging apparatus 1 may include a drive unit that vibrates the light receiving surface 40A of the imaging element 40 in the in-plane direction. The device comprising the imaging element 40 and the drive unit that vibrates the light receiving surface 40A functions as a so-called imager-shift type low-pass filter. Even with this configuration, image data I having a balanced image quality according to the user's purpose can be obtained as in the above embodiment.
  • Likewise, a low-pass filter having a fixed cutoff frequency may be provided instead of the variable optical LPF 11, or the variable optical LPF 11 may be omitted, and the lens driving unit 20 may instead drive the lens 12 within a plane orthogonal to the optical axis of the lens 12. The device comprising the lens 12 and the lens driving unit 20 then functions as a so-called lens-shift type low-pass filter. Even when such a lens-shift type low-pass filter is provided, image data I having a balanced image quality according to the user's purpose can be obtained as in the above embodiment.
  • The imaging device 1 can be applied not only to an ordinary camera but also to an in-vehicle camera, a surveillance camera, a medical camera (an endoscope camera), and the like.
  • The present disclosure may also have the following configurations.
  • (1) A control device including a control unit that sets a set value of a low-pass characteristic of a low-pass filter based on a change in resolution and a change in false color in a plurality of image data captured in accordance with a change in the low-pass characteristic of the low-pass filter.
  • (2) The control device according to (1), wherein the control unit derives first evaluation data regarding the change in resolution based on a difference between a spatial frequency of first image data that is one of the plurality of image data and a spatial frequency of second image data other than the first image data among the plurality of image data. (3) The control device according to (2), wherein the control unit derives second evaluation data regarding the change in false color based on a difference between gradation data of the first image data and gradation data of the second image data.
  • (4) The control device according to (3), wherein the control unit sets the set value based on the first evaluation data and the second evaluation data. (5) The control device, wherein the control unit sets, as the set value, the low-pass characteristic of the low-pass filter at which the first evaluation data is smallest within the range.
  • (6) The control device, wherein the control unit sets the set value based on a change in false color in a face area included in the plurality of image data.
  • (7) The control device according to any one of (1) to (5), wherein the control unit sets, as the set value, the low-pass characteristic of the low-pass filter at the time when the image data selected by the user from among the plurality of image data was captured.
  • (8) The control device according to any one of (1) to (5), wherein the control unit sets the set value based on the change in resolution and the change in false color in the plurality of image data and on a position of a false color included in the plurality of image data.
  • (9) The control device according to any one of (1) to (5), wherein the control unit generates a plurality of the image data in which the positions of false colors included in the plurality of image data are enlarged. (10) The control device according to any one of (1) to (5), wherein the control unit generates a plurality of the image data in which the positions of false colors included in the plurality of image data are highlighted. (11) The control device, wherein the control unit generates a plurality of the image data in which the positions of false colors included in the plurality of image data are zebra-processed. (12) The control device according to (3). (13) The control device according to (3), wherein the first image data is image data obtained when the second evaluation data is minimized or substantially minimized within a changeable range of the low-pass characteristic of the low-pass filter.
  • (14) An imaging device including: an imaging element that generates image data from light incident via a low-pass filter; and a control unit that sets a set value of a low-pass characteristic of the low-pass filter based on a change in resolution and a change in false color in a plurality of image data captured in accordance with a change in the low-pass characteristic of the low-pass filter. (15) The imaging device according to (14), wherein the control unit generates a change instruction for changing the low-pass characteristic of the low-pass filter at a pitch larger than a settable minimum resolution of the low-pass characteristic of the low-pass filter, together with an imaging instruction synchronized with the change of the low-pass characteristic of the low-pass filter.
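Configuration (15) above, changing the low-pass characteristic at a pitch larger than the settable minimum resolution while issuing an imaging instruction synchronized with each change, can be sketched as follows. The function and parameter names are illustrative assumptions, not from the disclosure:

```python
# Hypothetical sketch of configuration (15): sweep the cutoff frequency fc at
# a pitch coarser than the settable minimum resolution, capturing one image
# in synchronization with each change of the low-pass characteristic.

def sweep_and_capture(fc_min, fc_max, min_resolution, coarseness, capture):
    pitch = min_resolution * coarseness   # pitch > settable minimum resolution
    images = []
    fc = fc_min
    while fc <= fc_max:
        images.append(capture(fc))        # imaging synchronized with the change
        fc += pitch
    return images

# Stub capture function standing in for the imaging element:
captured = sweep_and_capture(0, 10, 1, 5, lambda fc: ("image", fc))
# captures at fc = 0, 5, 10 rather than at every settable step
```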

Abstract

A control device according to one embodiment of the present invention controls an imaging device provided with: a variable optical low pass filter; and an imaging element which generates image data from incident light which has passed through the variable optical low pass filter. The control device is provided with a first derivation unit and a second derivation unit. The first derivation unit derives resolution deterioration amounts and false colour generation amounts on the basis of a plurality of pieces of image data generated by the imaging element as a result of driving the imaging element while changing the effect of the variable optical low pass filter. The second derivation unit derives set values for the variable optical low pass filter on the basis of the resolution deterioration amounts and the false colour generation amounts derived by the first derivation unit.

Description

Control device and imaging device
The present disclosure relates to a control device and an imaging device.
When a Bayer-coded imager is used in an ordinary single-chip digital camera, the lost information must be restored by performing demosaic processing on the image data obtained from the imager. However, since it is in principle difficult to derive the lost information completely, a reduction in resolution and the occurrence of artifacts cannot be avoided.
It is therefore conceivable to provide a low-pass filter between the lens and the imager. In that case, false color, which is one of the artifacts, can be reduced by adjusting the effect of the low-pass filter. As a low-pass filter for reducing moire, for example, the optical low-pass filter described in Patent Document 1 below can be used.
JP 2006-08045 A
In the invention described in Patent Document 1, images are acquired while the effect of the low-pass filter is weakened until moire occurs, and the image obtained immediately before moire occurred is selected from among the acquired images. However, with the invention of Patent Document 1, an image to which the effect of the low-pass filter is strongly applied is easily selected, so that the image quality deteriorates due to factors other than moire. It is therefore desirable to provide a control device and an imaging device capable of acquiring an image with balanced image quality when a low-pass filter is used.
A control device according to an embodiment of the present disclosure includes a control unit that sets a set value of the low-pass characteristic of a low-pass filter based on a change in resolution and a change in false color in a plurality of image data captured in accordance with a change in the low-pass characteristic of the low-pass filter.
An imaging apparatus according to an embodiment of the present disclosure includes an imaging element that generates image data from light incident via a low-pass filter. The imaging apparatus further includes a control unit that sets a set value of the low-pass characteristic of the low-pass filter based on a change in resolution and a change in false color in a plurality of image data captured in accordance with a change in the low-pass characteristic of the low-pass filter.
In the control device and the imaging apparatus according to the embodiment of the present disclosure, the set value of the low-pass characteristic of the low-pass filter is set based on a change in resolution and a change in false color in a plurality of image data captured in accordance with a change in the low-pass characteristic of the low-pass filter. As a result, a set value of the low-pass filter suitable for obtaining image data with a balanced image quality according to the user's purpose is obtained.
According to the control device and the imaging apparatus of the embodiment of the present disclosure, a set value of the low-pass filter suitable for obtaining image data with a balanced image quality according to the user's purpose is obtained, so that an image with balanced image quality can be acquired when the low-pass filter is used. Note that the effects of the present disclosure are not necessarily limited to those described here and may be any of the effects described in this specification.
A diagram illustrating an example of the schematic configuration of an imaging apparatus according to an embodiment of the present disclosure.
A diagram illustrating an example of the schematic configuration of the light receiving surface in the imaging device of FIG. 1.
A diagram illustrating an example of the data stored in the memory unit of FIG. 1.
A diagram illustrating an example of the schematic configuration of the variable optical LPF (Low Pass Filter) of FIG. 1.
A diagram illustrating an example of the polarization conversion efficiency curve (V-T curve) of the liquid crystal layer of FIG. 4.
A diagram illustrating an example of the action of the variable optical LPF of FIG. 5.
A diagram illustrating an example of the action of the variable optical LPF of FIG. 5.
A diagram illustrating an example of the action of the variable optical LPF of FIG. 5.
A diagram illustrating an example of the MTF (Modulation Transfer Function) of FIGS. 6A to 6C.
A diagram illustrating an example of evaluation data.
A diagram illustrating an example of the schematic configuration of the image processing unit of FIG. 1.
A diagram illustrating an example of the imaging procedure in the imaging apparatus of FIG. 1.
A diagram illustrating an example of evaluation data.
A diagram illustrating an example of the imaging procedure in the imaging apparatus of FIG. 1.
A diagram illustrating an example of evaluation data.
A diagram illustrating an example of the imaging procedure in the imaging apparatus of FIG. 1.
A diagram illustrating an example of a histogram.
A diagram illustrating an example of the imaging procedure in the imaging apparatus of FIG. 1.
A diagram illustrating an example of the data stored in the memory unit of FIG. 1.
A diagram illustrating an example of false color occurrence positions to which the correction coefficients α and β are applied.
A diagram illustrating an example of false color occurrence positions to which the thresholds Th6 and Th7 are applied.
A diagram illustrating an example of the imaging procedure in the imaging apparatus of FIG. 1.
A diagram illustrating an example of evaluation data.
A diagram illustrating a state in which a part of image data is enlarged.
A diagram illustrating a state in which a part of image data is displayed with zebra processing.
A diagram illustrating a state in which a part of image data is displayed with edge emphasis.
A diagram illustrating an example of the imaging procedure in the imaging apparatus of FIG. 1.
Hereinafter, modes for carrying out the present disclosure will be described in detail with reference to the drawings. The description is given in the following order.

1. Embodiment
2. Modification examples
<1. Embodiment>
[Configuration]
FIG. 1 illustrates an example of a schematic configuration of an imaging apparatus 1 according to an embodiment of the present disclosure. The imaging apparatus 1 includes, for example, an imaging optical system 10, a lens driving unit 20, an LPF driving unit 30, an imaging element 40, and an image processing unit 50. The imaging apparatus 1 further includes, for example, a display panel 60, a memory unit 70, a control unit 80, and an operation unit 90.
(Imaging optical system 10)
The imaging optical system 10 includes, for example, a variable optical LPF (Low Pass Filter) 11 and a lens 12. The lens 12 forms an optical subject image on the imaging element 40. The lens 12 comprises a plurality of lens elements, at least one of which can be moved by the lens driving unit 20; this enables optical focus adjustment and zoom adjustment. The variable optical LPF 11 removes high spatial frequency components contained in the light and, by being driven by the LPF driving unit 30, changes the cutoff frequency fc, which is one of its low-pass characteristics. The imaging optical system 10 may be configured integrally with the imaging element 40 or separately from it. Likewise, the variable optical LPF 11 in the imaging optical system 10 may be configured integrally with the imaging element 40 or separately from it. The specific configuration of the variable optical LPF 11 and the method of modulating the cutoff frequency fc will be described in detail later.
(Lens driving unit 20, LPF driving unit 30)
The lens driving unit 20 drives at least one lens element of the lens 12 for optical zoom magnification, focus adjustment, and the like in accordance with instructions from the control unit 80. The LPF driving unit 30 adjusts the effect of the variable optical LPF 11 by changing its low-pass characteristic (cutoff frequency fc) in accordance with instructions from the control unit 80. Here, the "effect of the variable optical LPF 11" refers to the reduction of spatial frequency components of the light that are higher than the Nyquist frequency. The LPF driving unit 30 adjusts the cutoff frequency fc of the variable optical LPF 11 by, for example, applying a predetermined voltage V (of constant frequency) between the electrodes of the variable optical LPF 11.
(Imaging element 40)
The imaging element 40 generates image data by photoelectrically converting the subject image formed on the light receiving surface 40A through the lens 12 and the variable optical LPF 11 into an electrical signal. The imaging element 40 is constituted by, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor.
The imaging element 40 has, for example, as shown in FIG. 2, a light receiving surface 40A in which a plurality of photoelectric conversion elements 40B are two-dimensionally arranged at a predetermined interval. The imaging element 40 further includes, for example, a color filter array 40C on the light receiving surface 40A. FIG. 2 illustrates a case in which the color filter array 40C is a Bayer array in which 2 × 2 tiles of R, G, G, and B are arranged in a matrix. The color filter array 40C may have an arrangement different from the Bayer array. The imaging element 40 generates image data based on the light incident via the variable optical LPF 11. Specifically, the imaging element 40 generates color image data by spatially sampling the light incident through the lens 12 and the variable optical LPF 11. The image data has, for each pixel, the color signal of the corresponding color in the color filter array 40C.
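As a concrete illustration of the spatial sampling with the Bayer color filter array 40C described above (a sketch, not text from the disclosure), each pixel records only the color of the filter above it; in an R, G / G, B tile the color at a given pixel position follows from its row and column parity:

```python
# Hypothetical sketch: color of the Bayer filter (R, G / G, B tiling, as in
# FIG. 2) covering the photoelectric conversion element at (row, col).

def bayer_color_at(row, col):
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"   # even rows: R G R G ...
    return "G" if col % 2 == 0 else "B"       # odd rows:  G B G B ...

# Each pixel of the raw image data therefore holds one color signal;
# demosaic processing later interpolates the two missing colors per pixel.
```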
(Image processing unit 50)
The image processing unit 50 performs image processing such as white balance, demosaicing, gradation conversion, color conversion, and noise reduction on the image data generated by the imaging element 40. The image processing unit 50 also performs processing such as converting the image data into display data suitable for display on the display panel 60 and into data suitable for recording in the memory unit 70. The image processing performed by the image processing unit 50 will be described in detail later.
(Display panel 60, memory unit 70, operation unit 90)
The display panel 60 is constituted by, for example, a liquid crystal panel and displays the display data and the like input from the image processing unit 50. The memory unit 70 can store captured image data, various programs, and the like. The memory unit 70 is constituted by, for example, a nonvolatile memory such as an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory, or a resistance-change memory. The memory unit 70 may be an external memory attachable to and detachable from the imaging apparatus 1. As shown in FIG. 3, the memory unit 70 stores various data generated by the image processing unit 50 and various data input from the operation unit 90. FIG. 3 shows an example of the data stored in the memory unit 70: image data 71, saturation data 72, evaluation data 73, and a set value 74, each of which will be described in detail later. The operation unit 90 receives instructions from the user and is constituted by, for example, operation buttons, a shutter button, an operation dial, a keyboard, and a touch panel.
(Control unit 80)
The control unit 80 is a processor that controls the lens driving unit 20, the LPF driving unit 30, the imaging element 40, the image processing unit 50, the display panel 60, and the memory unit 70. By controlling the lens driving unit 20, the control unit 80 adjusts the optical zoom magnification and focus of the lens 12. By controlling the LPF driving unit 30, the control unit 80 adjusts the effect (cutoff frequency fc) of the variable optical LPF 11. The control unit 80 further drives the imaging element 40 so that the imaging element 40 generates image data and outputs the generated image data to the image processing unit 50. By controlling the image processing unit 50, the control unit 80 causes it to perform the image processing described above and to output the resulting data to the memory unit 70 and the display panel 60. The control unit 80 also controls the lens driving unit 20, the LPF driving unit 30, the imaging element 40, the image processing unit 50, and the display panel 60 in accordance with various data input from the operation unit 90, and stores the data input from the operation unit 90 in the memory unit 70.
(Variable optical LPF 11)
Next, the variable optical LPF 11 will be described in detail. FIG. 4 illustrates an example of a schematic configuration of the variable optical LPF 11. The variable optical LPF 11 removes high spatial frequency components contained in light. The variable optical LPF 11 is driven by the LPF driving unit 30 to change the effect (cut-off frequency fc) of the variable optical LPF 11. The variable optical LPF 11 is configured to change the cutoff frequency fc by, for example, a peak value modulation method. The peak value modulation method will be described later in detail.
The variable optical LPF 11 includes a pair of birefringent plates 111 and 115 having birefringence, and a liquid crystal layer 113 disposed between the pair of birefringent plates 111 and 115. The variable optical LPF 11 further includes electrodes 112 and 114 that apply an electric field to the liquid crystal layer 113. Note that the variable optical LPF 11 may include, for example, an alignment film that regulates the alignment of the liquid crystal layer 113. The electrodes 112 and 114 are arranged to face each other with the liquid crystal layer 113 interposed therebetween. Each of the electrodes 112 and 114 is composed of one sheet-like electrode. Note that at least one of the electrode 112 and the electrode 114 may be composed of a plurality of partial electrodes.
The electrodes 112 and 114 are translucent conductive films such as ITO (Indium Tin Oxide), for example. The electrodes 112 and 114 may be, for example, a light-transmitting inorganic conductive film, a light-transmitting organic conductive film, or a light-transmitting metal oxide film. The birefringent plate 111 is disposed on the light incident side of the variable optical LPF 11, and, for example, the outer surface of the birefringent plate 111 is a light incident surface 110A. The incident light L1 is light that enters the light incident surface 110A from the subject side. For example, the birefringent plate 111 is disposed so that the optical axis of the incident light L1 is parallel to the normal of the birefringent plate 111 (or the light incident surface 110A). The birefringent plate 115 is disposed on the light exit side of the variable optical LPF 11, and, for example, the outer surface of the birefringent plate 115 is a light exit surface 110B. The transmitted light L2 of the variable optical LPF 11 is light emitted to the outside from the light exit surface 110B. The birefringent plate 111, the electrode 112, the liquid crystal layer 113, the electrode 114, and the birefringent plate 115 are stacked in this order from the light incident side.
The birefringent plates 111 and 115 are birefringent and have a uniaxial crystal structure. The birefringent plates 111 and 115 have the function of separating circularly polarized light into p-polarized and s-polarized components (ps separation) using birefringence. The birefringent plates 111 and 115 are made of, for example, quartz, calcite, or lithium niobate.
In the birefringent plates 111 and 115, for example, the image separation directions are opposite to each other. The optical axis AX1 of the birefringent plate 111 and the optical axis AX2 of the birefringent plate 115 intersect each other in a plane parallel to the normal of the light incident surface 110A. The angle formed by the optical axis AX1 and the optical axis AX2 is, for example, 90°. Further, the optical axes AX1 and AX2 obliquely intersect the normal of the light incident surface 110A. The angle formed by the optical axis AX1 and the normal of the light incident surface 110A is, for example, smaller than 90° counterclockwise with respect to that normal, and is, for example, 45°. The angle formed by the optical axis AX2 and the normal of the light incident surface 110A is, for example, larger than 90° and smaller than 180° counterclockwise with respect to that normal, and is, for example, 135° (= 180° - 45°).
FIG. 5 shows an example of a polarization conversion efficiency curve (V-T curve) of the liquid crystal layer 113. In FIG. 5, the horizontal axis is the voltage V (at a constant frequency) applied between the electrodes 112 and 114, and the vertical axis is the polarization conversion efficiency T. The polarization conversion efficiency T is the phase difference given to linearly polarized light, divided by 90 degrees and multiplied by 100. A polarization conversion efficiency T of 0% means that no phase difference is given to linearly polarized light; for example, linearly polarized light has passed through the medium without its polarization direction being changed. A polarization conversion efficiency T of 100% means that a phase difference of 90 degrees is given to linearly polarized light; for example, p-polarized light has been converted into s-polarized light, or s-polarized light into p-polarized light, on passing through the medium. A polarization conversion efficiency T of 50% means that a phase difference of 45 degrees is given to linearly polarized light; for example, p-polarized light or s-polarized light has been converted into circularly polarized light on passing through the medium.
The liquid crystal layer 113 controls polarization based on the electric field generated by the voltage between the electrodes 112 and 114. In the liquid crystal layer 113, as shown in FIG. 5, when the voltage V1 is applied between the electrodes 112 and 114, the polarization conversion efficiency T becomes T2, and when the voltage V2 (V1 < V2) is applied between the electrodes 112 and 114, the polarization conversion efficiency T becomes T1. T2 is 100% and T1 is 0%. Further, as shown in FIG. 5, when the voltage V3 (V1 < V3 < V2) is applied between the electrodes 112 and 114, the polarization conversion efficiency T becomes T3. T3 is a value larger than 0% and smaller than 100%. FIG. 5 illustrates a case where the voltage V3 is the voltage at which T3 is 50%. Here, the voltage V1 is a voltage equal to or lower than the voltage at the falling position of the polarization conversion efficiency curve; specifically, it is a voltage in the section of the curve where the polarization conversion efficiency is saturated near its maximum value. The voltage V2 is a voltage equal to or higher than the voltage at the rising position of the polarization conversion efficiency curve; specifically, it is a voltage in the section of the curve where the polarization conversion efficiency is saturated near its minimum value.
The voltage V3 is a voltage (intermediate voltage) between the voltage at the falling position of the polarization conversion efficiency curve and the voltage at the rising position of the polarization conversion efficiency curve.
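The three operating points V1, V2, and V3 can be summarized numerically. The sketch below is an illustrative simplification, not taken from the specification: it treats the V-T curve as flat at 100% at or below V1, flat at 0% at or above V2, and, purely for illustration, linear in between (a real V-T curve is S-shaped), and converts T back into the phase difference it represents.

```python
def conversion_efficiency(v, v1, v2):
    """Idealized polarization conversion efficiency T (in %) for an
    applied voltage v, given the saturation voltages v1 < v2.
    Linear interpolation between v1 and v2 is a simplification."""
    if v <= v1:
        return 100.0          # saturated near the maximum (T2)
    if v >= v2:
        return 0.0            # saturated near the minimum (T1)
    return 100.0 * (v2 - v) / (v2 - v1)   # intermediate voltage V3

def phase_difference_deg(t):
    """Phase difference (degrees) corresponding to a conversion
    efficiency t in %, per the definition T = (phase / 90 deg) * 100."""
    return t / 100.0 * 90.0
```

For example, with v1 = 1.0 V and v2 = 3.0 V (assumed values), the midpoint voltage 2.0 V gives T = 50%, i.e. a 45-degree phase difference, corresponding to conversion into circularly polarized light.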
As described above, the liquid crystal layer 113 controls polarization. An example of a liquid crystal having the polarization conversion efficiency curve described above is a TN (Twisted Nematic) liquid crystal. A TN liquid crystal is composed of a chiral nematic liquid crystal and has optical rotatory power, rotating the polarization direction of light passing through it along the twist of the nematic liquid crystal.
Next, the optical action of the variable optical LPF 11 will be described. FIGS. 6A, 6B, and 6C illustrate an example of the action of the variable optical LPF 11. In FIG. 6A, the voltage V between the electrodes 112 and 114 is the voltage V1. In FIG. 6B, the voltage V between the electrodes 112 and 114 is the voltage V2. In FIG. 6C, the voltage V between the electrodes 112 and 114 is the voltage V3.
(When V = V1 (FIG. 6A))
When the circularly polarized incident light L1 enters the birefringent plate 111, it is separated into p-polarized light and s-polarized light with a separation width d1 due to the birefringence of the birefringent plate 111. When the polarization component that vibrates perpendicularly to the optical axis AX1 of the birefringent plate 111 is the s-polarized component of the incident light L1, the separated s-polarized light travels straight through the birefringent plate 111 without being affected by the birefringence and exits from its back surface. The p-polarized component of the incident light L1 vibrates in the direction orthogonal to the vibration direction of the s-polarized light, so it travels obliquely through the birefringent plate 111 under the influence of the birefringence, is refracted at a position on the back surface shifted by the separation width d1, and exits from the back surface of the birefringent plate 111. The birefringent plate 111 therefore separates the incident light L1 into p-polarized transmitted light L2 and s-polarized transmitted light L2 with the separation width d1.
When the p-polarized light separated by the birefringent plate 111 enters the liquid crystal layer 113, whose polarization conversion efficiency is T2, the p-polarized light is converted into s-polarized light and travels straight through the liquid crystal layer 113, exiting from its back surface. When the s-polarized light separated by the birefringent plate 111 enters the liquid crystal layer 113, whose polarization conversion efficiency is T2, the s-polarized light is converted into p-polarized light and travels straight through the liquid crystal layer 113, exiting from its back surface. The liquid crystal layer 113 therefore performs ps conversion on the p-polarized light and the s-polarized light separated by the birefringent plate 111 while keeping the separation width constant.
When the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 enter the birefringent plate 115, their separation width changes due to the birefringence of the birefringent plate 115. When the polarization component that vibrates perpendicularly to the optical axis AX2 of the birefringent plate 115 is the s-polarized light, the s-polarized light travels straight through the birefringent plate 115 without being affected by the birefringence and exits from its back surface. The p-polarized light vibrates in the direction orthogonal to the vibration direction of the s-polarized light, so it travels obliquely through the birefringent plate 115 under the influence of the birefringence, in the direction opposite to the image separation direction of the birefringent plate 111. The p-polarized light is then refracted at a position on the back surface shifted by the separation width d2 and exits from the back surface of the birefringent plate 115. The birefringent plate 115 therefore separates the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 into s-polarized transmitted light L2 and p-polarized transmitted light L2 with a separation width (d1 + d2).
(When V = V2 (FIG. 6B))
The action of the birefringent plate 111 on the incident light L1 is the same as described above, so only the actions of the liquid crystal layer 113 and the birefringent plate 115 are described below. When the p-polarized light and the s-polarized light separated by the birefringent plate 111 enter the liquid crystal layer 113, whose polarization conversion efficiency is T1, they are not polarization-converted by the liquid crystal layer 113; they travel straight through the liquid crystal layer 113 and exit from its back surface. The liquid crystal layer 113 therefore has no optical action on the p-polarized light and the s-polarized light separated by the birefringent plate 111.
When the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 enter the birefringent plate 115, their separation width changes due to the birefringence of the birefringent plate 115. When the polarization component that vibrates perpendicularly to the optical axis AX2 of the birefringent plate 115 is the s-polarized light, the s-polarized light travels straight through the birefringent plate 115 without being affected by the birefringence and exits from its back surface. The p-polarized light vibrates in the direction orthogonal to the vibration direction of the s-polarized light, so it travels obliquely through the birefringent plate 115 under the influence of the birefringence, in the direction opposite to the image separation direction of the birefringent plate 111. The p-polarized light is then refracted at a position on the back surface shifted by the separation width d2 and exits from the back surface of the birefringent plate 115. The birefringent plate 115 therefore separates the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 into s-polarized transmitted light L2 and p-polarized transmitted light L2 with a separation width (|d1 - d2|).
Here, when d1 = d2, the s-polarized transmitted light L2 and the p-polarized transmitted light L2 are emitted from the same location on the back surface of the birefringent plate 115. In this case, therefore, the birefringent plate 115 recombines the s-polarized light and the p-polarized light transmitted through the liquid crystal layer 113 into a single light beam.
(When V = V3 (FIG. 6C))
The action of the birefringent plate 111 on the incident light L1 is the same as described above. Therefore, the operation of the liquid crystal layer 113 and the birefringent plate 115 will be described below. When the p-polarized light separated by the birefringent plate 111 enters the liquid crystal layer 113 having a polarization conversion efficiency of T3 (= 50%), the p-polarized light is converted into circularly polarized light and travels straight through the liquid crystal layer 113. Then, the light is emitted from the back surface of the liquid crystal layer 113. When the s-polarized light separated by the birefringent plate 111 is incident on the liquid crystal layer 113 whose polarization conversion efficiency is T3 (= 50%), the s-polarized light is also converted into circularly polarized light and travels straight through the liquid crystal layer 113. Then, the light is emitted from the back surface of the liquid crystal layer 113. Therefore, the liquid crystal layer 113 converts the p-polarized light and the s-polarized light separated by the birefringent plate 111 into circularly polarized light while keeping the separation width constant.
When the circularly polarized light emitted from the liquid crystal layer 113 enters the birefringent plate 115, it is separated into p-polarized light and s-polarized light with a separation width d2 due to the birefringence of the birefringent plate 115. When the polarization component that vibrates perpendicularly to the optical axis AX2 of the birefringent plate 115 is the s-polarized light, the s-polarized light travels straight through the birefringent plate 115 without being affected by the birefringence and exits from its back surface. The p-polarized light vibrates in the direction orthogonal to the vibration direction of the s-polarized light, so it travels obliquely through the birefringent plate 115 under the influence of the birefringence, in the direction opposite to the image separation direction of the birefringent plate 111, is refracted at a position on the back surface shifted by the separation width d2, and exits from the back surface of the birefringent plate 115. The birefringent plate 115 therefore separates the circularly polarized light converted from p-polarized light by the liquid crystal layer 113 and the circularly polarized light converted from s-polarized light by the liquid crystal layer 113, each with the separation width d2, into s-polarized transmitted light L2 and p-polarized transmitted light L2.
Here, when d1 = d2, the p-polarized light separated from the circularly polarized light that was converted from p-polarized light in the liquid crystal layer 113, and the s-polarized light separated from the circularly polarized light that was converted from s-polarized light in the liquid crystal layer 113, are emitted from the same location on the back surface of the birefringent plate 115. In this case, circularly polarized transmitted light L2 is emitted from the back surface of the birefringent plate 115. In this case, therefore, the birefringent plate 115 separates the two circularly polarized light beams emitted from the liquid crystal layer 113 into p-polarized transmitted light L2 and s-polarized transmitted light L2 with a separation width (d2 + d2), and at the same time recombines part of the once-separated p-polarized light and s-polarized light into combined light at a position between the p-polarized transmitted light L2 and the s-polarized transmitted light L2.
Next, the point image intensity distribution of the transmitted light of the variable optical LPF 11 will be described. When the voltage V2 is applied between the electrodes 112 and 114, the variable optical LPF 11 produces one peak p1 in the point image intensity distribution of its transmitted light. The peak p1 is formed by one transmitted light L2 emitted from the birefringent plate 115. When the voltage V1 is applied between the electrodes 112 and 114, the variable optical LPF 11 produces two peaks p2 and p3 in the point image intensity distribution of its transmitted light. The two peaks p2 and p3 are formed by the two transmitted lights L2 emitted from the birefringent plate 115.
When the voltage V3 is applied between the electrodes 112 and 114 and d1 = d2, the variable optical LPF 11 produces three peaks p1, p2, and p3 in the point image intensity distribution of its transmitted light. The three peaks p1, p2, and p3 are formed by the three transmitted lights L2 emitted from the birefringent plate 115. When the voltage V3 is applied between the electrodes 112 and 114 and d1 ≠ d2, the variable optical LPF 11 produces four peaks p1, p2, p3, and p4 in the point image intensity distribution of its transmitted light. The four peaks p1, p2, p3, and p4 are formed by the four transmitted lights L2 emitted from the birefringent plate 115.
As described above, when the voltage V3 is applied between the electrodes 112 and 114, the variable optical LPF 11 produces three peaks p1 to p3 or four peaks p1 to p4 in the point image intensity distribution of its transmitted light. Here, in the variable optical LPF 11, when the magnitude of the voltage V3 applied between the electrodes 112 and 114 changes, the values of the three peaks p1 to p3 or of the four peaks p1 to p4 change. In other words, in the variable optical LPF 11, when the magnitude of the voltage V3 applied between the electrodes 112 and 114 changes, the point image intensity distribution of the transmitted light changes.
In this way, the variable optical LPF 11 changes the point image intensity distribution of its transmitted light by changing the magnitude of the voltage V applied between the electrodes 112 and 114. The peak values (peak heights) of the three peaks p1 to p3 and of the four peaks p1 to p4 vary with the magnitude of the voltage V applied between the electrodes 112 and 114. On the other hand, the peak positions of the three peaks p1 to p3 and of the four peaks p1 to p4 are determined by the separation widths d1 and d2. The separation widths d1 and d2 are constant regardless of the magnitude of the voltage V3 applied between the electrodes 112 and 114. Therefore, the peak positions of the three peaks p1 to p3 and of the four peaks p1 to p4 are constant regardless of the magnitude of the voltage V3 applied between the electrodes 112 and 114.
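The peak counts described above (one peak at V2 with d1 = d2, two at V1, three at V3 with d1 = d2, four at V3 with d1 ≠ d2) can be reproduced with a simple ray-counting sketch. The model below is illustrative only and is not the patent's own computation: the birefringent plate 111 is assumed to shift p-polarized light by +d1, the birefringent plate 115 to shift p-polarized light by -d2 (the opposite direction), and the liquid crystal layer to convert a fraction t of each ray's polarization, with intensities tracked as weights.

```python
def psf_peaks(t, d1, d2):
    """Return {position: weight} peaks of the transmitted-light point
    image intensity distribution in a simplified ray model.
    t: polarization conversion efficiency as a fraction (0.0 to 1.0);
    d1, d2: separation widths of birefringent plates 111 and 115."""
    peaks = {}
    # Rays leaving plate 111: s-polarized at 0, p-polarized at +d1,
    # each carrying half of the incident intensity.
    for pos, pol in ((0.0, 's'), (d1, 'p')):
        # The liquid crystal layer converts a fraction t of each ray.
        for frac, out_pol in ((1.0 - t, pol), (t, 'p' if pol == 's' else 's')):
            if frac == 0.0:
                continue
            # Plate 115 shifts p-polarized light by -d2; s goes straight.
            out_pos = pos - d2 if out_pol == 'p' else pos
            peaks[out_pos] = peaks.get(out_pos, 0.0) + 0.5 * frac
    return peaks
```

With d1 = d2, `psf_peaks(0.0, d, d)` yields one peak (V = V2), `psf_peaks(1.0, d, d)` two peaks separated by d1 + d2 (V = V1), and `psf_peaks(0.5, d, d)` three peaks (V = V3); with d1 ≠ d2, `psf_peaks(0.5, d1, d2)` yields four peaks.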
Next, the relationship between the point image intensity distribution of the transmitted light and the cutoff frequency fc will be described. FIG. 7 shows an example of the MTF (Modulation Transfer Function) for FIGS. 6A to 6C. The horizontal axis is the spatial frequency, and the vertical axis is the normalized contrast. In FIG. 6B, the variable optical LPF 11 has no light beam separation effect, so the MTF of FIG. 6B matches the MTF of the lens (for example, the lens 13) disposed in front of the variable optical LPF 11. In FIG. 6A, the peak-to-peak distance is wider than in FIG. 6C, and the light beam separation effect is the largest. For this reason, the cutoff frequency fc1 of the MTF in FIG. 6A is smaller than the cutoff frequency fc2 of the MTF in FIG. 6C.
In FIG. 6C, the separation width is equal to the separation width in FIG. 6A, but the number of peaks is larger than in FIG. 6A and the peak-to-peak distance is narrower than in FIG. 6A. In FIG. 6C, the light beam separation effect is therefore weaker than in FIG. 6A, so the cutoff frequency fc2 of the MTF in FIG. 6C is larger than the cutoff frequency fc1 of the MTF in FIG. 6A. The cutoff frequency fc2 of the MTF in FIG. 6C varies with the magnitude of the voltage V3 applied between the electrodes 112 and 114 and can take any value larger than the cutoff frequency fc1 of the MTF in FIG. 6A. Therefore, by changing the magnitude of the voltage V applied between the electrodes 112 and 114, the variable optical LPF 11 can set the cutoff frequency fc to any value equal to or higher than the cutoff frequency obtained when the light beam separation effect is at its maximum.
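Because the MTF is the magnitude of the Fourier transform of the point image intensity distribution, the ordering fc1 < fc2 can be checked numerically. The snippet below is an illustrative sketch with assumed peak positions and weights for the cases of FIGS. 6A and 6C (d1 = d2 = d, lens MTF ignored), not a computation taken from the specification.

```python
import cmath

def mtf(peaks, f):
    """MTF magnitude at spatial frequency f for a point spread function
    given as a {position: weight} dict (weights sum to 1): the magnitude
    of the Fourier transform of the PSF."""
    return abs(sum(w * cmath.exp(-2j * cmath.pi * f * x)
                   for x, w in peaks.items()))

d = 1.0  # separation width d1 = d2 = d, in arbitrary units

# FIG. 6A (V = V1): two equal peaks separated by 2d.
# MTF = |cos(2*pi*f*d)|, so the first zero (cutoff) is fc1 = 1/(4d).
psf_6a = {-d: 0.5, d: 0.5}
fc1 = 1.0 / (4.0 * d)

# FIG. 6C (V = V3, T = 50%): three peaks at -d, 0, +d.
# MTF = |0.5 + 0.5*cos(2*pi*f*d)|, first zero at fc2 = 1/(2d) > fc1.
psf_6c = {-d: 0.25, 0.0: 0.5, d: 0.25}
fc2 = 1.0 / (2.0 * d)
```

At fc1 the two-peak distribution of FIG. 6A is fully cut off while the three-peak distribution of FIG. 6C still passes contrast (MTF = 0.5), which reaches zero only at the higher frequency fc2, consistent with fc1 < fc2 above.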
(Detailed description of the image processing unit 50 and the memory unit 70)
Next, the image processing unit 50 and the memory unit 70 will be described in detail.
As described above, the memory unit 70 stores the image data 71, the saturation data 72, the evaluation data 73, and the set value 74.
The image data 71 is a plurality of image data obtained by the image sensor 40 and includes, for example, image data I1 described later, a plurality of image data I2 described later, and image data I described later. The image data I1 is image data with no false color, or with little false color. Such image data is obtained, for example, when the effect of the variable optical LPF 11 is at or near its maximum, that is, when the cutoff frequency fc of the variable optical LPF 11 is at or near its minimum. The plurality of image data I2 are image data obtained when set values different from the set value used to obtain the image data I1 are set in the variable optical LPF 11. The image data I is image data acquired by the image sensor 40 when the appropriate set value (the set value 74) is set in the variable optical LPF 11. The set value 74 is the set value of the variable optical LPF 11 that matches the user's purpose, and is the value set in the variable optical LPF 11 when the image data I is obtained. The set value 74 is obtained by executing the image processing in the image processing unit 50.
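The procedure implied by this description (capture I1 with the LPF effect at maximum, capture I2 at each other candidate set value, evaluate, and keep the best set value as the set value 74) can be outlined as follows. This is an illustrative sketch, not the patent's literal algorithm; `capture`, `settings`, and `evaluate` are hypothetical placeholders supplied by the caller, and `settings[0]` is assumed to be the setting giving the maximum LPF effect.

```python
def find_lpf_set_value(capture, settings, evaluate):
    """Illustrative outline of deriving the set value 74.
    capture(setting) -> image data at that LPF setting;
    settings: candidate LPF set values, settings[0] = maximum effect;
    evaluate(i1, i2) -> score balancing resolution and false color."""
    i1 = capture(settings[0])          # reference I1: few or no false colors
    best_setting, best_score = None, float('-inf')
    for s in settings:
        i2 = capture(s)                # candidate image data I2
        score = evaluate(i1, i2)       # evaluation data (cf. data 73)
        if score > best_score:
            best_setting, best_score = s, score
    return best_setting                # stored as the set value 74
```

A final image (the image data I) would then be captured with this returned set value applied to the variable optical LPF 11.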
The saturation data 72 is data relating to saturation, obtained from the image data I1. The saturation data 72 is, for example, two-dimensional data in which a value is associated with each dot of a predetermined unit of the image data I1. The evaluation data 73 is data for evaluating resolution and false color.
Resolution is an index of how fine a detail can be distinguished. "High resolution" means that a captured image is fine enough for smaller details to be distinguished. "Low resolution" means that the captured image lacks fineness and is blurred. "Degraded resolution" means that the captured image has lost its original fineness and is more blurred than the original captured image. When a captured image containing high spatial frequencies is passed through a low-pass filter, its high-frequency components are attenuated and its resolution decreases (degrades); degradation of resolution thus corresponds to a loss of high spatial frequencies. A false color, by contrast, is a phenomenon in which a color that is not originally present appears in an image. Because each color is sampled spatially, aliasing (the folding of high-frequency components back into the low-frequency region) occurs when the captured image contains frequency components above the Nyquist frequency, and this aliasing produces false colors. Therefore, when the components above the Nyquist frequency contained in the captured image are attenuated by a low-pass filter, the resolution of the captured image decreases (degrades), while the occurrence of false colors in the captured image is suppressed.
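As a purely illustrative aside (not part of the claimed device), the folding described above can be reproduced numerically: a tone above the Nyquist frequency reappears at a lower, folded frequency after sampling, which in a color image sensor is what gives rise to false colors.

```python
import numpy as np

# Aliasing demonstration: a 70 Hz sine sampled at 100 Hz (Nyquist = 50 Hz)
# folds back to |100 - 70| = 30 Hz.
fs = 100.0                       # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)  # 1 second of samples
x = np.sin(2 * np.pi * 70.0 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # -> 30.0, not 70.0: the component has folded below Nyquist
```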
The evaluation data 73 is derived based on the image data I1 and the plurality of pieces of image data I2; specifically, it is derived based on the image data I1, the plurality of pieces of image data I2, and the saturation data 72. The evaluation data 73 is derived, for example, using the following formula (1).

D = f(I1, I2) + ΣC·G(I1, I2) … (1)
Here, D is the evaluation data for resolution and false color. f(I1, I2) is a function that derives evaluation data (evaluation data D1) for the change (degradation) in resolution between the two pieces of image data I1 and I2 obtained by the image sensor 40; the larger the value of the evaluation data D1, the lower the resolution. The evaluation data D1 corresponds to a specific example of the "first evaluation data" of the present disclosure. f(I1, I2) derives the evaluation data D1 from the spatial frequencies of the two pieces of image data I1 and I2, for example from the difference between the frequency spectrum of the image data I1 and that of the image data I2. For example, f(I1, I2) extracts, from that difference, the power at the frequency at which the effect of the variable optical LPF 11 is greatest, and derives the evaluation data D1 from the extracted power. The first term on the right side of formula (1) thus yields the evaluation data D1. C is the saturation data 72. ΣC·G(I1, I2) is a function that derives evaluation data (evaluation data D2) for the change in false color between the two pieces of image data I1 and I2 from their gradation data; the larger the value of ΣC·G(I1, I2), the wider the area over which false colors occur. The evaluation data D2 corresponds to a specific example of the "second evaluation data" of the present disclosure. ΣC·G(I1, I2) derives the evaluation data D2, for example, from the difference between the gradation data of the image data I1 and that of the image data I2, together with C (the saturation data 72) obtained from the image data I1. For example, ΣC·G(I1, I2) derives, as the evaluation data D2, the sum of that gradation difference multiplied by C. The second term on the right side of formula (1) thus yields evaluation data D2 that takes the saturation of the subject into account.
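The description leaves the concrete forms of f and G open. The following is a minimal sketch of formula (1) under two stated assumptions: f is taken as the mean power of the spectrum difference (rather than the power at one selected frequency), and G as the per-pixel absolute gradation difference weighted by the saturation data C.

```python
import numpy as np

def evaluate(i1, i2, sat):
    """Sketch of formula (1): D = f(I1, I2) + sum(C * G(I1, I2)).

    i1, i2 : 2-D gradation arrays captured with different LPF settings
    sat    : per-pixel saturation data C obtained from i1
    The concrete forms of f and G below are illustrative assumptions.
    """
    # First term: resolution-change data D1 from the difference between
    # the two frequency spectra (here: mean power of the difference).
    s1 = np.abs(np.fft.fft2(i1))
    s2 = np.abs(np.fft.fft2(i2))
    d1 = np.sum(np.abs(s1 - s2)) / i1.size

    # Second term: false-color-change data D2, the saturation-weighted
    # sum of per-pixel gradation differences.
    d2 = np.sum(sat * np.abs(i1.astype(float) - i2.astype(float)))

    return d1 + d2, d1, d2
```

With identical inputs both terms vanish; any difference between the two captures makes both terms positive, matching the interpretation that larger values mean lower resolution and more widespread false color.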
FIG. 8 is a graph illustrating the evaluation data D1 obtained from the first term on the right side of formula (1) and the evaluation data D2 obtained from the second term. The evaluation data D1 shows that, as the effect of the variable optical LPF 11 becomes stronger, the resolution at first deteriorates sharply and thereafter changes more and more gradually. The evaluation data D2 shows that, as the effect of the variable optical LPF 11 becomes stronger, the area over which false colors occur gradually decreases.
(Image processing unit 50)
FIG. 9 illustrates an example of the functional blocks of the image processing unit 50. The image processing unit 50 performs predetermined processing on the image data output from the image sensor 40. For example, the image processing unit 50 sets the set value 74 of the effect (or low-pass characteristic) of the variable optical LPF 11 based on the change in resolution and the change in false color across a plurality of pieces of image data captured while the effect (or low-pass characteristic) of the variable optical LPF 11 is varied. An example of the low-pass characteristic is the cutoff frequency fc. The image processing unit 50 includes, for example, a preprocessing circuit 51, an image processing circuit 52, a display processing circuit 53, a compression/decompression circuit 54, and a memory control circuit 55.
The preprocessing circuit 51 performs optical correction processing, such as shading correction, on the image data output from the image sensor 40. The image processing circuit 52 performs various processes, described later, on the corrected image data output from the preprocessing circuit 51. The image processing circuit 52 also outputs, for example, image data acquired from the image sensor 40 to the display processing circuit 53 and to the compression/decompression circuit 54. The image processing circuit 52 is described in detail later.
The display processing circuit 53 generates, from the image data received from the image processing circuit 52, an image signal to be displayed on the display panel 60, and sends that image signal to the display panel 60. The compression/decompression circuit 54 performs compression encoding on still image data received from the image processing circuit 52 using a still-image encoding scheme such as JPEG (Joint Photographic Experts Group), and performs compression encoding on moving image data received from the image processing circuit 52 using a moving-image encoding scheme such as MPEG (Moving Picture Experts Group). The memory control circuit 55 controls the writing and reading of data to and from the memory unit 70.
Next, an imaging procedure in the imaging apparatus 1 will be described.
FIG. 10 illustrates an example of the imaging procedure in the imaging apparatus 1. First, the control unit 80 prepares for operation (step S101). Operation preparation refers to the preparation required before the image data I is output from the image sensor 40, for example setting AF (autofocus) conditions. Specifically, when the control unit 80 detects a preparation instruction from the user (for example, a half-press of the shutter button), it instructs the lens driving unit 20 and the LPF driving unit 30 to prepare for operation, such as AF. The lens driving unit 20 then, in accordance with the instruction from the control unit 80, prepares the lens 12 before the image data I1 is output; for example, it sets the focus conditions of the lens 12 to predetermined values. At this time, the control unit 80 causes the lens driving unit 20 to carry out the operation preparation, such as AF, while keeping the variable optical LPF 11 from acting optically. The LPF driving unit 30, in accordance with the instruction from the control unit 80, prepares the variable optical LPF 11 before the image data I1 is output; for example, it applies a voltage V2 between the electrodes 112 and 114. At this time, the polarization conversion efficiency T of the variable optical LPF 11 is T1.
Next, the control unit 80 generates a change instruction for varying the effect of the variable optical LPF 11 at a pitch larger than the minimum settable resolution of that effect, and issues the change instruction to the LPF driving unit 30. In other words, the control unit 80 generates a change instruction for varying the cutoff frequency fc of the variable optical LPF 11 at a pitch larger than the minimum settable resolution of the cutoff frequency fc, and issues it to the LPF driving unit 30. The LPF driving unit 30 then gradually changes the cutoff frequency fc of the variable optical LPF 11, for example by changing the voltage V (of constant frequency) applied between the electrodes of the variable optical LPF 11 at a pitch larger than the minimum settable resolution. At this time, the control unit 80 generates an imaging instruction synchronized with the change in the effect (cutoff frequency fc) of the variable optical LPF 11, and issues the imaging instruction to the image sensor 40. The image sensor 40 then, in accordance with the instruction from the control unit 80, captures images in synchronization with the change in the effect of the variable optical LPF 11 (the change in the cutoff frequency fc). As a result, the image sensor 40 generates a plurality of pieces of imaging data that differ from one another in the effect (cutoff frequency fc) of the variable optical LPF 11, and outputs them to the image processing unit 50.
A more specific method for acquiring the plurality of pieces of imaging data is described below.
First, the control unit 80 instructs the LPF driving unit 30 to maximize or nearly maximize the effect of the variable optical LPF 11 within its changeable range (step S102). In other words, the control unit 80 instructs the LPF driving unit 30 to make the cutoff frequency fc of the variable optical LPF 11 minimum or nearly minimum within its changeable range. In accordance with the instruction from the control unit 80, the LPF driving unit 30 then maximizes or nearly maximizes the effect of the variable optical LPF 11 within its changeable range, that is, minimizes or nearly minimizes the cutoff frequency fc of the variable optical LPF 11 within its changeable range. For example, the LPF driving unit 30 applies the voltage V1, or a voltage slightly larger than V1, between the electrodes 112 and 114. As a result, the polarization conversion efficiency T of the variable optical LPF 11 becomes T2 (the maximum) or a value close to T2.
Subsequently, the control unit 80 instructs the image sensor 40 to acquire the image data I1 (step S103). Specifically, the control unit 80 instructs the image sensor 40 to acquire the image data I1 while the effect of the variable optical LPF 11 is maximum or nearly maximum within its changeable range, that is, while the cutoff frequency fc of the variable optical LPF 11 is minimum or nearly minimum within its changeable range. The image sensor 40 then acquires the color image data I1 by discretely sampling, at the light receiving surface 40A, the light incident through the variable optical LPF 11 whose effect is maximum or nearly maximum, that is, whose cutoff frequency fc is minimum or nearly minimum (for example, whose polarization conversion efficiency T is maximum or nearly maximum).
The image data I1 is the image data generated by the image sensor 40 when the image sensor 40 is driven with the effect of the variable optical LPF 11 maximized or nearly maximized within its changeable range, that is, with the cutoff frequency fc of the variable optical LPF 11 minimized or nearly minimized within its changeable range. The image data I1 corresponds to a specific example of the "first image data" of the present disclosure. The image sensor 40 outputs the acquired image data I1 to the image processing unit 50. Next, the image processing unit 50 analyzes the acquired image data I1 to derive data relating to its saturation (the saturation data 72) (step S104).
Next, the control unit 80 instructs the LPF driving unit 30 to change the effect of the variable optical LPF 11 (step S105). Specifically, the control unit 80 instructs the LPF driving unit 30 to make the effect of the variable optical LPF 11 smaller than its previous effect, that is, to make the cutoff frequency fc of the variable optical LPF 11 higher than its previous cutoff frequency fc. In accordance with the instruction from the control unit 80, the LPF driving unit 30 then makes the effect of the variable optical LPF 11 smaller than before, that is, makes the cutoff frequency fc of the variable optical LPF 11 higher than before. For example, the LPF driving unit 30 applies, as the voltage V3 between the electrodes 112 and 114, a voltage larger than the previous one. As a result, the polarization conversion efficiency T of the variable optical LPF 11 takes a value between T2 and T1 that is smaller than the previous value.
Subsequently, the control unit 80 instructs the image sensor 40 to acquire the image data I2 (step S106). The image data I2 corresponds to a specific example of the "second image data" of the present disclosure. Specifically, the control unit 80 instructs the image sensor 40 to acquire the image data I2 while the variable optical LPF 11 is given a setting value different from the setting value used to obtain the image data I1. The image sensor 40 then acquires the color image data I2 by spatially sampling, at the light receiving surface 40A, the light incident through the variable optical LPF 11 whose effect is smaller than its previous effect, that is, whose cutoff frequency fc is higher than its previous cutoff frequency fc (for example, whose polarization conversion efficiency T lies between T2 and T1). The image sensor 40 outputs the acquired image data I2 to the image processing unit 50.
Next, the control unit 80 instructs the image processing unit 50 to derive an appropriate low-pass characteristic (the set value 74). The image processing unit 50 then derives the appropriate low-pass characteristic (set value 74) based on the change in resolution and the change in false color across the plurality of pieces of image data (the image data I1 and the image data I2) obtained by the image sensor 40.
The image processing unit 50 first derives the two evaluation data D1 and D2 based on the plurality of pieces of image data (the image data I1 and the image data I2) (step S107).
The image processing unit 50 derives the evaluation data D1 for the change in resolution from the spatial frequencies of the plurality of pieces of image data (the image data I1 and the image data I2), for example from the difference between the frequency spectrum of the image data I1 and that of the image data I2. For example, the image processing unit 50 derives the evaluation data D1 by applying the image data I1 and the image data I2 to the first term on the right side of formula (1).
The image processing unit 50 derives the evaluation data D2 for the change in false color from the gradation data of the plurality of pieces of image data (the image data I1 and the image data I2), for example from the difference between the gradation data of the image data I1 and that of the image data I2, together with the saturation data 72 obtained from the image data I1. For example, the image processing unit 50 derives the evaluation data D2 by applying the image data I1, the image data I2, and the saturation data 72 to the second term on the right side of formula (1).
Next, the image processing unit 50 determines, based on the two obtained evaluation data D1 and D2, whether the effect of the variable optical LPF 11 is appropriate (step S108). That is, the image processing unit 50 determines whether the two obtained evaluation data D1 and D2 satisfy a desired criterion according to the purpose. The "desired criterion according to the purpose" varies depending on the shooting mode, the subject, the scene, and so on. For example, the image processing unit 50 determines whether the sum (evaluation data D) of the two newly obtained evaluation data D1 and D2 is the minimum value within the controllable range of the effect (cutoff frequency fc) of the variable optical LPF 11.
When the image processing unit 50 determines that the effect of the variable optical LPF 11 is appropriate, it takes the low-pass characteristic of the variable optical LPF 11 at that time, that is, the setting value of the variable optical LPF 11 at that time, as the set value 74, and stores the set value 74 in the memory unit 70. For example, when the image processing unit 50 determines that the sum (evaluation data D) of the two newly obtained evaluation data D1 and D2 is the minimum value within the controllable range of the effect of the variable optical LPF 11, it takes the setting value of the variable optical LPF 11 at that time as the set value 74.
When the image processing unit 50 determines that the effect of the variable optical LPF 11 is not appropriate, the procedure returns to step S105, the effect (cutoff frequency fc) of the variable optical LPF 11 is changed, and the steps from step S106 onward are carried out again in order. In this way, by repeating steps S105 to S108, the image processing unit 50 derives the set value 74 of the variable optical LPF 11 based on the two derived evaluation data D1 and D2. That is, in response to a preparation instruction from the user (for example, a half-press of the shutter button), the image processing unit 50 derives the set value 74 of the variable optical LPF 11 based on a plurality of pieces of image data (the image data I1 and the plurality of pieces of image data I2) captured with different low-pass characteristics (cutoff frequencies fc). The image processing unit 50 notifies the control unit 80 that the set value 74 has been obtained.
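The loop of steps S102 through S108 can be sketched as follows. The helper names `set_cutoff`, `capture`, and `evaluate` are hypothetical stand-ins for the LPF driving unit 30, the image sensor 40, and an evaluation per formula (1); the criterion shown (minimum D over the candidate settings) is one of the examples given above.

```python
def derive_set_value(cutoff_candidates, set_cutoff, capture, evaluate):
    """Return the cutoff frequency fc whose evaluation data D is smallest.

    cutoff_candidates: fc values in ascending order; the first (smallest)
    one plays the role of the reference capture I1 (steps S102-S103).
    """
    set_cutoff(cutoff_candidates[0])
    i1 = capture()                    # reference image data I1
    best_fc, best_d = None, float("inf")
    for fc in cutoff_candidates[1:]:  # steps S105-S108, repeated
        set_cutoff(fc)
        i2 = capture()                # image data I2 at this setting
        d = evaluate(i1, i2)          # evaluation data D per formula (1)
        if d < best_d:
            best_fc, best_d = fc, d
    return best_fc                    # stored as the set value 74
```

In the device itself the loop terminates as soon as the criterion is satisfied rather than always scanning every candidate; the exhaustive scan here is a simplification.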
After receiving from the image processing unit 50 the notification that the set value 74 has been obtained, when the control unit 80 detects an imaging instruction from the user (for example, a press of the shutter button), it instructs the LPF driving unit 30 to set the variable optical LPF 11 to the set value 74. Specifically, upon detecting the imaging instruction from the user, the control unit 80 reads the set value 74 from the memory unit 70, outputs the read set value 74 to the LPF driving unit 30, and instructs the LPF driving unit 30 to set the variable optical LPF 11 to the set value 74. The LPF driving unit 30 then sets the variable optical LPF 11 to the set value 74 in accordance with the instruction from the control unit 80.
The control unit 80 further instructs the image sensor 40 to acquire the image data I. The image sensor 40 then acquires the image data I in accordance with the instruction from the control unit 80 (step S109). That is, the image sensor 40 acquires the color image data I by discretely sampling, at the light receiving surface 40A, the light incident through the variable optical LPF 11 on which the set value 74 has been set. The image sensor 40 outputs the acquired image data I to the image processing unit 50. The image processing unit 50 performs predetermined processing on the image data I and then outputs the processed image data I to the memory unit 70 and to the display panel 60. The memory unit 70 stores the image data I input from the image processing unit 50, and the display panel 60 displays it (step S110).
In the imaging apparatus 1, the above-described operation preparation may instead be performed manually by the user. Also, if the focus conditions of the lens 13 or the like are changed while the image data I2 is being acquired successively with the effect of the variable optical LPF 11 being varied, the procedure may be redone from the acquisition of the image data I1. The above-described instruction from the user may be given by a method other than a half-press of the shutter button. For example, it may be given by pressing a button attached to the body of the imaging apparatus 1 (an operation button other than the shutter button) after the focus conditions of the lens 13 and the like have been set, or by turning an operation dial attached to the body of the imaging apparatus 1 after those conditions have been set.
 As described above, the "desired criterion according to the purpose" differs depending on the shooting mode, the subject, the scene, and the like. Examples of the "desired criterion according to the purpose" are therefore given below.
(When reducing resolution degradation without a large increase in false color)
 For example, as shown in FIG. 8, when there is a range (range α) in which the evaluation data D2 changes relatively gently with respect to the effect of the variable optical LPF 11, the control unit 80 may set, as the set value 74, the effect of the variable optical LPF 11 at which the evaluation data D2 becomes smallest within that range α (white circle in FIG. 8). In other words, when there is a range (range α) in which the evaluation data D2 changes relatively gently with respect to the cutoff frequency fc of the variable optical LPF 11, the control unit 80 may set, as the set value 74, the cutoff frequency fc of the variable optical LPF 11 at which the evaluation data D2 becomes smallest within that range α (white circle in FIG. 8). In this case, resolution degradation can be reduced without a large increase in false color.
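The range-α selection described above can be sketched as follows. This is a hypothetical illustration, not code from the patent: the gentleness threshold, the function names, and the sample D2 values are all assumptions, and the real evaluation data would come from the imaging and evaluation steps of the embodiment.

```python
# Hypothetical sketch: find a contiguous range where D2 changes only gently
# with respect to the cutoff frequency fc, then choose the fc at which D2
# is smallest within that range (the white circle in FIG. 8).

def find_gentle_range(d2, slope_threshold):
    """Return (start, end) indices of the longest contiguous run where the
    step-to-step change of d2 stays within slope_threshold."""
    best = (0, 0)
    start = 0
    for i in range(1, len(d2)):
        if abs(d2[i] - d2[i - 1]) > slope_threshold:
            start = i
        if i - start > best[1] - best[0]:
            best = (start, i)
    return best

def select_cutoff(fc_values, d2, slope_threshold=0.05):
    """Candidate for the set value 74: the fc minimizing D2 inside range α."""
    lo, hi = find_gentle_range(d2, slope_threshold)
    idx = min(range(lo, hi + 1), key=lambda i: d2[i])
    return fc_values[idx]

fc_values = [10, 20, 30, 40, 50, 60]       # candidate cutoff frequencies
d2 = [0.90, 0.40, 0.10, 0.08, 0.09, 0.11]  # assumed evaluation data D2 per fc
print(select_cutoff(fc_values, d2))        # fc in the flat tail with minimal D2
```

With the sample values above, the flat tail spans the last four settings and the selection lands on fc = 40, where D2 is smallest within that range.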
(When a higher processing speed is desired)
 In the above embodiment, the second term on the right side of equation (1) (evaluation data D2) may have a stepped profile, for example as shown in FIG. 11, depending on the subject and the characteristics of the lens 13. FIG. 11 shows an example of the evaluation data D1 and D2. In this case, the control unit 80 may, for example as shown in FIG. 11, divide the controllable range of the effect (cutoff frequency fc) of the variable optical LPF 11 into a plurality of regions R, and instruct the LPF driving unit 30 to sequentially change the effect (cutoff frequency fc) of the variable optical LPF 11 to a value set for each divided region R. In this case, the LPF driving unit 30 sequentially sets the effect (cutoff frequency fc) of the variable optical LPF 11 to the value set for each region R in accordance with the instruction from the control unit 80. Here, each region R is wider than the minimum settable resolution of the effect (cutoff frequency fc) of the variable optical LPF 11. Accordingly, the LPF driving unit 30 sequentially changes the effect (cutoff frequency fc) of the variable optical LPF 11 at a pitch larger than that minimum settable resolution.
 The control unit 80 further instructs the image sensor 40 to perform imaging in synchronization with the setting of the effect (cutoff frequency fc) of the variable optical LPF 11. The image sensor 40 then acquires the image data I1 and a plurality of image data I2, and outputs them to the image processing unit 50. The image processing unit 50 derives the two evaluation data D1 and D2 based on the input image data I1 and the plurality of image data I2.
 The image processing unit 50 may further set the set value of the variable optical LPF 11, within the region R (for example, region R1) where the sum of the two evaluation data D1 and D2 (evaluation data D) is smallest, to the value at which false color begins to increase (for example, k in FIG. 11). In this case, the number of times the effect (cutoff frequency fc) of the variable optical LPF 11 is actually changed can be reduced compared with the above embodiment. As a result, the processing speed is increased.
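The coarse-then-fine search above can be sketched as follows. This is an assumed illustration: the evaluation functions stand in for the actual imaging and evaluation steps (S105 to S108), and the region layout, step count, and toy D1/D2 curves are inventions for demonstration only.

```python
# Hypothetical sketch of the stepped-profile speedup: probe each region R
# once (pitch much larger than the minimum settable resolution), keep the
# region where D = D1 + D2 is smallest, then place the set value just
# before the false-color evaluation D2 begins to rise (point k in FIG. 11).

def coarse_then_refine(regions, evaluate_d, d2_curve):
    """regions: list of (lo, hi) cutoff-frequency intervals.
    evaluate_d(fc) -> D1 + D2; d2_curve(fc) -> D2. Returns the chosen fc."""
    probes = [(lo + hi) / 2 for lo, hi in regions]   # one probe per region
    best = min(range(len(regions)), key=lambda i: evaluate_d(probes[i]))
    lo, hi = regions[best]
    # Refine within the winning region: stop just before D2 starts to rise.
    step = (hi - lo) / 10
    fc = lo
    while fc + step <= hi and d2_curve(fc + step) <= d2_curve(fc):
        fc += step
    return fc

# Toy stand-ins: D2 is stepped (false color jumps once fc exceeds 45),
# D1 falls as fc grows (less blurring at a higher cutoff frequency).
d2 = lambda fc: 0.1 if fc < 45 else 0.6
d1 = lambda fc: 1.0 / fc
regions = [(10, 30), (30, 50), (50, 70)]
print(coarse_then_refine(regions, lambda fc: d1(fc) + d2(fc), d2))
```

With these toy curves the middle region wins the coarse pass, and the refinement stops at fc = 44.0, the last probed setting before the stepped D2 profile jumps.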
(When a person whose face is detected has hair)
 When the subject to be photographed has hair, false color may occur in the hair. Since false color on the subject is conspicuous, it is preferable to make such false color as small as possible. In this case, the image processing unit 50 may be configured to set the set value 74 based on the change in false color in the face region included in the plurality of image data (image data I1 and image data I2).
 FIG. 12 shows an example of the imaging procedure when a person whose face is detected has hair. When the image processing unit 50 acquires the image data I1 from the image sensor 40, it may perform face detection on the image data I1 (FIG. 12, step S111). Further, when a face is detected as a result of the face detection in step S111, the image processing unit 50 may detect whether the person whose face is detected has hair (FIG. 12, step S112).
 When the person whose face is detected has hair, the image processing unit 50 may, for example, perform threshold processing on the evaluation data D2. For example, the image processing unit 50 may determine, as the set value 74, the setting that makes the effect of the variable optical LPF 11 weakest within the range where the evaluation data D2 falls below a predetermined threshold Th1 (FIG. 13). FIG. 13 shows an example of the evaluation data D1 and D2 together with various thresholds. Here, the predetermined threshold Th1 is a threshold for face detection and is, for example, a value suitable for excluding false color generated in the hair of the person who is the subject. The image processing unit 50 may also perform threshold processing not only on the evaluation data D2 but also on the evaluation data D1. For example, the control unit 80 may determine, as the set value 74, the setting that makes the effect of the variable optical LPF 11 weakest within the range where the evaluation data D2 falls below the predetermined threshold Th1 and the evaluation data D1 falls below a predetermined threshold Th2 (FIG. 13).
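The threshold processing above can be sketched as follows. This is a hedged illustration: the data layout, helper name, and sample numbers are assumptions; "weakest LPF effect" is modeled, as in the text, as the largest admissible cutoff frequency fc.

```python
# Hypothetical sketch of the Th1/Th2 threshold processing: among the LPF
# settings whose false-color evaluation D2 stays below Th1 (and, optionally,
# whose resolution evaluation D1 stays below Th2), choose the setting with
# the weakest LPF effect, i.e. the largest cutoff frequency fc.

def weakest_lpf_below_thresholds(settings, th1, th2=None):
    """settings: list of dicts {"fc": ..., "d1": ..., "d2": ...}.
    Returns the fc of the admissible setting with the largest fc, or None."""
    ok = [s for s in settings
          if s["d2"] < th1 and (th2 is None or s["d1"] < th2)]
    if not ok:
        return None
    return max(ok, key=lambda s: s["fc"])["fc"]

settings = [
    {"fc": 10, "d1": 0.8, "d2": 0.02},  # strong LPF: little false color, blurry
    {"fc": 30, "d1": 0.4, "d2": 0.05},
    {"fc": 50, "d1": 0.1, "d2": 0.30},  # weak LPF: too much false color for hair
]
print(weakest_lpf_below_thresholds(settings, th1=0.10, th2=0.50))
```

With both thresholds active, the sketch returns fc = 30: the fc = 50 setting fails the false-color test Th1, and the fc = 10 setting fails the resolution test Th2.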
 When the person whose face is detected has no hair, the image processing unit 50 may determine the set value 74 by the same method as described above. When the person whose face is detected has hair, the image processing unit 50 may determine the set value 74 in the manner described here.
(In the macro shooting mode)
 FIG. 14 shows an example of the imaging procedure in the macro shooting mode. When the shooting mode is the macro shooting mode, the user is often trying to photograph an object at close range. In particular, when the user is photographing a product, false color on the product being photographed is often unacceptable. In such a case, it is preferable that the user determines the set value 74 after actually confirming the presence or absence of false color. Specifically, it is preferable that the image processing unit 50 sets, as the set value 74, the effect (cutoff frequency fc) of the variable optical LPF 11 at the time when the one image data I2 selected by the user from the plurality of image data I2 was captured.
 When the image sensor 40 is in the macro shooting mode and an imaging instruction from the user (for example, pressing of the shutter button) is detected after step S101 ends, the image processing unit 50 performs the following processing. After steps S102 to S107 have been performed in sequence, the image processing unit 50 determines in step S108 whether, for example, the evaluation data D2 falls below a predetermined threshold Th3. Here, the predetermined threshold Th3 is a threshold for the macro shooting mode and is, for example, a value suitable for excluding false color generated on the object that is the subject in the macro shooting mode.
 As a result, when the evaluation data D2 falls below the predetermined threshold Th3 (FIG. 13), the image processing unit 50 acquires the set value of the variable optical LPF 11 corresponding to the evaluation data D2 at that time as an appropriate value candidate 35a (FIG. 14, step S113), and further stores the image data I2 corresponding to the appropriate value candidate 35a in the memory unit 70 together with the appropriate value candidate 35a. When the evaluation data regarding the change in false color does not fall below the predetermined threshold Th3, the image processing unit 50 returns to step S105.
 The image processing unit 50 may also perform threshold processing not only on the evaluation data D2 but also on the evaluation data D1. For example, the image processing unit 50 may determine whether the evaluation data D2 falls below the predetermined threshold Th3 and the evaluation data D1 falls below a predetermined threshold Th4 (FIG. 13). When the evaluation data D2 falls below the predetermined threshold Th3 and the evaluation data D1 falls below the predetermined threshold Th4, the image processing unit 50 may acquire the set value of the variable optical LPF 11 corresponding to that evaluation data D2 as an appropriate value candidate 35a. When the evaluation data D2 does not fall below the predetermined threshold Th3, or when the evaluation data D1 does not fall below the predetermined threshold Th4, the image processing unit 50 may return to step S105.
 After performing step S113, the image processing unit 50 determines whether the change of the effect of the variable optical LPF 11 has been completed (FIG. 14, step S114). The image processing unit 50 repeatedly executes steps S105 to S108 and S113 until the change of the effect of the variable optical LPF 11 is completed. That is, every time the evaluation data regarding the change in false color falls below the predetermined threshold Th3, the image processing unit 50 acquires the corresponding value at that time as an appropriate value candidate 35a. When the change of the effect of the variable optical LPF 11 is completed, the image processing unit 50 requests the user to select one image data I2 from the plurality of image data I2 corresponding to the respective appropriate value candidates 35a.
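The candidate-collection loop of the macro shooting mode can be sketched as follows. This is an assumed illustration only: `sweep` stands in for the repeated execution of steps S105 to S108, and the tuple layout and sample values are inventions for demonstration.

```python
# Hypothetical sketch of steps S105-S108 plus S113: while the LPF effect is
# swept, every setting whose false-color evaluation D2 falls below Th3 is
# kept as an "appropriate value candidate" (35a) together with its image I2,
# for the user to choose from afterwards (steps S115-S116).

def collect_candidates(sweep, th3):
    """sweep: iterable of (fc, d2, image) tuples produced while the LPF
    effect is changed. Returns candidate (fc, image) pairs for the user."""
    candidates = []
    for fc, d2, image in sweep:
        if d2 < th3:                        # step S108: threshold test on D2
            candidates.append((fc, image))  # step S113: store 35a and image I2
    return candidates

sweep = [(10, 0.02, "img_a"), (30, 0.04, "img_b"), (50, 0.30, "img_c")]
print(collect_candidates(sweep, th3=0.10))
```

With the sample sweep, the first two settings pass the Th3 test and become candidates, while the third is discarded for excessive false color.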
 Specifically, the image processing unit 50 outputs one of the plurality of image data I2 corresponding to the respective appropriate value candidates 35a to the display unit 60. The display unit 60 then displays the image data I2 input from the image processing unit 50 (FIG. 14, step S115). The image processing unit 50 sequentially outputs the plurality of image data I2 corresponding to the respective appropriate value candidates 35a to the display unit 60. Every time a display instruction from the user is detected (for example, pressing of an operation button attached to the main body of the imaging apparatus 1 or turning of an operation dial attached to the main body of the imaging apparatus 1), the image processing unit 50 outputs image data I2 to the display unit 60 such that the currently displayed image data I2 is replaced with another image data I2. The display unit 60 replaces the displayed image data I2 every time image data I2 is input from the image processing unit 50.
 At this time, the image processing unit 50 may output, together with the image data I2, a numerical value (for example, the area of the false color generation region in pixels) or a histogram to the display unit 60. The display unit 60 then displays the numerical value or the histogram input from the image processing unit 50. For example, as shown in FIG. 15, the histogram may take chroma as its vertical and horizontal axes (for example, Cb and Cr in the YCbCr space) and represent each region with a shading corresponding to the number of pixels in which false color occurs.
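The Cb/Cr histogram of FIG. 15 can be sketched as follows. This is a hedged illustration: the bin count, the assumed Cb/Cr value range, and the plain-list representation are all choices made for demonstration, not details from the patent.

```python
# Hypothetical sketch of the FIG. 15 display: bin the (Cb, Cr) values of
# pixels flagged as false color into a 2-D grid; each cell's count would
# drive the shading of the corresponding region of the histogram.

def cbcr_histogram(false_color_pixels, bins=4, lo=-128, hi=128):
    """false_color_pixels: list of (cb, cr) values, each in [lo, hi).
    Returns a bins x bins grid of pixel counts (rows indexed by Cr)."""
    grid = [[0] * bins for _ in range(bins)]
    width = (hi - lo) / bins
    for cb, cr in false_color_pixels:
        x = min(int((cb - lo) // width), bins - 1)
        y = min(int((cr - lo) // width), bins - 1)
        grid[y][x] += 1
    return grid

pixels = [(-100, -100), (-90, -120), (100, 100)]  # assumed false-color pixels
print(cbcr_histogram(pixels))
```

With the sample pixels, two fall into the low-Cb/low-Cr corner cell and one into the high-Cb/high-Cr corner cell, so those two regions of the display would be shaded.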
 When a selection instruction from the user is detected (for example, pressing of another operation button attached to the main body of the imaging apparatus 1), the image processing unit 50 determines, as the set value 74, the appropriate value candidate 35a corresponding to the image data I2 displayed on the display unit 60 at the time of detection. In this way, the image processing unit 50 lets the user select the appropriate image data I2 (that is, the appropriate value for the variable optical LPF 11) (FIG. 14, step S116).
 When the shooting mode is the macro shooting mode, the image processing unit 50 may instead determine, as the set value 74, the setting that makes the effect of the variable optical LPF 11 weakest (the setting at which the cutoff frequency fc is largest) within the range where the evaluation data D2 falls below the predetermined threshold Th3. The image processing unit 50 may also perform threshold processing not only on the evaluation data D2 but also on the evaluation data D1. For example, the image processing unit 50 may determine, as the set value 74, the setting that makes the effect of the variable optical LPF 11 weakest (the setting at which the cutoff frequency fc is largest) within the range where the evaluation data D2 falls below the predetermined threshold Th3 and the evaluation data D1 falls below the predetermined threshold Th4.
(When processing according to the false color generation position is required)
 When a shooting mode for performing processing according to the false color generation position exists (referred to as the "false color processing mode" for convenience), the image processing unit 50 may, in the false color processing mode, detect the false color generation position after step S101 is performed. When false color occurs in a considerably large area of the screen, or at the center of the screen, any remaining false color is likely to be conspicuous. Conversely, when false color is located at the four corners of the screen, removing it completely may cause unnecessary resolution degradation. In such cases, it is preferable that the image processing unit 50 adjusts the effect of the variable optical LPF 11 according to the false color generation position. Specifically, it is preferable that the image processing unit 50 sets the set value 74 based on the change in resolution and the change in false color in the plurality of image data I1, I3 and on the positions of the false color included in the plurality of image data I1, I3.
 FIG. 16 shows an example of the imaging procedure in the false color processing mode. First, after steps S101 to S104 have been performed, the image processing unit 50 detects the false color generation position. Specifically, the control unit 80 first minimizes or substantially minimizes the effect of the variable optical LPF 11 (step S117). For example, the control unit 80 instructs the LPF driving unit 30 so that the effect of the variable optical LPF 11 becomes minimum or substantially minimum (so that the cutoff frequency fc becomes maximum or substantially maximum). The LPF driving unit 30 then minimizes or substantially minimizes the effect of the variable optical LPF 11 in accordance with the instruction from the control unit 80; that is, it maximizes or substantially maximizes the cutoff frequency fc. For example, the LPF driving unit 30 applies the voltage V2, or a voltage slightly smaller than the voltage V2, between the electrodes 112 and 114. As a result, the polarization conversion efficiency T of the variable optical LPF 11 becomes T1 (minimum) or approximately T1.
 Subsequently, the image processing unit 50 acquires the image data I3 obtained when the effect of the variable optical LPF 11 is minimum or substantially minimum (step S118). Specifically, the control unit 80 instructs the image sensor 40 to perform imaging when the effect of the variable optical LPF 11 is minimum or substantially minimum, that is, when the cutoff frequency fc is maximum or substantially maximum. The image sensor 40 then acquires color image data I3 by discretely sampling, at the light-receiving surface 40A, the light incident through the variable optical LPF 11 whose effect is minimum or substantially minimum (for example, whose polarization conversion efficiency T is minimum or substantially minimum). At this time, the image data I3 is highly likely to contain false color remaining without having been removed by the variable optical LPF 11. The image sensor 40 outputs the acquired image data I3 to the image processing unit 50, and the image processing unit 50 thereby acquires the image data I3.
 Next, the image processing unit 50 detects the positions of false color included in the image data I3 (step S119). Specifically, the image processing unit 50 detects the positions of false color included in the image data I3 using the image data I1, the image data I3, and the saturation data 33. For example, the image processing unit 50 calculates the gradation difference between the two image data I1, I3 for each dot, and multiplies the resulting difference image by C to generate a false color judgment image. Next, the image processing unit 50 determines whether the value of each dot of the false color judgment image exceeds a predetermined threshold Th5. When the value of any dot of the false color judgment image exceeds the predetermined threshold Th5, the image processing unit 50 acquires the coordinates of the dots exceeding the predetermined threshold Th5 and stores the acquired coordinates in the memory unit 70 as false color generation positions (false color generation positions 35) included in the image data I3 (FIG. 17). FIG. 17 shows an example of data stored in the memory unit 70. The image processing unit 50 may use the image data I2 instead of the image data I3; specifically, it may detect the positions of false color in the image data I2, or in both the image data I2 and I3.
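The false color judgment described above can be sketched as follows. This is a hedged, minimal illustration: the images are plain nested lists, the per-pixel weight array stands in for the factor C derived from the saturation data 33, and all names and sample values are assumptions.

```python
# Hypothetical sketch of step S119: take the per-dot gradation difference
# between image I1 (LPF at maximum effect) and I3 (LPF at minimum effect),
# weight it by a saturation-derived factor C to form the false color
# judgment image, then collect the coordinates exceeding the threshold Th5
# as the false color generation positions (35).

def false_color_positions(i1, i3, c, th5):
    """i1, i3: 2-D gradation arrays of equal shape; c: per-pixel weight.
    Returns (x, y) coordinates whose judgment value exceeds th5."""
    positions = []
    for y in range(len(i1)):
        for x in range(len(i1[0])):
            judge = abs(i1[y][x] - i3[y][x]) * c[y][x]  # judgment image dot
            if judge > th5:
                positions.append((x, y))
    return positions

i1 = [[10, 10], [10, 10]]   # assumed gradations with strong LPF
i3 = [[10, 30], [10, 10]]   # assumed gradations with weak LPF (false color)
c  = [[1.0, 1.0], [1.0, 1.0]]
print(false_color_positions(i1, i3, c, th5=5))
```

With the sample arrays, only the dot at (1, 0) differs between I1 and I3 beyond Th5, so that single coordinate would be stored as a false color generation position 35.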
 Next, when steps S105 to S108 have been performed, the image processing unit 50 determines in step S108 whether the obtained evaluation data D2 satisfies the desired criterion according to the false color generation position (false color generation position 35).
 When the false color generation position 35 is at the screen center, or in a large region including the screen center, the image processing unit 50 may, for example, multiply the evaluation data D2 by a correction coefficient α for the case where the false color generation position 35 is at the screen center (FIG. 18). When the false color generation position 35 is at the four corners of the screen, the image processing unit 50 may, for example, multiply the evaluation data D2 by a correction coefficient β for the case where the false color generation position 35 is at the four corners of the screen (FIG. 18). The correction coefficient β is, for example, smaller than the correction coefficient α. FIG. 18 shows an example of the false color generation positions to which the correction coefficients α and β are applied; in FIG. 18, the correction coefficients α and β are shown in correspondence with the false color generation positions.
 Alternatively, when the false color generation position 35 is at the screen center, or in a large region including the screen center, the image processing unit 50 may, for example, determine whether the evaluation data D2 is equal to or less than a threshold Th6 for the case where the false color generation position 35 is at the screen center (FIG. 19). When the false color generation position 35 is at the four corners of the screen, the image processing unit 50 may, for example, determine whether the evaluation data D2 is equal to or less than a threshold Th7 for the case where the false color generation position 35 is at the four corners of the screen (FIG. 19). The threshold Th6 is, for example, smaller than the threshold Th7. FIG. 19 shows an example of the false color generation positions to which the thresholds Th6 and Th7 are applied; in FIG. 19, the thresholds Th6 and Th7 are shown in correspondence with the false color generation positions.
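The position-dependent weighting above can be sketched as follows. This is an assumed illustration only: the patent does not specify the geometry of the "center" and "corner" regions, so the region tests and the α/β values below are inventions chosen purely for demonstration (with β < α, as in the text).

```python
# Hypothetical sketch of the FIG. 18 weighting: false color near the screen
# center is weighted heavily (correction coefficient alpha), false color at
# the four corners only mildly (beta < alpha), biasing the criterion toward
# removing the conspicuous center false color.

def weighted_d2(d2, position, size, alpha=1.0, beta=0.3):
    """d2: raw false-color evaluation; position: (x, y) of the false color;
    size: (width, height) of the screen. Returns the weighted evaluation."""
    x, y = position
    w, h = size
    if w / 4 <= x <= 3 * w / 4 and h / 4 <= y <= 3 * h / 4:
        return alpha * d2            # center region: strongest penalty
    near_edge_x = x < w / 8 or x > 7 * w / 8
    near_edge_y = y < h / 8 or y > 7 * h / 8
    if near_edge_x and near_edge_y:
        return beta * d2             # one of the four corners: mild penalty
    return d2                        # elsewhere: unweighted

print(weighted_d2(0.2, (960, 540), (1920, 1080)))  # center false color
print(weighted_d2(0.2, (10, 10), (1920, 1080)))    # corner false color
```

The same structure applies to the threshold variant of FIG. 19, with the position test selecting between Th6 and Th7 instead of scaling D2.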
 When the evaluation data D2 satisfies the desired criterion according to the false color generation position 35, the image processing unit 50 takes the value corresponding to the evaluation data D2 at that time as the set value 74 and stores it in the memory unit 70.
 The image processing unit 50 may also detect, in the image data I3, positions where resolution degradation occurs, instead of or in addition to the false color. In this case, the image processing unit 50 may use the image data I2 instead of the image data I3; specifically, it may detect positions where resolution degradation occurs in the image data I2, or in both the image data I2 and I3, instead of or in addition to the false color. In step S108, the image processing unit 50 determines whether the evaluation data D1 satisfies the desired criterion according to the position where resolution degradation occurs (resolution degradation position).
 When the resolution degradation position is at the screen center, or in a large region including the screen center, the image processing unit 50 may, for example, multiply the evaluation data D1 by a correction coefficient γ for the case where the resolution degradation position is at the screen center. When the resolution degradation position is at the four corners of the screen, the image processing unit 50 may, for example, multiply the evaluation data D1 by a correction coefficient δ for the case where the resolution degradation position is at the four corners of the screen. The correction coefficient δ is, for example, smaller than the correction coefficient γ.
 Alternatively, when the resolution degradation position is at the screen center, or in a large region including the screen center, the image processing unit 50 may, for example, determine whether the evaluation data D1 is equal to or less than a threshold Th8 for the case where the resolution degradation position is at the screen center. When the resolution degradation position is at the four corners of the screen, the image processing unit 50 may, for example, determine whether the evaluation data D1 is equal to or less than a threshold Th9 for the case where the resolution degradation position is at the four corners of the screen. The threshold Th8 is, for example, smaller than the threshold Th9.
 When the evaluation data D1 satisfies the desired criterion according to the resolution degradation position, the control unit 80 takes the value corresponding to the evaluation data D1 at that time as the set value 74 and stores it in the memory unit 70.
[Effects]
 Next, the effects of the imaging apparatus 1 will be described.
 When a Bayer-coded imager is used in an ordinary single-chip digital camera, lost information must be restored by performing demosaic processing on the image data obtained by the imager. However, since it is in principle difficult to completely recover the lost information, a reduction in resolution and the occurrence of artifacts are unavoidable.
 One conceivable countermeasure is to place a low-pass filter between the lens and the imager. In that case, adjusting the strength of the low-pass filter can reduce false color, which is one of the artifacts. However, in exchange for the reduction in false color, the resolution drops further. The strength of the low-pass filter can instead be adjusted so that the resolution does not drop excessively, but then the false-color reduction becomes small. Thus, in a low-pass filter, higher resolution and lower false color are in a trade-off relationship. Consequently, when the low-pass filter is set with attention to only one of resolution and false color, an image on which the low-pass filter acts strongly ends up being selected, and the resolution degrades more than necessary.
 In the present embodiment, by contrast, the evaluation data D1 and the evaluation data D2 are derived from a plurality of pieces of image data (the image data I1 and the plurality of pieces of image data I2) acquired while varying the effect of the variable optical LPF 11. The setting value 74 of the variable optical LPF 11 is then derived from the evaluation data D1 and D2 thus obtained. This makes it possible to obtain image data I whose image quality is balanced according to the user's purpose.
 Furthermore, in the present embodiment, when the evaluation data D1 is generated from the difference between the frequency spectra of the two pieces of image data I1 and I2, and the evaluation data D2 is generated from the difference between their gradations, image data I with more accurately balanced image quality can be obtained.
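As one way to picture the two evaluation quantities, the following NumPy sketch computes D1 from the difference between the amplitude spectra of I1 and I2 and D2 from the per-pixel gradation difference. The specific norms (sums of absolute differences) are assumptions for illustration; the specification does not fix them at this point.

```python
# Hedged sketch: D1 compares frequency spectra (resolution loss indicator),
# D2 compares gradation (false-color change indicator). Function names and
# the exact norms are illustrative assumptions, not the patent's formulas.
import numpy as np

def evaluation_d1(i1: np.ndarray, i2: np.ndarray) -> float:
    """Sum of absolute differences between the 2-D amplitude spectra of I1 and I2."""
    s1 = np.abs(np.fft.fft2(i1))
    s2 = np.abs(np.fft.fft2(i2))
    return float(np.sum(np.abs(s1 - s2)))

def evaluation_d2(i1: np.ndarray, i2: np.ndarray) -> float:
    """Sum of absolute per-pixel gradation differences between I1 and I2."""
    return float(np.sum(np.abs(i1.astype(float) - i2.astype(float))))
```

Identical images yield zero for both quantities; any spectral or gradation change raises the corresponding score.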
 Also, in the present embodiment, when the value at which the sum of the evaluation data D1 and the evaluation data D2 (the evaluation data D) becomes smallest is used as the setting value 74, image data I with balanced image quality can be obtained by a simple method.
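The simple selection rule above amounts to taking the argmin of D = D1 + D2 over the swept LPF settings, as in this minimal sketch (the helper name and sample data are illustrative):

```python
# Sketch of the "smallest D1 + D2" rule: sweep the candidate LPF settings,
# compute D = D1 + D2 for each, and keep the setting that minimizes D.

def select_setting(settings, d1_values, d2_values):
    """Return the LPF setting whose combined evaluation D = D1 + D2 is smallest."""
    totals = [d1 + d2 for d1, d2 in zip(d1_values, d2_values)]
    return settings[totals.index(min(totals))]
```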
 Also, in the present embodiment, when there is a range α over which the evaluation data D2 changes only gently with the effect of the variable optical LPF 11 (the cutoff frequency fc), as shown for example in FIG. 8, and the value at which the effect of the variable optical LPF 11 is weakest within that range α is used as the setting value 74, the loss of resolution can be reduced by a simple method without a large change in false color.
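The range-α selection above can be sketched as a slope test on the sampled D2 curve: find where D2 changes only gently between adjacent cutoff-frequency samples, and within such a stretch take the setting with the weakest LPF effect (the highest cutoff frequency). The slope tolerance is an assumed illustration; the specification does not quantify "relatively gentle".

```python
# Hedged sketch of the range-alpha rule. Cutoff frequencies are assumed to be
# sampled in ascending order; a higher cutoff means a weaker LPF effect.

def weakest_setting_in_flat_range(cutoffs, d2_values, tol=0.05):
    """Return the highest cutoff inside a gentle-change run of D2, or None."""
    best = None
    for i in range(1, len(cutoffs)):
        slope = abs(d2_values[i] - d2_values[i - 1]) / (cutoffs[i] - cutoffs[i - 1])
        if slope <= tol:
            best = cutoffs[i]   # weakest effect seen so far within a gentle run
    return best
```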
 Also, in the present embodiment, when the controllable range of the effect of the variable optical LPF 11 is divided into a plurality of regions R, as shown for example in FIG. 11, and the evaluation data D1 and D2 are derived for each divided region R, the number of times the effect of the variable optical LPF 11 is actually changed can be made smaller than in the embodiment described above. As a result, image data I with balanced image quality can be obtained while increasing the processing speed.
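One way to read the region-based derivation above is as a coarse-to-fine search: evaluate one representative setting per region R, then sweep finely only inside the best region. The two-stage structure below is an assumption for illustration (the specification only says the control range is divided into regions R), and the evaluation function is passed in by the caller.

```python
# Hedged sketch of a coarse-to-fine search over the LPF control range.
# `evaluate` stands in for deriving evaluation data at a given setting.

def coarse_then_fine(lo, hi, n_regions, n_fine, evaluate):
    """Return the setting with the lowest evaluation found by a two-stage search."""
    step = (hi - lo) / n_regions
    # Stage 1: evaluate one representative (midpoint) per region R.
    reps = [lo + step * (i + 0.5) for i in range(n_regions)]
    best_rep = min(reps, key=evaluate)
    # Stage 2: finer sweep inside the winning region only.
    r_lo, r_hi = best_rep - step / 2, best_rep + step / 2
    fine = [r_lo + (r_hi - r_lo) * j / (n_fine - 1) for j in range(n_fine)]
    return min(fine, key=evaluate)
```

The number of actual LPF changes is n_regions + n_fine instead of a full sweep, which matches the processing-speed benefit described.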
 In the present embodiment, when face detection is performed on the image data I1 and it is further detected whether the detected person has hair, the value that makes the effect of the variable optical LPF 11 weakest within the range where the evaluation data D2 stays below the face-detection threshold Th1 is set as the setting value 74 of the variable optical LPF 11. This makes it possible to obtain image data I with little false color in the subject's hair. Furthermore, when the value that makes the effect of the variable optical LPF 11 weakest within the range where the evaluation data D1 stays below a predetermined threshold Th2 is set as the setting value 74 of the variable optical LPF 11, image data I with balanced image quality can be obtained.
 In the present embodiment, when the shooting mode is the macro shooting mode, every time the evaluation data D2 falls below the macro-mode threshold Th3, the value corresponding to the evaluation data D2 at that time becomes an appropriate-value candidate 35a. The user can therefore select one piece of image data I2 from the plurality of pieces of image data I2 corresponding to the respective appropriate-value candidates 35a. As a result, image data I whose image quality is balanced according to the user's purpose can be obtained.
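The macro-mode behaviour above can be sketched as collecting every swept setting whose D2 value falls below Th3; the user then picks among the corresponding images. The threshold value and data are illustrative placeholders.

```python
TH3 = 0.3  # assumed macro-shooting-mode threshold; the patent gives no value

def macro_candidates(settings, d2_values, th3=TH3):
    """Return every LPF setting whose evaluation data D2 falls below th3.

    Each returned setting corresponds to an appropriate-value candidate 35a,
    i.e. one selectable image I2 in macro shooting mode.
    """
    return [s for s, d2 in zip(settings, d2_values) if d2 < th3]
```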
 In the present embodiment, by detecting the positions of false color contained in the image data I3 obtained when the effect of the variable optical LPF 11 is minimized or nearly minimized, the value at which the evaluation data D2 satisfies a criterion corresponding to the false-color position can be used as the setting value 74 of the variable optical LPF 11. Thus, for example, even when false color occurs over a considerably large portion of the screen, or at the screen center, image data I in which the false color is inconspicuous can be obtained. Also, for example, even when false color is present at the four corners of the screen, image data I can be obtained in which the false color is inconspicuous and the resolution is not degraded unnecessarily.
 In the present embodiment, when the evaluation data D2 is derived from the difference between the gradation data of the two pieces of image data I1 and I2 and from the saturation data 33 obtained from the image data I1, image data I with more accurately balanced image quality can be obtained.
 Also, in the present embodiment, the image data I1 is generated by the imaging element 40 by driving the imaging element 40 while the effect of the variable optical LPF 11 is maximized or nearly maximized. The evaluation data D1 and D2 are therefore derived from image data I1 containing no, or almost no, false color and from image data I2 containing false color, so image data I with accurately balanced image quality can be obtained.
 Also, in the present embodiment, when the variable optical LPF 11 is set in response to a half-press of the shutter button, the setting instruction for the variable optical LPF 11 can be issued together with the setting instructions for the AF (autofocus) conditions and the iris 14 conditions. This makes it possible to obtain image data I with balanced image quality without increasing the user's workload.
<2. Modifications>
 Next, modifications of the imaging device 1 according to the above embodiment will be described.
[Modification A]
 In the above embodiment, the image processing unit 50 maximized, or nearly maximized, the effect of the variable optical LPF 11 in step S102. However, the image processing unit 50 may instead set the effect of the variable optical LPF 11 to an arbitrary value in step S102. In this case, the image processing unit 50 may derive the saturation data 33 after applying, to the regions of the acquired image data I1 where false color can occur, processing that removes the false color. Compared with first maximizing or nearly maximizing the effect of the variable optical LPF 11, this can shorten the time required to obtain an appropriate setting value for the variable optical LPF 11.
[Modification B]
 In the above embodiment and its modification, the variable optical LPF 11 changes the cutoff frequency fc through voltage control. However, in the above embodiment and its modification, the control unit 80 may change the cutoff frequency fc of the variable optical LPF 11 through frequency control. Also, in place of the variable optical LPF 11, the imaging device 1 may use an optical LPF that changes the cutoff frequency fc according to the amplitude of physically applied vibration. In other words, the variable optical LPF of the present disclosure only needs to be configured so that the cutoff frequency fc can be adjusted by a change in voltage, frequency, or vibration amplitude; any optical LPF whose cutoff frequency fc can be controlled electronically may be used.
[Modification C]
 In the above embodiment, the control unit 80 changed the effect of the variable optical LPF 11 automatically. However, the user may change the effect of the variable optical LPF 11 manually.
 FIG. 20 shows an example of an imaging procedure in this modification. First, after steps S101 to S107 have been performed, the image processing unit 50 determines whether the changing of the effect of the variable optical LPF 11 has finished (step S120). Steps S105 to S107 are executed repeatedly until the changing of the effect of the variable optical LPF 11 finishes. When the changing has finished, the image processing unit 50 derives appropriate-value candidates k (k1, k2, k3, ...) of the variable optical LPF 11 corresponding to the positions indicated by the white circles in FIG. 21 (step S121). FIG. 21 shows an example of the evaluation data D1 and D2. Specifically, when there are several ranges over which the evaluation data D2 changes only gently with the effect of the variable optical LPF 11 (the cutoff frequency fc), the image processing unit 50 takes, as the appropriate-value candidates k (k1, k2, k3, ...), the values corresponding to the end of each such range on the side where the effect of the variable optical LPF 11 is smaller. Each appropriate-value candidate k (k1, k2, k3, ...) corresponds to the setting value of the variable optical LPF 11 when its effect is weakened as far as possible (the cutoff frequency fc is made as high as possible) within the range where the evaluation data D2 does not increase.
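The candidate derivation just described can be sketched as follows: scan the sampled D2 curve for stretches where D2 changes only gently, and record the low-effect end (highest cutoff) of each stretch as a candidate k1, k2, k3, and so on. The gentleness tolerance is an assumed illustration.

```python
# Hedged sketch of deriving appropriate-value candidates k from the D2 curve.
# Cutoff frequencies are assumed sorted ascending (higher fc = weaker effect).

def plateau_end_candidates(cutoffs, d2_values, tol=0.05):
    """Return the last cutoff of each gentle-change run of D2 (candidates k)."""
    candidates = []
    in_run = False
    for i in range(1, len(cutoffs)):
        gentle = abs(d2_values[i] - d2_values[i - 1]) <= tol
        if gentle:
            in_run = True
        elif in_run:
            candidates.append(cutoffs[i - 1])  # the run just ended here
            in_run = False
    if in_run:
        candidates.append(cutoffs[-1])
    return candidates
```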
 Next, the image processing unit 50 asks the user to select one piece of image data I2 from the plurality of pieces of image data I2 corresponding to the appropriate-value candidates k (k1, k2, k3, ...). Specifically, the image processing unit 50 outputs one piece of image data I2, out of the plurality of pieces corresponding to the candidates k, to the display unit 60. The display unit 60 then displays the image data I2 input from the control unit 80 (step S122). The image processing unit 50 outputs the pieces of image data I2 corresponding to the respective candidates k to the display unit 60 in sequence: each time a display instruction from the user is detected (for example, a press of an operation button on the body of the imaging device 1, or a turn of an operation dial on the body of the imaging device 1), the image processing unit 50 outputs image data I2 so that the image data I2 shown on the display unit 60 (the image data I2 corresponding to one candidate k) is replaced with another piece of image data I2 corresponding to a different candidate k. The display unit 60 swaps the displayed image data I2 each time image data I2 is input from the image processing unit 50. When a selection instruction from the user is detected (for example, a press of another operation button on the body of the imaging device 1), the image processing unit 50 determines, as the setting value 74, the candidate k corresponding to the image data I2 being displayed on the display unit 60 at the time of detection. In this way, the image processing unit 50 lets the user select the image data I2 appropriate for the user (or the appropriate value 35 appropriate for the user) (step S123).
 In this modification, for example, each time the user presses an operation button on the body of the imaging device 1 or turns an operation dial on the body of the imaging device 1, the image processing unit 50 can set the setting value of the variable optical LPF 11 to one of the plurality of setting values k. This reduces the effort of manual setting by the user. Furthermore, image data I can be obtained in which resolution and false color are balanced as the user intends.
 In this modification, the image processing unit 50 may generate, for each piece of image data I2, a version in which the positions of the false color contained in the plurality of pieces of image data I2 are emphasized (specifically, enlarged), and output it to the display unit 60. Here, "emphasis" means distinguishing the target region from the other regions (for example, making it more conspicuous than the other regions). At this time, the image processing unit 50 may, for example, detect a change (or occurrence) of false color on the basis of the plurality of pieces of image data I2 captured with different low-pass characteristics (cutoff frequencies fc), when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request. The image processing unit 50 may further generate, for each piece of image data I2, a version in which the region where false color changed (or occurred) due to that transition (the region 61) is emphasized (specifically, enlarged) (see FIG. 22), and output it to the display unit 60. FIG. 22 shows a state in which part of the image data I2 is enlarged; the region 61 corresponds to a region displayed at a larger magnification than the rest of the image data I2. For example, when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request, the display unit 60 highlights (specifically, displays enlarged) the region of the image data I2 where false color changed (or occurred) due to that transition (the region 61). In this case, even when the display unit 60 is small and a change in false color is hard to see, the user can intuitively recognize that the false color has changed and where it has occurred. As a result, the effect of the variable optical LPF 11 can be adjusted easily.
 The image processing unit 50 need not enlarge every location in the image data I2 where false color changed (or occurred). For example, the image processing unit 50 may output to the display unit 60 a version in which only the location where the evaluation data D2 was largest in the image data I2 (the region 61) is enlarged.
 Also, in this modification, the image processing unit 50 may generate, for each piece of image data I2, a version in which the positions of the false color contained in the plurality of pieces of image data I2 are emphasized, and output it to the display unit 60. At this time, the image processing unit 50 may, for example, detect a change (or occurrence) of false color on the basis of the plurality of pieces of image data I2 captured with different low-pass characteristics (cutoff frequencies fc), when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request. The image processing unit 50 may further generate, for each piece of image data I2, a version in which the region where false color changed (or occurred) due to that transition (the region 62) is emphasized (specifically, zebra-processed) (see FIG. 23), and output it to the display unit 60. Zebra processing refers to superimposing a zebra pattern on an image. FIG. 23 shows a state in which part of the image data I2 is zebra-processed; it illustrates the region 62 distributed from the center to the lower edge of the image data I2. For example, when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request, the display unit 60 highlights (specifically, displays with zebra processing) the region of the image data I2 where false color changed (or occurred) (the region 62). In this case, even when the display unit 60 is small and false color is hard to see, the user can intuitively recognize that the false color has changed and where it has occurred. As a result, the effect of the variable optical LPF 11 can be adjusted easily.
 The image processing unit 50 need not apply zebra processing to every location in the image data I2 where false color changed (or occurred). For example, the image processing unit 50 may output to the display unit 60 a version in which only the location where the evaluation data D2 was largest in the image data I2 (the region 62) is zebra-processed.
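As a rough picture of what zebra processing might look like in code, the following NumPy sketch superimposes a diagonal stripe pattern on the pixels of a flagged region in a grayscale image. The stripe period and the painted value are illustrative choices, not details from the specification.

```python
# Hedged sketch of zebra processing: paint diagonal stripes inside a mask so
# a flagged false-color region stands out even on a small display.
import numpy as np

def zebra_overlay(image: np.ndarray, mask: np.ndarray, period: int = 8) -> np.ndarray:
    """Return a copy of a grayscale image with diagonal stripes inside mask."""
    out = image.astype(float).copy()
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    stripes = ((xs + ys) // (period // 2)) % 2 == 0  # diagonal stripe bands
    out[mask & stripes] = 255.0                      # paint stripe pixels white
    return out
```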
 Also, in this modification, the image processing unit 50 may generate, for each piece of image data I2, a version in which the positions where resolution degradation occurs in the plurality of pieces of image data I2 are emphasized (specifically, highlighted), and output it to the display unit 60. The image processing unit 50 may, for example, detect a change in resolution on the basis of the plurality of pieces of image data I2 captured with different low-pass characteristics (cutoff frequencies fc), when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request. The image processing unit 50 may further generate, for each piece of image data I2, a version in which the edges of the region whose resolution changed due to that transition (the portion 63) are emphasized (specifically, highlighted) (see FIG. 24), and output it to the display unit 60. The highlighting is done, for example, by changing the luminance of the image data I2, changing the saturation of the image data I2, or superimposing a color signal on the image data I2. FIG. 24 shows a state in which part of the image data I2 (the portion 63) is emphasized; it illustrates the portion 63 as consisting of a plurality of line segments. For example, when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request, the display unit 60 highlights the edges of the region of the image data I2 whose resolution changed (the portion 63). In this case, even when the display unit 60 is small and resolution degradation is hard to see, the user can intuitively recognize where the resolution degradation occurs. As a result, the effect of the variable optical LPF 11 can be adjusted easily.
 The image processing unit 50 need not highlight every edge of the regions in the image data I2 whose resolution changed (or where degradation occurred). For example, the image processing unit 50 may output to the display unit 60 a version in which only the location where the evaluation data D1 was largest in the image data I2 (the portion 63) is highlighted.
 Also, in this modification, the image processing unit 50 may apply plural kinds of emphasis to each piece of image data I2. For example, the image processing unit 50 may generate, for each piece of image data I2, a version in which both the positions of false color contained in the plurality of pieces of image data I2 and the positions where resolution degradation occurs are emphasized, and output it to the display unit 60. The image processing unit 50 may, for example, detect a change (or occurrence) of false color and a change in resolution on the basis of the plurality of pieces of image data I2 captured with different low-pass characteristics (cutoff frequencies fc), when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request.
 The image processing unit 50 may, for example, generate, for each piece of image data I2, a version in which the region where false color changed (or occurred) due to that transition (the region 61) is emphasized (specifically, enlarged) and the edges of the region whose resolution changed due to that transition (the portion 63) are emphasized (specifically, highlighted) (see FIG. 25), and output it to the display unit 60. At this time, the image processing unit 50 may make the display color of the region 61 and the display color of the portion 63 differ from each other. For example, when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request, the display unit 60 emphasizes (specifically, enlarges) the region of the image data I2 where false color changed (or occurred) (the region 61) and highlights the edges of the region whose resolution changed (the portion 63). In this case, even when the display unit 60 is small and resolution degradation is hard to see, the user can intuitively recognize where the resolution degradation occurs. As a result, the effect of the variable optical LPF 11 can be adjusted easily.
 The image processing unit 50 may, for example, generate, for each piece of image data I2, a version in which the region where false color changed (or occurred) due to that transition (the region 62) is emphasized (specifically, zebra-processed) and the edges of the region whose resolution changed due to that transition (the portion 63) are emphasized (specifically, highlighted) (see FIG. 26), and output it to the display unit 60. For example, when the setting value of the variable optical LPF 11 transitions to a value corresponding to a white circle in FIG. 21 (a setting value k) in response to a user request, the display unit 60 emphasizes (specifically, zebra-processes) the region of the image data I2 where false color changed (or occurred) (the region 62) and highlights the edges of the region whose resolution changed (the portion 63). In this case, even when the display unit 60 is small and resolution degradation is hard to see, the user can intuitively recognize where the resolution degradation occurs. As a result, the effect of the variable optical LPF 11 can be adjusted easily.
 For the image processing unit 50 to enlarge the region 61 of the image data I2, or to apply zebra processing to the region 62 of the image data I2, the image processing unit 50 must have identified the false-color occurrence positions in advance. Therefore, as shown for example in FIG. 25, steps S117 to S119 need to be performed before step S105 in the series of steps of FIG. 20. FIG. 25 shows an example of an imaging procedure. Because steps S117 to S119 have been performed, the image processing unit 50 can identify the false-color occurrence positions, and can therefore enlarge the region 61 and apply zebra processing to the region 62.
[Modification D]
 In the above embodiment, step S117 automatically set the effect (setting value) of the variable optical LPF 11 on the basis of the value obtained by adding the value given by the first term on the right-hand side of expression (1) (the evaluation data D1) and the value given by the second term on the right-hand side of expression (1) (the evaluation data D2), that is, the evaluation data D. However, the image processing unit 50 may automatically set the effect (setting value) of the variable optical LPF 11 on the basis of only one of the evaluation data D1 and the evaluation data D2. In that case, the effect (setting value) of the variable optical LPF 11 can be adjusted to suit various purposes.
[Modification E]
In the above embodiment and its modifications, a low-pass filter having a fixed cutoff frequency may be provided instead of the variable optical LPF 11. Alternatively, the variable optical LPF 11 may be omitted. In these cases, the imaging apparatus 1 may include a drive unit that vibrates the light receiving surface 40A of the imaging element 40 in the in-plane direction. The device composed of the imaging element 40 and the drive unit that vibrates the light receiving surface 40A then functions as a so-called imager-shift type low-pass filter. Even when such an imager-shift type low-pass filter is provided, image data I with image quality balanced according to the user's purpose can be obtained, as in the above embodiment.
[Modification F]
In the above embodiment and its modifications, a low-pass filter having a fixed cutoff frequency may be provided instead of the variable optical LPF 11. Alternatively, the variable optical LPF 11 may be omitted. In these cases, the lens driving unit 20 may drive the lens 12 within a plane parallel to a plane orthogonal to the optical axis of the lens 12. The device composed of the lens 12 and the lens driving unit 20 then functions as a so-called lens-shift type low-pass filter. Even when such a lens-shift type low-pass filter is provided, image data I with image quality balanced according to the user's purpose can be obtained, as in the above embodiment.
[Modification G]
In the above embodiment and its modifications, the imaging device 1 is applicable not only to ordinary cameras but also to in-vehicle cameras, surveillance cameras, medical cameras (endoscopic cameras), and the like.
Although the present disclosure has been described above with reference to the embodiment and its modifications, the present disclosure is not limited to the above embodiment and the like, and various modifications are possible. The effects described in this specification are merely examples; the effects of the present disclosure are not limited to those described herein, and the present disclosure may have effects other than those described in this specification.
In addition, for example, the present disclosure may have the following configurations.
(1)
A control device including a control unit that sets a set value of a low-pass characteristic of a low-pass filter on the basis of a change in resolution and a change in false color in a plurality of captured image data corresponding to a change in the low-pass characteristic of the low-pass filter.
(2)
The control device according to (1), wherein the control unit derives first evaluation data regarding a change in resolution on the basis of a spatial frequency of first image data that is one of the plurality of image data and a spatial frequency of second image data other than the first image data among the plurality of image data.
(3)
The control device according to (2), wherein the control unit derives second evaluation data regarding a change in false color on the basis of a difference between gradation data of the first image data and gradation data of the second image data.
(4)
The control device according to (3), wherein the control unit sets the set value based on the first evaluation data and the second evaluation data.
(5)
The control device according to (3), wherein, when there is a range in which the change of the second evaluation data with respect to the low-pass characteristic of the variable optical low-pass filter is relatively gentle, the control unit sets, as the set value, the low-pass characteristic of the low-pass filter at which the first evaluation data becomes smallest within that range.
(6)
The control device according to any one of (1) to (5), wherein the control unit sets the setting value based on a false color change in a face area included in the plurality of image data.
(7)
The control device according to any one of (1) to (5), wherein the control unit sets, as the set value, the low-pass characteristic of the low-pass filter at the time when one piece of the image data selected by a user from among the plurality of image data was captured.
(8)
The control device according to any one of (1) to (5), wherein the control unit sets the set value on the basis of a change in resolution and a change in false color in the plurality of image data and positions of false colors included in the plurality of image data.
(9)
The control device according to any one of (1) to (5), wherein the control unit generates, for the plurality of image data, versions in which the positions of false colors included in the plurality of image data are enlarged.
(10)
The control device according to any one of (1) to (5), wherein the control unit generates, for the plurality of image data, versions in which the positions of false colors included in the plurality of image data are highlighted.
(11)
The control device according to any one of (1) to (5), wherein the control unit generates, for the plurality of image data, versions in which positions where resolution degradation occurs, included in the plurality of image data, are highlighted.
(12)
The control device according to (3), wherein the control unit derives the second evaluation data on the basis of the difference between the gradation data of the first image data and the gradation data of the second image data and saturation data obtained from the first image data.
(13)
The control device according to (3), wherein the first image data is image data when the second evaluation data is minimized or substantially minimized within a changeable range of a low-pass characteristic of the low-pass filter.
(14)
An imaging device including: an image sensor that generates image data from light incident via a low-pass filter; and a control unit that sets a set value of a low-pass characteristic of the low-pass filter on the basis of a change in resolution and a change in false color in a plurality of captured image data corresponding to a change in the low-pass characteristic of the low-pass filter.
(15)
The imaging device according to (14), wherein the control unit generates a change instruction for changing the low-pass characteristic of the low-pass filter at a pitch larger than a minimum settable resolution of the low-pass characteristic of the low-pass filter, and generates an imaging instruction synchronized with the change of the low-pass characteristic of the low-pass filter.
(16)
The imaging device according to (15), further including a shutter button, wherein the control unit generates the change instruction and the imaging instruction in response to a half-press of the shutter button.
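The selection rule of configuration (5) above — finding a range where the second evaluation data D2 varies only gently with the low-pass characteristic, then taking the setting with the smallest first evaluation data D1 inside it — could be sketched as follows. The gentleness threshold, the fallback behavior, and the data layout are assumptions for illustration, not taken from the disclosure.

```python
def select_in_gentle_range(settings, d1, d2, slope_threshold=0.05):
    """Return the setting minimizing D1 within the region where D2 is nearly flat.

    settings: low-pass characteristic values in increasing order;
    d1, d2: evaluation-data lists aligned with settings. A setting belongs
    to the gentle range when the local change of D2 from the previous
    setting stays at or below slope_threshold.
    """
    gentle = [
        i for i in range(1, len(settings))
        if abs(d2[i] - d2[i - 1]) <= slope_threshold
    ]
    if not gentle:  # no flat region found: fall back to the overall D1 minimum
        return settings[min(range(len(settings)), key=lambda i: d1[i])]
    return settings[min(gentle, key=lambda i: d1[i])]
```

In other words, the false-color evaluation first confines the search to settings where further filtering buys little, and the resolution evaluation then picks the sharpest setting among them.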
This application claims priority based on Japanese Patent Application No. 2016-092993 filed with the Japan Patent Office on May 6, 2016, the entire contents of which are incorporated herein by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (16)

  1.  A control device comprising a control unit that sets a set value of a low-pass characteristic of a low-pass filter on the basis of a change in resolution and a change in false color in a plurality of captured image data corresponding to a change in the low-pass characteristic of the low-pass filter.
  2.  The control device according to claim 1, wherein the control unit derives first evaluation data regarding a change in resolution on the basis of a spatial frequency of first image data that is one of the plurality of image data and a spatial frequency of second image data other than the first image data among the plurality of image data.
  3.  The control device according to claim 2, wherein the control unit derives second evaluation data regarding a change in false color on the basis of a difference between gradation data of the first image data and gradation data of the second image data.
  4.  The control device according to claim 3, wherein the control unit sets the set value on the basis of the first evaluation data and the second evaluation data.
  5.  The control device according to claim 3, wherein, when there is a range in which the change of the second evaluation data with respect to the low-pass characteristic of the variable optical low-pass filter is relatively gentle, the control unit sets, as the set value, the low-pass characteristic of the low-pass filter at which the first evaluation data becomes smallest within that range.
  6.  The control device according to claim 1, wherein the control unit sets the set value on the basis of a change in false color in a face region included in the plurality of image data.
  7.  The control device according to claim 1, wherein the control unit sets, as the set value, the low-pass characteristic of the low-pass filter at the time when one piece of the image data selected by a user from among the plurality of image data was captured.
  8.  The control device according to claim 1, wherein the control unit sets the set value on the basis of a change in resolution and a change in false color in the plurality of image data and positions of false colors included in the plurality of image data.
  9.  The control device according to claim 1, wherein the control unit generates, for the plurality of image data, versions in which the positions of false colors included in the plurality of image data are enlarged.
  10.  The control device according to claim 1, wherein the control unit generates, for the plurality of image data, versions in which the positions of false colors included in the plurality of image data are highlighted.
  11.  The control device according to claim 1, wherein the control unit generates, for the plurality of image data, versions in which positions where resolution degradation occurs, included in the plurality of image data, are highlighted.

  12.  The control device according to claim 3, wherein the control unit derives the second evaluation data on the basis of the difference between the gradation data of the first image data and the gradation data of the second image data and saturation data obtained from the first image data.
  13.  The control device according to claim 3, wherein the first image data is image data obtained when the second evaluation data is minimized or substantially minimized within a changeable range of the low-pass characteristic of the low-pass filter.
  14.  An imaging device comprising:
      an image sensor that generates image data from light incident via a low-pass filter; and
      a control unit that sets a set value of a low-pass characteristic of the low-pass filter on the basis of a change in resolution and a change in false color in a plurality of captured image data corresponding to a change in the low-pass characteristic of the low-pass filter.
  15.  The imaging device according to claim 14, wherein the control unit generates a change instruction for changing the low-pass characteristic of the low-pass filter at a pitch larger than a minimum settable resolution of the low-pass characteristic of the low-pass filter, and generates an imaging instruction synchronized with the change of the low-pass characteristic of the low-pass filter.
  16.  The imaging device according to claim 15, further comprising a shutter button, wherein the control unit generates the change instruction and the imaging instruction in response to a half-press of the shutter button.
PCT/JP2017/012907 2016-05-06 2017-03-29 Control device and imaging device WO2017191717A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018515407A JPWO2017191717A1 (en) 2016-05-06 2017-03-29 Control device and imaging device
US16/092,503 US10972710B2 (en) 2016-05-06 2017-03-29 Control apparatus and imaging apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-092993 2016-05-06
JP2016092993 2016-05-06

Publications (1)

Publication Number Publication Date
WO2017191717A1 true WO2017191717A1 (en) 2017-11-09

Family

ID=60202887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/012907 WO2017191717A1 (en) 2016-05-06 2017-03-29 Control device and imaging device

Country Status (3)

Country Link
US (1) US10972710B2 (en)
JP (1) JPWO2017191717A1 (en)
WO (1) WO2017191717A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005136497A (en) * 2003-10-28 2005-05-26 Canon Inc Image processing method, and image processing apparatus
JP2005176041A (en) * 2003-12-12 2005-06-30 Canon Inc Image pickup apparatus
JP2006080845A (en) * 2004-09-09 2006-03-23 Nikon Corp Electronic camera
JP2011109496A (en) * 2009-11-19 2011-06-02 Nikon Corp Imaging apparatus
WO2015098305A1 (en) * 2013-12-27 2015-07-02 リコーイメージング株式会社 Image capturing device, image capturing method, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08313233A (en) * 1995-05-23 1996-11-29 Fuji Photo Optical Co Ltd Photosensor and optical apparatus using it
US7376485B2 (en) * 2001-05-31 2008-05-20 Mark Salerno Method of remotely programming and updating food product holding apparatus using hand held computer
JP4450187B2 (en) * 2004-06-08 2010-04-14 パナソニック株式会社 Solid-state imaging device
US8559693B2 (en) * 2007-10-11 2013-10-15 British Columbia Cancer Agency Branch Systems and methods for automated characterization of genetic heterogeneity in tissue samples
US10911681B2 (en) * 2016-05-06 2021-02-02 Sony Corporation Display control apparatus and imaging apparatus


Also Published As

Publication number Publication date
JPWO2017191717A1 (en) 2019-03-28
US20190132562A1 (en) 2019-05-02
US10972710B2 (en) 2021-04-06


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018515407

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17792650

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17792650

Country of ref document: EP

Kind code of ref document: A1