WO2022244311A1 - Imaging device, image processing method, and program - Google Patents

Imaging device, image processing method, and program Download PDF

Info

Publication number
WO2022244311A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
reliability
unit
image area
physical quantity
Prior art date
Application number
PCT/JP2022/003019
Other languages
French (fr)
Japanese (ja)
Inventor
龍之介 横矢
大 水落
祐基 明壁
貴洸 小杉
洋司 山本
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Priority to JP2023522209A (JPWO2022244311A1)
Publication of WO2022244311A1

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/18Signals indicating condition of a camera member or suitability of light
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B7/091Digital circuits
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • The present disclosure relates to an imaging device, an image processing method, and a program. More specifically, it relates to an imaging device, an image processing method, and a program that calculate the reliability of a physical quantity for each image region, such as the defocus amount or the distance value obtained for each image region from the detection information of image plane phase difference detection pixels, and that perform various processing according to the calculated reliability.
  • In imaging devices, the image plane phase difference method, which uses image plane phase difference pixels, is known as a method for detecting the focus position (in-focus position).
  • This image plane phase difference method divides the light passing through the imaging lens into a pair of images and detects the focus position (in-focus position) by analyzing the phase difference between the generated pair of images.
  • Specifically, the light flux that has passed through the imaging lens is split into two, and the two split light fluxes are received by a set of image plane phase difference detection pixels that function as focus detection sensors.
  • The degree of focus is detected based on the shift amount between the signals output according to the amount of light received by each of the set of image plane phase difference detection pixels, and the focus lens is adjusted accordingly.
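The relationship just described, where the shift amount between the paired signals is mapped to a lens adjustment, can be sketched as follows. This is a minimal illustration, not the disclosed implementation; in particular, the factor converting a pixel shift into a defocus amount depends on the optics and is a made-up constant here.

```python
def defocus_from_shift(shift_px: float, conversion_factor: float = 2.5) -> float:
    """Convert the measured shift (in pixels) between the paired
    phase-difference signals into a defocus amount.

    A shift of ~0 means the pair of images coincide, i.e. the in-focus
    state; the sign distinguishes front focus from back focus.
    The conversion factor is an assumed, illustrative value.
    """
    return shift_px * conversion_factor

# e.g. with the assumed factor, a 4-pixel shift maps to a defocus of 10.0
```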
  • Patent Document 1 (Japanese Patent Application Laid-Open No. 2012-142952) discloses a configuration that performs blurring processing on a captured image using the detection information of image plane phase difference detection pixels. Specifically, the detection information of the image plane phase difference detection pixels is used to analyze the distribution of the defocus amount in the captured image, and blur processing is applied to the captured image based on the analysis result.
  • Patent Document 2 (Japanese Patent Application Laid-Open No. 2019-035967) discloses a configuration that changes, according to the output destination, the gradation characteristics of the defocus amount and the distance information obtained from the detection information of the image plane phase difference detection pixels before outputting them. For example, different information is generated from the detection information of the image plane phase difference detection pixels depending on what the output destination requires, such as an output destination that requires fine distance resolution near the in-focus position and an output destination that requires information covering the full distance measurement range.
  • Patent Document 3 (Japanese Patent Application Laid-Open No. 2011-053378) discloses a configuration that performs exposure control according to the defocus amount of each image area of a captured image and the photometric value. Specifically, for example, a configuration is disclosed in which exposure control takes into account the photometric value of an out-of-focus area in addition to the photometric value of the focus detection area to be focused.
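The idea in the cited disclosure, combining the photometric value of the focus detection area with that of an out-of-focus area, can be hedged into a one-line sketch. The weighting scheme and the 0.7 weight are assumptions for illustration, not values from the cited document.

```python
def exposure_value(focus_area_luma: float,
                   out_of_focus_luma: float,
                   w_focus: float = 0.7) -> float:
    """Blend the photometric value of the focus detection area with that
    of an out-of-focus area to derive a single exposure-control value.
    The weight w_focus is an illustrative assumption."""
    return w_focus * focus_area_luma + (1.0 - w_focus) * out_of_focus_luma
```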
  • An object of the present disclosure is to provide an imaging device, an image processing method, and a program that calculate the reliability of a physical quantity for each image region, such as the defocus amount or the distance value obtained for each image region from the detection information of image plane phase difference detection pixels, and that perform various processing according to the calculated reliability.
  • In one embodiment, a configuration is realized that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified.
  • In another embodiment, a configuration is realized that outputs the distance ratio between the in-focus subject and a background subject.
  • In yet another embodiment, an image is generated and output in which the stability of a mask set in a partial area of the captured image can be identified according to the reliability of the defocus amount for each image area.
  • A first aspect of the present disclosure is an imaging apparatus including: an image region physical quantity calculation unit that calculates, for each image region that is a segmented region of a captured image, a physical quantity that changes according to the subject distance; an image region unit physical quantity reliability calculation unit that calculates the reliability of the image region unit physical quantity calculated by the image region physical quantity calculation unit; and an image region unit physical quantity reliability corresponding processing execution unit that executes control processing according to the reliability of the image region unit physical quantity calculated by the image region unit physical quantity reliability calculation unit.
  • A second aspect of the present disclosure is an image processing method executed in an imaging device, the method including: an image region physical quantity calculation step in which an image region physical quantity calculation unit calculates, for each image region that is a segmented region of a captured image, a physical quantity that changes according to the subject distance; an image region unit physical quantity reliability calculation step in which an image region unit physical quantity reliability calculation unit calculates the reliability of the image region unit physical quantity calculated in the image region physical quantity calculation step; and an image region unit physical quantity reliability corresponding processing execution step in which an image region unit physical quantity reliability corresponding processing execution unit executes control processing according to the reliability calculated in the image region unit physical quantity reliability calculation step.
  • A third aspect of the present disclosure is a program for executing image processing in an imaging device, the program causing: an image region physical quantity calculation unit to execute an image region physical quantity calculation step of calculating, for each image region that is a segmented region of a captured image, a physical quantity that changes according to the subject distance; an image region unit physical quantity reliability calculation unit to execute an image region unit physical quantity reliability calculation step of calculating the reliability of the image region unit physical quantity calculated in the image region physical quantity calculation step; and an image region unit physical quantity reliability corresponding processing execution unit to execute an image region unit physical quantity reliability corresponding processing execution step of executing control processing according to the reliability calculated in the image region unit physical quantity reliability calculation step.
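The three units named in the aspects above can be sketched as a small pipeline. All class and function names below are illustrative stand-ins, not taken from the disclosure; the reliability here is simply a supplied confidence score clamped to [0, 1], and the control step merely flags trustworthy regions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    defocus: float      # physical quantity that changes with subject distance
    reliability: float  # 0.0 (unreliable) .. 1.0 (reliable)

def calc_physical_quantities(shifts: List[float]) -> List[Region]:
    # Image region physical quantity calculation step: one defocus value
    # per segmented region (shift-to-defocus factor is an assumption).
    return [Region(defocus=s * 2.5, reliability=0.0) for s in shifts]

def calc_reliability(regions: List[Region],
                     confidences: List[float]) -> List[Region]:
    # Reliability calculation step: here a confidence score for the
    # Pa/Pb signal match is clamped to [0, 1] and attached per region.
    for r, c in zip(regions, confidences):
        r.reliability = max(0.0, min(1.0, c))
    return regions

def execute_reliability_processing(regions: List[Region],
                                   threshold: float = 0.5) -> List[bool]:
    # Reliability corresponding processing execution step: a stand-in
    # control process that flags regions with trustworthy defocus values.
    return [r.reliability >= threshold for r in regions]
```

A caller would chain the three steps: compute per-region defocus, attach reliabilities, then branch its control processing (display, masking, exposure) on the resulting flags.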
  • Note that the program of the present disclosure is, for example, a program that can be provided in a computer-readable format, via a storage medium or a communication medium, to an information processing device or computer system capable of executing various program codes.
  • By providing such a program in a computer-readable format, processing according to the program is realized on the information processing device or computer system.
  • a system is a logical collective configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same housing.
  • According to the configuration of one embodiment of the present disclosure, an apparatus and a method are realized that calculate the reliability of the defocus amount and the distance value for each image area of a captured image and execute processing such as display of graphic data indicating the reliability, display of the distance ratio of subjects, and exposure time control. Specifically, for example, the defocus amount and the distance value are calculated for each image area of the captured image, the reliability of the calculated defocus amount and distance value for each image area is calculated, and control is executed according to that reliability.
  • For example, a process of superimposing graphic data indicating the reliability of the defocus amount for each image area on the captured image, a process of displaying the distance ratio of subjects, a process of controlling the exposure time, and the like are executed.
  • With these configurations, an apparatus and a method are realized that calculate the reliability of the defocus amount and the distance value for each image area of the captured image and execute display processing of graphic data indicating the reliability, processing of displaying the distance ratio of subjects, exposure time control processing, and the like. Note that the effects described in this specification are merely examples and are not limiting, and additional effects may also be provided.
  • FIG. 10 is a diagram illustrating an outline of focus detection processing using a phase difference detection method
  • FIG. 10 is a diagram illustrating an outline of focus detection processing using a phase difference detection method
  • FIG. 10 is a diagram illustrating an outline of focus detection processing using a phase difference detection method
  • FIG. 2 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging device
  • FIG. 4 is a diagram illustrating a specific example of a pixel configuration of an image pickup device and an image area as a defocus amount calculation unit;
  • FIG. 10 is a diagram illustrating an outline of focus detection processing using a phase difference detection method
  • FIG. 2 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging device
  • FIG. 4 is a diagram illustrating a specific example of a pixel configuration of an image pickup device and an image area as a defocus amount calculation unit
  • FIG. 4 is a diagram illustrating a specific example of a defocus map generated by a digital signal processing unit of an imaging device
  • FIG. 4 is a diagram illustrating a configuration example of a digital signal processing unit of the image pickup apparatus of Example 1
  • FIGS. 4A and 4B are diagrams for explaining a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the first embodiment
  • FIGS. 4A and 4B are diagrams for explaining a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the first embodiment
  • FIGS. 4A and 4B are diagrams for explaining a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the first embodiment
  • FIG. 10 is a diagram illustrating a configuration example of a digital signal processing unit of the imaging apparatus of Example 2;
  • FIG. 11 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the second embodiment;
  • FIG. 11 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging apparatus according to Example 3;
  • FIG. 11 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 3;
  • FIG. 11 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 3;
  • FIG. 11 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 3;
  • FIG. 11 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 3;
  • FIG. 11 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 3
  • FIG. 11 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging apparatus according to a fourth embodiment
  • FIG. 12 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the fourth embodiment
  • FIG. 12 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the fourth embodiment
  • FIG. 11 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging apparatus according to Example 5
  • FIG. 12 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 5
  • FIG. 12 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 5
  • FIG. 12 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 5;
  • FIG. 12 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus of Example 5
  • FIG. 12 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging apparatus according to Example 6;
  • FIG. 14 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the sixth embodiment;
  • FIG. 12 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging apparatus according to Example 6;
  • FIG. 14 is a diagram illustrating a specific example of processing executed by a digital signal processing unit of the imaging apparatus according to the sixth embodiment;
  • FIG. 12 is a diagram illustrating a configuration example of a digital signal processing unit of an imaging apparatus according to Example 6;
  • (Embodiment 1) An embodiment that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified
  • (Embodiment 2) An embodiment that generates and outputs an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the captured image
  • (Embodiment 3) An embodiment that generates and outputs an image in which mask stability information is superimposed on the captured image
  • (Embodiment 4) An embodiment that generates and outputs an image in which a color map assigning a color according to the defocus amount of each image area is superimposed on the captured image
  • FIG. 1 is a block diagram showing a configuration example of an imaging device 100 of the present disclosure.
  • a configuration example of an imaging device 100 of the present disclosure will be described with reference to FIG.
  • Incident light that has passed through the focus lens 101 and the zoom lens 102 enters an image sensor 103, such as a CMOS or CCD sensor, and is photoelectrically converted by the image sensor 103.
  • The image sensor 103 has a plurality of pixels, each having a photodiode, arranged two-dimensionally in a matrix. It has normal pixels, in which R (red), G (green), or B (blue) color filters are arranged, and phase difference detection pixels that pupil-divide the subject light and perform focus detection.
  • The normal pixels of the image sensor 103 generate analog electrical signals (image signals) of the R (red), G (green), and B (blue) color components of the subject image and output them as R, G, and B color image signals.
  • a phase difference detection pixel of the image sensor 103 outputs a phase difference detection signal (detection information).
  • The phase difference detection signal (detection information) is a signal used mainly for autofocus control. The configuration of the phase difference detection pixels, the phase difference detection signals they generate, and focus control using these signals will be described in detail later.
  • The photoelectrically converted data output from the image sensor 103 in this manner includes the RGB image signals and the phase difference detection signals from the phase difference detection pixels. Each of these signals is input to the analog signal processing unit 104, subjected to processing such as noise removal there, and converted into a digital signal by the A/D conversion unit 105.
  • The digital signal converted by the A/D conversion unit 105 is input to a digital signal processing unit (DSP) 108 and subjected to various kinds of signal processing.
  • Various image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, is performed on the RGB image signal, and the processed image is recorded in a recording device 115 such as a flash memory. The processed image is also displayed on the monitor 117 and the viewfinder (EVF) 116.
  • An image through the lens is displayed as a through image on the monitor 117 and the viewfinder (EVF) 116 regardless of whether or not shooting is performed.
  • Phase difference detection pixel information (detection signal) output from the phase difference detection pixels of the image sensor 103 is also input to the digital signal processing unit (DSP) 108 via the AD conversion unit 106 .
  • The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signals) and calculates the amount of deviation between the in-focus distance and the distance of the subject (focusing target) to be focused on, that is, the defocus amount (DF).
  • An input unit (operation unit) 118 is an operation unit that includes input elements for various operation information, such as the shutter and zoom buttons on the camera body, and a mode dial for setting the shooting mode.
  • the control unit 110 has a CPU, and controls various processes executed by the imaging device according to programs stored in advance in a memory (ROM) 120 or the like.
  • a memory (EEPROM) 119 is a non-volatile memory and stores image data, various auxiliary information, programs, and the like.
  • the memory (ROM) 120 stores programs, calculation parameters, etc. used by the control unit (CPU) 110 .
  • a memory (RAM) 121 stores programs used in the control unit (CPU) 110, the AF control unit 112a, and the like, parameters that change as appropriate during the execution of the programs, and the like.
  • The gyro 131 is a sensor that measures the inclination, angle, angular velocity, and the like of the imaging device 100.
  • the detection information of the gyro 131 is used, for example, for calculating the amount of camera shake during image capturing.
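As a rough illustration of how gyro output can yield a camera-shake amount, angular-velocity samples can be integrated over the exposure. The rectangular integration and the sampling interval below are assumptions; a real implementation would filter the signal and compensate sensor bias.

```python
def shake_angle_deg(angular_velocity_dps, dt_s):
    """Integrate gyro angular-velocity samples (degrees per second),
    taken every dt_s seconds, to estimate the camera-shake angle in
    degrees over the sampled interval (simple rectangular integration)."""
    return sum(w * dt_s for w in angular_velocity_dps)
```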
  • The AF control unit 112a drives the focus lens drive motor 113a provided for the focus lens 101 and executes autofocus control (AF control) processing. For example, the focus lens 101 is moved to its in-focus position for the subject included in the area selected by the user, thereby obtaining the focused state.
  • the zoom control unit 112b drives a zoom lens driving motor 113b set corresponding to the zoom lens 102.
  • A vertical driver 107 drives the image sensor (CCD) 103.
  • the timing generator 106 generates control signals for processing timings of the image sensor 103 and the analog signal processing unit 104, and controls the processing timings of these processing units. Note that the focus lens 101 is driven in the optical axis direction under the control of the AF control section 112a.
  • As described above, the image sensor 103 of the imaging apparatus 100 shown in FIG. 1 outputs not only the R, G, and B image signals but also the phase difference detection signals (detection information) from the phase difference detection pixels, which are signals used for autofocus control.
  • FIG. 2 is a diagram showing a pixel configuration example of the image sensor 103.
  • FIG. 2 shows a pixel configuration example of the image sensor 103 corresponding to (A) a partial area of the captured image.
  • the vertical direction is the Y-axis, and the horizontal direction is the X-axis.
  • one pixel is indicated by one square.
  • the RGB pixels shown in FIG. 2 are pixels for normal image capturing. RGB pixels have, for example, a Bayer array configuration.
  • Phase difference detection pixels 151 for acquiring phase difference information are discretely arranged in some rows of the RGB pixels having the Bayer array.
  • Each phase difference detection pixel is configured as a pair consisting of a right-opening phase difference detection pixel Pa and a left-opening phase difference detection pixel Pb.
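The discrete placement of Pa/Pb pairs within the Bayer array can be modelled with a small pixel-type map. The row positions and pair pitch below are arbitrary assumptions for illustration; the actual layout is specific to the sensor.

```python
import numpy as np

def make_pixel_type_map(h: int, w: int, pd_rows=(4,), pair_pitch=8):
    """Return an (h, w) map: 0 = normal Bayer pixel,
    1 = right-opening pixel Pa, 2 = left-opening pixel Pb.
    Pa/Pb pairs sit side by side every `pair_pitch` columns
    in the rows listed in `pd_rows` (assumed values)."""
    m = np.zeros((h, w), dtype=np.uint8)
    for r in pd_rows:
        for c in range(0, w - 1, pair_pitch):
            m[r, c] = 1      # right-opening pixel Pa
            m[r, c + 1] = 2  # left-opening pixel Pb
    return m
```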
  • In this manner, two kinds of data, the image signals from the normal pixels and the phase difference detection pixel information (detection signals) from the phase difference detection pixels 151, are individually output from the image sensor 103.
  • the phase difference detection pixel information (detection signal) output from the phase difference detection pixel 151 is input to the digital signal processor (DSP) 108 via the AD converter 106 .
  • The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signals) and calculates the amount of deviation between the in-focus distance and the distance of the subject (focusing target) to be focused on, that is, the defocus amount (DF).
  • An outline of the focus detection processing of the phase difference detection method will be described with reference to FIGS. 3 to 5.
  • In the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a set of phase difference detection pixels functioning as focus detection sensors, and the focus lens is set to the in-focus position (focus position) based on this defocus amount.
  • phase difference detection pixels described with reference to FIG. 2 will be referred to as pixel Pa and pixel Pb, respectively, and the details of light incident on these pixels will be described with reference to FIG.
  • As shown in the figure, phase difference detection pixels Pa and Pb, each configured with a pair of photodetectors (PD), are arranged in the horizontal direction: they receive a light flux Ta from the right portion (also referred to as the "right partial pupil region" or simply the "right pupil region") Qa of the exit pupil EY of the photographing optical system and a light flux Tb from the left portion (also referred to as the "left partial pupil region" or simply the "left pupil region") Qb, respectively.
  • One phase difference detection pixel (hereinafter referred to as the "first phase difference detection pixel") Pa has a first light shielding plate AS1 with a slit-shaped first opening OP1 and, below it, a second light shielding plate AS2 with a slit-shaped second opening OP2, which limit the light incident on the first phase difference detection pixel Pa.
  • The first opening OP1 in the first phase difference detection pixel Pa is provided at a position biased in a specific direction (here, the right direction (+X direction)), with the central axis CL, which passes through the center of the light receiving element PD and is parallel to the optical axis LT, as a reference (starting point).
  • The second opening OP2 in the first phase difference detection pixel Pa is provided at a position biased in the direction opposite to the specific direction (also referred to as the "anti-specific direction") with respect to the central axis CL.
  • The other phase difference detection pixel (hereinafter referred to as the "second phase difference detection pixel") Pb likewise has a first light shielding plate AS1 with a slit-shaped first opening OP1 and a second light shielding plate AS2, arranged below the first light shielding plate AS1, with a slit-shaped second opening OP2.
  • the first opening OP1 in the second phase difference detection pixel Pb is provided at a position biased in the direction opposite to the specific direction with reference to the central axis CL.
  • the second opening OP2 in the second phase difference detection pixel Pb is provided at a position biased in the specific direction with reference to the central axis CL.
  • That is, in the phase difference detection pixels Pa and Pb, the first openings OP1 are biased in mutually different directions, and the second openings OP2 are shifted in mutually different directions with respect to the corresponding first openings OP1.
  • the pair of phase difference detection pixels Pa and Pb configured as described above acquire subject light that has passed through different regions (parts) in the exit pupil EY.
  • The light flux Ta that has passed through the right pupil region Qa of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pa and the first opening OP1 of the first light shielding plate AS1, is further limited by the second light shielding plate AS2, and is then received by the light receiving element PD of the first phase difference detection pixel Pa.
  • Similarly, the light flux Tb that has passed through the left pupil region Qb of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pb and the first opening OP1 of its first light shielding plate AS1, is further limited by the second light shielding plate AS2, and is received by the light receiving element PD of the second phase difference detection pixel Pb.
  • Fig. 4 shows an example of the output of the light-receiving element obtained at each pixel of Pa and Pb.
  • the output line from the pixel Pa and the output line from the pixel Pb are signals having a predetermined amount of shift Sf.
  • FIG. 5(a) shows the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is set at a position corresponding to the subject distance and the focus is achieved, that is, in the in-focus state.
  • FIGS. 5(b1) and 5(b2) show the shift amounts generated between the pixels Pa and Pb when the focus lens is not set at a position corresponding to the subject distance, that is, in an out-of-focus state.
  • (b1) is an example in which the shift amount is larger than that at the in-focus state
  • (b2) is an example in which the shift amount is smaller than that at the in-focus state.
  • This process is the focusing process according to the "phase difference detection method." Focusing processing according to this "phase difference detection method" enables the focus lens to be set to the in-focus position, that is, to a position corresponding to the subject distance.
  • The shift amount described with reference to FIG. 5 can be measured for each set of pixels Pa and Pb, the phase difference detection pixels configured in the image sensor shown in FIG. 2. It is therefore possible to individually calculate the in-focus position (focus point) and the defocus amount for the subject image captured in each pixel-pair area.
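The per-pair shift measurement described above can be sketched as a search that minimises the sum of absolute differences (SAD) between the Pa and Pb signal lines over candidate shifts. This integer-only version is illustrative; practical implementations interpolate around the SAD minimum for sub-pixel precision.

```python
import numpy as np

def estimate_shift(pa: np.ndarray, pb: np.ndarray, max_shift: int = 8) -> int:
    """Return the integer shift s that best aligns pa[i] with pb[i - s],
    chosen by minimising the SAD over the overlapping samples."""
    best_shift, best_sad = 0, float("inf")
    n = len(pa)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)          # overlap of the two windows
        sad = float(np.abs(pa[lo:hi] - pb[lo - s:hi - s]).sum())
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift
```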
  • FIG. 6 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • The digital signal processing unit 108 shown in FIG. 6 calculates the reliability of a physical quantity for each image region, such as the defocus amount or the distance value for each image region, using the detection information of the phase difference detection pixels, and executes various processing according to the calculated reliability.
  • As shown in FIG. 6, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image region unit physical quantity reliability calculation unit 205, an image region unit physical quantity reliability corresponding processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 6 selects only phase difference detection signals (detection information), which are outputs of phase difference detection pixels, from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 6 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • the image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
  • the image region unit physical quantity reliability corresponding processing execution unit 206 may further perform image control processing on the image (for example, an RGB image) generated by the image signal processing unit 212 .
  • As the image control processing, for example, superimposition of defocus amount reliability identification data on the RGB image is performed. Specific examples of these processes will be described later.
  • An image (for example, an RGB image) generated by the image signal processing unit 212, or such an image changed or processed by the image area unit physical quantity reliability correspondence processing execution unit 206, is output to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
  • The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
  • The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area, for example, an image area composed of a plurality of pixels.
  • a defocus amount calculation unit 202 calculates a defocus amount, which is a physical quantity that changes according to the subject distance, for each image area.
  • That is, the defocus amount of the focus lens is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as a focus detection sensor.
  • The AF control signal generation unit 203 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to, for example, the in-focus position (focus position) for the subject specified by the user.
  • Note that the subject set at the in-focus position is not the entire image area of the captured image but a subject specified by the user, such as a person; images of other subjects, such as the background, are out of focus and appear blurred.
  • The shift amount described above with reference to FIG. 5 is measured in units of a set of pixels Pa and Pb, which are the phase difference detection pixels configured in the image sensor described above. That is, the defocus amount calculation unit 202 can calculate the defocus amount of the subject image for each fine image area unit of the captured image.
  • FIG. 7 shows an example of an image area that is used as a defocus amount calculation unit.
  • FIG. 7 shows the pixel configuration of the imaging element 103 similar to that of FIG. 2 described above.
  • the phase difference detection pixels 151 for acquiring phase difference information are discretely set in some (rows) of RGB pixels having a Bayer array.
  • a phase difference detection pixel is configured by a pair of a right opening phase difference detection pixel Pa and a left opening phase difference detection pixel Pb.
  • An image area that is a unit for calculating the defocus amount can be set, for example, as an image area 152 shown in FIG.
  • The image area 152, which is the unit for calculating the defocus amount, is a fine image area of n×m pixels, such as 6×5 pixels.
  • A plurality of sets of phase difference detection pixels are included in this fine image area of n×m pixels. The shift amount described above with reference to FIG. 5 is measured from each of the plurality of sets of phase difference detection pixels.
  • The defocus amount calculation unit 202 calculates, for example, the average of the plural sets of shift amounts in an image area 152 as shown in FIG. 7 as the shift amount of that n×m-pixel image area 152. It then calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, from the calculated shift amount and the shift amount of an in-focus image area. In this manner, the defocus amount calculation unit 202 calculates the defocus amount for each image area 152 as shown in FIG. 7.
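  • As a rough illustration of the averaging step described above, the per-area computation might be sketched as follows (the function names and the linear shift-to-defocus conversion factor are assumptions for illustration, not taken from this disclosure).

```python
# Illustrative sketch (names and the shift-to-defocus conversion factor are
# assumed, not from this disclosure): the shift amount of an n x m image area
# is the average of the shift amounts measured by the phase difference pixel
# pairs inside it, and the defocus amount is derived from its deviation from
# the shift amount of an in-focus image area.

def area_shift_amount(pair_shifts):
    """Average the shift amounts of the pixel pairs within one image area."""
    return sum(pair_shifts) / len(pair_shifts)

def area_defocus_amount(pair_shifts, in_focus_shift, shift_to_defocus=1.0):
    """Defocus amount of the area: deviation of the averaged shift from the
    in-focus shift amount, scaled by an assumed conversion factor."""
    return (area_shift_amount(pair_shifts) - in_focus_shift) * shift_to_defocus

# Example: three pixel-pair shift measurements in one image area.
shifts = [2.0, 2.4, 2.2]
print(round(area_shift_amount(shifts), 3))         # 2.2
print(round(area_defocus_amount(shifts, 1.0), 3))  # 1.2 (area is out of focus)
```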
  • The defocus map generation unit 204 receives the fine image-area-unit defocus amounts of the captured image calculated by the defocus amount calculation unit 202, and generates a defocus map in which the defocus amount of each image area can be identified.
  • That is, the defocus amount for each image area 152 as shown in FIG. 7 is calculated to generate a defocus map.
  • FIG. 8A is an example of an image captured by the imaging device 100.
  • The photographed image (A) shown in FIG. 8 is not limited to an image recorded by the user's shutter operation on the imaging apparatus 100; it also includes a so-called through image, which is input through the lens of the imaging apparatus 100 regardless of whether the shutter is operated and is displayed on the monitor 117 or the like.
  • the defocus map of FIG. 8B is a defocus map corresponding to the captured image of (A), and is a defocus map generated by the defocus map generation unit 204 .
  • The defocus map generation unit 204 generates a defocus map based on the defocus amount for each image area 152 as shown in FIG. 7 calculated by the defocus amount calculation unit 202.
  • The rectangular areas shown in the defocus map in FIG. 8(B) are the image areas serving as defocus amount calculation units, and correspond to the image area 152 shown in FIG. 7. Note that the rectangular areas shown in the defocus map of FIG. 8(B) are drawn large for easy understanding.
  • the image area as the actual defocus amount calculation unit can be set as a finer image area. That is, as described above with reference to FIG. 7, it is possible to set a pixel region of several pixels to several tens of pixels.
  • the defocus map shown in FIG. 8B is a map in which luminance values (pixel values) are set according to defocus amounts in units of pixel regions (rectangular regions).
  • the lower luminance (black) region is a pixel region with a larger defocus amount, that is, a pixel region with a lower degree of focus.
  • When the luminance value (pixel value) is set in the range 0 to 255, a region whose luminance value is closer to 255 (maximum luminance (white)) has a smaller defocus amount, that is, a higher degree of focus.
  • In this example, the person and the house shown in the captured image are set as the focus target subjects, so the image areas corresponding to the person and the house are set to high-luminance (white) areas, that is, pixel areas with a small defocus amount and a high degree of focus.
  • the background area other than the person and the house is set to a low luminance (black) area or a gray area, and is a pixel area with a large defocus amount and a low focus degree.
  • In this way, the defocus map generation unit 204 generates a defocus map as shown in FIG. 8(B) based on the defocus amount for each image area 152 as shown in FIG. 7 calculated by the defocus amount calculation unit 202.
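  • The luminance mapping described above can be sketched as follows (a minimal illustration; the scaling scheme and the names are assumptions — the description only specifies that luminance near 255 corresponds to a small defocus amount and low luminance to a large one).

```python
# Illustrative sketch (scaling scheme assumed): converting per-area defocus
# amounts into the 0-255 luminance values of a defocus map, where luminance
# 255 (white) means zero defocus (in focus) and lower luminance means a
# larger defocus amount.

def defocus_to_luminance(defocus, max_defocus):
    """Map |defocus| in [0, max_defocus] to a luminance value in [0, 255]."""
    d = min(abs(defocus), max_defocus)
    return round(255 * (1.0 - d / max_defocus))

def defocus_map(area_defocus_values, max_defocus):
    """Build a luminance map with the same layout as the grid of image areas."""
    return [[defocus_to_luminance(d, max_defocus) for d in row]
            for row in area_defocus_values]

# Example: a 2 x 3 grid of image areas; in-focus subject areas (0.0) come out
# white (255), heavily defocused background areas come out dark (0).
grid = [[0.0, 0.5, 2.0],
        [0.1, 1.0, 2.0]]
print(defocus_map(grid, max_defocus=2.0))
```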
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of an image-area-unit physical quantity that changes according to the subject distance, for example, the image-area-unit defocus amount or distance value.
  • The image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and, based on these input data, calculates the reliability of the defocus amount calculated for each image area. It also calculates the reliability of the distance value calculated from the defocus amount for each image area. Specific examples of these processes will be described later.
  • The reliability of the defocus amount for each image area or the reliability of the distance value for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206.
  • The image area unit physical quantity reliability correspondence processing execution unit 206 performs various controls, such as image control and shooting control, on the captured image (for example, an RGB image) generated by the image signal processing unit 212, according to the reliability of the defocus amount for each image area or the reliability of the distance value for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
  • As image control processing for the captured image (for example, an RGB image), the following processing is executed.
  • For example, exposure control for each image area is executed according to the reliability of the defocus amount for each image area.
  • The following embodiments will be described.
  • (Embodiment 1) An embodiment of generating and outputting an image in which the reliability of the defocus amount for each image area can be identified.
  • (Embodiment 2) An embodiment of generating and outputting an image in which the distance ratio between the in-focus subject and other background subjects is displayed on the captured image.
  • (Embodiment 3) An embodiment of generating and outputting an image in which mask stability information is superimposed on the captured image.
  • (Embodiment 4) An embodiment of generating and outputting an image in which a color map that outputs colors corresponding to the defocus amount of each image area is superimposed on the captured image.
  • (Embodiment 5) An embodiment of generating and outputting an image in which a color map that outputs colors corresponding to the defocus amount of each image area and enables identification of the defocus amount reliability of each image area is superimposed on the captured image.
  • Example 1 Embodiment of generating and outputting an image in which the reliability of the defocus amount for each image area can be identified.
  • FIG. 9 is a block diagram for explaining the configuration of the first embodiment.
  • FIG. 9 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described above.
  • The digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 9 selects only the phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 9 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • the image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
  • the image region unit physical quantity reliability corresponding processing execution unit 206 further performs image control processing on the image (eg, RGB image) generated by the image signal processing unit 212 .
  • For example, defocus amount reliability identification data is superimposed on the image (for example, an RGB image) generated by the image signal processing unit 212.
  • An image (for example, an RGB image) generated by the image signal processing unit 212, or such an image changed or processed by the image area unit physical quantity reliability correspondence processing execution unit 206, is output to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
  • The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
  • The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area, for example, an image area composed of a plurality of pixels.
  • a defocus amount calculation unit 202 calculates a defocus amount, which is a physical quantity that changes according to the subject distance, for each image area.
  • That is, the defocus amount of the focus lens is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as a focus detection sensor.
  • The AF control signal generation unit 203 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to, for example, the in-focus position (focus position) for the subject specified by the user.
  • Note that the subject set at the in-focus position is not the entire image area of the captured image but a subject specified by the user, such as a person; images of other subjects, such as the background, are out of focus and appear blurred.
  • As described above, the defocus amount calculation unit 202 calculates the defocus amount, that is, the amount equivalent to the deviation between the in-focus distance and the subject distance, in units of minute image areas such as n×m pixels.
  • The defocus map generation unit 204 receives the fine image-area-unit defocus amounts of the captured image calculated by the defocus amount calculation unit 202, and generates a defocus map in which the defocus amount of each image area can be identified.
  • a defocus map as described above with reference to FIG. 8B is generated.
  • the defocus map generation unit 204 generates a defocus map as shown in FIG. 8B, for example.
  • the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area.
  • The image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and, based on these input data, calculates the reliability of the defocus amount calculated for each image area.
  • various methods can be applied as the method of calculating the reliability of the defocus amount for each image area.
  • a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, the edge detection result, or the like can be used.
  • a method of calculating the defocus amount reliability for each image area using a cross-correlation function of two waveforms of parallax data used for defocus amount calculation is also possible.
  • This defocus amount reliability calculation method uses the two waveforms of the parallax data used for the defocus amount calculation, that is, the waveform indicating the output from the pixel Pa and the waveform indicating the output from the pixel Pb shown in FIG. 4, together with the cross-correlation function of these two waveforms.
  • the pair of phase difference detection pixels Pa and Pb acquire subject light that has passed through different regions (parts) in the exit pupil EY.
  • the outputs of the light-receiving elements acquired by the pixels Pa and Pb are signals having a predetermined amount of shift Sf for the output line from the pixel Pa and the output line from the pixel Pb.
  • FIG. 5(a) shows the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is set at a position corresponding to the subject distance and the focus is achieved, that is, in the in-focus state.
  • FIGS. 5(b1) and 5(b2) show the shift amounts occurring between pixels Pa and Pb when the focus lens is not set to a position corresponding to the subject distance, i.e., in an out-of-focus state.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the two waveforms of the parallax data used for calculating the defocus amount, that is, the waveform indicating the output from the pixel Pa and the waveform indicating the output from the pixel Pb shown in FIG. 4, and the cross-correlation function of these two waveforms.
  • These two waveform data, the waveform indicating the output from the pixel Pa and the waveform indicating the output from the pixel Pb, are acquired by the phase difference information acquisition unit 201 and input to the image area unit physical quantity reliability calculation unit 205 via the defocus amount calculation unit 202.
  • The image area unit physical quantity reliability calculation unit 205 calculates the defocus amount reliability from the two waveform data as follows. Let f(t) be the waveform data representing the output from the pixel Pa shown in FIG. 4, and g(t) be the waveform data representing the output from the pixel Pb. The cross-correlation function h(τ) of these two waveform data f(t) and g(t) can be calculated by the following (Equation 1):
  h(τ) = ∫ f(t)g(t+τ)dt  ... (Equation 1)
Note that t indicates the position within each pixel Pa, Pb.
  • h(τmax) can be calculated as the reliability of the defocus amount, where τmax is the (τ) at which the cross-correlation function h(τ) reaches its maximum value.
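  • A discretized version of this reliability computation might look as follows (an illustrative sketch, not the patent's implementation; the discrete sum stands in for the integral of Equation 1, and all names are assumptions).

```python
# Illustrative sketch (not from the patent text): computing the reliability
# h(tau_max) of a defocus amount from two phase-difference waveforms f and g,
# as the maximum of their discrete cross-correlation (Equation 1, discretized).

def cross_correlation(f, g, tau):
    """Discrete h(tau) = sum over t of f(t) * g(t + tau), within bounds."""
    n = len(f)
    return sum(f[t] * g[t + tau] for t in range(n) if 0 <= t + tau < n)

def defocus_reliability(f, g, max_shift):
    """Return (tau_max, h(tau_max)): the shift that maximizes the correlation
    and the correlation value there, used as the reliability score."""
    best_tau, best_h = 0, float("-inf")
    for tau in range(-max_shift, max_shift + 1):
        h = cross_correlation(f, g, tau)
        if h > best_h:
            best_tau, best_h = tau, h
    return best_tau, best_h

# Example: g is f shifted by 2 samples; well-matched waveforms give a
# high correlation peak, i.e., a high reliability.
f = [0, 0, 1, 3, 1, 0, 0, 0]
g = [0, 0, 0, 0, 1, 3, 1, 0]
print(defocus_reliability(f, g, max_shift=3))  # (2, 11)
```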
  • the defocus amount reliability calculated in this example is the reliability of the defocus amount for each pixel region.
  • one image region 152 includes a plurality of sets of phase difference detection pixels.
  • The defocus amount calculation unit 202 calculates, for example, the average of the plural sets of shift amounts in an image area 152 as shown in FIG. 7 as the shift amount of that n×m-pixel image area 152. It then calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, from the calculated shift amount and the shift amount of an in-focus image area.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, that is, for each image area corresponding to one rectangular area in the defocus map shown in FIG. 8(B), using the image-area-unit defocus amount calculated by the defocus amount calculation unit 202.
  • As the waveform data f(t) and g(t) serving as the basis for calculating the reliability, for example, the average waveform of the waveform data f(t) and the average waveform of the waveform data g(t) over the plural sets of phase difference detection pixels in the image area are calculated and used.
  • In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the image-area-unit defocus amount calculated by the defocus amount calculation unit 202, using, for example, the cross-correlation function of the two waveforms of the parallax data used to calculate the defocus amount.
  • the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
  • The image area unit physical quantity reliability correspondence processing execution unit 206 processes the captured image (for example, an RGB image) generated by the image signal processing unit 212 according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205. Specifically, it executes image generation processing that makes the reliability of the defocus amount for each image area identifiable.
  • the image area unit physical quantity reliability correspondence processing execution unit 206 performs image control processing on an image (for example, an RGB image) generated by the image signal processing unit 212.
  • Specifically, a process is executed to generate an output image in which low-reliability regions and high-reliability regions of the defocus amount calculated for each image area can be identified. A specific example of this processing will be described with reference to FIG. 10 and subsequent drawings.
  • FIG. 10 shows the following data.
  • (A) Captured image
  • (B) Defocus map
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
  • (B) Defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a map that outputs, as a luminance value (for example, 0 to 255), the defocus amount for each image area calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area in the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
  • Based on this reliability data, the image area unit physical quantity reliability correspondence processing execution unit 206 extracts, for example as shown in FIG. 10, a defocus amount low-reliability area and a defocus amount high-reliability area.
  • Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with predetermined reliability thresholds to extract the defocus amount low-reliability area and the defocus amount high-reliability area. For example, using a predefined low-reliability threshold Th1 and a high-reliability threshold Th2:
  (Decision formula 1) Defocus amount reliability ≤ Th1
An image region that satisfies decision formula 1 is extracted as a defocus amount low-reliability region.
  • The image area unit physical quantity reliability correspondence processing execution unit 206 extracts the defocus amount low-reliability area and the defocus amount high-reliability area according to these decision formulas and, based on the extraction result, executes a process of generating an output image in which the low-reliability and high-reliability regions of the defocus amount calculated for each image area can be identified. A specific example will be described with reference to FIG. 11 and subsequent figures.
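  • The decision formulas above can be sketched as follows (illustrative only; the threshold values and the handling of the middle range are assumptions, not taken from this disclosure).

```python
# Illustrative sketch (threshold values and names are assumed): classifying
# each image area by its defocus amount reliability using a low-reliability
# threshold Th1 (decision formula 1: reliability <= Th1) and a
# high-reliability threshold Th2 (decision formula 2: Th2 <= reliability).

def classify_reliability(reliability, th1, th2):
    """Return 'low', 'high', or 'middle' for one image area."""
    if reliability <= th1:
        return "low"      # defocus amount may not be calculated correctly
    if reliability >= th2:
        return "high"     # defocus amount is calculated correctly
    return "middle"       # neither decision formula applies

# Example: reliabilities of four image areas with Th1 = 0.3, Th2 = 0.7.
areas = [0.1, 0.5, 0.9, 0.3]
print([classify_reliability(r, 0.3, 0.7) for r in areas])
# ['low', 'middle', 'high', 'low']
```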
  • FIG. 11 shows the following data.
  • (A) Captured image
  • (B) Defocus map
  • (C) Output image
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
  • a defocus map is a defocus map generated by the defocus map generation unit 204 .
  • An output image is an example of an output image generated by the image area unit physical quantity reliability correspondence processing execution unit 206 based on (A) a photographed image and (B) a defocus map.
  • An example of an output image (C) shown in FIG. 11 is an example of an output image in which a defocus amount low-reliability region can be identified.
  • The image area unit physical quantity reliability correspondence processing execution unit 206 extracts, as a defocus amount low-reliability area, an image area in (B) the defocus map whose defocus amount reliability is equal to or lower than the low-reliability threshold Th1 described above, that is, an area satisfying (Decision formula 1) Defocus amount reliability ≤ Th1, and superimposes graphic data that makes this low-reliability area identifiable on (A) the captured image to generate (C) the output image.
  • the defocus amount low-reliability area shown in the output image of FIG. 11(C) is composed of, for example, graphic data of a plurality of translucent red rectangular blocks. Each rectangular block corresponds to one image area that is a defocus amount calculation unit. Note that red is an example, and other colors may be set.
  • This (C) output image is output to the monitor 117 of the imaging apparatus 100, for example.
  • the user can easily check the area where the defocus amount may not be calculated correctly.
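  • The superimposition of translucent rectangular blocks described above can be sketched as a per-pixel alpha blend (an illustrative sketch; the alpha value, highlight color, and data layout are assumptions, not taken from this disclosure).

```python
# Illustrative sketch (alpha value and color are assumptions): superimposing
# translucent colored blocks on an RGB image so that low-reliability image
# areas can be identified, by alpha-blending a highlight color over every
# pixel belonging to a flagged block.

def blend(pixel, color, alpha):
    """Alpha-blend a highlight color over one RGB pixel (values 0-255)."""
    return tuple(round((1 - alpha) * p + alpha * c) for p, c in zip(pixel, color))

def overlay_low_reliability(image, low_pixels, color=(255, 0, 0), alpha=0.5):
    """Blend `color` into every pixel whose (row, col) is in `low_pixels`.
    `image` is a list of rows of (R, G, B) tuples; `low_pixels` is a set of
    pixel coordinates belonging to low-reliability image areas."""
    return [[blend(px, color, alpha) if (r, c) in low_pixels else px
             for c, px in enumerate(row)]
            for r, row in enumerate(image)]

# Example: a 2 x 2 gray image; the top-left pixel belongs to a low-reliability
# area and is tinted semi-transparent red, the rest is left unchanged.
img = [[(100, 100, 100), (100, 100, 100)],
       [(100, 100, 100), (100, 100, 100)]]
out = overlay_low_reliability(img, {(0, 0)})
print(out[0][0])  # (178, 50, 50): halfway between gray and red
print(out[0][1])  # (100, 100, 100): unchanged
```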
  • the (C) output image in FIG. 11 is an example of an output image in which a defocus amount low-reliability region can be identified.
  • FIG. 12 shows the following data.
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
  • a defocus map is a defocus map generated by the defocus map generation unit 204 .
  • An output image is an example of an output image generated by the image area unit physical quantity reliability correspondence processing execution unit 206 based on (A) a photographed image and (B) a defocus map.
  • An example of the output image (C) shown in FIG. 12 is an example of an output image in which the defocus amount high reliability region can be identified.
  • The image area unit physical quantity reliability correspondence processing execution unit 206 extracts, as a defocus amount high-reliability area, an image area in (B) the defocus map whose defocus amount reliability is equal to or higher than the high-reliability threshold Th2 described above, that is, an area satisfying (Decision formula 2) Th2 ≤ Defocus amount reliability, and superimposes graphic data that makes this high-reliability area identifiable on (A) the captured image to generate (C) the output image.
  • the defocus amount high-reliability area shown in the output image of FIG. 12(C) is composed of, for example, graphic data of a plurality of translucent blue rectangular blocks. Each rectangular block corresponds to one image area that is a defocus amount calculation unit. Note that blue is an example, and other colors may be set.
  • This (C) output image is output to the monitor 117 of the imaging apparatus 100, for example.
  • the user can easily confirm the area where the defocus amount is calculated correctly.
  • As described above, the first embodiment generates and outputs an image in which graphic data that enables identification of the reliability of the defocus amount for each image area is superimposed on the captured image. With this graphic data superimposed on the captured image, the user can distinguish and confirm areas where the defocus amount is calculated correctly and areas where it is not.
  • Note that the digital signal processing unit 108 described with reference to FIG. 9 has a configuration that includes all of the phase difference information acquisition unit 201, the defocus amount calculation unit 202, the AF control signal generation unit 203, the defocus map generation unit 204, the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206, the image information acquisition unit 211, the image signal processing unit 212, and the image output unit 213.
  • this configuration is an example.
  • the data processing may be performed by an external device different from the imaging device 100 .
  • the image signal processing unit 212 can be configured outside the digital signal processing unit 108 .
  • the image signal processing may be performed by an external device other than the imaging device 100, such as a PC.
  • (Example 2) Example of generating and outputting an image in which a distance ratio between a focused subject and other background subjects is superimposed on a photographed image
  • As Example 2, an example of generating and outputting an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the captured image will be described.
  • FIG. 13 is a block diagram for explaining the configuration of the second embodiment.
  • FIG. 13 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • The digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image region unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, an image output unit 213, and a distance information calculation unit 221.
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 13 selects only the phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 13 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • the image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
  • In this embodiment, the image region unit physical quantity reliability correspondence processing execution unit 206 performs image control processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
  • an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the image (for example, the RGB image) generated by the image signal processing unit 212 is generated and output.
  • An image (for example, an RGB image) generated by the image signal processing unit 212, or such an image as changed or processed by the image region unit physical quantity reliability correspondence processing execution unit 206, is output to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
  • The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
  • The defocus amount calculation unit 202 calculates the defocus amount (DF), that is, the amount of deviation between the in-focus distance and the object distance, for each minute image area unit, for example, an image area unit composed of a plurality of pixels.
  • The defocus amount of the focus lens is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as a focus detection sensor.
  • Based on the defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject specified by the user.
  • the subject to be set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user such as a person. Images of other objects, such as the background, are out of focus and appear blurred.
  • The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance, in units of minute image areas such as n × m pixels; a defocus amount is thus calculated for each such image area unit.
  • the distance information calculation unit 221 receives the defocus amount for each image area of the captured image calculated by the defocus amount calculation unit 202, and calculates the distance for each image area of the captured image based on the defocus amount for each image area. Calculate information.
  • the distance information calculation unit 221 calculates a distance value, which is a physical quantity that changes according to the object distance, for each image area.
  • the distance information calculation unit 221 generates, for example, a depth map indicating the distance value for each image area by pixel values (eg, 0 to 255).
  • The (B) distance information (depth map) for each image area generated by the distance information calculation unit 221 is a depth map showing, as pixel values (0 to 255, for example), the subject distance values for each image area calculated from the defocus amounts for each minute image area (such as n × m pixels) computed by the defocus amount calculation unit 202.
  • a high luminance (high pixel value) area is an area with a short subject distance
  • a low luminance (low pixel value) area is an area with a long subject distance.
  • The subject distance can be calculated from the defocus amount by applying parameters such as the focal length of the lens (focus lens) of the imaging device. Specifically, the subject distance for each image area is calculated according to (Equation 2) below.
  • the distance information calculation unit 221 calculates the subject distance for each image area according to the above (Formula 2).
  • the object distance information for each image area calculated by the distance information calculation unit 221 is output to the image area unit physical quantity reliability calculation unit 205 .
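  • (Equation 2) itself is not reproduced in this excerpt. As a hedged illustration of how a subject distance can follow from a defocus amount and lens parameters, the sketch below uses the thin-lens relation 1/f = 1/u + 1/v and treats the defocus amount as a shift of the image plane; the function name and all numeric values are assumptions, not the patent's actual (Equation 2).

```python
def subject_distance_from_defocus(defocus_mm, focal_length_mm, focus_distance_mm):
    """Illustrative stand-in for (Equation 2).

    Derives a per-area subject distance from the defocus amount via the
    thin-lens equation 1/f = 1/u + 1/v, where u is the subject distance
    and v the image distance. The defocus amount is modeled as a shift
    of the image plane away from the in-focus image distance.
    """
    f, u = focal_length_mm, focus_distance_mm
    v = u * f / (u - f)                      # image distance of the in-focus subject
    v_shifted = v + defocus_mm               # image plane shifted by the defocus amount
    return v_shifted * f / (v_shifted - f)   # subject distance for that shift

# Zero defocus maps back to the in-focus subject distance (2 m here).
d0 = subject_distance_from_defocus(0.0, focal_length_mm=50.0, focus_distance_mm=2000.0)
```

Distances computed this way per image area could then be quantized to pixel values (for example, 0 to 255) to form the depth map described above.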
  • The defocus map generation unit 204 receives the defocus amount for each minute image area of the captured image calculated by the defocus amount calculation unit 202, and generates, based on the defocus amount for each image area, a defocus map in which the defocus amount of each image area can be identified.
  • a defocus map as described above with reference to FIG. 8B is generated.
  • the defocus map generation unit 204 generates a defocus map as shown in FIG. 8B, for example.
  • the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area, and the reliability of the distance information for each image area.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the distance information for each image area as a reliability according to the reliability of the defocus amount for each image area. That is, for an image area where the reliability of the defocus amount is low, the reliability of the distance value calculated from that defocus amount is judged to be low, and for an image area where the reliability of the defocus amount is high, the reliability of the distance value calculated from that defocus amount is judged to be high.
  • The image region unit physical quantity reliability calculation unit 205 receives the defocus amount for each image region calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and calculates, based on these input data, the reliability of the defocus amount calculated for each image area.
  • various methods can be applied as the method of calculating the reliability of the defocus amount for each image area.
  • a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, the edge detection result, or the like can be used.
  • It is also possible to calculate the defocus amount reliability for each image area using the method based on (Equation 1) described in the first embodiment, that is, using the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation.
  • the defocus amount reliability calculated in the second embodiment is also the reliability of the defocus amount for each pixel area, as described in the first embodiment.
  • The image area unit physical quantity reliability calculation unit 205 uses the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount for each image area unit corresponding to one rectangular area in the defocus map shown in FIG. 8B, to calculate the reliability of the defocus amount for each image area.
  • For example, the image region unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image region calculated by the defocus amount calculation unit 202, using the cross-correlation function of the two waveforms of the parallax data used to calculate the defocus amount.
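  • (Equation 1) is not reproduced in this excerpt. As one hedged sketch of a cross-correlation-based reliability, the peak of the normalized cross-correlation between the two parallax waveforms can be used as a score: near 1 for a clean match, lower for noisy or flat signals. The function name and waveform values below are hypothetical.

```python
def defocus_reliability(left_wave, right_wave):
    """Peak normalized cross-correlation of two parallax waveforms.

    A sketch of a reliability measure in the spirit of (Equation 1):
    well-matched waveforms score near 1.0, flat or noisy ones score low.
    """
    n = len(left_wave)

    def normalize(wave):
        mean = sum(wave) / len(wave)
        centered = [x - mean for x in wave]
        mag = sum(x * x for x in centered) ** 0.5 or 1.0  # avoid /0 for flat waves
        return [x / mag for x in centered]

    a, b = normalize(left_wave), normalize(right_wave)
    best = 0.0
    for shift in range(-(n - 1), n):          # slide one waveform over the other
        s = sum(a[i] * b[i + shift] for i in range(n) if 0 <= i + shift < n)
        best = max(best, s)
    return best                               # bounded above by 1.0 (Cauchy-Schwarz)

# Identical waveforms correlate perfectly; a flat waveform gives no signal.
r_match = defocus_reliability([0, 1, 4, 9, 4, 1, 0], [0, 1, 4, 9, 4, 1, 0])
r_flat = defocus_reliability([0, 1, 4, 9, 4, 1, 0], [5, 5, 5, 5, 5, 5, 5])
```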
  • the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the distance information for each image area according to the reliability of the defocus amount for each image area.
  • The reliability of the distance information for each image area is calculated as a reliability corresponding to the reliability of the defocus amount for each image area. That is, for an image area where the reliability of the defocus amount is low, the reliability of the distance value calculated from that defocus amount is judged to be low, and for an image area where the reliability of the defocus amount is high, the reliability of the distance value calculated from that defocus amount is judged to be high.
  • the reliability data of the distance information for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
  • The image area unit physical quantity reliability correspondence processing execution unit 206 performs image control processing on the captured image (for example, an RGB image) generated by the image signal processing unit 212. Specifically, according to the reliability of the distance information for each image area calculated by the image area unit physical quantity reliability calculation unit 205, it generates distance ratio data between areas having highly reliable distance information and displays the data superimposed on the captured image. For example, image areas having distance values whose reliability is equal to or greater than a predetermined threshold are selected, and distance ratio data between the subjects in the selected image areas is generated and displayed superimposed on the captured image.
  • FIG. 14 shows the following data.
  • (A) Photographed image
  • (B) Output image
  • (p) Separation degree (distance ratio) calculation example
  • The (A) photographed image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
  • the (B) output image is an example of an output image generated by the image-region unit physical quantity reliability correspondence processing execution unit 206 based on the (A) captured image.
  • the image area unit physical quantity reliability calculation unit 205 selects the in-focus subject area and a part of the background area as areas having highly reliable distance information according to the reliability of the distance information for each image area, A distance ratio between these regions is calculated and displayed as separation degree (distance ratio) data superimposed on the captured image.
  • A calculation example of the separation degree (distance ratio) data is shown in FIG. 14(p), the separation degree (distance ratio) calculation example.
  • Let a be the distance from the camera to the in-focus subject, and let b be the distance from the camera to the background area. The degree of separation (distance ratio) between the in-focus subject and the background area is then b/a.
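  • With hypothetical distances (a = 2 m, b = 6 m), the separation degree defined above is simply b/a; restricting the calculation to areas whose distance reliability clears a threshold mirrors the selection step described for this embodiment. Function names, distances, and the reliability threshold are all illustrative assumptions.

```python
def separation_degree(a, b):
    """Degree of separation (distance ratio) b/a between the in-focus
    subject at distance a and a background area at distance b."""
    return b / a

def reliable_separation(area_data, min_reliability):
    """From (distance, reliability) pairs per image area, keep only the
    areas whose distance reliability meets the threshold, then report
    the farthest-to-nearest ratio among the reliable distances."""
    reliable = [d for d, rel in area_data if rel >= min_reliability]
    return max(reliable) / min(reliable)

# a = 2.0 m to the in-focus subject, b = 6.0 m to the background.
ratio = separation_degree(2.0, 6.0)            # -> 3.0
# An unreliable 10 m reading (reliability 0.2) is excluded from the ratio.
robust = reliable_separation([(2.0, 0.9), (6.0, 0.95), (10.0, 0.2)], 0.5)
```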
  • The separation degree (distance ratio) data calculated in this way is displayed in the upper left of the output image in FIG. 14(B).
  • This (B) output image is output to the monitor 117 of the imaging apparatus 100, for example.
  • the user can easily check the degree of separation (distance ratio) between the regions whose distances are calculated correctly.
  • As described above, the second embodiment calculates the reliability of the distance information for each image area, and generates and outputs an image in which data enabling confirmation of the degree of separation (distance ratio) between areas having highly reliable distance information is superimposed on the captured image. By viewing the separation degree (distance ratio) data superimposed on the captured image, the user can easily confirm the degree of separation (distance ratio) between regions whose distances are correctly calculated.
  • the data processing may be performed by an external device different from the imaging device 100 .
  • the image signal processing unit 212 can be configured outside the digital signal processing unit 108 .
  • the image signal processing may be performed by an external device other than the imaging device 100, such as a PC.
  • (Example 3) Example of generating and outputting an image in which mask stability information is superimposed on a photographed image
  • As Example 3, an example of generating and outputting an image in which mask stability information is superimposed on a photographed image will be described.
  • For example, mask processing may be performed to set a background area other than a specific subject selected from a captured image to a uniform color such as a green or blue background.
  • FIG. 15 is a block diagram for explaining the configuration of the third embodiment.
  • FIG. 15 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • The digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 15 selects only the phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 15 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • the image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
  • In this embodiment, the image region unit physical quantity reliability correspondence processing execution unit 206 further performs image control processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
  • a process of superimposing and outputting mask stability information according to the reliability of the defocus amount is performed on an image (for example, an RGB image) generated by the image signal processing unit 212 .
  • An image (for example, an RGB image) generated by the image signal processing unit 212, or such an image as changed or processed by the image region unit physical quantity reliability correspondence processing execution unit 206, is output to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
  • The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
  • The defocus amount calculation unit 202 calculates the defocus amount (DF), that is, the amount of deviation between the in-focus distance and the object distance, for each minute image area unit, for example, an image area unit composed of a plurality of pixels.
  • The defocus amount of the focus lens is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as a focus detection sensor.
  • Based on the defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject specified by the user.
  • the subject to be set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user such as a person. Images of other objects, such as the background, are out of focus and appear blurred.
  • The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance, in units of minute image areas such as n × m pixels; a defocus amount is thus calculated for each such image area unit.
  • The defocus map generation unit 204 receives the defocus amount for each minute image area of the captured image calculated by the defocus amount calculation unit 202, and generates, based on the defocus amount for each image area, a defocus map in which the defocus amount of each image area can be identified.
  • a defocus map as described above with reference to FIG. 8B is generated.
  • the defocus map generation unit 204 generates a defocus map as shown in FIG. 8B, for example.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area. As in the first embodiment, it receives the defocus amount for each image region calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and calculates, based on these input data, the reliability of the defocus amount calculated for each image area.
  • various methods can be applied as the method of calculating the reliability of the defocus amount for each image area.
  • a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, the edge detection result, or the like can be used.
  • It is also possible to calculate the defocus amount reliability for each image area using the method based on (Equation 1) described in the first embodiment, that is, using the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation.
  • the defocus amount reliability calculated in the third embodiment is also the reliability of the defocus amount for each pixel area, as described in the first embodiment.
  • The image area unit physical quantity reliability calculation unit 205 uses the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount for each image area unit corresponding to one rectangular area in the defocus map shown in FIG. 8B, to calculate the reliability of the defocus amount for each image area.
  • For example, the image region unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image region calculated by the defocus amount calculation unit 202, using the cross-correlation function of the two waveforms of the parallax data used to calculate the defocus amount.
  • the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
  • The image area unit physical quantity reliability correspondence processing execution unit 206 performs image control processing on the captured image (for example, an RGB image) generated by the image signal processing unit 212, according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205. Specifically, mask stability information corresponding to the reliability of the defocus amount for each image area is generated, superimposed on the captured image, and output.
  • the mask stability information is information indicating whether or not it is possible to set the mask accurately when setting the mask on an area other than the selected subject of the captured image, for example, the background area.
  • Mask processing is performed to set a background region other than a specific subject selected from the photographed image to a uniform color such as a green or blue background. By compositing another image, such as a new background image, into the green or blue background region of the mask-processed image, it is possible to generate a composite image in which the selected specific subject is displayed on the new background image.
  • the present embodiment is an embodiment in which mask stability information indicating whether or not highly accurate mask setting is possible is generated and superimposed on the captured image and output when such mask processing is performed.
  • The image region unit physical quantity reliability correspondence processing execution unit 206 generates mask stability information according to the reliability of the defocus amount for each image region calculated by the image region unit physical quantity reliability calculation unit 205, superimposes it on the captured image, and outputs the result. A specific example of this processing will be described with reference to FIG. 16 and subsequent drawings.
  • the photographed image in FIG. 16A is a photographed image in which a person and a house are set as focused subjects.
  • the defocus amount of this in-focus object is almost zero.
  • a mask process is performed to set the background area other than the in-focus object area as a mask area to a uniform color such as green background or blue background.
  • (B) A mask setting image is generated by this mask setting process.
  • When the unmasked area is set as an in-focus area such as the person and house, and the background area other than the in-focus area is set as the mask area, the following processing is possible as a specific mask area determination process.
  • An area that satisfies this determination formula a is determined as an in-focus area and determined as an area not to be masked.
  • an area that does not satisfy the determination formula a is determined as an out-of-focus area and determined as a mask area to be masked.
  • a mask area can be determined by such processing.
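  • (Determination formula a) is not spelled out in this excerpt; a common criterion of this kind treats an image area as in focus when the magnitude of its defocus amount is at or below a small threshold. The sketch below uses that assumed criterion; the function name, defocus values, and threshold are hypothetical.

```python
def determine_mask_areas(defocus_map, focus_threshold):
    """Assumed stand-in for (determination formula a).

    An area with |defocus amount| <= focus_threshold is treated as in
    focus (not masked); every other area is marked True, meaning it
    belongs to the mask (e.g. painted a uniform green or blue).
    """
    return [[abs(d) > focus_threshold for d in row] for row in defocus_map]

# Hypothetical defocus amounts: near zero on the subject, larger on background.
defocus_map = [
    [0.0, 0.1, 3.2],
    [0.2, 0.0, 4.5],
]
mask = determine_mask_areas(defocus_map, focus_threshold=0.5)
# mask -> [[False, False, True], [False, False, True]]
```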
  • However, for an image area where the reliability of the defocus amount is low, the mask area cannot be determined accurately even if the mask area determination process using the above (determination formula a) is performed.
  • The present embodiment generates mask stability information indicating whether or not highly accurate mask setting is possible, superimposes it on the photographed image, and outputs it, thereby making it possible to notify the user whether or not stable mask area setting is possible.
  • The image region unit physical quantity reliability correspondence processing execution unit 206 generates mask stability information according to the reliability of the defocus amount for each image region calculated by the image region unit physical quantity reliability calculation unit 205, superimposes it on the captured image, and outputs the result.
  • a specific processing example will be described with reference to FIG. 17 and the subsequent drawings.
  • FIG. 17 shows the following data.
  • (A) Captured image
  • (B) Defocus map
  • (C) Output image
  • The (A) captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
  • The (B) defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a map in which the defocus map generation unit 204 outputs, as luminance values (for example, 0 to 255), the defocus amount for each image area calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
  • the (C) output image is an example of the output image generated by the image region unit physical quantity reliability correspondence processing execution unit 206 based on the (A) captured image.
  • (C) Mask stability information is displayed in the upper right of the output image.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area in the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
  • The image region unit physical quantity reliability correspondence processing execution unit 206 extracts defocus amount low-reliability regions, for example as shown in FIG. 17(B).
  • Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount low-reliability areas. For example, using a predefined low-reliability threshold Th1, an image region that satisfies (Determination formula 1) defocus amount reliability < Th1 is extracted as a defocus amount low-reliability region.
  • the defocus map shown in FIG. 17B includes a defocus amount low reliability region.
  • In this case, the image area unit physical quantity reliability correspondence processing execution unit 206 determines that stable mask area determination processing is difficult. According to this determination, it generates and outputs an output image in which mask stability information indicating "mask instability" is superimposed on the photographed image, as shown in FIG. 17(C).
  • FIG. 17(C) is an example of an output image in which mask stability information "mask instability", indicating that stable mask region determination is difficult, is superimposed on the captured image.
  • This (C) output image is output to the monitor 117 of the imaging apparatus 100, for example.
  • the user can see the image output to the monitor 117 and recognize that stable mask setting is difficult.
  • FIG. 18 also shows the following data.
  • (A) Captured image
  • (B) Defocus map
  • (C) Output image
  • The (A) captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
  • The (B) defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a map in which the defocus map generation unit 204 outputs, as luminance values (for example, 0 to 255), the defocus amount for each image area calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
  • the (C) output image is an example of the output image generated by the image region unit physical quantity reliability correspondence processing execution unit 206 based on the (A) captured image.
  • (C) Mask stability information is displayed in the upper right of the output image.
• The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area in the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
• The example shown in FIG. 18 is an example in which no defocus amount low-reliability area is detected from the defocus map shown in FIG. 18(B). That is, as explained earlier, it is an example of a case where no image area satisfying
(Decision formula 1) Defocus amount reliability < Th1
is detected.
  • the defocus map shown in FIG. 18B does not include the defocus amount low reliability region.
  • the image region unit physical quantity reliability handling processing execution unit 206 determines that stable mask region determination processing is possible. Further, according to this determination, an output image is generated and output by superimposing mask stability information indicating "mask stability" on the captured image as shown in FIG. 18(C).
  • This (C) output image is output to the monitor 117 of the imaging apparatus 100, for example.
  • the user can see the image output to the monitor 117 and recognize that stable mask setting is possible.
• The examples of mask stability information shown in FIGS. 17 and 18 are only examples; the mask stability information can be displayed using various characters, symbols, icons, icons of different colors, and the like.
• The digital signal processing unit 108 described above has a configuration that includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image region unit physical quantity reliability calculation unit 205, an image region unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213, that is, a setting having all of these components.
  • this configuration is an example.
  • the data processing may be performed by an external device different from the imaging device 100 .
  • the image signal processing unit 212 can be configured outside the digital signal processing unit 108 .
  • the image signal processing may be performed by an external device other than the imaging device 100, such as a PC.
• Example 4: An embodiment of generating and outputting an image in which a color map that outputs a color corresponding to the defocus amount for each image area is superimposed on a captured image.
• In Example 4, an example will be described in which an image is generated and output by superimposing, on a photographed image, a color map that outputs a color corresponding to the defocus amount for each image area.
  • a photographed image includes subjects having various defocus amounts, such as a focused subject whose defocus amount is almost 0, and a background subject whose defocus amount is large.
  • This embodiment is an embodiment for generating and outputting an output image in which a color corresponding to the defocus amount is set for a photographed image.
  • FIG. 19 is a block diagram for explaining the configuration of the fourth embodiment.
  • FIG. 19 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image region unit physical quantity reliability calculation unit. 205 , an image area unit physical quantity reliability correspondence processing execution unit 206 , an image information acquisition unit 211 , an image signal processing unit 212 , and an image output unit 213 .
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 19 selects only the phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 19 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • the image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
• An image (for example, an RGB image) generated by the image signal processing unit 212 is further subjected to image control processing by the image region unit physical quantity reliability correspondence processing execution unit 206.
• Specifically, a process is performed to generate and output an image in which a color map outputting colors according to the defocus amount for each image area is superimposed on the image (for example, an RGB image) generated by the image signal processing unit 212.
• An image (for example, an RGB image) generated by the image signal processing unit 212, or an image obtained by the image region unit physical quantity reliability correspondence processing execution unit 206 changing or processing the image generated by the image signal processing unit 212, is output to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
• The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information) that is the output of the phase difference detection pixels from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
• The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the object distance (defocus amount (DF)), for each minute image area unit, for example, an image area unit composed of a plurality of pixels.
• Specifically, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels that function as a focus detection sensor.
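The shift-amount computation between the paired phase difference pixel outputs can be illustrated with a simplified cross-correlation peak search; the waveforms below and the sensor-specific conversion from shift to defocus amount are assumptions, not taken from the source:

```python
import numpy as np

def phase_shift(wave_a, wave_b):
    """Estimate the relative shift between the two phase difference
    detection waveforms from the peak of their cross-correlation."""
    corr = np.correlate(wave_a, wave_b, mode="full")
    # index (len(wave_b) - 1) corresponds to zero lag in 'full' mode
    return int(np.argmax(corr)) - (len(wave_b) - 1)

# Hypothetical pair of waveforms; the second is the first displaced by 2 samples
a = np.array([0., 0., 1., 3., 1., 0., 0., 0.])
shift = phase_shift(a, np.roll(a, 2))
# The defocus amount would then be obtained from the shift magnitude
# via a sensor-specific conversion factor (not given in the source).
```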
• The AF control signal generation unit 203 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for a subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
• The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject specified by the user.
  • the subject to be set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user such as a person. Images of other objects, such as the background, are out of focus and appear blurred.
• As described above, the defocus amount calculation unit 202 calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, in units of minute image areas such as n × m pixels.
• The defocus map generation unit 204 receives the defocus amount for each minute image area of the captured image calculated by the defocus amount calculation unit 202, and based on it generates a defocus map in which the defocus amount of each image area can be identified.
• That is, the defocus map generation unit 204 generates a defocus map as described above with reference to FIG. 8B, for example.
• The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area. As in the first embodiment, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and based on these input data calculates the reliability of the defocus amount calculated for each image area.
  • various methods can be applied as the method of calculating the reliability of the defocus amount for each image area.
  • a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, the edge detection result, or the like can be used.
• A method is also possible in which the defocus amount reliability for each image area is calculated using (Equation 1) described in the first embodiment, that is, the cross-correlation function of the two parallax-data waveforms used for defocus amount calculation.
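(Equation 1) itself is not reproduced in this section; as a hedged illustration of the general idea, the peak normalized cross-correlation of the two parallax waveforms can serve as a confidence score. The specific formula below is an assumption for illustration, not the patent's equation:

```python
import numpy as np

def defocus_reliability(wave_a, wave_b):
    """Illustrative confidence score in [0, 1]: peak normalized
    cross-correlation of the two parallax waveforms."""
    a = wave_a - wave_a.mean()
    b = wave_b - wave_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # flat (textureless) waveform: no reliable match
    return float(np.max(np.correlate(a, b, mode="full")) / denom)

a = np.array([0., 1., 4., 1., 0.])
r_match = defocus_reliability(a, a)           # ~1.0 for identical waveforms
r_flat = defocus_reliability(a, np.zeros(5))  # 0.0 for a flat waveform
```

A low score of this kind would correspond to the low-reliability areas compared against Th1 elsewhere in the document.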
  • the defocus amount reliability calculated in the fourth embodiment is also the reliability of the defocus amount for each pixel area, as described in the first embodiment.
• The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, for each image area unit corresponding to one rectangular area of the defocus map.
• For example, the image region unit physical quantity reliability calculation unit 205 uses the cross-correlation function of the two parallax-data waveforms used to calculate the defocus amount to calculate the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202.
  • the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
• The image area unit physical quantity reliability correspondence processing execution unit 206 processes the captured image (for example, an RGB image) generated by the image signal processing unit 212 according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205. Specifically, it generates and outputs an image in which a color map outputting a color corresponding to the defocus amount for each image area is superimposed on the captured image. A specific example of this process will be described with reference to FIG. 20 and subsequent figures.
  • FIG. 20 shows the following data.
  • A Captured image
  • B Defocus map
  • C Defocus color map
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
• The defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a defocus map in which the defocus amount for each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
• The defocus color map is a map obtained by coloring the defocus map that outputs the defocus amount as a luminance value (for example, 0 to 255), with different colors set according to the defocus amount. For example, it is a color map in which yellow is set for areas where the defocus amount is zero to small, green for areas where the defocus amount is medium, and blue for areas where the defocus amount is large.
• As for the color setting, in addition to the three-level or five-level color setting described above, it is also possible to set the colors so that they change smoothly from yellow to blue according to the change in the defocus amount.
  • a setting may be made in which the color of the in-focus area where the defocus amount is almost 0 is not set and the color of the original photographed image is output as it is. Further, for example, by performing face recognition, semantic segmentation, or the like, identification of the subject object may be performed, and a specific color that is different only for a person's face area or a specific object may be output.
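The basic three-level coloring described above might look like the following sketch; the numeric thresholds and RGB triples are illustrative assumptions, not values from the source:

```python
import numpy as np

# Assumed RGB colors for the three defocus levels
YELLOW, GREEN, BLUE = (255, 255, 0), (0, 255, 0), (0, 0, 255)

def defocus_color_map(defocus_map, t_small=1.0, t_medium=3.0):
    """Yellow where |defocus| is 0-to-small, green where medium,
    blue where large (thresholds are hypothetical)."""
    mag = np.abs(defocus_map)
    out = np.empty(defocus_map.shape + (3,), dtype=np.uint8)
    out[mag <= t_small] = YELLOW
    out[(mag > t_small) & (mag <= t_medium)] = GREEN
    out[mag > t_medium] = BLUE
    return out

df = np.array([[0.2, 2.0], [5.0, 0.8]])
cmap = defocus_color_map(df)  # per-area RGB colors, shape (2, 2, 3)
```

The smooth yellow-to-blue variant mentioned above would replace the three masks with an interpolation over the defocus magnitude.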
• The image area unit physical quantity reliability correspondence processing execution unit 206 first generates a color map that outputs colors according to the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205. Furthermore, the image area unit physical quantity reliability correspondence processing execution unit 206 generates and outputs an output image in which the generated color map is superimposed on the captured image. A specific example of this processing will be described with reference to FIG. 21.
  • FIG. 21 shows the following data.
  • A Photographed image
• C Defocus color map
  • D Output image
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
• The defocus color map is a map obtained by coloring the defocus map that outputs the defocus amount as a luminance value (for example, 0 to 255), with different colors set according to the defocus amount. For example, it is a color map in which yellow is set for areas where the defocus amount is zero to small, green for areas where the defocus amount is medium, and blue for areas where the defocus amount is large.
• The output image is an output image generated by superimposing (C) the defocus color map on (A) the captured image.
  • the image region unit physical quantity reliability correspondence processing execution unit 206 generates and outputs an output image in which the generated color map is superimposed on the captured image.
• This (D) output image is output to the monitor 117 of the imaging apparatus 100, for example. By viewing the image output to the monitor 117, the user can easily and reliably determine the defocus amount of each area of the captured image.
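The superimposition step itself is not detailed in the source; a simple alpha blend is one plausible sketch (the blend weight `alpha` and the sample pixel values are assumptions):

```python
import numpy as np

def superimpose(captured_rgb, color_map_rgb, alpha=0.5):
    """Alpha-blend the defocus color map onto the captured RGB image."""
    blended = (1.0 - alpha) * captured_rgb.astype(np.float32) \
              + alpha * color_map_rgb.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

img = np.full((2, 2, 3), 200, dtype=np.uint8)   # hypothetical captured image
cmap = np.zeros((2, 2, 3), dtype=np.uint8)      # hypothetical color map
cmap[..., 2] = 255                              # an all-blue map
out = superimpose(img, cmap)                    # e.g. out[0, 0] == [100, 100, 227]
```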
• The digital signal processing unit 108 described with reference to FIG. 19 has a configuration that includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image region unit physical quantity reliability calculation unit 205, an image region unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213, that is, a setting having all of these components.
  • this configuration is an example.
  • the data processing may be performed by an external device different from the imaging device 100 .
  • the image signal processing unit 212 can be configured outside the digital signal processing unit 108 .
  • the image signal processing may be performed by an external device other than the imaging device 100, such as a PC.
• In Example 5, an image is generated and output by superimposing, on the captured image, a color map that makes it possible to identify the reliability of the defocus amount for each image area.
  • an image is generated in which a color map is superimposed on the captured image so as to output a color corresponding to the defocus amount for each image area, and to make it possible to identify the reliability of the defocus amount for each image area.
• The fifth embodiment is a modification of the fourth embodiment described above, and is an example in which the color map described in the fourth embodiment is changed to a color map capable of identifying the defocus amount reliability described in the first embodiment.
  • FIG. 22 is a block diagram for explaining the configuration of the fifth embodiment.
  • FIG. 22 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit. 205 , an image area unit physical quantity reliability correspondence processing execution unit 206 , an image information acquisition unit 211 , an image signal processing unit 212 , and an image output unit 213 .
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 22 selects only the phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 22 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • the image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
• An image (for example, an RGB image) generated by the image signal processing unit 212 is further subjected to image control processing by the image region unit physical quantity reliability correspondence processing execution unit 206.
• Specifically, a process is performed to generate and output an image on which a color map is superimposed that outputs a color corresponding to the defocus amount for each image area and that makes it possible to identify the reliability of the defocus amount for each image area.
• An image (for example, an RGB image) generated by the image signal processing unit 212, or an image obtained by the image region unit physical quantity reliability correspondence processing execution unit 206 changing or processing the image generated by the image signal processing unit 212, is output to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
• The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information) that is the output of the phase difference detection pixels from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
• The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the object distance (defocus amount (DF)), for each minute image area unit, for example, an image area unit composed of a plurality of pixels.
• Specifically, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels that function as a focus detection sensor.
• The AF control signal generation unit 203 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for a subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
• The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject specified by the user.
  • the subject to be set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user such as a person. Images of other objects, such as the background, are out of focus and appear blurred.
• As described above, the defocus amount calculation unit 202 calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, in units of minute image areas such as n × m pixels.
• The defocus map generation unit 204 receives the defocus amount for each minute image area of the captured image calculated by the defocus amount calculation unit 202, and based on it generates a defocus map in which the defocus amount of each image area can be identified.
• That is, the defocus map generation unit 204 generates a defocus map as described above with reference to FIG. 8B, for example.
• The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area. As in the first embodiment, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and based on these input data calculates the reliability of the defocus amount calculated for each image area.
  • various methods can be applied as the method of calculating the reliability of the defocus amount for each image area.
  • a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, the edge detection result, or the like can be used.
• A method is also possible in which the defocus amount reliability for each image area is calculated using (Equation 1) described in the first embodiment, that is, the cross-correlation function of the two parallax-data waveforms used for defocus amount calculation.
  • the defocus amount reliability calculated in the fifth embodiment is also the reliability of the defocus amount for each pixel area, as described in the first embodiment.
• The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, for each image area unit corresponding to one rectangular area of the defocus map.
• For example, the image region unit physical quantity reliability calculation unit 205 uses the cross-correlation function of the two parallax-data waveforms used to calculate the defocus amount to calculate the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202.
  • the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
• The image area unit physical quantity reliability correspondence processing execution unit 206 processes the captured image (for example, an RGB image) generated by the image signal processing unit 212 according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205. Specifically, it generates and outputs an image in which a color map is superimposed on the captured image that outputs a color corresponding to the defocus amount for each image area and that makes it possible to identify the reliability of the defocus amount for each image area. A specific example of this process will be described with reference to FIG. 23 and subsequent figures.
  • FIG. 23 shows the following data.
  • A Captured image
  • B Defocus map
  • C Defocus color map
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
• The defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a defocus map in which the defocus amount for each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
  • the (B) defocus map shown in FIG. 23 includes a defocus amount low reliability region.
  • the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area in the defocus map generated by the defocus map generation unit 204, and uses the calculated reliability data. It is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
• Based on this reliability data, the image region unit physical quantity reliability correspondence processing execution unit 206 extracts defocus amount low-reliability areas, for example as follows.
• The defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount low-reliability areas. For example, using a predefined low-reliability threshold Th1:
(Decision formula 1) Defocus amount reliability < Th1
An image area that satisfies decision formula 1 above is extracted as a defocus amount low-reliability area.
  • the defocus map shown in FIG. 23B includes a defocus amount low reliability region.
  • the image area unit physical quantity reliability correspondence processing execution unit 206 generates a defocus color map that enables identification of the defocus amount low reliability area.
  • the defocus color map in FIG. 23(C) is a map obtained by coloring the defocus map outputting the defocus amount as a luminance value (for example, 0 to 255), and different colors are set according to the defocus amount. Further, it is a color map in which a specific color is set in the defocus amount low-reliability area.
  • the area where the defocus amount is 0 to small is set in yellow
  • the area where the defocus amount is medium is set in green
  • the area where the defocus amount is large is set in blue
  • the defocus amount low reliability area is set in red.
• It is a color map with the color settings listed above.
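The red override for low-reliability areas listed above could be sketched as follows; the thresholds and RGB values are again hypothetical assumptions:

```python
import numpy as np

YELLOW, GREEN, BLUE = (255, 255, 0), (0, 255, 0), (0, 0, 255)
RED = (255, 0, 0)  # assumed color for defocus amount low-reliability areas

def defocus_color_map_with_reliability(defocus_map, low_reliability_mask,
                                       t_small=1.0, t_medium=3.0):
    """Color each area by defocus amount, then overwrite the
    low-reliability areas in red so they remain identifiable."""
    mag = np.abs(defocus_map)
    out = np.empty(defocus_map.shape + (3,), dtype=np.uint8)
    out[mag <= t_small] = YELLOW
    out[(mag > t_small) & (mag <= t_medium)] = GREEN
    out[mag > t_medium] = BLUE
    out[low_reliability_mask] = RED  # reliability overrides the defocus color
    return out

df = np.array([[0.2, 5.0]])
low = np.array([[False, True]])
cmap = defocus_color_map_with_reliability(df, low)  # yellow, red
```

The red assignment is applied last, so reliability always takes precedence over the defocus-based color, matching the identifiable-low-reliability requirement stated above.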
  • Various settings are possible for the color setting.
• The image region unit physical quantity reliability correspondence processing execution unit 206 first generates a color map in which a color is set according to the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205, with the defocus amount low-reliability areas set to red. Furthermore, the image area unit physical quantity reliability correspondence processing execution unit 206 generates and outputs an output image in which the generated color map is superimposed on the captured image. A specific example of this processing will be described with reference to FIG. 24.
  • FIG. 24 shows the following data.
  • A Photographed image
• C Defocus color map
  • D Output image
  • a captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
• The defocus color map is a map in which a different color is set according to the defocus amount, relative to the defocus map that outputs the defocus amount as a luminance value (for example, 0 to 255), and in which the defocus amount low-reliability areas are set to an identifiable specific color (for example, red).
• For example, it is a color map in which yellow is set for areas where the defocus amount is zero to small, green for areas with a medium defocus amount, blue for areas with a large defocus amount, and red for the defocus amount low-reliability areas.
  • the output image is an output image generated by superimposing (C) the defocus color map on (A) the captured image.
  • the image region unit physical quantity reliability correspondence processing execution unit 206 generates and outputs an output image in which the generated color map is superimposed on the captured image.
  • This (D) output image is output to the monitor 117 of the imaging apparatus 100, for example.
• By viewing the image output to the monitor 117, the user can easily and reliably determine the defocus amount of each area of the captured image, and can also reliably recognize the defocus amount low-reliability areas.
• The digital signal processing unit 108 described with reference to FIG. 22 has a configuration that includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image region unit physical quantity reliability calculation unit 205, an image region unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213, that is, a setting having all of these components.
  • this configuration is an example.
  • the data processing may be performed by an external device different from the imaging device 100 .
  • the image signal processing unit 212 can be configured outside the digital signal processing unit 108 .
  • the image signal processing may be performed by an external device other than the imaging device 100, such as a PC.
• Example 6: An embodiment in which images are captured while controlling the exposure time according to the reliability of the defocus amount for each image area.
  • An image area where the reliability of the defocus amount is low is often an area where the subject distance is not accurately measured, and is likely an image area where the S/N ratio of the captured image is poor.
  • The sixth embodiment is an example in which a long exposure time is set for such image areas when capturing an image, thereby making it possible to capture a high-quality image.
  • FIG. 25 is a block diagram for explaining the configuration of the sixth embodiment.
  • FIG. 25 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • The digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
  • the digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • a phase difference information acquisition unit 201 shown in FIG. 25 selects only phase difference detection signals (detection information) that are outputs of phase difference detection pixels from the input signal from the A/D conversion unit 105 .
  • the image information acquisition unit 211 shown in FIG. 25 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
  • the image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
  • The image signal processing unit 212 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal, and outputs the processed image (for example, an RGB image) to the image output unit 213.
  • the image output unit 213 outputs the image input from the image signal processing unit 212 .
  • image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
  • In the sixth embodiment, the image region unit physical quantity reliability correspondence processing execution unit 206 does not perform image processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
  • Instead, the image area unit physical quantity reliability correspondence processing execution unit 206 outputs a shooting control command to the control unit 110. Specifically, it outputs an exposure time control command for executing image capture with a long exposure time set for image regions where the reliability of the defocus amount is low.
  • Note that the exposure time during image capture can be changed on a pixel-by-pixel basis (see, for example, Patent Document 4: Japanese Unexamined Patent Application Publication No. 2011-004088).
  • the phase difference information acquisition unit 201 selects a phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105, and obtains the selected phase difference detection signal (detection information). is output to the defocus amount calculation unit 202 .
  • The defocus amount calculation unit 202 calculates the defocus amount (DF), that is, the amount of deviation between the in-focus distance and the subject distance, for each minute image area unit, for example, an image area composed of a plurality of pixels.
  • Specifically, the defocus amount is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as focus detection sensors.
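The shift-amount computation described above can be illustrated with a simplified sketch: given the two 1-D waveforms produced by a pair of phase difference detection pixel rows, the shift is estimated as the integer lag that maximizes their normalized cross-correlation. The windowing, the integer-lag search range, and the use of the correlation peak as a confidence score are illustrative assumptions, not the patent's implementation:

```python
import math

def ncc(x, y):
    """Normalized cross-correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x) *
                    sum((yi - my) ** 2 for yi in y)) or 1e-12
    return num / den

def estimate_shift(a, b, max_shift=8):
    """Find the lag s maximizing correlation, under the model a[i] ~ b[i + s].

    Returns (shift, peak_correlation); the peak value can also serve as a
    rough reliability score for the resulting defocus amount.
    """
    n = len(a)
    best_s, best_c = 0, -2.0
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            x, y = a[:n - s], b[s:]
        else:
            x, y = a[-s:], b[:n + s]
        c = ncc(x, y)
        if c > best_c:
            best_s, best_c = s, c
    return best_s, best_c
```

The defocus amount would then be obtained by scaling the estimated shift by an optics-dependent conversion factor (not modeled here).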
  • Based on the defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject specified by the user.
  • The subject set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user, such as a person. Images of other subjects, such as the background, are out of focus and appear blurred.
  • The defocus amount calculation unit 202 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance, in units of minute image areas of, for example, n × m pixels over the entire captured image.
  • The defocus map generation unit 204 receives the fine-grained defocus amounts of the captured image calculated by the defocus amount calculation unit 202, and generates a defocus map in which the defocus amount of each image area can be identified.
  • That is, the defocus map generation unit 204 generates a defocus map as described above with reference to FIG. 8B, for example.
  • the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area. As in the first embodiment, the image region unit physical quantity reliability calculation unit 205 calculates the defocus amount for each image region calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204. Based on these input data, the reliability of the defocus amount calculated for each image area is calculated.
  • various methods can be applied as the method of calculating the reliability of the defocus amount for each image area.
  • a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, the edge detection result, or the like can be used.
  • It is also possible to calculate the defocus amount reliability for each image area using the method based on (Equation 1) described in the first embodiment, that is, the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation.
  • the defocus amount reliability calculated in the sixth embodiment is also the reliability of the defocus amount for each pixel area, as described in the first embodiment.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, for each image area unit corresponding to one rectangular area of the defocus map.
  • the image region unit physical quantity reliability calculation unit 205 uses, for example, the cross-correlation function of the two waveforms of the parallax data used to calculate the defocus amount, and the image region calculated by the defocus amount calculation unit 202. Calculate the reliability of the unit defocus amount.
  • the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
  • The image area unit physical quantity reliability correspondence processing execution unit 206 outputs a shooting control command to the control unit 110 according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205. Specifically, it outputs an exposure time control command for executing image capture with a long exposure time set for image regions where the reliability of the defocus amount is low. A specific example of this processing will be described with reference to FIG. 26.
  • FIG. 26 shows the following data.
  • (A) Captured image
  • (B) Defocus map
  • (C) Shooting control example (exposure time control example for each image area)
  • (A) The captured image is an image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211.
  • (B) The defocus map is generated by the defocus map generation unit 204. That is, it is a map that outputs, as a luminance value (for example, 0 to 255), the defocus amount for each image area calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
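The conversion of per-area defocus amounts to the 0 to 255 luminance values of the defocus map could look like the following sketch. The clipping range `d_max` and the use of the absolute defocus amount are assumptions for illustration; the specification only states that the defocus amount is output as a luminance value:

```python
def to_luminance(defocus_map, d_max):
    """Quantize |defocus| per image area to an 8-bit luminance value (0..255).

    d_max is an assumed clipping bound: areas at or beyond it map to 255.
    """
    return [[min(255, round(255 * min(abs(d), d_max) / d_max)) for d in row]
            for row in defocus_map]
```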
  • the (B) defocus map shown in FIG. 26 includes a defocus amount low-reliability region.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area of the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
  • Based on this reliability data, the image region unit physical quantity reliability correspondence processing execution unit 206 extracts defocus amount low-reliability regions, for example, as shown in FIG. 26.
  • Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount low-reliability areas. For example, using a predefined low reliability threshold Th1:
    (Decision formula 1) Defocus amount reliability ≤ Th1
    An image region that satisfies Decision formula 1 is extracted as a defocus amount low-reliability region.
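The extraction by the low-reliability threshold Th1 amounts to a simple threshold scan over the per-area reliability map; a minimal sketch (the map layout and the threshold value are illustrative):

```python
def low_reliability_areas(reliability_map, th1):
    """Return (row, col) indices of areas satisfying: reliability <= Th1."""
    return [(i, j)
            for i, row in enumerate(reliability_map)
            for j, r in enumerate(row)
            if r <= th1]
```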
  • the defocus map shown in FIG. 26B includes a defocus amount low reliability area.
  • As shown in (C) of FIG. 26, the image region unit physical quantity reliability correspondence processing execution unit 206 outputs an exposure time control command to the control unit 110, which performs exposure control during image capture. Specifically, it outputs an exposure time control command for setting the exposure time of image areas with a low-reliability defocus amount longer than the exposure time of the other image areas.
  • That is, in the (C) shooting control example (exposure time control example for each image area), an image area with a low-reliability defocus amount is specified, and an exposure time control command is output so that image capture is performed with the exposure time of this specified area set longer than that of the other image areas.
  • the image area unit physical quantity reliability correspondence processing execution unit 206 executes image shooting control for image shooting by controlling the exposure time according to the reliability of the defocus amount in image area units.
  • an image area with a low degree of reliability of the defocus amount is often an area where the object distance is not accurately measured, and is likely an image area with a poor S/N of the captured image. For such an image area, it is possible to improve the S/N by setting a long exposure time.
  • As described above, the sixth embodiment makes it possible to capture a high-quality image by setting a long exposure time for image areas where the reliability of the defocus amount is low.
  • the sixth embodiment described with reference to FIGS. 25 and 26 is an embodiment in which photographing control is performed to execute image photographing by setting a long exposure time for an image region with a low degree of reliability of the defocus amount.
  • In addition, the following shooting control configurations are also possible:
  • (Modification 1) A configuration in which a short exposure time is set for image regions where the reliability of the defocus amount is high.
  • (Modification 2) A configuration in which a long exposure time is set for image regions where the reliability of the defocus amount is low, and a short exposure time is set for image regions where the reliability is high.
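The per-area exposure policy of Modification 2 (which subsumes the sixth embodiment and Modification 1) can be sketched as a per-area lookup. The thresholds `th1`/`th2` and the concrete exposure times are illustrative assumptions:

```python
def exposure_times(reliability_map, th1, th2, t_base, t_long, t_short):
    """Per-area exposure times per Modification 2:
    long exposure for low reliability (<= th1),
    short exposure for high reliability (>= th2),
    base exposure otherwise.
    """
    return [[t_long if r <= th1 else t_short if r >= th2 else t_base
             for r in row]
            for row in reliability_map]
```

Setting `t_short == t_base` reproduces the sixth embodiment (long exposure for low-reliability areas only), and `t_long == t_base` reproduces Modification 1.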
  • FIG. 27 shows a configuration example of the digital signal processing unit 108 of the image pickup apparatus that performs the above (Modification 1) shooting control.
  • the configuration of digital signal processing section 108 shown in FIG. 27 has the same components as the configuration of digital signal processing section 108 shown in FIG. 25 described above. However, the process executed by the image area unit physical quantity reliability corresponding process execution unit 206 is different.
  • The image area unit physical quantity reliability correspondence processing execution unit 206 of the digital signal processing unit 108 shown in FIG. 27 outputs an exposure time control command for executing image capture with a short exposure time set for image areas where the reliability of the defocus amount is high. A specific example of this processing will be described with reference to FIG. 28.
  • FIG. 28 shows the following data.
  • (A) Captured image
  • (B) Defocus map
  • (C) Shooting control example (exposure time control example for each image area)
  • (A) The captured image is an image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211.
  • (B) The defocus map is generated by the defocus map generation unit 204. That is, it is a map that outputs, as a luminance value (for example, 0 to 255), the defocus amount for each image area calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
  • the (B) defocus map shown in FIG. 28 includes a defocus amount high reliability region.
  • The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area of the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
  • Based on this reliability data, the image region unit physical quantity reliability correspondence processing execution unit 206 extracts defocus amount high-reliability regions, for example, as shown in FIG. 28.
  • Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount high-reliability areas. For example, using a predefined high reliability threshold Th2:
    (Decision formula 2) Th2 ≤ Defocus amount reliability
    An image region that satisfies Decision formula 2 is extracted as a defocus amount high-reliability region.
  • the defocus map shown in FIG. 28B includes a defocus amount high reliability area.
  • As shown in (C) of FIG. 28, the image region unit physical quantity reliability correspondence processing execution unit 206 outputs an exposure time control command to the control unit 110, which performs exposure control during image capture. Specifically, it outputs an exposure time control command for setting the exposure time of image areas with a high-reliability defocus amount shorter than the exposure time of the other image areas.
  • That is, in the (C) shooting control example (exposure time control example for each image area), an image area with a high-reliability defocus amount is specified, and an exposure time control command is output so that image capture is performed with the exposure time of this specified area set shorter than that of the other image areas.
  • FIG. 29 shows a configuration example of the digital signal processing unit 108 of an imaging apparatus that performs the above (Modification 2), that is, shooting control in which a long exposure time is set for image areas where the reliability of the defocus amount is low and a short exposure time is set for image areas where the reliability is high.
  • the configuration of digital signal processing section 108 shown in FIG. 29 has the same components as the configuration of digital signal processing section 108 shown in FIG. 25 described above. However, the process executed by the image area unit physical quantity reliability corresponding process execution unit 206 is different.
  • The image area unit physical quantity reliability correspondence processing execution unit 206 of the digital signal processing unit 108 shown in FIG. 29 outputs an exposure time control command for executing image capture with a long exposure time set for low-reliability image areas and a short exposure time set for high-reliability image areas.
  • As described above, the sixth embodiment makes it possible to capture a high-quality image by controlling the exposure time for each image area according to the reliability of the defocus amount.
  • The embodiments described above include the following:
  • an embodiment that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified (Embodiment 1);
  • an embodiment that displays, on the captured image, the distance ratio between the in-focus subject and other background subjects (Embodiment 2);
  • an embodiment that generates and outputs an image in which mask stability information is superimposed on the captured image;
  • an embodiment that generates and outputs an image in which a color map outputting colors corresponding to the defocus amount for each image area is superimposed on the captured image;
  • an embodiment that captures images while controlling the exposure time according to the reliability of the defocus amount for each image area (Embodiment 6).
  • Each of these six embodiments can be configured independently, but can also be configured as a device or system having a combination configuration of any of a plurality of embodiments.
  • In the above embodiments, the following processing was described as the calculation of the reliability of the distance value for each image region executed by the image area unit physical quantity reliability calculation unit 205: for an image area where the reliability of the defocus amount is low, the reliability of the distance value calculated based on the defocus amount is also determined to be low, and for an image area where the reliability of the defocus amount is high, the reliability of the distance value calculated based on the defocus amount is also determined to be high.
  • the following processing may be performed. That is, the error amount of the matching process of the parallax map generated from the captured image of the stereo camera is calculated for each image area, and the distance value reliability for each image area is calculated based on the calculated error amount for each image area. In this process, it is determined that an image area with a large error amount per image area has a low distance value reliability, and an image area with a small error amount per image area has a high distance value reliability.
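The stereo-based alternative described above can be sketched with a sum-of-absolute-differences (SAD) block comparison as the per-region matching error. Both the SAD measure and the 1/(1 + error) mapping to a reliability score are illustrative choices, not taken from the specification:

```python
def block_sad(left, right, disparity, top, left_col, size):
    """Matching error for one image area: sum of absolute differences
    between a size x size block of `left` and the disparity-shifted
    block of `right` (2-D lists of pixel intensities)."""
    err = 0.0
    for i in range(top, top + size):
        for j in range(left_col, left_col + size):
            err += abs(left[i][j] - right[i][j - disparity])
    return err

def distance_reliability(err, scale=1.0):
    """Monotone mapping: a large matching error yields a low reliability."""
    return 1.0 / (1.0 + err / scale)
```

An area whose best-disparity SAD is large would thus be judged to have a low distance value reliability, and vice versa.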
  • The processing of the first to sixth embodiments described above may be executed continuously while the imaging apparatus 100 operates, or may be executed in response to a specific user operation on the input unit (operation unit) 118. The processing may also be stopped when the focus position changes significantly, or when it is determined from the detection value of the gyro 131 that the imaging apparatus 100 has moved greatly. Conversely, such a change in focus position or a large movement of the imaging apparatus 100 may instead trigger new processing.
  • the technique disclosed in this specification can take the following configurations.
  • An imaging apparatus comprising: an image region physical quantity calculation unit that calculates a physical quantity that changes according to the subject distance for each image region, which is a segmented region of a captured image; an image area unit physical quantity reliability calculation unit that calculates the reliability of the image-area-unit physical quantity calculated by the image region physical quantity calculation unit; and an image area unit physical quantity reliability correspondence processing execution unit that executes control processing according to the reliability of the physical quantity in image area units calculated by the image area unit physical quantity reliability calculation unit.
  • The imaging apparatus according to (1), wherein the image region physical quantity calculation unit is a defocus amount calculation unit that calculates a defocus amount for each image area, and the image area unit physical quantity reliability calculation unit calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit.
  • The imaging apparatus according to (3) or (4), wherein the image area unit physical quantity reliability correspondence processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined high reliability threshold value Th2, and displays, superimposed on the captured image, graphic data that makes it possible to identify image areas whose reliability of the defocus amount is equal to or higher than the high reliability threshold value Th2.
  • The imaging apparatus according to (6), wherein the image area unit physical quantity reliability correspondence processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low reliability threshold value Th1, and, when an image area equal to or lower than the low-reliability threshold value Th1 is detected in the captured image, displays on the captured image mask stability information indicating that highly accurate mask setting is difficult.
  • The imaging apparatus according to (6) or (7), wherein the image area unit physical quantity reliability correspondence processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low reliability threshold value Th1, and, when no image area equal to or lower than the low-reliability threshold value Th1 is detected in the captured image, displays on the captured image mask stability information indicating that highly accurate mask setting is possible.
  • The imaging apparatus according to (11), wherein the image area unit physical quantity reliability correspondence processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low reliability threshold value Th1, and performs exposure time control such that the exposure time of image areas whose reliability of the defocus amount is equal to or lower than the low reliability threshold value Th1 is longer than that of other image areas.
  • The imaging apparatus according to (11) or (12), wherein the image area unit physical quantity reliability correspondence processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined high reliability threshold value Th2, and performs exposure time control such that the exposure time of image areas whose reliability of the defocus amount is equal to or higher than the high reliability threshold value Th2 is shorter than that of other image areas.
  • The imaging apparatus according to any one of (1) to (13), wherein the image region physical quantity calculation unit is a distance information calculation unit that calculates a distance value for each image area, and the image area unit physical quantity reliability calculation unit calculates the reliability of the distance information for each image area calculated by the distance information calculation unit.
  • The imaging apparatus according to (14), wherein the image area unit physical quantity reliability correspondence processing execution unit displays the distance ratios of subjects at a plurality of different distances.
  • The imaging apparatus according to (14) or (15), wherein the image area unit physical quantity reliability correspondence processing execution unit selects image regions in which the reliability of the distance information in image area units calculated by the image area unit physical quantity reliability calculation unit is equal to or higher than a predetermined threshold value, calculates the distance ratio between subjects in the selected image regions, and displays it on the captured image.
  • An image processing method executed in an imaging apparatus, the method including: an image region physical quantity calculation step in which an image region physical quantity calculation unit calculates a physical quantity that changes according to the subject distance for each image region, which is a segmented region of a captured image; an image region unit physical quantity reliability calculation step in which an image region unit physical quantity reliability calculation unit calculates the reliability of the image-region-unit physical quantity calculated in the image region physical quantity calculation step; and an image area unit physical quantity reliability correspondence processing execution step in which an image area unit physical quantity reliability correspondence processing execution unit executes control processing according to the reliability calculated in the image region unit physical quantity reliability calculation step.
  • A program for causing an imaging apparatus to execute image processing, the program causing: an image region physical quantity calculation unit to execute an image region physical quantity calculation step of calculating a physical quantity that changes according to the subject distance for each image region, which is a segmented region of a captured image; an image region unit physical quantity reliability calculation unit to execute an image region unit physical quantity reliability calculation step of calculating the reliability of the image-region-unit physical quantity calculated in the image region physical quantity calculation step; and an image area unit physical quantity reliability correspondence processing execution unit to execute an image area unit physical quantity reliability correspondence processing execution step of executing control processing according to the reliability calculated in the image region unit physical quantity reliability calculation step.
  • A program recording the processing sequence can be installed in the memory of a computer built into dedicated hardware and executed, or can be installed and executed on a general-purpose computer capable of executing various kinds of processing.
  • the program can be pre-recorded on a recording medium.
  • the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed in a recording medium such as an internal hard disk.
  • a system is a logical collective configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same housing.
  • As described above, according to the configuration of an embodiment of the present disclosure, an apparatus and a method are realized that calculate the reliability of the defocus amount and the distance value for each image area of a captured image, and execute display processing of graphic data indicating the reliability, subject distance ratio display processing, exposure time control processing, and the like.
  • Specifically, the defocus amount and the distance value are calculated for each image area of the captured image, the reliability of the calculated defocus amount and distance value is calculated for each image area, and control is executed according to this reliability.
  • For example, processing is executed to superimpose and display graphic data indicating the reliability of the defocus amount for each image area on the captured image, to display the distance ratio of subjects, and to control the exposure time.
  • REFERENCE SIGNS LIST 100 imaging device 101 focus lens 102 zoom lens 103 imaging element 104 analog signal processing section 105 A/D conversion section 106 timing generator (TG) 107 vertical driver 108 digital signal processor (DSP) 110 control unit 112a AF control unit 112b zoom control unit 113 motor 115 recording device 116 viewfinder 117 monitor 118 input unit (operation unit) 122 image area 131 gyro 151 phase difference detection pixel 152 image area 201 phase difference information acquisition unit 202 defocus amount calculation unit 203 AF control signal generation unit 204 defocus map generation unit 205 image area unit physical quantity reliability calculation unit 206 image area unit Physical quantity reliability corresponding processing execution unit 211 Image information acquisition unit 212 Image signal processing unit 213 Image output unit 221 Distance information calculation unit


Abstract

Provided are a device and a method for calculating the reliability of a defocus amount and a distance value for each image area of a captured image, and executing a display process for graphic data indicating the reliability, a distance ratio display process of an object, an exposure time control process, and the like. The defocus amount and distance value for each image area of the captured image are calculated, the reliability of the calculated defocus amount and the distance value for each image area is calculated, and control is executed in accordance with the calculated reliability of the defocus amount and the distance value for each image area. For example, a process for superimposing and displaying graphic data indicating the reliability of the defocus amount for each image area on the captured image, a process for displaying the distance ratio of the object, a process for controlling the exposure time, and the like are executed.

Description

IMAGING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM
The present disclosure relates to an imaging device, an image processing method, and a program. More specifically, it relates to an imaging device, an image processing method, and a program that calculate the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value for each image area obtained from the detection information of image plane phase difference detection pixels, and perform various processes according to the calculated reliability.
In imaging devices (cameras), one method for detecting the focus position (in-focus position) is the image plane phase difference method, which uses image plane phase difference pixels.
In this method, the light that has passed through the imaging lens is pupil-divided to generate a pair of images, and the phase difference between the generated pair of images is analyzed to detect the focus position (in-focus position).
In focus control using the image plane phase difference method, the light flux that has passed through the imaging lens is split into two, and the two split light fluxes are each received by a pair of image plane phase difference detection pixels that function as focus detection sensors. The degree of focus is detected based on the shift amount between the signals output according to the amount of light received by each pixel of the pair, and the focus lens is adjusted accordingly.
The detection information of the image plane phase difference detection pixels is mainly used for focus control, but it can also be applied to processing other than focus control.
For example, Patent Document 1 (JP 2012-142952 A) discloses a configuration that applies blurring processing to a captured image using the detection information of image plane phase difference detection pixels.
Specifically, the detection information of the image plane phase difference detection pixels is used to analyze the distribution of the defocus amount in the captured image, and the analysis result is used to apply blurring processing to the captured image.
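As a toy illustration of the idea described in Patent Document 1 — blurring a captured image according to the analyzed defocus distribution — the following sketch applies a stronger box blur to image areas with a larger defocus magnitude. The function names, the coarse region grid, and the kernel-size rule are all hypothetical choices for demonstration, not the patent's actual processing:

```python
import numpy as np


def box_blur(img: np.ndarray, k: int) -> np.ndarray:
    """Naive k x k box blur with edge padding (k must be odd)."""
    if k <= 1:
        return img.astype(float).copy()
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)


def blur_by_defocus(img: np.ndarray, defocus_map: np.ndarray) -> np.ndarray:
    """Blur each image area in proportion to the magnitude of its defocus amount.

    `defocus_map` holds one defocus value per image area (a coarse grid).
    """
    h, w = img.shape
    gh, gw = defocus_map.shape
    rh, rw = h // gh, w // gw
    out = img.astype(float).copy()
    for gy in range(gh):
        for gx in range(gw):
            # Hypothetical rule: kernel size grows with |defocus|.
            k = 1 + 2 * int(round(abs(defocus_map[gy, gx])))
            if k > 1:
                blurred = box_blur(img, k)
                ys, xs = gy * rh, gx * rw
                out[ys:ys + rh, xs:xs + rw] = blurred[ys:ys + rh, xs:xs + rw]
    return out


img = np.zeros((8, 8))
img[:, 4:] = 1.0                  # sharp vertical edge
defocus = np.array([[0.0, 2.0],
                    [0.0, 2.0]])  # right-hand areas are out of focus
out = blur_by_defocus(img, defocus)
```

The in-focus (left) areas are passed through untouched, while the sharp edge inside the out-of-focus (right) areas is softened.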
Further, Patent Document 2 (JP 2019-035967 A) discloses a configuration that changes the gradation characteristics of the defocus amount and the distance information obtained from the detection information of the image plane phase difference detection pixels according to the output destination.
For example, it discloses a configuration that generates and outputs different information based on the detection information of the image plane phase difference detection pixels according to the information required by the output destination, such as an output destination that requires distance resolution near the in-focus position or an output destination that requires ranging-range information.
Furthermore, Patent Document 3 (JP 2011-053378 A) discloses a configuration that performs exposure control according to the defocus amount and the photometric value of each image area of a captured image.
Specifically, it discloses a configuration in which exposure control is performed in consideration of, for example, the photometric values of out-of-focus areas in addition to the photometric value of the focus detection area to be focused.
JP 2012-142952 A (Patent Document 1)
JP 2019-035967 A (Patent Document 2)
JP 2011-053378 A (Patent Document 3)
JP 2011-004088 A
An object of the present disclosure is to provide an imaging device, an image processing method, and a program that calculate the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value for each image area obtained from the detection information of image plane phase difference detection pixels, and perform various processes according to the calculated reliability.
For example, one embodiment configuration of the present disclosure realizes a configuration that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified.
Furthermore, one embodiment configuration of the present disclosure realizes a configuration that outputs the distance ratio between an in-focus subject and a background subject.
For example, one embodiment configuration of the present disclosure realizes a configuration that generates and outputs an image in which the stability of a mask set in a partial area of the captured image can be identified according to the reliability of the defocus amount for each image area.
Furthermore, one embodiment configuration of the present disclosure realizes a configuration that executes exposure control for each image area according to the reliability of the defocus amount for each image area.
A first aspect of the present disclosure is an imaging device including:
an image area physical quantity calculation unit that calculates, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
an image area unit physical quantity reliability calculation unit that calculates the reliability of the physical quantity for each image area calculated by the image area physical quantity calculation unit; and
an image area unit physical quantity reliability corresponding processing execution unit that executes control processing according to the reliability of the physical quantity for each image area calculated by the image area unit physical quantity reliability calculation unit.
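Purely as an illustration of how these three units relate to one another (not the patent's implementation — every class name, function, and formula below is a hypothetical placeholder), the pipeline can be sketched as:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class AreaPhysicalQuantity:
    """Per-image-area physical quantity (e.g. defocus amount) and its reliability."""
    value: np.ndarray        # one physical quantity per image area
    reliability: np.ndarray  # reliability score in [0, 1] per image area


def calc_area_quantity(detection: np.ndarray) -> np.ndarray:
    # Placeholder: a real device derives a defocus amount per area
    # from phase difference detection signals.
    return detection.mean(axis=-1)


def calc_reliability(detection: np.ndarray) -> np.ndarray:
    # Placeholder heuristic: low-contrast areas get low reliability.
    contrast = detection.std(axis=-1)
    return np.clip(contrast / (contrast.max() + 1e-9), 0.0, 1.0)


def reliability_dependent_processing(q: AreaPhysicalQuantity,
                                     threshold: float = 0.5) -> np.ndarray:
    # Placeholder control process: mark which areas are trustworthy enough
    # for display or exposure control.
    return q.reliability >= threshold


# 4x4 grid of image areas, 8 detection samples per area
rng = np.random.default_rng(0)
det = rng.normal(size=(4, 4, 8))
q = AreaPhysicalQuantity(calc_area_quantity(det), calc_reliability(det))
mask = reliability_dependent_processing(q)
print(mask.shape)  # (4, 4)
```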
Furthermore, a second aspect of the present disclosure is an image processing method executed in an imaging device, including:
an image area physical quantity calculation step in which an image area physical quantity calculation unit calculates, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
an image area unit physical quantity reliability calculation step in which an image area unit physical quantity reliability calculation unit calculates the reliability of the physical quantity for each image area calculated in the image area physical quantity calculation step; and
an image area unit physical quantity reliability corresponding processing execution step in which an image area unit physical quantity reliability corresponding processing execution unit executes control processing according to the reliability of the physical quantity for each image area calculated in the image area unit physical quantity reliability calculation step.
Furthermore, a third aspect of the present disclosure is a program that causes an imaging device to execute image processing, including:
an image area physical quantity calculation step that causes an image area physical quantity calculation unit to calculate, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
an image area unit physical quantity reliability calculation step that causes an image area unit physical quantity reliability calculation unit to calculate the reliability of the physical quantity for each image area calculated in the image area physical quantity calculation step; and
an image area unit physical quantity reliability corresponding processing execution step that causes an image area unit physical quantity reliability corresponding processing execution unit to execute control processing according to the reliability of the physical quantity for each image area calculated in the image area unit physical quantity reliability calculation step.
Note that the program of the present disclosure is, for example, a program that can be provided in a computer-readable format via a storage medium or a communication medium to an information processing device or a computer system capable of executing various program codes. By providing such a program in a computer-readable format, processing according to the program is realized on the information processing device or the computer system.
Still other objects, features, and advantages of the present disclosure will become apparent from the more detailed description based on the embodiments of the present disclosure described later and the accompanying drawings. Note that in this specification, a system is a logical collective configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same housing.
According to the configuration of one embodiment of the present disclosure, an apparatus and a method are realized that calculate the reliability of the defocus amount and the distance value for each image area of a captured image, and execute display processing of graphic data indicating the reliability, display processing of the distance ratio of subjects, exposure time control processing, and the like.
Specifically, for example, the defocus amount and the distance value are calculated for each image area of the captured image, the reliability of the calculated defocus amount and distance value for each image area is calculated, and control is executed according to the calculated reliability of the defocus amount and distance value for each image area. For example, a process of superimposing graphic data indicating the reliability of the defocus amount for each image area on the captured image and displaying it, a process of displaying the distance ratio of subjects, an exposure time control process, and the like are executed.
With this configuration, an apparatus and a method are realized that calculate the reliability of the defocus amount and the distance value for each image area of a captured image, and execute display processing of graphic data indicating the reliability, display processing of the distance ratio of subjects, exposure time control processing, and the like.
Note that the effects described in this specification are merely examples and are not limiting, and additional effects may be provided.
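As a minimal sketch of the "graphic data indicating reliability" display (the rendering choice here is hypothetical — the patent's actual graphics are described in the embodiments), image areas whose reliability falls below a threshold can simply be dimmed so they are identifiable at a glance:

```python
import numpy as np


def overlay_reliability(image: np.ndarray, reliability: np.ndarray,
                        threshold: float = 0.5, dim: float = 0.4) -> np.ndarray:
    """Dim the image areas whose reliability is below `threshold`.

    `image` is (H, W); `reliability` is (gh, gw), one score per image area.
    """
    h, w = image.shape
    gh, gw = reliability.shape
    rh, rw = h // gh, w // gw
    out = image.astype(float).copy()
    for gy in range(gh):
        for gx in range(gw):
            if reliability[gy, gx] < threshold:
                out[gy * rh:(gy + 1) * rh, gx * rw:(gx + 1) * rw] *= dim
    return out


img = np.full((4, 4), 100.0)
rel = np.array([[0.9, 0.2],
                [0.8, 0.7]])  # top-right image area is unreliable
marked = overlay_reliability(img, rel)
print(marked[0, 3])  # → 40.0
```

In a real viewfinder display, the dimming would be replaced by colored frames or hatching superimposed on the live view image.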
Brief Description of Drawings

FIG. 1 is a diagram illustrating a configuration example of the imaging device of the present disclosure.
FIG. 2 is a diagram illustrating a configuration example of an image sensor having phase difference detection pixels.
FIG. 3 is a diagram illustrating an outline of focus detection processing using the phase difference detection method.
FIG. 4 is a diagram illustrating an outline of focus detection processing using the phase difference detection method.
FIG. 5 is a diagram illustrating an outline of focus detection processing using the phase difference detection method.
FIG. 6 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device.
FIG. 7 is a diagram illustrating a specific example of the pixel configuration of the image sensor and an image area serving as a defocus amount calculation unit.
FIG. 8 is a diagram illustrating a specific example of a defocus map generated by the digital signal processing unit of the imaging device.
FIG. 9 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 1.
FIG. 10 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 1.
FIG. 11 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 1.
FIG. 12 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 1.
FIG. 13 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 2.
FIG. 14 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 2.
FIG. 15 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 3.
FIG. 16 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 3.
FIG. 17 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 3.
FIG. 18 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 3.
FIG. 19 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 4.
FIG. 20 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 4.
FIG. 21 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 4.
FIG. 22 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 5.
FIG. 23 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 5.
FIG. 24 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 5.
FIG. 25 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 6.
FIG. 26 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 6.
FIG. 27 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 6.
FIG. 28 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging device of Embodiment 6.
FIG. 29 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging device of Embodiment 6.
Hereinafter, details of the imaging device, the image processing method, and the program of the present disclosure will be described with reference to the drawings. The description will proceed according to the following items.
 1. Configuration example of the imaging device of the present disclosure
 2. Configuration of phase difference detection pixels and outline of focus control using detection signals from the phase difference detection pixels
 3. Basic configuration example for calculating the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value for each image area, and executing various processes according to the calculated reliability
 4. Specific embodiments for calculating the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value for each image area, and executing various processes according to the calculated reliability
 4-1. (Embodiment 1) Embodiment that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified
 4-2. (Embodiment 2) Embodiment that generates and outputs an image in which the distance ratio between an in-focus subject and other background subjects is superimposed on the captured image
 4-3. (Embodiment 3) Embodiment that generates and outputs an image in which mask stability information is superimposed on the captured image
 4-4. (Embodiment 4) Embodiment that generates and outputs an image in which a color map expressing the defocus amount for each image area as colors is superimposed on the captured image
 4-5. (Embodiment 5) Embodiment that generates and outputs an image in which a color map that expresses the defocus amount for each image area as colors and also makes the reliability of the defocus amount for each image area identifiable is superimposed on the captured image
 4-6. (Embodiment 6) Embodiment that captures images while controlling the exposure time according to the reliability of the defocus amount for each image area
 5. Other embodiments
 6. Summary of the configuration of the present disclosure
  [1. Configuration Example of the Imaging Device of the Present Disclosure]
 First, a configuration example of the imaging device of the present disclosure will be described.
FIG. 1 is a block diagram showing a configuration example of the imaging device 100 of the present disclosure.
A configuration example of the imaging device 100 of the present disclosure will be described with reference to FIG. 1.
Incident light passing through the focus lens 101 and the zoom lens 102 enters an image sensor 103 such as a CMOS or CCD sensor and is photoelectrically converted by the image sensor 103.
The image sensor 103 has a plurality of pixels, each including, for example, a photodiode, arranged two-dimensionally in a matrix. It includes normal pixels, in which color filters with different spectral characteristics, for example R (red), G (green), and B (blue), are arranged on the light-receiving surfaces of the pixels, and phase difference detection pixels for pupil-dividing the subject light and performing focus detection.
The normal pixels of the image sensor 103 generate analog electrical signals (image signals) of the R (red), G (green), and B (blue) color components of the subject image and output them as R, G, and B image signals.
The phase difference detection pixels of the image sensor 103 output phase difference detection signals (detection information). The phase difference detection signals (detection information) are mainly used for autofocus control.
The configuration of the phase difference detection pixels, the phase difference detection signals generated by the phase difference detection pixels, and focus control using the phase difference detection signals will be described in detail later.
The photoelectrically converted data output from the image sensor 103 thus includes RGB image signals and phase difference detection signals from the phase difference detection pixels.
These signals are input to the analog signal processing unit 104, where processing such as noise removal is performed, and are then converted into digital signals by the A/D conversion unit 105.
The digital signals converted by the A/D conversion unit 105 are input to the digital signal processing unit (DSP) 108 and subjected to various kinds of signal processing.
Various image signal processing operations, such as demosaic processing, white balance adjustment, and gamma correction, are performed on the RGB image signals, and the processed image is recorded in a recording device 115 such as a flash memory.
The image is also displayed on the monitor 117 and the viewfinder (EVF) 116. The image through the lens is displayed on the monitor 117 and the viewfinder (EVF) 116 as a through image, regardless of whether or not shooting is in progress.
The phase difference detection pixel information (detection signals) output from the phase difference detection pixels of the image sensor 103 is also input to the digital signal processing unit (DSP) 108 via the A/D conversion unit 105.
The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signals), and calculates the focus shift amount for the subject to be focused (focusing target), that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)).
The input unit (operation unit) 118 is an operation unit including input elements for various kinds of operation information, such as the shutter and zoom buttons on the camera body, and a mode dial for setting the shooting mode.
The control unit 110 has a CPU and controls various processes executed by the imaging device according to programs stored in advance in the memory (ROM) 120 or the like. The memory (EEPROM) 119 is a non-volatile memory that stores image data, various kinds of auxiliary information, programs, and the like.
The memory (ROM) 120 stores programs, calculation parameters, and the like used by the control unit (CPU) 110. The memory (RAM) 121 stores programs used by the control unit (CPU) 110, the AF control unit 112a, and the like, as well as parameters that change as appropriate during their execution.
The gyro 131 is a sensor that measures the inclination, angle, angular velocity, and the like of the imaging device 100. The detection information of the gyro 131 is used, for example, to calculate the amount of camera shake during image capturing.
The AF control unit 112a drives the focus lens drive motor 113a provided for the focus lens 101 and executes autofocus control (AF control) processing. For example, the focus lens 101 is moved to its in-focus position for a subject included in an area selected by the user, thereby obtaining the in-focus state.
The zoom control unit 112b drives the zoom lens drive motor 113b provided for the zoom lens 102. The vertical driver 107 drives the image sensor (CCD) 103. The timing generator 106 generates control signals for the processing timing of the image sensor 103 and the analog signal processing unit 104, and controls the processing timing of these processing units.
Note that the focus lens 101 is driven in the optical axis direction under the control of the AF control unit 112a.
  [2. Configuration of Phase Difference Detection Pixels and Overview of Focus Control Using Their Detection Signals]
 Next, the configuration of the phase difference detection pixels and an overview of focus control using the detection signals from the phase difference detection pixels will be described.
As described above, the image sensor 103 of the imaging device 100 shown in FIG. 1 generates analog electrical signals (image signals) of the R (red), G (green), and B (blue) color components of the subject image and outputs them as R, G, and B image signals, while also outputting phase difference detection signals (detection information) from the phase difference detection pixels, which are signals used for autofocus control.
A specific pixel configuration example of the image sensor 103 will be described with reference to FIG. 2.
FIG. 2 is a diagram showing a pixel configuration example of the image sensor 103.
FIG. 2 shows (A) a pixel configuration example of the image sensor 103 corresponding to a partial area of the captured image.
The vertical direction is the Y axis and the horizontal direction is the X axis. In FIG. 2, one pixel is represented by one square.
The RGB pixels shown in FIG. 2 are pixels for normal image capturing. The RGB pixels have, for example, a Bayer array configuration.
The detection information acquisition pixels used for autofocus processing, that is, the phase difference detection pixels 151 for acquiring phase difference information, are set discretely in some rows of the RGB pixels having the Bayer array.
Each phase difference detection pixel is configured as a pair consisting of a right-opening phase difference detection pixel Pa and a left-opening phase difference detection pixel Pb.
The image sensor 103 individually outputs the following two types of data:
(1) pixel information (image signals) from the image-capturing pixels (RGB pixels); and
(2) phase difference detection pixel information ((AF) detection signals) from the phase difference detection pixels 151.
"(1) Pixel information (image signal) output from the image-capturing pixels (RGB pixels)" is performed according to the image capturing timing of the user (photographer). In addition, even when no image is being captured, a display image (live view image) is output for display on the monitor 117 or the like. The display image (live view image) is output at a frame rate corresponding to the image display rate of the monitor 117 or the like.
"(2) Phase difference detection pixel information (detection signal) output from the phase difference detection pixels 151" is performed at the same interval as the image output interval or at a shorter interval, for example, at intervals of (1/60) sec (= 16.7 msec).
The phase difference detection pixel information (detection signal) output from the phase difference detection pixels 151 is input to the digital signal processing unit (DSP) 108 via the A/D conversion unit 106.
The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signal), and calculates the amount of focus deviation with respect to the subject to be focused (the focusing target), that is, the amount of deviation between the in-focus distance and the subject distance (the defocus amount (DF)).
An outline of the focus detection processing of the phase difference detection method will be described with reference to FIGS. 3 to 5.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels functioning as focus detection sensors, and the focus lens is set to the in-focus position (focus position) based on this defocus amount.
The pair of phase difference detection pixels described with reference to FIG. 2 will be referred to as pixel Pa and pixel Pb, and the details of the light incident on these pixels will be described with reference to FIG. 3.
As shown in FIG. 3, in the phase difference detection unit, phase difference detection pixels Pa and Pb are arranged in the horizontal direction; they consist of a pair of photodetectors (PD) that receive the light flux Ta from the right-side portion Qa of the exit pupil EY of the imaging optical system (also referred to as the "right partial pupil region" or simply the "right pupil region") and the light flux Tb from the left-side portion Qb (also referred to as the "left partial pupil region" or simply the "left pupil region"). Here, the +X direction side in the figure is referred to as the right side, and the -X direction side as the left side.
Of the pair of phase difference detection pixels Pa and Pb, one pixel (hereinafter referred to as the "first phase difference detection pixel") Pa is composed of a microlens ML that condenses the light incident on the first phase difference detection pixel Pa, a first light shielding plate AS1 having a slit-shaped (rectangular) first opening OP1, and a photoelectric conversion unit PD that receives light through a second light shielding plate AS2 arranged below the first light shielding plate AS1 and having a slit-shaped (rectangular) second opening OP2.
The first opening OP1 of the first phase difference detection pixel Pa is provided at a position offset in a specific direction (here, the right direction (+X direction)) relative to a central axis CL that passes through the center of the light receiving element PD and is parallel to the optical axis LT. The second opening OP2 of the first phase difference detection pixel Pa is provided at a position offset, relative to the central axis CL, in the direction opposite to the specific direction (also referred to as the "anti-specific direction").
The other phase difference detection pixel of the pair (hereinafter referred to as the "second phase difference detection pixel") Pb includes a first light shielding plate AS1 having a slit-shaped first opening OP1 and, below it, a second light shielding plate AS2 having a slit-shaped second opening OP2. In the second phase difference detection pixel Pb, the first opening OP1 is provided at a position offset in the direction opposite to the specific direction relative to the central axis CL, and the second opening OP2 is provided at a position offset in the specific direction relative to the central axis CL.
That is, in the pair of phase difference detection pixels Pa and Pb, the first openings OP1 are offset in mutually different directions, and each second opening OP2 is shifted in a different direction with respect to the corresponding first opening OP1.
The pair of phase difference detection pixels Pa and Pb configured as described above acquire subject light that has passed through different regions (parts) in the exit pupil EY.
Specifically, the light flux Ta that has passed through the right pupil region Qa of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pa and the first opening OP1 of the first light shielding plate AS1, is further limited by the second light shielding plate AS2, and is then received by the light receiving element PD of the first phase difference detection pixel Pa.
Similarly, the light flux Tb that has passed through the left pupil region Qb of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pb and the first opening OP1 of the first light shielding plate AS1, is further limited by the second light shielding plate AS2, and is then received by the light receiving element PD of the second phase difference detection pixel Pb.
FIG. 4 shows an example of the outputs of the light receiving elements obtained at the pixels Pa and Pb. As shown in FIG. 4, the output line from pixel Pa and the output line from pixel Pb are signals shifted from each other by a predetermined shift amount Sf.
FIG. 5(a) shows the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is set at a position corresponding to the subject distance and the focus is achieved, that is, in the in-focus state.
FIGS. 5(b1) and 5(b2) show the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is not set at a position corresponding to the subject distance and the image is out of focus, that is, in the out-of-focus state.
(b1) is an example in which the shift amount is larger than in the in-focus state, and (b2) is an example in which it is smaller.
In the cases shown in FIGS. 5(b1) and 5(b2), focusing can be achieved by moving the focus lens so that the shift amount matches the in-focus shift amount.
This process is the focusing process according to the "phase difference detection method".
Focusing processing according to this "phase difference detection method" enables the focus lens to be set to the in-focus position, that is, to a position corresponding to the subject distance.
The shift amount described with reference to FIG. 5 can be measured for each pair of phase difference detection pixels Pa and Pb configured in the image sensor shown in FIG. 2, so the in-focus position (focus point) and the defocus amount can be calculated individually for the subject image captured in each such fine region (the combined Pa, Pb pixel region).
Note that in the cases of FIGS. 5(b1) and 5(b2), the shift amount deviates from the in-focus shift amount, and from this deviation the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, can be calculated.
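To make the relationship between the measured shift and the defocus amount concrete, the following is a minimal sketch in Python. It is an illustration only: the block-matching search, the conversion coefficient `k`, and all function names are assumptions, not details taken from this document.

```python
import numpy as np

def measure_shift(line_pa, line_pb, max_shift=16):
    """Estimate the shift (in pixels) between the Pa and Pb output lines
    by finding the offset that minimizes the mean absolute difference."""
    best_shift, best_cost = 0, float("inf")
    n = len(line_pa)
    for s in range(-max_shift, max_shift + 1):
        # overlapping portions of the two lines at offset s
        a = line_pa[max(0, s):n + min(0, s)]
        b = line_pb[max(0, -s):n - max(0, s)]
        cost = np.abs(a - b).mean()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

def defocus_from_shift(shift, in_focus_shift, k=1.0):
    """Defocus amount (DF) from the deviation of the measured shift
    from the in-focus shift; k is an assumed optics-dependent
    conversion coefficient (not specified in the source)."""
    return k * (shift - in_focus_shift)
```

For example, two synthetic output lines offset by three pixels yield a measured shift of three pixels, which `defocus_from_shift` scales into a signed defocus value.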
[3. Example of Basic Configuration for Calculating Reliability of Physical Quantities for Each Image Region, such as Defocus Amount and Distance Value for Each Image Region, and Executing Various Processes According to the Calculated Reliability]
Next, a basic configuration example for calculating the reliability of physical quantities in units of image areas such as the defocus amount and the distance value in units of image areas and executing various processes according to the calculated reliability will be described.
FIG. 6 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
The digital signal processing unit 108 shown in FIG. 6 uses the detection information of the phase difference detection pixels to calculate the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value for each image area, and executes various processes according to the calculated reliability.
As shown in FIG. 6, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, the RGB image signals and the phase difference detection signals (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 6 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signals (detection information) that are the outputs of the phase difference detection pixels.
On the other hand, the image information acquisition unit 211 shown in FIG. 6 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
The image signal acquired by the image information acquisition unit 211 is input to the image signal processing unit 212.
The image signal processing unit 212 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal, and outputs the processed image (e.g., an RGB image) to the image output unit 213.
Note that the image (for example, an RGB image) generated by the image signal processing unit 212 may be further subjected to image control processing by the image area unit physical quantity reliability correspondence processing execution unit 206. For example, reliability identification data for the defocus amount may be superimposed on the RGB image. Specific examples of these processes will be described later.
The image (for example, an RGB image) generated by the image signal processing unit 212, or that image as changed or processed by the image area unit physical quantity reliability correspondence processing execution unit 206, is output to the image output unit 213.
The image output unit 213 outputs the image input from the image signal processing unit 212, for example, to the monitor 117, the viewfinder (EVF) 116, and the recording device 115.
The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
The defocus amount calculation unit 202 calculates the focus deviation amount, that is, the amount of deviation between the in-focus distance and the subject distance (the defocus amount (DF)), for each minute image area, for example, an image area composed of a plurality of pixels.
The defocus amount calculation unit 202 thus calculates the defocus amount, a physical quantity that changes according to the subject distance, for each image area.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels functioning as focus detection sensors.
The AF control signal generation unit 203 generates, based on this defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, a subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but, for example, a subject designated by the user, such as a person. Images of other subjects, such as the background, are not in focus and appear blurred.
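The autofocus control described above (defocus amount in, lens drive out) can be illustrated as a simple closed loop. Everything here is an assumption for illustration: the proportional update, the gain, the tolerance, and the linear lens model are not specified in this document.

```python
def af_lens_update(defocus, position, gain=0.5, tolerance=0.01):
    """One illustrative AF control iteration: move the focus lens
    position against the measured defocus amount, stopping once the
    defocus falls within a tolerance (arbitrary units)."""
    if abs(defocus) <= tolerance:
        return position, True   # in focus: keep the lens position
    return position - gain * defocus, False

def simulate_af(start, target, steps=50):
    """Simulated servo loop under the assumption that the defocus
    amount is proportional to the distance between the current lens
    position and the (unknown) in-focus position."""
    pos = start
    for _ in range(steps):
        defocus = pos - target        # assumed linear lens model
        pos, done = af_lens_update(defocus, pos)
        if done:
            break
    return pos
```

Starting the loop away from the target position converges to within the tolerance after a handful of iterations, mirroring how the AF control unit 112a repeatedly drives the lens based on freshly measured defocus amounts.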
For example, the shift amount described above with reference to FIG. 5 is measured in units of one pair of pixels Pa and Pb, which are phase difference detection pixels configured in the image sensor shown in FIG. 2.
That is, the defocus amount calculation unit 202 can calculate the defocus amount of the subject image for each fine image area unit of the captured image.
FIG. 7 shows an example of an image area that is used as a defocus amount calculation unit.
FIG. 7 shows the pixel configuration of the imaging element 103 similar to that of FIG. 2 described above.
As described above with reference to FIG. 2, the phase difference detection pixels 151 for acquiring phase difference information are set discretely in some rows of the RGB pixels having the Bayer array.
The phase difference detection pixels are configured as pairs of a right opening phase difference detection pixel Pa and a left opening phase difference detection pixel Pb.
An image area that is a unit for calculating the defocus amount can be set, for example, as an image area 152 shown in FIG.
In the example shown in FIG. 7, the image area 152, which is the unit for calculating the defocus amount, is a fine image area of n×m pixels, such as 6×5 pixels.
A plurality of sets of phase difference detection pixels are included in this fine image area of n×m pixels. The shift amount described above with reference to FIG. 5 is measured from each of the plurality of sets of phase difference detection pixels.
The defocus amount calculation unit 202 calculates, for example, the average value of the plurality of sets of shift amounts in an image area 152 as shown in FIG. 7 as the shift amount of the n×m pixel image area 152. Further, the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, is calculated from the deviation between this calculated shift amount and the shift amount of an in-focus image area.
In this manner, the defocus amount calculation unit 202 calculates the defocus amount for each image area 152 as shown in FIG. 7, for example.
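The per-area computation described above (average the shift amounts of the pairs inside an n×m area, then convert the deviation from the in-focus shift into a defocus amount) can be sketched as follows. The conversion coefficient `k`, the in-focus shift value, and the function names are illustrative assumptions, not details taken from this document.

```python
import numpy as np

def area_defocus(pair_shifts, in_focus_shift, k=1.0):
    """Defocus amount for one n x m image area (e.g. area 152):
    average the shift amounts of all phase-difference pixel pairs in
    the area, then scale the deviation from the in-focus shift by an
    assumed conversion coefficient k."""
    mean_shift = float(np.mean(pair_shifts))
    return k * (mean_shift - in_focus_shift)

def defocus_grid(shift_map, area_h, area_w, in_focus_shift, k=1.0):
    """Tile a 2-D map of per-pair shift amounts into areas of
    area_h x area_w entries and compute one defocus value per area."""
    rows, cols = shift_map.shape
    grid = np.empty((rows // area_h, cols // area_w))
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            block = shift_map[i * area_h:(i + 1) * area_h,
                              j * area_w:(j + 1) * area_w]
            grid[i, j] = area_defocus(block, in_focus_shift, k)
    return grid
```

`defocus_grid` then holds one defocus value per image area, which is exactly the per-area data a defocus map is built from.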
The defocus map generation unit 204 receives the defocus amounts calculated by the defocus amount calculation unit 202 for each fine image area of the captured image, and generates, based on these per-area defocus amounts, a defocus map in which the defocus amount of each image area can be identified.
That is, the defocus map generation unit 204 generates the defocus map based on the defocus amount calculated by the defocus amount calculation unit 202 for each image area 152 as shown in FIG. 7.
An example of the defocus map generated by the defocus map generation unit 204 will be described with reference to FIG. 8.
FIG. 8(A) is an example of an image captured by the imaging device 100. Note that the captured image (A) shown in FIG. 8 is not limited to a captured image recorded by the user's shutter operation on the imaging device 100; it also includes a so-called through image (live view image) that is input through the lens of the imaging device 100 and displayed on the monitor 117 or the like regardless of whether a shutter operation is performed.
The defocus map of FIG. 8(B) corresponds to the captured image of FIG. 8(A) and is generated by the defocus map generation unit 204.
As described above, the defocus map generation unit 204 generates the defocus map based on the defocus amount calculated by the defocus amount calculation unit 202 for each image area 152 as shown in FIG. 7.
Each rectangular area shown in the defocus map of FIG. 8(B) is an image area serving as a defocus amount calculation unit, and corresponds to the image area 152 shown in FIG. 7. Note that the rectangular areas in the defocus map of FIG. 8(B) are drawn at a large size for ease of understanding; the image area actually used as the defocus amount calculation unit can be set as an even finer image area, that is, as described above with reference to FIG. 7, as a pixel region of several pixels to several tens of pixels.
The defocus map shown in FIG. 8B is a map in which luminance values (pixel values) are set according to defocus amounts in units of pixel regions (rectangular regions).
A higher-luminance (whiter) area has a smaller defocus amount, that is, it is a pixel area with a higher degree of focus.
Conversely, a lower-luminance (blacker) area has a larger defocus amount, that is, it is a pixel area with a lower degree of focus.
For example, when the luminance value (pixel value) is set in the range 0 to 255, a pixel area closer to luminance value 255 (maximum luminance, white) has a smaller defocus amount and a higher degree of focus, and a pixel area closer to luminance value 0 (minimum luminance, black) has a larger defocus amount and a lower degree of focus.
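This luminance assignment can be sketched as follows. The linear scaling and the saturation value `df_max` are assumptions, since the document only states that a smaller defocus amount corresponds to a higher luminance.

```python
def defocus_to_luminance(df, df_max):
    """Map |defocus| in [0, df_max] to a luminance (pixel) value in
    [0, 255]: zero defocus (in focus) -> 255 (white), a defocus of
    df_max or more -> 0 (black). Linear mapping is an assumption."""
    mag = min(abs(df), df_max)
    return round(255 * (1.0 - mag / df_max))
```

Applying this function to each per-area defocus value produces the kind of gray-scale defocus map shown in FIG. 8(B).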
The example shown in FIG. 8 is a case in which the person and house shown in the captured image (A) are set as the focus target subjects. In the defocus map (B), the image areas corresponding to the person and house are set to high-luminance (white) areas, indicating pixel areas with a small defocus amount and a high degree of focus, whereas the background areas other than the person and house are set to low-luminance (black) or gray areas, indicating pixel areas with a large defocus amount and a low degree of focus.
In this way, the defocus map generation unit 204 calculates the defocus amount for each image area based on the per-area defocus amounts calculated by the defocus amount calculation unit 202, for example for each image area 152 as shown in FIG. 7, and generates a defocus map as shown in FIG. 8(B).
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of physical quantities per image area that change according to the subject distance, such as the defocus amount and the distance value for each image area.
For example, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and calculates, based on these input data, the reliability of the defocus amount calculated for each image area.
Further, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the distance value calculated from the defocus amount for each image area.
Specific examples of these processes will be described later.
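Although the specific reliability computation is deferred to later sections, one simple illustrative proxy is the dispersion of the per-pair shift amounts within an image area: if the pairs agree with each other, the per-area defocus value is likely trustworthy. The following sketch is purely an assumption and is not the method defined in this document.

```python
import numpy as np

def shift_reliability(pair_shifts, sigma_max=4.0):
    """Illustrative reliability in [0, 1] for one image area, based on
    the standard deviation of per-pair shift amounts (an assumption,
    not the method defined in this document): uniform shifts -> 1.0,
    dispersion of sigma_max or more -> 0.0."""
    sigma = float(np.std(pair_shifts))
    return max(0.0, 1.0 - sigma / sigma_max)
```

An area whose pairs all report the same shift scores 1.0, while an area with widely scattered shifts (for example, a low-texture or noisy region) scores near 0.0.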
The reliability of the per-area defocus amount or of the per-area distance value calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206.
The image area unit physical quantity reliability correspondence processing execution unit 206 executes various controls, such as image control on the captured image (for example, an RGB image) generated by the image signal processing unit 212 and shooting control, according to the reliability of the per-area defocus amount or per-area distance value calculated by the image area unit physical quantity reliability calculation unit 205.
As image control processing for the captured image (e.g., an RGB image) generated by the image signal processing unit 212, for example, the following processes are executed.
(a) Generation of an output image that makes it possible to identify the reliability of the defocus amount for each image area;
(b) Generation of an output image in which distance ratio data between the in-focus subject and the background subject is superimposed on the captured image;
(c) Generating an image that makes it possible to identify the stability of the mask set in a partial area of the captured image;
(d) Generating an output image by superimposing a color map that makes it possible to identify the defocus amount for each image area on the captured image;
Also, as the shooting control process, exposure control for each image area is executed according to the reliability of the defocus amount for each image area.
[4. Concrete Embodiment of Calculating Reliability of Physical Quantities for Each Image Region, such as Defocus Amount and Distance Value for Each Image Region, and Executing Various Processing According to the Calculated Reliability]
Next, a specific embodiment will be described in which the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value in units of image areas, is calculated, and various processes are executed according to the calculated reliability.
As specific examples in which the imaging apparatus of the present disclosure executes various processes according to the reliability of physical quantities in units of image areas, such as the defocus amount and the distance value for each image area, the following embodiments will be described.
(Embodiment 1) An embodiment that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified
(Embodiment 2) An embodiment that generates and outputs an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the captured image
(Embodiment 3) An embodiment that generates and outputs an image in which mask stability information is superimposed on the captured image
(Embodiment 4) An embodiment that generates and outputs an image in which a color map with colors corresponding to the defocus amount of each image area is superimposed on the captured image
(Embodiment 5) An embodiment that generates and outputs an image in which a color map that has colors corresponding to the defocus amount of each image area and also makes the reliability of the defocus amount of each image area identifiable is superimposed on the captured image
(Embodiment 6) An embodiment that captures images while controlling the exposure time according to the reliability of the defocus amount for each image area
[4-1. (Embodiment 1) Embodiment of generating and outputting an image in which the reliability of the defocus amount for each image area can be identified]
First, as Example 1, an example of generating and outputting an image in which the reliability of the defocus amount for each image area can be identified will be described.
FIG. 9 is a block diagram for explaining the configuration of the first embodiment.
FIG. 9 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
As shown in FIG. 9, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, RGB image signals and phase difference detection signals (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 9 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signal (detection information) output from the phase difference detection pixels.
On the other hand, the image information acquisition unit 211 shown in FIG. 9 selects only the image signal (for example, an RGB image signal) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 211 is input to the image signal processing unit 212.
The image signal processing unit 212 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal, and outputs the processed image (for example, an RGB image) to the image output unit 213.
Note that in Embodiment 1, the image area unit physical quantity reliability correspondence processing execution unit 206 further performs image control processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
Specifically, in Embodiment 1, reliability identification data for the defocus amount is superimposed on the image (for example, an RGB image) generated by the image signal processing unit 212.
The image generated by the image signal processing unit 212 (for example, an RGB image), or the image obtained when the image area unit physical quantity reliability correspondence processing execution unit 206 modifies or processes the image generated by the image signal processing unit 212, is output to the image output unit 213.
The image output unit 213 outputs the image input from the image signal processing unit 212, for example, to the monitor 117, the viewfinder (EVF) 116, and the recording device 115.
The phase difference information acquisition unit 201 selects, from the input signal from the A/D conversion unit 105, the phase difference detection signal (detection information) output from the phase difference detection pixels, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
The defocus amount calculation unit 202 calculates the amount of focus deviation, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area, for example, an image area composed of a plurality of pixels.
In other words, the defocus amount calculation unit 202 calculates the defocus amount, which is a physical quantity that changes according to the subject distance, for each image area.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the amount of shift between the signals output according to the amount of light received by each of a pair of phase difference detection pixels that function as focus detection sensors.
Based on this defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, a subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens in accordance with the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but a subject designated by the user, such as a person. Other subjects, such as the background, are out of focus and appear blurred.
As described above with reference to FIG. 7, the defocus amount calculation unit 202 calculates the defocus amount, that is, the defocus amount corresponding to the amount of deviation between the in-focus distance and the subject distance, for each minute image area of, for example, n×m pixels.
The defocus map generation unit 204 receives the defocus amount for each minute image area of the captured image calculated by the defocus amount calculation unit 202, and generates, based on the defocus amount for each image area, a defocus map in which the defocus amount for each image area can be identified.
For example, a defocus map such as that described above with reference to FIG. 8(B) is generated.
For example, in a defocus map whose luminance values (pixel values) range from 0 to 255, a pixel area whose luminance value is closer to 255 (maximum luminance (white)) has a smaller defocus amount, that is, a higher degree of focus, and a pixel area whose luminance value is closer to 0 (minimum luminance (black)) has a larger defocus amount, that is, a lower degree of focus.
In this way, the defocus map generation unit 204 generates a defocus map such as that shown in FIG. 8(B), for example.
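As an illustrative sketch only, the luminance encoding described above (smaller defocus amount, brighter pixel) could be implemented as follows. The patent does not specify a concrete mapping; the linear mapping and the normalization constant `max_defocus` are assumptions introduced here.

```python
import numpy as np

def defocus_map_to_luminance(defocus, max_defocus):
    """Map per-image-area defocus amounts to luminance values 0-255.

    Smaller |defocus| (higher degree of focus) maps closer to 255 (white);
    larger |defocus| maps closer to 0 (black). The linear mapping and the
    normalization constant max_defocus are illustrative assumptions.
    """
    d = np.clip(np.abs(defocus) / max_defocus, 0.0, 1.0)
    return np.round(255 * (1.0 - d)).astype(np.uint8)

# A 2x2 grid of image areas: 0 = in focus, 4.0 = maximally defocused.
grid = np.array([[0.0, 1.0], [2.0, 4.0]])
lum = defocus_map_to_luminance(grid, max_defocus=4.0)
```

Under this mapping an in-focus area (defocus amount 0) is rendered at luminance 255 and a maximally defocused area at luminance 0, matching the convention of FIG. 8(B).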
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area.
The image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and calculates the reliability of the defocus amount calculated for each image area based on these input data.
Note that various methods can be applied to the process of calculating the reliability of the defocus amount for each image area.
For example, a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, edge detection results, or the like can be used.
A method of calculating the defocus amount reliability for each image area using the cross-correlation function of the two waveforms of the parallax data used for defocus amount calculation is also possible.
In the following, as one specific example of the process of calculating the reliability of the defocus amount for each image area executed by the image area unit physical quantity reliability calculation unit 205, a method using the cross-correlation function of the two waveforms of the parallax data used for defocus amount calculation will be described.
This defocus amount reliability calculation method uses the two waveforms of the parallax data shown in FIG. 4 that are used for defocus amount calculation, that is, the waveform representing the output from the pixel Pa and the waveform representing the output from the pixel Pb shown in FIG. 4, and the cross-correlation function of these two waveforms.
As described above with reference to FIG. 3, the pair of phase difference detection pixels Pa and Pb acquires subject light that has passed through different regions (portions) of the exit pupil EY.
As shown in FIG. 4, the outputs of the light-receiving elements acquired at the pixels Pa and Pb are signals in which the output line from the pixel Pa and the output line from the pixel Pb have a predetermined shift amount Sf.
FIG. 5(a) shows the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is set at a position corresponding to the subject distance and the image is in focus, that is, in the in-focus state.
FIGS. 5(b1) and 5(b2) show the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is not set at a position corresponding to the subject distance and the image is out of focus, that is, in the out-of-focus state.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the two waveforms of the parallax data used for defocus amount calculation, that is, the waveform representing the output from the pixel Pa and the waveform representing the output from the pixel Pb shown in FIG. 4, and the cross-correlation function of these two waveforms. Note that these two waveform data, the waveform representing the output from the pixel Pa and the waveform representing the output from the pixel Pb, are acquired by the phase difference information acquisition unit 201 and input to the image area unit physical quantity reliability calculation unit 205 via the defocus amount calculation unit 202.
The image area unit physical quantity reliability calculation unit 205 calculates the defocus amount reliability from the two waveform data as follows.
Let f(t) be the waveform data representing the output from the pixel Pa shown in FIG. 4, and g(t) be the waveform data representing the output from the pixel Pb. The cross-correlation function h(τ) of these two waveform data f(t) and g(t) can be calculated by the following (Equation 1), where t indicates the position within each of the pixels Pa and Pb.
  h(τ) = ∫ f(t)·g(t+τ) dt    (Equation 1)
In the above (Equation 1), let τmax be the value of τ at which the cross-correlation function h(τ) reaches its maximum; then h(τmax) can be calculated as the reliability of the defocus amount.
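The reliability measure h(τmax) can be sketched in code as follows. The patent defines only the cross-correlation itself; the mean subtraction and unit-norm normalization are assumptions added here so that a well-matched pair of waveforms scores close to 1.0.

```python
import numpy as np

def defocus_reliability(f, g):
    """Reliability of a defocus estimate from two phase-difference waveforms.

    Sketch of (Equation 1): correlate the Pa waveform f(t) with the Pb
    waveform g(t) over all lags tau and return the peak value h(tau_max)
    as the reliability. Normalization to unit norm is an assumption added
    here, not part of the patent's formula.
    """
    f = (f - f.mean()) / (np.linalg.norm(f - f.mean()) + 1e-12)
    g = (g - g.mean()) / (np.linalg.norm(g - g.mean()) + 1e-12)
    h = np.correlate(f, g, mode="full")  # h(tau) for every integer lag
    tau_max = h.argmax()                 # lag with the strongest match
    return h[tau_max]

# Two identical waveforms shifted by 3 samples: high reliability expected.
t = np.arange(64)
f = np.exp(-0.5 * ((t - 30) / 4.0) ** 2)
g = np.exp(-0.5 * ((t - 33) / 4.0) ** 2)
reliability = defocus_reliability(f, g)
```

For a clean shifted copy, as here, the reliability is close to 1.0; decorrelated or noisy waveforms, which indicate an unreliable defocus estimate, yield a smaller peak.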
Note that the defocus amount reliability calculated in this example is the reliability of the defocus amount for each image area.
As described above with reference to FIG. 7, one image area 152 contains a plurality of pairs of phase difference detection pixels. The defocus amount calculation unit 202 calculates, for example, the average of the shift amounts of the plurality of pairs in the image area 152 shown in FIG. 7 as the shift amount of the n×m-pixel image area 152. Furthermore, from the deviation between the calculated shift amount and the shift amount of an in-focus image area, the defocus amount, that is, the defocus amount corresponding to the amount of deviation between the in-focus distance and the subject distance, is calculated.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount of each image area corresponding to one rectangular area in the defocus map shown in FIG. 8(B).
Note that the waveform data f(t) and g(t) serving as the basis for this reliability calculation are obtained, for example, by calculating the average waveform of the waveform data f(t) and the average waveform of the waveform data g(t) of the plurality of pairs of phase difference detection pixels contained in the image area subject to reliability calculation.
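The per-area averaging of the pixel-pair waveforms described above can be sketched as follows; the array shapes and the simple arithmetic mean are illustrative assumptions.

```python
import numpy as np

def region_average_waveforms(f_waveforms, g_waveforms):
    """Average the Pa and Pb waveforms of all phase-difference pixel pairs
    contained in one image area.

    f_waveforms and g_waveforms are (num_pairs, num_samples) arrays, one
    row per pixel pair in the area. The averaged waveforms f(t) and g(t)
    are the inputs to the per-area reliability calculation.
    """
    return f_waveforms.mean(axis=0), g_waveforms.mean(axis=0)

# Toy data: three pixel pairs in one image area, 8 samples each.
f_all = np.array([[0, 1, 2, 3, 3, 2, 1, 0]] * 3, dtype=float)
g_all = f_all + 1.0  # Pb outputs offset from Pa outputs
f_avg, g_avg = region_average_waveforms(f_all, g_all)
```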
In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202, using, for example, the cross-correlation function of the two waveforms of the parallax data used for defocus amount calculation.
The reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206.
The image area unit physical quantity reliability correspondence processing execution unit 206 performs image control on the captured image (for example, an RGB image) generated by the image signal processing unit 212, in accordance with the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
Specifically, it performs processing for generating an image in which the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 can be identified.
That is, as the image control processing for the image (for example, an RGB image) generated by the image signal processing unit 212, the image area unit physical quantity reliability correspondence processing execution unit 206 executes processing for generating an output image in which low-reliability areas and high-reliability areas of the defocus amount calculated for each image area can be identified.
A specific example of this processing will be described with reference to FIG. 10 and subsequent drawings.
FIG. 10 shows the following data:
(A) Captured image
(B) Defocus map
(A) Captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
(B) Defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a defocus map in which the defocus amount for each image area is output as a luminance value (for example, 0 to 255), generated by the defocus map generation unit 204 based on the defocus amount for each image area calculated by the defocus amount calculation unit 202 from the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
As described above, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area of the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
Based on the reliability data of the defocus amount for each image area input from the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206 extracts, for example, defocus amount low-reliability areas and defocus amount high-reliability areas, as shown in FIG. 10(B).
Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with predefined reliability thresholds to extract defocus amount low-reliability areas and defocus amount high-reliability areas.
For example, using a predefined low-reliability threshold Th1 and high-reliability threshold Th2,
(Judgment Formula 1) Defocus amount reliability ≤ Th1
an image area that satisfies the above Judgment Formula 1 is extracted as a defocus amount low-reliability area.
Furthermore,
(Judgment Formula 2) Th2 ≤ Defocus amount reliability
an image area that satisfies the above Judgment Formula 2 is extracted as a defocus amount high-reliability area.
The image area unit physical quantity reliability correspondence processing execution unit 206 extracts defocus amount low-reliability areas and defocus amount high-reliability areas according to these judgment formulas, and based on the extraction results, executes processing for generating an output image in which the low-reliability areas and high-reliability areas of the defocus amount calculated for each image area can be identified.
A specific example will be described with reference to FIG. 11 and subsequent drawings.
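A minimal sketch of the two judgment formulas follows. The patent specifies only the formulas themselves; the concrete threshold values and the label names are illustrative assumptions.

```python
def classify_reliability(reliability, th1=0.3, th2=0.8):
    """Classify a per-image-area defocus amount reliability value.

    (Judgment Formula 1) reliability <= Th1  -> low-reliability area
    (Judgment Formula 2) Th2 <= reliability  -> high-reliability area
    Values between Th1 and Th2 satisfy neither formula and are left
    unmarked. The default thresholds are arbitrary example values.
    """
    if reliability <= th1:
        return "low"
    if reliability >= th2:
        return "high"
    return "unmarked"

labels = [classify_reliability(r) for r in (0.1, 0.5, 0.9)]
```

An area's label then selects the graphic data (if any) superimposed on the captured image for that area.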
FIG. 11 shows the following data:
(A) Captured image
(B) Defocus map
(C) Output image
(A) Captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
(B) Defocus map is the defocus map generated by the defocus map generation unit 204.
(C) Output image is an example of an output image generated by the image area unit physical quantity reliability correspondence processing execution unit 206 based on (A) the captured image and (B) the defocus map.
The example of (C) output image shown in FIG. 11 is an example of an output image in which defocus amount low-reliability areas can be identified.
The image area unit physical quantity reliability correspondence processing execution unit 206 extracts, from (B) the defocus map, the areas where the reliability of the defocus amount for each image area is equal to or lower than the aforementioned low-reliability threshold Th1, that is, the image areas satisfying
(Judgment Formula 1) Defocus amount reliability ≤ Th1
as defocus amount low-reliability areas, and superimposes graphic data that makes these low-reliability areas identifiable on (A) the captured image to generate (C) the output image.
The defocus amount low-reliability areas shown in the (C) output image of FIG. 11 are composed of, for example, graphic data of a plurality of semi-transparent red rectangular blocks. Each rectangular block corresponds to one image area, which is the unit of defocus amount calculation. Note that red is an example, and other colors may be used.
This (C) output image is output to, for example, the monitor 117 of the imaging apparatus 100.
By looking at the image output to the monitor 117, the user can easily confirm the areas where the defocus amount may not have been calculated correctly.
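The semi-transparent block overlay described above can be sketched as follows; the block size, the alpha value, and the use of a boolean grid mask are illustrative assumptions.

```python
import numpy as np

def overlay_reliability_blocks(image, low_mask, block=16, alpha=0.5,
                               color=(255, 0, 0)):
    """Superimpose semi-transparent colored blocks on a captured image.

    Sketch of the overlay in FIG. 11(C): low_mask[i, j] flags the image
    area (block) at grid position (i, j) as a low-reliability area, and
    each flagged block is alpha-blended with `color` (red by default).
    Block size, alpha, and color are illustrative assumptions.
    """
    out = image.astype(np.float32)
    for i, j in zip(*np.nonzero(low_mask)):
        ys, xs = i * block, j * block
        region = out[ys:ys + block, xs:xs + block]
        region[:] = (1 - alpha) * region + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)

# 64x64 gray image split into a 4x4 grid of 16-pixel blocks; flag one block.
img = np.full((64, 64, 3), 128, np.uint8)
mask = np.zeros((4, 4), bool)
mask[0, 0] = True
marked = overlay_reliability_blocks(img, mask)
```

The same routine with a blue color would produce the high-reliability overlay of FIG. 12(C) from the Judgment Formula 2 mask.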
The (C) output image in FIG. 11 is an example of an output image in which defocus amount low-reliability areas can be identified. Next, an example of an output image in which defocus amount high-reliability areas can be identified will be described with reference to FIG. 12.
As in FIG. 11, FIG. 12 shows the following data:
(A) Captured image
(B) Defocus map
(C) Output image
(A) Captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
(B) Defocus map is the defocus map generated by the defocus map generation unit 204.
(C) Output image is an example of an output image generated by the image area unit physical quantity reliability correspondence processing execution unit 206 based on (A) the captured image and (B) the defocus map.
The example of (C) output image shown in FIG. 12 is an example of an output image in which defocus amount high-reliability areas can be identified.
The image area unit physical quantity reliability correspondence processing execution unit 206 extracts, from (B) the defocus map, the areas where the reliability of the defocus amount for each image area is equal to or higher than the aforementioned high-reliability threshold Th2, that is, the image areas satisfying
(Judgment Formula 2) Th2 ≤ Defocus amount reliability
as defocus amount high-reliability areas, and superimposes graphic data that makes these high-reliability areas identifiable on (A) the captured image to generate (C) the output image.
The defocus amount high-reliability areas shown in the (C) output image of FIG. 12 are composed of, for example, graphic data of a plurality of semi-transparent blue rectangular blocks. Each rectangular block corresponds to one image area, which is the unit of defocus amount calculation. Note that blue is an example, and other colors may be used.
This (C) output image is output to, for example, the monitor 117 of the imaging apparatus 100.
By looking at the image output to the monitor 117, the user can easily confirm the areas where the defocus amount has been calculated correctly.
As described above, Embodiment 1 generates and outputs an image in which graphic data that makes the reliability of the defocus amount for each image area identifiable is superimposed on the captured image. With the graphic data superimposed on the captured image, the user can distinguish and confirm the areas where the defocus amount has been calculated correctly and the areas where it has not.
Note that the digital signal processing unit 108 described with reference to FIG. 9 is configured to include all of the phase difference information acquisition unit 201, the defocus amount calculation unit 202, the AF control signal generation unit 203, the defocus map generation unit 204, the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206, the image information acquisition unit 211, the image signal processing unit 212, and the image output unit 213, but this configuration is an example.
Part of the configuration described with reference to FIG. 9 may be placed outside the digital signal processing unit 108 of the imaging apparatus 100. The data processing may also be executed by an external device separate from the imaging apparatus 100.
Specifically, for example, the image signal processing unit 212 can be configured outside the digital signal processing unit 108. The image signal processing may also be executed by an external device other than the imaging apparatus 100, such as a PC.
[4-2. (Embodiment 2) An example of generating and outputting an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the captured image]
Next, as Embodiment 2, an example of generating and outputting an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the captured image will be described.
FIG. 13 is a block diagram for explaining the configuration of Embodiment 2.
FIG. 13 shows an example of the internal configuration of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG. 1.
As shown in FIG. 13, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, an image output unit 213, and a distance information calculation unit 221.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, RGB image signals and phase difference detection signals (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 13 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signal (detection information) output from the phase difference detection pixels.
On the other hand, the image information acquisition unit 211 shown in FIG. 13 selects only the image signal (for example, an RGB image signal) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 211 is input to the image signal processing unit 212.
The image signal processing unit 212 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal, and outputs the processed image (for example, an RGB image) to the image output unit 213.
Note that in Embodiment 2, as in Embodiment 1 described above, the image area unit physical quantity reliability correspondence processing execution unit 206 performs image control processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
In Embodiment 2, an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the image (for example, an RGB image) generated by the image signal processing unit 212 is generated and output.
 画像信号処理部212が生成した画像(例えばRGB画像)、または、画像信号処理部212の生成画像に対して画像領域単位物理量信頼度対応処理実行部206が変更、加工した画像は、画像出力部213に出力される。 The image (for example, an RGB image) generated by the image signal processing unit 212, or an image obtained by the image area unit physical quantity reliability correspondence processing execution unit 206 changing or processing the image generated by the image signal processing unit 212, is output to the image output unit 213.
 画像出力部213は、画像信号処理部212から入力した画像を出力する。例えばモニタ117、ビューファインダ(EVF)116、記録デバイス115に対する画像出力を実行する。 The image output unit 213 outputs the image input from the image signal processing unit 212 . For example, image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
 位相差情報取得部201は、A/D変換部105からの入力信号から、位相差検出画素の出力である位相差検出信号(検波情報)を選択し、選択した位相差検出信号(検波情報)をデフォーカス量算出部202に出力する。 The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
 デフォーカス量算出部202は、微小な画像領域単位、例えば複数画素から構成される画像領域単位でフォーカスのずれ量、すなわち合焦距離と被写体距離とのずれ量(デフォーカス量(DF))を算出する。
 先に説明したように、位相差検出方式においては、焦点検出用センサとして機能する一組の位相差検出画素各々の受光量に応じて出力される信号のずれ量に基づいてフォーカスレンズのデフォーカス量が算出される。
The defocus amount calculation unit 202 calculates the amount of focus deviation, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area unit, for example, an image area unit composed of a plurality of pixels.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels that function as focus detection sensors.
 AF制御信号生成部203は、このデフォーカス量に基づいてフォーカスレンズを、例えばユーザの指定した被写体に対する合焦位置(フォーカス位置)に設定するためのオートフォーカス制御信号(AF制御信号)を生成し、生成したAF制御信号をAF制御部112aに出力する。 Based on this defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, a subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
 AF制御部112aは、AF制御信号生成部203から入力するオートフォーカス制御信号(AF制御信号)に従って、フォーカスレンズを駆動し、フォーカスレンズを例えばユーザの指定した被写体に対する合焦位置(フォーカス位置)に設定する。 The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, a subject designated by the user.
 なお、合焦位置(フォーカス位置)に設定される被写体は、撮影画像の画像領域全てではなく、例えば人物などユーザが指定した被写体である。その他の被写体、例えば背景などの画像は、合焦状態にはなく、ぼけた画像となる。 Note that the subject to be set at the in-focus position (focus position) is not the entire image area of the captured image, but a subject specified by the user such as a person. Images of other objects, such as the background, are out of focus and appear blurred.
 デフォーカス量算出部202は、例えば先に図7を参照して説明したように、例えばn×m画素等、微小な画像領域単位でデフォーカス量、すなわち合焦距離と被写体距離とのずれ量に相当するデフォーカス量を算出する。 As described above with reference to FIG. 7, the defocus amount calculation unit 202 calculates the defocus amount, that is, the defocus amount corresponding to the amount of deviation between the in-focus distance and the subject distance, in units of minute image areas such as n×m pixels.
 距離情報算出部221は、デフォーカス量算出部202が算出した撮影画像の画像領域単位のデフォーカス量を入力し、この画像領域単位のデフォーカス量に基づいて、撮影画像の画像領域単位の距離情報を算出する。
 距離情報算出部221は、被写体距離に応じて変化する物理量である距離値を画像領域単位で算出する。
 距離情報算出部221は、例えば画像領域単位の距離値を画素値(例えば0~255)で示したデプスマップを生成する。
The distance information calculation unit 221 receives the defocus amount for each image area of the captured image calculated by the defocus amount calculation unit 202, and calculates distance information for each image area of the captured image based on this per-area defocus amount.
The distance information calculation unit 221 calculates a distance value, which is a physical quantity that changes according to the object distance, for each image area.
The distance information calculation unit 221 generates, for example, a depth map indicating the distance value for each image area by pixel values (eg, 0 to 255).
 距離情報算出部221が生成する画像領域単位の(B)距離情報(デプスマップ)は、デフォーカス量算出部202が算出した例えばn×m画素等の微小な画像領域単位のデフォーカス量に基づいて算出した画像領域単位の被写体距離の値を画素値(例えば0~255)として示したデプスマップである。
 高輝度(高画素値)領域は被写体距離が近い領域となり、低輝度(低画素値)領域は被写体距離が遠い領域となる。
The (B) distance information (depth map) for each image area generated by the distance information calculation unit 221 is a depth map in which the subject distance value for each image area, calculated from the defocus amount for each minute image area (for example, n×m pixels) computed by the defocus amount calculation unit 202, is expressed as a pixel value (for example, 0 to 255).
A high luminance (high pixel value) area is an area with a short subject distance, and a low luminance (low pixel value) area is an area with a long subject distance.
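As a minimal sketch of this encoding (nearer subject → higher pixel value, i.e. brighter), assuming a fixed near/far clamping range and a linear mapping; the function name and the particular mapping are illustrative assumptions, not the patent's actual method:

```python
def depth_to_pixel(distance_mm, near_mm, far_mm):
    # Map a subject distance to an 8-bit depth-map value:
    # near -> 255 (bright), far -> 0 (dark), clamped to [near, far].
    d = min(max(distance_mm, near_mm), far_mm)
    return round(255 * (far_mm - d) / (far_mm - near_mm))
```

For example, with a range of 500 mm to 5000 mm, a subject at the near limit maps to 255 and any subject at or beyond the far limit maps to 0.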
 なお、デフォーカス量から被写体距離を算出する処理は、撮像装置のレンズ(フォーカスレンズ)の焦点距離等のパラメータを適用して算出することができる。
 具体的には、以下の(式2)に従って画像領域単位の被写体距離を算出する。
Note that the subject distance can be calculated from the defocus amount by applying parameters such as the focal length of the lens (focus lens) of the imaging device.
Specifically, the subject distance for each image area is calculated according to (Equation 2) below.
[Formula 2 (式2) is provided as an image (JPOXMLDOC01-appb-M000002) in the original publication and is not reproduced here.]
 距離情報算出部221は、上記(式2)に従って、画像領域単位の被写体距離を算出する。 The distance information calculation unit 221 calculates the subject distance for each image area according to the above (Formula 2).
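Because (Equation 2) appears only as an image in this publication, its exact form is not recoverable from the text. The following sketch is offered purely as an illustration of this kind of computation: it applies the standard thin-lens relation (1/f = 1/d_o + 1/d_i) to convert a defocus amount into an estimated subject distance. The function name, units, and parameters are assumptions, not the patent's actual formula.

```python
def subject_distance_mm(focal_length_mm, in_focus_distance_mm, defocus_mm):
    # Image-plane distance for the in-focus subject (thin lens: 1/f = 1/d_o + 1/d_i).
    d_i = 1.0 / (1.0 / focal_length_mm - 1.0 / in_focus_distance_mm)
    # A defocused subject would form its sharp image at d_i + defocus_mm;
    # invert the lens equation again to recover its object-side distance.
    return 1.0 / (1.0 / focal_length_mm - 1.0 / (d_i + defocus_mm))
```

With zero defocus this returns the in-focus distance itself; a nonzero defocus amount shifts the estimate toward a nearer or farther subject depending on its sign.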
 距離情報算出部221が算出した画像領域単位の被写体距離情報は、画像領域単位物理量信頼度算出部205に出力される。 The object distance information for each image area calculated by the distance information calculation unit 221 is output to the image area unit physical quantity reliability calculation unit 205 .
 デフォーカスマップ生成部204は、デフォーカス量算出部202が算出した撮影画像の微細な画像領域単位のデフォーカス量を入力し、この画像領域単位のデフォーカス量に基づいて、画像領域単位のデフォーカス量を識別可能としたデフォーカスマップを生成する。 The defocus map generation unit 204 receives the defocus amount for each fine image area of the captured image calculated by the defocus amount calculation unit 202, and based on this per-area defocus amount, generates a defocus map in which the defocus amount of each image area can be identified.
 例えば、先に図8(B)を参照して説明したようなデフォーカスマップを生成する。
 例えば輝度値(画素値)=0~255の設定としたデフォーカスマップの場合、輝度値(画素値)=255(最高輝度(白))に近いほど、デフォーカス量が小さい、すなわち合焦度が高い画素領域であり、輝度値(画素値)=0(最低輝度(黒))に近いほど、デフォーカス量が大きい、すなわち合焦度が低い画素領域となる。
 このように、デフォーカスマップ生成部204は、例えば、図8(B)に示すようなデフォーカスマップを生成する。
For example, a defocus map as described above with reference to FIG. 8(B) is generated.
For example, in a defocus map with luminance values (pixel values) set from 0 to 255, the closer a pixel region's luminance value (pixel value) is to 255 (maximum luminance (white)), the smaller its defocus amount, that is, the higher its degree of focus; the closer the luminance value (pixel value) is to 0 (minimum luminance (black)), the larger the defocus amount, that is, the lower the degree of focus.
In this way, the defocus map generation unit 204 generates a defocus map such as that shown in FIG. 8(B).
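The luminance encoding just described (255 = in focus, 0 = maximum defocus) can be sketched as follows; the clamping range, rounding, and per-region grid layout are illustrative assumptions:

```python
def defocus_to_luminance(defocus, max_abs_defocus):
    # Encode |defocus| as 8-bit luminance: 255 = in focus (white),
    # 0 = defocus at or beyond the maximum (black).
    mag = min(abs(defocus), max_abs_defocus)
    return round(255 * (1 - mag / max_abs_defocus))

def make_defocus_map(defocus_grid, max_abs_defocus):
    # defocus_grid: per-region defocus amounts (e.g. one value per n x m block)
    return [[defocus_to_luminance(d, max_abs_defocus) for d in row]
            for row in defocus_grid]
```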
 画像領域単位物理量信頼度算出部205は、画像領域単位の物理量である画像領域単位のデフォーカス量の信頼度と、画像領域単位の距離情報の信頼度を算出する。 The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area, and the reliability of the distance information for each image area.
 なお、画像領域単位物理量信頼度算出部205は、画像領域単位の距離情報の信頼度を、画像領域単位のデフォーカス量の信頼度に応じた信頼度として算出する。
 すなわち、デフォーカス量の信頼度の低い画像領域は、デフォーカス量に基づいて算出した距離値の信頼度も低いと判断し、デフォーカス量の信頼度の高い画像領域は、デフォーカス量に基づいて算出した距離値の信頼度も高いと判断する。
Note that the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the distance information for each image area as a reliability corresponding to the reliability of the defocus amount for that image area.
That is, for an image area in which the defocus amount has low reliability, the distance value calculated from that defocus amount is also judged to have low reliability, and for an image area in which the defocus amount has high reliability, the distance value calculated from that defocus amount is also judged to have high reliability.
 先の実施例1と同様、画像領域単位物理量信頼度算出部205は、デフォーカス量算出部202の算出した画像領域単位のデフォーカス量や、デフォーカスマップ生成部204が生成したデフォーカスマップを入力し、これらの入力データに基づいて、画像領域単位で算出されたデフォーカス量の信頼度を算出する。 As in the first embodiment, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and based on these input data, calculates the reliability of the defocus amount calculated for each image area.
 先の実施例1において説明したように、画像領域単位のデフォーカス量の信頼度算出処理の手法としては様々な手法が適用可能である。
 例えば画像信号処理部212が生成した画像(RGB画像)のコントラストや、画像周波数、エッジ検出結果等を用いて画像領域単位のデフォーカス量信頼度を算出する手法が利用可能である。
 また、実施例1において説明した(式1)を用いた手法、すなわち、デフォーカス量算出に用いる視差データの2つの波形の相互相関関数を用いて画像領域単位のデフォーカス量信頼度を算出する手法も可能である。
As described in the first embodiment, various methods can be applied to calculate the reliability of the defocus amount for each image area.
For example, a method that calculates the defocus amount reliability for each image area using the contrast, image frequency, edge detection results, or the like of the image (RGB image) generated by the image signal processing unit 212 can be used.
A method using (Equation 1) described in the first embodiment, that is, calculating the defocus amount reliability for each image area using the cross-correlation function of the two parallax-data waveforms used for defocus amount calculation, is also possible.
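As a sketch of the cross-correlation approach mentioned here ((Equation 1) itself is described in the first embodiment and not reproduced in this passage), the normalized cross-correlation of the two parallax waveforms can serve as a reliability score, approaching 1.0 when the waveform pair matches well. This particular formula is an assumption for illustration, not necessarily the patent's (Equation 1).

```python
def defocus_reliability(wave_a, wave_b):
    # Normalized cross-correlation of the two phase-difference waveforms;
    # close to 1.0 when the pair matches well (high reliability).
    n = len(wave_a)
    mean_a = sum(wave_a) / n
    mean_b = sum(wave_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(wave_a, wave_b))
    den = (sum((a - mean_a) ** 2 for a in wave_a)
           * sum((b - mean_b) ** 2 for b in wave_b)) ** 0.5
    return num / den if den else 0.0
```

A pair of identical waveforms scores 1.0, while poorly matched (for example, inverted) waveforms score low or negative, indicating an unreliable defocus estimate for that image area.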
 なお、本実施例2において算出するデフォーカス量信頼度も、実施例1で説明したと同様、画素領域単位のデフォーカス量の信頼度である。
 画像領域単位物理量信頼度算出部205は、デフォーカス量算出部202が算出した画像領域単位のデフォーカス量、すなわち図8(B)に示すデフォーカスマップ内の1つの矩形領域である画像領域単位のデフォーカス量を用いて画像領域単位のデフォーカス量の信頼度を算出する。
Note that the defocus amount reliability calculated in the second embodiment is also, as described in the first embodiment, the reliability of the defocus amount for each pixel area.
The image area unit physical quantity reliability calculation unit 205 uses the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount of each image area corresponding to one rectangular area in the defocus map shown in FIG. 8(B), to calculate the reliability of the defocus amount for each image area.
 このように、画像領域単位物理量信頼度算出部205は、例えば、デフォーカス量算出に用いた視差データの2つの波形の相互相関関数を利用して、デフォーカス量算出部202が算出した画像領域単位のデフォーカス量の信頼度を算出する。 In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202, using, for example, the cross-correlation function of the two parallax-data waveforms used for defocus amount calculation.
 さらに、画像領域単位物理量信頼度算出部205は、画像領域単位のデフォーカス量の信頼度に応じて、画像領域単位の距離情報の信頼度を算出する。
 前述したように、画像領域単位の距離情報の信頼度は、画像領域単位のデフォーカス量の信頼度に応じた信頼度として算出する。
 すなわち、デフォーカス量の信頼度の低い画像領域は、デフォーカス量に基づいて算出した距離値の信頼度も低いと判断し、デフォーカス量の信頼度の高い画像領域は、デフォーカス量に基づいて算出した距離値の信頼度も高いと判断する。
Furthermore, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the distance information for each image area according to the reliability of the defocus amount for that image area.
As described above, the reliability of the distance information for each image area is calculated as a reliability corresponding to the reliability of the defocus amount for that image area.
That is, for an image area in which the defocus amount has low reliability, the distance value calculated from that defocus amount is also judged to have low reliability, and for an image area in which the defocus amount has high reliability, the distance value calculated from that defocus amount is also judged to have high reliability.
 画像領域単位物理量信頼度算出部205が算出した画像領域単位の距離情報の信頼度データは、画像領域単位物理量信頼度対応処理実行部206に出力される。 The reliability data of the distance information for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
 画像領域単位物理量信頼度対応処理実行部206は、画像領域単位物理量信頼度算出部205が算出した画像領域単位の距離情報の信頼度データに応じて、画像信号処理部212が生成した撮影画像(例えばRGB画像)に対する画像制御を実行する。
 具体的には、画像領域単位物理量信頼度算出部205が算出した画像領域単位の距離情報の信頼度に応じて、信頼度の高い距離情報を持つ領域間の距離比データを生成して撮影画像上に重畳して表示する。
 例えば予め規定したしきい値以上の信頼度の距離値を持つ画像領域を選択して選択した画像領域にある被写体間の距離比データを生成して撮影画像上に重畳して表示する。
The image area unit physical quantity reliability correspondence processing execution unit 206 executes image control on the captured image (for example, an RGB image) generated by the image signal processing unit 212, according to the reliability data of the distance information for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
Specifically, according to the reliability of the distance information for each image area calculated by the image area unit physical quantity reliability calculation unit 205, distance ratio data between areas having highly reliable distance information is generated and displayed superimposed on the captured image.
For example, image areas having distance values with reliability equal to or greater than a predetermined threshold are selected, and distance ratio data between subjects in the selected image areas is generated and displayed superimposed on the captured image.
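The threshold-based selection and distance-ratio generation described above can be sketched as follows; the threshold value, the flat list of per-region values, and the choice of the nearest and farthest reliable regions as the in-focus distance a and background distance b are illustrative assumptions:

```python
def separation_ratio(region_distances, region_reliabilities, threshold=0.8):
    # Keep only distance values whose reliability clears the threshold,
    # then report background distance b over in-focus distance a (b / a).
    reliable = [d for d, r in zip(region_distances, region_reliabilities)
                if r >= threshold]
    if len(reliable) < 2:
        return None  # not enough trustworthy regions to form a ratio
    a, b = min(reliable), max(reliable)
    return b / a
```

For example, with reliable distances of 1000 mm (subject) and 3000 mm (background), the separation degree would be 3.0, while a frame with too few reliable regions yields no ratio at all.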
 この処理の具体例について、図14を参照して説明する。 A specific example of this process will be described with reference to FIG.
 図14には以下の各データを示している。
 (A)撮影画像
 (B)出力画像
 (p)分離度(距離比)算出例
FIG. 14 shows the following data.
(A) Photographed image
(B) Output image
(p) Separation degree (distance ratio) calculation example
 (A)撮影画像は、画像情報取得部211から画像信号処理部212に入力された画像信号に基づいて画像信号処理部212が生成した撮影画像(RGB画像)である。
 (B)出力画像は、画像領域単位物理量信頼度対応処理実行部206が(A)撮影画像に基づいて生成した出力画像の一例である。
(A) A captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212 .
The (B) output image is an example of an output image generated by the image-region unit physical quantity reliability correspondence processing execution unit 206 based on the (A) captured image.
 (B)出力画像の左上には、合焦被写体と背景被写体の分離度(=距離比)が重畳表示されている。
 この分離度(=距離比)データは、画像領域単位物理量信頼度算出部205が生成したデータである。画像領域単位物理量信頼度算出部205は、画像領域単位の距離情報の信頼度に応じて、信頼度の高い距離情報を持つ領域として、合焦被写体領域と、一部の背景領域を選択し、これらの領域間の距離比を算出し、これを分離度(距離比)データとして撮影画像上に重畳して表示する。
(B) The degree of separation (=distance ratio) between the focused subject and the background subject is superimposed on the upper left of the output image.
This separation degree (=distance ratio) data is generated by the image area unit physical quantity reliability calculation unit 205. According to the reliability of the distance information for each image area, the image area unit physical quantity reliability calculation unit 205 selects the in-focus subject area and a part of the background area as areas having highly reliable distance information, calculates the distance ratio between these areas, and displays it as separation degree (distance ratio) data superimposed on the captured image.
 分離度(距離比)データの算出例は、図14(p)分離度(距離比)算出例に示す通りである。すなわち、例えばカメラから合焦被写体までの距離をa、カメラから背景領域までの距離をbとする。
 このとき、合焦被写体と背景領域との分離度(距離比)は、b/aとなる。
 図14(B)出力画像の左上には、このようにして算出された
 分離度(距離比)=b/a
 が表示される。
A calculation example of the separation degree (distance ratio) data is as shown in the FIG. 14(p) separation degree (distance ratio) calculation example. For example, let a be the distance from the camera to the in-focus subject, and b be the distance from the camera to the background area.
At this time, the separation degree (distance ratio) between the in-focus subject and the background area is b/a.
In the upper left of the output image in FIG. 14(B), the separation degree (distance ratio) = b/a calculated in this way is displayed.
 この(B)出力画像は、例えば、撮像装置100のモニタ117に出力される。
 ユーザは、モニタ117に出力された画像を見て、距離が正しく算出されている領域間の分離度(距離比)を容易に確認することが可能となる。
This (B) output image is output to the monitor 117 of the imaging apparatus 100, for example.
By looking at the image output to the monitor 117, the user can easily check the degree of separation (distance ratio) between the regions whose distances are calculated correctly.
 このように本実施例2は、画像領域単位の距離情報の信頼度を算出し、信頼度の高い距離情報を持つ領域間の分離度(距離比)を確認可能としたデータを撮影画像上に重畳した画像を生成して出力する実施例であり、ユーザは、撮影画像に重畳された分離度(距離比)データを見て、距離が正しく算出されている領域間の分離度(距離比)を容易に確認することが可能となる。 As described above, the second embodiment calculates the reliability of the distance information for each image area, and generates and outputs an image in which data enabling confirmation of the separation degree (distance ratio) between areas having highly reliable distance information is superimposed on the captured image; by viewing the separation degree (distance ratio) data superimposed on the captured image, the user can easily confirm the separation degree (distance ratio) between areas whose distances have been calculated correctly.
 なお、図13を参照して説明したデジタル信号処理部108は、デジタル信号処理部108内に位相差情報取得部201、デフォーカス量算出部202、AF制御信号生成部203、デフォーカスマップ生成部204、画像領域単位物理量信頼度算出部205、画像領域単位物理量信頼度対応処理実行部206、画像情報取得部211、画像信号処理部212、画像出力部213、距離情報算出部221、これらの全ての構成を有する設定としているが、この構成は一例である。 Note that the digital signal processing unit 108 described with reference to FIG. 13 is configured to include within it all of the following: the phase difference information acquisition unit 201, defocus amount calculation unit 202, AF control signal generation unit 203, defocus map generation unit 204, image area unit physical quantity reliability calculation unit 205, image area unit physical quantity reliability correspondence processing execution unit 206, image information acquisition unit 211, image signal processing unit 212, image output unit 213, and distance information calculation unit 221; however, this configuration is only an example.
 図13を参照して説明した構成の一部については、撮像装置100のデジタル信号処理部108以外の構成としてもよい。また、撮像装置100と異なる外部装置においてデータ処理を実行する構成としてもよい。
 具体的には、例えば、画像信号処理部212については、デジタル信号処理部108の外部に構成することが可能である。また、撮像装置100以外の外部装置、例えばPC等において画像信号処理を実行する構成としてもよい。
A part of the configuration described with reference to FIG. 13 may be provided outside the digital signal processing unit 108 of the imaging apparatus 100, and the data processing may also be executed by an external device different from the imaging apparatus 100.
Specifically, for example, the image signal processing unit 212 can be configured outside the digital signal processing unit 108. The image signal processing may also be executed by an external device other than the imaging apparatus 100, such as a PC.
  [4-3.(実施例3)マスク安定性情報を撮影画像上に重畳した画像を生成して出力する実施例]
 次に、実施例3として、マスク安定性情報を撮影画像上に重畳した画像を生成して出力する実施例について説明する。
[4-3. (Example 3) Example of generating and outputting an image in which mask stability information is superimposed on a photographed image]
Next, as Example 3, an example of generating and outputting an image in which mask stability information is superimposed on a photographed image will be described.
 例えば撮影画像から選択した特定の被写体以外の背景領域をグリーンバック、あるいはブルーバック等の均一な色に設定するマスク処理を行う場合がある。
 このようなマスク処理を行った画像のグリーンバック領域やブルーバック領域に他の画像、例えば新たな背景画像を合成することで、選択した特定被写体を新たな背景画像上に表示した合成画像を生成することが可能となる。
 本実施例は、このようなマスク処理を行う場合に、高精度なマスク設定が可能か否かを示すマスク安定性情報を生成して、撮影画像上に重畳して出力する実施例である。
For example, mask processing may be performed to set the background area other than a specific subject selected from a captured image to a uniform color such as a green screen or a blue screen.
By compositing another image, for example a new background image, into the green-screen or blue-screen area of the masked image, it becomes possible to generate a composite image in which the selected specific subject is displayed on the new background image.
The present embodiment generates mask stability information indicating whether or not highly accurate mask setting is possible when performing such mask processing, and outputs the information superimposed on the captured image.
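The masking-and-compositing flow just described can be sketched as follows, assuming the mask is available as a per-pixel boolean grid (True for the selected subject); this representation is an assumption for illustration:

```python
def composite(foreground, subject_mask, new_background):
    # Where the mask marks the selected subject, keep the captured pixel;
    # elsewhere (the green/blue-screen region) take the new background pixel.
    return [[f if m else b
             for f, m, b in zip(f_row, m_row, b_row)]
            for f_row, m_row, b_row in zip(foreground, subject_mask, new_background)]
```

The accuracy of the resulting composite depends directly on how accurately the mask boundary separates the subject from the background, which is what the mask stability information of this embodiment is intended to indicate.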
 図15は、本実施例3の構成を説明するブロック図である。
 図15は、図1を参照して説明した撮像装置100の構成要素であるデジタル信号処理部108の内部構成例を示している。
FIG. 15 is a block diagram for explaining the configuration of the third embodiment.
FIG. 15 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
 図15に示すように、デジタル信号処理部108は、位相差情報取得部201、デフォーカス量算出部202、AF制御信号生成部203、デフォーカスマップ生成部204、画像領域単位物理量信頼度算出部205、画像領域単位物理量信頼度対応処理実行部206、画像情報取得部211、画像信号処理部212、画像出力部213を有する。 As shown in FIG. 15, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit. 205 , an image area unit physical quantity reliability correspondence processing execution unit 206 , an image information acquisition unit 211 , an image signal processing unit 212 , and an image output unit 213 .
 デジタル信号処理部108には、前段のA/D変換部105からRGB画像信号と、位相差検出画素の出力である位相差検出信号(検波情報)が入力される。 The digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
 図15に示す位相差情報取得部201は、A/D変換部105からの入力信号から、位相差検出画素の出力である位相差検出信号(検波情報)のみを選択する。
 一方、図15に示す画像情報取得部211は、A/D変換部105からの入力信号から、画像信号(例えばRGB画像信号)のみを選択する。
A phase difference information acquisition unit 201 shown in FIG. 15 selects only the phase difference detection signal (detection information) that is the output of the phase difference detection pixel from the input signal from the A/D conversion unit 105 .
On the other hand, the image information acquisition unit 211 shown in FIG. 15 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
 画像情報取得部211が取得した画像信号は、画像信号処理部212に入力される。
 画像信号処理部212は、画像信号に対してデモザイク処理、ホワイトバランス調整、ガンマ補正等の様々な画像信号処理を実行し、処理後の画像(例えばRGB画像)を画像出力部213に出力する。
The image signal acquired by the image information acquiring section 211 is input to the image signal processing section 212 .
The image signal processing unit 212 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and outputs the processed image (eg, RGB image) to the image output unit 213 .
 なお、本実施例3においても、先に説明した実施例1,2と同様、画像信号処理部212が生成した画像(例えばRGB画像)に対して、さらに画像領域単位物理量信頼度対応処理実行部206による画像制御処理が行われる。 Note that in the third embodiment as well, as in the first and second embodiments described above, the image area unit physical quantity reliability correspondence processing execution unit 206 further performs image control processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
 本実施例3では、画像信号処理部212が生成した画像(例えばRGB画像)に対して、デフォーカス量の信頼度に応じたマスク安定性情報を重畳して出力する処理が行われる。 In the third embodiment, a process of superimposing and outputting mask stability information according to the reliability of the defocus amount is performed on an image (for example, an RGB image) generated by the image signal processing unit 212 .
 画像信号処理部212が生成した画像(例えばRGB画像)、または、画像信号処理部212の生成画像に対して画像領域単位物理量信頼度対応処理実行部206が変更、加工した画像は、画像出力部213に出力される。 The image (for example, an RGB image) generated by the image signal processing unit 212, or an image obtained by the image area unit physical quantity reliability correspondence processing execution unit 206 changing or processing the image generated by the image signal processing unit 212, is output to the image output unit 213.
 画像出力部213は、画像信号処理部212から入力した画像を出力する。例えばモニタ117、ビューファインダ(EVF)116、記録デバイス115に対する画像出力を実行する。 The image output unit 213 outputs the image input from the image signal processing unit 212 . For example, image output to the monitor 117, viewfinder (EVF) 116, and recording device 115 is executed.
 位相差情報取得部201は、A/D変換部105からの入力信号から、位相差検出画素の出力である位相差検出信号(検波情報)を選択し、選択した位相差検出信号(検波情報)をデフォーカス量算出部202に出力する。 The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information), which is the output of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
 デフォーカス量算出部202は、微小な画像領域単位、例えば複数画素から構成される画像領域単位でフォーカスのずれ量、すなわち合焦距離と被写体距離とのずれ量(デフォーカス量(DF))を算出する。
 先に説明したように、位相差検出方式においては、焦点検出用センサとして機能する一組の位相差検出画素各々の受光量に応じて出力される信号のずれ量に基づいてフォーカスレンズのデフォーカス量が算出される。
The defocus amount calculation unit 202 calculates the amount of focus deviation, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area unit, for example, an image area unit composed of a plurality of pixels.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels that function as focus detection sensors.
 AF制御信号生成部203は、このデフォーカス量に基づいてフォーカスレンズを、例えばユーザの指定した被写体に対する合焦位置(フォーカス位置)に設定するためのオートフォーカス制御信号(AF制御信号)を生成し、生成したAF制御信号をAF制御部112aに出力する。 Based on this defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, a subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
 AF制御部112aは、AF制御信号生成部203から入力するオートフォーカス制御信号(AF制御信号)に従って、フォーカスレンズを駆動し、フォーカスレンズを例えばユーザの指定した被写体に対する合焦位置(フォーカス位置)に設定する。 The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, a subject designated by the user.
 なお、合焦位置(フォーカス位置)に設定される被写体は、撮影画像の画像領域全てではなく、例えば人物などユーザが指定した被写体である。その他の被写体、例えば背景などの画像は、合焦状態にはなく、ぼけた画像となる。 Note that the subject to be set at the in-focus position (focus position) is not the entire image area of the captured image, but a subject specified by the user such as a person. Images of other objects, such as the background, are out of focus and appear blurred.
 デフォーカス量算出部202は、例えば先に図7を参照して説明したように、例えばn×m画素等、微小な画像領域単位でデフォーカス量、すなわち合焦距離と被写体距離とのずれ量に相当するデフォーカス量を算出する。 As described above with reference to FIG. 7, the defocus amount calculation unit 202 calculates the defocus amount, that is, the defocus amount corresponding to the amount of deviation between the in-focus distance and the subject distance, in units of minute image areas such as n×m pixels.
 デフォーカスマップ生成部204は、デフォーカス量算出部202が算出した撮影画像の微細な画像領域単位のデフォーカス量を入力し、この画像領域単位のデフォーカス量に基づいて、画像領域単位のデフォーカス量を識別可能としたデフォーカスマップを生成する。 The defocus map generation unit 204 receives the defocus amount for each fine image area of the captured image calculated by the defocus amount calculation unit 202, and based on this per-area defocus amount, generates a defocus map in which the defocus amount of each image area can be identified.
 例えば、先に図8(B)を参照して説明したようなデフォーカスマップを生成する。
 例えば輝度値(画素値)=0~255の設定としたデフォーカスマップの場合、輝度値(画素値)=255(最高輝度(白))に近いほど、デフォーカス量が小さい、すなわち合焦度が高い画素領域であり、輝度値(画素値)=0(最低輝度(黒))に近いほど、デフォーカス量が大きい、すなわち合焦度が低い画素領域となる。
 このように、デフォーカスマップ生成部204は、例えば、図8(B)に示すようなデフォーカスマップを生成する。
For example, a defocus map as described above with reference to FIG. 8(B) is generated.
For example, in a defocus map with luminance values (pixel values) set from 0 to 255, the closer a pixel region's luminance value (pixel value) is to 255 (maximum luminance (white)), the smaller its defocus amount, that is, the higher its degree of focus; the closer the luminance value (pixel value) is to 0 (minimum luminance (black)), the larger the defocus amount, that is, the lower the degree of focus.
In this way, the defocus map generation unit 204 generates a defocus map such as that shown in FIG. 8(B).
 画像領域単位物理量信頼度算出部205は、画像領域単位の物理量である画像領域単位のデフォーカス量の信頼度を算出する。
 先の実施例1と同様、画像領域単位物理量信頼度算出部205は、デフォーカス量算出部202の算出した画像領域単位のデフォーカス量や、デフォーカスマップ生成部204が生成したデフォーカスマップを入力し、これらの入力データに基づいて、画像領域単位で算出されたデフォーカス量の信頼度を算出する。
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area.
As in the first embodiment, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and based on these input data, calculates the reliability of the defocus amount calculated for each image area.
 先の実施例1において説明したように、画像領域単位のデフォーカス量の信頼度算出処理の手法としては様々な手法が適用可能である。
 例えば画像信号処理部212が生成した画像(RGB画像)のコントラストや、画像周波数、エッジ検出結果等を用いて画像領域単位のデフォーカス量信頼度を算出する手法が利用可能である。
 また、実施例1において説明した(式1)を用いた手法、すなわち、デフォーカス量算出に用いる視差データの2つの波形の相互相関関数を用いて画像領域単位のデフォーカス量信頼度を算出する手法も可能である。
As described in the first embodiment, various methods can be applied to calculate the reliability of the defocus amount for each image area.
For example, a method that calculates the defocus amount reliability for each image area using the contrast, image frequency, edge detection results, or the like of the image (RGB image) generated by the image signal processing unit 212 can be used.
A method using (Equation 1) described in the first embodiment, that is, calculating the defocus amount reliability for each image area using the cross-correlation function of the two parallax-data waveforms used for defocus amount calculation, is also possible.
 なお、本実施例3において算出するデフォーカス量信頼度も、実施例1で説明したと同様、画素領域単位のデフォーカス量の信頼度である。
 画像領域単位物理量信頼度算出部205は、デフォーカス量算出部202が算出した画像領域単位のデフォーカス量、すなわち図8(B)に示すデフォーカスマップ内の1つの矩形領域である画像領域単位のデフォーカス量を用いて画像領域単位のデフォーカス量の信頼度を算出する。
Note that the defocus amount reliability calculated in the third embodiment is also, as described in the first embodiment, the reliability of the defocus amount for each pixel area.
The image area unit physical quantity reliability calculation unit 205 uses the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount of each image area corresponding to one rectangular area in the defocus map shown in FIG. 8(B), to calculate the reliability of the defocus amount for each image area.
 このように、画像領域単位物理量信頼度算出部205は、例えば、デフォーカス量算出に用いた視差データの2つの波形の相互相関関数を利用して、デフォーカス量算出部202が算出した画像領域単位のデフォーカス量の信頼度を算出する。 In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202, using, for example, the cross-correlation function of the two parallax-data waveforms used for defocus amount calculation.
 画像領域単位物理量信頼度算出部205が算出した画像領域単位のデフォーカス量の信頼度データは、画像領域単位物理量信頼度対応処理実行部206に出力される。 The reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206 .
 画像領域単位物理量信頼度対応処理実行部206は、画像領域単位物理量信頼度算出部205が算出した画像領域単位のデフォーカス量の信頼度データに応じて、画像信号処理部212が生成した撮影画像(例えばRGB画像)に対する画像制御を実行する。
 具体的には、画像領域単位物理量信頼度算出部205が算出した画像領域単位のデフォーカス量の信頼度に応じたマスク安定性情報を生成して、撮影画像上に重畳して出力する。
The image area unit physical quantity reliability correspondence processing execution unit 206 executes image control on the captured image (for example, an RGB image) generated by the image signal processing unit 212, according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
Specifically, mask stability information corresponding to the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is generated, superimposed on the captured image, and output.
The mask stability information is information indicating whether or not a mask can be set accurately when a mask is applied to an area other than the selected subject in the captured image, for example the background area.
As described above, for example, mask processing is performed to set the background area other than a specific subject selected from the captured image to a uniform color such as a green screen or blue screen. By compositing another image, for example a new background image, into the green-screen or blue-screen area of the masked image, it is possible to generate a composite image in which the selected specific subject is displayed on the new background image.
The present embodiment generates mask stability information indicating whether or not a highly accurate mask can be set when such mask processing is performed, and superimposes the information on the captured image for output.
The image area unit physical quantity reliability correspondence processing execution unit 206 generates mask stability information according to the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205, and superimposes it on the captured image for output.
A specific example of this processing will be described with reference to FIG. 16 and subsequent drawings.
First, an example of mask processing for a captured image and of composite image generation will be described with reference to FIG. 16.
For example, the captured image in FIG. 16(A) is an image in which a person and a house are set as the in-focus subjects; the defocus amount of these in-focus subjects is almost zero. Mask processing is performed to set the background area other than the in-focus subject area as a mask area with a uniform color such as a green screen or blue screen. This mask setting processing produces the mask setting image shown in FIG. 16(B).
Next, another image, for example a new background image, is composited into the green-screen or blue-screen area of the masked image. This processing makes it possible to generate a composite image such as the one shown in FIG. 16(C).
To generate such a composite image, it is essential to set an accurate mask area.
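The compositing step described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: representing the mask as a boolean array per pixel and copying new-background pixels wherever the mask is set are assumptions made for the example.

```python
import numpy as np

def composite(captured, new_background, mask):
    """Replace masked (background) pixels of the captured image with the
    corresponding pixels of a new background image.

    captured, new_background: HxWx3 RGB arrays of the same shape.
    mask: HxW boolean array, True where the green/blue screen was set.
    """
    out = captured.copy()
    out[mask] = new_background[mask]   # keep the in-focus subject, swap the background
    return out

img = np.zeros((2, 2, 3), dtype=np.uint8)      # captured image (all black, stands in for the subject)
bg = np.full((2, 2, 3), 200, dtype=np.uint8)   # new background (uniform gray)
m = np.array([[True, False], [False, True]])   # background areas to replace
result = composite(img, bg, m)
```

Only the pixels flagged by the mask are replaced, so an inaccurate mask directly corrupts the composite image, which is why the stability information described below is useful.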
For example, when the areas where no mask is set are in-focus areas such as the person and the house, and the background area other than the in-focus areas is the mask area, the mask area can be determined by the following processing.
An area whose defocus amount per image area is equal to or less than a predefined threshold value Th, that is, an area satisfying
(Judgment formula a) defocus amount ≤ Th
is judged to be an in-focus area and is determined to be an area that is not masked.
On the other hand, an area that does not satisfy judgment formula a is judged to be an out-of-focus area and is determined to be a mask area to be masked.
The mask area can be determined by such processing.
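The judgment above can be sketched as follows; this is a minimal sketch, not the patent's implementation. The threshold value and the NumPy array representation of the per-image-area defocus map are assumptions for illustration.

```python
import numpy as np

def determine_mask_area(defocus_map, th):
    """Return a boolean mask: True for image areas to be masked (out of focus).

    defocus_map: 2D array of per-image-area defocus amounts (absolute values).
    th: the threshold Th of judgment formula a.
    """
    in_focus = defocus_map <= th   # judgment formula a: defocus amount <= Th
    return ~in_focus               # areas failing formula a become the mask area

# Example: a 2x3 grid of image areas with an assumed threshold Th = 1.0
dmap = np.array([[0.2, 0.8, 3.5],
                 [0.1, 2.2, 4.0]])
mask = determine_mask_area(dmap, th=1.0)
```

Areas with a small defocus amount (the subject) stay unmasked, while the remaining areas form the mask region to be filled with the green or blue screen.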
However, if the defocus amount is an inaccurate value, an accurate mask area cannot be determined even if the mask area determination processing using judgment formula a above is performed.
In the present embodiment, when mask processing is performed, mask stability information indicating whether or not a highly accurate mask can be set is generated, superimposed on the captured image, and output, which makes it possible to notify the user whether or not a stable mask area can be set.
The image area unit physical quantity reliability correspondence processing execution unit 206 generates mask stability information according to the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205, and superimposes it on the captured image for output.
A specific processing example will be described with reference to FIG. 17 and the subsequent drawings.
FIG. 17 shows the following data:
(A) Captured image
(B) Defocus map
(C) Output image
The (A) captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
The (B) defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a map in which the defocus amount for each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
The (C) output image is an example of the output image generated by the image area unit physical quantity reliability correspondence processing execution unit 206 based on the (A) captured image.
Mask stability information is displayed in the upper right of the (C) output image.
As described above, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area in the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
Based on the reliability data of the defocus amount for each image area input from the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206 extracts defocus amount low-reliability areas, for example as shown in FIG. 17(B).
Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predefined reliability threshold to extract defocus amount low-reliability areas.
For example, using a predefined low-reliability threshold Th1, an image area satisfying
(Judgment formula 1) defocus amount reliability ≤ Th1
is extracted as a defocus amount low-reliability area.
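The extraction by judgment formula 1, together with the stability decision built on it (any extracted low-reliability area means stable mask determination is difficult), can be sketched as follows. The threshold value Th1 and the array representation are illustrative assumptions.

```python
import numpy as np

def mask_stability(reliability_map, th1):
    """Apply judgment formula 1 per image area and derive mask stability info.

    reliability_map: 2D array of per-image-area defocus amount reliabilities.
    th1: the low-reliability threshold Th1.
    Returns (low_reliability_mask, stability_label).
    """
    low = reliability_map <= th1   # judgment formula 1: reliability <= Th1
    # If any image area is low-reliability, stable mask determination is difficult.
    label = "mask unstable" if low.any() else "mask stable"
    return low, label

rel = np.array([[0.9, 0.8],
                [0.2, 0.95]])      # one area falls below the assumed Th1
low, label = mask_stability(rel, th1=0.3)
```

The label corresponds to the mask stability information superimposed on the output image in FIGS. 17(C) and 18(C).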
The defocus map shown in FIG. 17(B) contains defocus amount low-reliability areas.
When defocus amount low-reliability areas are detected in this way, the image area unit physical quantity reliability correspondence processing execution unit 206 determines that stable mask area determination processing is difficult. In accordance with this determination, it generates and outputs an output image in which mask stability information indicating "mask unstable" is superimposed on the captured image, as shown in FIG. 17(C).
The (C) output image shown in FIG. 17 is an example of an output image in which mask stability information (= "mask unstable"), indicating that defocus amount low-reliability areas were detected and that determining a stable mask area is difficult, is superimposed on the captured image.
This (C) output image is output to, for example, the monitor 117 of the imaging device 100.
By viewing the image output to the monitor 117, the user can recognize that stable mask setting is difficult.
Next, an example of processing when stable mask setting is possible will be described with reference to FIG. 18.
Like FIG. 17, FIG. 18 shows the following data:
(A) Captured image
(B) Defocus map
(C) Output image
The (A) captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
The (B) defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a map in which the defocus amount for each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
The (C) output image is an example of the output image generated by the image area unit physical quantity reliability correspondence processing execution unit 206 based on the (A) captured image.
Mask stability information is displayed in the upper right of the (C) output image.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area in the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
The example shown in FIG. 18 is an example in which no defocus amount low-reliability area is detected in the defocus map shown in FIG. 18(B).
That is, as described above, this is an example in which no image area satisfying
(Judgment formula 1) defocus amount reliability ≤ Th1
is detected.
The defocus map shown in FIG. 18(B) contains no defocus amount low-reliability areas.
When no defocus amount low-reliability area is detected in this way, the image area unit physical quantity reliability correspondence processing execution unit 206 determines that stable mask area determination processing is possible. In accordance with this determination, it generates and outputs an output image in which mask stability information indicating "mask stable" is superimposed on the captured image, as shown in FIG. 18(C).
The (C) output image shown in FIG. 18 is an example of an output image in which mask stability information (= "mask stable"), indicating that no defocus amount low-reliability area was detected and that a stable mask area can be determined, is superimposed on the captured image.
This (C) output image is output to, for example, the monitor 117 of the imaging device 100.
By viewing the image output to the monitor 117, the user can recognize that stable mask setting is possible.
Note that the examples of mask stability information shown in FIGS. 17 and 18 are only examples; mask stability information can also be displayed using various other characters, symbols, icons, icon displays in different colors, and the like.
The digital signal processing unit 108 described with reference to FIG. 15 is configured to contain all of the following: the phase difference information acquisition unit 201, the defocus amount calculation unit 202, the AF control signal generation unit 203, the defocus map generation unit 204, the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206, the image information acquisition unit 211, the image signal processing unit 212, and the image output unit 213. However, this configuration is only an example.
Some of the components described with reference to FIG. 15 may be arranged outside the digital signal processing unit 108 of the imaging device 100, and the data processing may be executed by an external device different from the imaging device 100.
Specifically, for example, the image signal processing unit 212 can be arranged outside the digital signal processing unit 108, and the image signal processing may be executed by an external device other than the imaging device 100, such as a PC.
[4-4. (Embodiment 4) Embodiment of generating and outputting an image in which a color map outputting colors corresponding to the defocus amount for each image area is superimposed on the captured image]
Next, as Embodiment 4, an embodiment will be described in which an image is generated and output by superimposing, on the captured image, a color map that outputs colors corresponding to the defocus amount for each image area.
For example, a captured image contains subjects with various defocus amounts, such as an in-focus subject whose defocus amount is almost zero and background subjects whose defocus amounts are large.
The present embodiment generates and outputs an output image in which colors corresponding to the defocus amount are set on the captured image.
FIG. 19 is a block diagram explaining the configuration of Embodiment 4. It shows an example of the internal configuration of the digital signal processing unit 108, which is a component of the imaging device 100 described with reference to FIG. 1.
As shown in FIG. 19, the digital signal processing unit 108 has a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, RGB image signals and phase difference detection signals (detection information) that are the outputs of the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 19 selects, from the input signals from the A/D conversion unit 105, only the phase difference detection signals (detection information) that are the outputs of the phase difference detection pixels.
On the other hand, the image information acquisition unit 211 shown in FIG. 19 selects only the image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 211 is input to the image signal processing unit 212.
The image signal processing unit 212 executes various kinds of image signal processing on the image signal, such as demosaic processing, white balance adjustment, and gamma correction, and outputs the processed image (for example, an RGB image) to the image output unit 213.
In Embodiment 4 as well, as in Embodiments 1 to 3 described above, the image (for example, an RGB image) generated by the image signal processing unit 212 is further subjected to image control processing by the image area unit physical quantity reliability correspondence processing execution unit 206.
In Embodiment 4, processing is performed to generate and output an image in which a color map that outputs colors corresponding to the defocus amount for each image area is superimposed on the image (for example, an RGB image) generated by the image signal processing unit 212.
The image (for example, an RGB image) generated by the image signal processing unit 212, or an image obtained by the image area unit physical quantity reliability correspondence processing execution unit 206 changing or processing the generated image, is output to the image output unit 213.
The image output unit 213 outputs the image input from the image signal processing unit 212. For example, it executes image output to the monitor 117, the viewfinder (EVF) 116, and the recording device 115.
The phase difference information acquisition unit 201 selects the phase difference detection signals (detection information) that are the outputs of the phase difference detection pixels from the input signals from the A/D conversion unit 105, and outputs the selected phase difference detection signals (detection information) to the defocus amount calculation unit 202.
The defocus amount calculation unit 202 calculates the focus deviation amount, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area unit, for example an image area unit composed of a plurality of pixels.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the amount of deviation between the signals output according to the amounts of light received by each of a pair of phase difference detection pixels that function as focus detection sensors.
Based on this defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to, for example, the in-focus position (focus position) for the subject specified by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to, for example, the in-focus position (focus position) for the subject specified by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but a subject specified by the user, for example a person. Images of other subjects, for example the background, are not in focus and appear blurred.
As described above with reference to FIG. 7, for example, the defocus amount calculation unit 202 calculates the defocus amount, corresponding to the amount of deviation between the in-focus distance and the subject distance, for each minute image area unit such as n×m pixels.
The defocus map generation unit 204 receives the defocus amount for each fine image area of the captured image calculated by the defocus amount calculation unit 202, and based on this defocus amount for each image area, generates a defocus map in which the defocus amount for each image area can be identified.
For example, a defocus map like the one described above with reference to FIG. 8(B) is generated.
For example, in the case of a defocus map with luminance values (pixel values) set from 0 to 255, the closer the luminance value (pixel value) is to 255 (maximum luminance (white)), the smaller the defocus amount, that is, the higher the degree of focus of the pixel area; the closer the luminance value (pixel value) is to 0 (minimum luminance (black)), the larger the defocus amount, that is, the lower the degree of focus of the pixel area.
In this way, the defocus map generation unit 204 generates, for example, a defocus map as shown in FIG. 8(B).
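The luminance encoding just described can be sketched as follows. The source only specifies that small defocus amounts map near 255 (white) and large ones near 0 (black); the maximum defocus value used for normalization and the linear mapping are assumptions for illustration.

```python
import numpy as np

def defocus_to_luminance(defocus_map, max_defocus):
    """Map per-image-area defocus amounts to luminance values 0-255.

    Defocus amount ~0 (in focus) -> 255 (white); large defocus -> 0 (black).
    A linear mapping with an assumed maximum defocus value is used here.
    """
    d = np.clip(np.abs(defocus_map) / max_defocus, 0.0, 1.0)
    return np.round((1.0 - d) * 255).astype(np.uint8)

# In-focus, half-defocused, and fully defocused image areas
lum = defocus_to_luminance(np.array([0.0, 2.0, 4.0]), max_defocus=4.0)
```

Rendering these values per rectangular image area yields a grayscale map of the kind shown in FIG. 8(B).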
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area.
As in Embodiment 1, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and based on these input data, calculates the reliability of the defocus amount calculated for each image area.
As described in Embodiment 1, various methods can be applied to calculate the reliability of the defocus amount for each image area.
For example, a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, edge detection results, and the like can be used.
A method using (Equation 1) described in Embodiment 1, that is, calculating the defocus amount reliability for each image area using the cross-correlation function of the two parallax-data waveforms used for the defocus amount calculation, is also possible.
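A minimal sketch of the cross-correlation idea follows: the peak of the normalized cross-correlation between the two parallax waveforms serves as a reliability score (a high, well-defined peak means the shift, and hence the defocus amount, is estimated reliably). The normalization and the use of the peak value itself as the score are assumptions; the patent's (Equation 1) is not reproduced here.

```python
import numpy as np

def defocus_reliability(wave_a, wave_b):
    """Reliability score from the normalized cross-correlation peak of the
    two parallax-data waveforms (one per phase difference detection pixel group).

    Returns a value in roughly [0, 1]; values near 1 mean the waveforms match
    well at some shift, so the estimated defocus amount is considered reliable.
    """
    a = wave_a - wave_a.mean()
    b = wave_b - wave_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:                  # flat waveforms carry no usable signal
        return 0.0
    corr = np.correlate(a, b, mode="full") / denom
    return float(corr.max())

# Two copies of the same waveform shifted by 2 samples: high reliability
x = np.sin(np.linspace(0, 4 * np.pi, 64))
score = defocus_reliability(x[:-2], x[2:])
```

Comparing such a score against the threshold Th1 of judgment formula 1 classifies an image area as low-reliability or not.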
Note that the defocus amount reliability calculated in Embodiment 4 is also, as described in Embodiment 1, the reliability of the defocus amount for each pixel area.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount of each image area corresponding to one rectangular region in the defocus map shown in FIG. 8(B).
In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202, for example by using the cross-correlation function of the two parallax-data waveforms used for the defocus amount calculation.
The reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206.
The image area unit physical quantity reliability correspondence processing execution unit 206 executes image control on the captured image (for example, an RGB image) generated by the image signal processing unit 212 according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
Specifically, it generates and outputs an image in which a color map outputting colors corresponding to the defocus amount for each image area is superimposed on the captured image.
A specific example of this processing will be described with reference to FIG. 20 and subsequent drawings.
FIG. 20 shows the following data:
(A) Captured image
(B) Defocus map
(C) Defocus color map
The (A) captured image is a captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
The (B) defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a map in which the defocus amount for each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
The (C) defocus color map is a map obtained by coloring the defocus map that outputs the defocus amount as luminance values (for example, 0 to 255); different colors are set according to the defocus amount.
For example, it is a color map in which yellow is set for areas where the defocus amount is zero to small, green for areas where the defocus amount is medium, and blue for areas where the defocus amount is large.
Various color settings are possible.
For example, in addition to three-level or five-level color settings as described above, it is also possible to change the color smoothly from yellow to blue as the defocus amount changes.
Alternatively, the in-focus area where the defocus amount is almost zero may be left transparent, with no color set, so that the colors of the original captured image are output as they are.
Furthermore, subject objects may be identified, for example by face recognition or semantic segmentation, so that a different specific color is output only for a person's face area or a specific object.
In this way, the image area unit physical quantity reliability correspondence processing execution unit 206 first generates a color map that outputs colors according to the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
The image area unit physical quantity reliability correspondence processing execution unit 206 then generates and outputs an output image in which the generated color map is superimposed on the captured image.
A specific example of this processing will be described with reference to FIG. 21.
FIG. 21 shows the following data.
(A) Captured image
(C) Defocus color map
(D) Output image
(A) The captured image is an image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211.
(C) The defocus color map is a map obtained by coloring the defocus map that outputs the defocus amount as a luminance value (for example, 0 to 255); different colors are set according to the defocus amount.
For example, it is a color map in which areas with a zero-to-small defocus amount are set to yellow, areas with a medium defocus amount to green, and areas with a large defocus amount to blue.
(D) The output image is an output image generated by superimposing the (C) defocus color map on the (A) captured image.
In this way, the image area unit physical quantity reliability correspondence processing execution unit 206 generates and outputs an output image in which the generated color map is superimposed on the captured image.
This (D) output image is output to, for example, the monitor 117 of the imaging apparatus 100.
By viewing the image output to the monitor 117, the user can easily and reliably judge the defocus amount of each area of the captured image.
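The superposition of the color map on the captured image can be sketched as a simple per-pixel alpha blend. The 50% blend ratio and the list-of-rows image layout are assumptions for illustration; the disclosure does not fix how the superposition is implemented.

```python
# Minimal sketch: blend a per-pixel color map over a captured RGB image.
# Images are lists of rows of (R, G, B) tuples; alpha=0.5 is an assumed ratio.

def superimpose(captured, colormap, alpha=0.5):
    """Return an output image with the colormap alpha-blended onto the capture."""
    out = []
    for row_img, row_map in zip(captured, colormap):
        out_row = []
        for (r, g, b), (mr, mg, mb) in zip(row_img, row_map):
            out_row.append((
                round((1 - alpha) * r + alpha * mr),
                round((1 - alpha) * g + alpha * mg),
                round((1 - alpha) * b + alpha * mb),
            ))
        out.append(out_row)
    return out
```

A mid-gray pixel blended with a yellow map color, for instance, yields a yellowish tint while the underlying image content remains visible, which is the behavior the monitor display described above relies on.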
Note that the digital signal processing unit 108 described with reference to FIG. 19 is configured to include all of the following within it: the phase difference information acquisition unit 201, the defocus amount calculation unit 202, the AF control signal generation unit 203, the defocus map generation unit 204, the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206, the image information acquisition unit 211, the image signal processing unit 212, and the image output unit 213; however, this configuration is an example.
Part of the configuration described with reference to FIG. 19 may be provided outside the digital signal processing unit 108 of the imaging apparatus 100, and the data processing may be executed by an external device different from the imaging apparatus 100.
Specifically, for example, the image signal processing unit 212 can be configured outside the digital signal processing unit 108, and the image signal processing may be executed by an external device other than the imaging apparatus 100, such as a PC.
[4-5. (Embodiment 5) Embodiment in which a color map that outputs a color corresponding to the defocus amount for each image area and makes the reliability of the defocus amount for each image area identifiable is superimposed on the captured image to generate and output an image]
Next, as Embodiment 5, an embodiment will be described in which a color map that outputs a color corresponding to the defocus amount for each image area, and that makes the reliability of the defocus amount for each image area identifiable, is superimposed on the captured image to generate and output an image.
Embodiment 5 is a modification of Embodiment 4 described above, in which the color map described in Embodiment 4 is changed to a color map that makes the defocus amount reliability described in Embodiment 1 identifiable.
FIG. 22 is a block diagram for explaining the configuration of Embodiment 5.
FIG. 22 shows an example of the internal configuration of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG. 1.
As shown in FIG. 22, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, RGB image signals and the phase difference detection signals (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 22 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signals (detection information) output from the phase difference detection pixels.
Meanwhile, the image information acquisition unit 211 shown in FIG. 22 selects only the image signals (for example, RGB image signals) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 211 is input to the image signal processing unit 212.
The image signal processing unit 212 performs various kinds of image signal processing on the image signal, such as demosaic processing, white balance adjustment, and gamma correction, and outputs the processed image (for example, an RGB image) to the image output unit 213.
In Embodiment 5 as well, as in Embodiments 1 to 4 described above, the image (for example, an RGB image) generated by the image signal processing unit 212 is further subjected to image control processing by the image area unit physical quantity reliability correspondence processing execution unit 206.
In Embodiment 5, for the image (for example, an RGB image) generated by the image signal processing unit 212, processing is performed to generate and output an image on which is superimposed a color map that outputs a color corresponding to the defocus amount for each image area and that makes the reliability of the defocus amount for each image area identifiable.
The image (for example, an RGB image) generated by the image signal processing unit 212, or that image as changed or processed by the image area unit physical quantity reliability correspondence processing execution unit 206, is output to the image output unit 213.
The image output unit 213 outputs the image input from the image signal processing unit 212. For example, it executes image output to the monitor 117, the viewfinder (EVF) 116, and the recording device 115.
The phase difference information acquisition unit 201 selects, from the input signal from the A/D conversion unit 105, the phase difference detection signals (detection information) output from the phase difference detection pixels, and outputs the selected phase difference detection signals (detection information) to the defocus amount calculation unit 202.
The defocus amount calculation unit 202 calculates the amount of focus deviation, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area, for example, for each image area composed of a plurality of pixels.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amounts of light received by each of a pair of phase difference detection pixels that function as focus detection sensors.
Based on this defocus amount, the AF control signal generation unit 203 generates an autofocus control signal (AF control signal) for setting the focus lens to, for example, the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens in accordance with the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to, for example, the in-focus position (focus position) for the subject designated by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but a subject designated by the user, such as a person. Images of other subjects, for example the background, are out of focus and appear blurred.
As described above with reference to FIG. 7, for example, the defocus amount calculation unit 202 calculates the defocus amount, that is, the defocus amount corresponding to the amount of deviation between the in-focus distance and the subject distance, for each minute image area such as n×m pixels.
The defocus map generation unit 204 receives the defocus amount for each fine image area of the captured image calculated by the defocus amount calculation unit 202, and, based on this defocus amount for each image area, generates a defocus map in which the defocus amount for each image area is identifiable.
For example, it generates a defocus map such as that described above with reference to FIG. 8(B).
For example, in the case of a defocus map in which the luminance value (pixel value) is set to 0 to 255, the closer the luminance value (pixel value) is to 255 (maximum luminance (white)), the smaller the defocus amount, that is, the higher the degree of focus of the pixel area; and the closer the luminance value (pixel value) is to 0 (minimum luminance (black)), the larger the defocus amount, that is, the lower the degree of focus of the pixel area.
In this way, the defocus map generation unit 204 generates a defocus map such as that shown in FIG. 8(B), for example.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is a physical quantity for each image area.
As in Embodiment 1, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and, based on these input data, calculates the reliability of the defocus amount calculated for each image area.
As described in Embodiment 1, various methods can be applied to calculate the reliability of the defocus amount for each image area.
For example, a method of calculating the defocus amount reliability for each image area using the contrast of the image (RGB image) generated by the image signal processing unit 212, the image frequency, edge detection results, and the like can be used.
A method using (Equation 1) described in Embodiment 1, that is, calculating the defocus amount reliability for each image area using the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation, is also possible.
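Since (Equation 1) itself is not reproduced in this section, the cross-correlation approach can only be illustrated with a stand-in: the sketch below uses the peak normalized cross-correlation of the two phase-difference (parallax) waveforms over candidate shifts as a reliability score in [0, 1]. A strong, well-defined correlation peak suggests a trustworthy defocus estimate; a weak peak suggests low reliability. The shift range and the score definition are assumptions, not the disclosure's formula.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length waveforms."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def defocus_reliability(wave_a, wave_b, max_shift=3):
    """Reliability proxy: best correlation of the two parallax waveforms
    over candidate pixel shifts (max_shift is an assumed search range)."""
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = wave_a[s:], wave_b[:len(wave_b) - s]
        else:
            a, b = wave_a[:len(wave_a) + s], wave_b[-s:]
        if len(a) >= 2:
            best = max(best, ncc(a, b))
    return best
```

A pair of waveforms that are clean shifted copies of each other scores near 1, while a waveform paired with unrelated noise scores lower, which is the behavior a threshold-based low-reliability test relies on.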
Note that the defocus amount reliability calculated in Embodiment 5 is also the reliability of the defocus amount for each pixel area, as described in Embodiment 1.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount of each image area corresponding to one rectangular area in the defocus map shown in FIG. 8(B).
In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202, for example by using the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation.
The reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206.
The image area unit physical quantity reliability correspondence processing execution unit 206 executes image control on the captured image (for example, an RGB image) generated by the image signal processing unit 212, according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
Specifically, it generates and outputs an image in which a color map, which outputs a color corresponding to the defocus amount for each image area and makes the reliability of the defocus amount for each image area identifiable, is superimposed on the captured image.
A specific example of this processing will be described with reference to FIG. 23 and subsequent figures.
FIG. 23 shows the following data.
(A) Captured image
(B) Defocus map
(C) Defocus color map
(A) The captured image is an image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211.
(B) The defocus map is a map generated by the defocus map generation unit 204. That is, it is a map that outputs, as a luminance value (for example, 0 to 255), the defocus amount for each image area calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201.
The (B) defocus map shown in FIG. 23 includes a defocus amount low-reliability area.
As described above, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area of the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
Based on the reliability data of the defocus amount for each image area input from the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206 extracts a defocus amount low-reliability area, for example as shown in FIG. 23(B).
Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount low-reliability areas.
For example, using a predetermined low-reliability threshold Th1,
(Judgment Formula 1) defocus amount reliability ≤ Th1
image areas that satisfy Judgment Formula 1 above are extracted as defocus amount low-reliability areas.
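The extraction by Judgment Formula 1 can be sketched directly. The threshold value Th1 = 0.3 is an assumed value for illustration; the disclosure only states that Th1 is predetermined.

```python
TH1 = 0.3  # assumed low-reliability threshold (hypothetical value)

def extract_low_reliability_areas(reliability_map):
    """Return (row, col) indices of image areas whose defocus amount
    reliability satisfies Judgment Formula 1: reliability <= Th1."""
    return [
        (r, c)
        for r, row in enumerate(reliability_map)
        for c, rel in enumerate(row)
        if rel <= TH1
    ]
```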
The defocus map shown in FIG. 23(B) includes a defocus amount low-reliability area.
When a defocus amount low-reliability area is detected in this way, the image area unit physical quantity reliability correspondence processing execution unit 206 generates a defocus color map that makes the defocus amount low-reliability area identifiable.
The (C) defocus color map in FIG. 23 is a map obtained by coloring the defocus map that outputs the defocus amount as a luminance value (for example, 0 to 255); different colors are set according to the defocus amount, and in addition a specific color is set for the defocus amount low-reliability area.
For example, it is a color map in which areas with a zero-to-small defocus amount are set to yellow, areas with a medium defocus amount to green, and areas with a large defocus amount to blue, and in which the defocus amount low-reliability area is further set to red.
Various color settings are possible.
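The combined coloring rule, defocus-dependent color with a red override for low-reliability areas, can be sketched as below. The boundary values SMALL and MEDIUM, the threshold Th1, and the exact RGB values are illustrative assumptions.

```python
# Assumed defocus boundaries on the 0-255 scale and reliability threshold.
SMALL, MEDIUM = 50, 150
TH1 = 0.3
YELLOW, GREEN, BLUE, RED = (255, 255, 0), (0, 255, 0), (0, 0, 255), (255, 0, 0)

def colorize_area(defocus: int, reliability: float) -> tuple:
    """Color for one image area of the reliability-aware defocus color map."""
    if reliability <= TH1:
        return RED  # low-reliability area is flagged with a specific color
    if defocus <= SMALL:
        return YELLOW
    if defocus <= MEDIUM:
        return GREEN
    return BLUE
```

The red override takes precedence over the defocus-dependent color, so a low-reliability area is visible regardless of its estimated defocus amount.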
In this way, the image area unit physical quantity reliability correspondence processing execution unit 206 first sets colors according to the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205, and generates a color map in which the defocus amount low-reliability area is further set to red.
The image area unit physical quantity reliability correspondence processing execution unit 206 then generates and outputs an output image in which the generated color map is superimposed on the captured image.
A specific example of this processing will be described with reference to FIG. 24.
FIG. 24 shows the following data.
(A) Captured image
(C) Defocus color map
(D) Output image
(A) The captured image is an image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211.
(C) The defocus color map is a color map in which different colors are set according to the defocus amount for the defocus map that outputs the defocus amount as a luminance value (for example, 0 to 255), and in which the defocus amount low-reliability area is set to an identifiable specific color (for example, red).
For example, it is a color map in which areas with a zero-to-small defocus amount are set to yellow, areas with a medium defocus amount to green, areas with a large defocus amount to blue, and defocus amount low-reliability areas to red.
(D) The output image is an output image generated by superimposing the (C) defocus color map on the (A) captured image.
In this way, the image area unit physical quantity reliability correspondence processing execution unit 206 generates and outputs an output image in which the generated color map is superimposed on the captured image.
This (D) output image is output to, for example, the monitor 117 of the imaging apparatus 100.
By viewing the image output to the monitor 117, the user can easily and reliably judge the defocus amount of each area of the captured image, and can also reliably recognize the low-reliability areas of the defocus amount.
Note that the digital signal processing unit 108 described with reference to FIG. 22 is configured to include all of the following within it: the phase difference information acquisition unit 201, the defocus amount calculation unit 202, the AF control signal generation unit 203, the defocus map generation unit 204, the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206, the image information acquisition unit 211, the image signal processing unit 212, and the image output unit 213; however, this configuration is an example.
Part of the configuration described with reference to FIG. 22 may be provided outside the digital signal processing unit 108 of the imaging apparatus 100, and the data processing may be executed by an external device different from the imaging apparatus 100.
Specifically, for example, the image signal processing unit 212 can be configured outside the digital signal processing unit 108, and the image signal processing may be executed by an external device other than the imaging apparatus 100, such as a PC.
[4-6. (Embodiment 6) Embodiment in which image capturing is performed by controlling the exposure time according to the reliability of the defocus amount for each image area]
Next, as Embodiment 6, an embodiment will be described in which image capturing is performed by controlling the exposure time according to the reliability of the defocus amount for each image area.
An image area in which the reliability of the defocus amount is low is often an area in which the subject distance has not been accurately measured, and is highly likely to be an image area in which the S/N of the captured image is poor. For such an image area, the S/N can be improved by setting a longer exposure time.
The embodiment described below performs shooting control in which, for example, a longer exposure time is set for image areas in which the reliability of the defocus amount is low, thereby enabling high-quality images to be captured.
FIG. 25 is a block diagram for explaining the configuration of Embodiment 6.
FIG. 25 shows an example of the internal configuration of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG. 1.
As shown in FIG. 25, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, a defocus amount calculation unit 202, an AF control signal generation unit 203, a defocus map generation unit 204, an image area unit physical quantity reliability calculation unit 205, an image area unit physical quantity reliability correspondence processing execution unit 206, an image information acquisition unit 211, an image signal processing unit 212, and an image output unit 213.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, RGB image signals and the phase difference detection signals (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 25 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signals (detection information) output from the phase difference detection pixels.
Meanwhile, the image information acquisition unit 211 shown in FIG. 25 selects only the image signals (for example, RGB image signals) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 211 is input to the image signal processing unit 212.
The image signal processing unit 212 performs various kinds of image signal processing on the image signal, such as demosaic processing, white balance adjustment, and gamma correction, and outputs the processed image (for example, an RGB image) to the image output unit 213.
The image (for example, an RGB image) generated by the image signal processing unit 212 is output to the image output unit 213.
The image output unit 213 outputs the image input from the image signal processing unit 212. For example, it executes image output to the monitor 117, the viewfinder (EVF) 116, and the recording device 115.
Note that in Embodiment 6, unlike Embodiments 1 to 5 described above, the image area unit physical quantity reliability correspondence processing execution unit 206 does not perform image processing on the image (for example, an RGB image) generated by the image signal processing unit 212.
Instead, the image area unit physical quantity reliability correspondence processing execution unit 206 outputs a shooting control command to the control unit 110.
Specifically, it outputs an exposure time control command for executing image capturing under exposure control in which a longer exposure time is set for image areas in which the defocus amount has low reliability.
Note that the exposure time during image capturing can be controlled to be changed on a pixel-by-pixel basis. For example, by applying the configuration described in Patent Document 4 (Japanese Unexamined Patent Application Publication No. 2011-004088), exposure time control for each pixel can be performed.
The phase difference information acquisition unit 201 selects, from the input signal from the A/D conversion unit 105, the phase difference detection signal (detection information) output by the phase difference detection pixels, and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 202.
The defocus amount calculation unit 202 calculates the amount of focus deviation, that is, the amount of deviation between the in-focus distance and the subject distance (the defocus amount (DF)), for each minute image area, for example an image area composed of a plurality of pixels.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a pair of phase difference detection pixels functioning as focus detection sensors.
The AF control signal generation unit 203 generates, based on this defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, a subject specified by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens in accordance with the autofocus control signal (AF control signal) input from the AF control signal generation unit 203, and sets the focus lens to the in-focus position (focus position) for, for example, a subject specified by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but, for example, a subject specified by the user, such as a person. Images of other subjects, such as the background, are not in focus and appear blurred.
As described above with reference to FIG. 7, the defocus amount calculation unit 202 calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, for each minute image area such as n×m pixels.
The defocus map generation unit 204 receives the defocus amount for each fine image area of the captured image calculated by the defocus amount calculation unit 202, and, based on these per-area defocus amounts, generates a defocus map in which the defocus amount of each image area can be identified.
For example, a defocus map such as the one described above with reference to FIG. 8(B) is generated.
For example, in a defocus map whose luminance values (pixel values) range from 0 to 255, the closer a luminance value (pixel value) is to 255 (maximum luminance, white), the smaller the defocus amount, that is, the higher the degree of focus of that pixel area; the closer a luminance value (pixel value) is to 0 (minimum luminance, black), the larger the defocus amount, that is, the lower the degree of focus of that pixel area.
In this way, the defocus map generation unit 204 generates a defocus map such as the one shown in FIG. 8(B).
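The mapping from per-area defocus amounts to the 0 to 255 luminance values of the defocus map can be sketched as follows; the normalization against a maximum defocus value and the sample grid of defocus amounts are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def defocus_map_to_luminance(defocus, max_defocus):
    """Map per-image-area defocus amounts to 0-255 luminance values.

    Smaller defocus (higher degree of focus) -> brighter (closer to 255);
    larger defocus (lower degree of focus)  -> darker (closer to 0).
    """
    d = np.clip(np.abs(defocus) / max_defocus, 0.0, 1.0)
    return np.round((1.0 - d) * 255).astype(np.uint8)

# Hypothetical 2x3 grid of image-area defocus amounts
defocus = np.array([[0.0, 0.5, 1.0],
                    [2.0, 4.0, 4.0]])
lum = defocus_map_to_luminance(defocus, max_defocus=4.0)
print(lum)  # the in-focus area maps to 255, the most defocused areas to 0
```

In a real pipeline the per-area defocus amounts would come from the defocus amount calculation unit 202 rather than a literal array.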
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area, which is the physical quantity for each image area.
As in the first embodiment, the image area unit physical quantity reliability calculation unit 205 receives the defocus amount for each image area calculated by the defocus amount calculation unit 202 and the defocus map generated by the defocus map generation unit 204, and calculates, based on these input data, the reliability of the defocus amount calculated for each image area.
As described in the first embodiment, various methods can be applied to calculate the reliability of the defocus amount for each image area.
For example, a method that calculates the defocus amount reliability for each image area using the contrast, the image frequency, the edge detection result, or the like of the image (RGB image) generated by the image signal processing unit 212 can be used.
A method that calculates the defocus amount reliability for each image area using (Equation 1) described in the first embodiment, that is, the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation, is also possible.
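The cross-correlation-based method can be sketched as follows. (Equation 1) itself is not reproduced in this section, so this sketch uses the peak of the normalized cross-correlation of the two parallax waveforms as a stand-in reliability score; the sample waveforms and the exact normalization are illustrative assumptions.

```python
import numpy as np

def defocus_reliability(left_wave, right_wave):
    """Reliability proxy for one image area: peak of the normalized
    cross-correlation between the two phase-difference waveforms.

    A high peak (close to 1) suggests a trustworthy shift estimate;
    a flat or low peak suggests a low-reliability defocus amount.
    """
    a = left_wave - np.mean(left_wave)
    b = right_wave - np.mean(right_wave)
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    if denom == 0.0:  # flat (textureless) waveforms carry no information
        return 0.0
    corr = np.correlate(a, b, mode="full") / denom
    return float(np.max(corr))

# A shifted copy of a textured waveform correlates strongly...
x = np.sin(np.linspace(0, 4 * np.pi, 64))
high = defocus_reliability(x, np.roll(x, 3))
# ...while a noise waveform does not.
rng = np.random.default_rng(0)
low = defocus_reliability(x, rng.standard_normal(64))
print(high > low)  # True
```

Textureless or noisy image areas, where the waveforms do not match well at any shift, naturally receive low scores under this measure.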
Note that the defocus amount reliability calculated in the sixth embodiment is also, as described in the first embodiment, the reliability of the defocus amount for each pixel area.
The image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area using the defocus amount for each image area calculated by the defocus amount calculation unit 202, that is, the defocus amount of each image area corresponding to one rectangular area in the defocus map shown in FIG. 8(B).
In this way, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit 202, using, for example, the cross-correlation function of the two waveforms of the parallax data used for the defocus amount calculation.
The reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205 is output to the image area unit physical quantity reliability correspondence processing execution unit 206.
The image area unit physical quantity reliability correspondence processing execution unit 206 outputs a shooting control command to the control unit 110 according to the reliability data of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit 205.
Specifically, it outputs an exposure time control command that causes image capture to be executed with a longer exposure time set for image areas whose defocus amount has low reliability.
A specific example of this processing will be described with reference to FIG. 26.
FIG. 26 shows the following data:
(A) Captured image
(B) Defocus map
(C) Shooting control example (an example of exposure time control for each image area)
(A) The captured image is the captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
(B) The defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a defocus map in which the defocus amount of each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
The defocus map of FIG. 26(B) includes a defocus amount low-reliability area.
As described above, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area of the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
Based on the reliability data of the defocus amount for each image area input from the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206 extracts a defocus amount low-reliability area, as shown, for example, in FIG. 26(B).
Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount low-reliability areas.
For example, using a predetermined low-reliability threshold Th1,
(Judgment Expression 1) Defocus amount reliability ≤ Th1
an image area that satisfies Judgment Expression 1 is extracted as a defocus amount low-reliability area.
The defocus map shown in FIG. 26(B) includes a defocus amount low-reliability area.
When such a defocus amount low-reliability area is detected, the image area unit physical quantity reliability correspondence processing execution unit 206 outputs an exposure time control command to the control unit 110, which performs exposure control at the time of image capture.
Specifically, it outputs an exposure time control command that causes image capture to be performed with the exposure time of image areas whose defocus amount has low reliability set longer than the exposure time of the other image areas.
That is, as shown in FIG. 26(C), the shooting control example (an example of exposure time control for each image area), image areas whose defocus amount has low reliability are designated, and an exposure time control command is output that causes image capture to be performed with the exposure time of the designated areas set longer than the exposure time of the other image areas.
By performing such exposure time control, images are captured with the exposure time of the image areas whose defocus amount has low reliability set longer than that of the other image areas.
As a result, high-quality images can be captured with an improved S/N in the image areas whose defocus amount has low reliability.
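The per-area exposure setting of the sixth embodiment can be sketched as follows, assuming that per-pixel (here, per-area) exposure control such as that of Patent Document 4 is available; the threshold Th1, the base exposure time, and the 2x extension factor are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def exposure_times_low_reliability(reliability, th1, base_time, long_factor=2.0):
    """Sixth-embodiment sketch: per-image-area exposure times.

    Areas whose defocus amount reliability satisfies Judgment
    Expression 1 (reliability <= Th1) get a longer exposure
    (base_time * long_factor) to improve their S/N; all other
    areas keep the base exposure time.
    """
    times = np.full(reliability.shape, base_time, dtype=float)
    times[reliability <= th1] *= long_factor
    return times

# Hypothetical per-area reliability scores in [0, 1]
reliability = np.array([[0.9, 0.2],
                        [0.7, 0.1]])
# Base exposure 1/100 s; low-reliability areas are exposed twice as long
times = exposure_times_low_reliability(reliability, th1=0.3, base_time=0.01)
print(times)  # low-reliability areas -> 0.02 s, others -> 0.01 s
```

The resulting exposure map would be passed to the control unit 110 as the payload of the exposure time control command.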
In this way, the image area unit physical quantity reliability correspondence processing execution unit 206 executes image capture control in which the exposure time is controlled according to the reliability of the defocus amount for each image area.
As described above, an image area whose defocus amount has low reliability is often an area whose subject distance has not been measured accurately, and is likely to be an image area in which the captured image has a poor S/N. For such image areas, the S/N can be improved by setting a longer exposure time.
As described above, the sixth embodiment performs shooting control in which image capture is executed with a longer exposure time set for image areas whose defocus amount reliability is low, thereby enabling high-quality images to be captured.
The sixth embodiment described with reference to FIGS. 25 and 26 performs shooting control in which image capture is executed with a longer exposure time set for image areas whose defocus amount reliability is low, but as modifications of the sixth embodiment, configurations that perform the following shooting control are also possible.
(Modification 1) A configuration that performs shooting control in which a shorter exposure time is set for image areas whose defocus amount reliability is high.
(Modification 2) A configuration that performs shooting control in which a longer exposure time is set for image areas whose defocus amount reliability is low and a shorter exposure time is set for image areas whose defocus amount reliability is high.
FIG. 27 shows a configuration example of the digital signal processing unit 108 of an imaging apparatus that performs the shooting control of (Modification 1) above.
The configuration of the digital signal processing unit 108 shown in FIG. 27 has the same components as the configuration of the digital signal processing unit 108 shown in FIG. 25 described above.
However, the processing executed by the image area unit physical quantity reliability correspondence processing execution unit 206 differs.
The image area unit physical quantity reliability correspondence processing execution unit 206 of the digital signal processing unit 108 shown in FIG. 27 outputs an exposure time control command that causes image capture to be executed with a shorter exposure time set for image areas for which the reliability of the per-area defocus amount calculated by the image area unit physical quantity reliability calculation unit 205 is high.
A specific example of this processing will be described with reference to FIG. 28.
FIG. 28 shows the following data:
(A) Captured image
(B) Defocus map
(C) Shooting control example (an example of exposure time control for each image area)
(A) The captured image is the captured image (RGB image) generated by the image signal processing unit 212 based on the image signal input from the image information acquisition unit 211 to the image signal processing unit 212.
(B) The defocus map is the defocus map generated by the defocus map generation unit 204. That is, it is a defocus map in which the defocus amount of each image area, calculated by the defocus amount calculation unit 202 based on the pixel value signals of the phase difference detection pixels acquired by the phase difference information acquisition unit 201, is output as a luminance value (for example, 0 to 255).
The defocus map of FIG. 28(B) includes a defocus amount high-reliability area.
As described above, the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the defocus amount for each image area of the defocus map generated by the defocus map generation unit 204, and outputs the calculated reliability data to the image area unit physical quantity reliability correspondence processing execution unit 206.
Based on the reliability data of the defocus amount for each image area input from the image area unit physical quantity reliability calculation unit 205, the image area unit physical quantity reliability correspondence processing execution unit 206 extracts a defocus amount high-reliability area, as shown, for example, in FIG. 28(B).
Specifically, the defocus amount reliability for each image area input from the image area unit physical quantity reliability calculation unit 205 is compared with a predetermined reliability threshold to extract defocus amount high-reliability areas.
For example, using a predetermined high-reliability threshold Th2,
(Judgment Expression 2) Th2 ≤ Defocus amount reliability
an image area that satisfies Judgment Expression 2 is extracted as a defocus amount high-reliability area.
The defocus map shown in FIG. 28(B) includes a defocus amount high-reliability area.
When such a defocus amount high-reliability area is detected, the image area unit physical quantity reliability correspondence processing execution unit 206 outputs an exposure time control command to the control unit 110, which performs exposure control at the time of image capture.
Specifically, it outputs an exposure time control command that causes image capture to be performed with the exposure time of image areas whose defocus amount has high reliability set shorter than the exposure time of the other image areas.
That is, as shown in FIG. 28(C), the shooting control example (an example of exposure time control for each image area), image areas whose defocus amount has high reliability are designated, and an exposure time control command is output that causes image capture to be performed with the exposure time of the designated areas set shorter than the exposure time of the other image areas.
By performing such exposure time control, images are captured with the exposure time of the image areas whose defocus amount has high reliability set shorter than that of the other image areas.
As a result, high-quality images can be captured with reduced subject blur in the image areas whose defocus amount has high reliability.
FIG. 29 shows a configuration example of the digital signal processing unit 108 of an imaging apparatus that performs the shooting control of (Modification 2) above, that is, shooting control in which a longer exposure time is set for image areas whose defocus amount reliability is low and a shorter exposure time is set for image areas whose defocus amount reliability is high.
The configuration of the digital signal processing unit 108 shown in FIG. 29 has the same components as the configuration of the digital signal processing unit 108 shown in FIG. 25 described above.
However, the processing executed by the image area unit physical quantity reliability correspondence processing execution unit 206 differs.
The image area unit physical quantity reliability correspondence processing execution unit 206 of the digital signal processing unit 108 shown in FIG. 29 outputs an exposure time control command that causes image capture to be executed with a longer exposure time set for image areas for which the reliability of the per-area defocus amount calculated by the image area unit physical quantity reliability calculation unit 205 is low, and a shorter exposure time set for image areas for which that reliability is high.
As a result of this processing, the effects of both the sixth embodiment described with reference to FIGS. 25 and 26 and (Modification 1) described with reference to FIGS. 27 and 28 are obtained.
That is, for image areas whose defocus amount reliability is low, the S/N can be improved by setting a longer exposure time.
In addition, for image areas whose defocus amount has high reliability, subject blur can be reduced.
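(Modification 2) combines Judgment Expression 1 and Judgment Expression 2 into a single per-area exposure map, which can be sketched as follows; the thresholds, base exposure time, and scaling factors are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def exposure_times_modification2(reliability, th1, th2, base_time,
                                 long_factor=2.0, short_factor=0.5):
    """(Modification 2) sketch: low-reliability areas (reliability <= Th1)
    get a longer exposure to improve S/N, high-reliability areas
    (reliability >= Th2) get a shorter exposure to reduce subject blur,
    and areas in between keep the base exposure time."""
    times = np.full(reliability.shape, base_time, dtype=float)
    times[reliability <= th1] *= long_factor   # Judgment Expression 1
    times[reliability >= th2] *= short_factor  # Judgment Expression 2
    return times

# Hypothetical per-area reliability scores in [0, 1]
reliability = np.array([[0.9, 0.2],
                        [0.5, 0.1]])
times = exposure_times_modification2(reliability, th1=0.3, th2=0.8,
                                     base_time=0.01)
print(times)  # high-rel area -> 0.005 s, low-rel -> 0.02 s, mid -> 0.01 s
```

Because Th1 < Th2, the two masks never overlap, so the order of the two assignments does not matter.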
As described above, the sixth embodiment enables high-quality images to be captured by executing image capture with the exposure time controlled for each image area according to the reliability of the defocus amount.
[5. Other Embodiments]
Next, other embodiments will be described.
So far, the following six embodiments have been described with reference to FIGS. 9 to 29:
(Embodiment 1) An embodiment that generates and outputs an image in which the reliability of the defocus amount for each image area can be identified
(Embodiment 2) An embodiment that generates and outputs an image in which the distance ratio between the in-focus subject and other background subjects is superimposed on the captured image
(Embodiment 3) An embodiment that generates and outputs an image in which mask stability information is superimposed on the captured image
(Embodiment 4) An embodiment that generates and outputs an image in which a color map that outputs colors corresponding to the defocus amount of each image area is superimposed on the captured image
(Embodiment 5) An embodiment that generates and outputs an image in which a color map that outputs colors corresponding to the defocus amount of each image area, and that makes the reliability of the defocus amount of each image area identifiable, is superimposed on the captured image
(Embodiment 6) An embodiment that captures images with the exposure time controlled according to the reliability of the defocus amount for each image area
Each of these six embodiments can be configured independently, but they can also be configured as an apparatus or system that combines any plurality of the embodiments.
In addition, for example in the second embodiment described above, the following processing was described as an example of how the image area unit physical quantity reliability calculation unit 205 calculates the reliability of the distance value for each image area.
That is, the image area unit physical quantity reliability calculation unit 205 determines that, for an image area whose defocus amount reliability is low, the reliability of the distance value calculated based on the defocus amount is also low, and that, for an image area whose defocus amount reliability is high, the reliability of the distance value calculated based on the defocus amount is also high.
For this distance value reliability determination processing for each image area, for example when the imaging apparatus is a two-lens stereo camera, the following processing may be performed.
That is, the amount of matching error in the parallax map generated from the images captured by the stereo camera is calculated for each image area, and the distance value reliability for each image area is calculated based on the calculated per-area error amount.
In this processing, an image area with a large error amount is determined to have low distance value reliability, and an image area with a small error amount is determined to have high distance value reliability.
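This stereo variant can be sketched as follows; using the sum of absolute differences (SAD) as the block-matching error and a linear error-to-reliability mapping are illustrative assumptions, since the disclosure does not fix a particular error metric.

```python
import numpy as np

def distance_reliability_from_matching_error(error, max_error):
    """Map per-image-area stereo matching error to a distance value
    reliability in [0, 1]: a large matching error -> low reliability,
    a small matching error -> high reliability.
    """
    e = np.clip(error / max_error, 0.0, 1.0)
    return 1.0 - e

# Hypothetical per-area sums of absolute differences between each
# left-image patch and its best-matching right-image patch
sad_error = np.array([[2.0, 40.0],
                      [10.0, 80.0]])
rel = distance_reliability_from_matching_error(sad_error, max_error=80.0)
print(rel)  # areas with small matching error score close to 1
```

The resulting per-area reliability can then be used in place of the defocus-amount-based reliability in the processing described above.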
Further, the processing of the first to sixth embodiments described above may be configured to be executed continuously while the imaging apparatus 100 is operating, or may be configured to be executed in response to, for example, a specific user operation on the input unit (operation unit) 118.
The processing may also be stopped when the focus position changes significantly, or when it is determined from the detection value of the gyro 131 that the imaging apparatus 100 has moved significantly.
Conversely, new processing may be started when the focus position changes significantly, or when it is determined from the detection value of the gyro 131 that the imaging apparatus 100 has moved significantly.
[6. Summary of the Configuration of the Present Disclosure]
The embodiments of the present disclosure have been described in detail above with reference to specific examples. However, it is obvious that those skilled in the art can modify or substitute the embodiments without departing from the gist of the present disclosure. That is, the present invention has been disclosed in the form of examples and should not be construed as limiting. In order to determine the gist of the present disclosure, the claims should be taken into consideration.
Note that the technology disclosed in this specification can have the following configurations.
(1) An imaging apparatus including:
an image area physical quantity calculation unit that calculates, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
an image area unit physical quantity reliability calculation unit that calculates the reliability of the physical quantity for each image area calculated by the image area physical quantity calculation unit; and
an image area unit physical quantity reliability correspondence processing execution unit that executes control processing according to the reliability of the physical quantity for each image area calculated by the image area unit physical quantity reliability calculation unit.
(2) The imaging apparatus according to (1), wherein
the image area physical quantity calculation unit is a defocus amount calculation unit that calculates a defocus amount for each image area, and
the image area unit physical quantity reliability calculation unit calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit.
(3) The imaging apparatus according to (2), wherein
the image area unit physical quantity reliability correspondence processing execution unit displays, superimposed on the captured image, graphic data indicating the reliability of the defocus amount for each image area, based on the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit.
(4) The imaging apparatus according to (3), wherein
the image area unit physical quantity reliability correspondence processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and displays, superimposed on the captured image, graphic data that makes identifiable the image areas whose defocus amount reliability is equal to or lower than the low-reliability threshold Th1.
 (5) 前記画像領域単位物理量信頼度対応処理実行部は、
 前記画像領域単位物理量信頼度算出部が算出した前記画像領域単位のデフォーカス量の信頼度と予め規定した高信頼度しきい値Th2を比較し、
 画像領域単位のデフォーカス量の信頼度が前記高信頼度しきい値Th2以上の画像領域を識別可能としたグラフィックデータを前記撮影画像上に重畳して表示する(3)または(4)に記載の撮像装置。
(5) The image area unit physical quantity reliability corresponding processing execution unit,
comparing the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined high reliability threshold value Th2,
The method according to (3) or (4), wherein graphic data that makes it possible to identify an image area whose reliability of the defocus amount for each image area is equal to or higher than the high reliability threshold value Th2 is displayed superimposed on the captured image. imaging device.
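The threshold comparisons described in (4) and (5) can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the function name, the concrete threshold values, and the per-area map shape are assumptions introduced for the example.

```python
import numpy as np

def classify_reliability(reliability_map, th1=0.3, th2=0.8):
    """Classify each image area by defocus-amount reliability.

    Returns an integer label map: 0 = low reliability (<= Th1, item (4)),
    2 = high reliability (>= Th2, item (5)), 1 = intermediate. The label
    map can then be rendered as graphic data over the captured image.
    """
    labels = np.ones_like(reliability_map, dtype=np.uint8)  # intermediate
    labels[reliability_map <= th1] = 0  # low-reliability areas, item (4)
    labels[reliability_map >= th2] = 2  # high-reliability areas, item (5)
    return labels

# Example: a 2x3 grid of per-area reliability values
rel = np.array([[0.10, 0.50, 0.90],
                [0.25, 0.85, 0.60]])
print(classify_reliability(rel))  # [[0 1 2]
                                  #  [0 2 1]]
```

A display layer would map label 0 and label 2 to distinct overlay colors so the operator can see at a glance which areas have unreliable or highly reliable defocus estimates.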
 (6) The imaging apparatus according to any one of (2) to (5), wherein
 the image area unit physical quantity reliability corresponding processing execution unit displays mask stability information indicating whether a highly accurate mask can be set when masking areas other than a specific subject.
 (7) The imaging apparatus according to (6), wherein
 the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and, when an image area whose reliability is equal to or lower than the low-reliability threshold Th1 is detected in the captured image, displays on the captured image mask stability information indicating that highly accurate mask setting is difficult.
 (8) The imaging apparatus according to (6) or (7), wherein
 the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with the predetermined low-reliability threshold Th1, and, when no image area whose reliability is equal to or lower than the low-reliability threshold Th1 is detected in the captured image, displays on the captured image mask stability information indicating that highly accurate mask setting is possible.
 (9) The imaging apparatus according to any one of (2) to (8), wherein
 the image area unit physical quantity reliability corresponding processing execution unit superimposes on the captured image a color map in which different colors are assigned according to the defocus amount of each image area.
 (10) The imaging apparatus according to (9), wherein
 the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with the predetermined low-reliability threshold Th1, and superimposes on the captured image a color map that identifies image areas whose defocus-amount reliability is equal to or lower than the low-reliability threshold Th1.
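The mask-stability decision of items (7) and (8) reduces to an any-below-threshold test over the per-area reliability map. The sketch below is illustrative only; the function name, message strings, and threshold value are assumptions, not part of the disclosed implementation.

```python
import numpy as np

def mask_stability_message(reliability_map, th1=0.3):
    """Decide the mask stability notice of items (7) and (8).

    If any image area has defocus-amount reliability at or below the
    low-reliability threshold Th1, a highly accurate mask around a
    specific subject cannot be guaranteed; otherwise it can.
    """
    if np.any(reliability_map <= th1):
        return "mask unstable: high-accuracy mask setting is difficult"
    return "mask stable: high-accuracy mask setting is possible"

print(mask_stability_message(np.array([[0.2, 0.9], [0.7, 0.8]])))  # unstable
print(mask_stability_message(np.array([[0.5, 0.9], [0.7, 0.8]])))  # stable
```

The returned message corresponds to the mask stability information that is displayed on the captured image.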
 (11) The imaging apparatus according to any one of (2) to (10), wherein
 the image area unit physical quantity reliability corresponding processing execution unit controls the exposure time for each image area according to the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit.
 (12) The imaging apparatus according to (11), wherein
 the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and performs exposure time control that makes the exposure time of image areas whose defocus-amount reliability is equal to or lower than the low-reliability threshold Th1 longer than that of other image areas.
 (13) The imaging apparatus according to (11) or (12), wherein
 the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined high-reliability threshold Th2, and performs exposure time control that makes the exposure time of image areas whose defocus-amount reliability is equal to or higher than the high-reliability threshold Th2 shorter than that of other image areas.
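The per-area exposure time control of items (12) and (13) can be sketched as a map from the reliability values to exposure times. This is a minimal sketch under stated assumptions: the base exposure, the lengthening/shortening gains, and the function name are all illustrative, not taken from the disclosure.

```python
import numpy as np

def adjust_exposure(reliability_map, base_exposure_ms=10.0,
                    th1=0.3, th2=0.8, long_gain=2.0, short_gain=0.5):
    """Per-area exposure time control of items (12) and (13).

    Areas at or below Th1 get a longer exposure than other areas;
    areas at or above Th2 get a shorter one. Gains and the base
    exposure are illustrative assumptions.
    """
    exposure = np.full(reliability_map.shape, base_exposure_ms)
    exposure[reliability_map <= th1] = base_exposure_ms * long_gain   # item (12)
    exposure[reliability_map >= th2] = base_exposure_ms * short_gain  # item (13)
    return exposure

rel = np.array([[0.1, 0.5, 0.9]])
print(adjust_exposure(rel))  # [[20. 10.  5.]]
```

In a real sensor this would be realized through region-wise shutter control; the sketch only shows the decision logic.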
 (14) The imaging apparatus according to any one of (1) to (13), wherein
 the image area physical quantity calculation unit is a distance information calculation unit that calculates a distance value for each image area, and
 the image area unit physical quantity reliability calculation unit calculates the reliability of the distance information for each image area calculated by the distance information calculation unit.
 (15) The imaging apparatus according to (14), wherein
 the image area unit physical quantity reliability corresponding processing execution unit displays the distance ratio between subjects at a plurality of different distances.
 (16) The imaging apparatus according to (14) or (15), wherein
 the image area unit physical quantity reliability corresponding processing execution unit selects image areas whose distance-information reliability calculated by the image area unit physical quantity reliability calculation unit is equal to or higher than a predetermined threshold, calculates the distance ratio between subjects in the selected image areas, and displays it on the captured image of the imaging apparatus.
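The reliability-gated distance-ratio computation of item (16) can be sketched as follows. The function name, the threshold value, and the choice of farthest-to-nearest as the displayed ratio are assumptions made for the example, not details taken from the disclosure.

```python
import numpy as np

def subject_distance_ratio(distance_map, reliability_map, th=0.8):
    """Distance-ratio display of item (16), as a sketch.

    Only image areas whose distance reliability meets the threshold
    are used; the ratio of the farthest to the nearest reliable
    distance is returned (None if fewer than two reliable areas).
    """
    reliable = distance_map[reliability_map >= th]
    if reliable.size < 2:
        return None
    return float(reliable.max() / reliable.min())

dist = np.array([1.0, 2.0, 4.0, 8.0])   # metres, one value per image area
rel  = np.array([0.9, 0.5, 0.85, 0.2])  # per-area distance reliability
print(subject_distance_ratio(dist, rel))  # 4.0  (4.0 m vs 1.0 m)
```

Gating on reliability first keeps spurious distance values (e.g. from low-contrast areas) out of the ratio that is superimposed on the captured image.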
 (17) An image processing method executed in an imaging apparatus, the method comprising:
 an image area physical quantity calculation step in which an image area physical quantity calculation unit calculates, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
 an image area unit physical quantity reliability calculation step in which an image area unit physical quantity reliability calculation unit calculates the reliability of the physical quantity for each image area calculated in the image area physical quantity calculation step; and
 an image area unit physical quantity reliability corresponding processing execution step in which an image area unit physical quantity reliability corresponding processing execution unit executes control processing according to the reliability of the physical quantity for each image area calculated in the image area unit physical quantity reliability calculation step.
 (18) A program that causes an imaging apparatus to execute image processing, the program causing:
 an image area physical quantity calculation unit to execute an image area physical quantity calculation step of calculating, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
 an image area unit physical quantity reliability calculation unit to execute an image area unit physical quantity reliability calculation step of calculating the reliability of the physical quantity for each image area calculated in the image area physical quantity calculation step; and
 an image area unit physical quantity reliability corresponding processing execution unit to execute an image area unit physical quantity reliability corresponding processing execution step of executing control processing according to the reliability of the physical quantity for each image area calculated in the image area unit physical quantity reliability calculation step.
 Note that the series of processes described in this specification can be executed by hardware, by software, or by a combined configuration of both. When the processing is executed by software, a program recording the processing sequence can be installed in memory in a computer built into dedicated hardware and executed, or the program can be installed and executed on a general-purpose computer capable of executing the various processes. For example, the program can be recorded in advance on a recording medium. Besides being installed on a computer from a recording medium, the program can be received over a network such as a LAN (Local Area Network) or the Internet and installed on a recording medium such as a built-in hard disk.
 The various processes described in this specification are not necessarily executed in time series in the order described; they may be executed in parallel or individually according to the processing capability of the device executing them, or as needed. In this specification, a system is a logical aggregate configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same housing.
 As described above, according to the configuration of an embodiment of the present disclosure, an apparatus and a method are realized that calculate the reliability of the defocus amount or distance value for each image area of a captured image, and that execute processing such as displaying graphic data indicating the reliability, displaying the distance ratio between subjects, and controlling the exposure time.
 Specifically, for example, a defocus amount or distance value is calculated for each image area of the captured image, the reliability of the calculated per-area defocus amount or distance value is then calculated, and control is executed according to that reliability. For example, processing is executed such as superimposing graphic data indicating the reliability of the defocus amount for each image area on the captured image, displaying the distance ratio between subjects, and controlling the exposure time.
 With this configuration, an apparatus and a method are realized that calculate the reliability of the defocus amount or distance value for each image area of a captured image and execute the display of graphic data indicating the reliability, the display of the subject distance ratio, the exposure time control, and the like.
 REFERENCE SIGNS LIST
 100 imaging apparatus
 101 focus lens
 102 zoom lens
 103 imaging element
 104 analog signal processing unit
 105 A/D conversion unit
 106 timing generator (TG)
 107 vertical driver
 108 digital signal processor (DSP)
 110 control unit
 112a AF control unit
 112b zoom control unit
 113 motor
 115 recording device
 116 viewfinder
 117 monitor
 118 input unit (operation unit)
 122 image area
 131 gyro
 151 phase difference detection pixel
 152 image area
 201 phase difference information acquisition unit
 202 defocus amount calculation unit
 203 AF control signal generation unit
 204 defocus map generation unit
 205 image area unit physical quantity reliability calculation unit
 206 image area unit physical quantity reliability corresponding processing execution unit
 211 image information acquisition unit
 212 image signal processing unit
 213 image output unit
 221 distance information calculation unit

Claims (18)

  1.  An imaging apparatus comprising:
      an image area physical quantity calculation unit that calculates, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
      an image area unit physical quantity reliability calculation unit that calculates the reliability of the physical quantity for each image area calculated by the image area physical quantity calculation unit; and
      an image area unit physical quantity reliability corresponding processing execution unit that executes control processing according to the reliability of the physical quantity for each image area calculated by the image area unit physical quantity reliability calculation unit.
  2.  The imaging apparatus according to claim 1, wherein
      the image area physical quantity calculation unit is a defocus amount calculation unit that calculates a defocus amount for each image area, and
      the image area unit physical quantity reliability calculation unit calculates the reliability of the defocus amount for each image area calculated by the defocus amount calculation unit.
  3.  The imaging apparatus according to claim 2, wherein
      the image area unit physical quantity reliability corresponding processing execution unit superimposes, on the captured image, graphic data indicating the reliability of the defocus amount for each image area, based on the reliability calculated by the image area unit physical quantity reliability calculation unit.
  4.  The imaging apparatus according to claim 3, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and superimposes on the captured image graphic data that identifies image areas whose defocus-amount reliability is equal to or lower than the low-reliability threshold Th1.
  5.  The imaging apparatus according to claim 3, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined high-reliability threshold Th2, and superimposes on the captured image graphic data that identifies image areas whose defocus-amount reliability is equal to or higher than the high-reliability threshold Th2.
  6.  The imaging apparatus according to claim 2, wherein
      the image area unit physical quantity reliability corresponding processing execution unit displays mask stability information indicating whether a highly accurate mask can be set when masking areas other than a specific subject.
  7.  The imaging apparatus according to claim 6, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and, when an image area whose reliability is equal to or lower than the low-reliability threshold Th1 is detected in the captured image, displays on the captured image mask stability information indicating that highly accurate mask setting is difficult.
  8.  The imaging apparatus according to claim 6, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and, when no image area whose reliability is equal to or lower than the low-reliability threshold Th1 is detected in the captured image, displays on the captured image mask stability information indicating that highly accurate mask setting is possible.
  9.  The imaging apparatus according to claim 2, wherein
      the image area unit physical quantity reliability corresponding processing execution unit superimposes on the captured image a color map in which different colors are assigned according to the defocus amount of each image area.
  10.  The imaging apparatus according to claim 9, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and superimposes on the captured image a color map that identifies image areas whose defocus-amount reliability is equal to or lower than the low-reliability threshold Th1.
  11.  The imaging apparatus according to claim 2, wherein
      the image area unit physical quantity reliability corresponding processing execution unit controls the exposure time for each image area according to the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit.
  12.  The imaging apparatus according to claim 11, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined low-reliability threshold Th1, and performs exposure time control that makes the exposure time of image areas whose defocus-amount reliability is equal to or lower than the low-reliability threshold Th1 longer than that of other image areas.
  13.  The imaging apparatus according to claim 11, wherein
      the image area unit physical quantity reliability corresponding processing execution unit compares the reliability of the defocus amount for each image area calculated by the image area unit physical quantity reliability calculation unit with a predetermined high-reliability threshold Th2, and performs exposure time control that makes the exposure time of image areas whose defocus-amount reliability is equal to or higher than the high-reliability threshold Th2 shorter than that of other image areas.
  14.  The imaging apparatus according to claim 1, wherein
      the image area physical quantity calculation unit is a distance information calculation unit that calculates a distance value for each image area, and
      the image area unit physical quantity reliability calculation unit calculates the reliability of the distance information for each image area calculated by the distance information calculation unit.
  15.  The imaging apparatus according to claim 14, wherein
      the image area unit physical quantity reliability corresponding processing execution unit displays the distance ratio between subjects at a plurality of different distances.
  16.  The imaging apparatus according to claim 14, wherein
      the image area unit physical quantity reliability corresponding processing execution unit selects image areas whose distance-information reliability calculated by the image area unit physical quantity reliability calculation unit is equal to or higher than a predetermined threshold, calculates the distance ratio between subjects in the selected image areas, and displays it on the captured image of the imaging apparatus.
  17.  An image processing method executed in an imaging apparatus, the method comprising:
      an image area physical quantity calculation step in which an image area physical quantity calculation unit calculates, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
      an image area unit physical quantity reliability calculation step in which an image area unit physical quantity reliability calculation unit calculates the reliability of the physical quantity for each image area calculated in the image area physical quantity calculation step; and
      an image area unit physical quantity reliability corresponding processing execution step in which an image area unit physical quantity reliability corresponding processing execution unit executes control processing according to the reliability of the physical quantity for each image area calculated in the image area unit physical quantity reliability calculation step.
  18.  A program that causes an imaging apparatus to execute image processing, the program causing:
      an image area physical quantity calculation unit to execute an image area physical quantity calculation step of calculating, for each image area that is a segmented area of a captured image, a physical quantity that changes according to the subject distance;
      an image area unit physical quantity reliability calculation unit to execute an image area unit physical quantity reliability calculation step of calculating the reliability of the physical quantity for each image area calculated in the image area physical quantity calculation step; and
      an image area unit physical quantity reliability corresponding processing execution unit to execute an image area unit physical quantity reliability corresponding processing execution step of executing control processing according to the reliability of the physical quantity for each image area calculated in the image area unit physical quantity reliability calculation step.
PCT/JP2022/003019 2021-05-20 2022-01-27 Imaging device, image processing method, and program WO2022244311A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023522209A JPWO2022244311A1 (en) 2021-05-20 2022-01-27

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-085268 2021-05-20
JP2021085268 2021-05-20

Publications (1)

Publication Number Publication Date
WO2022244311A1 2022-11-24

Family

ID=84140450

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/003019 WO2022244311A1 (en) 2021-05-20 2022-01-27 Imaging device, image processing method, and program

Country Status (2)

Country Link
JP (1) JPWO2022244311A1 (en)
WO (1) WO2022244311A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1021401A (en) * 1996-07-04 1998-01-23 Canon Inc Three-dimensional information processor
JP2013025132A (en) * 2011-07-22 2013-02-04 Nikon Corp Focus adjustment device and imaging apparatus
JP2014027588A (en) * 2012-07-30 2014-02-06 Canon Inc Imaging device, method for displaying mask area, and method for controlling imaging device
JP2017092983A (en) * 2017-01-25 2017-05-25 キヤノン株式会社 Image processing device, image processing method, image processing program, and imaging device
JP2018007078A (en) * 2016-07-04 2018-01-11 株式会社ニコン Image processing apparatus, imaging device, image processing method and program
JP2018084571A (en) * 2016-11-11 2018-05-31 株式会社東芝 Processing device, imaging device, and automatic control system
JP2020046615A (en) * 2018-09-21 2020-03-26 キヤノン株式会社 Control device, imaging apparatus, control method, program, and storage medium
WO2020195198A1 (en) * 2019-03-27 2020-10-01 ソニー株式会社 Image processing device, image processing method, program, and imaging device


Also Published As

Publication number Publication date
JPWO2022244311A1 (en) 2022-11-24

Similar Documents

Publication Publication Date Title
JP4524717B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
US8570427B2 (en) Image-capturing device having focus adjustment function, image creation method including focus adjustment function, and program product for image-capturing device having focus adjustment function
US8175447B2 (en) Image pickup apparatus and control method therefor
CN108462830B (en) Image pickup apparatus and control method of image pickup apparatus
US8872963B2 (en) Imaging apparatus and imaging method
US8823863B2 (en) Image capturing apparatus and control method therefor
CN110557559B (en) Image processing apparatus, image processing method, and storage medium
JP2011097645A (en) Image synthesizing apparatus, imaging apparatus, and image synthesizing method
JP2010091669A (en) Imaging device
JP4710983B2 (en) Image composition device, imaging device, and image composition method
US20200154056A1 (en) Image processing apparatus for providing information for focus adjustment, control method of the same, and storage medium
US9591202B2 (en) Image processing apparatus and image processing method for generating recomposed images
JP6748477B2 (en) Imaging device, control method thereof, program, and storage medium
WO2022244311A1 (en) Imaging device, image processing method, and program
JP4871664B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
WO2023042453A1 (en) Imaging device, image processing method, and program
JP5597942B2 (en) Electronic camera
JP2008187677A (en) Digital camera
JP4143395B2 (en) Imaging apparatus, autofocus method, program, and storage medium
JP7235068B2 (en) Imaging device
WO2023106118A1 (en) Information processing device, information processing method, and program
JP2003264734A (en) Image pickup device
JP2013149043A (en) Image processing device
JP4973369B2 (en) Image processing apparatus and imaging apparatus
JPWO2005026803A1 (en) Lens position control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22804234; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2023522209; Country of ref document: JP)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22804234; Country of ref document: EP; Kind code of ref document: A1)