WO2023042453A1 - Imaging device, image processing method, and program - Google Patents


Publication number
WO2023042453A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
defocus amount
unit
calculation unit
pixel
Application number
PCT/JP2022/013430
Other languages
French (fr)
Japanese (ja)
Inventor
大 水落
龍之介 横矢
祐基 明壁
貴洸 小杉
洋司 山本
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Application filed by Sony Group Corporation
Publication of WO2023042453A1

Description

  • The present disclosure relates to an imaging device, an image processing method, and a program. More specifically, it relates to an imaging device, an image processing method, and a program that apply the detection information of image plane phase difference detection pixels not only to focus control but also to various other processes.
  • Among imaging devices, there is an image plane phase difference method that uses image plane phase difference pixels as a method for detecting the focus position (in-focus position).
  • The image plane phase difference method divides the light passing through the imaging lens into a pair of images and detects the focus position (in-focus position) by analyzing the phase difference between the generated pair of images.
  • In this method, the light flux that has passed through the imaging lens is split into two, and the two split light fluxes are received by a set of image plane phase difference detection pixels that function as focus detection sensors.
  • The focus lens is then adjusted by detecting the degree of focus based on the shift amount between the signals output according to the amount of light received by each of the set of image plane phase difference detection pixels.
  • Patent Document 1 (Japanese Patent Application Laid-Open No. 2012-142952) discloses a configuration that performs blurring processing on a captured image using the detection information of image plane phase difference detection pixels. Specifically, the detection information of the image plane phase difference detection pixels is used to analyze the distribution of the defocus amount in the captured image, and the analysis result is used to apply blur processing to the captured image.
  • Patent Document 2 (Japanese Patent Application Laid-Open No. 2019-035967) discloses a configuration for changing, according to the output destination, the gradation characteristics of the defocus amount and the distance information obtained from the detection information of the image plane phase difference detection pixels before output. For example, different information is generated from the detection information of the image plane phase difference detection pixels according to the information required by each output destination, such as an output destination that requires distance resolution near the in-focus position and an output destination that requires distance measurement range information.
  • An object of the present disclosure is to provide an imaging device, an image processing method, and a program that realize a new configuration for using detection information of image plane phase difference detection pixels.
  • A second aspect of the present disclosure is an imaging apparatus including: a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging apparatus; and a distance information calculation unit that calculates the subject distance for each pixel area based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
  • A third aspect of the present disclosure is an imaging apparatus including: a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging apparatus; and a mask area determination unit that determines a mask area for the captured image based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
  • A fourth aspect of the present disclosure is an image processing method executed in an imaging device, including: a defocus amount calculation step in which a defocus amount calculation unit calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device; and an image blend ratio calculation step in which an image blend ratio calculation unit calculates an image blend ratio between the captured image and another image.
  • In the image blend ratio calculation step, the image blend ratio for each pixel area is calculated based on the defocus amount for each pixel area calculated in the defocus amount calculation step.
  • A fifth aspect of the present disclosure is a program for executing image processing in an imaging device, the program causing a defocus amount calculation unit to execute a defocus amount calculation step of calculating a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device, and causing an image blend ratio calculation unit to execute an image blend ratio calculation step of calculating an image blend ratio between the captured image and another image.
  • In the image blend ratio calculation step of the program, the image blend ratio for each pixel area is calculated based on the defocus amount for each pixel area calculated in the defocus amount calculation step.
  • According to the configurations of the present disclosure, an apparatus and method are realized in which the image blend ratio and the subject distance are calculated for each pixel region based on the defocus amount for each pixel region of the captured image, and various image processing is performed using the calculated data.
  • Specifically, for example, a defocus amount calculation unit calculates a defocus amount for each pixel area of a captured image output from an image sensor of an imaging device, and processing such as the following is executed using the defocus amount for each pixel area calculated by the defocus amount calculation unit: calculation of the image blend ratio for each pixel area, calculation of the subject distance for each pixel area based on the defocus amount for each pixel area, three-dimensional image generation processing using the calculated subject distance, and shake correction amount calculation processing for each pixel region.
  • Furthermore, a mask area for the captured image is determined based on the defocus amount for each pixel area, and various processes such as green screen image generation are performed.
  • FIG. 3 is a diagram illustrating an outline of focus detection processing using the phase difference detection method.
  • FIG. 4 is a diagram illustrating an outline of focus detection processing using the phase difference detection method.
  • FIG. 5 is a diagram illustrating an outline of focus detection processing using the phase difference detection method.
  • FIG. 6 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging apparatus of Example 1.
  • FIG. 11 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging apparatus according to Example 2.
  • FIG. 14 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging apparatus according to Example 3.
  • FIG. 15 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging apparatus of Example 3.
  • FIG. 16 is a diagram illustrating a configuration example of an imaging device and a keyer device that execute the processing of Example 3.
  • FIG. 17 is a diagram illustrating a configuration example of the digital signal processing unit of the imaging apparatus according to Example 4.
  • FIG. 18 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging apparatus according to Example 4.
  • FIG. 19 is a diagram illustrating a specific example of processing executed by the digital signal processing unit of the imaging apparatus according to Example 4.
  • FIG. 20 is a diagram illustrating a configuration example of the digital signal processing unit of an imaging device that executes the processing of Examples 1 to 4.
  • FIG. 21 is a diagram illustrating a hardware configuration example of an image processing device that executes processing using data input from an imaging device.
  • 1. Configuration example of the imaging apparatus of the present disclosure
  • 2. Configuration of phase difference detection pixels and outline of focus control using detection signals from phase difference detection pixels
  • 3. Processing using detection information of phase difference detection pixels
  • 3-1. Embodiment 1: Embodiment in which an image blend ratio is calculated using detection information of phase difference detection pixels
  • 3-2. Embodiment 2: Embodiment in which distance information applied to three-dimensional image generation is calculated using detection information of phase difference detection pixels
  • 3-3. Embodiment 3: Embodiment in which a mask area for background image synthesis is determined using detection information of phase difference detection pixels
  • 3-4. Embodiment 4: Embodiment in which an image stabilization amount for each pixel area is calculated using detection information of phase difference detection pixels
  • FIG. 1 is a block diagram showing a configuration example of an imaging device 100 of the present disclosure.
  • A configuration example of the imaging device 100 of the present disclosure will be described with reference to FIG. 1.
  • Incident light passing through the focus lens 101 and the zoom lens 102 is input to an imaging device 103 such as a CMOS or CCD, and photoelectrically converted in the imaging device 103 .
  • The image pickup device 103 has a plurality of pixels, each having a photodiode, arranged two-dimensionally in a matrix. It has normal pixels, in which R (red), G (green), and B (blue) color filters are arranged, and phase difference detection pixels for pupil-dividing subject light and performing focus detection.
  • The normal pixels of the image sensor 103 generate analog electrical signals (image signals) of the R (red), G (green), and B (blue) color components of the subject image, and output R, G, and B color image signals.
  • The phase difference detection pixels of the image sensor 103 output phase difference detection signals (detection information).
  • The phase difference detection signal (detection information) is a signal mainly used for autofocus control. The configuration of the phase difference detection pixels, the phase difference detection signals they generate, and the focus control using the phase difference detection signals will be described later in detail.
  • Photoelectrically converted data output from the image sensor 103 in this manner includes RGB image signals and phase difference detection signals from phase difference detection pixels. Each of these signals is input to the analog signal processing unit 104 , subjected to processing such as noise removal in the analog signal processing unit 104 , and converted into a digital signal in the A/D conversion unit 105 .
  • The digital signal converted by the A/D conversion unit 105 is input to a digital signal processing unit (DSP) 108 and subjected to various signal processing.
  • Various image signal processing such as demosaic processing, white balance adjustment, gamma correction, etc. is performed on the RGB image signal, and the processed image is recorded in a recording device 115 such as a flash memory. Further, it is displayed on the monitor 117 and viewfinder (EVF) 116 .
  • An image through the lens is displayed as a through image on the monitor 117 and the viewfinder (EVF) 116 regardless of whether or not shooting is performed.
  • Phase difference detection pixel information (detection signal) output from the phase difference detection pixels of the image sensor 103 is also input to the digital signal processing unit (DSP) 108 via the AD conversion unit 106 .
  • The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signals), and calculates the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)) for the subject to be focused (focusing object).
  • An input unit (operation unit) 118 is an operation unit including an input unit for inputting various operation information such as shutter and zoom buttons on the camera body, a mode dial for setting the shooting mode, and the like.
  • The control unit 110 has a CPU, and controls various processes executed by the imaging device according to programs stored in advance in a memory (ROM) 120 or the like.
  • The memory (EEPROM) 119 is a non-volatile memory and stores image data, various auxiliary information, programs, and the like.
  • The memory (ROM) 120 stores programs, calculation parameters, etc. used by the control unit (CPU) 110.
  • The memory (RAM) 121 stores programs used by the control unit (CPU) 110, the AF control unit 112a, and the like, as well as parameters that change as appropriate during program execution.
  • The gyro 131 is a sensor that measures the tilt, angle, angular velocity, etc. of the imaging device 100.
  • The detection information of the gyro 131 is used, for example, for calculating the amount of camera shake during image capturing.
  • The AF control unit 112a drives the focus lens drive motor 113a provided for the focus lens 101 and executes autofocus control (AF control) processing. For example, the focus lens 101 is moved to its in-focus position for the subject included in the area selected by the user to obtain the focused state.
  • The zoom control unit 112b drives the zoom lens driving motor 113b provided for the zoom lens 102.
  • The vertical driver 107 drives the image sensor (CCD) 103.
  • The timing generator 106 generates control signals for the processing timings of the image sensor 103 and the analog signal processing unit 104, and controls the processing timings of these processing units. Note that the focus lens 101 is driven in the optical axis direction under the control of the AF control unit 112a.
  • FIG. 2 is a diagram showing a pixel configuration example of the image sensor 103.
  • FIG. 2 shows a pixel configuration example of the image sensor 103 corresponding to (A) a partial area of the captured image.
  • In FIG. 2, the vertical direction is the Y-axis, the horizontal direction is the X-axis, and one pixel is indicated by one square.
  • The RGB pixels shown in FIG. 2 are pixels for normal image capturing and have, for example, a Bayer array configuration.
  • The following two data outputs are performed individually from the imaging device 103: (1) image signals (RGB image signals) from the normal pixels, and (2) phase difference detection signals (detection information) from the phase difference detection pixels.
  • An outline of focus detection processing of the phase difference detection method will be described with reference to FIGS. 3 to 5.
  • In the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as focus detection sensors, and the focus lens is set to the in-focus position (focus position) based on this defocus amount.
  • The other phase difference detection pixel (hereinafter referred to as the "second phase difference detection pixel") Pb includes a first light shielding plate AS1 having a slit-shaped first opening OP1, and a second light shielding plate AS2 arranged below the first light shielding plate AS1 and having a slit-shaped second opening OP2.
  • The first opening OP1 in the second phase difference detection pixel Pb is provided at a position biased in the direction opposite to the specific direction with respect to the central axis CL.
  • The second opening OP2 in the second phase difference detection pixel Pb is provided at a position biased in the specific direction with respect to the central axis CL.
  • The light flux Tb that has passed through the left pupil region Qb of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pb, through the first opening OP1 of the first light shielding plate AS1, and further through the second opening OP2 of the second light shielding plate AS2, and is received by the light receiving element PD of the second phase difference detection pixel Pb.
  • FIG. 4 shows an example of the output of the light-receiving element obtained at each of the pixels Pa and Pb.
  • The output line from the pixel Pa and the output line from the pixel Pb are signals having a predetermined shift amount Sf.
  • FIG. 5(a) shows the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is set at the position corresponding to the subject distance and focus is achieved, that is, in the in-focus state.
  • FIGS. 5(b1) and 5(b2) show the shift amounts generated between the pixels Pa and Pb when the focus lens is not set at the position corresponding to the subject distance, that is, in the out-of-focus state.
  • (b1) is an example in which the shift amount is larger than in the in-focus state, and (b2) is an example in which the shift amount is smaller than in the in-focus state.
  • This process is the focusing process according to the "phase difference detection method". Focusing processing according to this method enables the focus lens to be set to the in-focus position, that is, to a position according to the object distance.
  • The shift amount described with reference to FIG. 5 can be measured for each set of the pixels Pa and Pb, which are the phase difference detection pixels configured in the image sensor shown in FIG. 2. It is therefore possible to individually calculate the in-focus position (focus point) and the defocus amount for the subject image captured in each pixel combination area.
  • (Example 1) Example of calculating an image blend ratio using detection information of phase difference detection pixels
  • (Example 2) Example of calculating distance information applied to three-dimensional image generation using detection information of phase difference detection pixels
  • (Example 3) Example of determining a mask area for background image synthesis using detection information of phase difference detection pixels, and generating an image by synthesizing a mask image with the area other than the selected subject area
  • (Example 4) Example of calculating an image stabilization amount for each pixel area using detection information of phase difference detection pixels
  • [Example 1: Example of calculating an image blend ratio using detection information of phase difference detection pixels]
  • FIG. 6 is a block diagram for explaining the configuration of the first embodiment.
  • FIG. 6 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • As shown in FIG. 6, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, an image information acquisition unit 202, a defocus amount calculation unit 203, an AF control signal generation unit 204, an image blend ratio calculation unit 205, an image signal processing unit 206, and an image blend processing execution unit 221.
  • The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, the RGB image signals and the phase difference detection signals (detection information) output from the phase difference detection pixels.
  • The phase difference information acquisition unit 201 shown in FIG. 6 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signals (detection information) that are the outputs of the phase difference detection pixels.
  • The image information acquisition unit 202 shown in FIG. 6 selects only the image signals (for example, RGB image signals) from the input signal from the A/D conversion unit 105.
  • The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206.
  • The image signal processing unit 206 performs various image signal processing such as demosaicing, white balance adjustment, and gamma correction on the image signal, and inputs the processed image to the image blend processing execution unit 221.
  • The phase difference information acquisition unit 201 outputs the selected phase difference detection signals (detection information) to the defocus amount calculation unit 203.
  • The defocus amount calculation unit 203 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area unit, for example, for each pixel area composed of a plurality of pixels.
  • As described above, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as focus detection sensors.
  • The AF control signal generation unit 204 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The shift amount described above with reference to FIG. 5 is measured in units of a set of the pixels Pa and Pb, which are the phase difference detection pixels configured in the imaging element shown in FIG. 2. That is, the defocus amount calculation unit 203 can calculate the defocus amount of the subject image for each minute pixel area of the captured image.
  • FIG. 7 shows an example of a pixel region that serves as a defocus amount calculation unit.
  • FIG. 7 shows the pixel configuration of the imaging element 103 similar to that of FIG. 2 described above.
  • The phase difference detection pixels 151 for acquiring phase difference information are discretely set in some rows of the RGB pixels having the Bayer array.
  • A phase difference detection pixel is configured by a pair of a right-opening phase difference detection pixel Pa and a left-opening phase difference detection pixel Pb.
  • The pixel region that serves as the unit for calculating the defocus amount can be set, for example, as the pixel region 152 shown in FIG. 7.
  • The example shown in FIG. 7 is an example in which the pixel region 152, which is the unit for calculating the defocus amount, is a fine pixel region of n × m pixels, such as 6 × 5 pixels.
  • A plurality of sets of phase difference detection pixels are included in the n × m fine pixel area, and the shift amount described above with reference to FIG. 5 is measured from each of these sets, as illustrated in the sketch following this paragraph.
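As a rough illustration of this per-region measurement, the following Python sketch estimates the shift between the Pa and Pb signal lines in each n × m region by mean-removed correlation and converts it to a defocus amount. The function names, the correlation search, and the linear shift-to-defocus factor are assumptions for illustration; the patent does not specify the implementation.

```python
import numpy as np

def estimate_shift(pa_line: np.ndarray, pb_line: np.ndarray, max_shift: int = 8) -> int:
    """Estimate the lateral shift between the Pa and Pb pixel outputs by
    searching for the offset with the highest mean-removed correlation."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if abs(s) >= len(pa_line):
            continue  # skip shifts larger than the signal length
        a = pa_line[max(0, s):len(pa_line) + min(0, s)]
        b = pb_line[max(0, -s):len(pb_line) + min(0, -s)]
        score = np.dot(a - a.mean(), b - b.mean())
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

def defocus_map(pa: np.ndarray, pb: np.ndarray, region: tuple = (6, 5),
                shift_to_defocus: float = 1.0) -> np.ndarray:
    """Compute a defocus amount for each n x m pixel region.
    pa, pb: 2D arrays of the Pa/Pb phase difference detection pixel outputs.
    shift_to_defocus: assumed linear conversion factor (lens dependent)."""
    n, m = region
    rows, cols = pa.shape[0] // n, pa.shape[1] // m
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            line_a = pa[r*n:(r+1)*n, c*m:(c+1)*m].mean(axis=0)  # average rows in the region
            line_b = pb[r*n:(r+1)*n, c*m:(c+1)*m].mean(axis=0)
            out[r, c] = estimate_shift(line_a, line_b) * shift_to_defocus
    return out
```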
  • The image blend ratio calculation unit 205 receives the fine defocus amount for each pixel area of the captured image calculated by the defocus amount calculation unit 203, and calculates, based on this defocus amount for each pixel area, the image blend ratio to be applied to the process of synthesizing two images.
  • One of the images to be combined is the captured image output by the image signal processing unit 206.
  • The other image is, for example, an image stored in advance in the storage unit, such as a background image.
  • The image blend ratio calculation unit 205 calculates the image blend ratio for each pixel area based on the fine defocus amount for each pixel area. That is, the image blend ratio to be applied to the process of synthesizing the two images is calculated for each fine pixel area such as n × m pixels, and output to the image blend processing execution unit 221.
  • The image blend processing execution unit 221 executes blend processing, that is, synthesis processing, of the two images according to the image blend ratio for each pixel area input from the image blend ratio calculation unit 205; a sketch of this calculation and blending follows.
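A minimal sketch of the ratio calculation and blending, following the threshold rule given later in the enumerated configurations (blend ratio 100% of the captured image at or below a threshold Th1, 0% at or above Th2, varying in between); the linear interpolation between Th1 and Th2 and the array layout are assumptions:

```python
import numpy as np

def blend_ratio(defocus: np.ndarray, th1: float, th2: float) -> np.ndarray:
    """Per-region blend ratio of the captured image from its defocus amount:
    <= th1 -> 1.0 (100% captured image), >= th2 -> 0.0, linear in between."""
    d = np.abs(defocus)
    return np.clip((th2 - d) / (th2 - th1), 0.0, 1.0)

def blend_images(captured: np.ndarray, background: np.ndarray,
                 ratio_per_region: np.ndarray, region: tuple = (6, 5)) -> np.ndarray:
    """Blend the captured image with a background using the per-region ratio,
    expanded to pixel resolution by repetition (nearest-neighbour upsampling)."""
    n, m = region
    alpha = np.kron(ratio_per_region, np.ones((n, m)))[..., None]
    return alpha * captured + (1.0 - alpha) * background
```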
  • FIG. 8 shows an example of processing in which the image blend processing execution unit 221 receives the (A) captured image output from the image signal processing unit 206 and blends it with a background image stored in advance in the storage unit to generate a blended image.
  • The image defining the blend ratio between the (A) photographed image and the background image to be blended is the "(B) blend ratio output image" shown in FIG. 8.
  • The image blend ratio calculation unit 205 generates the "(B) blend ratio output image", in which pixel values having brightness corresponding to the image blend ratio for each pixel area are set.
  • The photographed image (A) output from the image signal processing unit 206 is, for example, an image photographed with a "stuffed bear" placed outdoors, taken with the focus set on the "stuffed bear". Therefore, the "stuffed bear" region is a focused image, but the background portion is not in focus and is a blurred image.
  • The user generates a blended image (composite image) by replacing this blurred background image area with the background image stored in advance in the storage unit.
  • This blended image is the "(C) blend image" generated by the image blend processing execution unit 221 shown in FIG. 8.
  • When blending the background image stored in advance in the storage unit with the (A) captured image, the image blend processing execution unit 221 shown in FIG. 8 performs blend processing that applies a fine blend ratio for each pixel area.
  • The "(B) blend ratio output image" shown in FIG. 8 is an image in which the image blend ratio for each pixel region is output as the pixel value of each pixel.
  • The blend ratio between the (A) photographed image and the background image obtained from the storage unit changes gradually.
  • The image blend ratio calculation unit 205 shown in FIG. 8 generates the "(B) blend ratio output image" having such pixel value output values and outputs it to the image blend processing execution unit 221.
  • [Example 2: Example of calculating distance information applied to three-dimensional image generation using detection information of phase difference detection pixels]
  • The subject to be set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user, such as a person. Images of other subjects, such as the background, are out of focus and appear blurred.
  • The defocus amount calculation unit 203 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance, for each minute pixel area such as n × m pixels.
  • The distance information calculation unit 207 receives the fine defocus amount for each pixel area of the captured image calculated by the defocus amount calculation unit 203, and calculates distance information for each pixel area of the captured image based on this defocus amount for each pixel area. The distance information calculation unit 207 generates, for example, a depth map indicating the distance value for each image area as pixel values (e.g., 0 to 255).
  • The distance information calculation unit 207 outputs the calculated distance information for each pixel area to the three-dimensional image generation unit 222.
  • The three-dimensional image generation unit 222 receives the captured image from the image signal processing unit 206 and further receives the distance information for each pixel area from the distance information calculation unit 207.
  • The three-dimensional image generation unit 222 generates a three-dimensional image corresponding to the captured image according to the distance information for each pixel area input from the distance information calculation unit 207.
  • The photographed image (A) output from the image signal processing unit 206 is, for example, an image photographed with a "stuffed bear" placed outdoors, taken with the focus set on the "stuffed bear".
  • The (C) subject distance information used by the three-dimensional image generation unit 222 to generate a three-dimensional image is the (B) distance information (depth map) for each pixel region generated by the distance information calculation unit 207.
  • The (B) distance information (depth map) for each pixel area generated by the distance information calculation unit 207 is a depth map showing, as a pixel value (0 to 255, for example), the value of the object distance for each image area calculated based on the defocus amount for each minute pixel area such as n × m pixels calculated by the defocus amount calculation unit 203.
  • A high luminance (high pixel value) area that looks white is an area where the object distance is short.
  • A low luminance (low pixel value) area that looks black is an area where the subject distance is long.
  • The subject distance can be calculated from the defocus amount by applying parameters such as the focal length of the lens (focus lens) of the imaging device. Specifically, the object distance for each pixel area is calculated according to (Equation 1), a standard form of which is sketched after this paragraph.
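(Equation 1) itself is not reproduced in this text. Purely as an illustrative assumption, a standard thin-lens form that such a relation typically takes is the following, where f is the focal length, b the distance from the lens to the in-focus image plane, d the defocus amount of a pixel area, and a the object distance:

```latex
\frac{1}{a} + \frac{1}{b + d} = \frac{1}{f}
\qquad\Longrightarrow\qquad
a = \frac{f\,(b + d)}{(b + d) - f}
```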
  • The distance information calculation unit 207 calculates the object distance for each pixel area according to the above (Equation 1).
  • The distance information calculation unit 207 outputs the calculated subject distance information for each pixel area to the three-dimensional image generation unit 222; a small sketch of generating such a depth map follows.
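A small Python sketch of producing such a depth map from the per-region defocus map, assuming the thin-lens relation above and 8-bit quantization in which brighter means nearer, matching the description that white areas are short-distance areas; both choices are illustrative assumptions:

```python
import numpy as np

def distance_from_defocus(defocus: np.ndarray, f: float, b: float) -> np.ndarray:
    """Object distance per pixel area via the assumed thin-lens relation
    1/a + 1/(b + d) = 1/f  ->  a = f*(b + d) / ((b + d) - f)."""
    img_dist = b + defocus
    return f * img_dist / (img_dist - f)

def depth_map_8bit(distance: np.ndarray) -> np.ndarray:
    """Quantize distances to 0..255 with brighter (255) meaning nearer."""
    near, far = float(distance.min()), float(distance.max())
    norm = (distance - near) / max(far - near, 1e-9)
    return np.round(255.0 * (1.0 - norm)).astype(np.uint8)
```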
  • The "(C) three-dimensional image" generated by the three-dimensional image generation unit 222 is a three-dimensional image that reflects the subject distance information in units of the minute areas generated by the distance information calculation unit 207, for example, minute pixel areas of n × m pixels, and is therefore a highly accurate three-dimensional image reflecting detailed distance information.
  • Note that the three-dimensional image generation unit 222 may also be provided in an application execution unit 150 configured outside the digital signal processing unit 108.
  • Alternatively, a configuration may be employed in which the three-dimensional image generation unit 222 is provided in an external device other than the imaging device 100, for example, an external device 180 such as a PC.
  • In this case, the distance information calculation unit 207 of the digital signal processing unit 108 of the imaging apparatus 100 outputs the calculated distance information for each pixel area to the external device 180 such as a PC.
  • The external device 180 such as a PC executes three-dimensional image generation processing using the distance information for each pixel area input from the distance information calculation unit 207 of the digital signal processing unit 108. In this way, processing can be performed using various configurations.
  • [Example 3: Example of determining a mask area for background image synthesis using detection information of phase difference detection pixels, and generating an image by synthesizing a mask image with the area other than the selected subject area]
  • As Example 3, an example will be described in which a mask area for background image synthesis is determined using the detection information of phase difference detection pixels, and an image is generated by synthesizing a mask image with the area other than the selected subject area.
  • FIG. 14 is a block diagram for explaining the configuration of the third embodiment.
  • FIG. 14 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
  • The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206.
  • The image signal processing unit 206 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and inputs the processed image to the mask synthesizing unit 211.
  • The phase difference information acquisition unit 201 selects the phase difference detection signals (detection information) that are the outputs of the phase difference detection pixels from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signals (detection information) to the defocus amount calculation unit 203.
  • The AF control signal generation unit 204 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 204, and sets the focus lens to, for example, the in-focus position (focus position) for the subject designated by the user.
  • The subject to be set at the in-focus position is not the entire image area of the captured image, but a subject specified by the user, such as a person. Images of other subjects, such as the background, are out of focus and appear blurred.
  • In the case of the green screen, if a green pixel area is included in the color of the output subject, the background image to be combined is also output in that area. Therefore, the green screen cannot be used when the output subject includes green.
  • Similarly, in the case of the blue screen, if a blue pixel area is included in the color of the output subject, the background image to be synthesized is also output in that area. Therefore, the blue screen cannot be used when the output subject contains blue.
  • The output subject color analysis unit 209 analyzes the colors of the output subject in order to set the color of the mask area to a color different from the colors of the output subject displayed on the composite image.
  • The mask output color determination unit 210 determines, as the color of the mask area, a color that is not included in the output subject analyzed by the output subject color analysis unit 209.
  • The mask area determination unit 208 receives the output subject selection information 232 from the input unit 118, and further receives the defocus amount for each fine pixel area of the captured image calculated by the defocus amount calculation unit 203.
  • The mask area determination unit 208 determines the mask area based on the output subject selection information 232 input from the input unit 118 and the defocus amount for each pixel area input from the defocus amount calculation unit 203. That is, it determines the mask area to be set to the green screen or blue screen.
  • The mask area determination unit 208 sets, as the mask area, an area not included in the output subject specified by the output subject selection information 232 input from the input unit 118. In this mask area determination process, the defocus amount for each pixel area input from the defocus amount calculation unit 203 is used as auxiliary information.
  • Specifically, a pixel region whose defocus amount differs from the defocus amount of the output subject specified by the output subject selection information 232 input from the input unit 118 by a predetermined threshold value or more is determined as the mask area, as in the sketch below.
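A sketch of this defocus-based mask decision, assuming the per-region defocus map from the earlier sketch and taking the median defocus of the user-selected subject region as its representative value; the median choice and the threshold value are illustrative assumptions:

```python
import numpy as np

def mask_from_defocus(defocus: np.ndarray, subject_region: np.ndarray,
                      threshold: float) -> np.ndarray:
    """Mark as mask (True) every pixel area whose defocus amount differs from
    the output subject's representative defocus by at least `threshold`.
    subject_region: boolean map of the user-selected output subject areas."""
    subject_defocus = np.median(defocus[subject_region])
    return np.abs(defocus - subject_defocus) >= threshold
```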
  • The mask area information determined by the mask area determination unit 208 is output to the mask synthesizing unit 211.
  • The mask synthesizing unit 211 receives the captured image from the image signal processing unit 206, the color information of the mask area from the mask output color determination unit 210, and the mask area information from the mask area determination unit 208.
  • The mask synthesizing unit 211 then sets a color mask in the determined mask setting area according to the mask area color information input from the mask output color determination unit 210. For example, a mask such as a green screen or blue screen is set.
  • The image synthesizing unit 223 generates a synthesized image by pasting another image, for example a background image input from the storage unit or the outside, onto the mask area of the mask setting image generated by the mask synthesizing unit 211; a sketch of the color selection and compositing follows.
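Continuing the sketch, choosing a chroma color that is absent from the output subject and then compositing a background into the masked area might look as follows; the candidate color list, the crude channel-presence test, and the hard (non-feathered) keying are simplifications assumed for illustration:

```python
import numpy as np

# Candidate chroma-key colors (RGB): green screen and blue screen.
CANDIDATES = {"green": (0, 255, 0), "blue": (0, 0, 255)}

def pick_mask_color(subject_pixels: np.ndarray) -> tuple:
    """Choose a chroma color whose dominant channel is not strongly present
    in the output subject (crude stand-in for units 209 and 210).
    subject_pixels: (N, 3) array of the output subject's RGB values."""
    for name, rgb in CANDIDATES.items():
        channel = int(np.argmax(rgb))
        if not np.any(subject_pixels[:, channel] > 200):
            return rgb
    raise ValueError("no safe chroma color among candidates")

def mask_setting_image(captured: np.ndarray, mask_px: np.ndarray,
                       color: tuple) -> np.ndarray:
    """(C) mask setting image: paint the mask area with the chroma color."""
    out = captured.copy()
    out[mask_px] = color
    return out

def composite(mask_image: np.ndarray, mask_px: np.ndarray,
              background: np.ndarray) -> np.ndarray:
    """(D) composite image: paste the background into the mask area."""
    out = mask_image.copy()
    out[mask_px] = background[mask_px]
    return out
```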
  • That is, the image synthesizing unit 223 receives the (C) mask setting image from the mask synthesizing unit 211, and pastes another image, input from the storage unit or the outside, into the mask area of the (C) mask setting image. FIG. 15 shows an example of processing in the case of generating a composite image in this way.
  • The user displays the (A) captured image on the display unit, selects an output subject that is not to be masked, and inputs the output subject selection information 232 via the input unit 118.
  • The example shown in FIG. 15 is an example in which the user selects the "stuffed bear" area in the (A) captured image as the output subject that is not to be masked.
  • The output subject selection information 232 input via the input unit 118 is input to the output subject color analysis unit 209 and the mask area determination unit 208.
  • The output subject color analysis unit 209 analyzes the colors of the output subject in order to set the color of the mask area to a color different from the colors of the output subject displayed on the composite image.
  • The mask area determination unit 208 sets, as the mask area, an area that is not included in the output subject specified by the output subject selection information 232 input from the input unit 118. In this mask area determination process, the defocus amount for each pixel area input from the defocus amount calculation unit 203 is used as auxiliary information.
  • Specifically, a pixel area having a defocus amount that differs by a predetermined threshold value or more from the defocus amount of the output subject specified by the output subject selection information 232 input from the input unit 118 is determined as the mask area.
  • As a result, a "(B) mask area designating image" as shown in FIG. 15 is generated and output to the mask synthesizing unit 211.
  • The (B) mask area designating image shown in FIG. 15 is an example in which the pixel area other than the "stuffed bear" area in the (A) photographed image, selected by the user as the output subject not to be masked, is determined as the mask area.
  • The defocus amount of the "stuffed bear" area in the (A) photographed image is almost 0, that is, it is the in-focus area, while the background area other than the "stuffed bear" in the (A) photographed image is out of focus and has a large defocus amount.
  • The mask synthesizing unit 211 determines the mask setting area according to the mask area information input from the mask area determination unit 208, for example, the (B) mask area designating image shown in FIG. 15.
  • The mask synthesizing unit 211 then sets a color mask in the determined mask setting area according to the mask area color information input from the mask output color determination unit 210.
  • For example, a mask such as a green screen or blue screen is set to generate the (C) mask setting image shown in FIG. 15.
  • The (C) mask setting image generated by the mask synthesizing unit 211 is an image in which the area other than the output subject specified by the output subject selection information 232 input from the input unit 118 is replaced with a mask image such as a green screen or blue screen.
  • The (C) mask setting image generated by the mask synthesizing unit 211 is output to the image synthesizing unit 223.
  • The image synthesizing unit 223 generates the (D) composite image shown in FIG. 15 by pasting another image input from, for example, the storage unit or the outside, such as a background image, onto the mask area of the (C) mask setting image generated by the mask synthesizing unit 211.
  • As described above, the digital signal processing unit 108 of this example includes the image signal processing unit 206, the mask area determination unit 208, the output subject color analysis unit 209, the mask output color determination unit 210, the mask synthesizing unit 211, and the image synthesizing unit 223, although this configuration is an example.
  • In this case, the mask synthesizing unit 211 of the digital signal processing unit 108 of the imaging device 100 outputs the generated mask composite image to the external keyer device 240.
  • The keying processing unit (image synthesizing unit) 241 in the keyer device 240 generates a composite image by pasting another image, for example a background image input from a storage unit, into the mask area of the mask composite image input from the mask synthesizing unit 211 of the digital signal processing unit 108.
  • The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206.
  • The image signal processing unit 206 performs various image signal processing such as demosaic processing, white balance adjustment, and gamma correction on the image signal, and inputs the processed image to the camera shake correction execution unit 264.
  • The AF control signal generation unit 204 generates, based on the defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
  • The defocus amount calculation unit 203 calculates the defocus amount, that is, the amount of deviation between the in-focus distance and the subject distance, for each minute pixel area such as n × m pixels.
  • The distance information calculation unit 251 receives the fine defocus amount for each pixel area of the captured image calculated by the defocus amount calculation unit 203, and calculates distance information for each pixel area of the captured image based on this defocus amount for each pixel area. The distance information calculation unit 251 generates, for example, a depth map indicating the distance value for each image area as pixel values (e.g., 0 to 255).
  • The distance information calculation processing in the distance information calculation unit 251 is the same as the distance information calculation processing executed by the distance information calculation unit 207 of the second embodiment described above; the subject distance for each pixel area is calculated from the defocus amount for each pixel area according to (Equation 1). The distance information calculation unit 251 outputs the calculated distance information for each pixel area to the camera shake correction amount calculation unit 263.
  • The shake amount calculation unit 262 receives detection information such as the tilt, angle, and angular velocity of the imaging device 100 from the gyro 131, calculates the amount of camera shake at the time of image capturing, and outputs the calculated shake amount to the camera shake correction amount calculation unit 263.
  • The camera shake correction amount calculation unit 263 receives the amount of camera shake during image shooting from the camera shake amount calculation unit 262 and also receives the distance information for each pixel area from the distance information calculation unit 251.
  • The camera shake correction amount calculation unit 263 uses these two pieces of input information to calculate the camera shake correction amount according to the subject distance of the captured image.
  • A specific example of the calculation processing of the camera shake correction amount according to the subject distance by the camera shake correction amount calculation unit 263 will be described with reference to FIG. 18.
  • FIG. 18 shows (A) a captured image captured by the imaging device 100 and (B) distance information (depth map) generated by the distance information calculation unit 251 based on the (A) captured image.
  • The (B) distance information (depth map) for each pixel area generated by the distance information calculation unit 251 is a depth map showing, as a pixel value (0 to 255, for example), the value of the object distance for each image area calculated based on the defocus amount for each minute pixel area such as n × m pixels calculated by the defocus amount calculation unit 203.
  • A high luminance (high pixel value) area that looks white is an area where the object distance is short.
  • A low luminance (low pixel value) area that looks black is an area where the subject distance is long.
  • The camera shake correction amount calculation unit 263 determines the correction amount according to the subject distance of the captured image, using the amount of camera shake during image shooting input from the camera shake amount calculation unit 262 and the (B) distance information (depth map) generated by the distance information calculation unit 251.
  • The shake correction amount calculation unit 263 determines the correction amount as follows according to the subject distance of the captured image.
  • A small correction amount is set for a pixel area with a long subject distance.
  • A medium correction amount is set for a pixel area with a medium subject distance.
  • A large correction amount is set for a pixel area with a short subject distance. This is because the closer the subject is, the greater the shake of the image caused by camera shake.
  • In this way, the camera shake correction amount calculation unit 263 calculates a correction amount (shake correction amount) for each pixel area according to the distance information for each pixel area, using the amount of camera shake during image shooting input from the camera shake amount calculation unit 262 and the (B) distance information (depth map) generated by the distance information calculation unit 251; a sketch follows.
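As a rough sketch of this distance-weighted correction, one might scale a global gyro-derived shake displacement inversely with the per-region subject distance; the inverse-distance weighting and the reference distance are assumptions for illustration, since the text states only that nearer regions receive larger corrections:

```python
import numpy as np

def shake_correction_map(shake_px: float, distance: np.ndarray,
                         ref_distance: float) -> np.ndarray:
    """Per-region correction amount (pixels) from a global shake amount.
    Regions nearer than ref_distance receive a larger correction and farther
    regions a smaller one, since close subjects shift more under camera shake."""
    weight = ref_distance / np.maximum(distance, 1e-6)  # inverse-distance weight
    return shake_px * weight
```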
  • The camera shake correction amount for each pixel area calculated by the camera shake correction amount calculation unit 263 is input to the camera shake correction execution unit 264.
  • The camera shake correction execution unit 264 receives the camera shake correction amount for each pixel area calculated by the camera shake correction amount calculation unit 263, and executes camera shake correction processing on the captured image.
  • The camera shake correction execution unit 264 performs correction processing using the correction amount corresponding to the subject distance in pixel area units, according to the camera shake correction amount in pixel area units calculated by the camera shake correction amount calculation unit 263.
  • FIG. 19 shows an example of processing in which correction processing applying a correction amount according to the subject distance for each pixel area is performed on the (A) captured image output from the image signal processing unit 206, according to the camera shake correction amount for each pixel area calculated by the camera shake correction amount calculation unit 263.
  • The photographed image (A) output from the image signal processing unit 206 is, for example, an image photographed with a "stuffed bear" placed outdoors, taken with the focus set on the "stuffed bear".
  • The subject distance information used by the camera shake correction execution unit 264 to perform camera shake correction processing is the (B) distance information (depth map) for each pixel area generated by the distance information calculation unit 251.
  • The (B) distance information (depth map) for each pixel area generated by the distance information calculation unit 251 is a depth map showing, as a pixel value (0 to 255, for example), the value of the object distance for each image area calculated based on the defocus amount for each minute pixel area such as n × m pixels calculated by the defocus amount calculation unit 203.
  • A high luminance (high pixel value) area that looks white is an area where the object distance is short.
  • A low luminance (low pixel value) area that looks black is an area where the subject distance is long.
  • The process of calculating the subject distance from the defocus amount is the same as the distance information calculation process executed by the distance information calculation unit 207 of the second embodiment described above; the object distance for each pixel area is calculated from the defocus amount for each pixel area according to (Equation 1) described above.
  • The distance information calculation unit 251 outputs the calculated distance information for each pixel area to the camera shake correction amount calculation unit 263.
  • The camera shake correction amount calculation unit 263 receives the amount of camera shake during image shooting from the camera shake amount calculation unit 262 and also receives the distance information for each pixel area from the distance information calculation unit 251.
  • The camera shake correction amount calculation unit 263 uses these two pieces of input information to calculate the camera shake correction amount according to the subject distance of the captured image. As described above with reference to FIG. 18, the shake correction amount calculation unit 263 determines the correction amount as follows according to the subject distance of the captured image: a small correction amount is set for a pixel area with a long subject distance; a medium correction amount is set for a pixel area with a medium subject distance; and a large correction amount is set for a pixel area with a short subject distance. This is because the closer the subject is, the greater the shake of the image caused by camera shake.
  • In this way, the camera shake correction amount calculation unit 263 calculates a correction amount (shake correction amount) for each pixel area according to the distance information for each pixel area, using the amount of camera shake during image shooting input from the camera shake amount calculation unit 262 and the (B) distance information (depth map) generated by the distance information calculation unit 251.
  • The camera shake correction amount for each pixel area calculated by the camera shake correction amount calculation unit 263 is input to the camera shake correction execution unit 264.
  • The camera shake correction execution unit 264 receives the camera shake correction amount for each pixel area calculated by the camera shake correction amount calculation unit 263, and executes camera shake correction processing on the captured image.
  • The camera shake correction execution unit 264 performs correction processing using the correction amount corresponding to the subject distance in pixel area units, according to the camera shake correction amount in pixel area units calculated by the camera shake correction amount calculation unit 263.
  • In this way, the detection information of the phase difference detection pixels is used to calculate the subject distance information for each minute pixel area, and the calculated subject distance information is used to determine the correction amount of the camera shake correction. That is, the correction amount is reduced for a pixel area with a longer subject distance, and increased for a pixel area with a shorter subject distance.
  • Note that part of the data processing may be performed by an external device different from the imaging device 100.
  • For example, the shake correction execution unit 264 can be configured outside the digital signal processing unit 108.
  • Alternatively, an external device other than the imaging device 100, for example, an external device 180 such as a PC, may be provided with an image correction unit.
  • In this case, the camera shake correction amount calculation unit 263 of the digital signal processing unit 108 of the imaging apparatus 100 outputs the calculated camera shake correction amount for each pixel area to the external device 180 such as a PC.
  • The external device 180 such as a PC performs camera shake correction processing corresponding to the subject distance in pixel area units, using the camera shake correction amount in pixel area units calculated by the camera shake correction amount calculation unit 263 of the digital signal processing unit 108. In this way, processing can be performed using various configurations.
  • Each of these four embodiments can be configured independently, but a device or system can also be configured by combining any plurality of the embodiments.
  • For example, the purpose may be determined from the subject, the detection information, preset settings, etc., and one or more of the examples may be selected and used, or one or more of the examples may be proposed to the user.
  • FIG. 20 is a diagram showing the configuration of the digital signal processing unit 108 of the imaging device 100 capable of executing all the processes of the above four embodiments.
  • FIG. 20 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging device 100 described with reference to FIG.
  • Some of the constituent elements of the digital signal processing unit 108 shown in FIG. 20 can be configured as constituent elements of the imaging apparatus outside the digital signal processing unit 108. It is also possible to provide some of the constituent elements of the digital signal processing unit 108 shown in FIG. 20 in an external device.
  • The storage unit 308 connected to the input/output interface 305 is composed of, for example, a flash memory, and stores programs executed by the CPU 301 and various data.
  • The communication unit 309 includes communication units for Wi-Fi communication, Bluetooth (registered trademark) (BT) communication, and other data communication via a network such as the Internet or a local area network, and communicates with external devices.
  • The drive 310 connected to the input/output interface 305 drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory such as a memory card, and records or reads data.
  • The imaging apparatus as described above, wherein the image blend ratio calculation unit sets the blend ratio of the captured image to a large ratio for a pixel region having a small defocus amount of the captured image.
  • The imaging apparatus according to any one of (1) to (3), wherein the image blend ratio calculation unit sets the blend ratio of the captured image to 100% for pixel regions in which the defocus amount of the captured image is equal to or less than a prescribed threshold value Th1, sets the blend ratio of the captured image to 0% for pixel regions in which the defocus amount of the captured image is equal to or greater than a prescribed threshold value Th2, and, for pixel regions in which the defocus amount of the captured image is greater than the prescribed threshold value Th1 and less than the prescribed threshold value Th2, sets the blend ratio of the captured image so as to change according to the defocus amount of the captured image.
  • a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of an imaging device;
  • An imaging apparatus comprising a distance information calculation unit that calculates a subject distance for each pixel area based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
  • the imaging apparatus according to (6), wherein the distance information calculation unit generates a distance image in which each pixel value has a brightness corresponding to the subject distance in pixel area units.
  • the imaging apparatus according to any one of (6) to (8), wherein the imaging device further includes a three-dimensional image generation unit that generates a three-dimensional image based on the captured image by applying the subject distance information for each pixel area calculated by the distance information calculation unit.
  • the imaging apparatus according to any one of (6) to (9), wherein the imaging device further includes a camera shake correction amount calculation unit that applies the subject distance information for each pixel area calculated by the distance information calculation unit to calculate a camera shake correction amount for each pixel area of the captured image.
  • the imaging apparatus according to (10), wherein the camera shake correction amount calculation unit sets a larger shake correction amount for a pixel area with a shorter subject distance and a smaller shake correction amount for a pixel area with a longer subject distance.
  • the imaging apparatus according to (10) or (11), wherein the imaging device further includes a camera shake correction execution unit that executes camera shake correction processing on the captured image, performing the correction in pixel area units according to the camera shake correction amount for each pixel area calculated by the camera shake correction amount calculation unit.
  • a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of an imaging device;
  • An imaging apparatus comprising a mask area determination unit that determines a mask area for the captured image based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
  • the imaging apparatus according to (13), wherein the mask area determination unit determines, as the mask area, a pixel region whose defocus amount differs from the defocus amount of the output subject specified by the user by a predetermined threshold value or more.
  • the imaging apparatus according to (13) or (14), wherein the mask area determination unit generates a mask area indication image in which the mask area is distinguishable from the area of the output subject specified by the user.
  • the imaging apparatus according to any one of (13) to (15), wherein the imaging device further includes a mask synthesizing unit that synthesizes a mask image of a predetermined color with the mask area determined by the mask area determination unit to generate a mask setting image.
  • the imaging apparatus according to (16), wherein the imaging device further includes an image synthesizing unit that superimposes another image on the mask area of the mask setting image generated by the mask synthesizing unit to generate an image synthesized with the output subject specified by the user.
  • the imaging apparatus according to (16) or (17), wherein the mask synthesizing unit outputs the generated mask setting image to an external device that executes composite image generation processing.
  • an image processing method executed in an imaging device, including a defocus amount calculation step in which a defocus amount calculation unit calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device, and an image blend ratio calculation step in which the defocus amount calculation unit calculates an image blend ratio between the captured image and another image, wherein the image blend ratio calculation step calculates the image blend ratio for each pixel area based on the defocus amount for each pixel area calculated in the defocus amount calculation step.
  • a program for executing image processing in an imaging device, the program causing a defocus amount calculation unit to execute a defocus amount calculation step of calculating a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device and an image blend ratio calculation step of calculating an image blend ratio between the captured image and another image, the image blend ratio for each pixel area being calculated based on the defocus amount for each pixel area calculated in the defocus amount calculation step.
  • a program recording the processing sequence can be installed in the memory of a computer built into dedicated hardware and executed, or it can be installed and executed on a general-purpose computer capable of executing various kinds of processing.
  • the program can be pre-recorded on a recording medium.
  • the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed in a recording medium such as an internal hard disk.
  • various processes are executed, such as determining a mask area for the captured image based on the defocus amount for each pixel area and generating a green screen image.

Abstract

Provided are a device and a method for calculating an image blend ratio and a subject distance in pixel region units on the basis of a defocus amount in pixel region units of a captured image, and for carrying out various kinds of image processing using the calculated data. The device comprises a defocus amount calculation unit for calculating a defocus amount in pixel region units of a captured image output from an imaging element of an imaging device. Using the defocus amount in pixel region units calculated by the defocus amount calculation unit, it calculates an image blend ratio in pixel region units to carry out image blend processing, calculates a subject distance in pixel region units on the basis of the defocus amount to carry out three-dimensional image generation processing using that distance, and carries out image stabilization amount calculation processing in pixel region units. Alternatively, the device determines a mask region for the captured image on the basis of the defocus amount in pixel region units and carries out various processes such as green-background image generation.

Description

IMAGING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM
The present disclosure relates to imaging devices, image processing methods, and programs. More specifically, it relates to an imaging apparatus, an image processing method, and a program that apply detection information of image plane phase difference detection pixels not only to focus control but also to various other processes.
In imaging devices (cameras), there is an image plane phase difference method using an image plane phase difference pixel as a method for detecting a focus position (in-focus position).
This image plane phase difference method is a method that divides the light passing through the imaging lens into two pupil regions to generate a pair of images, and detects the focus position (in-focus position) by analyzing the phase difference between the generated pair of images.
In focus control using the image plane phase difference method, the light flux that has passed through the imaging lens is split into two, and the two split light fluxes are received by a set of image plane phase difference detection pixels that function as focus detection sensors. The focus lens is adjusted by detecting the degree of focus based on the shift amount of the signals output according to the amount of light received by each of the set of image plane phase difference detection pixels.
The detection information of the image plane phase difference detection pixels is mainly used for focus control, but it can be applied to other processing as well as focus control.
For example, Patent Document 1 (Japanese Patent Application Laid-Open No. 2012-142952) discloses a configuration that performs blurring processing on a captured image using detection information of an image plane phase difference detection pixel.
Specifically, the detection information of the image plane phase difference detection pixels is used to analyze the distribution of the defocus amount in the captured image, and the analysis result is used to apply blur processing to the captured image.
Further, Patent Document 2 (Japanese Patent Application Laid-Open No. 2019-035967) discloses a configuration that changes the gradation characteristics of the defocus amount and the distance information obtained from the detection information of the image plane phase difference detection pixels according to the output destination, and outputs the result.
For example, it discloses a configuration that generates and outputs different information based on the detection information of the image plane phase difference detection pixels according to the information required by the output destination, such as an output destination that requires distance resolution near the in-focus position or an output destination that requires distance measurement range information.
JP 2012-142952 A; JP 2019-035967 A
An object of the present disclosure is to provide an imaging device, an image processing method, and a program that realize a new configuration for using the detection information of image plane phase difference detection pixels.
For example, in an embodiment configuration of the present disclosure, a configuration is realized in which the detection information of the image plane phase difference detection pixels is used to calculate the image blend ratio when generating a composite image of a plurality of images.
Furthermore, in an embodiment configuration of the present disclosure, a configuration is realized in which distance information to be used for three-dimensional image generation is generated from detection information of image plane phase difference detection pixels.
Furthermore, in an embodiment configuration of the present disclosure, a configuration is realized in which information for determining a correction mode of camera shake correction is generated based on detection information of image plane phase difference detection pixels and camera shake correction is performed.
A first aspect of the present disclosure resides in an imaging device including:
a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device; and
an image blend ratio calculation unit that calculates an image blend ratio between the captured image and another image,
wherein the image blend ratio calculation unit calculates the image blend ratio for each pixel area based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
Furthermore, a second aspect of the present disclosure resides in an imaging device including:
a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device; and
a distance information calculation unit that calculates the subject distance for each pixel area based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
Furthermore, a third aspect of the present disclosure resides in an imaging device including:
a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device; and
a mask area determination unit that determines a mask area for the captured image based on the defocus amount for each pixel area calculated by the defocus amount calculation unit.
Furthermore, a fourth aspect of the present disclosure resides in an image processing method executed in an imaging device, including:
a defocus amount calculation step in which a defocus amount calculation unit calculates a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device; and
an image blend ratio calculation step in which the defocus amount calculation unit calculates an image blend ratio between the captured image and another image,
wherein the image blend ratio calculation step calculates the image blend ratio for each pixel area based on the defocus amount for each pixel area calculated in the defocus amount calculation step.
Furthermore, a fifth aspect of the present disclosure resides in a program for executing image processing in an imaging device, the program causing:
a defocus amount calculation unit to execute a defocus amount calculation step of calculating a defocus amount for each pixel area of a captured image output from an image sensor of the imaging device; and
the defocus amount calculation unit to execute an image blend ratio calculation step of calculating an image blend ratio between the captured image and another image,
wherein the image blend ratio calculation step calculates the image blend ratio for each pixel area based on the defocus amount for each pixel area calculated in the defocus amount calculation step.
It should be noted that the program of the present disclosure is, for example, a program that can be provided in a computer-readable format to an information processing device or computer system capable of executing various program codes, via a storage medium or communication medium. By providing such a program in a computer-readable format, processing according to the program is realized on the information processing device or computer system.
Still other objects, features, and advantages of the present disclosure will become apparent from the more detailed description based on the embodiments of the present disclosure and the accompanying drawings described later. In this specification, a system is a logical collective configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same housing.
According to the configuration of the embodiment of the present disclosure, the image blend ratio and the subject distance are calculated for each pixel region based on the defocus amount for each pixel region of the captured image, and various image processing is performed using the calculated data. An apparatus and method are realized.
Specifically, for example, the configuration includes a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an image sensor of an imaging device. Using the defocus amount for each pixel area calculated by the defocus amount calculation unit, it calculates an image blend ratio for each pixel area and performs image blend processing; it also calculates a subject distance for each pixel area based on the defocus amount for each pixel area and performs three-dimensional image generation processing using the subject distance for each pixel area, as well as camera shake correction amount calculation processing for each pixel area. Alternatively, a mask area for the captured image is determined based on the defocus amount for each pixel area, and various processes such as green screen image generation are performed.
With this configuration, an apparatus and method are realized that calculate the image blend ratio and the subject distance for each pixel area based on the defocus amount for each pixel area of the captured image, and execute various image processing using the calculated data.
Note that the effects described in this specification are merely examples and are not limiting; additional effects may also be provided.
Brief Description of Drawings
FIG. 1 is a diagram explaining a configuration example of the imaging device of the present disclosure.
FIG. 2 is a diagram explaining a configuration example of an image sensor having phase difference detection pixels.
FIGS. 3 to 5 are diagrams explaining an outline of focus detection processing using the phase difference detection method.
FIG. 6 is a diagram explaining a configuration example of the digital signal processing unit of the imaging device of Example 1.
FIG. 7 is a diagram explaining a specific example of the pixel configuration of the image sensor and a pixel region as a defocus amount calculation unit.
FIGS. 8 and 9 are diagrams explaining specific examples of processing executed by the digital signal processing unit of the imaging device of Example 1.
FIGS. 10 and 11 are diagrams explaining configuration examples of devices that execute the processing of the present disclosure.
FIG. 12 is a diagram explaining a configuration example of the digital signal processing unit of the imaging device of Example 2.
FIG. 13 is a diagram explaining a specific example of processing executed by the digital signal processing unit of the imaging device of Example 2.
FIG. 14 is a diagram explaining a configuration example of the digital signal processing unit of the imaging device of Example 3.
FIG. 15 is a diagram explaining a specific example of processing executed by the digital signal processing unit of the imaging device of Example 3.
FIG. 16 is a diagram explaining a configuration example of an imaging device and a keyer device that execute the processing of Example 3.
FIG. 17 is a diagram explaining a configuration example of the digital signal processing unit of the imaging device of Example 4.
FIGS. 18 and 19 are diagrams explaining specific examples of processing executed by the digital signal processing unit of the imaging device of Example 4.
FIG. 20 is a diagram explaining a configuration example of the digital signal processing unit of an imaging device that executes the processing of Examples 1 to 4.
FIG. 21 is a diagram explaining a hardware configuration example of an image processing device that executes processing using data input from the imaging device.
Hereinafter, details of an imaging device, an image processing method, and a program according to the present disclosure will be described with reference to the drawings. The description will be made according to the following items.
1. Configuration example of the imaging device of the present disclosure
2. Configuration of phase difference detection pixels and outline of focus control using their detection signals
3. Processing using detection information of phase difference detection pixels
3-1. (Example 1) Calculating an image blend ratio using detection information of phase difference detection pixels
3-2. (Example 2) Calculating distance information applied to three-dimensional image generation using detection information of phase difference detection pixels
3-3. (Example 3) Determining a mask area for background image synthesis using detection information of phase difference detection pixels, and generating an image in which a mask image is synthesized in areas other than the selected subject area
3-4. (Example 4) Calculating a camera shake correction amount for each pixel area using detection information of phase difference detection pixels
4. Other examples
5. Hardware configuration example of an external device such as a PC connected to the imaging device
6. Summary of the configuration of the present disclosure
[1. Regarding the configuration example of the imaging device of the present disclosure]
First, a configuration example of an imaging device according to the present disclosure will be described.
FIG. 1 is a block diagram showing a configuration example of an imaging device 100 of the present disclosure.
A configuration example of an imaging device 100 of the present disclosure will be described with reference to FIG.
Incident light passing through the focus lens 101 and the zoom lens 102 is input to an imaging device 103 such as a CMOS or CCD, and photoelectrically converted in the imaging device 103 .
The image sensor 103 has a plurality of pixels, each including for example a photodiode, arranged two-dimensionally in a matrix. It has normal pixels, in which color filters of different spectral characteristics, for example R (red), G (green), and B (blue), are arranged on the light-receiving surface of each pixel, and phase difference detection pixels for pupil-dividing the subject light and performing focus detection.
The normal pixels of the image sensor 103 generate analog electrical signals (image signals) of the R (red), G (green), and B (blue) color components of the subject image, and output them as R, G, and B color image signals.
A phase difference detection pixel of the image sensor 103 outputs a phase difference detection signal (detection information). A phase difference detection signal (detection information) is a signal mainly used for autofocus control.
The configuration of the phase difference detection pixels, phase difference detection signals generated by the phase difference detection pixels, and focus control using the phase difference detection signals will be described later in detail.
Photoelectrically converted data output from the image sensor 103 in this manner includes RGB image signals and phase difference detection signals from phase difference detection pixels.
Each of these signals is input to the analog signal processing unit 104 , subjected to processing such as noise removal in the analog signal processing unit 104 , and converted into a digital signal in the A/D conversion unit 105 .
A digital signal digitally converted by the A/D conversion unit 105 is input to a digital signal processing unit (DSP) 108 and subjected to various signal processing.
Various image signal processing such as demosaic processing, white balance adjustment, gamma correction, etc. is performed on the RGB image signal, and the processed image is recorded in a recording device 115 such as a flash memory.
Further, the image is displayed on the monitor 117 and the viewfinder (EVF) 116. An image through the lens is displayed as a through image on the monitor 117 and the viewfinder (EVF) 116 regardless of whether or not shooting is performed.
Phase difference detection pixel information (detection signal) output from the phase difference detection pixels of the image sensor 103 is also input to the digital signal processing unit (DSP) 108 via the AD conversion unit 106 .
The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signal), and calculates the amount of focus deviation for the subject to be focused (focusing target), that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)).
An input unit (operation unit) 118 is an operation unit including an input unit for inputting various operation information, such as the shutter and zoom buttons on the camera body, and a mode dial for setting the shooting mode.
The control unit 110 has a CPU, and controls the various processes executed by the imaging device according to programs stored in advance in a memory (ROM) 120 or the like. A memory (EEPROM) 119 is a non-volatile memory and stores image data, various auxiliary information, programs, and the like.
The memory (ROM) 120 stores programs, calculation parameters, and the like used by the control unit (CPU) 110. A memory (RAM) 121 stores programs used in the control unit (CPU) 110, the AF control unit 112a, and the like, as well as parameters that change as appropriate during their execution.
The gyro 131 is a sensor that measures the inclination, angle, angular velocity, and the like of the imaging device 100. The detection information of the gyro 131 is used, for example, for calculating the amount of camera shake during image capturing.
The AF control unit 112a drives the focus lens drive motor 113a set corresponding to the focus lens 101, and executes autofocus control (AF control) processing. For example, the focus lens 101 is moved to its in-focus position for the subject included in the area selected by the user to obtain the focused state.
The zoom control unit 112b drives a zoom lens driving motor 113b set corresponding to the zoom lens 102. A vertical driver 107 drives the image sensor (CCD) 103. The timing generator 106 generates control signals for the processing timings of the image sensor 103 and the analog signal processing unit 104, and controls the processing timings of these processing units.
Note that the focus lens 101 is driven in the optical axis direction under the control of the AF control section 112a.
[2. Configuration of Phase Difference Detection Pixels, Overview of Focus Control Using Detection Signals from Phase Difference Detection Pixels]
Next, the configuration of the phase difference detection pixels and the outline of focus control using detection signals from the phase difference detection pixels will be described.
As described above, the image sensor 103 of the imaging apparatus 100 shown in FIG. 1 generates analog electrical signals (image signals) of the R (red), G (green), and B (blue) color components of the subject image and outputs them as R, G, and B image signals, while also outputting phase difference detection signals (detection information) from the phase difference detection pixels, which are signals used for autofocus control.
A specific pixel configuration example of the image sensor 103 will be described with reference to FIG.
FIG. 2 is a diagram showing a pixel configuration example of the image sensor 103.
FIG. 2 shows a pixel configuration example of the image sensor 103 corresponding to (A) a partial area of the captured image.
The vertical direction is the Y-axis, and the horizontal direction is the X-axis. In FIG. 2, one pixel is indicated by one square.
The RGB pixels shown in FIG. 2 are pixels for normal image capturing. RGB pixels have, for example, a Bayer array configuration.
Detection information acquisition pixels applied to autofocus processing, i.e., phase difference detection pixels 151 for acquiring phase difference information, are set discretely in some rows of the RGB pixels having the Bayer array.
A phase difference detection pixel is configured by a pair of a right opening phase difference detection pixel Pa and a left opening phase difference detection pixel Pb.
The following two data outputs are individually performed from the imaging device 103 .
(1) Output of pixel information (image signal) from pixels (RGB pixels) for captured images;
(2) phase difference detection pixel information ((AF) detection signal) output by the phase difference detection pixel 151;
"(1) Pixel information (image signal) output from the pixels for captured images (RGB pixels)" is output in accordance with the image capturing timing of the user (photographer); in addition, even when no image is being captured, a display image (live view image) to be shown on the monitor 117 or the like is output. The display image (live view image) is output at a frame rate corresponding to the image display rate of the monitor 117 or the like.
"(2) Phase difference detection pixel information (detection signal) output from the phase difference detection pixels 151" is performed at the same interval as the image output interval or at a shorter interval, for example a (1/60) sec interval (= 16.7 msec interval).
The phase difference detection pixel information (detection signal) output from the phase difference detection pixel 151 is input to the digital signal processor (DSP) 108 via the AD converter 106 .
The digital signal processing unit (DSP) 108 analyzes the phase difference between the pair of images generated from the phase difference detection pixel information (detection signal), and calculates the amount of focus deviation for the subject to be focused (focusing target), that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)).
An outline of focus detection processing using the phase difference detection method will be described with reference to FIGS. 3 to 5.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as focus detection sensors, and the focus lens is set to the in-focus position (focus position) based on this defocus amount.
The set of phase difference detection pixels described with reference to FIG. 2 will be referred to as pixel Pa and pixel Pb, and the details of the light incident on these pixels will be described with reference to FIG. 3.
As shown in FIG. 3, in the phase difference detection unit, phase difference detection pixels Pa and Pb, each configured with a pair of photodetectors (PD) that receive the light flux Ta from the right portion (also referred to as the "right partial pupil region" or simply "right pupil region") Qa of the exit pupil EY of the photographing optical system and the light flux Tb from the left portion (also referred to as the "left partial pupil region" or simply "left pupil region") Qb, are arranged in the horizontal direction. Here, in the drawing, the +X direction side is expressed as the right side and the -X direction side as the left side.
Of the pair of phase difference detection pixels Pa and Pb, one phase difference detection pixel (hereinafter referred to as the "first phase difference detection pixel") Pa is configured with a microlens ML that collects the light incident on the first phase difference detection pixel Pa, a first light shielding plate AS1 having a slit-shaped (rectangular) first opening OP1, a second light shielding plate AS2 arranged below the first light shielding plate AS1 and having a slit-shaped (rectangular) second opening OP2, and a photoelectric conversion unit PD that receives the light passing through the second light shielding plate AS2.
The first opening OP1 in the first phase difference detection pixel Pa is provided at a position biased in a specific direction (here, the right direction (+X direction)) with respect to a central axis CL that passes through the center of the light receiving element PD and is parallel to the optical axis LT, taken as the reference (starting point). The second opening OP2 in the first phase difference detection pixel Pa is provided at a position biased in the direction opposite to the specific direction (also referred to as the "anti-specific direction") with respect to the central axis CL.
Of the pair of phase difference detection pixels Pa and Pb, the other phase difference detection pixel (hereinafter referred to as the "second phase difference detection pixel") Pb includes a first light shielding plate AS1 having a slit-shaped first opening OP1 and a second light shielding plate AS2 arranged below the first light shielding plate AS1 and having a slit-shaped second opening OP2. The first opening OP1 in the second phase difference detection pixel Pb is provided at a position biased in the direction opposite to the specific direction with respect to the central axis CL, and the second opening OP2 is provided at a position biased in the specific direction with respect to the central axis CL.
That is, in the pair of phase difference detection pixels Pa and Pb, the first openings OP1 are arranged so as to be biased in mutually different directions. Also, the second openings OP2 are arranged so as to be shifted in different directions with respect to the corresponding first openings OP1 in the phase difference detection pixels Pa and Pb.
The pair of phase difference detection pixels Pa and Pb configured as described above acquire subject light that has passed through different regions (parts) in the exit pupil EY.
Specifically, the light flux Ta that has passed through the right pupil region Qa of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pa and the first opening OP1 of the first light shielding plate AS1, is further restricted by the second light shielding plate AS2, and is then received by the light receiving element PD of the first phase difference detection pixel Pa.
Further, the light flux Tb that has passed through the left pupil region Qb of the exit pupil EY passes through the microlens ML corresponding to the phase difference detection pixel Pb and the first opening OP1 of the first light shielding plate AS1, is further restricted by the second light shielding plate AS2, and is then received by the light receiving element PD of the second phase difference detection pixel Pb.
FIG. 4 shows an example of the outputs of the light receiving elements obtained at the pixels Pa and Pb. As shown in FIG. 4, the output line from the pixel Pa and the output line from the pixel Pb are signals having a predetermined shift amount Sf.
FIG. 5(a) shows the shift amount Sfa generated between the pixels Pa and Pb when the focus lens is set at a position corresponding to the subject distance and the focus is achieved, that is, in the in-focus state.
FIGS. 5(b1) and 5(b2) show the shift amount Sfa occurring between the pixels Pa and Pb when the focus lens is not set to a position corresponding to the subject distance and the image is out of focus, that is, in an out-of-focus state.
(b1) is an example in which the shift amount is larger than that at the in-focus state, and (b2) is an example in which the shift amount is smaller than that at the in-focus state.
In the cases shown in FIGS. 5(b1) and 5(b2), it is possible to achieve focus by moving the focus lens until the shift amount matches that of the in-focus state.
This process is the focusing process according to the "phase difference detection method".
Focusing processing according to this "phase difference detection method" enables setting of the focus lens to the in-focus position, and the focus lens can be set to a position according to the object distance.
The shift amount described with reference to FIG. 5 can be measured for each set of the pixels Pa and Pb, the phase difference detection pixels configured in the image sensor shown in FIG. 2, so the in-focus position (focus point) and the defocus amount can be calculated individually for the subject image captured in each such fine area (the combined area of the Pa and Pb pixels).
Note that the shift amounts in the cases of FIGS. 5(b1) and 5(b2) deviate from the shift amount of the in-focus state, and from this deviation the defocus amount, that is, the defocus amount corresponding to the deviation between the in-focus distance and the subject distance, can be calculated.
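To make this measurement concrete, the following is a minimal sketch, not the implementation of the present disclosure, of estimating the shift between the Pa and Pb output lines and converting its deviation from the in-focus shift into a defocus amount. The in-focus shift value and the conversion gain are assumed calibration constants introduced only for illustration.

```python
import numpy as np

def estimate_shift(line_pa: np.ndarray, line_pb: np.ndarray, max_shift: int = 16) -> int:
    """Estimate the lateral shift (in pixels) between the Pa and Pb output
    lines by minimizing the mean absolute difference over candidate shifts."""
    best_shift, best_sad = 0, np.inf
    n = len(line_pa)
    for s in range(-max_shift, max_shift + 1):
        pa = line_pa[max(0, s):n + min(0, s)].astype(np.float64)
        pb = line_pb[max(0, -s):n + min(0, -s)].astype(np.float64)
        sad = float(np.mean(np.abs(pa - pb)))
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift

# Assumed calibration constants (illustrative only): the shift observed in
# the in-focus state and a linear gain converting shift deviation to defocus.
SHIFT_IN_FOCUS = 0
K_DEFOCUS_PER_PIXEL = 0.05  # hypothetical defocus units per pixel of shift

def defocus_from_shift(shift: int) -> float:
    """Convert the measured shift into a defocus amount via the deviation
    from the in-focus shift (a linear model is assumed for illustration)."""
    return K_DEFOCUS_PER_PIXEL * (shift - SHIFT_IN_FOCUS)
```

In practice the conversion from shift to defocus depends on the optical system and would be calibrated per lens; the linear model here only illustrates the idea that defocus is derived from the deviation of the measured shift from the in-focus shift.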
[3. Regarding processing using detection information of phase difference detection pixels]
Processing using the detection information of the phase difference detection pixels will be described.
A plurality of examples will be described below as processing using detection information of phase difference detection pixels executed by the imaging apparatus of the present disclosure.
(Example 1) An example of calculating an image blend ratio using the detection information of phase difference detection pixels
(Example 2) An example of calculating distance information applied to three-dimensional image generation using the detection information of phase difference detection pixels
(Example 3) An example of determining a mask area for background image synthesis using the detection information of phase difference detection pixels, and generating an image in which a mask image is synthesized in areas other than the selected subject area
(Example 4) An example of calculating a camera shake correction amount for each pixel area using the detection information of phase difference detection pixels
[3-1. (Example 1) Example of calculating an image blend ratio using detection information of a phase difference detection pixel]
First, as Example 1, an example in which an image blend ratio is calculated using detection information of a phase difference detection pixel will be described.
FIG. 6 is a block diagram for explaining the configuration of the first embodiment.
FIG. 6 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG. 1.
As shown in FIG. 6, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, an image information acquisition unit 202, a defocus amount calculation unit 203, an AF control signal generation unit 204, an image blend ratio calculation unit 205, an image signal processing unit 206, and an image blend processing execution unit 221.
The digital signal processing unit 108 receives the RGB image signals from the preceding A/D conversion unit 105 and the phase difference detection signals (detection information) output from the phase difference detection pixels.
A phase difference information acquisition unit 201 shown in FIG. 6 selects only phase difference detection signals (detection information), which are outputs of phase difference detection pixels, from the input signal from the A/D conversion unit 105 .
On the other hand, the image information acquisition unit 202 shown in FIG. 6 selects only image signals (for example, RGB image signals) from the input signals from the A/D conversion unit 105 .
The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206 .
The image signal processing unit 206 performs various image signal processing such as demosaicing, white balance adjustment, and gamma correction on the image signal, and inputs the processed image to the image blend processing execution unit 221 .
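As a rough illustration of the kind of processing mentioned here, the following is a minimal sketch of white balance adjustment and gamma correction on a linear RGB image; the gain and gamma values are placeholders, not values from the disclosure, and demosaicing is omitted.

```python
import numpy as np

def develop(rgb_linear: np.ndarray,
            wb_gains=(2.0, 1.0, 1.5),  # assumed per-channel white balance gains
            gamma: float = 2.2) -> np.ndarray:
    """Apply white balance and gamma correction to a linear RGB image
    (float array in [0, 1], shape H x W x 3)."""
    img = rgb_linear * np.asarray(wb_gains, dtype=rgb_linear.dtype)
    img = np.clip(img, 0.0, 1.0)   # keep values in the displayable range
    return img ** (1.0 / gamma)    # gamma correction
```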
The phase difference information acquisition unit 201 selects the phase difference detection signals (detection information), which are the outputs of the phase difference detection pixels, from the input signal from the A/D conversion unit 105, and outputs the selected phase difference detection signals (detection information) to the defocus amount calculation unit 203.
The defocus amount calculation unit 203 calculates the amount of focus deviation, that is, the amount of deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each minute image area, for example for each pixel area composed of a plurality of pixels.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount of the signals output according to the amount of light received by each of a set of phase difference detection pixels that function as focus detection sensors.
The AF control signal generation unit 204 generates, based on this defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to, for example, the in-focus position (focus position) for the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 204, and sets the focus lens to, for example, the in-focus position (focus position) for the subject designated by the user.
Note that the subject set to the in-focus position (focus position) is not the entire image area of the captured image, but a subject designated by the user, such as a person. Images of other subjects, such as the background, are not in focus and appear blurred.
For example, the shift amount described above with reference to FIG. 5 is measured in units of a set of the pixels Pa and Pb, the phase difference detection pixels configured in the image sensor shown in FIG. 2.
That is, the defocus amount calculation unit 203 can calculate the defocus amount of the subject image for each minute pixel area of the captured image.
FIG. 7 shows an example of a pixel region that serves as a defocus amount calculation unit.
FIG. 7 shows the pixel configuration of the imaging element 103 similar to that of FIG. 2 described above.
As described above with reference to FIG. 2, the phase difference detection pixels 151 for acquiring phase difference information are discretely set in some (rows) of RGB pixels having a Bayer array.
A phase difference detection pixel is configured by a pair of a right opening phase difference detection pixel Pa and a left opening phase difference detection pixel Pb.
A pixel region that serves as a unit for calculating the defocus amount can be set, for example, as a pixel region 152 shown in FIG.
The example shown in FIG. 7 is an example in which the pixel region 152, which is the unit for calculating the defocus amount, is a fine pixel region of n×m pixels, such as 6×5 pixels.
A plurality of sets of phase difference detection pixels are included in the n×m fine pixel area. The shift amount described above with reference to FIG. 5 is measured from each of the plurality of sets of phase difference detection pixels.
The defocus amount calculation unit 203 calculates, for example, an average value of a plurality of sets of shift amounts in a pixel region 152 as shown in FIG. 7 as a shift amount of the pixel region 152 of n×m pixels. Further, the defocus amount, that is, the defocus amount corresponding to the amount of deviation between the in-focus distance and the subject distance, is calculated from the shift amount calculated and the shift amount of the in-focus pixel area.
In this manner, the defocus amount calculation unit 203 calculates the defocus amount for each pixel region 152 as shown in FIG. 7, for example.
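A minimal sketch of this per-region aggregation follows; `shift_map` is a hypothetical array holding one measured shift per phase difference pixel pair, with NaN at positions that have no pair, and the shift-to-defocus conversion reuses an assumed linear gain as in the earlier sketch.

```python
import numpy as np

def region_defocus_map(shift_map: np.ndarray, n: int, m: int,
                       shift_in_focus: float = 0.0,
                       k: float = 0.05) -> np.ndarray:
    """Average the measured shift amounts over each n x m pixel region and
    convert the deviation from the in-focus shift into a defocus amount.
    NaN entries in shift_map mark pixels without a phase difference pair."""
    h, w = shift_map.shape
    rows, cols = h // n, w // m
    regions = shift_map[:rows * n, :cols * m].reshape(rows, n, cols, m)
    mean_shift = np.nanmean(regions, axis=(1, 3))  # one value per region
    return k * (mean_shift - shift_in_focus)        # per-region defocus amount
```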
The image blend ratio calculation unit 205 receives the fine per-pixel-area defocus amounts of the captured image calculated by the defocus amount calculation unit 203, and, based on these per-pixel-area defocus amounts, calculates the image blend ratio to be applied to the process of combining two images.
One of the images to be combined is the captured image output by the image signal processing unit 206. The other image is, for example, an image stored in advance in the storage unit, such as a background image.
The image blend ratio calculation unit 205 calculates the image blend ratio for each pixel area based on the fine defocus amount for each pixel area. That is, the image blend ratio to be applied to the process of combining the two images is calculated for each fine pixel area, such as n×m pixels, and output to the image blend processing execution unit 221.
The image blend processing execution unit 221 executes blend processing, that is, composition processing, of the two images according to the image blend ratio for each pixel area input from the image blend ratio calculation unit 205.
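Once a per-pixel blend ratio map is available (the "(B) blend ratio output image" described next), the blend itself reduces to per-pixel alpha compositing. A minimal sketch, assuming a uint8 ratio map where 255 means 100% captured image:

```python
import numpy as np

def blend_images(captured: np.ndarray, background: np.ndarray,
                 ratio_map: np.ndarray) -> np.ndarray:
    """Blend the captured image with the background image using a per-pixel
    blend ratio map (uint8, 0-255; 255 selects the captured image)."""
    alpha = (ratio_map.astype(np.float32) / 255.0)[..., np.newaxis]
    blended = alpha * captured.astype(np.float32) \
              + (1.0 - alpha) * background.astype(np.float32)
    return blended.astype(captured.dtype)
```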
A specific example of the image blend processing will be described with reference to FIG. 8. FIG. 8 shows a processing example in which the image blend processing execution unit 221 receives the (A) captured image output from the image signal processing unit 206 and blends a background image stored in advance in the storage unit with this (A) captured image to generate the (C) blended image.
An image defining the blend ratio of the (A) photographed image to be blended and the background image is the "(B) blend ratio output image" shown in FIG.
The image blend ratio calculation unit 205 generates a “(B) blend ratio output image” in which pixel values having brightness corresponding to the image blend ratio for each pixel area are set.
The (A) captured image output from the image signal processing unit 206 is, for example, an image captured with a "stuffed bear" placed outdoors, taken with the user setting the "stuffed bear" region as the in-focus position (focus position).
Therefore, the "stuffed bear" region is in focus, while the background is out of focus and blurred.
The user generates a blended image (composite image) in which this blurred background region is replaced with the background image stored in advance in the storage unit. This blended image is the "(C) blended image" generated by the image blend processing execution unit 221 shown in FIG. 8.
When blending the background image stored in advance in the storage unit with the (A) captured image, the image blend processing execution unit 221 shown in FIG. 8 applies the blend ratio in fine pixel-region units and performs the blending on a pixel-region basis.
The image that defines the blend ratio for each pixel region is the "(B) blend ratio output image" shown in FIG. 8.
This "(B) blend ratio output image" is generated by the image blend ratio calculation unit 205, which calculates the image blend ratio for each pixel region based on the per-pixel-region defocus amounts input from the defocus amount calculation unit 203.
The "(B) blend ratio output image" shown in FIG. 8 is an image in which the image blend ratio of each pixel region is output as the pixel value of each pixel.
The "(B) blend ratio output image" is, for example, a monochrome image with brightness values (pixel values) of 0 to 255.
In black portions (pixel value ≈ 0), the blend ratio of the (A) captured image is 0% and that of the background image is 100%.
In white portions (pixel value ≈ 255), the blend ratio of the (A) captured image is 100% and that of the background image is 0%.
In gray portions, for example where the pixel value is 127, the blend ratio of the (A) captured image is 50% and that of the background image is 50%.
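The mapping between the blend ratio and the pixel value of this monochrome image is a simple linear scaling; a one-line sketch (the function name is assumed):

```python
def ratio_to_pixel_value(ratio_percent: float) -> int:
    """Map a captured-image blend ratio (0-100%) to a 0-255 pixel value."""
    return round(ratio_percent / 100.0 * 255.0)
```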
In the "(B) blend ratio output image" shown in FIG. 8, the subject at the in-focus position of the (A) captured image, that is, the "stuffed bear" portion, is almost white (pixel value = 255); in this portion, the blend ratio of the (A) captured image is set to 100% and that of the background image to 0%.
On the other hand, the portion of the (A) captured image not at the in-focus position, that is, the background other than the "stuffed bear", is almost black (pixel value = 0); in this portion, the blend ratio of the (A) captured image is set to 0% and that of the background image to 100%.
Furthermore, the boundary between the "stuffed bear" portion and the background is set to gray; that is, the pixel values change gradually from almost white (pixel value = 255) in the "stuffed bear" portion to almost black (pixel value = 0) in the background.
At this boundary between the "stuffed bear" portion and the background, the blend ratio between the (A) captured image and the background image acquired from the storage unit therefore changes gradually.
The image blend ratio calculation unit 205 shown in FIG. 8 generates the "(B) blend ratio output image" with pixel values set in this way and outputs it to the image blend processing execution unit 221.
The image blend processing execution unit 221 determines the blend ratio between the (A) captured image and the background image acquired from the storage unit according to the blend ratio (= pixel value) set for each pixel of the "(B) blend ratio output image" generated by the image blend ratio calculation unit 205, and blends the two images pixel by pixel according to the determined ratios.
This blending process generates the "(C) blended image" shown in FIG. 8.
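The per-pixel blending itself reduces to a standard alpha blend. A minimal sketch, assuming the blend ratio image has already been normalized from 0-255 to 0.0-1.0 and that all array names are hypothetical:

```python
import numpy as np

def blend_images(captured: np.ndarray, background: np.ndarray,
                 ratio_image: np.ndarray) -> np.ndarray:
    """Blend two images pixel by pixel.

    captured, background : H x W x 3 uint8 images.
    ratio_image          : H x W array in 0.0-1.0, derived from the
                           "(B) blend ratio output image" (255 -> 1.0).
    """
    alpha = ratio_image[..., np.newaxis]           # broadcast over RGB
    blended = alpha * captured + (1.0 - alpha) * background
    return blended.astype(np.uint8)
```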
The "(C) blended image" generated by the image blend processing execution unit 221 is thus a blend (composite) of the "stuffed bear" that the user set as the in-focus position and the background image acquired from the storage unit.
In the boundary region between the "stuffed bear" in the captured image and the background image acquired from the storage unit, the pixel values of the two images are blended, which softens the sense of incongruity that arises when different images are simply cut and pasted together.
A specific example of how the image blend ratio calculation unit 205 calculates the per-pixel-region image blend ratio from the per-pixel-region defocus amount will be described with reference to FIG. 9.
The graph shown in FIG. 9 plots the per-pixel-region defocus amount on the horizontal axis and the per-pixel-region blend ratio of the captured image (0 to 100%) on the vertical axis.
Two thresholds, Th1 and Th2, are marked on the horizontal (defocus amount) axis.
These thresholds are predefined in the image blend ratio calculation unit 205.
The graph shown in FIG. 9 has the following settings:
Defocus amount from 0 (in focus) to Th1: blend ratio of the captured image = 100%.
Defocus amount from Th1 to Th2: blend ratio of the captured image transitions from 100% to 0%.
Defocus amount of Th2 or more: blend ratio of the captured image = 0%.
Expressed as formulas, with the ratio falling linearly from 100% at Th1 to 0% at Th2:
(a) Defocus amount ≤ Th1 → captured image blend ratio = 100%
(b) Th1 < defocus amount < Th2 → captured image blend ratio = ((Th2 − defocus amount)/(Th2 − Th1)) × 100(%)
(c) Th2 ≤ defocus amount → captured image blend ratio = 0%
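A direct transcription of these three cases into code, as a sketch (the function name is assumed, and the concrete threshold values are left implementation-defined by the publication):

```python
def captured_image_blend_ratio(defocus: float, th1: float, th2: float) -> float:
    """Blend ratio (%) of the captured image for one pixel region.

    Implements cases (a)-(c): 100% when in focus (defocus <= th1),
    falling linearly to 0% between th1 and th2, and 0% beyond th2.
    """
    if defocus <= th1:
        return 100.0
    if defocus < th2:
        return (th2 - defocus) / (th2 - th1) * 100.0
    return 0.0
```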
As shown in FIG. 9, the blend ratio of the captured image is therefore set as follows.
The image region of the "stuffed bear" in the captured image, which the user set as the in-focus position, has a defocus amount in the range from 0 (in focus) to the threshold Th1, so the blend ratio of the captured image is 100% (and that of the background image is 0%).
The background region other than the "stuffed bear" in the captured image has a defocus amount of the threshold Th2 or more, so the blend ratio of the captured image is 0% (and that of the background image is 100%).
In the boundary region between the "stuffed bear" and the background in the captured image, the defocus amount lies between the two thresholds, Th1 and Th2, so the blend ratio of the captured image changes between 100% and 0%. That is, in the boundary region between the "stuffed bear" in the captured image and the background image acquired from the storage unit, the pixel values of the two images are blended, yielding an image that softens the sense of incongruity of simply cutting and pasting different images together.
As described above, the first embodiment calculates the image blend ratio using the detection information of the phase difference detection pixels, enabling more natural blended image generation.
Note that the digital signal processing unit 108 described with reference to FIG. 6 is configured to include all of the phase difference information acquisition unit 201, the image information acquisition unit 202, the defocus amount calculation unit 203, the AF control signal generation unit 204, the image blend ratio calculation unit 205, the image signal processing unit 206, and the image blend processing execution unit 221, but this configuration is only an example.
A part of the configuration described with reference to FIG. 6 may be implemented outside the digital signal processing unit 108 of the imaging device 100, and the data processing may also be executed by an external device other than the imaging device 100.
Specifically, for example, the image blend processing execution unit 221 can be configured outside the digital signal processing unit 108.
Various alternative configurations of the image blend processing execution unit 221 will be described with reference to FIGS. 10 and 11.
FIG. 10(A) shows the same configuration as described with reference to FIG. 6, in which the image blend processing execution unit 221 is internal to the digital signal processing unit 108.
In contrast, FIG. 10(B) shows an example in which the image blend processing execution unit 221 is external to the digital signal processing unit 108. For example, an application execution unit 150 is provided in the imaging device 100, and the image blend ratio calculation unit 205 of the digital signal processing unit 108 outputs the calculated per-pixel-region image blend ratio information to the application execution unit 150.
The application execution unit 150 blends the plurality of images using the per-pixel-region image blend ratio information input from the image blend ratio calculation unit 205 of the digital signal processing unit 108.
Furthermore, as shown in FIG. 11C, an external device other than the imaging device 100, for example, an external device 180 such as a PC may be used.
The image blend ratio calculation unit 205 of the digital signal processing unit 108 of the imaging apparatus 100 outputs the calculated image blend ratio information for each pixel region to the external device 180 such as a PC.
The external device 180 such as a PC uses the image blending ratio information for each pixel area input from the image blending ratio calculating unit 205 of the digital signal processing unit 108 to execute blending processing of a plurality of images.
In this way, it is possible to perform processing using various configurations.
[3-2. (Example 2) Example of calculating distance information applied to three-dimensional image generation using detection information of phase difference detection pixels]
Next, as Example 2, an example in which distance information to be applied to three-dimensional image generation is calculated using detection information of phase difference detection pixels will be described.
FIG. 12 is a block diagram for explaining the configuration of the second embodiment.
FIG. 12 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
As shown in FIG. 12, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, an image information acquisition unit 202, a defocus amount calculation unit 203, an AF control signal generation unit 204, an image signal processing unit 206, a distance information calculation unit 207, and a three-dimensional image generation unit 222.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, the RGB image signal and the phase difference detection signal (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 12 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signal (detection information) output from the phase difference detection pixels.
On the other hand, the image information acquisition unit 202 shown in FIG. 12 selects only the image signal (for example, the RGB image signal) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206.
The image signal processing unit 206 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal and inputs the processed image to the three-dimensional image generation unit 222.
The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information) output from the phase difference detection pixels from the input signal from the A/D conversion unit 105 and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 203.
The defocus amount calculation unit 203 calculates the focus deviation amount, that is, the deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each predetermined pixel region.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amounts of light received by each of a set of phase difference detection pixels functioning as focus detection sensors.
The AF control signal generation unit 204 generates, based on this defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 204, setting the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but a subject designated by the user, for example a person. Other subjects, such as the background, are out of focus and appear blurred.
As described above with reference to FIG. 7, the defocus amount calculation unit 203 calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, for each minute pixel region, such as n×m pixels.
The distance information calculation unit 207 receives the fine per-pixel-region defocus amounts of the captured image calculated by the defocus amount calculation unit 203 and, based on these defocus amounts, calculates distance information for each pixel region of the captured image.
The distance information calculation unit 207 generates, for example, a depth map in which the distance value of each image region is expressed as a pixel value (for example, 0 to 255).
A specific example of the distance information calculation processing in the distance information calculation unit 207 will be described later.
The distance information calculation unit 207 outputs the calculated per-pixel-region distance information to the three-dimensional image generation unit 222.
The three-dimensional image generation unit 222 receives the captured image from the image signal processing unit 206 and also receives the per-pixel-region distance information from the distance information calculation unit 207.
The three-dimensional image generation unit 222 generates a three-dimensional image corresponding to the captured image according to the per-pixel-region distance information input from the distance information calculation unit 207.
A specific example of the three-dimensional image generation processing in the second embodiment will be described with reference to FIG. 13.
FIG. 13 shows a processing example in which the three-dimensional image generation unit 222 receives the (A) captured image output from the image signal processing unit 206 and the per-pixel-region (B) distance information (depth map) generated by the distance information calculation unit 207, and uses the (B) distance information (depth map) to generate the (C) three-dimensional image corresponding to the (A) captured image.
The (A) captured image output from the image signal processing unit 206 is, for example, an image captured with a "stuffed bear" placed outdoors, taken with the user setting the "stuffed bear" region as the in-focus position (focus position).
The subject distance information that the three-dimensional image generation unit 222 uses to generate the (C) three-dimensional image is the per-pixel-region (B) distance information (depth map) generated by the distance information calculation unit 207.
The per-pixel-region (B) distance information (depth map) generated by the distance information calculation unit 207 is a depth map in which the subject distance of each image region, calculated from the defocus amounts that the defocus amount calculation unit 203 computed for minute pixel regions such as n×m pixels, is expressed as a pixel value (for example, 0 to 255).
High-brightness (high pixel value) regions that appear white are regions where the subject distance is short; low-brightness (low pixel value) regions that appear black are regions where the subject distance is long.
Note that the subject distance can be calculated from the defocus amount by applying parameters such as the focal length of the lens (focus lens) of the imaging device.
Specifically, the subject distance is calculated for each pixel region according to the following (Equation 1).
(Equation 1): reproduced only as an image in the original publication. It gives the subject distance of each pixel region as a function of the defocus amount and lens parameters such as the focal length.
The distance information calculation unit 207 calculates the subject distance for each pixel region according to the above (Equation 1) and outputs the calculated per-pixel-region subject distance information to the three-dimensional image generation unit 222.
The three-dimensional image generation unit 222 uses the per-pixel-region subject distance information input from the distance information calculation unit 207 to generate three-dimensional image data corresponding to the (A) captured image input from the image signal processing unit 206; this is the (C) three-dimensional image shown in FIG. 13.
The "(C) three-dimensional image" generated by the three-dimensional image generation unit 222 reflects the subject distance information of the minute regions, for example minute pixel regions of n×m pixels, generated by the distance information calculation unit 207, and is therefore a high-precision three-dimensional image reflecting dense distance information.
As described above, the second embodiment calculates per-minute-pixel-region subject distance information using the detection information of the phase difference detection pixels and generates a three-dimensional image using the calculated subject distance information, enabling high-precision three-dimensional image generation.
Note that the digital signal processing unit 108 described with reference to FIG. 12 is configured to include all of the phase difference information acquisition unit 201, the image information acquisition unit 202, the defocus amount calculation unit 203, the AF control signal generation unit 204, the image signal processing unit 206, the distance information calculation unit 207, and the three-dimensional image generation unit 222, but this configuration is only an example.
A part of the configuration described with reference to FIG. 12 may be implemented outside the digital signal processing unit 108 of the imaging device 100, and the data processing may also be executed by an external device other than the imaging device 100.
Specifically, for example, the three-dimensional image generation unit 222 can be configured outside the digital signal processing unit 108.
For example, as in the configurations described above with reference to FIGS. 10 and 11, in addition to placing the three-dimensional image generation unit 222 inside the digital signal processing unit 108, it may be placed in the application execution unit 150 configured in the imaging device 100.
Furthermore, as in the configuration described with reference to FIG. 11(C), the three-dimensional image generation unit 222 may be provided in an external device other than the imaging device 100, for example an external device 180 such as a PC.
In this case, the distance information calculation unit 207 of the digital signal processing unit 108 of the imaging device 100 outputs the calculated per-pixel-region distance information to the external device 180 such as a PC.
The external device 180 such as a PC uses the distance information for each pixel area input from the distance information calculation unit 207 of the digital signal processing unit 108 to execute a three-dimensional image generation process.
In this way, it is possible to perform processing using various configurations.
[3-3. (Example 3) Example of determining a mask region for background image synthesis using the detection information of the phase difference detection pixels, and generating an image in which a mask image is composited into the region other than the selected subject region]
Next, as Example 3, an example will be described in which the detection information of the phase difference detection pixels is used to determine a mask region for background image synthesis, and an image is generated in which a mask image is composited into the region other than the selected subject region.
For example, mask processing is performed to set the background region other than a specific subject selected from the captured image to a uniform color such as a green screen or blue screen, and another image, for example a new background image, is composited into the green-screen or blue-screen region of the masked image to generate a composite image in which the selected specific subject is displayed on the new background image.
FIG. 14 is a block diagram for explaining the configuration of the third embodiment.
FIG. 14 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
As shown in FIG. 14, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, an image information acquisition unit 202, a defocus amount calculation unit 203, an AF control signal generation unit 204, an image signal processing unit 206, a mask area determination unit 208, an output subject color analysis unit 209, a mask output color determination unit 210, a mask synthesis unit 211, and an image synthesis unit 223.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, the RGB image signal and the phase difference detection signal (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 14 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signal (detection information) output from the phase difference detection pixels.
On the other hand, the image information acquisition unit 202 shown in FIG. 14 selects only the image signal (for example, the RGB image signal) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206.
The image signal processing unit 206 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal and inputs the processed image to the mask synthesis unit 211.
The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information) output from the phase difference detection pixels from the input signal from the A/D conversion unit 105 and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 203.
The defocus amount calculation unit 203 calculates the focus deviation amount, that is, the deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each predetermined pixel region.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amounts of light received by each of a set of phase difference detection pixels functioning as focus detection sensors.
The AF control signal generation unit 204 generates, based on this defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 204, setting the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but a subject designated by the user, for example a person. Other subjects, such as the background, are out of focus and appear blurred.
As described above with reference to FIG. 7, the defocus amount calculation unit 203 calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, for each minute pixel region, such as n×m pixels.
The output subject color analysis unit 209 analyzes the color of the subject to be displayed on the finally generated synthetic image.
The subject to be displayed on the finally generated synthesized image is specified by the user through the input unit 118, for example. That is, output subject selection information 232 shown in the drawing is input to the output subject color analysis section 209 via the input section 118, and the output subject color analysis section 209 analyzes the color of the selected output subject according to this input information.
A color analysis of this output subject is performed to determine the color of the mask area.
For example, in the case of using a green screen, if a green pixel area is included as the color of the output object, the background image to be combined is also output in that area. Therefore, the green screen cannot be used when the output subject includes green.
Similarly, when using a blue background, if a blue pixel area is included in the color of the output subject, the background image to be synthesized is also output in that area. Therefore, the blue background cannot be used when the output object contains blue.
Thus, it is necessary to set the color of the mask area to a color that is not included in the output object.
The output subject color analysis unit 209 analyzes the color of the output subject in order to set the color of the mask area to a color different from the color of the output subject displayed on the composite image.
The mask output color determination unit 210 determines, as the color of the mask area, a color not contained in the output subject as analyzed by the output subject color analysis unit 209.
The mask area determination unit 208 receives the output subject selection information 232 from the input unit 118 and also receives the fine per-pixel-region defocus amounts of the captured image calculated by the defocus amount calculation unit 203.
The mask area determination unit 208 determines the mask area, that is, the area to be set to a green screen or blue screen, based on the output subject selection information 232 input from the input unit 118 and the per-pixel-region defocus amounts input from the defocus amount calculation unit 203.
The mask area determination unit 208 sets, as the mask area, the area not included in the output subject designated by the output subject selection information 232 input from the input unit 118. In this mask area determination process, the per-pixel-region defocus amounts input from the defocus amount calculation unit 203 are used as auxiliary information.
For example, a pixel region whose defocus amount differs from the defocus amount of the output subject designated by the output subject selection information 232 input from the input unit 118 by a predefined threshold or more is determined to be part of the mask area.
The mask area information determined by the mask area determination unit 208 is output to the mask synthesis unit 211.
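This defocus-difference test can be sketched as follows; all names (defocus_map, subject_defocus, threshold) are hypothetical:

```python
import numpy as np

def mask_region(defocus_map: np.ndarray, subject_defocus: float,
                threshold: float) -> np.ndarray:
    """Boolean mask of pixel regions to replace with the mask color.

    A region is masked when its defocus amount differs from the
    output subject's defocus amount by at least the threshold.
    """
    return np.abs(defocus_map - subject_defocus) >= threshold
```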
The mask synthesis unit 211 receives the captured image from the image signal processing unit 206, the color information of the mask area from the mask output color determination unit 210, and the mask area information from the mask area determination unit 208.
The mask synthesis unit 211 first determines the mask setting area for the captured image input from the image signal processing unit 206, according to the mask area information input from the mask area determination unit 208.
The mask synthesis unit 211 then sets, in the determined mask setting area, a mask of the color specified by the mask area color information input from the mask output color determination unit 210, for example a green-screen or blue-screen mask.
The mask setting image generated by the mask synthesis unit 211 is thus an image in which the area other than the output subject designated by the output subject selection information 232 input from the input unit 118 has been replaced with a mask image such as a green screen or blue screen.
This mask setting image generated by the mask synthesis unit 211 is output to the image synthesis unit 223.
The image synthesis unit 223 generates a composite image by pasting another image, for example a background image input from the storage unit or from outside, into the mask area of the mask setting image generated by the mask synthesis unit 211.
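Putting the last two steps together, a sketch of applying the mask color and then pasting a background into the masked area (the array names and the RGB green value are assumptions):

```python
import numpy as np

def apply_mask_and_composite(captured: np.ndarray, background: np.ndarray,
                             mask: np.ndarray,
                             mask_color=(0, 255, 0)) -> np.ndarray:
    """Replace masked regions with a uniform color, then with a background.

    captured, background : H x W x 3 uint8 images.
    mask                 : H x W boolean array from the mask area decision.
    """
    mask_set = captured.copy()
    mask_set[mask] = mask_color           # (C) mask setting image
    composite = mask_set.copy()
    composite[mask] = background[mask]    # (D) composite image
    return composite
```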
A specific example of the composite image generation processing in the third embodiment will be described with reference to FIG. 15.
FIG. 15 shows a processing example in which the image synthesis unit 223 receives the (C) mask setting image from the mask synthesis unit 211 and pastes another image, for example a background image input from the storage unit or from outside, into the mask area of the (C) mask setting image to generate the (D) composite image.
In FIG. 15, the user displays the (A) captured image on the display unit, selects the output subject that is not to be masked, and inputs the output subject selection information 232 via the input unit 118.
In the example shown in FIG. 15, the user has selected the "stuffed bear" region in the (A) captured image as the output subject not to be masked.
The output subject selection information 232 input via the input unit 118 is input to the output subject color analysis unit 209 and the mask area determination unit 208.
The output subject color analysis unit 209 analyzes the colors of the output subject in order to set the color of the mask area to a color different from those of the output subject displayed on the composite image.
The mask area determination unit 208 determines the mask area, that is, the area to be set to a green screen or blue screen, based on the output subject selection information 232 input from the input unit 118 and the per-pixel-region defocus amounts input from the defocus amount calculation unit 203.
As described above, the mask area determination unit 208 sets, as the mask area, the area not included in the output subject designated by the output subject selection information 232 input from the input unit 118, using the per-pixel-region defocus amounts input from the defocus amount calculation unit 203 as auxiliary information in this determination.
For example, a pixel region whose defocus amount differs from the defocus amount of the output subject designated by the output subject selection information 232 input from the input unit 118 by a predefined threshold or more is determined to be part of the mask area. For example, the "(B) mask area designation image" shown in FIG. 15 is generated and output to the mask synthesis unit 211.
The (B) mask area designation image shown in FIG. 15 is an example in which the pixel regions other than the "stuffed bear" region in the (A) captured image, which the user selected as the output subject not to be masked, have been determined to be the mask area.
The defocus amount of the "stuffed bear" region in the (A) captured image is almost 0, that is, it is the in-focus region, whereas the background region other than the "stuffed bear" in the (A) captured image is out of focus and has a large defocus amount.
That is, the background region consists of pixel regions whose defocus amounts differ from the defocus amount (almost 0) of the output subject (the "stuffed bear" region) designated by the output subject selection information 232 input from the input unit 118 by the predefined threshold or more. The mask area determination unit 208 determines such regions to be the mask area, generates an image in which the mask area is identifiable, for example the "(B) mask area designation image" with the mask area set to black, gray, or a specific color, and outputs it to the mask synthesis unit 211.
The mask synthesis unit 211 determines the mask setting area according to the mask area information input from the mask area determination unit 208, for example the (B) mask area designation image shown in FIG. 15.
The mask synthesis unit 211 then sets, in the determined mask setting area, a mask of the color specified by the mask area color information input from the mask output color determination unit 210, for example a green-screen or blue-screen mask, and generates the (C) mask setting image shown in the figure.
The (C) mask setting image generated by the mask synthesis unit 211 is an image in which the area other than the output subject designated by the output subject selection information 232 input from the input unit 118 has been replaced with a mask image such as a green screen or blue screen.
This (C) mask setting image generated by the mask synthesis unit 211 is output to the image synthesis unit 223.
The image synthesis unit 223 pastes another image, for example a background image input from the storage unit or from outside, into the mask area of the (C) mask setting image generated by the mask synthesis unit 211, and generates the (D) composite image shown in the figure.
As described above, the third embodiment calculates per-minute-pixel-region subject distance information using the detection information of the phase difference detection pixels, determines the mask area using the calculated subject distance information, and generates a composite image by pasting a background image or the like into the determined mask area. Since the mask area can be determined in units of fine pixel regions, high-precision composite image generation is possible.
Note that the digital signal processing unit 108 described with reference to FIG. 14 is configured to include all of the phase difference information acquisition unit 201, the image information acquisition unit 202, the defocus amount calculation unit 203, the AF control signal generation unit 204, the image signal processing unit 206, the mask area determination unit 208, the output subject color analysis unit 209, the mask output color determination unit 210, the mask synthesis unit 211, and the image synthesis unit 223, but this configuration is only an example.
A part of the configuration described with reference to FIG. 14 may be implemented outside the digital signal processing unit 108 of the imaging device 100, and the data processing may also be executed by an external device other than the imaging device 100.
Specifically, for example, as shown in FIG. 16, the image synthesis processing of the image synthesis unit may be executed using a keyer device 240, a dedicated processing device that performs image compositing against a green screen or blue screen.
In this case, the mask synthesis unit 211 of the digital signal processing unit 108 of the imaging device 100 outputs the generated mask composite image to the external keyer device 240.
The keying processing unit (image synthesis unit) 241 in the keyer device 240 generates a composite image by pasting another image, for example a background image input from the storage unit or from outside, into the mask area of the mask composite image input from the mask synthesis unit 211 of the digital signal processing unit 108.
In addition, for example, as in the configurations described above with reference to FIGS. 10 and 11, the image synthesis unit 223 of the digital signal processing unit 108 in FIG. 14 may be configured inside the digital signal processing unit 108, or it may be placed in the application execution unit 150 configured in the imaging device 100.
Furthermore, as in the configuration described with reference to FIG. 11(C), the image synthesis unit 223 may be provided in an external device other than the imaging device 100, for example an external device 180 such as a PC.
In this case, the mask synthesis unit 211 of the digital signal processing unit 108 of the imaging device 100 outputs the generated mask composite image to the external device 180 such as a PC.
The external device 180 such as a PC generates a composite image by pasting another image, for example a background image input from the storage unit or from outside, into the mask area of the mask composite image input from the mask synthesis unit 211 of the digital signal processing unit 108.
In this way, processing can be performed using a variety of configurations.
[3-4. (Example 4) Example of calculating the camera shake correction amount for each pixel region using the detection information of the phase difference detection pixels]
Next, as Example 4, an example will be described in which the detection information of the phase difference detection pixels is used to calculate the camera shake correction amount for each pixel region.
FIG. 17 is a block diagram for explaining the configuration of the fourth embodiment.
FIG. 17 shows an internal configuration example of the digital signal processing unit 108, which is a component of the imaging apparatus 100 described with reference to FIG.
As shown in FIG. 17, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, an image information acquisition unit 202, a defocus amount calculation unit 203, an AF control signal generation unit 204, an image signal processing unit 206, a distance information calculation unit 251, a camera shake amount calculation unit 262, a camera shake correction amount calculation unit 263, and a camera shake correction execution unit 264.
The digital signal processing unit 108 receives, from the preceding A/D conversion unit 105, the RGB image signal and the phase difference detection signal (detection information) output from the phase difference detection pixels.
The phase difference information acquisition unit 201 shown in FIG. 17 selects, from the input signal from the A/D conversion unit 105, only the phase difference detection signal (detection information) output from the phase difference detection pixels.
On the other hand, the image information acquisition unit 202 shown in FIG. 17 selects only the image signal (for example, the RGB image signal) from the input signal from the A/D conversion unit 105.
The image signal acquired by the image information acquisition unit 202 is input to the image signal processing unit 206.
The image signal processing unit 206 performs various kinds of image signal processing, such as demosaic processing, white balance adjustment, and gamma correction, on the image signal and inputs the processed image to the camera shake correction execution unit 264.
The phase difference information acquisition unit 201 selects the phase difference detection signal (detection information) output from the phase difference detection pixels from the input signal from the A/D conversion unit 105 and outputs the selected phase difference detection signal (detection information) to the defocus amount calculation unit 203.
The defocus amount calculation unit 203 calculates the focus deviation amount, that is, the deviation between the in-focus distance and the subject distance (defocus amount (DF)), for each predetermined pixel region.
As described above, in the phase difference detection method, the defocus amount of the focus lens is calculated based on the shift amount between the signals output according to the amounts of light received by each of a set of phase difference detection pixels functioning as focus detection sensors.
The AF control signal generation unit 204 generates, based on this defocus amount, an autofocus control signal (AF control signal) for setting the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user, and outputs the generated AF control signal to the AF control unit 112a.
The AF control unit 112a drives the focus lens according to the autofocus control signal (AF control signal) input from the AF control signal generation unit 204, setting the focus lens to the in-focus position (focus position) for, for example, the subject designated by the user.
Note that the subject set at the in-focus position (focus position) is not the entire image area of the captured image but a subject designated by the user, for example a person. Other subjects, such as the background, are out of focus and appear blurred.
As described above with reference to FIG. 7, the defocus amount calculation unit 203 calculates the defocus amount, that is, the amount corresponding to the deviation between the in-focus distance and the subject distance, for each minute pixel region, such as n×m pixels.
The distance information calculation unit 251 receives the fine per-pixel-region defocus amounts of the captured image calculated by the defocus amount calculation unit 203 and, based on these defocus amounts, calculates distance information for each pixel region of the captured image.
The distance information calculation unit 251 generates, for example, a depth map in which the distance value of each image region is expressed as a pixel value (for example, 0 to 255).
The distance information calculation process in the distance information calculation unit 251 is the same as that executed by the distance information calculation unit 207 of the second embodiment described above with reference to FIGS. 12 and 13: the subject distance for each pixel area is calculated from the per-pixel-area defocus amount according to (Equation 1) described above.
The distance information calculation unit 251 outputs the calculated per-pixel-area distance information to the camera shake correction amount calculation unit 263.
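Since (Equation 1) itself appears in an earlier part of the document, the following Python sketch substitutes a simple thin-lens relation as a stand-in to show the shape of the computation: per-pixel-area defocus amounts are converted to subject distances and then quantized into the 0-255 depth map described above. The thin-lens substitution and the function names are assumptions made for illustration.

```python
import numpy as np

def defocus_to_distance(df_map, f, s_focus):
    """Convert per-pixel-area defocus amounts to subject distances.

    df_map  : defocus amount (DF) per pixel area (image-plane shift,
              in the same length unit as f)
    f       : focal length of the imaging lens
    s_focus : subject distance the lens is currently focused on

    Stand-in for (Equation 1): assumes the thin-lens relation
    1/f = 1/s + 1/v, with the image plane displaced by DF.
    """
    v_focus = 1.0 / (1.0 / f - 1.0 / s_focus)  # image distance at focus
    v = v_focus + df_map                       # displaced image distance
    return 1.0 / (1.0 / f - 1.0 / v)           # per-area subject distance

def to_depth_map(distances, d_near, d_far):
    """Quantize distances to an 8-bit map in which nearer areas are
    brighter (higher pixel values), as in the depth map of FIG. 18(B)."""
    t = np.clip((d_far - distances) / (d_far - d_near), 0.0, 1.0)
    return np.round(t * 255).astype(np.uint8)
```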
The camera shake amount calculation unit 262 receives detection information such as the tilt, angle, and angular velocity of the imaging device 100 from the gyro 131, calculates the amount of camera shake at the time of image capture, and outputs the calculated shake amount to the camera shake correction amount calculation unit 263.
The camera shake correction amount calculation unit 263 receives the camera shake amount at the time of image capture from the camera shake amount calculation unit 262, and also receives the per-pixel-area distance information from the distance information calculation unit 251.
Using these two pieces of input information, the camera shake correction amount calculation unit 263 calculates a camera shake correction amount according to the subject distance in the captured image.
A specific example of the calculation of the camera shake correction amount according to the subject distance by the camera shake correction amount calculation unit 263 is described with reference to FIG. 18.
FIG. 18 shows (A) an image captured by the imaging device 100 and (B) the distance information (depth map) generated by the distance information calculation unit 251 based on that (A) captured image.
The per-pixel-area (B) distance information (depth map) generated by the distance information calculation unit 251 is a depth map in which the subject distance of each image area, calculated from the defocus amount that the defocus amount calculation unit 203 computed for each minute pixel area such as n×m pixels, is expressed as a pixel value (e.g., 0 to 255).
High-luminance (high-pixel-value) areas that appear white are areas where the subject distance is short. Low-luminance (low-pixel-value) areas that appear black are areas where the subject distance is long.
Using the camera shake amount at the time of image capture input from the camera shake amount calculation unit 262 and the (B) distance information (depth map) generated by the distance information calculation unit 251, the camera shake correction amount calculation unit 263 determines the correction amount according to the subject distance in the captured image.
As shown in FIG. 18, the camera shake correction amount calculation unit 263 determines the correction amount according to the subject distance of the captured image as follows.
A small correction amount is set for pixel areas where the subject distance is long.
A medium correction amount is set for pixel areas where the subject distance is medium.
A large correction amount is set for pixel areas where the subject distance is short.
This is because the image shake caused by camera shake is larger for subjects at shorter distances.
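A minimal Python sketch of this distance-dependent setting follows. The linear weighting of the shake amount by the depth value is an assumption chosen for illustration, since the description above only fixes the ordering: near areas receive large corrections and far areas receive small ones.

```python
import numpy as np

def correction_amount_map(shake_px, depth_map):
    """Scale a global camera shake amount into per-pixel-area
    correction amounts that grow as the subject gets nearer.

    shake_px  : camera shake at the image plane, in pixels, derived
                from the gyro-based shake amount calculation
    depth_map : 8-bit depth map in which brighter = nearer (FIG. 18(B))
    """
    weight = depth_map.astype(np.float32) / 255.0  # 1.0 at the nearest areas
    return shake_px * weight                       # small far, large near
```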
In this way, using the camera shake amount at the time of image capture input from the camera shake amount calculation unit 262 and the (B) distance information (depth map) generated by the distance information calculation unit 251, the camera shake correction amount calculation unit 263 calculates a correction amount (camera shake correction amount) for each pixel area according to the per-pixel-area distance information.
The per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit 263 is input to the camera shake correction execution unit 264.
The camera shake correction execution unit 264 receives the per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit 263 and executes camera shake correction processing on the captured image.
In accordance with this per-pixel-area camera shake correction amount, the camera shake correction execution unit 264 performs correction processing that applies a correction amount corresponding to the subject distance of each pixel area.
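As a hedged sketch of per-pixel-area execution, the following Python code shifts each block of the captured image by its own correction amount. The block size, whole-pixel shifting, and edge handling are simplifying assumptions; a practical implementation would resample with sub-pixel interpolation.

```python
import numpy as np

def apply_shake_correction(image, corr_x, corr_y, block=16):
    """Shift each block of a grayscale frame by its own correction.

    image          : H x W array (H and W assumed multiples of block)
    corr_x, corr_y : per-block correction amounts in pixels,
                     shape (H // block, W // block), e.g. the output
                     of a distance-weighted correction-amount map
    """
    out = np.zeros_like(image)
    h, w = image.shape
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy = int(round(float(corr_y[by // block, bx // block])))
            dx = int(round(float(corr_x[by // block, bx // block])))
            # read the block from its shifted source position,
            # clamped to the frame boundaries
            sy, sx = max(0, by - dy), max(0, bx - dx)
            src = image[sy:sy + block, sx:sx + block]
            src = src[:h - by, :w - bx]  # keep the write in bounds
            out[by:by + src.shape[0], bx:bx + src.shape[1]] = src
    return out
```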
A specific example of the camera shake correction processing in the fourth embodiment will be described with reference to FIG. 19.
FIG. 19 shows a processing example in which the camera shake correction execution unit 264 executes correction processing on (A) the captured image output from the image signal processing unit 206, applying a correction amount corresponding to the subject distance of each pixel area in accordance with the per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit 263.
The (A) captured image output from the image signal processing unit 206 is, for example, an image captured with a "stuffed bear" placed outdoors, shot with the area of the "stuffed bear" set by the user as the in-focus position (focus position).
The subject distance information used by the camera shake correction execution unit 264 to perform the camera shake correction processing is the per-pixel-area (B) distance information (depth map) generated by the distance information calculation unit 251.
The per-pixel-area (B) distance information (depth map) generated by the distance information calculation unit 251 is a depth map in which the subject distance of each image area, calculated from the defocus amount that the defocus amount calculation unit 203 computed for each minute pixel area such as n×m pixels, is expressed as a pixel value (e.g., 0 to 255).
High-luminance (high-pixel-value) areas that appear white are areas where the subject distance is short. Low-luminance (low-pixel-value) areas that appear black are areas where the subject distance is long.
Note that the process of calculating the subject distance from the defocus amount is the same as the distance information calculation process executed by the distance information calculation unit 207 of the second embodiment described above with reference to FIGS. 12 and 13: the subject distance for each pixel area is calculated from the per-pixel-area defocus amount according to (Equation 1) described above.
The distance information calculation unit 251 outputs the calculated per-pixel-area distance information to the camera shake correction amount calculation unit 263.
The camera shake correction amount calculation unit 263 receives the camera shake amount at the time of image capture from the camera shake amount calculation unit 262, and also receives the per-pixel-area distance information from the distance information calculation unit 251.
Using these two pieces of input information, the camera shake correction amount calculation unit 263 calculates a camera shake correction amount according to the subject distance in the captured image.
As described above with reference to FIG. 18, the camera shake correction amount calculation unit 263 determines the correction amount according to the subject distance of the captured image as follows.
A small correction amount is set for pixel areas where the subject distance is long.
A medium correction amount is set for pixel areas where the subject distance is medium.
A large correction amount is set for pixel areas where the subject distance is short.
This is because the image shake caused by camera shake is larger for subjects at shorter distances.
In this way, using the camera shake amount at the time of image capture input from the camera shake amount calculation unit 262 and the (B) distance information (depth map) generated by the distance information calculation unit 251, the camera shake correction amount calculation unit 263 calculates a correction amount (camera shake correction amount) for each pixel area according to the per-pixel-area distance information.
The calculated per-pixel-area camera shake correction amount is input to the camera shake correction execution unit 264.
The camera shake correction execution unit 264 receives the per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit 263 and executes camera shake correction processing on the captured image, applying a correction amount corresponding to the subject distance of each pixel area.
As described above, in the fourth embodiment, the detection information of the phase difference detection pixels is used to calculate subject distance information for each minute pixel area, and the calculated subject distance information is used to determine the correction amount for camera shake correction. That is, the correction amount is made smaller for pixel areas with a longer subject distance and larger for pixel areas with a shorter subject distance.
By performing camera shake correction with the correction amount adjusted according to the subject distance in this way, optimal correction processing becomes possible, and a high-quality image to which optimal camera shake correction has been applied can be generated.
Note that the digital signal processing unit 108 described with reference to FIG. 17 is configured to include all of the phase difference information acquisition unit 201, image information acquisition unit 202, defocus amount calculation unit 203, AF control signal generation unit 204, image signal processing unit 206, distance information calculation unit 251, camera shake amount calculation unit 262, camera shake correction amount calculation unit 263, and camera shake correction execution unit 264, but this configuration is an example.
Some of the components described with reference to FIG. 17 may be provided outside the digital signal processing unit 108 of the imaging device 100, and the data processing may be executed by an external device separate from the imaging device 100.
Specifically, for example, the camera shake correction execution unit 264 can be configured outside the digital signal processing unit 108.
For example, as in the configurations described above with reference to FIGS. 10 and 11, besides the example in which the camera shake correction execution unit 264 is an internal component of the digital signal processing unit 108, a configuration is possible in which the processing is executed by an image correction unit of the imaging device 100 other than the digital signal processing unit 108.
Furthermore, as in the configuration described with reference to FIG. 11(C), an image correction unit may be provided in an external device other than the imaging device 100, for example, an external device 180 such as a PC.
In this case, the camera shake correction amount calculation unit 263 of the digital signal processing unit 108 of the imaging device 100 outputs the calculated per-pixel-area camera shake correction amount to the external device 180 such as a PC.
The external device 180 such as a PC then uses the per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit 263 of the digital signal processing unit 108 to execute camera shake correction processing according to the subject distance of each pixel area.
In this way, processing using various configurations is possible.
[4. Other Embodiments]
Next, other embodiments will be described.
So far, the following four embodiments have been described with reference to FIGS. 9 to 19.
(Embodiment 1) An embodiment that calculates an image blend ratio using the detection information of the phase difference detection pixels
(Embodiment 2) An embodiment that calculates distance information applied to three-dimensional image generation using the detection information of the phase difference detection pixels
(Embodiment 3) An embodiment that determines a mask area for background image composition using the detection information of the phase difference detection pixels and generates an image in which a mask image is composited onto areas other than the selected subject area
(Embodiment 4) An embodiment that calculates a per-pixel-area camera shake correction amount using the detection information of the phase difference detection pixels
Each of these four embodiments can be configured independently, but they can also be configured as a device or system combining any plurality of the embodiments.
The purpose may be determined from the subject, the detection information, prior settings, and the like, and one or more of the embodiments may then be selected for use, or one or more of the embodiments may be proposed to the user.
For example, FIG. 20 is a diagram showing the configuration of the digital signal processing unit 108 of an imaging device 100 capable of executing all the processes of the above four embodiments.
FIG. 20 shows an example of the internal configuration of the digital signal processing unit 108, which is a component of the imaging device 100 described with reference to FIG. 1.
As shown in FIG. 20, the digital signal processing unit 108 includes a phase difference information acquisition unit 201, an image information acquisition unit 202, a defocus amount calculation unit 203, an AF control signal generation unit 204, an image blend ratio calculation unit 205, an image signal processing unit 206, a distance information calculation unit 207, a mask area determination unit 208, an output subject color analysis unit 209, a mask output color determination unit 210, a mask composition unit 211, an image blend processing execution unit 221, a three-dimensional image generation unit 222, an image composition unit 223, a camera shake amount calculation unit 262, a camera shake correction amount calculation unit 263, and a camera shake correction execution unit 264.
With these components, the digital signal processing unit 108 shown in FIG. 20 is a digital signal processing unit capable of executing all the processes of the above four embodiments.
For example, an imaging device having such a digital signal processing unit 108 can also be configured.
Note that some of the components of the digital signal processing unit 108 shown in FIG. 20 may instead be components of the imaging device outside the digital signal processing unit 108.
It is also possible to provide some of the components of the digital signal processing unit 108 shown in FIG. 20 in an external device such as a PC other than the imaging device and to execute the processing in that external device.
[5. Example Hardware Configuration of an External Device Such as a PC Connected to the Imaging Device]
Next, an example hardware configuration of an external device such as a PC connected to the imaging device will be described.
As described above, part of the processing of the digital signal processing unit described in each embodiment can be executed in an external image processing device such as a PC.
For example, as described above with reference to FIG. 11, an external device 180 such as a PC receives data calculated by the imaging device 100 and executes image processing based on the input data.
With reference to FIG. 21, an example hardware configuration of an image processing device such as a PC that receives the calculated data of the imaging device 100 and executes image processing based on the input data will be described.
A CPU (Central Processing Unit) 301 functions as a control unit and a data processing unit that execute various processes according to programs stored in a ROM (Read Only Memory) 302 or a storage unit 308. For example, it executes the processing described in the above embodiments. A RAM (Random Access Memory) 303 stores the programs executed by the CPU 301, data, and the like. The CPU 301, ROM 302, and RAM 303 are interconnected by a bus 304.
The CPU 301 is connected to an input/output interface 305 via the bus 304. Connected to the input/output interface 305 are an input unit 306 including a camera, various switches, a microphone, sensors, and the like, and an output unit 307 including a display, speakers, and the like.
The CPU 301 executes various processes in response to commands input from the input unit 306 and outputs the processing results to, for example, the output unit 307.
The storage unit 308 connected to the input/output interface 305 consists of, for example, a flash memory and stores the programs executed by the CPU 301 and various data. The communication unit 309 is a communication unit for Wi-Fi communication, Bluetooth (registered trademark) (BT) communication, or other data communication via a network such as the Internet or a local area network, and communicates with external devices.
The drive 310 connected to the input/output interface 305 drives a removable medium 311 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory such as a memory card, and records or reads data.
[6. Summary of the Configuration of the Present Disclosure]
Embodiments of the present disclosure have been described in detail above with reference to specific examples. However, it is obvious that those skilled in the art can modify or substitute the embodiments without departing from the gist of the present disclosure. That is, the present invention has been disclosed in the form of examples and should not be construed as limiting. To determine the gist of the present disclosure, the claims should be taken into consideration.
Note that the technology disclosed in this specification can take the following configurations.
(1) An imaging device including:
a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
an image blend ratio calculation unit that calculates an image blend ratio between the captured image and another image,
wherein the image blend ratio calculation unit
calculates the image blend ratio for each pixel area based on the per-pixel-area defocus amount calculated by the defocus amount calculation unit.
(2) The imaging device according to (1), wherein the image blend ratio calculation unit
generates a blend ratio output image in which pixel values having luminance corresponding to the per-pixel-area image blend ratio are set.
(3) The imaging device according to (1) or (2), wherein the image blend ratio calculation unit
sets the blend ratio of the captured image to a large ratio for pixel areas where the defocus amount of the captured image is small, and
sets the blend ratio of the captured image to a small ratio for pixel areas where the defocus amount of the captured image is large.
(4) The imaging device according to any one of (1) to (3), wherein the image blend ratio calculation unit
sets the blend ratio of the captured image to 100% for pixel areas where the defocus amount of the captured image is equal to or less than a prescribed threshold value Th1,
sets the blend ratio of the captured image to 0% for pixel areas where the defocus amount of the captured image is equal to or greater than a prescribed threshold value Th2, and
changes the blend ratio of the captured image according to the defocus amount of the captured image for pixel areas where the defocus amount of the captured image is greater than the prescribed threshold value Th1 and less than the prescribed threshold value Th2.
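As an illustration of configuration (4), the following Python sketch computes the blend ratio of the captured image for one pixel area. The linear transition between Th1 and Th2 is an assumption, since the configuration only requires that the ratio vary with the defocus amount in that interval.

```python
def blend_ratio(df, th1, th2):
    """Blend ratio of the captured image for a pixel area with
    defocus amount df (th1 < th2 assumed)."""
    if df <= th1:
        return 1.0            # 100%: in or near focus
    if df >= th2:
        return 0.0            # 0%: strongly defocused
    return (th2 - df) / (th2 - th1)  # assumed linear falloff in between
```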
(5) The imaging device according to any one of (1) to (4), further including
an image blend processing execution unit that applies the per-pixel-area image blend ratio calculated by the image blend ratio calculation unit and executes image blend processing of the captured image and another image.
(6) An imaging device including:
a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
a distance information calculation unit that calculates a subject distance for each pixel area based on the per-pixel-area defocus amount calculated by the defocus amount calculation unit.
(7) The imaging device according to (6), wherein the distance information calculation unit
generates a distance image in which pixel values having luminance corresponding to the per-pixel-area subject distance are set.
(8) The imaging device according to (6) or (7), wherein the distance information calculation unit
calculates the per-pixel-area subject distance by a calculation formula using the per-pixel-area defocus amount calculated by the defocus amount calculation unit and focal length information of a lens of the imaging device.
(9) The imaging device according to any one of (6) to (8), further including
a three-dimensional image generation unit that applies the per-pixel-area subject distance information calculated by the distance information calculation unit to generate a three-dimensional image based on the captured image.
(10) The imaging device according to any one of (6) to (9), further including
a camera shake correction amount calculation unit that applies the per-pixel-area subject distance information calculated by the distance information calculation unit to calculate a per-pixel-area camera shake correction amount for the captured image.
(11) The imaging device according to (10), wherein the camera shake correction amount calculation unit
calculates per-pixel-area camera shake correction amounts in which a larger correction amount is set for pixel areas with a shorter subject distance and a smaller correction amount is set for pixel areas with a longer subject distance.
(12) The imaging device according to (10) or (11), further including
a camera shake correction execution unit that executes camera shake correction processing on the captured image,
wherein the camera shake correction execution unit
executes per-pixel-area camera shake correction processing in accordance with the per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit.
(13) An imaging device including:
a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
a mask area determination unit that determines a mask area for the captured image based on the per-pixel-area defocus amount calculated by the defocus amount calculation unit.
(14) The imaging device according to (13), wherein the mask area determination unit
determines, as the mask area, pixel areas having a defocus amount whose difference from the defocus amount of an output subject designated by a user is equal to or greater than a predetermined threshold value.
(15) The imaging device according to (13) or (14), wherein the mask area determination unit
generates a mask area indication image in which the mask area is distinguishable from the area of the output subject designated by the user.
(16) The imaging device according to any one of (13) to (15), further including
a mask composition unit that composites a mask image of a predetermined color onto the mask area determined by the mask area determination unit to generate a mask setting image.
(17) The imaging device according to (16), further including
an image composition unit that superimposes another image on the mask area of the mask setting image generated by the mask composition unit to generate a composite image with the output subject designated by the user.
(18) The imaging device according to (16) or (17), wherein the mask composition unit
outputs the generated mask setting image to an external device that executes composite image generation processing.
(19) An image processing method executed in an imaging device, including:
a defocus amount calculation step in which a defocus amount calculation unit calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
an image blend ratio calculation step in which the defocus amount calculation unit calculates an image blend ratio between the captured image and another image,
wherein the image blend ratio calculation step
calculates the image blend ratio for each pixel area based on the per-pixel-area defocus amount calculated in the defocus amount calculation step.
(20) A program for causing an imaging device to execute image processing, the program causing:
a defocus amount calculation unit to execute a defocus amount calculation step of calculating a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
the defocus amount calculation unit to execute an image blend ratio calculation step of calculating an image blend ratio between the captured image and another image,
wherein, in the image blend ratio calculation step,
the image blend ratio for each pixel area is calculated based on the per-pixel-area defocus amount calculated in the defocus amount calculation step.
 なお、明細書中において説明した一連の処理はハードウェア、またはソフトウェア、あるいは両者の複合構成によって実行することが可能である。ソフトウェアによる処理を実行する場合は、処理シーケンスを記録したプログラムを、専用のハードウェアに組み込まれたコンピュータ内のメモリにインストールして実行させるか、あるいは、各種処理が実行可能な汎用コンピュータにプログラムをインストールして実行させることが可能である。例えば、プログラムは記録媒体に予め記録しておくことができる。記録媒体からコンピュータにインストールする他、LAN(Local Area Network)、インターネットといったネットワークを介してプログラムを受信し、内蔵するハードディスク等の記録媒体にインストールすることができる。 It should be noted that the series of processes described in the specification can be executed by hardware, software, or a composite configuration of both. When executing processing by software, a program recording the processing sequence is installed in the memory of a computer built into dedicated hardware and executed, or the program is loaded into a general-purpose computer capable of executing various processing. It can be installed and run. For example, the program can be pre-recorded on a recording medium. In addition to being installed in a computer from a recording medium, the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed in a recording medium such as an internal hard disk.
The various processes described in the specification may not only be executed chronologically according to the description but may also be executed in parallel or individually according to the processing capability of the device executing the processes or as necessary. In this specification, a system is a logical collective configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same housing.
As described above, according to the configuration of one embodiment of the present disclosure, a device and a method are realized that calculate the per-pixel-area image blend ratio and the subject distance based on the per-pixel-area defocus amount of the captured image and execute various image processing using the calculated data.
Specifically, for example, the device includes a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; it calculates an image blend ratio for each pixel area using the per-pixel-area defocus amount calculated by the defocus amount calculation unit and performs image blend processing; it calculates the subject distance for each pixel area based on the per-pixel-area defocus amount and performs three-dimensional image generation processing using the per-pixel-area subject distance, as well as per-pixel-area camera shake correction amount calculation processing; or it executes various other processes, such as determining a mask area for the captured image based on the per-pixel-area defocus amount and generating a green screen image.
With this configuration, a device and a method are realized that calculate the per-pixel-area image blend ratio and the subject distance based on the per-pixel-area defocus amount of the captured image and execute various image processing using the calculated data.
REFERENCE SIGNS LIST
100 Imaging device
101 Focus lens
102 Zoom lens
103 Imaging element
104 Analog signal processing unit
105 A/D conversion unit
106 Timing generator (TG)
107 Vertical driver
108 Digital signal processing unit (DSP)
110 Control unit
112a AF control unit
112b Zoom control unit
113 Motor
115 Recording device
116 Viewfinder
117 Monitor
118 Input unit (operation unit)
122 Pixel area
131 Gyro
151 Phase difference detection pixel
152 Pixel area
201 Phase difference information acquisition unit
202 Image information acquisition unit
203 Defocus amount calculation unit
204 AF control signal generation unit
205 Image blend ratio calculation unit
206 Image signal processing unit
207 Distance information calculation unit
208 Mask area determination unit
209 Output subject color analysis unit
210 Mask output color determination unit
211 Mask composition unit
221 Image blend processing execution unit
222 Three-dimensional image generation unit
223 Image composition unit
251 Distance information calculation unit
262 Camera shake amount calculation unit
263 Camera shake correction amount calculation unit
264 Camera shake correction execution unit
301 CPU
302 ROM
303 RAM
304 Bus
305 Input/output interface
306 Input unit
307 Output unit
308 Storage unit
309 Communication unit
310 Drive
311 Removable medium

Claims (20)

1. An imaging device comprising:
    a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
    an image blend ratio calculation unit that calculates an image blend ratio between the captured image and another image,
    wherein the image blend ratio calculation unit
    calculates the image blend ratio for each pixel area based on the per-pixel-area defocus amount calculated by the defocus amount calculation unit.
2. The imaging device according to claim 1, wherein the image blend ratio calculation unit
    generates a blend ratio output image in which pixel values having luminance corresponding to the per-pixel-area image blend ratio are set.
3. The imaging device according to claim 1, wherein the image blend ratio calculation unit
    sets the blend ratio of the captured image to a large ratio for pixel areas where the defocus amount of the captured image is small, and
    sets the blend ratio of the captured image to a small ratio for pixel areas where the defocus amount of the captured image is large.
4. The imaging device according to claim 1, wherein the image blend ratio calculation unit
    sets the blend ratio of the captured image to 100% for pixel areas where the defocus amount of the captured image is equal to or less than a prescribed threshold value Th1,
    sets the blend ratio of the captured image to 0% for pixel areas where the defocus amount of the captured image is equal to or greater than a prescribed threshold value Th2, and
    changes the blend ratio of the captured image according to the defocus amount of the captured image for pixel areas where the defocus amount of the captured image is greater than the prescribed threshold value Th1 and less than the prescribed threshold value Th2.
5. The imaging device according to claim 1, further comprising
    an image blend processing execution unit that applies the per-pixel-area image blend ratio calculated by the image blend ratio calculation unit and executes image blend processing of the captured image and another image.
6. An imaging device comprising:
    a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
    a distance information calculation unit that calculates a subject distance for each pixel area based on the per-pixel-area defocus amount calculated by the defocus amount calculation unit.
7. The imaging device according to claim 6, wherein the distance information calculation unit
    generates a distance image in which pixel values having luminance corresponding to the per-pixel-area subject distance are set.
8. The imaging device according to claim 6, wherein the distance information calculation unit
    calculates the per-pixel-area subject distance by a calculation formula using the per-pixel-area defocus amount calculated by the defocus amount calculation unit and focal length information of a lens of the imaging device.
9. The imaging device according to claim 6, further comprising
    a three-dimensional image generation unit that applies the per-pixel-area subject distance information calculated by the distance information calculation unit to generate a three-dimensional image based on the captured image.
10. The imaging device according to claim 6, further comprising
    a camera shake correction amount calculation unit that applies the per-pixel-area subject distance information calculated by the distance information calculation unit to calculate a per-pixel-area camera shake correction amount for the captured image.
11. The imaging device according to claim 10, wherein the camera shake correction amount calculation unit
    calculates per-pixel-area camera shake correction amounts in which a larger correction amount is set for pixel areas with a shorter subject distance and a smaller correction amount is set for pixel areas with a longer subject distance.
12. The imaging device according to claim 10, further comprising
    a camera shake correction execution unit that executes camera shake correction processing on the captured image,
    wherein the camera shake correction execution unit
    executes per-pixel-area camera shake correction processing in accordance with the per-pixel-area camera shake correction amount calculated by the camera shake correction amount calculation unit.
13. An imaging device comprising:
    a defocus amount calculation unit that calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
    a mask area determination unit that determines a mask area for the captured image based on the per-pixel-area defocus amount calculated by the defocus amount calculation unit.
14. The imaging device according to claim 13, wherein the mask area determination unit
    determines, as the mask area, pixel areas having a defocus amount whose difference from the defocus amount of an output subject designated by a user is equal to or greater than a predetermined threshold value.
15. The imaging device according to claim 13, wherein the mask area determination unit
    generates a mask area indication image in which the mask area is distinguishable from the area of the output subject designated by the user.
16. The imaging device according to claim 13, further comprising
    a mask composition unit that composites a mask image of a predetermined color onto the mask area determined by the mask area determination unit to generate a mask setting image.
17. The imaging device according to claim 16, further comprising
    an image composition unit that superimposes another image on the mask area of the mask setting image generated by the mask composition unit to generate a composite image with the output subject designated by the user.
18. The imaging device according to claim 16, wherein the mask composition unit
    outputs the generated mask setting image to an external device that executes composite image generation processing.
19. An image processing method executed in an imaging device, the method comprising:
    a defocus amount calculation step in which a defocus amount calculation unit calculates a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
    an image blend ratio calculation step in which the defocus amount calculation unit calculates an image blend ratio between the captured image and another image,
    wherein the image blend ratio calculation step
    calculates the image blend ratio for each pixel area based on the per-pixel-area defocus amount calculated in the defocus amount calculation step.
20. A program for causing an imaging device to execute image processing, the program causing:
    a defocus amount calculation unit to execute a defocus amount calculation step of calculating a defocus amount for each pixel area of a captured image output from an imaging element of the imaging device; and
    the defocus amount calculation unit to execute an image blend ratio calculation step of calculating an image blend ratio between the captured image and another image,
    wherein, in the image blend ratio calculation step,
    the image blend ratio for each pixel area is calculated based on the per-pixel-area defocus amount calculated in the defocus amount calculation step.
PCT/JP2022/013430 2021-09-14 2022-03-23 Imaging device, image processing method, and program WO2023042453A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021149153 2021-09-14
JP2021-149153 2021-09-14

Publications (1)

Publication Number Publication Date
WO2023042453A1 true WO2023042453A1 (en) 2023-03-23

Family

ID=85601971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/013430 WO2023042453A1 (en) 2021-09-14 2022-03-23 Imaging device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2023042453A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017217115A1 (en) * 2016-06-17 2017-12-21 ソニー株式会社 Image processing device, image processing method, program, and image processing system

Similar Documents

Publication Publication Date Title
JP4524717B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
US9451160B2 (en) Imaging apparatus and method for controlling the imaging apparatus
JP6555863B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
US7586518B2 (en) Imaging technique performing focusing on plurality of images
US9843735B2 (en) Image processing apparatus, imaging apparatus comprising the same, and image processing method
JP6046905B2 (en) Imaging apparatus, exposure control method, and program
CN108462830B (en) Image pickup apparatus and control method of image pickup apparatus
US10986262B2 (en) Imaging apparatus, control method, and non-transitory storage medium
US8872963B2 (en) Imaging apparatus and imaging method
JP2010271670A (en) Imaging apparatus
JP2012049773A (en) Imaging apparatus and method, and program
JP2015231118A (en) Image composition device, image composition system and image composition method
US10979620B2 (en) Image processing apparatus for providing information for focus adjustment, control method of the same, and storage medium
JP2020017807A (en) Image processing apparatus, image processing method, and imaging apparatus
JP4747673B2 (en) Electronic camera and image processing program
JP3974798B2 (en) Imaging device
WO2023042453A1 (en) Imaging device, image processing method, and program
US10447937B2 (en) Image processing apparatus, imaging apparatus, image processing method, and storage medium that perform image processing based on an image processing parameter set for a first object area, and information on a positional relationship between an object in a second object area and an object in the first object area
WO2022244311A1 (en) Imaging device, image processing method, and program
JP6891470B2 (en) Imaging device
JP2013149043A (en) Image processing device
JP5929019B2 (en) Imaging device
US20180091793A1 (en) Image processing apparatus, imaging apparatus, image processing method, and storage medium
JP2002244025A (en) Focusing device for camera
JPH06189184A (en) Video camera and photometric method therefor