US11202045B2 - Image processing apparatus, imaging apparatus, image processing method, and program - Google Patents
Image processing apparatus, imaging apparatus, image processing method, and program
- Publication number: US11202045B2 (application US16/070,952)
- Authority
- US
- United States
- Prior art keywords
- image
- white
- corresponding parameter
- positional deviation
- frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
- H04N5/272 — Means for inserting a foreground image in a background image, i.e. inlay, outlay
- G06T5/001
- G06T5/20 — Image enhancement or restoration using local operators
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- H04N23/13 — Cameras or camera modules for generating image signals from different wavelengths with multiple sensors
- H04N23/6812 — Motion detection based on additional sensors, e.g. acceleration sensors
- H04N23/84 — Camera processing pipelines for processing colour signals
- H04N23/88 — Camera processing pipelines for colour balance, e.g. white-balance circuits or colour temperature control
- H04N25/133 — Colour filter arrays including elements passing panchromatic light, e.g. filters passing white light
- H04N25/134 — Colour filter arrays based on three different wavelength filter elements
- H04N5/23258
- H04N9/0451
- H04N9/04555
- H04N9/04557
- H04N9/09
- H04N9/646 — Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- H04N9/735
- G06T2207/10024 — Color image
- G06T2207/20212 — Image combination
- H04N23/6811 — Motion detection based on the image signal
Definitions
- the present disclosure relates to an image processing apparatus, an imaging apparatus, an image processing method, and a program. More particularly, the present disclosure relates to an image processing apparatus, an imaging apparatus, an image processing method, and a program that execute a correction process for a false color that occurs in an image.
- a color filter constituted by an RGB array is provided in an imaging element such that incident light via the color filter reaches the imaging element and an electric signal according to the amount of each ray of the incident light is output.
- the false color is likely to occur in a so-called high frequency signal region in which, for example, the amount of change in luminance or color signal per unit area is large. Particularly in the case of an imaging element with a high density, the false color tends to occur more easily.
- Patent Document 1: Japanese Patent Application Laid-Open No. 2013-26672
- Patent Document 1 discloses a configuration that executes a color correction using two images photographed by two imaging elements, namely, an imaging element (image sensor) having an RGB pixel array for photographing a general color image, for example, a Bayer array, and an imaging element constituted by a white (W) pixel array including only W pixels.
- the photographed image obtained by the imaging element (image sensor) having the RGB pixel array contains a region in which the false color is likely to occur and a region in which the false color rarely occurs. Thus, these regions need to be discriminated such that an optimal process is performed in units of regions. Otherwise, it is impossible to reproduce the accurate color of the subject.
- Patent Document 1 describes a false color correction using photographed images by two imaging elements, namely, an imaging element (image sensor) having the RGB array and an imaging element (image sensor) having the white (W) pixel array.
- the present disclosure has been made in view of the above difficulties and it is an object of the present disclosure to provide an image processing apparatus, an imaging apparatus, an image processing method, and a program that use two images photographed using an imaging element for photographing an ordinary color image, such as an RGB array imaging element, and an imaging element having a white (W) pixel array, to optimize a correction approach in accordance with characteristics in units of image regions and generate a high quality image in which false colors are decreased by an optimal image correction according to image characteristics of each image region.
- a first aspect of the present disclosure is an image processing apparatus including an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array, and executes an image process that reduces false colors included in the color image, in which
- the image processor includes
- a frequency-corresponding parameter calculation unit that receives an input of the white (W) image and calculates a frequency-corresponding parameter of the white (W) image in units of image regions;
- a positional deviation-corresponding parameter calculation unit that receives inputs of the white (W) image and the color image and calculates a positional deviation-corresponding parameter of the two input images in units of image regions;
- an image correction unit that executes a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
- a second aspect of the present disclosure is an imaging apparatus including:
- a first imaging unit that has a W array imaging element whose all pixels are placed in a white (W) pixel array and photographs a white (W) image;
- a second imaging unit that has an RGB array imaging element having an RGB pixel array and photographs a color image
- an image processor that receives inputs of the white (W) image and the color image and executes an image process that reduces false colors included in the color image, in which
- the image processor includes:
- a frequency-corresponding parameter calculation unit that receives an input of the white (W) image and calculates a frequency-corresponding parameter of the white (W) image in units of image regions;
- a positional deviation-corresponding parameter calculation unit that receives inputs of the white (W) image and the color image and calculates a positional deviation-corresponding parameter of the two input images in units of image regions;
- an image correction unit that executes a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
- a third aspect of the present disclosure is an image processing method executed in an image processing apparatus
- the image processing apparatus including an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array, and executes an image process that reduces false colors included in the color image,
- the image processing method including
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- a fourth aspect of the present disclosure is an image processing method executed in an imaging apparatus
- the imaging apparatus including:
- a first imaging unit that has a W array imaging element whose all pixels are placed in a white (W) pixel array and photographs a white (W) image;
- a second imaging unit that has an RGB array imaging element having an RGB pixel array and photographs a color image
- an image processor that receives inputs of the white (W) image and the color image and executes an image process that reduces false colors included in the color image
- the image processing method including:
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- a fifth aspect of the present disclosure is a program that causes an image processing apparatus to execute an image process
- the image processing apparatus including an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array, and executes an image process that reduces false colors included in the color image,
- the program causing the image processor to execute a process of calculating a corrected pixel value by executing:
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- a sixth aspect of the present disclosure is a program that causes an imaging apparatus to execute an image process
- the imaging apparatus including:
- a first imaging unit that has a W array imaging element whose all pixels are placed in a white (W) pixel array and photographs a white (W) image;
- a second imaging unit that has an RGB array imaging element having an RGB pixel array and photographs a color image
- an image processor that receives inputs of the white (W) image and the color image and executes an image process that reduces false colors included in the color image
- the first imaging unit and the second imaging unit to photograph the white (W) image and the color image
- the image processor to execute a process of calculating a corrected pixel value by executing:
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- programs of the present disclosure are programs that can be provided by a storage medium or a communication medium configured to provide a program in a computer readable format, for example, to an information processing apparatus or a computer system capable of executing a variety of program codes.
- the term "system" as used herein refers to a logical group configuration of a plurality of apparatuses and is not limited to a system in which apparatuses having respective configurations are accommodated in the same housing.
- an apparatus and a method that perform a false color correction according to image characteristics of a color image in units of image regions are implemented.
- an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array and executes an image process that reduces false colors included in the color image.
- the image processor executes a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
- FIG. 1 is a diagram for explaining a configuration example of an image processing apparatus.
- FIG. 2 is a diagram for explaining configuration examples of a pixel array of an imaging apparatus.
- FIG. 3 is a diagram for explaining a configuration and a process of an image processor.
- FIG. 4 is a diagram for explaining a configuration and a process of a frequency-corresponding parameter calculation unit.
- FIG. 5 is a diagram for explaining a configuration and a process of the frequency-corresponding parameter calculation unit.
- FIG. 6 is a diagram for explaining a configuration and a process of a positional deviation-corresponding parameter calculation unit.
- FIG. 7 is a diagram for explaining a configuration and a process of the positional deviation-corresponding parameter calculation unit.
- FIG. 8 is a diagram for explaining a configuration and a process of the positional deviation-corresponding parameter calculation unit.
- FIG. 9 is a diagram for explaining a configuration and a process of an image correction unit.
- FIG. 10 is a diagram for explaining a process executed by the image correction unit.
- FIG. 11 is a diagram for explaining a process executed by the image correction unit.
- FIG. 12 is a diagram for explaining a process executed by the image correction unit.
- FIG. 13 is a diagram for explaining a process executed by the image correction unit.
- FIG. 14 is a diagram for explaining a process executed by the image correction unit.
- FIG. 15 is a diagram for explaining a process executed by the image correction unit.
- FIG. 16 is a diagram for explaining a process executed by the image correction unit.
- FIG. 17 is a diagram for explaining a process executed by the image correction unit.
- FIG. 18 is a diagram illustrating a flowchart for explaining a sequence of a process executed by the image processing apparatus.
- A configuration and a process of an image processing apparatus of the present disclosure will be described with reference to FIG. 1 and the following drawings.
- FIG. 1 is a block diagram illustrating a configuration of an imaging apparatus which is an example of the image processing apparatus 100 of the present disclosure.
- the image processing apparatus is not limited to the imaging apparatus, but also includes an information processing apparatus such as a personal computer (PC) that, for example, receives an input of a photographed image by the imaging apparatus to execute an image process.
- An image process other than a photographing process described in the following embodiments is not limited to the imaging apparatus, but can be executed in an information processing apparatus such as a PC.
- the image processing apparatus 100 as the imaging apparatus illustrated in FIG. 1 has a control unit 101 , a storage unit 102 , a codec 103 , an input unit 104 , an output unit 105 , an imaging unit 106 , and an image processor 120 .
- the imaging unit 106 includes a first imaging unit 107 having a white (W) pixel array imaging element that outputs an electric signal based on the amount of incident light in an entire wavelength region of visible light, and a second imaging unit 108 having an RGB pixel array imaging element that has an RGB color filter, for example, a color filter constituted by a Bayer array, and outputs a signal corresponding to input light of each RGB color in units of pixels.
- the first imaging unit 107 and the second imaging unit 108 serve as two imaging units set at positions a predetermined interval apart from each other and the photographed images by the respective units are obtained as images from different viewpoints. In a case where these two images are still images, the images are photographed as still images at the same timing. In a case where the images are moving images, frames photographed by the respective imaging units are obtained as frames photographed in synchronization with each other, that is, continuous image frames sequentially photographed at the same timing.
- these two imaging units 107 and 108 serve as two imaging units set at positions a predetermined interval apart from each other and the photographed images by the respective units are obtained as images from different viewpoints. That is, the images are obtained as images having parallax.
- the same subject image is not photographed at corresponding pixels of the two images, that is, pixels at the same position, and a subject deviation according to parallax occurs.
- the image processor 120 performs an image correction by taking this deviation into account, specifically, an image process that reduces false colors. Details of this process will be described later.
- the control unit 101 controls various processes executed in the imaging apparatus 100 , such as image photographing, a signal process on a photographed image, a recording process for an image, and a display process.
- the control unit 101 is equipped with a central processing unit (CPU) or the like that, for example, executes a process in line with a variety of processing programs saved in the storage unit 102 and functions as a data processor that executes the programs.
- the storage unit 102 includes a random access memory (RAM), a read only memory (ROM), and the like which function as not only a saving unit for photographed images but also a storage unit for a processing program executed by the control unit 101 and various parameters and additionally function as a work area at the time of data processing.
- the codec 103 executes encoding and decoding processes such as compression and decompression processes for the photographed image.
- the input unit 104 is, for example, a user operation unit and receives an input of control information such as photographing start and end and a variety of mode settings.
- the output unit 105 includes a display unit, a speaker, and the like and is used, for example, for display of the photographed image, a live view image, and the like and audio output.
- the image processor 120 receives inputs of not only two images input from the imaging unit 106 , namely, a white-RAW (W-RAW) image 111 and a RGB-RAW image 112 , but also a sensor noise characteristic ( ⁇ ) 113 as a processing parameter and executes an image process that decreases false colors to generate and output an RGB image 150 .
- the imaging unit 106 includes the first imaging unit 107 having the white (W) pixel array imaging element that outputs an electric signal based on the amount of incident light in the entire wavelength region of visible light, and the second imaging unit 108 having the RGB pixel array imaging element that has the RGB color filter, for example, a color filter constituted by the Bayer array, and outputs a signal corresponding to input light of each RGB color in units of pixels.
- the pixel arrays (filter arrays) of these two imaging units 107 and 108 will be described with reference to FIG. 2 .
- FIG. 2( a ) illustrates a Bayer array used for photographing a general color image.
- the Bayer array includes an RGB filter that selectively transmits light of wavelength of each RGB color.
- Two G pixels are set on the diagonal of 4 pixels made up of 2 ⁇ 2 pixels and one R pixel and one B pixel are separately arranged in the remaining spaces.
- This Bayer array type RGB pixel array is a pixel array used for the second imaging unit 108 illustrated in FIG. 1 .
- One of RGB pixel values is set in units of pixels through the image photographing process. This image before the signal process is the RGB-RAW image 112 illustrated in FIG. 1 .
- any one pixel value out of R, G, and B is set for each pixel.
- a process of setting three RGB signals to all pixels is performed through a demosaic process executed as the subsequent signal process.
- a color image is generated by such a process but, when such a process is performed, as described earlier, a false color in which a color that is not present in the original subject appears in an output image occurs during this process in some cases.
- a process that decreases such false colors is performed by an image process in the image processor 120 illustrated in FIG. 1 .
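- For reference, the following is a minimal Python sketch of the kind of demosaic interpolation described above (a crude normalized-convolution, bilinear-style scheme on an assumed RGGB layout). It is illustrative only and is not the development process of the present disclosure; it simply shows how missing color samples are estimated from neighbours, which is exactly what breaks down around fine, high frequency structure and produces false colors.

```python
import numpy as np

def convolve3x3_reflect(img, k):
    """Tiny 3x3 convolution with reflected borders (avoids external deps)."""
    p = np.pad(img, 1, mode="reflect")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Crude bilinear-style demosaic of an RGGB Bayer RAW frame (illustration only)."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R
    masks[0::2, 1::2, 1] = True   # G on R rows
    masks[1::2, 0::2, 1] = True   # G on B rows
    masks[1::2, 1::2, 2] = True   # B
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    for c in range(3):
        samples = np.where(masks[..., c], raw, 0.0)      # measured samples of channel c
        weights = masks[..., c].astype(np.float64)       # where those samples exist
        num = convolve3x3_reflect(samples, kernel)
        den = convolve3x3_reflect(weights, kernel)
        rgb[..., c] = num / np.maximum(den, 1e-12)       # interpolate missing samples
    return rgb
```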
- FIG. 2( b ) is a diagram illustrating a pixel array (filter array) of the first imaging unit 107 in FIG. 1 . All the pixels are constituted by a white (W) pixel that outputs an electric signal based on the amount of incident light in the entire wavelength region of visible light.
- the first imaging unit 107 in FIG. 1 generates the W-RAW image 111 as a picked-up image by the W pixel array imaging element in which W pixels that receive incident light of all the wavelengths of RGB are arrayed for all the pixels at all pixel positions and inputs the generated W-RAW image 111 to the image processor 120 .
- the image processor 120 receives an input of the W-RAW image 111 from the first imaging unit 107 and an input of the RGB-RAW image 112 from the second imaging unit 108 and additionally receives an input of the sensor noise characteristic ( ⁇ ) 113 which is a parameter applied to a correction process that decreases the false colors, to perform an image correction process for decreasing the false colors.
- the sensor noise characteristic ( ⁇ ) 113 is a noise characteristic of the imaging elements used in the first imaging unit 107 and the second imaging unit 108 of the imaging unit 106 and, for example, is acquired in advance by the control unit 101 to be saved in the storage unit 102 .
- the noise characteristic of the imaging elements used in the first imaging unit 107 and the second imaging unit 108 is indicated here as a common value ( ⁇ )
- a configuration using separate characteristics ⁇ 1 and ⁇ 2 of the imaging elements of the respective imaging units may be adopted.
- FIG. 3 is a block diagram illustrating a configuration of the image processor 120 of the image processing apparatus 100 .
- the image processor 120 has a development processor 121 , a motion vector detection unit 122 , a position alignment unit 123 , a frequency-corresponding parameter calculation unit 124 , a positional deviation-corresponding parameter calculation unit 125 , an image correction unit 126 , and a signal conversion unit 127 .
- the image processor 120 executes a process of reducing the false colors occurring in the RGB image which is a photographed image by the second imaging unit 108 illustrated in FIG. 1 , and outputs the RGB image 150 with reduced false colors.
- Input signals to the image processor 120 are the following respective signals:
- the development processor 121 executes a development process on the RGB-RAW image 112 input from the second imaging unit 108 . Specifically, for example, the following processes are executed:
- the RGB-RAW image 112 is converted into a YUV image 130 through the development process by the development processor 121 .
- the YUV image 130 is an image in which three pixel values, namely, luminance (Y), chrominance (U), and chrominance (V) are set for all the pixels.
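- As a reference for this development step, a common BT.601-style RGB-to-YUV conversion is sketched below; the actual matrix used by the development processor 121 is not specified in this excerpt, so the coefficients are illustrative.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Converts an H x W x 3 RGB image (float) into a YUV image with a
    luminance (Y) plane and two chrominance (U, V) planes, using
    BT.601-style coefficients (illustrative only)."""
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                  [-0.168736, -0.331264,  0.5     ],   # U (Cb)
                  [ 0.5,      -0.418688, -0.081312]])  # V (Cr)
    return rgb @ m.T
```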
- the motion vector detection unit 122 receives an input of the W-RAW image 111 from the first imaging unit 107 and also receives an input of a Y signal (luminance signal) of the YUV image 130 generated by the development processor 121 on the basis of the RGB-RAW image 112 which is a photographed image by the second imaging unit 108 .
- the motion vector detection unit 122 detects a motion vector (MV) representing a positional deviation between the two images.
- the first imaging unit 107 and the second imaging unit 108 which are included in the imaging unit 106 of the image processing apparatus 100 illustrated in FIG. 1 , serve as two imaging units set at positions a predetermined interval apart from each other and the photographed images by the respective units are obtained as images from different viewpoints. That is, the images are obtained as images having parallax.
- the same subject image is not photographed at corresponding pixels of the two images, that is, pixels at the same position, and a subject deviation according to parallax occurs.
- the motion vector detection unit 122 detects a motion vector (MV) representing a positional deviation between the two images.
- corresponding points of two images are found and a vector connecting these corresponding points is calculated as a motion vector (MV).
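- As an illustration of this step, a simple exhaustive block-matching search is sketched below in Python; the block size, search range, and SAD cost are assumptions of this sketch, not the detector actually used by the motion vector detection unit 122.

```python
import numpy as np

def motion_vector(w_img, y_img, bx, by, block=16, search=8):
    """Exhaustive block matching: returns the displacement (dx, dy) that
    minimises the sum of absolute differences (SAD) between a block of the
    W image (reference) and the Y (luminance) plane."""
    ref = w_img[by:by + block, bx:bx + block].astype(np.float64)
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + block > y_img.shape[0] or x0 + block > y_img.shape[1]:
                continue  # candidate block falls outside the Y image
            cand = y_img[y0:y0 + block, x0:x0 + block].astype(np.float64)
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv
```

The position alignment unit 123 can then shift each pixel of the YUV image by the detected vector so that the two images are aligned to the same viewpoint.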
- the motion vector (MV) generated by the motion vector detection unit 122 is input to the position alignment unit 123 .
- the position alignment unit 123 receives an input of the motion vector (MV) generated by the motion vector detection unit 122 and also receives an input of the YUV image 130 generated by the development processor 121 on the basis of the RGB-RAW image 112 .
- the position alignment unit 123 moves each pixel position in the YUV image 130 in line with the size and direction of the motion vector (MV) to generate a YUV image similar to an image photographed from the same viewpoint position as that of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 .
- the YUV image 130 is converted into a YUV image that is regarded as photographed from the same viewpoint as that of the first imaging unit 107 .
- the YUV image after the position alignment process generated by the position alignment unit 123 is input to the positional deviation-corresponding parameter calculation unit 125 .
- a chrominance signal UV is input to the image correction unit 126 .
- the frequency-corresponding parameter calculation unit 124 receives inputs of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , and the sensor noise characteristic ( ⁇ ) 113 and, on the basis of these pieces of input data, calculates a frequency-corresponding blend ratio setting parameter, which is a correction parameter for use in false color correction, to output to the image correction unit 126 .
- the sensor noise characteristic ( ⁇ ) 113 is noise characteristic information on the imaging element used in the first imaging unit 107 of the imaging unit 106 , specifically, data indicating the intensity of noise included in an output signal from the imaging element used in the first imaging unit 107 .
- this sensor noise characteristic ( ⁇ ) 113 is acquired in advance by the control unit 101 to be saved in the storage unit 102 and acquired from the storage unit 102 under the control of the control unit 101 to be input to the frequency-corresponding parameter calculation unit 124 .
- FIG. 4 is a diagram illustrating a specific configuration of the frequency-corresponding parameter calculation unit 124 .
- the frequency-corresponding parameter calculation unit 124 receives inputs of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , and the sensor noise characteristic ( ⁇ ) 113 and, on the basis of these pieces of input data, calculates a frequency-corresponding blend ratio setting parameter, which is a correction parameter for use in false color correction, to output to the image correction unit 126 .
- the frequency-corresponding parameter calculation unit 124 has an adjacent pixel value difference absolute value calculation unit 151 , a dynamic range (DR) calculation unit 152 , a frequency parameter calculation unit 153 , an addition unit 154 , and a blend ratio calculation unit 155 .
- FIG. 5( a ) is a diagram for explaining a setting example of a calculation region for the frequency-corresponding blend ratio setting parameter to be calculated by the frequency-corresponding parameter calculation unit 124 .
- the frequency-corresponding blend ratio setting parameter calculated by the frequency-corresponding parameter calculation unit 124 is a parameter corresponding to each pixel.
- the parameter calculation process is executed using the W-RAW image 111 which is a photographed image by the first imaging unit 107 . Assuming that a parameter calculation target pixel is a pixel at a position (x, y), a calculation process for the parameter is executed using the pixel values of a surrounding pixel region of this parameter calculation target pixel (x, y).
- FIG. 5( a ) is an example in which, as a surrounding pixel region of the parameter calculation target pixel (x, y), a pixel region of 9 ⁇ 9 pixels with the parameter calculation target pixel (x, y) as the center pixel is designated as a pixel region to be applied to the parameter calculation.
- FIG. 5( b ) illustrates a specific procedure of the parameter calculation process by the frequency-corresponding parameter calculation unit 124 .
- the calculation process for the frequency-corresponding blend ratio setting parameter by the frequency-corresponding parameter calculation unit 124 is performed in line with the following procedure (steps S 01 to S 03 ).
- steps S 01 and S 02 are processes executed by the adjacent pixel value difference absolute value calculation unit 151 , the dynamic range (DR) calculation unit 152 , and the frequency parameter calculation unit 153 illustrated in FIG. 4 .
- In step S 01 , a frequency parameter (activity) [act HOR ] in a horizontal direction is calculated.
- This process is a process using the pixel values of pixels in the horizontal direction included in the parameter calculation region centered on the parameter calculation target pixel (x, y).
- the frequency parameter (activity) [act HOR ] in the horizontal direction is calculated using the pixel values of nine pixels in total, made up of the parameter calculation target pixel (x, y), four pixels on the left side of the parameter calculation target pixel (x, y), and four pixels on the right side thereof.
- W x−i,y denotes the pixel value of a pixel position (x−i, y) in the W-RAW image 111 and W x−i+1,y denotes the pixel value of a pixel position (x−i+1, y) in the W-RAW image 111 .
- the parameter may be set to be adjusted by taking into account the dynamic range (DR), the sensor noise characteristic ( ⁇ ) 113 , that is, the intensity of noise of the imaging element of the first imaging unit 107 , and the like.
- (Formula 1) is a formula for calculating a value obtained by, in a case where the region setting illustrated in FIG. 5( a ) is employed, adding difference absolute values between the adjacent pixel values of nine pixels, namely, the pixel values W x−4,y to W x+4,y of nine pixels located in the horizontal direction of the parameter calculation target pixel (x, y), and dividing the resultant value by the dynamic range (DR) of the nine pixels, such that the obtained value is adopted as the frequency parameter (activity) [act HOR ] of the pixel position (x, y) in the horizontal direction.
- DR dynamic ranges
- This process is a process using the pixel values of pixels in the vertical direction included in the parameter calculation region centered on the parameter calculation target pixel (x, y).
- the frequency parameter (activity) [act VER ] in the vertical direction is calculated using the pixel values of nine pixels in total, made up of the parameter calculation target pixel (x, y), four pixels on the upper side of the parameter calculation target pixel (x, y), and four pixels on the lower side thereof.
- W x,y−i denotes the pixel value of a pixel position (x, y−i) in the W-RAW image 111 , and W x,y−i+1 denotes the pixel value of a pixel position (x, y−i+1) in the W-RAW image 111 .
- the parameter may be set to be adjusted by taking into account the dynamic range (DR), the sensor noise characteristic ( ⁇ ) 113 , that is, the intensity of noise of the imaging element of the first imaging unit 107 , and the like.
- (Formula 2) is a formula for calculating a value obtained by, in a case where the region setting illustrated in FIG. 5( a ) is employed, adding difference absolute values between the adjacent pixel values of nine pixels, namely, the pixel values W x,y−4 to W x,y+4 of nine pixels located in the vertical direction of the parameter calculation target pixel (x, y), and dividing the resultant value by the dynamic range (DR) of the nine pixels, such that the obtained value is adopted as the frequency parameter (activity) [act VER ] of the pixel position (x, y) in the vertical direction.
- step S 03 is a process executed by the addition unit 154 and the blend ratio calculation unit 155 illustrated in FIG. 4 .
- In step S 03 , the following process is executed.
- α a denotes a predefined parameter calculation coefficient.
- the frequency-corresponding blend ratio setting parameter [ratio Freq ] takes a larger value, that is, a value close to one, in a high frequency region where the pixel value finely changes, and takes a smaller value, that is, a value close to zero, in a flat image region where a change in pixel value is small, that is, in a low frequency region.
- the frequency-corresponding parameter calculation unit 124 calculates the frequency-corresponding blend ratio setting parameter [ratio Freq ] in line with the above-described process.
- the frequency-corresponding parameter calculation unit 124 calculates the frequency-corresponding blend ratio setting parameter [ratio Freq ] for all the pixels constituting the W-RAW image 111 which is a photographed image by the first imaging unit 107 .
- the calculated parameters are input to the image correction unit 126 .
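- A compact Python sketch of this procedure (steps S 01 to S 03 ) is given below. Using the sensor noise characteristic (σ) as a floor on the dynamic range and the final scale-and-clip mapping to [0, 1] are assumptions of the sketch, since Formula 3 is not reproduced in this excerpt.

```python
import numpy as np

def ratio_freq(w, x, y, sigma=0.0, alpha=1.0, radius=4):
    """Frequency-corresponding blend ratio setting parameter [ratio_Freq]
    for the pixel (x, y) of the W image.  Assumes (x, y) is at least
    `radius` pixels away from the image border."""
    row = w[y, x - radius:x + radius + 1].astype(np.float64)  # 9 horizontal pixels
    col = w[y - radius:y + radius + 1, x].astype(np.float64)  # 9 vertical pixels

    def activity(v):
        dr = max(float(v.max() - v.min()), sigma, 1e-12)   # dynamic range (DR), noise floor assumed
        return float(np.abs(np.diff(v)).sum()) / dr        # sum of adjacent difference absolute values

    act_hor = activity(row)   # step S01
    act_ver = activity(col)   # step S02
    # Step S03 (assumed form): add the two activities, scale by alpha and clip to [0, 1].
    return float(np.clip(alpha * (act_hor + act_ver), 0.0, 1.0))
```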
- the positional deviation-corresponding parameter calculation unit 125 illustrated in FIG. 3 receives inputs of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , the YUV image after position alignment generated by the position alignment unit 123 , that is, a YUV image equivalent to an image photographed from the photographing viewpoint of the first imaging unit 107 , and the sensor noise characteristic ( σ ) 113 and, on the basis of these pieces of input data, calculates a positional deviation-corresponding blend ratio setting parameter, which is a correction parameter for use in false color correction, to output to the image correction unit 126 .
- the sensor noise characteristic ( σ ) 113 is noise characteristic information on the imaging element used in the second imaging unit 108 of the imaging unit 106 , specifically, data indicating the intensity of noise included in an output signal from the imaging element used in the second imaging unit 108 .
- this sensor noise characteristic ( ⁇ ) 113 is acquired in advance by the control unit 101 to be saved in the storage unit 102 and acquired from the storage unit 102 under the control of the control unit 101 to be input to the positional deviation-corresponding parameter calculation unit 125 .
- the positional deviation-corresponding parameter calculation unit 125 receives inputs of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , a position-aligned YUV image 161 generated by the position alignment unit 123 , that is, a position-aligned YUV image 161 equivalent to an image photographed from the photographing viewpoint of the first imaging unit 107 , and the sensor noise characteristic ( ⁇ ) 113 and, on the basis of these pieces of input data, calculates a positional deviation-corresponding blend ratio setting parameter 202 , which is a correction parameter for use in false color correction, to output to the image correction unit 126 .
- a signal conversion unit 171 of the positional deviation-corresponding parameter calculation unit 125 executes a signal conversion process of converting a YUV signal of each pixel of the position-aligned YUV image 161 into a white (W) signal.
- the YUV signal is converted into the white (W) signal in line with a formula illustrated in FIG. 7 , that is, the following formula (Formula 4).
- ⁇ 0 , ⁇ 1 , and ⁇ 2 denote spectroscopic model coefficients, which are predefined conversion parameters.
- a YUV image-based W image 162 generated by the signal conversion unit 171 on the basis of the position-aligned YUV image 161 is output to a second region unit pixel value addition unit 173 and a multiplication unit 175 .
- the second region unit pixel value addition unit 173 executes a pixel value addition process on the YUV image-based W image 162 in units of predefined pixel regions (n ⁇ n pixels, where n is, for example, 3, 5, 7, 9, or the like) and outputs an added pixel value (B) that has been calculated to a region unit pixel value percentage (A/B) calculation unit 174 .
- a first region unit pixel value addition unit 172 executes a pixel value addition process on the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , in units of the same pixel region as the pixel region applied by the second region unit pixel value addition unit 173 (n ⁇ n pixels, for example, n is 9) and outputs an added pixel value (A) that has been calculated to the region unit pixel value percentage (A/B) calculation unit 174 .
- the region unit pixel value percentage (A/B) calculation unit 174 calculates a region unit added pixel value percentage (A/B) between the region unit added pixel value (A) of the W-RAW image 111 and the region unit added pixel value (B) of the YUV image-based W image 162 to output to the multiplication unit 175 .
- the multiplication unit 175 receives inputs of the YUV image-based W image 162 generated by the signal conversion unit 171 on the basis of the position-aligned YUV image 161 and the region unit added pixel value percentage (A/B) calculated by the region unit pixel value percentage (A/B) calculation unit 174 .
- the multiplication unit 175 executes a process of multiplying the pixel values of constituent pixels of the YUV image-based W image 162 by the region unit added pixel value percentage (A/B) to convert the pixel values.
- the multiplication process is executed by combining the region unit added pixel value percentages (A/B) of the regions including the positions of the respective pixels.
- This multiplication process is executed as a process of aligning the pixel value level of the YUV image-based W image 162 to the pixel value level of the W pixel of the W-RAW image 111 which is a photographed image by the first imaging unit 107 .
- the multiplication unit 175 generates a pixel value-adjusted YUV image-based W image 163 through this level adjustment to output to a difference calculation unit 176 .
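- The following Python sketch illustrates the signal conversion and level alignment just described. The linear form assumed for Formula 4, the placeholder coefficient values, and the simple region tiling are assumptions of this sketch.

```python
import numpy as np

def yuv_to_w(y, u, v, a0=1.0, a1=0.0, a2=0.0):
    """Signal conversion unit 171 (Formula 4): converts the YUV signal into
    a W signal using spectroscopic model coefficients.  A linear combination
    is assumed here; a0, a1, a2 are placeholder values."""
    return a0 * y + a1 * u + a2 * v

def level_aligned_w(yuv_w, w_raw, n=9):
    """Units 172-175: per n x n region, the added pixel value of the W-RAW
    image (A) and of the YUV-image-based W image (B) are computed, and every
    pixel of the region is multiplied by the percentage A / B so that the
    YUV-based W image is aligned to the level of the W-RAW image."""
    out = np.empty_like(yuv_w, dtype=np.float64)
    h, w = yuv_w.shape
    for y0 in range(0, h, n):
        for x0 in range(0, w, n):
            a = w_raw[y0:y0 + n, x0:x0 + n].sum()   # first region unit addition (A)
            b = yuv_w[y0:y0 + n, x0:x0 + n].sum()   # second region unit addition (B)
            gain = a / b if b != 0 else 1.0         # region unit percentage (A / B)
            out[y0:y0 + n, x0:x0 + n] = yuv_w[y0:y0 + n, x0:x0 + n] * gain
    return out
```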
- the W pixel value of the pixel value-adjusted YUV image-based W image 163 becomes substantially the same as the pixel value of the W pixel of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , in a pixel region where no false color occurs.
- in a pixel region where a false color occurs, on the other hand, a difference arises between the two W pixel values; the difference calculation unit 176 detects this difference (diff).
- the difference calculation unit 176 receives inputs of the W-RAW image 111 which is a photographed image by the first imaging unit 107 and the pixel value-adjusted YUV image-based W image 163 which is an output of the multiplication unit 175 and calculates a difference between the pixel values of the corresponding pixels of these two images located at the positions having the same coordinates.
- a difference image 164 including the calculated difference value corresponding to each pixel is input to a filter processor 177 .
- the applied filter is, for example, a median filter that acquires a median value of pixel values of a predetermined pixel region to designate as a new pixel value.
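- A minimal median filter of this kind might look as follows in Python; the window size is an assumption of the sketch.

```python
import numpy as np

def median_filter(img, size=3):
    """Replaces each pixel of the difference image with the median of the
    surrounding size x size region (edges handled by reflection)."""
    pad = size // 2
    p = np.pad(img.astype(np.float64), pad, mode="reflect")
    # Stack every offset of the window along a new axis, then take the median.
    win = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(size) for dx in range(size)], axis=0)
    return np.median(win, axis=0)
```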
- a filtering result image of the difference image 164 including the difference pixel values is input to a positional deviation-corresponding blend ratio calculation unit 178 .
- the blend ratio calculation unit 178 calculates the positional deviation-corresponding blend ratio setting parameter (ratio ERR ) 202 on the basis of each pixel value (difference pixel value after filtering) of the filtering result image of the difference image 164 including the difference pixel values to output to the image correction unit 126 .
- FIG. 8 illustrates an example of a graph indicating the correspondence relationship between “each pixel value (difference pixel value after filtering) of the filtering result image of the difference image 164 including the difference pixel values” input to the blend ratio calculation unit 178 and “the positional deviation-corresponding blend ratio setting parameter (ratio ERR ) 202 ” output by the blend ratio calculation unit 178 .
- the abscissa axis indicates “each pixel value (difference pixel value after filtering) of the filtering result image of the difference image 164 including the difference pixel values” as an input value
- the ordinate axis indicates “the positional deviation-corresponding blend ratio setting parameter (ratio ERR ) 202 ” as an output value.
- the graph illustrated in FIG. 8 is an example indicating the correspondence relationship between the input and output values and the output value is defined as follows using threshold values 1 ⁇ and 3 ⁇ set in advance:
- the blend ratio calculation unit 178 calculates the output value, that is, “the positional deviation-corresponding blend ratio setting parameter (ratio ERR ) 202 ” on the basis of the value of “each pixel value (difference pixel value after filtering) of the filtering result image of the difference image 164 including the difference pixel values”, which is an input value, in line with, for example, the input/output correspondence relationship defining data illustrated in FIG. 8 and outputs the calculated value to the image correction unit 126 .
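- A hedged Python sketch of this mapping is shown below; the piecewise-linear shape (0 up to 1σ, rising to 1 at 3σ) is an assumed reading of the graph in FIG. 8, since the exact definition is not reproduced in this excerpt.

```python
import numpy as np

def ratio_err(diff_filtered, sigma):
    """Maps each filtered difference value to the positional
    deviation-corresponding blend ratio setting parameter [ratio_Err]
    using the 1*sigma and 3*sigma thresholds of FIG. 8 (assumed shape:
    0 below 1*sigma, linear ramp, 1 above 3*sigma)."""
    d = np.abs(np.asarray(diff_filtered, dtype=np.float64))
    ratio = (d - 1.0 * sigma) / (2.0 * sigma)   # linear between 1*sigma and 3*sigma
    return np.clip(ratio, 0.0, 1.0)
```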
- the W-RAW image 111 and the pixel value-adjusted YUV image-based W image 163 are images after position alignment, and, properly, the positional deviation should have been eliminated.
- in practice, however, a difference occurs in each pixel value (W pixel value) depending on the pixel position. This difference is thought to be a false color and is described as a "positional deviation-corresponding parameter" under the interpretation that a pixel with such a difference is a pixel whose value has been output to a pixel position different from that of the original pixel value.
- the image correction unit 126 receives inputs of the following respective pieces of data:
- the image correction unit 126 receives inputs of these pieces of data and generates a corrected UV signal (Uout, Vout) 203 , which is an output signal value of the chrominance signal UV constituting the pixel value of the corrected image (YUV image) in which false colors have been reduced, to output to the signal conversion unit 127 in the image processor 120 illustrated in FIG. 3 .
- the image correction unit 126 generates the corrected UV signal (Uout, Vout), for example, in line with the output signal calculation formulas illustrated in FIG. 9( a ) .
- LPF illustrated in above (Formula 5) stands for a low-pass filter.
- LPF(U) indicates a low-pass filter application process to a pixel value signal U of the position-aligned YUV image 161 generated by the position alignment unit 123 .
- LPF(V) indicates a low-pass filter application process to a pixel value signal V of the position-aligned YUV image 161 generated by the position alignment unit 123 .
- LPF(W) indicates a low-pass filter application process to a pixel value signal W of the W-RAW image 111 which is a photographed image by the first imaging unit 107 .
- (Formula 5) indicates, for example, formulas for executing the following pixel value correction process.
- LPF(U) illustrated in the calculation formula for a corrected U signal (Uout) in (Formula 5) applies a low-pass filter to the pixel signal U of the YUV image 161 and smooths the false color pixel value with the pixel values of the surrounding pixels to reduce false colors.
- the low-pass filter (LPF) included in the formulas illustrated in (Formula 5) is switched in accordance with the image characteristics to calculate the corrected UV signal (Uout, Vout) 203 .
- the image correction unit 126 When calculating the corrected UV signal (Uout, Vout) 203 , the image correction unit 126 employs a different blend ratio, that is, alters the blend ratio between the position-aligned YUV image 161 and the W-RAW image 111 in accordance with characteristics in units of image regions, namely,
- In FIG. 10 , the following respective pieces of data are illustrated in association with each other.
- RGB sensor output image illustrated in (b) represents the position-aligned YUV image 161 and the W sensor output image illustrated therein represents the W-RAW image 111 .
- FIG. 10 exemplifies the following representative image region characteristics of three types ((1) to (3)).
- the blend ratio of the RGB sensor output image (position-aligned YUV image 161 ) is set to be high and the blend ratio of the W sensor output image (W-RAW image 111 ) is set to be small.
- the corrected UV signal (Uout, Vout) 203 is calculated by the blending process in line with the blend ratio with such a setting.
- the blend ratio of the RGB sensor output image (position-aligned YUV image 161 ) and the blend ratio of the W sensor output image (W-RAW image 111 ) are made substantially equal to each other.
- the corrected UV signal (Uout, Vout) 203 is calculated by the blending process in line with the blend ratio with such a setting.
- the blend ratio of the RGB sensor output image (position-aligned YUV image 161 ) is set to be small and the blend ratio of the W sensor output image (W-RAW image 111 ) is set to be large.
- the corrected UV signal (Uout, Vout) 203 is calculated by the blending process in line with the blend ratio with such a setting.
- FIG. 11 is a diagram for explaining a process example of switching the low-pass filter (LPF) to be applied to the output signal calculation formulas indicated as above-mentioned (Formula 5) in accordance with the image characteristics, that is, the values of parameters, namely,
- FIG. 11 illustrates an application example of three types of different low-pass filters (LPFs) to be used in accordance with the value of each parameter by setting respective axes in such a manner that the positional deviation-corresponding blend ratio setting parameter [ratio Err ] generated by the positional deviation-corresponding parameter calculation unit 125 is set to the abscissa axis and the frequency-corresponding blend ratio setting parameter [ratio Freq ] generated by the frequency-corresponding parameter calculation unit 124 is set to the ordinate axis.
- the three low-pass filters (LPF 0 to LPF 2 ) are distinguished from each other by variation in cutoff frequency, where the cutoff frequency of LPF 0 is the highest and the cutoff frequency of LPF 2 is the lowest.
- low-pass filters with the following settings can be applied as the respective low-pass filters:
- LPF 0 is a moving average filter of 3 ⁇ 3 (pixels);
- LPF 1 is a moving average filter of 13 ⁇ 13 (pixels).
- LPF 2 is a moving average filter of 25 ⁇ 25 (pixels).
- the moving average filters having such settings can be applied as the above three low-pass filters (LPF 0 to LPF 2 ).
- the coefficient setting of the moving average filter of 3 ⁇ 3 is set as illustrated in following (Formula 6).
- in a case where LPF 0 is a moving average filter of 3×3 (pixels), LPF 1 is a moving average filter of 13×13 (pixels), and LPF 2 is a moving average filter of 25×25 (pixels), a smoothing process using a smaller pixel region (3×3) as a processing unit is performed when LPF 0 is applied, while a smoothing process using a larger pixel region (13×13) as a processing unit is performed when LPF 1 is applied and a smoothing process using an even larger pixel region (25×25) as a processing unit is performed when LPF 2 is applied.
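- A Python sketch of such a moving average low-pass filter is given below; for the 3×3 case every coefficient equals 1/9, which is presumably what (Formula 6) specifies, although the formula itself is not reproduced here.

```python
import numpy as np

def moving_average_lpf(img, size):
    """Uniform moving-average low-pass filter; size = 3 for LPF0,
    13 for LPF1 and 25 for LPF2.  Every kernel coefficient is 1/size**2."""
    pad = size // 2
    p = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)
```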
- FIG. 11 is an example of executing different processes in accordance with the image characteristics in a similar manner as described earlier with reference to FIG. 10 and indicates a process example in which the low-pass filter (LPF) to be applied to the output signal calculation formulas indicated as above-mentioned (Formula 5) is switched in accordance with the image characteristics.
- This region is a region satisfying the following conditions:
- Th indicates a threshold value.
- a low-pass filter (LPF 0 ) having the highest cutoff frequency is applied as the LPF in the output signal calculation formulas of above-mentioned (Formula 5) to calculate the corrected UV signal (Uout, Vout) 203 .
- the corrected UV signal (Uout, Vout) 203 is calculated with the setting in which the blend ratio of the RGB sensor output image (position-aligned YUV image 161 ) is set to be high and the blend ratio of the W sensor output image (W-RAW image 111 ) is set to be small.
- This region is a region satisfying the following conditions:
- Th indicates a threshold value.
- a low-pass filter (LPF 2 ) having the lowest cutoff frequency is applied as the LPF in the output signal calculation formulas of above-mentioned (Formula 5) to calculate the corrected UV signal (Uout, Vout) 203 .
- the corrected UV signal (Uout, Vout) 203 is calculated with the setting in which the blend ratio of the RGB sensor output image (position-aligned YUV image 161 ) is set to be low and the blend ratio of the W sensor output image (W-RAW image 111 ) is set to be high.
- a low-pass filter (LPF 1 ) having a medium cutoff frequency is applied as the LPF in the output signal calculation formulas of above-mentioned (Formula 5) to calculate the corrected UV signal (Uout, Vout) 203 .
- the corrected UV signal (Uout, Vout) 203 is calculated while the blend ratio of the RGB sensor output image (position-aligned YUV image 161 ) and the blend ratio of the W sensor output image (W-RAW image 111 ) are set to be substantially equal to each other.
- FIG. 12 is a diagram summarizing the process in FIG. 11 and illustrates data corresponding to the following respective pieces of data.
- the entry (1) in FIG. 12 corresponds to the region (1) illustrated in FIG. 11 and has the following image characteristics and correction process approach.
- LPF 0 (U) and LPF 0 (V) in (d) indicate a process of applying LPF 0 as the LPF in above-mentioned (Formula 5) to calculate the corrected UV signal (Uout, Vout).
- the entry (3) in FIG. 12 corresponds to the region (3) illustrated in FIG. 11 and has the following image characteristics and correction process approach.
- the entry (2) in FIG. 12 corresponds to the region (2) illustrated in FIG. 11 and has the following image characteristics and correction process approach.
- the applied filter is altered in accordance with the image characteristics in this manner, such that the blend ratio according to the image characteristics, that is, the blend ratio between the RGB sensor output image (position-aligned YUV image 161 ) and the W sensor output image (W-RAW image 111 ), is altered to calculate the final corrected UV signal (Uout, Vout) 203 .
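- To make the switching concrete, the following Python sketch selects LPF 0 / LPF 1 / LPF 2 per pixel from the two parameters and forms an output chrominance. The thresholds and the combination LPF(U)·W/LPF(W) are assumptions of this sketch and are not a quotation of (Formula 5).

```python
import numpy as np

def box_lpf(img, size):
    # Uniform moving-average kernel: size = 3 (LPF0), 13 (LPF1), 25 (LPF2).
    pad = size // 2
    p = np.pad(img.astype(np.float64), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def corrected_uv(u, v, w, ratio_freq_map, ratio_err_map, th_f=0.5, th_e=0.5):
    # Region (1): both parameters small -> LPF0 (3x3,  highest cutoff).
    # Region (3): both parameters large -> LPF2 (25x25, lowest cutoff).
    # Region (2): everything else       -> LPF1 (13x13, medium cutoff).
    low  = (ratio_freq_map <  th_f) & (ratio_err_map <  th_e)
    high = (ratio_freq_map >= th_f) & (ratio_err_map >= th_e)
    mid  = ~(low | high)
    uout = np.zeros_like(u, dtype=np.float64)
    vout = np.zeros_like(v, dtype=np.float64)
    for mask, size in ((low, 3), (mid, 13), (high, 25)):
        lu, lv, lw = box_lpf(u, size), box_lpf(v, size), box_lpf(w, size)
        gain = w / np.maximum(lw, 1e-12)   # assumed reading of Formula 5
        uout[mask] = (lu * gain)[mask]
        vout[mask] = (lv * gain)[mask]
    return uout, vout
```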
- FIG. 13 illustrates an example of image characteristics and application regions of four different low-pass filters (LPF 0 to LPF 3 ) to be applied in accordance with respective image characteristics.
- the four low-pass filters (LPF 0 to LPF 3 ) are distinguished from each other by variation in cutoff frequency, where the cutoff frequency of LPF 0 is the highest and the cutoff frequency of LPF 3 is the lowest.
- an LPF having a lower cutoff frequency, for example, LPF 3 , is applied as the frequency-corresponding blend ratio setting parameter [ratio Freq ] or the positional deviation-corresponding blend ratio setting parameter [ratio Err ] comes closer to one.
- an LPF having a higher cutoff frequency, for example, LPF 0 , is applied as the frequency-corresponding blend ratio setting parameter [ratio Freq ] or the positional deviation-corresponding blend ratio setting parameter [ratio Err ] comes closer to zero.
- FIG. 14 illustrates an application example of further different region-corresponding filters.
- FIG. 14 illustrates an example of image characteristics and application regions of five different low-pass filters (LPF 0 to LPF 4 ) to be applied in accordance with respective image characteristics.
- the five low-pass filters (LPF 0 to LPF 4 ) are distinguished from each other by variation in cutoff frequency, where the cutoff frequency of LPF 0 is the highest and the cutoff frequency of LPF 4 is the lowest.
- an LPF having a lower cutoff frequency, for example, LPF 4 , is applied as the frequency-corresponding blend ratio setting parameter [ratio Freq ] or the positional deviation-corresponding blend ratio setting parameter [ratio Err ] comes closer to one.
- an LPF having a higher cutoff frequency, for example, LPF 0 , is applied as the frequency-corresponding blend ratio setting parameter [ratio Freq ] or the positional deviation-corresponding blend ratio setting parameter [ratio Err ] comes closer to zero.
- the embodiment described below is an example of performing the image process using a plurality of different low-pass filters combined in accordance with the image characteristics.
- the process example described below is one of specific process examples that implement the above-described blending process for an image in accordance with the image characteristics in units of image regions. That is, this is a specific example of the generation process for the corrected UV signal (Uout, Vout) 203 to be executed by the image correction unit 126 and is a process example using a plurality of different low-pass filters combined in accordance with the image characteristics.
- the image correction unit 126 receives inputs of the following respective pieces of data:
- the image correction unit 126 receives inputs of these pieces of data and generates the corrected UV signal (Uout, Vout) 203 , which is an output signal value of the chrominance signal UV constituting the pixel value of the corrected image (YUV image) in which false colors have been reduced, to output to the signal conversion unit 127 in the image processor 120 illustrated in FIG. 3 .
- the image correction unit 126 generates the corrected UV signal (Uout, Vout), for example, in line with the output signal calculation formulas illustrated in FIG. 15( a ) .
- the output signal calculation formulas illustrated in FIG. 15( a ) are formulas created on the basis of (Formula 5) described above, that is, the output signal calculation formulas illustrated in FIG. 9( a ) .
- the output signal calculation formulas illustrated in FIG. 15( a ) work as a formula for altering the blend ratio of the image in accordance with the image region characteristics by applying the frequency-corresponding blend ratio setting parameter [ratio Freq ] and the positional deviation-corresponding blend ratio setting parameter [ratio Err ] to generate the corrected UV signal (Uout, Vout).
- the image correction unit 126 generates the corrected UV signal (Uout, Vout) in line with the output signal calculation formulas illustrated in FIG. 15( a ) , that is, (Formula 7) indicated below.
- U out = (1 − ratio Err )((1 − ratio Freq ) × U 0 + ratio Freq × U 1 ) + ratio Err × U 2
- V out = (1 − ratio Err )((1 − ratio Freq ) × V 0 + ratio Freq × V 1 ) + ratio Err × V 2 (Formula 7)
- U 0 , U 1 , and U 2 , and V 0 , V 1 , and V 2 denote UV values obtained as pixel value conversion results to which a plurality of different low-pass filters (LPFs) have been applied.
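- As an illustrative sketch (not the patented implementation), (Formula 7) can be evaluated per pixel with array operations; NumPy and the variable names below are assumptions, and the LPF outputs U 0 to U 2 and V 0 to V 2 are assumed to have been computed beforehand.

```python
import numpy as np

def blend_uv(u0, u1, u2, v0, v1, v2, ratio_freq, ratio_err):
    """Per-pixel evaluation of (Formula 7). All arguments are arrays of the
    same shape; the two ratio parameters take values in [0, 1]."""
    u_out = (1.0 - ratio_err) * ((1.0 - ratio_freq) * u0 + ratio_freq * u1) \
        + ratio_err * u2
    v_out = (1.0 - ratio_err) * ((1.0 - ratio_freq) * v0 + ratio_freq * v1) \
        + ratio_err * v2
    return u_out, v_out
```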
- FIG. 16( a ) illustrates formulas similar to the formulas illustrated in FIG. 15( a ) , that is, calculation formulas for the corrected UV signal (Uout, Vout) illustrated in above (Formula 7).
- Un and Vn are calculated by the following (Formula 8).
- U 0 and V 0 denote UV values obtained by applying the low-pass filter LPF 0 to the UV pixel value of the position-aligned YUV image 161 generated by the position alignment unit 123 as input data.
- U 1 and V 1 denote UV values obtained by applying the low-pass filter LPF 1 to the UV pixel value of the position-aligned YUV image 161 generated by the position alignment unit 123 as input data.
- U 2 and V 2 denote UV values obtained by applying the low-pass filter LPF 2 to the UV pixel value of the position-aligned YUV image 161 generated by the position alignment unit 123 as input data.
- the correspondence relationships between the UV pixel value of the position-aligned YUV image 161 generated by the position alignment unit 123 as input data and the UV values (U 0 , U 1 , and U 2 , and V 0 , V 1 , and V 2 ) after the filtering process obtained as the application results of the low-pass filters (LPF 0 to LPF 2 ) are as follows.
- U 0 = LPF 0 (U), V 0 = LPF 0 (V)
- U 1 = LPF 1 (U), V 1 = LPF 1 (V)
- U 2 = LPF 2 (U), V 2 = LPF 2 (V)
- the three low-pass filters (LPF 0 to LPF 2 ) are distinguished from each other by variation in cutoff frequency, where the cutoff frequency of LPF 0 is the highest and the cutoff frequency of LPF 2 is the lowest.
- low-pass filters with the following settings can be applied as the respective low-pass filters:
- LPF 0 is a moving average filter of 3 × 3 (pixels);
- LPF 1 is a moving average filter of 13 × 13 (pixels); and
- LPF 2 is a moving average filter of 25 × 25 (pixels).
- the moving average filters having such settings can be applied as the above three low-pass filters (LPF 0 to LPF 2 ).
- the coefficient setting of the moving average filter of 3 ⁇ 3 is set as illustrated in (Formula 6) described above.
- LPF 0 as a moving average filter of 3 × 3 (pixels)
- LPF 1 as a moving average filter of 13 × 13 (pixels)
- LPF 2 as a moving average filter of 25 × 25 (pixels)
- a smoothing process using a smaller pixel region (3 × 3) as a processing unit is performed when LPF 0 is applied, while a smoothing process using a medium pixel region (13 × 13) as a processing unit is performed when LPF 1 is applied and a smoothing process using a larger pixel region (25 × 25) as a processing unit is performed when LPF 2 is applied.
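- A minimal sketch of the three moving average filters is shown below; the use of NumPy/SciPy is an assumption made only for illustration, since the specification does not prescribe any particular implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def apply_lpfs(plane: np.ndarray):
    """Apply the three moving average filters to one U (or V) plane.
    The outputs correspond to LPF0 (3x3), LPF1 (13x13), and LPF2 (25x25)."""
    out0 = uniform_filter(plane, size=3)   # LPF0: highest cutoff, mild smoothing
    out1 = uniform_filter(plane, size=13)  # LPF1: medium cutoff
    out2 = uniform_filter(plane, size=25)  # LPF2: lowest cutoff, strongest smoothing
    return out0, out1, out2
```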
- the frequency-corresponding blend ratio setting parameter [ratio Freq ] takes:
- a larger value, that is, a value close to one, in a high frequency region where the pixel value changes finely; and
- a smaller value, that is, a value close to zero, in a flat image region where a change in pixel value is small, that is, in a low frequency region.
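- The detailed calculation of the frequency-corresponding blend ratio setting parameter [ratio Freq ] (based on adjacent pixel value difference absolute values and a dynamic range) is described elsewhere in the specification and is not reproduced here; the following is only a simplified sketch of a parameter with the behavior just described, and the normalization constant is an assumption.

```python
import numpy as np

def frequency_parameter(w: np.ndarray, scale: float = 64.0) -> np.ndarray:
    """Simplified sketch: sum of horizontal and vertical absolute differences
    of the W image (cf. act = actHOR + actVER), normalized into [0, 1].
    'scale' is an illustrative constant, not a value from the specification."""
    act_hor = np.abs(np.diff(w, axis=1, prepend=w[:, :1]))
    act_ver = np.abs(np.diff(w, axis=0, prepend=w[:1, :]))
    act = act_hor + act_ver
    return np.clip(act / scale, 0.0, 1.0)
```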
- FIG. 17 illustrates respective axes in such a manner that the abscissa axis indicates the positional deviation (representing the false color amount) and the ordinate axis indicates the frequency and also illustrates region examples 1 to 7 according to a plurality of representative image characteristics.
- a pixel value obtained by applying LPF 0 having the highest cutoff frequency among the three low-pass filters (LPF 0 , LPF 1 , and LPF 2 ) to the input UV value (U, V) is set as the corrected UV signal (U out , V out ).
- the average value of a pixel value obtained by applying LPF 0 having the highest cutoff frequency among the three low-pass filters (LPF 0 , LPF 1 , and LPF 2 ) to the input UV value (U, V) and a pixel value obtained by applying LPF 2 having the lowest cutoff frequency thereamong to the input UV value (U, V) is set as the corrected UV signal (U out , V out ).
- a pixel value obtained by applying LPF 2 having the lowest cutoff frequency among the three low-pass filters (LPF 0 , LPF 1 , and LPF 2 ) to the input UV value (U, V) is set as the corrected UV signal (U out , V out ).
- the average value of a pixel value obtained by applying LPF 0 having the highest cutoff frequency among the three low-pass filters (LPF 0 , LPF 1 , and LPF 2 ) to the input UV value (U, V) and a pixel value obtained by applying LPF 1 having the medium cutoff frequency thereamong to the input UV value (U, V) is set as the corrected UV signal (U out , V out ).
- a pixel value obtained by applying LPF 1 having the medium cutoff frequency among the three low-pass filters (LPF 0 , LPF 1 , and LPF 2 ) to the input UV value (U, V) is set as the corrected UV signal (U out , V out ).
- the average value of a pixel value obtained by applying LPF 1 having the medium cutoff frequency among the three low-pass filters (LPF 0 , LPF 1 , and LPF 2 ) to the input UV value (U, V) and a pixel value obtained by applying LPF 2 having the lowest cutoff frequency thereamong to the input UV value (U, V) is set as the corrected UV signal (U out , V out ).
- the image correction unit 126 generates the corrected UV signal (U out , V out ) 203 in line with the output signal calculation formulas (Formula 7) illustrated in FIG. 15( a ) .
- the low-pass filter LPF 2 with a low cutoff frequency is used to execute the process of smoothing on the basis of the pixel values of surrounding pixels in a wider range (for example, 25 × 25 pixels) for a region where the positional deviation is large.
- the low-pass filter LPF 1 with a medium cutoff frequency is used to execute the process of smoothing on the basis of the pixel values of surrounding pixels in a medium range (for example, 13 × 13 pixels) for a region where there are many high frequency components.
- the low-pass filter LPF 0 with a high cutoff frequency is used to execute the process of smoothing on the basis of the pixel values of surrounding pixels in a small range (for example, 3 × 3 pixels) for a flat region with few high frequency components.
- the image correction unit 126 generates the corrected UV signal (U out , V out ) 203 in accordance with the characteristics of the image region as described above and outputs the generated signal to the signal conversion unit 127 in the image processor 120 illustrated in FIG. 3 .
- the signal conversion unit 127 receives inputs of the corrected UV signal (U out , V out ) 203 generated by the image correction unit 126 and the W-RAW image 111 which is a photographed image by the first imaging unit 107 .
- the signal conversion unit 127 executes signal conversion on the basis of these input signals and generates the RGB image 150 to output.
- the signal conversion unit 127 employs the W signal of the W-RAW image 111 as the Y (luminance) signal and executes a process of converting the YUV signal constituted by the combination of this Y (luminance) signal and the UV signal of the corrected UV signal (U out , V out ) 203 into an RGB signal.
- This signal conversion process is performed in line with an existing conversion formula.
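- The specification only states that an existing conversion formula is used; as one possible sketch, the ITU-R BT.601 relationship between YUV and RGB can be applied (the choice of BT.601 and the function below are assumptions).

```python
import numpy as np

def yuv_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Convert a YUV signal (Y taken from the W image, U/V from the corrected
    UV signal, with U and V centered at zero) to RGB using BT.601 weights."""
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.stack([r, g, b], axis=-1)
```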
- the RGB image 150 generated by the signal conversion unit 127 is displayed, for example, on the display unit. Alternatively, the RGB image 150 is saved in the storage unit. Alternatively, the RGB image 150 is output to another external information processing apparatus.
- an encoding process such as a compression process is executed as a preprocess for a saving process to the storage unit and an external output process.
- the YUV signal may be configured to be output to a display apparatus, or saved in the storage unit, or output to the outside as it is.
- the flowchart illustrated in FIG. 16 is executed under the control of the control unit (data processor) equipped with a CPU or the like that, for example, executes a process in line with a processing program saved in the storage unit.
- Steps S 101 a and S 101 b are image photographing processes.
- Two images are photographed by the first imaging unit 107 and the second imaging unit 108 of the imaging unit 106 illustrated in FIG. 1 .
- Step S 101 a is a photographing process for an RGB image to be executed by the second imaging unit 108 provided with an imaging element having the RGB pixel array such as the Bayer array described earlier with reference to FIG. 2( a ) .
- Step S 101 b is a photographing process for a white (W) image to be executed by the first imaging unit 107 provided with an imaging element having the white (W) pixel array described earlier with reference to FIG. 2( b ) .
- in step S 102 , a development process for the RGB image photographed by the second imaging unit 108 in step S 101 a is executed.
- This process is executed by the development processor 121 of the image processor 120 illustrated in FIG. 3 .
- the development processor 121 executes the development process on the RGB-RAW image 112 input from the second imaging unit 108 . Specifically, for example, the following processes are executed:
- in step S 103 , a detection process for the motion vector (MV) is executed.
- This process is executed by the motion vector detection unit 122 of the image processor 120 illustrated in FIG. 3 .
- the motion vector detection unit 122 receives an input of the W image 111 from the first imaging unit 107 and also receives an input of a Y signal (luminance signal) of the YUV image 130 generated by the development processor 121 on the basis of the RGB-RAW image 112 which is a photographed image by the second imaging unit 108 .
- the motion vector detection unit 122 detects a motion vector (MV) representing a positional deviation between the two images.
- the first imaging unit 107 and the second imaging unit 108 which are included in the imaging unit 106 of the image processing apparatus 100 illustrated in FIG. 1 , serve as two imaging units set at positions a predetermined interval apart from each other and the photographed images by the respective units are obtained as images from different viewpoints. That is, the images are obtained as images having parallax.
- the same subject image is not photographed at corresponding pixels of the two images, that is, pixels at the same position, and a subject deviation according to parallax occurs.
- the motion vector detection unit 122 detects a motion vector (MV) representing a positional deviation between the two images.
- corresponding points of two images are found and a vector connecting these corresponding points is calculated as a motion vector (MV).
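- The specification does not fix a particular corresponding-point search; one common approach is block matching with a sum-of-absolute-differences (SAD) criterion, sketched below with assumed block and search sizes.

```python
import numpy as np

def block_match(ref: np.ndarray, tgt: np.ndarray, y: int, x: int,
                block: int = 8, search: int = 16):
    """Return the motion vector (dy, dx) of the block at (y, x) in 'ref'
    that best matches 'tgt' in the SAD sense. The block at (y, x) is
    assumed to lie fully inside 'ref'."""
    h, w = tgt.shape
    ref_blk = ref[y:y + block, x:x + block].astype(np.float64)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = y + dy, x + dx
            if ty < 0 or tx < 0 or ty + block > h or tx + block > w:
                continue
            sad = np.abs(ref_blk - tgt[ty:ty + block, tx:tx + block]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv  # vector connecting the corresponding points
```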
- the motion vector (MV) generated by the motion vector detection unit 122 is input to the position alignment unit 123 .
- in step S 104 , the position alignment process is executed.
- This process is a process executed by the position alignment unit 123 of the image processor 120 illustrated in FIG. 3 .
- the position alignment unit 123 receives an input of the motion vector (MV) generated by the motion vector detection unit 122 and also receives an input of the YUV image 130 generated by the development processor 121 on the basis of the RGB-RAW image 112 .
- the position alignment unit 123 moves each pixel position in the YUV image 130 in line with the size and direction of the motion vector (MV) to generate a YUV image aligned with the W image, that is, a YUV image similar to an image photographed from the same viewpoint position as that of the W-RAW image 111 which is a photographed image by the first imaging unit 107 .
- the YUV image 130 is converted into a YUV image that is regarded as photographed from the same viewpoint as that of the first imaging unit 107 .
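- As a simplified sketch of the position alignment: the actual unit moves each pixel position according to the detected motion vector, but to keep the example short a single global shift of the whole YUV image is used here, which is an assumption rather than the patented procedure.

```python
import numpy as np

def align_yuv(yuv: np.ndarray, mv: tuple) -> np.ndarray:
    """Shift a YUV image (H x W x 3) by one global motion vector (dy, dx) so
    that it approximates an image photographed from the W image viewpoint."""
    dy, dx = mv
    return np.roll(np.roll(yuv, dy, axis=0), dx, axis=1)
```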
- in step S 105 , the frequency-corresponding parameter calculation process is executed.
- This process is a process executed by the frequency-corresponding parameter calculation unit 124 of the image processor 120 illustrated in FIG. 3 .
- the frequency-corresponding parameter calculation unit 124 receives inputs of the W-RAW image 111 which is a photographed image by the first imaging unit 107 and the sensor noise characteristic ( ⁇ ) 113 and, on the basis of these pieces of input data, calculates the frequency-corresponding blend ratio setting parameter, which is a correction parameter for use in false color correction, to output to the image correction unit 126 .
- the frequency-corresponding parameter calculation unit 124 calculates the frequency-corresponding blend ratio setting parameter [ratio Freq ] for all the pixels constituting the W-RAW image 111 which is a photographed image by the first imaging unit 107 and inputs the calculated parameter to the image correction unit 126 .
- in step S 106 , the positional deviation-corresponding parameter calculation process is executed.
- This process is a process executed by the positional deviation-corresponding parameter calculation unit 125 of the image processor 120 illustrated in FIG. 3 .
- the positional deviation-corresponding parameter calculation unit 125 receives inputs of the W-RAW image 111 , which is a photographed image by the first imaging unit 107 , a YUV image after position alignment generated by the position alignment unit 123 , that is, a YUV image equivalent to an image photographed from the photographing viewpoint of the first imaging unit 107 , and the sensor noise characteristic ( ⁇ ) 113 and, on the basis of these pieces of input data, calculates the positional deviation-corresponding blend ratio setting parameter, which is a correction parameter for use in false color correction, to output to the image correction unit 126 .
- in principle, the positional deviation between the W-RAW image 111 which is a photographed image by the first imaging unit 107 and the YUV image after position alignment generated by the position alignment unit 123 should be eliminated.
- in practice, however, a difference occurs in each pixel value (W pixel value) depending on the pixel position. This difference is thought to be a false color and is described as a "positional deviation-corresponding parameter" under the interpretation that the pixel with such a difference is a pixel that should be output to a pixel position different from the pixel position of the original pixel value.
- the positional deviation-corresponding parameter calculation unit 125 calculates the positional deviation-corresponding blend ratio setting parameter [ratio Err ] in line with the configuration in FIG. 6 described above.
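- The configuration in FIG. 6 referred to above is not reproduced in this excerpt; the sketch below only captures the underlying idea of comparing the W image with a W estimate derived from the position-aligned YUV image and normalizing the remaining difference by the sensor noise characteristic (σ). The scaling factor and the use of the Y plane as the W estimate are assumptions.

```python
import numpy as np

def positional_deviation_parameter(w: np.ndarray, y_aligned: np.ndarray,
                                   sigma: float, k: float = 4.0) -> np.ndarray:
    """Illustrative sketch: absolute difference between the W image and the
    luminance of the position-aligned YUV image, mapped into [0, 1] after
    discounting sensor noise. 'k' is an assumed scaling factor."""
    diff = np.abs(w - y_aligned)
    return np.clip((diff - sigma) / (k * sigma), 0.0, 1.0)
```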
- the positional deviation-corresponding blend ratio setting parameter [ratio Err ] calculated by the positional deviation-corresponding parameter calculation unit 125 is input to the image correction unit 126 in the image processor 120 illustrated in FIG. 3 .
- steps S 107 and S 108 are processes executed by the image correction unit 126 of the image processor 120 illustrated in FIG. 3 .
- the image correction unit 126 receives inputs of the following respective pieces of data:
- the image correction unit 126 receives inputs of these pieces of data and generates the corrected UV signal (Uout, Vout) 203 , which is an output signal value of the chrominance signal UV constituting the pixel value of the corrected image (YUV image) in which false colors have been reduced, to output to the signal conversion unit 127 in the image processor 120 illustrated in FIG. 3 .
- in step S 107 , the image correction unit 126 uses the input data and applies the plurality of different low-pass filters (LPF 0 , LPF 1 , and LPF 2 ), specifically, the low-pass filters having different cutoff frequencies from each other, to generate different UV images, that is, the following respective UV images described above with reference to FIGS. 15 to 17 .
- U 0 = LPF 0 (U), V 0 = LPF 0 (V)
- U 1 = LPF 1 (U), V 1 = LPF 1 (V)
- U 2 = LPF 2 (U), V 2 = LPF 2 (V)
- the three low-pass filters (LPF 0 to LPF 2 ) are distinguished from each other by variation in cutoff frequency, where the cutoff frequency of LPF 0 is the highest and the cutoff frequency of LPF 2 is the lowest.
- low-pass filters with the following settings can be applied as the respective low-pass filters:
- LPF 0 is a moving average filter of 3 × 3 (pixels);
- LPF 1 is a moving average filter of 13 × 13 (pixels); and
- LPF 2 is a moving average filter of 25 × 25 (pixels).
- the moving average filters having such settings can be applied as the above three low-pass filters (LPF 0 to LPF 2 ).
- in step S 108 , the image correction unit 126 applies U 0 to U 2 and V 0 to V 2 calculated in step S 107 and the two blend ratio setting parameters, namely, the frequency-corresponding blend ratio setting parameter [ratio Freq ] and the positional deviation-corresponding blend ratio setting parameter [ratio Err ], to calculate the corrected UV signal (U out , V out ) in line with (Formula 7).
- the calculation process for the corrected UV signal (U out , V out ) executed by the image correction unit 126 in accordance with the features of the region is executed according to the following approaches.
- the low-pass filter LPF 2 with a low cutoff frequency is preferentially used to execute the process of smoothing on the basis of the pixel values of surrounding pixels in a wider range (for example, 25 × 25 pixels) for a region where the positional deviation is large.
- the low-pass filter LPF 1 with a medium cutoff frequency is preferentially used to execute the process of smoothing on the basis of the pixel values of surrounding pixels in a medium range (for example, 13 × 13 pixels) for a region where there are many high frequency components.
- the low-pass filter LPF 0 with a high cutoff frequency is preferentially used to execute the process of smoothing on the basis of the pixel values of surrounding pixels in a small range (for example, 3 × 3 pixels) for a flat region with few high frequency components.
- the image correction unit 126 generates the corrected UV signal (U out , V out ) 203 in accordance with the characteristics of the image region as described above and outputs the generated signal to the signal conversion unit 127 in the image processor 120 illustrated in FIG. 3 .
- step S 109 is a process executed by the signal conversion unit 127 of the image processor 120 illustrated in FIG. 3 .
- the signal conversion unit 127 receives inputs of the corrected UV signal (U out , V out ) 203 generated by the image correction unit 126 and the W-RAW image 111 which is a photographed image by the first imaging unit 107 .
- the signal conversion unit 127 executes signal conversion on the basis of these input signals and generates the RGB image 150 to output.
- the signal conversion unit 127 employs the W signal of the W-RAW image 111 as the Y (luminance) signal and executes a process of converting the YUV signal constituted by the combination of this Y (luminance) signal and the UV signal of the corrected UV signal (U out , V out ) 203 into an RGB signal.
- This signal conversion process is performed in line with an existing conversion formula.
- the RGB image 150 generated by the signal conversion unit 127 is displayed, for example, on the display unit. Alternatively, the RGB image 150 is saved in the storage unit. Alternatively, the RGB image 150 is output to another external information processing apparatus.
- an encoding process such as a compression process is executed as a preprocess for a saving process to the storage unit and an external output process.
- the YUV signal may be configured to be output to a display apparatus, or saved in the storage unit, or output to the outside as it is.
- the above-described embodiment employs a configuration in which, for example, the image processor 120 illustrated in FIG. 3 inputs the RGB-RAW image 112 which is a photographed image by the second imaging unit 108 to the development processor 121 to execute the image process by applying the YUV image 130 generated by the development process for the RGB-RAW image 112 .
- the execution timing of this development process may be, for example, after the process of the image processor 120 is completed.
- a variety of settings can be made, including a configuration in which part of the process of the development processor is executed after the process of the image processor 120 is completed, for example.
- the signal conversion unit 127 is configured to execute signal conversion from the YUV signal to the RGB signal as a final stage process of the image processor 120 illustrated in FIG. 3 .
- this process is not essential and the YUV signal may be configured to be output to a display apparatus, or saved in the storage unit, or output to the outside as it is.
- An image processing apparatus including an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array, and executes an image process that reduces false colors included in the color image, in which
- the image processor includes:
- a frequency-corresponding parameter calculation unit that receives an input of the white (W) image and calculates a frequency-corresponding parameter of the white (W) image in units of image regions;
- a positional deviation-corresponding parameter calculation unit that receives inputs of the white (W) image and the color image and calculates a positional deviation-corresponding parameter of the two input images in units of image regions;
- an image correction unit that executes a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
- the color image is an RGB image photographed by an RGB array imaging element
- the positional deviation-corresponding parameter calculation unit receives inputs of the white (W) image and a YUV image generated on the basis of the RGB image and calculates a positional deviation-corresponding parameter of the two input images in units of image regions, and
- the image correction unit executes a blending process in which a blend rate between the white (W) image and the YUV image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
- the image correction unit selectively applies a plurality of different low-pass filters (LPFs) having different cutoff frequencies in units of image regions and calculates a corrected pixel value.
- LPFs low-pass filters
- the image correction unit calculates a corrected pixel value by applying:
- a low-pass filter having a relatively high cutoff frequency in a low frequency region.
- a low-pass filter having a relatively high cutoff frequency in a region where a positional deviation is small.
- the image correction unit calculates a corrected pixel value by applying:
- a low-pass filter having a relatively high cutoff frequency in a low frequency region.
- a low-pass filter having a relatively high cutoff frequency in a region where a positional deviation is small.
- the image processor includes a position alignment unit that executes position alignment between the color image and the white (W) image, and
- the positional deviation-corresponding parameter calculation unit receives inputs of the white (W) image and the color image after position alignment generated by the position alignment unit and calculates the positional deviation-corresponding parameter of the two input images in units of image regions.
- the image processor includes a motion vector detection unit that receives inputs of the color image and the white (W) image and detects a motion vector between these two images, and
- the position alignment unit executes position alignment between the color image and the white (W) image using the motion vector.
- the motion vector detection unit detects a motion vector representing a positional deviation between images based on parallax according to a deviation of photographing positions between an imaging unit for the color image and an imaging unit for the white (W) image.
- An imaging apparatus including:
- a first imaging unit that has a W array imaging element whose all pixels are placed in a white (W) pixel array and photographs a white (W) image;
- a second imaging unit that has an RGB array imaging element having an RGB pixel array and photographs a color image
- an image processor that receives inputs of the white (W) image and the color image and executes an image process that reduces false colors included in the color image, in which
- the image processor includes:
- a frequency-corresponding parameter calculation unit that receives an input of the white (W) image and calculates a frequency-corresponding parameter of the white (W) image in units of image regions;
- a positional deviation-corresponding parameter calculation unit that receives inputs of the white (W) image and the color image and calculates a positional deviation-corresponding parameter of the two input images in units of image regions;
- an image correction unit that executes a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
- the image processing apparatus including an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array, and executes an image process that reduces false colors included in the color image,
- the image processing method including
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- the imaging apparatus including:
- a first imaging unit that has a W array imaging element whose all pixels are placed in a white (W) pixel array and photographs a white (W) image;
- a second imaging unit that has an RGB array imaging element having an RGB pixel array and photographs a color image
- an image processor that receives inputs of the white (W) image and the color image and executes an image process that reduces false colors included in the color image
- the image processing method including:
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- the image processing apparatus including an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array, and executes an image process that reduces false colors included in the color image,
- the program causing the image processor to execute a process of calculating a corrected pixel value by executing:
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- the imaging apparatus including:
- a first imaging unit that has a W array imaging element whose all pixels are placed in a white (W) pixel array and photographs a white (W) image;
- a second imaging unit that has an RGB array imaging element having an RGB pixel array and photographs a color image
- an image processor that receives inputs of the white (W) image and the color image and executes an image process that reduces false colors included in the color image
- the first imaging unit and the second imaging unit to photograph the white (W) image and the color image
- the image processor to execute a process of calculating a corrected pixel value by executing:
- a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter.
- a program recording a processing sequence can be installed on a memory within a computer incorporated in dedicated hardware and executed, or the program can be installed on a general-purpose computer capable of executing various processes and executed.
- the program can be recorded in a recording medium in advance.
- the program can be received via a network such as a local area network (LAN) or the Internet and installed on a recording medium such as a built-in hard disk.
- system refers to a logical group configuration of a plurality of apparatuses and is not limited to a system in which apparatuses having respective configurations are accommodated in the same housing.
- an apparatus and a method that perform a false color correction according to image characteristics of a color image in units of image regions are implemented.
- an image processor that receives inputs of a color image and a white (W) image photographed by a W array imaging element whose all pixels are placed in a white (W) pixel array and executes an image process that reduces false colors included in the color image.
- the image processor executes a blending process in which a blend rate between the white (W) image and the color image is controlled in accordance with values of the frequency-corresponding parameter and the positional deviation-corresponding parameter and calculates a corrected pixel value.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
Description
- Patent Document 1: Japanese Patent Application Laid-Open No. 2013-26672
act=actHOR+actVER
[Mathematical Formula 7]
U out=(1−ratioErr)((1−ratioFreq)×U 0+ratioFreq ×U 1)+ratioErr ×U 2
V out=(1−ratioErr)((1−ratioFreq)×V 0+ratioFreq ×V 1)+ratioErr ×V 2 (Formula 7)
U out=(1−ratioErr)((1−ratioFreq)×U 0+ratioFreq ×U 1)+ratioErr ×U 2
V out=(1−ratioErr)((1−ratioFreq)×V 0+ratioFreq ×V 1)+ratioErr ×V 2 (Formula 7).
U out =U 0
V out =V 0.
U out=0.5×U 0+0.5×U 2
V out=0.5×V 0+0.5×V 2.
U out =U 2
V out =V 2
U out=0.5×U 0+0.5×U 1
V out=0.5×V 0+0.5×V 1.
U out=0.5×(0.5×U 0+0.5×U 1)+0.5×U 2
V out=0.5×(0.5×V 0+0.5×V 1)+0.5×V 2.
U out =U 1
V out =V 1.
U out=0.5×U 1+0.5×U 2
V out=0.5×V 1+0.5×V 2.
U out=(1−ratioErr)((1−ratioFreq)×U 0+ratioFreq ×U 1)+ratioErr ×U 2
V out=(1−ratioErr)((1−ratioFreq)×V 0+ratioFreq ×V 1)+ratioErr ×V 2 (Formula 7).
Uout=LPF(U)×(W/LPF(W)); and
Vout=LPF(V)×(W/LPF(W)),
- 100 Image processing apparatus
- 101 Control unit
- 102 Storage unit
- 103 Codec
- 104 Input unit
- 105 Output unit
- 106 Imaging unit
- 107 First imaging unit
- 108 Second imaging unit
- 111 W-RAW image
- 112 RGB-RAW image
- 113 Sensor noise characteristic (σ)
- 120 Image processor
- 121 Development processor
- 122 Motion vector detection unit
- 123 Position alignment unit
- 124 Frequency-corresponding parameter calculation unit
- 125 Positional deviation-corresponding parameter calculation unit
- 126 Image correction unit
- 127 Signal conversion unit
- 150 RGB image
- 151 Adjacent pixel pixel value difference absolute value calculation unit
- 152 Dynamic range (DR) calculation unit
- 153 Frequency parameter calculation unit
- 154 Addition unit
- 155 Blend ratio calculation unit
- 161 Position-aligned YUV image
- 162 YUV image-based W image
- 163 Pixel value-adjusted YUV image-based W image
- 164 Difference image
- 171 Signal conversion unit
- 172 First region unit pixel value addition unit
- 173 Second region unit pixel value addition unit
- 174 Region unit pixel non-calculation unit
- 175 Multiplication unit
- 176 Difference calculation unit.
- 201 Frequency-corresponding blend ratio setting parameter
- 202 Positional deviation-corresponding blend rate setting parameter
- 203 Corrected UV signal
Claims (18)
Uout=LPF(U)×(W/LPF(W)); and
Vout=LPF(V)×(W/LPF(W)),
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JPJP2016-045224 | 2016-03-09 | ||
| JP2016045224 | 2016-03-09 | ||
| JP2016-045224 | 2016-03-09 | ||
| PCT/JP2016/086062 WO2017154293A1 (en) | 2016-03-09 | 2016-12-05 | Image processing apparatus, imaging apparatus, image processing method, and program |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210076017A1 US20210076017A1 (en) | 2021-03-11 |
| US11202045B2 true US11202045B2 (en) | 2021-12-14 |
Family
ID=59789299
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/070,952 Expired - Fee Related US11202045B2 (en) | 2016-03-09 | 2016-12-05 | Image processing apparatus, imaging apparatus, image processing method, and program |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US11202045B2 (en) |
| EP (1) | EP3429197B1 (en) |
| JP (1) | JP6825617B2 (en) |
| CN (1) | CN108702494B (en) |
| WO (1) | WO2017154293A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230009051A1 (en) * | 2021-07-12 | 2023-01-12 | Fujifilm Corporation | Image processing apparatus and medical image processing apparatus |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019111529A1 (en) * | 2017-12-08 | 2019-06-13 | ソニーセミコンダクタソリューションズ株式会社 | Image processing device and image processing method |
| WO2022011506A1 (en) * | 2020-07-13 | 2022-01-20 | 深圳市汇顶科技股份有限公司 | Image processing method and image processing apparatus |
| CN114205487B (en) * | 2020-08-28 | 2025-12-02 | 超威半导体公司 | Content-Adaptive Lens Shadow Correction Method and Apparatus |
| US11910121B2 (en) * | 2021-01-26 | 2024-02-20 | Zf Friedrichshafen Ag | Converting dual-context video data to full color video |
| CN113870146B (en) * | 2021-10-15 | 2024-06-25 | 中国大恒(集团)有限公司北京图像视觉技术分公司 | A method for correcting false color at the edge of color camera images |
Citations (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6614471B1 (en) | 1999-05-10 | 2003-09-02 | Banctec, Inc. | Luminance correction for color scanning using a measured and derived luminance value |
| US20070153335A1 (en) * | 2005-12-22 | 2007-07-05 | Hajime Hosaka | Image signal processing apparatus, imaging apparatus, image signal processing method and computer program |
| US7796814B2 (en) * | 2006-04-14 | 2010-09-14 | Sony Corporation | Imaging device |
| US20110050918A1 (en) * | 2009-08-31 | 2011-03-03 | Tachi Masayuki | Image Processing Device, Image Processing Method, and Program |
| US20120257821A1 (en) * | 2009-10-20 | 2012-10-11 | Yasushi Saito | Image processing apparatus and image processing method, and program |
| JP2013026672A (en) | 2011-07-15 | 2013-02-04 | Toshiba Corp | Solid-state imaging device and camera module |
| US20130272605A1 (en) * | 2012-04-12 | 2013-10-17 | Sony Corporation | Image processing device, image processing method, and program |
| JP2013239904A (en) | 2012-05-15 | 2013-11-28 | Sony Corp | Image processing apparatus and image processing method and program |
| US8803985B2 (en) * | 2011-05-13 | 2014-08-12 | Sony Corporation | Image processing apparatus, image pickup apparatus, image processing method, and program |
| US20140253808A1 (en) * | 2011-08-31 | 2014-09-11 | Sony Corporation | Image processing device, and image processing method, and program |
| US8837853B2 (en) * | 2011-09-06 | 2014-09-16 | Sony Corporation | Image processing apparatus, image processing method, information recording medium, and program providing image blur correction |
| US8948506B2 (en) * | 2010-03-04 | 2015-02-03 | Sony Corporation | Image processing device, image processing method, and program |
| US20150055873A1 (en) | 2013-08-20 | 2015-02-26 | Samsung Techwin Co., Ltd. | Image alignment apparatus and image alignment method of using the same |
| US20150103212A1 (en) * | 2012-04-24 | 2015-04-16 | Sony Corporation | Image processing device, method of processing image, and program |
| US20150215595A1 (en) * | 2012-09-10 | 2015-07-30 | Kazuhiro Yoshida | Image processor, imaging apparatus equipped with the same, and image processing method |
| US20160050354A1 (en) | 2014-08-12 | 2016-02-18 | Google Technology Holdings LLC | High Dynamic Range Array Camera |
| US20160269693A1 (en) * | 2013-12-02 | 2016-09-15 | Megachips Corporation | Pixel interpolation apparatus, imaging apparatus, pixel interpolation processing method, and integrated circuit |
| US20160309131A1 (en) * | 2013-12-24 | 2016-10-20 | Olympus Corporation | Image processing device, imaging device, information storage medium, and image processing method |
| US20160337623A1 (en) * | 2015-05-11 | 2016-11-17 | Canon Kabushiki Kaisha | Imaging apparatus, imaging system, and signal processing method |
| US9654700B2 (en) * | 2014-09-16 | 2017-05-16 | Google Technology Holdings LLC | Computational camera using fusion of image sensors |
| US9699429B2 (en) * | 2012-03-27 | 2017-07-04 | Sony Corporation | Image processing apparatus, imaging device, image processing method, and program for reducing noise or false colors in an image |
| US9712792B2 (en) * | 2015-08-10 | 2017-07-18 | Samsung Electronics Co., Ltd. | RGB-RWB dual images by multi-layer sensors towards better image quality |
| US9826177B2 (en) * | 2013-10-29 | 2017-11-21 | Hitachi Kokusai Electric Inc. | Video signal noise elimination circuit and video signal noise elimination method |
| US20200112705A1 (en) * | 2017-03-27 | 2020-04-09 | Sony Corporation | Image processing device, image processing method and imaging device |
| US20200296343A1 (en) * | 2017-11-06 | 2020-09-17 | Eizo Corporation | Image processing device, image processing method, and image processing program |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5106870B2 (en) * | 2006-06-14 | 2012-12-26 | 株式会社東芝 | Solid-state image sensor |
| JP2015035782A (en) * | 2013-08-09 | 2015-02-19 | オリンパス株式会社 | Image processing device, imaging device, microscope system, image processing method, and image processing program |
-
2016
- 2016-12-05 EP EP16893617.7A patent/EP3429197B1/en active Active
- 2016-12-05 JP JP2018504003A patent/JP6825617B2/en not_active Expired - Fee Related
- 2016-12-05 WO PCT/JP2016/086062 patent/WO2017154293A1/en not_active Ceased
- 2016-12-05 CN CN201680083059.9A patent/CN108702494B/en not_active Expired - Fee Related
- 2016-12-05 US US16/070,952 patent/US11202045B2/en not_active Expired - Fee Related
Patent Citations (42)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6614471B1 (en) | 1999-05-10 | 2003-09-02 | Banctec, Inc. | Luminance correction for color scanning using a measured and derived luminance value |
| US20070153335A1 (en) * | 2005-12-22 | 2007-07-05 | Hajime Hosaka | Image signal processing apparatus, imaging apparatus, image signal processing method and computer program |
| US9191593B2 (en) * | 2005-12-22 | 2015-11-17 | Sony Corporation | Image signal processing apparatus, imaging apparatus, image signal processing method and computer program |
| US20140098265A1 (en) * | 2005-12-22 | 2014-04-10 | Sony Corporation | Image signal processing apparatus, imaging apparatus, image signal processing method and computer program |
| US8467088B2 (en) * | 2005-12-22 | 2013-06-18 | Sony Corporation | Image signal processing apparatus, imaging apparatus, image signal processing method and computer program |
| US7796814B2 (en) * | 2006-04-14 | 2010-09-14 | Sony Corporation | Imaging device |
| US20110050918A1 (en) * | 2009-08-31 | 2011-03-03 | Tachi Masayuki | Image Processing Device, Image Processing Method, and Program |
| US8314863B2 (en) * | 2009-08-31 | 2012-11-20 | Sony Corporation | Image processing device, image processing method, and program pertaining to image correction |
| US20140240567A1 (en) * | 2009-10-20 | 2014-08-28 | Sony Corporation | Image processing apparatus and image processing method, and program |
| US9609291B2 (en) * | 2009-10-20 | 2017-03-28 | Sony Corporation | Image processing apparatus and image processing method, and program |
| US8755640B2 (en) * | 2009-10-20 | 2014-06-17 | Sony Corporation | Image processing apparatus and image processing method, and program |
| US20120257821A1 (en) * | 2009-10-20 | 2012-10-11 | Yasushi Saito | Image processing apparatus and image processing method, and program |
| US8948506B2 (en) * | 2010-03-04 | 2015-02-03 | Sony Corporation | Image processing device, image processing method, and program |
| US9124809B2 (en) * | 2011-05-13 | 2015-09-01 | Sony Corporation | Image processing apparatus, image pickup apparatus, image processing method, and program |
| US8803985B2 (en) * | 2011-05-13 | 2014-08-12 | Sony Corporation | Image processing apparatus, image pickup apparatus, image processing method, and program |
| US20140313400A1 (en) * | 2011-05-13 | 2014-10-23 | Sony Corporation | Image processing apparatus, image pickup apparatus, image processing method, and program |
| JP2013026672A (en) | 2011-07-15 | 2013-02-04 | Toshiba Corp | Solid-state imaging device and camera module |
| US20140253808A1 (en) * | 2011-08-31 | 2014-09-11 | Sony Corporation | Image processing device, and image processing method, and program |
| US9179113B2 (en) * | 2011-08-31 | 2015-11-03 | Sony Corporation | Image processing device, and image processing method, and program |
| US8837853B2 (en) * | 2011-09-06 | 2014-09-16 | Sony Corporation | Image processing apparatus, image processing method, information recording medium, and program providing image blur correction |
| US10200664B2 (en) * | 2012-03-27 | 2019-02-05 | Sony Corporation | Image processing apparatus, image device, image processing method, and program for reducing noise or false colors in an image |
| US20170251188A1 (en) * | 2012-03-27 | 2017-08-31 | Sony Corporation | Image processing apparatus, imaging device, image processing method, and program for reducing noise or false colors in an image |
| US9699429B2 (en) * | 2012-03-27 | 2017-07-04 | Sony Corporation | Image processing apparatus, imaging device, image processing method, and program for reducing noise or false colors in an image |
| US9147230B2 (en) * | 2012-04-12 | 2015-09-29 | Sony Corporation | Image processing device, image processing method, and program to perform correction processing on a false color |
| JP2013219705A (en) | 2012-04-12 | 2013-10-24 | Sony Corp | Image processor, image processing method and program |
| US20130272605A1 (en) * | 2012-04-12 | 2013-10-17 | Sony Corporation | Image processing device, image processing method, and program |
| US20150103212A1 (en) * | 2012-04-24 | 2015-04-16 | Sony Corporation | Image processing device, method of processing image, and program |
| US20160210760A1 (en) * | 2012-04-24 | 2016-07-21 | Sony Corporation | Image processing device, method of processing image, and image processing program including false color correction |
| US9288457B2 (en) * | 2012-04-24 | 2016-03-15 | Sony Corporation | Image processing device, method of processing image, and image processing program including false color correction |
| JP2013239904A (en) | 2012-05-15 | 2013-11-28 | Sony Corp | Image processing apparatus and image processing method and program |
| US20150215595A1 (en) * | 2012-09-10 | 2015-07-30 | Kazuhiro Yoshida | Image processor, imaging apparatus equipped with the same, and image processing method |
| US20150055873A1 (en) | 2013-08-20 | 2015-02-26 | Samsung Techwin Co., Ltd. | Image alignment apparatus and image alignment method of using the same |
| US9826177B2 (en) * | 2013-10-29 | 2017-11-21 | Hitachi Kokusai Electric Inc. | Video signal noise elimination circuit and video signal noise elimination method |
| US20160269693A1 (en) * | 2013-12-02 | 2016-09-15 | Megachips Corporation | Pixel interpolation apparatus, imaging apparatus, pixel interpolation processing method, and integrated circuit |
| US20160309131A1 (en) * | 2013-12-24 | 2016-10-20 | Olympus Corporation | Image processing device, imaging device, information storage medium, and image processing method |
| US9344639B2 (en) * | 2014-08-12 | 2016-05-17 | Google Technology Holdings LLC | High dynamic range array camera |
| US20160050354A1 (en) | 2014-08-12 | 2016-02-18 | Google Technology Holdings LLC | High Dynamic Range Array Camera |
| US9654700B2 (en) * | 2014-09-16 | 2017-05-16 | Google Technology Holdings LLC | Computational camera using fusion of image sensors |
| US20160337623A1 (en) * | 2015-05-11 | 2016-11-17 | Canon Kabushiki Kaisha | Imaging apparatus, imaging system, and signal processing method |
| US9712792B2 (en) * | 2015-08-10 | 2017-07-18 | Samsung Electronics Co., Ltd. | RGB-RWB dual images by multi-layer sensors towards better image quality |
| US20200112705A1 (en) * | 2017-03-27 | 2020-04-09 | Sony Corporation | Image processing device, image processing method and imaging device |
| US20200296343A1 (en) * | 2017-11-06 | 2020-09-17 | Eizo Corporation | Image processing device, image processing method, and image processing program |
Non-Patent Citations (1)
| Title |
|---|
| Feb. 5, 2019, European Search Report issued for related EP Application No. 16893617.7. |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230009051A1 (en) * | 2021-07-12 | 2023-01-12 | Fujifilm Corporation | Image processing apparatus and medical image processing apparatus |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108702494B (en) | 2020-12-04 |
| US20210076017A1 (en) | 2021-03-11 |
| EP3429197A1 (en) | 2019-01-16 |
| EP3429197B1 (en) | 2020-05-06 |
| JPWO2017154293A1 (en) | 2019-01-10 |
| EP3429197A4 (en) | 2019-03-06 |
| WO2017154293A1 (en) | 2017-09-14 |
| CN108702494A (en) | 2018-10-23 |
| JP6825617B2 (en) | 2021-02-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11202045B2 (en) | Image processing apparatus, imaging apparatus, image processing method, and program | |
| CN103202022B (en) | Image processing device and control method thereof | |
| US8363123B2 (en) | Image pickup apparatus, color noise reduction method, and color noise reduction program | |
| US9179113B2 (en) | Image processing device, and image processing method, and program | |
| US8730357B2 (en) | Image processing device, image processing method, and program | |
| US8908062B2 (en) | Flare determination apparatus, image processing apparatus, and storage medium storing flare determination program | |
| US10055815B2 (en) | Image processing apparatus, image processing system, imaging apparatus and image processing method | |
| EP2523160A1 (en) | Image processing device, image processing method, and program | |
| US9111365B2 (en) | Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal | |
| US20140153823A1 (en) | Method and apparatus for processing image | |
| US9030579B2 (en) | Image processing apparatus and control method that corrects a signal level of a defective pixel | |
| US8675102B2 (en) | Real time denoising of video | |
| US10853926B2 (en) | Image processing device, imaging device, and image processing method | |
| CN112837230B (en) | Image processing apparatus, image processing method, and computer readable medium | |
| US9530185B2 (en) | Image processing apparatus, imaging apparatus, image processing method and storage medium | |
| US6747698B2 (en) | Image interpolating device | |
| CN113068011B (en) | Image sensor, image processing method and system | |
| US8675106B2 (en) | Image processing apparatus and control method for the same | |
| US8654220B2 (en) | Image processing apparatus and control method for the same | |
| JP2005354585A (en) | Device, method and program of image processing | |
| US9635330B2 (en) | Image processing device, image processing method, and program | |
| KR101076045B1 (en) | The method for demosaicing color to bayer format of cfa and the apparatus thereof | |
| JP2014086957A (en) | Image processing device and image processing method | |
| TW201408083A (en) | System and a method of adaptively suppressing false-color artifacts | |
| JP2002218484A (en) | Image interpolation device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: SONY CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YOKOKAWA, MASATOSHI;KAMIO, KAZUNORI;UCHIDA, MASASHI;SIGNING DATES FROM 20180706 TO 20180709;REEL/FRAME:046581/0720 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
| FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20251214 |