WO2013161349A1 - Image processing apparatus, image processing method, and program - Google Patents
Image processing apparatus, image processing method, and program
- Publication number: WO2013161349A1 (application PCT/JP2013/053455)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- low
- false color
- frequency signal
- pixels
Classifications
- G06T7/90—Determination of colour characteristics
- G06T5/77—Retouching; Inpainting; Scratch removal
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T3/4015—Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- H04N23/843—Camera processing pipelines; Demosaicing, e.g. interpolating colour pixel values
- H04N25/133—Arrangement of colour filter arrays [CFA] including elements passing panchromatic light, e.g. filters passing white light
- H04N25/135—Arrangement of colour filter arrays [CFA] based on four or more different wavelength filter elements
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- G06T2207/10004—Still image; Photographic image
- G06T2207/10024—Color image
- G06T2207/20024—Filtering details
- G06T2207/30168—Image quality inspection
Definitions
- the present disclosure relates to an image processing apparatus, an image processing method, and a program.
- the present invention relates to an image processing apparatus, an image processing method, and a program that execute a process of correcting a false color generated in an image.
- FIG. 2A shows a Bayer array used in many conventional cameras composed of RGB pixels.
- FIG. 2B shows an RGBW array that is being used in recent cameras.
- Each RGB pixel has a filter that selectively transmits light in the R, G, or B wavelength region, and each W pixel has a filter that transmits almost all visible light across the RGB wavelengths.
- the generation principle of the bright spot false color will be described with reference to FIG.
- When the highlight region is substantially composed of W pixels and G pixels, the pixel value of the G pixel becomes relatively larger than the pixel values of the surrounding R pixels and B pixels.
- As a result, the color balance is lost and the area is colored green.
- As shown in FIG. 3B, when the area of the highlight region is small and its pixels are substantially composed of W pixels, R pixels, and B pixels, the R and B pixel values become relatively larger than the pixel values of the surrounding G pixels. As a result, the color balance is lost and the area is colored magenta.
- Various patterns of false colors, that is, bright spot false colors, are generated depending on the configuration of the pixels included in a highlight area, which is a high-luminance portion.
- a technique for reducing false colors, generally called purple fringe, is described in, for example, Patent Document 1 (Japanese Patent Laid-Open No. 2009-124598) and Patent Document 2 (Japanese Patent Laid-Open No. 2006-14261).
- the purple fringe is a false color that occurs mainly around the high-contrast edge that is blown out due to the aberration of the lens provided in the camera.
- Since the bright spot false color described above is generated even in a pixel region that is not necessarily overexposed, there are cases where the conventional processing for reducing purple fringes cannot be applied.
- An object of the present disclosure is to provide an image processing apparatus, an image processing method, and a program that take as a processing target a mosaic image, before demosaicing, having an array that includes W (white) pixels, detect a false color generated in a high-luminance region (highlight region) having a small area of about several pixels, correct it, and output a high-quality image.
- The first aspect of the present disclosure is an image processing apparatus having a data conversion processing unit that receives an RGBW array image as an input image and generates an RGB array image as an output image.
- The data conversion processing unit includes: a false color detection unit that detects false color pixels of the input image and outputs detection information; a low-frequency signal calculation unit that receives the detection information from the false color detection unit, changes its processing mode according to the detection information, and calculates a low-frequency signal corresponding to each RGBW color; and a pixel interpolation unit that generates the RGB array image by converting the RGBW array of the input image through pixel interpolation using the low-frequency signals calculated by the low-frequency signal calculation unit. The pixel interpolation unit calculates an interpolated pixel value based on the assumption that the low-frequency signal mW of the W pixels and the low-frequency signals mR, mG, and mB of the RGB pixels are proportional to each other within a local region.
- When the low-frequency signal calculation unit receives detection information from the false color detection unit indicating that the target pixel is a false color pixel, it calculates the low-frequency signal by applying a low-pass filter whose coefficients make the pixel value contribution rate of pixels near the target pixel relatively lower than that of pixels far from the target pixel.
- Otherwise, it calculates the low-frequency signal by applying a low-pass filter whose coefficients make the pixel value contribution rate of pixels near the target pixel relatively higher than that of pixels far from the target pixel.
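As a rough sketch of this mode switch, the following fragment chooses between two hypothetical 3-tap coefficient sets. The patent fixes only the relative contribution rates (nearby pixels contribute less for a false color pixel), so the exact coefficients, the 1-D form, and the function name are all assumptions for illustration:

```python
import numpy as np

# Hypothetical 3-tap weight sets; the center tap is the pixel nearest the
# target position. Only the *relative* contribution rates come from the text.
NORMAL_WEIGHTS = np.array([1.0, 2.0, 1.0])       # near pixel weighted higher
FALSE_COLOR_WEIGHTS = np.array([2.0, 1.0, 2.0])  # near pixel weighted lower

def low_frequency_signal(samples, is_false_color_pixel):
    """Normalised weighted sum of three same-color samples centred on the
    target position, with the weight set chosen by the detection result."""
    w = FALSE_COLOR_WEIGHTS if is_false_color_pixel else NORMAL_WEIGHTS
    return float(np.dot(w, samples) / w.sum())
```

With samples `[100, 200, 100]`, the normal mode emphasises the near pixel (result 150), while the false-color mode suppresses it (result 120), which is the direction of change the claim describes.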
- The false color detection unit detects the presence or absence of a local highlight region, that is, a local high-luminance region, in the input image; if the target pixel is included in a local highlight region, the target pixel is determined to be a false color pixel.
- In one embodiment, the false color detection unit examines gradient information of the W pixels in the vicinity of the target pixel; when the W pixel values near the target pixel are higher than the surrounding W pixel values in either of two orthogonal directions, the target pixel is determined to be included in a local highlight region, that is, a local high-luminance region, and is therefore determined to be a false color pixel.
- In one embodiment, the false color detection unit executes: (a) calculating, for each of a plurality of diagonally descending (lower-right) lines set in the vicinity of the target pixel, a W pixel low-frequency component signal from the pixel values of the plural W pixels on that line, and comparing against a threshold value Thr1 the difference value Diff1 between the maximum W pixel low-frequency component signal of the inner lines close to the target pixel and the maximum of the outer lines far from the target pixel; and (b) the corresponding processing for a plurality of diagonally ascending (upper-right) lines set in the vicinity of the target pixel, comparing the analogous difference value against a threshold value Thr2.
- the threshold value Thr1 and the threshold value Thr2 may be fixed values, values that can be set by the user, or may be automatically calculated.
- In one embodiment, the false color detection unit detects the false color that is generated when W pixels and G pixels are concentrated in a local highlight region, that is, a local high-luminance region of the input image, or when W pixels, R pixels, and B pixels are concentrated in such a region.
- In one embodiment, the false color detection unit detects W pixels having high pixel values in the input image and compares the detected configuration pattern of high-value W pixels with registered highlight area patterns, which are local high-luminance region shapes recorded in a memory in advance. When the detected high-value W pixel configuration pattern matches a registered highlight area pattern, the pixels included in that W pixel configuration pattern are determined to be false color pixels.
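A minimal sketch of this pattern comparison might look as follows. The registered shape, the high-value threshold, and the exact-match criterion are all hypothetical, since the excerpt does not specify the stored patterns:

```python
import numpy as np

# Hypothetical registered highlight-area shape (a cross of high-value W
# positions). Real registered patterns are recorded in memory in advance
# and are not given in this excerpt.
REGISTERED_PATTERN = np.array([[0, 1, 0],
                               [1, 1, 1],
                               [0, 1, 0]], dtype=bool)

def detect_by_pattern(w_values, threshold, pattern=REGISTERED_PATTERN):
    """Mark W pixels above `threshold` as 'high' and flag a false color
    when the resulting configuration exactly matches the registered pattern."""
    high_mask = np.asarray(w_values) > threshold
    return bool(np.array_equal(high_mask, pattern))
```

A real detector would likely scan every registered pattern at every position rather than test a single window, but the comparison step itself is as shown.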
- In one embodiment, in accordance with the registered highlight region pattern that the false color detection unit determined to match the high-value W pixel configuration pattern, the low-frequency signal calculation unit calculates the low-frequency signal by applying a low-pass filter whose coefficients make the pixel value contribution rate inside the highlight region relatively lower than that of pixels outside the highlight region.
- The second aspect of the present disclosure is an image processing method executed in an image processing apparatus, in which a data conversion processing unit executes a data conversion process that receives an RGBW array image as an input image and generates an RGB array image as an output image. The data conversion process includes:
- a false color detection process for detecting false color pixels of the input image and outputting detection information
- a low-frequency signal calculation process for inputting the detection information and changing a processing mode according to the detection information to calculate a low-frequency signal corresponding to each RGBW color
- a pixel interpolation process for generating an RGB array image by performing pixel conversion of the RGBW array of the input image by pixel interpolation to which the low-frequency signal is applied;
- In the pixel interpolation process, an interpolated pixel value is calculated based on the assumption that the low-frequency signal mW of the W pixels and the low-frequency signals mR, mG, and mB of the RGB pixels are proportional to each other within the local region.
- The third aspect of the present disclosure is a program for executing image processing in an image processing apparatus, causing a data conversion processing unit to execute a data conversion processing step that receives an RGBW array image as an input image and generates an RGB array image as an output image. The step includes:
- a false color detection process for detecting false color pixels of the input image and outputting detection information
- a low-frequency signal calculation process for inputting the detection information and changing a processing mode according to the detection information to calculate a low-frequency signal corresponding to each RGBW color
- and a pixel interpolation process that calculates an interpolated pixel value based on the assumption that the low-frequency signal mW of the W pixels and the low-frequency signals mR, mG, and mB of the RGB pixels are proportional to each other within the local region.
- the program of the present disclosure is a program that can be provided by, for example, a storage medium or a communication medium provided in a computer-readable format to an information processing apparatus or a computer system that can execute various program codes.
- By providing the program in a computer-readable format, processing corresponding to the program is realized on the information processing apparatus or the computer system.
- In this specification, a "system" is a logical set of a plurality of devices and is not limited to a configuration in which the constituent devices are housed in the same casing.
- an apparatus and a method for correcting a false color generated in a local highlight region of an image are realized.
- a false color pixel is detected during data conversion processing for generating an RGB array image from an RGBW array image, and a low-frequency signal corresponding to each RGBW color that differs depending on whether or not it is a false color pixel is calculated.
- the RGBW array is converted by the interpolation process to which the low-frequency signal is applied to generate an RGB array image.
- the interpolation processing is performed using each low-frequency signal based on the assumption that the W low-frequency signal mW and the RGB low-frequency signals mR, mG, and mB are proportional to each other in the local region.
- For a false color pixel, the low-frequency signal is calculated by applying a low-pass filter whose coefficients make the pixel value contribution ratio of pixels in the vicinity of the target pixel relatively lower than that of more distant pixels.
- The image processing apparatus processes data acquired by an imaging element (image sensor) equipped with an RGBW color filter that includes, in addition to the RGB filters that selectively transmit light of the R, G, and B wavelengths, a white (W) filter that transmits light of all the RGB wavelengths.
- pixel conversion is performed by analyzing a two-dimensional pixel array signal in which pixels that are main components of a luminance signal are arranged in a checkered pattern and pixels of a plurality of colors that are color information components are arranged in the remaining portion.
- the main color of the luminance signal is white or green.
- the image processing apparatus of the present disclosure obtains acquired data of an image sensor (image sensor) having, for example, an RGBW type color filter including white (W: White), as shown in FIG.
- This conversion from the RGBW array to the RGB array is referred to as re-mosaic processing. Details of the re-mosaic processing are described in, for example, Japanese Patent Application Laid-Open No. 2011-182354, and the re-mosaic processing disclosed there can be applied to the image processing apparatus of the present disclosure.
- the image processing apparatus executes a process of correcting a bright spot false color in the conversion process (re-mosaic process).
- the bright spot false color is a false color generated in a highlight area having a small area of about several pixels.
- FIG. 4 is a diagram illustrating a configuration example of the imaging apparatus 100 according to an embodiment of the image processing apparatus of the present disclosure.
- the imaging apparatus 100 includes an optical lens 105, an imaging element (image sensor) 110, a signal processing unit 120, a memory 130, and a control unit 140.
- the imaging apparatus 100 illustrated in FIG. 4 is an aspect of the image processing apparatus of the present disclosure.
- the image processing apparatus of the present disclosure includes an apparatus such as a PC.
- An image processing apparatus such as a PC lacks the optical lens 105 and the imaging element 110 of the imaging apparatus 100 shown in FIG. 4; it is composed of the remaining components, together with an input unit or a storage unit for the data acquired by the imaging element 110.
- The imaging apparatus 100 includes a still camera, a video camera, and the like, and the image processing apparatus further encompasses an information processing apparatus capable of image processing, such as a PC.
- In the following, the imaging apparatus 100 will be described as a representative example of the image processing apparatus of the present disclosure.
- the imaging device (image sensor) 110 of the imaging apparatus 100 shown in FIG. 4 is configured to include a filter having an RGBW array 181 having white (W) as shown in FIG.
- The imaging element (image sensor) 110 has filters of four types: Red (R), which transmits wavelengths near red; Green (G), which transmits wavelengths near green; Blue (B), which transmits wavelengths near blue; and White (W), which transmits light of almost all the RGB wavelengths.
- the image sensor 110 having this RGBW array 181 filter receives RGBW light in units of pixels via the optical lens 105, and generates and outputs an electrical signal corresponding to the received light signal intensity by photoelectric conversion.
- a mosaic image composed of four types of RGBW spectra is obtained by the image sensor 110.
- An output signal of the image sensor (image sensor) 110 is input to the data conversion processing unit 200 of the signal processing unit 120.
- the data conversion processing unit 200 executes conversion processing from the RGBW array 181 to the RGB array 182.
- the data conversion processing unit 200 executes a bright spot false color correction process in the pixel array conversion process.
- The RGB array 182 generated by the data conversion processing unit 200, that is, data having a Bayer array, has the same color array as that obtained by the image sensor of a conventional camera. This color array data is input to the RGB signal processing unit 250.
- The RGB signal processing unit 250 executes the same processing as the signal processing unit provided in a conventional camera. Specifically, it generates the color image 183 by executing demosaic processing, white balance adjustment processing, gamma correction processing, and the like. The generated color image 183 is recorded in the memory 130.
- the control unit 140 executes control of these series of processes.
- a program for executing a series of processes is stored in the memory 130, and the control unit 140 executes the program read from the memory 130 and controls the series of processes.
- the data conversion processing unit 200 executes conversion processing (re-mosaic processing) from the RGBW array 181 to the RGB array 182. Further, a process for reducing bright spot false color is also executed during this process.
- The input pixel unit to the data conversion processing unit 200 is a block of 7 × 7 pixels centered on the pixel targeted for bright spot false color reduction and pixel array conversion. That is, a 7 × 7 pixel reference area centered on the processing target pixel is set. The processing target pixel is changed sequentially, and the data conversion processing unit 200 processes each pixel in turn. Note that setting the size of the reference area used as the input pixel unit to 7 × 7 pixels is an example; an area of another size may be set as the input pixel unit (reference area).
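The sequential setting of 7 × 7 reference areas described above can be sketched as follows. Border handling is an assumption not specified in the text; this sketch simply skips pixels whose full neighbourhood does not fit, where a real pipeline might pad or mirror the edges:

```python
import numpy as np

def iter_reference_areas(mosaic, size=7):
    """Yield (y, x, reference_area) for every pixel whose full size x size
    neighbourhood fits inside the mosaic. Each yielded area is the square
    reference block centred on the processing target pixel (y, x)."""
    r = size // 2
    h, w = mosaic.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            yield y, x, mosaic[y - r:y + r + 1, x - r:x + r + 1]
```

Each yielded block corresponds to one "input pixel unit" handed to the false color detection and low-frequency signal calculation steps.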
- the data conversion processing unit 200 includes a false color detection unit 201, a low-frequency signal calculation unit 202, and a pixel interpolation unit 203.
- the pixel interpolation unit 203 interpolates the center pixel of the input pixel unit region so that an image having the RGBW array 181 that is the output of the image sensor 110 becomes the RGB array 182.
- Specifically, pixels having the same color as the RGB color to be set are selected from the 7 × 7 pixel reference area as reference pixels, and the pixel value of the target pixel, that is, the center pixel of the 7 × 7 block, is calculated by applying the pixel values of these reference pixels.
- For this interpolation, various existing methods can be applied, such as reference pixel selection that takes the edge direction into account and methods that use the correlation between the W pixels and the RGB pixels in the reference region.
- FIG. 6A shows an input pixel unit (7 ⁇ 7 pixel region) centered on a W pixel which is a target pixel to be subjected to pixel conversion.
- a low-frequency signal mW of W pixel is calculated based on the pixel value of W pixel in the reference area.
- the low-frequency signal mW is calculated by applying a low-pass filter to the pixel value of the W pixel in the reference region.
- Similarly, the low-frequency signal mG of the G pixels is calculated from the G pixels in the input pixel unit (reference area).
- This interpolation process is based on the estimate that the low-frequency signal mW of the W pixels and the low-frequency signal mG of the G pixels are approximately proportional within the local region. That is, within a narrow local region such as the 7 × 7 pixel reference area set in this embodiment, the proportional relationship between mW and mG as shown in FIG. is assumed to hold, and based on that proportional relationship the G pixel value at the W pixel position is estimated according to (Equation 1) above.
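Under that proportionality assumption, the estimate referenced as (Equation 1) reduces to a one-line calculation, G = (mG / mW) × W. The function name and the sample values below are illustrative:

```python
def interpolate_g_at_w(w_value, mW, mG):
    """Estimate the G value at a W pixel position from the local
    proportionality mW : mG = W : G, i.e. G = (mG / mW) * W
    (the relation referenced as Equation 1 in the text)."""
    return (mG / mW) * w_value
```

For example, if the local low-frequency signals are mW = 100 and mG = 50, a W pixel reading of 200 yields an estimated G value of 100 at that position.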
- The calculation processing mode of the low-frequency signals mR, mG, mB, and mW corresponding to the RGBW pixels, which are applied in the pixel value conversion processing of the pixel interpolation unit 203, is changed depending on whether a bright spot false color is detected. The false color detection unit 201 in FIG. 5 determines whether the target pixel is a bright spot false color pixel and outputs this determination information to the low-frequency signal calculation unit 202.
- the low-frequency signal calculation unit 202 changes the calculation processing mode of the low-frequency signals mR, mG, mB, mW corresponding to RGBW pixels applied in the pixel value conversion processing of the pixel interpolation unit 203 according to the determination information, Low-frequency signals mR, mG, mB, and mW are calculated, and the calculated low-frequency signals mR, mG, mB, and mW are output to the pixel interpolation unit 203.
- The pixel interpolation unit 203 applies the low-frequency signals mR, mG, mB, and mW input from the low-frequency signal calculation unit 202 and performs pixel interpolation as re-mosaic processing that converts the RGBW pixel array into the RGB pixel array. Except that it uses these input low-frequency signals, the pixel interpolation processing executed by the pixel interpolation unit 203 follows the processing described in Japanese Patent Application Laid-Open No. 2011-182354.
- the bright spot false color detection process in the false color detection unit 201 and the low band signal (mR, mG, mB, mW) calculation process in the low band signal calculation unit 202 will be described.
- the false color detection unit 201 uses the input pixel unit (7 ⁇ 7 pixels) as a reference area and applies a pixel value of a W pixel in the reference area to execute a bright spot false color detection process.
- The bright spot false color occurs in a local area, that is, a highlight area that is a high-luminance region of about several pixels.
- For example, when the pixels in the highlight area are substantially W pixels and G pixels, the pixel value of the G pixel becomes relatively larger than the pixel values of the surrounding R and B pixels. As a result, the color balance is lost and the area is colored green.
- FIGS. 7A to 7C show a configuration example of a highlight area where a green false color is generated, as in FIG. 3A.
- a region surrounded by a dotted circle represents a highlight region.
- The false color detection unit 201 receives the input pixel unit (7 × 7 pixels) centered on the target pixel to be processed by the pixel value conversion (re-mosaic processing) in the data conversion processing unit 200 and determines whether the target pixel is included in a highlight region, that is, whether it is a bright spot false color pixel.
- FIGS. 7A to 7C show examples in which the target pixel is included in the highlight area, that is, the target pixel is a bright spot false color pixel.
- the highlighted region surrounded by the dotted line circle shown in FIGS. 7A to 7C is substantially composed of G pixels and W pixels.
- This is a processing example for the case where the center pixel (target pixel) of the input pixel unit (7 × 7 pixels) is a W pixel (W4) and the pixels adjacent to the left of and below the center pixel (W4) are G pixels.
- The false color detection unit 201 determines whether the center pixel (target pixel) is a bright spot false color pixel that generates a green false color by the processing shown in FIGS. 8 and 9.
- Here, out_up, in_up, in_dn, and out_dn are all W pixel low-frequency signals generated by averaging or weighted addition of a plurality of W pixels along diagonally descending (lower-right) lines in the reference area, which is the 7 × 7 pixel input unit. That is, each is the W pixel low-frequency signal corresponding to one diagonally descending line.
- Specifically, out_up, in_up, in_dn, and out_dn are obtained by weighted addition of the pixel values of the three W pixels on each diagonally descending line at a ratio of 1:2:1.
- in_up and in_dn are W pixel low-frequency signals based on two obliquely lower right lines [upper line (in_up), lower line (in_dn)] closest to the center of the reference region and three W pixels in each line.
- out_up is a W pixel low-frequency signal based on three W pixels in a diagonally right-down line adjacent to in_up in the upward direction.
- out_dn is a W pixel low-frequency signal based on the three W pixels in the diagonally right-down line adjacent to in_dn in the downward direction.
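The per-line 1:2:1 weighted signals can be sketched as below. Which W pixels fall on which diagonal line is fixed by the array geometry of FIG. 8 and is not reproduced here, so this sketch takes the three pixel values of each line as explicit inputs:

```python
def w_line_lowpass(w_pixels, weights=(1, 2, 1)):
    """W pixel low-frequency signal for one diagonal line: weighted
    addition of the pixel values, normalised by the weight sum."""
    total = sum(w * p for w, p in zip(weights, w_pixels))
    return total / sum(weights)

def descending_line_signals(out_up, in_up, in_dn, out_dn):
    """Compute the four per-line signals, each argument being the three
    W pixel values on that diagonally descending line."""
    return {name: w_line_lowpass(vals)
            for name, vals in [("out_up", out_up), ("in_up", in_up),
                               ("in_dn", in_dn), ("out_dn", out_dn)]}
```

Bright inner lines and dark outer lines then produce the signal configuration that the detection test below looks for.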
- Of these values, when the two signals calculated near the center of the reference region, in_up and in_dn, are large and the two signals calculated far from the center, out_up and out_dn, are small, it is determined that the G pixels near the center are included in a highlight area and there is a high possibility that a false color is generated.
- That is, the target pixel is determined to be a possible false color generation pixel when the following condition holds:
- Diff1 = max(in_up, in_dn) - max(out_up, out_dn), Diff1 > Thr1 ... (Formula 3)
- where max(A, B) is a function that returns the larger of A and B, and Thr1 is a threshold value.
- the threshold Thr1 may be a fixed value, a value that can be set by the user, or may be automatically calculated.
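Formula 3 itself is straightforward to express in code; the function name and the sample values are illustrative:

```python
def is_green_false_color_candidate(in_up, in_dn, out_up, out_dn, thr1):
    """Formula 3: flag the target pixel when the inner diagonal lines are
    clearly brighter than the outer ones.
    Diff1 = max(in_up, in_dn) - max(out_up, out_dn) > Thr1."""
    diff1 = max(in_up, in_dn) - max(out_up, out_dn)
    return diff1 > thr1
```

With inner signals around 210 to 230 and outer signals around 50 to 60, Diff1 far exceeds a threshold of, say, 100, so the pixel is flagged; a gentle gradient where inner and outer values differ only slightly is not.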
- FIG. 8 (1b) shows a correspondence example of out_up, in_up, in_dn, out_dn, and threshold value Thr1 when it is determined that the pixel of interest is likely to be a false color generation pixel.
- FIG. 8 (1c) shows a correspondence example of out_up, in_up, in_dn, out_dn, and threshold value Thr1 when it is determined that the target pixel is not a false color generation pixel.
- FIG. 9 (2a) shows a 7×7-pixel reference area in which G pixels are located to the left of and below the central W pixel (W4), as in FIG. 8 (1a).
- When the G pixels are located to the left of and below the center W pixel (W4), the false color detection unit 201 obtains out_up, in_up, center, in_dn, and out_dn according to the following expressions (Expression 4a) to (Expression 4e). Note that W0 to W9 represent the pixel values of the W pixels shown in FIG. 9 (2a).
- out_up, in_up, center, in_dn, and out_dn are all W pixel low-frequency signals calculated by averaging or weighted addition of pixel values of a plurality of W pixels in the diagonally upward direction in the reference area.
- the pixel values of two W pixels in the diagonally upward direction are weighted and added at a ratio of 1: 1.
- “center” is a W pixel low frequency signal based on two W pixels in the diagonally right-up line closest to the G pixel near the center of the reference region.
- in_up is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the upper direction of the center.
- in_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the lower direction of the center.
- out_up is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to in_up in the upward direction.
- out_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to in_dn in the downward direction.
- Of out_up, in_up, center, in_dn, and out_dn, when the three values calculated close to the center of the reference region (in_up, center, and in_dn) are large and their difference from the two values calculated far from the center (out_up and out_dn) is large, it is determined that the G pixel near the center is included in a highlight area and that a false color is highly likely to be generated.
- where max(A, B, C) is a function that returns the largest value among A, B, and C, and Thr2 is a threshold value.
- the threshold value Thr2 may be a fixed value, a value that can be set by the user, or may be automatically calculated.
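The corresponding Diff2 test can be sketched in the same way. The exact form of (Expression 5) is not reproduced above, so the formula below, which compares the maximum of the three inner signals (in_up, center, in_dn) with the maximum of the two outer signals, is an inference from the description, and the pixel values are hypothetical.

```python
def w_low_freq_pair(w_a, w_b):
    """1:1 weighted addition of two W pixels on one diagonally upward line."""
    return (w_a + w_b) / 2.0

def diff2_is_false_color(out_up, in_up, center, in_dn, out_dn, thr2):
    """Inferred (Expression 5): Diff2 = max(in_up, center, in_dn) - max(out_up, out_dn) > Thr2."""
    diff2 = max(in_up, center, in_dn) - max(out_up, out_dn)
    return diff2 > thr2

# Hypothetical example: the lines near the central G pixel are bright,
# the outer lines are dark, so the test fires.
center = w_low_freq_pair(230, 225)
in_up  = w_low_freq_pair(210, 205)
in_dn  = w_low_freq_pair(215, 208)
out_up = w_low_freq_pair(12, 15)
out_dn = w_low_freq_pair(11, 14)
print(diff2_is_false_color(out_up, in_up, center, in_dn, out_dn, thr2=50))  # True
```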
- FIG. 9 (2b) shows a correspondence example of out_up, in_up, center, in_dn, out_dn, and threshold value Thr2 when it is determined that there is a high possibility that the target pixel is a false color generation pixel.
- FIG. 9 (2c) shows a correspondence example of out_up, in_up, center, in_dn, out_dn, and threshold value Thr2 when it is determined that the target pixel is not a false color generation pixel.
- Finally, if both of the two determination formulas described above, (Equation 3) and (Equation 5), are satisfied, the false color detection unit 201 determines that the pixel of interest is a false-color-generating pixel; if either one is not satisfied, it determines that the pixel of interest is not a false-color-generating pixel.
- The above is the processing in the case where the center pixel (target pixel) of the 7×7-pixel input pixel unit is a W pixel and G pixels are adjacent to the left of and below the center W pixel.
- FIG. 10 and FIG. 11 show the processing in the case where the center pixel (target pixel) of the 7×7-pixel input pixel unit is a W pixel and G pixels are adjacent to the right of and above the center W pixel.
- When the center pixel of the input pixel unit is the W pixel (W7) and G pixels are adjacent to the right of and above the center W pixel, the false color detection unit 201 obtains out_up, in_up, in_dn, and out_dn according to the following expressions (Expression 6a) to (Expression 6d). Note that W0 to W11 represent the pixel values of the W pixels shown in FIG. 10 (3a).
- out_up, in_up, in_dn, and out_dn are all W pixel low-frequency signals based on the pixel values of a plurality of W pixels in the diagonally lower right direction in the reference region.
- the pixel values of the three W pixels in the diagonally downward direction are weighted and added at a ratio of 1: 2: 1.
- in_up and in_dn are W-pixel low-frequency signals based on the two diagonally lower right lines closest to the center of the reference region [the upper line (in_up) and the lower line (in_dn)], using three W pixels in each line.
- out_up is a W pixel low-frequency signal based on three W pixels in a diagonally right-down line adjacent to in_up in the upward direction.
- out_dn is a W pixel low-frequency signal based on the three W pixels in the diagonally right-down line adjacent to in_dn in the downward direction.
- Of out_up, in_up, in_dn, and out_dn, when the two values calculated close to the center of the reference region (in_up and in_dn) are large and the two values calculated far from the center (out_up and out_dn) are small, it is determined that the G pixel near the center is included in a highlight area and that a false color is highly likely to be generated.
- where max(A, B) is a function that returns the larger of A and B, and Thr1 is a threshold value.
- FIG. 10 (3b) shows a correspondence example of out_up, in_up, in_dn, out_dn, and threshold value Thr1 when it is determined that the pixel of interest is likely to be a false color generation pixel.
- FIG. 10 (3c) shows a correspondence example of out_up, in_up, in_dn, out_dn, and threshold value Thr1 when it is determined that the target pixel is not a false color generation pixel.
- FIG. 11 (4a) shows a 7×7-pixel reference area in which G pixels are located to the right of and above the central W pixel (W5), as in FIG. 10 (3a).
- When the G pixels are located to the right of and above the central W pixel (W5), the false color detection unit 201 obtains out_up, in_up, center, in_dn, and out_dn according to the following formulas (Formula 8a) to (Formula 8e).
- W0 to W9 represent pixel values of the W pixel shown in FIG. 11 (4a).
- out_up, in_up, center, in_dn, and out_dn are all W pixel low-frequency signals based on pixel values of a plurality of W pixels in a diagonally upward direction in the reference region.
- the pixel values of two W pixels in the diagonally upward direction are weighted and added at a ratio of 1: 1.
- “center” is a W pixel low frequency signal based on two W pixels in the diagonally right-up line closest to the G pixel near the center of the reference region.
- in_up is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the upper direction of the center.
- in_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the lower direction of the center.
- out_up is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to in_up in the upward direction.
- out_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to in_dn in the downward direction.
- Of out_up, in_up, center, in_dn, and out_dn, when the three values calculated close to the center of the reference region (in_up, center, and in_dn) are large and their difference from the two values calculated far from the center (out_up and out_dn) is large, it is determined that the G pixel near the center is included in a highlight area and that a false color is highly likely to be generated.
- where max(A, B, C) is a function that returns the largest value among A, B, and C, and Thr2 is a threshold value.
- FIG. 11 (4b) shows a correspondence example of out_up, in_up, center, in_dn, out_dn, and threshold value Thr2 when it is determined that the pixel of interest is likely to be a false color generation pixel.
- FIG. 11 (4c) shows a correspondence example of out_up, in_up, center, in_dn, out_dn, and threshold value Thr2 when it is determined that the target pixel is not a false color generation pixel.
- Finally, when both of the two determination formulas described above, (Expression 7) and (Expression 9), are satisfied, the false color detection unit 201 determines that the pixel of interest is a false-color-generating pixel; otherwise, it determines that the pixel of interest is not a false-color-generating pixel.
- In either case, the false color detection unit 201 basically performs the same process, that is, a determination based on the following two comparison processes: (1) a comparison between the threshold Thr1 and the difference value Diff1 calculated based on the pixel values of a plurality of W pixels on the diagonally lower right lines; and (2) a comparison between the threshold Thr2 and the difference value Diff2 calculated based on the pixel values of a plurality of W pixels on the diagonally upward lines. If, as a result of these two comparisons, both difference values are larger than the thresholds, the target pixel is determined to be a bright-spot false color pixel. The false color detection unit 201 executes these determination processes and outputs the determination result to the low-frequency signal calculation unit 202.
- FIGS. 12A to 12E show configuration examples of highlight areas in which a magenta false color is generated, as in FIG. 3B.
- a region surrounded by a dotted circle represents a highlight region.
- FIGS. 13 and 14 show processing examples when the central pixel (target pixel) of the input pixel unit (7×7 pixels) is an R pixel.
- FIG. 13 shows the comparison process between the threshold Thr1 and the difference value Diff1 calculated based on the pixel values of a plurality of W pixels on the diagonally lower right lines, and FIG. 14 shows the comparison process between the threshold Thr2 and the difference value Diff2 calculated based on the pixel values of a plurality of W pixels on the diagonally upward lines.
- The example shown in FIG. 13 is the same processing as that of FIGS. 8 and 10 described above, except that the center of the input pixel unit is an R pixel. A plurality of diagonally lower right lines is set starting from positions close to the center of the reference area, and a W-pixel low-frequency signal (W low-band signal) is calculated using a plurality of W-pixel values on each line. The difference value between the maximum W low-frequency signal value of the lines close to the center and the maximum W low-frequency signal value of the lines far from the center is then calculated; that is, the difference value of the W pixels in the diagonally upward direction orthogonal to the four diagonally lower right lines is calculated as Diff1. Further, the calculated difference value Diff1 is compared with the predetermined threshold Thr1.
- The example shown in FIG. 14 is processing similar to that of FIGS. 9 and 11 described above, except that the center of the input pixel unit is an R pixel and that six diagonally upward lines are set and used.
- When the R pixel is located at the center, the false color detection unit 201 obtains out_up, mid_up, in_up, in_dn, mid_dn, and out_dn according to the following expressions (Expression 10a) to (Expression 10f). Note that W0 to W11 represent the pixel values of the W pixels shown in FIG. 14 (6a).
- out_up, mid_up, in_up, in_dn, mid_dn, and out_dn are all W pixel low-frequency signals that are calculated by averaging or weighted addition of a plurality of W pixels in the reference area in the diagonally upward direction.
- the pixel values of two W pixels in the diagonally upward direction are weighted and added at a ratio of 1: 1.
- in_up is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the center R pixel in the upward direction.
- in_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the lower direction of the center R pixel.
- mid_up is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to in_up in the upward direction.
- mid_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to in_dn in the downward direction.
- out_up is a W pixel low-frequency signal based on two W pixels in an obliquely upward line adjacent to the upper direction of mid_up.
- out_dn is a W pixel low-frequency signal based on two W pixels in a diagonally upward line adjacent to the lower direction of mid_dn.
- Of out_up, mid_up, in_up, in_dn, mid_dn, and out_dn, when the four values calculated close to the center of the reference region (mid_up, in_up, in_dn, and mid_dn) are large and their difference from the two values calculated far from the center (out_up and out_dn) is large, it is determined that the R pixel near the center is included in a highlight area and that a false color is highly likely to be generated.
- where max(A, B, C) is a function that returns the largest value among A, B, and C, and Thr2 is a threshold value.
- the threshold value Thr2 may be a fixed value, a value that can be set by the user, or may be automatically calculated.
- FIG. 14 (6b) shows a correspondence example of out_up, mid_up, in_up, in_dn, mid_dn, out_dn, and threshold value Thr2 when it is determined that the pixel of interest is likely to be a false color generation pixel.
- FIG. 14 (6c) shows a correspondence example of out_up, mid_up, in_up, in_dn, mid_dn, out_dn, and threshold value Thr2 when it is determined that the target pixel is not a false color generation pixel.
- As described with reference to FIGS. 13 and 14, the false color detection unit 201 determines whether or not the determination formulas Diff1 > Thr1 and Diff2 > Thr2 are satisfied. Only when both determination formulas are satisfied is the target pixel determined to be a false-color-generating pixel. The determination result is output to the low-frequency signal calculation unit 202.
- FIG. 15 and FIG. 16 show processing examples when the center pixel is a B pixel.
- FIG. 15 shows the comparison process between the threshold Thr1 and the difference value Diff1 calculated based on the pixel values of a plurality of W pixels on the diagonally lower right lines, and FIG. 16 shows the comparison process between the threshold Thr2 and the difference value Diff2 calculated based on the pixel values of a plurality of W pixels on the diagonally upward lines.
- Also when the central pixel is an R or B pixel, the false color detection unit 201 determines whether or not both of the two expressions Diff1 > Thr1 and Diff2 > Thr2 are satisfied. If both are satisfied, it determines that the target pixel is a bright-spot false color pixel; if at least one of the two expressions is not satisfied, it determines that the target pixel is not a bright-spot false color pixel. The determination result is output to the low-frequency signal calculation unit 202.
- the false color detection unit 201 executes a determination process based on the following two comparison processes.
- (1) A W-pixel low-frequency component signal corresponding to each line is calculated based on the pixel values of a plurality of W pixels in each of a plurality of diagonally lower right lines set in the vicinity of the target pixel, and the difference value Diff1 between the maximum W-pixel low-frequency component signal of a plurality of inner lines near the target pixel and the maximum W-pixel low-frequency component signal of a plurality of outer lines far from the target pixel is compared with the threshold Thr1.
- (2) A W-pixel low-frequency component signal corresponding to each line is calculated based on the pixel values of a plurality of W pixels in each of a plurality of diagonally upward lines set in the vicinity of the target pixel, and the difference value Diff2 between the maximum W-pixel low-frequency component signal of a plurality of inner lines near the target pixel and the maximum W-pixel low-frequency component signal of a plurality of outer lines far from the target pixel is compared with the threshold Thr2.
- the false color detection unit 201 executes these determination processes and outputs a determination result to the low-frequency signal calculation unit 202.
- The low-frequency signal calculation unit 202 calculates the low-frequency signals (mR, mG, mB) of the R, G, and B pixels and the low-frequency signal (mW) of the W pixel, which are used by the pixel interpolation unit 203.
- The low-frequency signal calculation unit 202 receives, from the false color detection unit 201, the determination as to whether or not the target pixel to be interpolated by the pixel interpolation unit 203, that is, the target pixel at the center of the input pixel unit (7×7 pixels), is a bright-spot false color pixel.
- If the target pixel is determined not to be a false color pixel, the low-frequency signal calculation unit 202 applies predetermined first low-pass filter (LPF) coefficients, that is, coefficients in which pixels closer to the target pixel position are given larger weights, to calculate the low-frequency signal.
- If the target pixel is determined to be a false color pixel, the low-frequency signal calculation unit 202 applies second low-pass filter (LPF) coefficients, which have an enhanced false color correction effect, to calculate the low-frequency signal.
- The second low-pass filter (LPF) coefficients are set so that small coefficients are given to pixels close to the center of the reference area, which is the 7×7 input pixel unit, that is, close to the target pixel position to be converted.
- This process is a process for calculating the interpolated pixel value while suppressing the influence of the pixel value in the highlight area. By performing such processing, the influence of the pixel value in the highlight area is suppressed, and false colors are reduced.
- As described above, the low-frequency signal calculation unit 202 calculates the low-frequency signals mR, mG, mB, and mW for the RGBW color signals. That is, the low-frequency signal calculation unit 202 selectively executes the following processing:
- If the target pixel is determined to be a false color pixel, a low-frequency signal calculation process applying low-pass filter (LPF) coefficients in which the pixel value contribution ratio of reference pixels in the area close to the target pixel is set low and the pixel value contribution ratio of reference pixels in the surrounding area far from the target pixel is set high.
- If the target pixel is determined not to be a false color pixel, a low-frequency signal calculation process applying low-pass filter (LPF) coefficients in which the pixel value contribution ratio of reference pixels in the area close to the target pixel is set high and the pixel value contribution ratio of reference pixels in the surrounding area far from the target pixel is set low.
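This selective application can be sketched as follows. The two 7×7 coefficient masks are hypothetical (the actual coefficients are those of FIGS. 17 to 20); only their defining property is modeled: the normal mask weights reference pixels near the target pixel heavily, while the false color mask suppresses them.

```python
def low_freq_signal(ref_area, normal_lpf, false_color_lpf, is_false_color):
    """ref_area and the masks are 7x7 nested lists; each mask sums to 1.
    Returns the low-frequency signal (mR/mG/mB/mW) for the target pixel."""
    lpf = false_color_lpf if is_false_color else normal_lpf
    return sum(ref_area[y][x] * lpf[y][x] for y in range(7) for x in range(7))

# Hypothetical reference area: a bright highlight covers the 3x3 block
# around the target pixel at the center.
ref = [[100.0] * 7 for _ in range(7)]
for y in range(2, 5):
    for x in range(2, 5):
        ref[y][x] = 1000.0

# Degenerate "normal" mask: all weight on the target pixel itself.
normal = [[0.0] * 7 for _ in range(7)]
normal[3][3] = 1.0

# "False color" mask: zero weight on the inner 3x3, uniform weight
# on the 40 outer positions.
ring = [[1.0 / 40.0] * 7 for _ in range(7)]
for y in range(2, 5):
    for x in range(2, 5):
        ring[y][x] = 0.0

print(low_freq_signal(ref, normal, ring, is_false_color=False))  # 1000.0
print(low_freq_signal(ref, normal, ring, is_false_color=True))   # ~100, highlight suppressed
```

With the false color mask, the bright highlight around the target pixel contributes nothing, so the interpolated value is built from the surrounding region only.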
- Specific examples of the LPF coefficients will be described with reference to FIG. 17.
- FIG. 17 (1a) shows an example of the input pixel unit in which the center pixel is a W pixel and G pixels are located to the left of and below the center W pixel; the G pixels applied to the calculation processing of the G-signal low-frequency signal mG are indicated by a thick frame.
- FIG. 17 (1b) is an example of setting the first low-pass filter (LPF) coefficients, that is, normal LPF coefficients in which larger coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center is determined not to be a false color pixel.
- FIG. 17 (1c) is an example of setting the second low-pass filter (LPF) coefficients, in which smaller coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center of the input pixel unit is determined to be a false color pixel.
- When the first LPF coefficients are applied, the G low-frequency signal mG corresponding to the target pixel position is obtained by multiplying the pixel value of each of the 12 G pixels in the reference region shown in FIG. 17 (1a) by the corresponding coefficient (1/32 to 6/32) shown in FIG. 17 (1b) and adding the results.
- When the second LPF coefficients are applied, the G low-frequency signal mG corresponding to the target pixel position is obtained by multiplying the pixel value of each of the 12 G pixels in the reference region shown in FIG. 17 (1a) by the corresponding coefficient (0/16 to 3/16) shown in FIG. 17 (1c) and adding the results.
- In the former case, the G low-frequency signal mG is a value that largely reflects the G pixel values close to the center of the reference region.
- In the latter case, the G low-frequency signal mG is a value in which the contribution of the G pixel values close to the center of the reference region is zero or suppressed.
- The coefficient setting shown in FIG. 17 is one specific example, and other coefficient settings may be used. In any case, the LPF coefficients are set so that, if the target pixel at the center is determined not to be a false color pixel, larger coefficients are given to pixels closer to the center of the reference region, and, if the target pixel is determined to be a false color pixel, smaller coefficients are given to pixels closer to the center of the reference region.
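A numeric sketch of the two weighted sums follows. The 12-tap coefficient lists below are hypothetical: they only respect the stated ranges (1/32 to 6/32 and 0/16 to 3/16) and sum to 1; the actual spatial placement is that of FIG. 17. The G pixel values are invented so that the two inner taps sit in a highlight.

```python
def m_g(g_pixels, coeffs):
    """mG = sum of the 12 G pixel values, each multiplied by its LPF coefficient."""
    assert len(g_pixels) == len(coeffs) == 12
    return sum(g * c for g, c in zip(g_pixels, coeffs))

# Hypothetical first (normal) coefficients, range 1/32..6/32, summing to 32/32.
# Taps 5 and 6 stand for the two G pixels nearest the center.
normal = [1/32, 2/32, 2/32, 1/32, 2/32, 6/32, 6/32, 2/32, 2/32, 4/32, 2/32, 2/32]

# Hypothetical second (false color) coefficients, range 0/16..3/16, summing
# to 16/16: the two inner taps get zero weight.
suppress = [2/16, 2/16, 2/16, 2/16, 2/16, 0/16, 0/16, 2/16, 1/16, 1/16, 1/16, 1/16]

# G pixel values: the two inner pixels sit in a bright highlight.
g = [100.0] * 5 + [1000.0, 1000.0] + [100.0] * 5

print(m_g(g, normal))    # 437.5 -> pulled up by the bright inner pixels
print(m_g(g, suppress))  # 100.0 -> inner pixels contribute nothing
```

The suppressed sum ignores the highlight entirely, which is exactly the false-color-reduction effect described above.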
- FIG. 18 (2a) shows an example of the input pixel unit in which the center pixel is a W pixel and G pixels are located to the right of and above the center W pixel; the G pixels applied to the calculation processing of the G-signal low-frequency signal mG are indicated by a thick frame.
- FIG. 18 (2b) is an example of setting the first low-pass filter (LPF) coefficients, that is, normal LPF coefficients in which larger coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center is determined not to be a false color pixel.
- FIG. 18 (2c) is an example of setting the second low-pass filter (LPF) coefficients, in which smaller coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center of the input pixel unit is determined to be a false color pixel.
- When the first LPF coefficients are applied, the G low-frequency signal mG corresponding to the target pixel position is obtained by multiplying the pixel value of each of the 12 G pixels in the reference region shown in FIG. 18 (2a) by the corresponding coefficient (1/32 to 6/32) shown in FIG. 18 (2b) and adding the results.
- When the second LPF coefficients are applied, the G low-frequency signal mG corresponding to the target pixel position is obtained by multiplying the pixel value of each of the 12 G pixels in the reference region shown in FIG. 18 (2a) by the corresponding coefficient (0/16 to 3/16) shown in FIG. 18 (2c) and adding the results.
- In the former case, the G low-frequency signal mG is a value that largely reflects the G pixel values close to the center of the reference region.
- In the latter case, the G low-frequency signal mG is a value in which the contribution of the G pixel values close to the center of the reference region is zero or suppressed.
- The coefficient setting shown in FIG. 18 is one specific example, and other coefficient settings may be used. In any case, the LPF coefficients are set so that, if the target pixel at the center is determined not to be a false color pixel, larger coefficients are given to pixels closer to the center of the reference region, and, if the target pixel is determined to be a false color pixel, smaller coefficients are given to pixels closer to the center of the reference region.
- the examples shown in FIGS. 17 and 18 are LPF coefficient setting examples applied to the calculation of the G low-frequency signal mG when the center of the input unit pixel region is a W pixel.
- For other pixel and signal combinations, the low-frequency signal calculation unit 202 also executes the same low-frequency signal calculation processing. That is, if the target pixel at the center of the reference area is determined not to be a false color pixel, larger coefficients are set for pixels closer to the center of the reference area, and if the target pixel is determined to be a false color pixel, smaller coefficients are set for pixels closer to the center of the reference area; the low-pass filter is applied with these LPF coefficients to calculate the low-band signals mR, mG, mB, and mW.
- the example shown in FIG. 19 is an example of LPF setting applied to the calculation process of the low-frequency signal mB of the B signal when the central pixel of the input pixel unit is the R pixel.
- FIG. 19 (3a) shows an example of the input pixel unit, and the B pixel applied to the calculation process of the low-frequency signal mB of the B signal when the central pixel is the R pixel is indicated by a thick frame.
- FIG. 19 (3b) is an example of setting the first low-pass filter (LPF) coefficients, that is, normal LPF coefficients in which larger coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center is determined not to be a false color pixel.
- FIG. 19 (3c) is an example of setting the second low-pass filter (LPF) coefficients, in which smaller coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center of the input pixel unit is determined to be a false color pixel.
- When the first LPF coefficients are applied, the B low-frequency signal mB corresponding to the target pixel position is obtained by multiplying the pixel value of each of the eight B pixels in the reference region shown in FIG. 19 (3a) by the corresponding coefficient (1/32 to 9/32) shown in FIG. 19 (3b) and adding the results.
- When the second LPF coefficients are applied, the B low-frequency signal mB corresponding to the target pixel position is obtained by multiplying the pixel value of each of the eight B pixels in the reference region shown in FIG. 19 (3a) by the corresponding coefficient (0/16 to 3/16) shown in FIG. 19 (3c) and adding the results.
- In the former case, the B low-frequency signal mB is a value that largely reflects the B pixel values close to the center of the reference region.
- In the latter case, the B low-frequency signal mB is a value in which the contribution of the B pixel values close to the center of the reference region is zero or suppressed.
- The coefficient setting shown in FIG. 19 is one specific example, and other coefficient settings may be used. In any case, the LPF coefficients are set so that, if the target pixel at the center is determined not to be a false color pixel, larger coefficients are given to pixels closer to the center of the reference region, and, if the target pixel is determined to be a false color pixel, smaller coefficients are given to pixels closer to the center of the reference region.
- the example shown in FIG. 20 is an example of LPF setting applied to the calculation process of the low-frequency signal mR of the R signal when the central pixel of the input pixel unit is the B pixel.
- FIG. 20 (4a) shows an example of the input pixel unit, and the R pixel applied to the calculation process of the low-frequency signal mR of the R signal when the central pixel is the B pixel is indicated by a thick frame.
- FIG. 20 (4b) is an example of setting the first low-pass filter (LPF) coefficients, that is, normal LPF coefficients in which larger coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center is determined not to be a false color pixel.
- FIG. 20 (4c) is an example of setting the second low-pass filter (LPF) coefficients, in which smaller coefficients are given to pixels closer to the target pixel position (the center of the reference region), used when the target pixel at the center of the input pixel unit is determined to be a false color pixel.
- When the first LPF coefficients are applied, the R low-frequency signal mR corresponding to the target pixel position is obtained by multiplying the pixel value of each of the eight R pixels in the reference region shown in FIG. 20 (4a) by the corresponding coefficient (1/32 to 9/32) shown in FIG. 20 (4b) and adding the results.
- When the second LPF coefficients are applied, the R low-frequency signal mR corresponding to the target pixel position is obtained by multiplying the pixel value of each of the eight R pixels in the reference region shown in FIG. 20 (4a) by the corresponding coefficient (0/16 to 3/16) shown in FIG. 20 (4c) and adding the results.
- The settings of the LPF coefficients in FIGS. 20 (4b) and (4c) differ mainly in the coefficients for the two R pixels close to the center: a large coefficient is set when the target pixel at the center is determined not to be a false color pixel, and a small coefficient is set when the target pixel at the center is determined to be a false color pixel.
- In the former case, the R low-frequency signal mR is a value that largely reflects the R pixel values close to the center of the reference region.
- In the latter case, the R low-frequency signal mR is a value in which the contribution of the R pixel values close to the center of the reference region is zero or suppressed.
- The coefficient setting shown in FIG. 20 is one specific example, and other coefficient settings may be used. In any case, the LPF coefficients are set so that, if the target pixel at the center is determined not to be a false color pixel, larger coefficients are given to pixels closer to the center of the reference region, and, if the target pixel is determined to be a false color pixel, smaller coefficients are given to pixels closer to the center of the reference region.
- As described above, the low-frequency signal calculation unit 202 receives, from the false color detection unit, determination information as to whether or not the target pixel at the center of the input pixel unit (the processing target pixel) is a bright-spot false color pixel, calculates the low-frequency signals mR, mG, mB, and mW of the RGBW signals by LPF application processing with LPF coefficients set differently according to the input information, and outputs them to the pixel interpolation unit 203.
- The pixel interpolation unit 203 applies the low-frequency signals mR, mG, mB, and mW of the RGBW signals input from the low-frequency signal calculation unit 202 to convert the RGBW pixel array into an RGB array, that is, executes re-mosaic processing.
- Except for using the low-frequency signals mR, mG, mB, and mW input from the low-frequency signal calculation unit 202, the pixel interpolation processing executed by the pixel interpolation unit 203 follows the processing described in the applicant's earlier Japanese Patent Application Laid-Open No. 2011-182354.
- The pixel interpolation unit 203 executes the following processes as the processing for converting the RGBW pixel array into the RGB pixel array: a process of converting W pixels to G pixels; a process of converting G pixels to R or B pixels; a process of converting R pixels to B pixels; and a process of converting B pixels to R pixels.
- When the target pixel to be converted is determined to be a bright-spot false color pixel, the low-frequency signals mR, mG, mB, and mW input from the low-frequency signal calculation unit 202 are signals in which the influence of the bright spot is suppressed. Therefore, the pixel interpolation unit 203 can set an interpolated pixel value in which the influence of the false color is suppressed.
- The flowchart shown in FIG. 21 is the processing sequence for one conversion target pixel when converting the RGBW array to the RGB array; this processing is executed in the data conversion processing section upon input of a reference pixel area including the conversion target pixel, for example, a 7×7-pixel area.
- the processing according to the flow shown in FIG. 21 is sequentially executed for each processing pixel.
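The branch structure of steps S101 to S106 can be sketched as follows. The Diff computations are stand-ins (the real calculations are the line-based formulas described earlier), and the content of step S105, applying the false-color-suppressing LPF, is inferred from the surrounding description.

```python
# Control-flow sketch of FIG. 21 (steps S101-S106). The diff callables are
# hypothetical stand-ins for the line-based Diff1/Diff2 calculations.

def process_target_pixel(compute_diff1, compute_diff2, thr1, thr2):
    diff1 = compute_diff1()        # S101: obtain Diff1
    if not diff1 > thr1:           # S102: Diff1 <= Thr1 (No)
        return "normal_lpf"        # S106: center-weighted (normal) LPF
    diff2 = compute_diff2()        # S103: obtain Diff2
    if not diff2 > thr2:           # S104: Diff2 <= Thr2 (No)
        return "normal_lpf"        # S106
    return "false_color_lpf"       # S105 (inferred): suppressing LPF

print(process_target_pixel(lambda: 200, lambda: 180, 50, 50))  # false_color_lpf
print(process_target_pixel(lambda: 200, lambda: 10, 50, 50))   # normal_lpf
```

Note that Diff2 is only computed when the Diff1 test already passed, matching the early exit to step S106 in the flowchart.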
- In step S101, the W difference value Diff1 in the diagonally upward direction of the reference area is obtained, and in step S102, Diff1 is compared with the threshold Thr1.
- Specifically, a plurality of diagonally lower right lines is set starting from positions close to the center of the reference area, a W low-frequency signal is calculated using a plurality of W-pixel values on each line, and the difference value between the maximum W low-frequency signal value of the lines closer to the center and the maximum W low-frequency signal value of the lines farther from the center is calculated. That is, the difference value of the W pixels in the diagonally upward direction orthogonal to the four diagonally lower right lines is calculated as Diff1, and the calculated difference value Diff1 is compared with the predetermined threshold Thr1.
- as a result of the comparison between Diff1 and the threshold value Thr1 in step S102: if Diff1 is greater than the threshold Thr1 (Yes), the process proceeds to step S103; if Diff1 is not greater than the threshold Thr1 (No), the process proceeds to step S106.
- in step S106, a necessary low-frequency signal is calculated by applying a low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively high.
- the necessary low-frequency signal is a low-frequency signal required for the target pixel conversion processing executed by the pixel interpolation unit 203; at least one of the low-frequency signals mR, mG, mB, and mW is calculated.
- this low-frequency signal calculation process is a process of the low-frequency signal calculation unit 202 shown in FIG. 5.
- if it is determined in step S102 that Diff1 is greater than the threshold Thr1 (Yes), it is determined that the target pixel may be a bright spot false color generation pixel, and the process proceeds to step S103.
- in step S103, the difference value Diff2 of the W pixels in the diagonally down-right direction of the reference area is obtained.
- in step S104, Diff2 is compared with the threshold value Thr2.
- as a result of the comparison between Diff2 and the threshold value Thr2 in step S104: if Diff2 is greater than the threshold Thr2 (Yes), the process proceeds to step S105; if Diff2 is not greater than the threshold Thr2 (No), the process proceeds to step S106.
- in step S106, a necessary low-frequency signal is calculated by applying a low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively high.
- the necessary low-frequency signal is a low-frequency signal required for the target pixel conversion processing executed by the pixel interpolation unit 203; at least one of the low-frequency signals mR, mG, mB, and mW is calculated.
- this low-frequency signal calculation process is a process of the low-frequency signal calculation unit 202 shown in FIG. 5.
- if it is determined in step S104 that Diff2 is greater than the threshold Thr2 (Yes), it is determined that the target pixel is a bright spot false color generation pixel, and the process proceeds to step S105.
- in step S105, a necessary low-frequency signal is calculated by applying a low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively low.
- the necessary low-frequency signal is a low-frequency signal required for the target pixel conversion processing executed by the pixel interpolation unit 203; at least one of the low-frequency signals mR, mG, mB, and mW is calculated.
- this low-frequency signal calculation process is a process of the low-frequency signal calculation unit 202 shown in FIG. 5.
- in step S107, the interpolated pixel value of the target pixel is calculated by applying either the low-frequency signal generated in step S105 or the low-frequency signal generated in step S106.
- this process is executed by the pixel interpolation unit 203 shown in FIG. 5.
- if the target pixel is determined to be a bright spot false color, the pixel interpolation unit 203 performs interpolation using the low-frequency signal generated in step S105; if the target pixel is determined not to be a bright spot false color, interpolation is performed using the low-frequency signal generated in step S106.
- that is, for a bright spot false color pixel, interpolation is performed using the low-frequency signal generated by the application process of the low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively low.
- otherwise, interpolation is performed using the low-frequency signal generated by the application process of the low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively high.
- as a result, an interpolated pixel value in which the influence of the bright spot false color is reduced is set for the pixel areas determined to be bright spot false colors.
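The branching above (steps S101 to S107) can be sketched as follows. This is a simplified illustration, not the patent's implementation: `select_lpf` and its arguments are hypothetical names, `d1` and `d2` stand for the precomputed W difference values of steps S101/S103, and the actual LPF application and interpolation are performed by the low-frequency signal calculation unit 202 and the pixel interpolation unit 203.

```python
def select_lpf(d1, d2, thr1, thr2, lpf_low, lpf_high):
    """Choose which LPF coefficient set to use for the low-frequency
    signal calculation, per the determinations of steps S102/S104."""
    if d1 > thr1 and d2 > thr2:
        # Bright spot false color suspected -> step S105:
        # weight reference pixels near the target pixel relatively low.
        return lpf_low
    # Otherwise -> step S106: weight near pixels relatively high.
    return lpf_high

print(select_lpf(100, 80, 50, 50, "low-near-weight", "high-near-weight"))  # -> low-near-weight
print(select_lpf(10, 80, 50, 50, "low-near-weight", "high-near-weight"))   # -> high-near-weight
```

Note that in the actual flow of FIG. 21, Diff2 is only computed after Diff1 passes its threshold; the short-circuit `and` above mirrors that ordering only when `d2` is computed lazily.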
- the process shown in FIG. 21 is executed under the control of the control unit 140, for example, according to a program stored in the memory 130 shown in FIG. 4.
- the data conversion processing unit 200 shown in FIGS. 4 and 5 generates an image according to the RGB array 182 as a result of the false color correction and the re-mosaic processing by the above-described processing, and outputs the image to the RGB signal processing unit 250.
- the RGB signal processing unit 250 executes the same processing as the signal processing unit provided in a conventional camera or the like. Specifically, the color image 183 is generated by executing demosaic processing, white balance adjustment processing, ⁇ correction processing, and the like. The generated color image 183 is recorded in the memory 130.
- the false color detection unit 201 performs the determination of step S102 in the flow of FIG. 21, that is, Diff1 > Thr1, and further the determination of step S104, that is, Diff2 > Thr2; when these two determination formulas are both satisfied, it determines that the target pixel is a bright spot false color generation pixel.
- the description above has assumed that the low-frequency signal calculation unit executes low-frequency signal calculation processing applying different LPF coefficients, as described with reference to FIGS. 17 and 18, according to whether or not the target pixel is a bright spot false color pixel.
- the false color detection process in the false color detection unit 201 and the low-frequency signal calculation process in the low-frequency signal calculation unit 202 are not limited to these processes, and other processes may be executed. One example will be described below.
- FIG. 22 shows examples of highlight areas in which a bright spot false color occurs when G pixels are located to the left of and below the center W pixel.
- the dotted area in FIGS. 22A to 22G is a highlight area.
- the false color detection unit 201 focuses on the W pixels in the reference area of 7 × 7 pixels, that is, the input pixel unit, and detects W pixels having high pixel values. It then determines whether the detected high-value W pixels match the W pixel pattern of any of the highlight areas shown in FIGS. 22A to 22G. A commonly used method such as pattern matching can be applied to obtain the matching W pixel arrangement.
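The pattern check described above can be sketched as follows; a minimal exact-set match stands in here for the general pattern matching the text mentions, and the positions and registered patterns below are hypothetical.

```python
def matches_registered_pattern(high_w_mask, registered_patterns):
    """Return True if the set of high-value W pixel positions in the
    reference area matches any pre-registered highlight-area pattern
    (patterns in the spirit of FIGS. 22A-22G; positions are sets of
    (row, col) coordinates)."""
    return any(high_w_mask == pattern for pattern in registered_patterns)

# Hypothetical data: W pixels exceeding a brightness threshold were
# detected at these positions of the 7x7 reference area.
high_w = {(2, 3), (3, 2), (3, 3)}
registered = [{(2, 3), (3, 2), (3, 3)}, {(3, 3), (3, 4), (4, 3)}]
print(matches_registered_pattern(high_w, registered))  # -> True
```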
- the low-frequency signal calculation unit 202 selects an LPF corresponding to the W pixel arrangement obtained by the false color detection unit 201 and calculates a low-frequency signal to be applied to the pixel interpolation process.
- FIG. 23 shows a setting example of LPF coefficients applied to the calculation of the low frequency signal mG of the G signal.
- FIGS. 23A to 23G are setting examples of LPF coefficients applied to the calculation of the low-frequency signal mG used corresponding to the highlight regions in FIGS. 22A to 22G, respectively.
- the pixels marked L or H in FIGS. 23A to 23G are G pixels: L indicates a pixel for which a relatively low LPF coefficient is set, and H indicates a pixel for which a relatively high LPF coefficient is set.
- the LPF coefficient at the G pixel position in the highlight area is set to be relatively small.
- a desired low-frequency signal is obtained by performing a convolution operation on the input pixel unit with the LPF coefficient settings shown in FIG. 23.
- for example, in the LPF coefficient setting shown in FIG. 23A, only the coefficient at the G pixel position in the highlight area shown in FIG. 22A is L, and the coefficients at the other G pixel positions are H.
- in this way, the low-frequency signal calculation unit 202 calculates a low-frequency signal in which the contribution ratio of the pixel values constituting the highlight area is reduced according to the shape of the highlight area, and outputs it to the pixel interpolation unit 203.
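As a rough sketch of this idea, the low-frequency G signal mG can be computed as a normalized weighted sum over the G pixel positions, giving highlight-area G pixels a low coefficient (L) and the remaining G pixels a high coefficient (H), in the spirit of FIG. 23. The coefficient values, pixel values, and normalization below are hypothetical, not the patent's actual filter kernels.

```python
L_COEF, H_COEF = 1.0, 4.0  # hypothetical L / H coefficient values

def mG_lowfreq(g_values, in_highlight):
    """g_values: pixel values of the G pixels in the reference area.
    in_highlight: True for G pixels inside the detected highlight area."""
    weights = [L_COEF if h else H_COEF for h in in_highlight]
    total = sum(w * v for w, v in zip(weights, g_values))
    return total / sum(weights)

# One very bright highlight G pixel among three ordinary ones: its
# contribution to mG is suppressed by the low coefficient.
print(mG_lowfreq([250.0, 60.0, 64.0, 62.0], [True, False, False, False]))
# ~76.5, whereas uniform weights would give 109.0
```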
- FIG. 24 shows an example in which the highlight area is composed of W, R, and B pixels.
- the dotted lines in FIGS. 24A to 24H are highlight areas.
- the false color detection unit 201 focuses on the W pixels in the reference area of 7 × 7 pixels, that is, the input pixel unit, and detects W pixels having high pixel values. It then determines whether the detected high-value W pixels match the W pixel pattern of any of the highlight areas shown in FIGS. 24A to 24H. A commonly used method such as pattern matching can be applied to obtain the matching W pixel arrangement.
- the low-frequency signal calculation unit 202 selects an LPF corresponding to the W pixel arrangement obtained by the false color detection unit 201 and calculates a low-frequency signal to be applied to the pixel interpolation process.
- FIG. 25 shows a setting example of LPF coefficients applied to the calculation of the low frequency signal mB of the B signal.
- FIGS. 25A to 25H are LPF coefficient setting examples applied to the calculation of the low-frequency signal mB used corresponding to the highlighted areas in FIGS. 24A to 24H, respectively.
- the pixels marked L or H in FIGS. 25A to 25H are B pixels: L indicates a pixel for which a relatively low LPF coefficient is set, and H indicates a pixel for which a relatively high LPF coefficient is set.
- the LPF coefficient at the B pixel position in the highlight area is set to be relatively small.
- a desired low-frequency signal is obtained by performing a convolution operation on the input pixel unit with the LPF coefficient settings shown in FIG. 25.
- for example, in the LPF coefficient setting shown in FIG. 25A, only the coefficient at the B pixel position in the highlight area shown in FIG. 24A is L, and the coefficients at the other B pixel positions are H.
- in this way, the low-frequency signal calculation unit 202 calculates a low-frequency signal in which the contribution ratio of the pixel values constituting the highlight area is reduced according to the shape of the highlight area, and outputs it to the pixel interpolation unit 203.
- the pixel interpolation unit 203 performs interpolation using the low-frequency signal, calculated by the low-frequency signal calculation unit 202, in which the contribution ratio of the pixel values constituting the highlight region is reduced. This makes it possible to calculate an optimal low-frequency signal corresponding to the highlight area and to realize optimal pixel interpolation with a low contribution ratio of the pixel values in the highlight area.
- FIG. 26 is a diagram showing a flowchart in the case of performing bright spot false color detection, low-frequency signal calculation, and pixel interpolation according to the present embodiment.
- like the flow shown in FIG. 21 described above, the flowchart shown in FIG. 26 is the processing sequence for one conversion target pixel when converting the RGBW array to the RGB array, and this is a process executed in the data conversion processing unit by inputting a reference pixel area including the conversion target pixel, for example, a 7 × 7 pixel area.
- the processing according to the flow shown in FIG. 26 is sequentially executed for each processing pixel. Each step will be described below.
- step S201 an arrangement pattern of W pixels having a large pixel value is detected from the reference area.
- in step S202, it is determined whether or not the high-value W pixel arrangement pattern detected in step S201 matches any of the highlight area patterns registered in advance in the memory. For example, the highlight area patterns shown in FIGS. 22A to 22G are registered in the memory. These processes are executed by the false color detection unit 201 shown in FIG. 5.
- if it is determined in step S202 that no registered pattern matches the detected high-value W pixel arrangement pattern (No), it is determined that the target pixel is not a bright spot false color, and the process proceeds to step S204, where a necessary low-frequency signal is calculated by the low-pass filter (LPF) application process in which the LPF coefficients of the reference pixels close to the target pixel are set relatively high.
- the necessary low-frequency signal is a low-frequency signal required for the target pixel conversion processing executed by the pixel interpolation unit 203; at least one of the low-frequency signals mR, mG, mB, and mW is calculated.
- this low-frequency signal calculation process is a process of the low-frequency signal calculation unit 202 shown in FIG. 5.
- on the other hand, if a registered pattern that matches the detected high-value W pixel arrangement pattern is found in step S202 (Yes), the process proceeds to step S203.
- in step S203, LPF coefficients corresponding to the registered pattern that matches the detected high-value W pixel arrangement pattern are selected, and a low-frequency signal enabling processing that also serves as false color correction is calculated.
- this low-frequency signal calculation process is a process of the low-frequency signal calculation unit 202 shown in FIG. 5.
- the low-frequency signal calculation unit 202 receives, from the false color detection unit 201, information on the registered pattern that matches the high-value W pixel arrangement pattern, and sets the LPF coefficients corresponding to that registered pattern according to this information.
- a low-frequency signal is then calculated by applying the resulting low-pass filter (LPF).
- that is, a low-frequency signal is calculated by applying a low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively low, for example with the coefficient settings shown in FIGS. 23A to 23G.
- in step S205, the interpolated pixel value of the target pixel is calculated by applying either the low-frequency signal generated in step S203 or the low-frequency signal generated in step S204.
- this process is executed by the pixel interpolation unit 203 shown in FIG. 5.
- if the target pixel is determined to be a bright spot false color, the pixel interpolation unit 203 performs interpolation using the low-frequency signal generated in step S203; if the target pixel is determined not to be a bright spot false color, interpolation is performed using the low-frequency signal generated in step S204.
- that is, for a bright spot false color pixel, interpolation is performed using the low-frequency signal generated by the application process of the low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively low, as in FIGS. 23A to 23G.
- otherwise, interpolation is performed using the low-frequency signal generated by the application process of the low-pass filter (LPF) in which the LPF coefficients of the reference pixels close to the target pixel are set relatively high.
- as a result, an interpolated pixel value in which the influence of the bright spot false color is reduced is set for the pixel areas determined to be bright spot false colors.
- the process shown in FIG. 26 is executed under the control of the control unit 140, for example, according to a program stored in the memory 130 shown in FIG. 4.
- the data conversion processing unit 200 shown in FIGS. 4 and 5 generates an image according to the RGB array 182 as a result of the false color correction and the re-mosaic processing by the above-described processing, and outputs the image to the RGB signal processing unit 250.
- the RGB signal processing unit 250 executes the same processing as the signal processing unit provided in a conventional camera or the like. Specifically, the color image 183 is generated by executing demosaic processing, white balance adjustment processing, ⁇ correction processing, and the like. The generated color image 183 is recorded in the memory 130.
- the technology disclosed in this specification can take the following configurations.
- the data conversion processing unit includes: a false color detection unit that detects false color pixels of the input image and outputs detection information;
- a low-frequency signal calculation unit that inputs detection information from the false color detection unit, changes a processing mode according to the detection information, and calculates a low-frequency signal corresponding to each RGBW color;
- a pixel interpolation unit that generates an RGB array image by performing pixel conversion of the RGBW array of the input image by pixel interpolation using the low-frequency signal calculated by the low-frequency signal calculation unit;
- and the interpolation processing unit of the image processing apparatus calculates an interpolated pixel value based on the assumption that the low-frequency signal mW of W pixels and the low-frequency signals mR, mG, and mB of RGB pixels are in a proportional relationship in a local region.
- the low-frequency signal calculation unit calculates the low-frequency signal by applying a low-pass filter in which low-pass filter coefficients are set such that the pixel value contribution ratio of pixels near the target pixel is relatively lower than that of pixels separated from the target pixel.
- the false color detection unit detects the presence or absence of a local highlight region, which is a local high-luminance region in the input image, and determines that the target pixel is a false color pixel when the target pixel is included in the local highlight region. The image processing apparatus according to any one of (1) to (3).
- the false color detection unit detects gradient information of the W pixels in the vicinity of the target pixel, and when the W pixel values in the vicinity of the target pixel are higher than the surrounding W pixel values in each of two orthogonal directions, determines that the target pixel is included in a local highlight area, which is a local high-luminance area, and is a false color pixel. The image processing apparatus according to any one of (1) to (4).
- the false color detection unit (a) calculates a W pixel low-frequency component signal for each of a plurality of diagonally down-right lines set in the vicinity of the target pixel, based on the pixel values of a plurality of W pixels on each line, and compares with the threshold value Thr1 the difference value Diff1 between the W pixel low-frequency component signal maximum value of inner lines close to the target pixel and the W pixel low-frequency component signal maximum value of a plurality of outer lines far from the target pixel, and (b) performs the corresponding comparison for a plurality of diagonally up-right lines, comparing the difference value Diff2 with the threshold value Thr2; as the two comparison results of (a) and (b) above, when both difference values are larger than the threshold values, the target pixel is determined to be a false color pixel. The image processing apparatus according to any one of (1) to (5).
- the false color detection unit detects a false color generated when W pixels and G pixels are concentrated in a local highlight area, which is a local high-luminance area in the input image, or when W pixels, R pixels, and B pixels are concentrated there. The image processing apparatus according to any one of (1) to (6).
- the false color detection unit detects W pixels having high pixel values from the input image, compares the detected high-value W pixel configuration pattern with registered highlight area patterns recorded in advance in a memory as shapes of local high-luminance regions, and, when the detected high-value W pixel configuration pattern matches a registered highlight area pattern, determines that the pixels included in the high-value W pixel configuration pattern are false color pixels. The image processing apparatus according to any one of (1) to (7).
- the low-frequency signal calculation unit calculates a low-frequency signal in which the pixel value contribution ratio of the highlight region is reduced according to the registered highlight region pattern determined by the false color detection unit to match the high-value W pixel configuration pattern.
- a method of processing executed in the above-described apparatus and system, a program for executing the processing, and a recording medium recording the program are also included in the configuration of the present disclosure.
- the series of processing described in the specification can be executed by hardware, software, or a combined configuration of both.
- the program recording the processing sequence can be installed in a memory of a computer incorporated in dedicated hardware and executed, or can be installed and executed on a general-purpose computer capable of executing various kinds of processing.
- the program can be recorded in advance on a recording medium.
- the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed on a recording medium such as a built-in hard disk.
- the various processes described in the specification are not only executed in time series according to the description, but may be executed in parallel or individually according to the processing capability of the apparatus that executes the processes or as necessary.
- the system is a logical set configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same casing.
- an apparatus and a method for correcting false color generated in a local highlight region of an image are realized.
- specifically, false color pixels are detected during the data conversion processing for generating an RGB array image from an RGBW array image, and low-frequency signals corresponding to the RGBW colors, which differ depending on whether or not the pixel is a false color pixel, are calculated.
- the RGBW array is converted by the interpolation process to which the low-frequency signal is applied to generate an RGB array image.
- the interpolation processing is performed using each low-frequency signal based on the assumption that the W low-frequency signal mW and the RGB low-frequency signals mR, mG, and mB are proportional to each other in the local region.
- the low-frequency signal is calculated by applying a low-pass filter having a coefficient in which the pixel value contribution ratio in the vicinity of the target pixel is relatively lower than that of the separated pixel.
Abstract
Description
FIG. 2(a) shows the Bayer array composed of RGB pixels, which is used in many conventional cameras.
FIG. 2(b) shows the RGBW array, whose use is advancing in recent cameras. Each of the RGB pixels is a pixel provided with a filter that selectively transmits light in the R, G, or B wavelength region, and the W pixel is a pixel provided with a filter that transmits almost all visible light of the RGB wavelengths.
Hereinafter, a false color occurring in a highlight area of a small size of about several pixels is referred to as a bright spot false color.
For example, as shown in FIG. 3(a), when the area of the highlight region is small and the pixels of the highlight region consist almost entirely of W pixels and G pixels, the pixel values of the G pixels become relatively large compared with the pixel values of the surrounding R pixels and B pixels. As a result, the color balance is lost and the region is colored green.
Also, for example, as shown in FIG. 3(b), when the area of the highlight region is small and the pixels of the highlight region consist almost entirely of W pixels, R pixels, and B pixels, the pixel values of the R pixels and B pixels become relatively large compared with the pixel values of the surrounding G pixels. As a result, the color balance is lost and the region is colored magenta.
In this way, false colors of various patterns, that is, bright spot false colors, are considered to occur depending on the configuration of the pixels included in the highlight region, which is a high-luminance portion.
Purple fringing is a false color that occurs around overexposed high-contrast edges, mainly caused by aberration of the lens provided in the camera.
However, since the bright spot false color described above also occurs in pixel regions that are not necessarily overexposed, there are cases that conventional processing for reducing purple fringing cannot handle.
An image processing apparatus having a data conversion processing unit that receives an RGBW array image as an input image and generates an RGB array image as an output image,
wherein the data conversion processing unit includes:
a false color detection unit that detects false color pixels of the input image and outputs detection information;
a low-frequency signal calculation unit that receives the detection information from the false color detection unit, changes its processing mode according to the detection information, and calculates low-frequency signals corresponding to the RGBW colors; and
a pixel interpolation unit that generates an RGB array image by performing pixel conversion of the RGBW array of the input image by pixel interpolation using the low-frequency signals calculated by the low-frequency signal calculation unit,
and wherein the interpolation processing unit
calculates interpolated pixel values based on the assumption that the low-frequency signal mW of the W pixels and the low-frequency signals mR, mG, and mB of the RGB pixels are in a proportional relationship in a local region.
Note that the thresholds Thr1 and Thr2 may be fixed values, values settable by the user, or automatically calculated values.
An image processing method executed in an image processing apparatus, in which
a data conversion processing unit executes data conversion processing that receives an RGBW array image as an input image and generates an RGB array image as an output image,
the data conversion processing including:
false color detection processing that detects false color pixels of the input image and outputs detection information;
low-frequency signal calculation processing that receives the detection information, changes its processing mode according to the detection information, and calculates low-frequency signals corresponding to the RGBW colors; and
pixel interpolation processing that generates an RGB array image by performing pixel conversion of the RGBW array of the input image by pixel interpolation using the low-frequency signals,
wherein in the interpolation processing,
interpolated pixel values are calculated based on the assumption that the low-frequency signal mW of the W pixels and the low-frequency signals mR, mG, and mB of the RGB pixels are in a proportional relationship in a local region.
A program that causes an image processing apparatus to execute image processing, the program causing
a data conversion processing unit to execute a data conversion processing step that receives an RGBW array image as an input image and generates an RGB array image as an output image,
the data conversion processing step including:
false color detection processing that detects false color pixels of the input image and outputs detection information;
low-frequency signal calculation processing that receives the detection information, changes its processing mode according to the detection information, and calculates low-frequency signals corresponding to the RGBW colors; and
pixel interpolation processing that generates an RGB array image by performing pixel conversion of the RGBW array of the input image by pixel interpolation using the low-frequency signals,
wherein in the interpolation processing,
interpolated pixel values are calculated based on the assumption that the low-frequency signal mW of the W pixels and the low-frequency signals mR, mG, and mB of the RGB pixels are in a proportional relationship in a local region.
Specifically, false color pixels are detected during the data conversion processing that generates an RGB array image from an RGBW array image, low-frequency signals corresponding to the RGBW colors that differ depending on whether or not the pixel is a false color pixel are calculated, and the RGBW array is converted by interpolation processing applying the calculated low-frequency signals to generate an RGB array image. The interpolation processing uses the low-frequency signals based on the assumption that the W low-frequency signal mW and the RGB low-frequency signals mR, mG, and mB are in a proportional relationship in a local region. When the target pixel is a false color pixel, the low-frequency signal is calculated by applying a low-pass filter with coefficients that make the pixel value contribution ratio of pixels near the target pixel relatively lower than that of separated pixels.
Through these processes, correction of false colors occurring in local highlight regions of the image is executed together with the re-mosaic processing that converts an RGBW array image into an RGB array, making it possible to generate and output a high-quality image in which false colors are removed or reduced.
1. Configuration and processing of the first embodiment of the image processing apparatus of the present disclosure
2. Modifications of the false color detection unit and the low-frequency signal calculation unit
3. Effects of the processing of the image processing apparatus of the present disclosure
4. Summary of the configurations of the present disclosure
The image processing apparatus of the present disclosure processes data acquired by an image sensor having RGBW color filters that include, in addition to RGB filters that selectively transmit light of the R, G, and B wavelengths, a white (W) filter that transmits light of all the RGB wavelengths.
FIG. 4 is a diagram showing a configuration example of an imaging apparatus 100 according to an embodiment of the image processing apparatus of the present disclosure. The imaging apparatus 100 includes an optical lens 105, an image sensor 110, a signal processing unit 120, a memory 130, and a control unit 140.
Red (R), which transmits wavelengths near red;
Green (G), which transmits wavelengths near green;
Blue (B), which transmits wavelengths near blue;
White (W), which transmits light of almost all RGB wavelengths;
the image sensor is provided with filters having these four types of spectral characteristics.
The output signal of the image sensor 110 is input to the data conversion processing unit 200 of the signal processing unit 120.
Note that setting the size of the reference area used as the input pixel unit to 7 × 7 pixels is merely an example; an area of a different size may be set as the input pixel unit (reference area).
The pixel interpolation unit 203 interpolates the center pixel of the input pixel unit area so that the image having the RGBW array 181 output from the image sensor 110 becomes the RGB array 182.
In this interpolation processing, various existing methods can be applied, such as reference pixel selection considering the edge direction and methods using the correlation between the W pixels and the RGB pixels in the reference area.
FIG. 6(a) shows an input pixel unit (7 × 7 pixel area) centered on a W pixel, which is the target pixel to be converted.
For example, the low-frequency signal mW is calculated by applying a low-pass filter to the pixel values of the W pixels in the reference area.
Similarly, the low-frequency signal of the G pixels is calculated from the G pixels in the input pixel unit (reference area). This is the low-frequency signal mG.
G = mG / mW × w ... (Expression 1)
That is, in a narrow local region such as the 7 × 7 pixel reference area set in this embodiment, it is estimated that the proportional relationship between the W pixel low-frequency signal mW and the G pixel low-frequency signal mG shown in FIG. 6(b) holds, and based on this proportional relationship, the G pixel value at the W pixel position is estimated according to (Expression 1) above.
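A minimal numeric sketch of (Expression 1); the low-frequency signals and pixel value below are hypothetical, and the actual mW/mG values depend on the LPF coefficients chosen by the low-frequency signal calculation unit 202.

```python
def interpolate_g_at_w(w, mW, mG):
    """Estimate the G value at a W pixel position from Expression 1,
    G = mG / mW * w, under the local proportionality assumption."""
    return mG / mW * w

# Hypothetical low-frequency signals from a 7x7 reference area:
mW, mG = 200.0, 100.0   # mG is about half of mW in this local region
w = 180.0               # W pixel value at the conversion target position
print(interpolate_g_at_w(w, mW, mG))  # -> 90.0
```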
Processing for converting G pixels to R or B pixels,
processing for converting R pixels to B pixels, and
processing for converting B pixels to R pixels:
each of these processes is also required.
Assuming that
the proportional relationship between the W pixel low-frequency signal mW and the G pixel low-frequency signal mG,
the proportional relationship between the W pixel low-frequency signal mW and the R pixel low-frequency signal mR, and
the proportional relationship between the W pixel low-frequency signal mW and the B pixel low-frequency signal mB
all hold, the pixel value of the target pixel is calculated, for example, by treating the W pixels of the reference area as the pixel color of the conversion destination of the target pixel.
Note that this interpolation processing is described in Japanese Unexamined Patent Application Publication No. 2011-182354, an earlier application by the present applicant, and the disclosed processing can also be applied in the image processing apparatus of the present disclosure.
The false color detection unit 201 in FIG. 5 determines whether the target pixel is a bright spot false color pixel and outputs this determination information to the low-frequency signal calculation unit 202.
The pixel interpolation unit 203 applies the low-frequency signals mR, mG, mB, and mW input from the low-frequency signal calculation unit 202 to perform pixel interpolation processing as re-mosaic processing that converts the RGBW pixel array into an RGB pixel array.
Note that the pixel interpolation processing executed by the pixel interpolation unit 203 follows the processing described in Japanese Unexamined Patent Application Publication No. 2011-182354, an earlier application by the present applicant, except that the low-frequency signals mR, mG, mB, and mW input from the low-frequency signal calculation unit 202 are used.
As described above with reference to FIG. 3, a bright spot false color occurs, for example, as shown in FIG. 3(a): when the pixels of a local region, that is, a highlight region of high luminance of about several pixels, are only W pixels and G pixels, the pixel values of the G pixels become relatively large compared with the pixel values of the surrounding R pixels and B pixels. As a result, the color balance is lost and the region is colored green.
Also, for example, as shown in FIG. 3(b), when the pixels of the highlight region are composed of W pixels, R pixels, and B pixels, the pixel values of the R pixels and B pixels become relatively large compared with the pixel values of the surrounding G pixels. As a result, the color balance is lost and the region is colored magenta. In this way, false colors of various patterns, that is, bright spot false colors, are considered to occur depending on the configuration of the pixels included in the highlight region, which is a high-luminance portion.
FIGS. 7(a) to 7(c) show configuration examples of highlight regions in which a green false color occurs, as in FIG. 3(a). The areas surrounded by dotted circles represent the highlight regions.
FIGS. 7(a) to 7(c) are examples in which the target pixel is included in the highlight region, that is, the target pixel is a bright spot false color pixel. The highlight regions surrounded by dotted circles in FIGS. 7(a) to 7(c) are composed almost entirely of G pixels and W pixels.
The processing examples shown in FIGS. 8 and 9 are for the case where the center pixel (target pixel) of the input pixel unit (7 × 7 pixels) is a W pixel (W4) and the pixels adjacent to the left of and below the center pixel (W4) are G pixels.
out_up = (W0 + 2 × W1 + W2) / 4 ... (Expression 2a)
in_up = (W3 + 2 × W4 + W5) / 4 ... (Expression 2b)
in_dn = (W6 + 2 × W7 + W8) / 4 ... (Expression 2c)
out_dn = (W9 + 2 × W10 + W11) / 4 ... (Expression 2d)
out_up, in_up, in_dn, and out_dn are all W pixel low-frequency signals generated by averaging or weighted addition of the pixel values of a plurality of W pixels in the diagonally down-right direction within the reference area, which is the 7 × 7 pixel input pixel unit.
That is, they are W pixel low-frequency signals corresponding to the respective diagonally down-right lines.
in_up and in_dn are W pixel low-frequency signals based on the three W pixels of each of the two diagonally down-right lines closest to the center of the reference area [the upper line (in_up) and the lower line (in_dn)].
out_up is a W pixel low-frequency signal based on the three W pixels of the diagonally down-right line adjacent above in_up.
out_dn is a W pixel low-frequency signal based on the three W pixels of the diagonally down-right line adjacent below in_dn.
The difference value Diff1 between
the maximum value max(in_up, in_dn) of the two W pixel low-frequency signals close to the center and
the maximum value max(out_up, out_dn) of the two W pixel low-frequency signals far from the center
is calculated:
Diff1 = max(in_up, in_dn) − max(out_up, out_dn)
Diff1 > Thr1 ... (Expression 3)
where max(A, B) is a function that returns the larger of A and B, and
Thr1 is a threshold value.
Note that the threshold Thr1 may be a fixed value, a value settable by the user, or an automatically calculated value.
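The line signals of (Expressions 2a) to (2d) and the test of (Expression 3) can be sketched as follows; the W pixel values and the threshold below are hypothetical.

```python
def w_line_lowfreq(w0, w1, w2):
    # 1:2:1 weighted addition of the three W pixels on one diagonal
    # down-right line (Expressions 2a-2d).
    return (w0 + 2 * w1 + w2) / 4

def diff1(out_up_px, in_up_px, in_dn_px, out_dn_px):
    """Diff1 = max(in_up, in_dn) - max(out_up, out_dn)."""
    inner = max(w_line_lowfreq(*in_up_px), w_line_lowfreq(*in_dn_px))
    outer = max(w_line_lowfreq(*out_up_px), w_line_lowfreq(*out_dn_px))
    return inner - outer

# Hypothetical W values: the two inner lines are bright (a candidate
# bright spot) while the outer lines are dark.
d1 = diff1((10, 12, 11), (200, 250, 210), (190, 240, 205), (9, 13, 10))
Thr1 = 50  # example threshold; may be fixed, user-set, or computed
print(d1, d1 > Thr1)  # -> 216.25 True (Expression 3 is satisfied)
```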
FIG. 8(1c) shows a correspondence example of out_up, in_up, in_dn, out_dn, and the threshold Thr1 in the case where the target pixel is determined not to be a false color generation pixel.
Diff1 > Thr1 ... (Expression 3)
When (Expression 3) above is satisfied, determination processing using the pixel values of the W pixels in the diagonally down-right direction of the reference area is further performed.
FIG. 9(2a) shows, like FIG. 8(1a), a 7 × 7 pixel reference area in which G pixels are located to the left of and below the center W pixel (W4).
in_up = (W2 + W3) / 2 ... (Expression 4b)
center = (W4 + W5) / 2 ... (Expression 4c)
in_dn = (W6 + W7) / 2 ... (Expression 4d)
out_dn = (W8 + W9) / 2 ... (Expression 4e)
out_up, in_up, center, in_dn, and out_dn are all W pixel low-frequency signals calculated by averaging or weighted addition of the pixel values of a plurality of W pixels in the diagonally up-right direction within the reference area.
In this processing example, each is a value obtained by weighted addition of the pixel values of two W pixels in the diagonally up-right direction at a 1:1 ratio.
in_up is a W pixel low-frequency signal based on the two W pixels of the diagonally up-right line adjacent above center.
in_dn is a W pixel low-frequency signal based on the two W pixels of the diagonally up-right line adjacent below center.
out_up is a W pixel low-frequency signal based on the two W pixels of the diagonally up-right line adjacent above in_up.
out_dn is a W pixel low-frequency signal based on the two W pixels of the diagonally up-right line adjacent below in_dn.
The difference value Diff2 between
the maximum value max(in_up, center, in_dn) of the three W pixel low-frequency signals close to the center and
the maximum value max(out_up, out_dn) of the two W pixel low-frequency signals far from the center
is calculated:
Diff2 = max(in_up, center, in_dn) − max(out_up, out_dn)
Diff2 > Thr2 ... (Expression 5)
where max(A, B, C) is a function that returns the largest of A, B, and C, and
Thr2 is a threshold value.
Note that the threshold Thr2 may be a fixed value, a value settable by the user, or an automatically calculated value.
FIG. 9(2c) shows a correspondence example of out_up, in_up, center, in_dn, out_dn, and the threshold Thr2 in the case where the target pixel is determined not to be a false color generation pixel.
When both of the two determination expressions (Expression 3) and (Expression 5) above are satisfied, the target pixel is determined to be a false color generation pixel; if either one is not satisfied, the target pixel is determined not to be a false color generation pixel.
Diff1 > Thr1 ... (Expression 3)
Diff2 > Thr2 ... (Expression 5)
Only when the above two determination expressions are satisfied is the target pixel determined to be a false color generation pixel.
The determination result is output to the low-frequency signal calculation unit 202.
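The second-stage signals of (Expressions 4b) to (4e) and the combined decision of (Expression 3) and (Expression 5) can be sketched as follows; the pixel values and thresholds below are hypothetical.

```python
def w_pair_lowfreq(wa, wb):
    # 1:1 weighted addition of the two W pixels on one diagonal
    # up-right line (Expressions 4b-4e).
    return (wa + wb) / 2

def diff2(out_up_px, in_up_px, center_px, in_dn_px, out_dn_px):
    """Diff2 = max(in_up, center, in_dn) - max(out_up, out_dn)."""
    inner = max(w_pair_lowfreq(*in_up_px),
                w_pair_lowfreq(*center_px),
                w_pair_lowfreq(*in_dn_px))
    outer = max(w_pair_lowfreq(*out_up_px), w_pair_lowfreq(*out_dn_px))
    return inner - outer

def is_bright_spot_false_color(d1, d2, thr1, thr2):
    # Expressions 3 and 5 (or 7 and 9): both comparisons must hold.
    return d1 > thr1 and d2 > thr2

d2 = diff2((10, 12), (200, 250), (210, 205), (190, 240), (9, 13))
print(d2)                                              # -> 214.0
print(is_bright_spot_false_color(216.25, d2, 50, 50))  # -> True
```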
Next, with reference to FIGS. 10 and 11, an example of false color determination processing will be described for the case where the center pixel (target pixel) of the 7 × 7 pixel input pixel unit is a W pixel and G pixels are adjacent to the right of and above the center W pixel.
out_up = (W0 + 2 × W1 + W2) / 4 ... (Expression 6a)
in_up = (W3 + 2 × W4 + W5) / 4 ... (Expression 6b)
in_dn = (W6 + 2 × W7 + W8) / 4 ... (Expression 6c)
out_dn = (W9 + 2 × W10 + W11) / 4 ... (Expression 6d)
out_up, in_up, in_dn, and out_dn are all W pixel low-frequency signals based on the pixel values of a plurality of W pixels in the diagonally down-right direction within the reference area.
In this processing example, each is a value obtained by weighted addition of the pixel values of three W pixels in the diagonally down-right direction at a 1:2:1 ratio.
in_up and in_dn are W pixel low-frequency signals based on the three W pixels of each of the two diagonally down-right lines closest to the center of the reference area [the upper line (in_up) and the lower line (in_dn)].
out_up is a W pixel low-frequency signal based on the three W pixels of the diagonally down-right line adjacent above in_up.
out_dn is a W pixel low-frequency signal based on the three W pixels of the diagonally down-right line adjacent below in_dn.
The difference value Diff1 between
the maximum value max(in_up, in_dn) of the two W pixel low-frequency signals close to the center and
the maximum value max(out_up, out_dn) of the two W pixel low-frequency signals far from the center
is calculated:
Diff1 = max(in_up, in_dn) − max(out_up, out_dn)
Diff1 > Thr1 ... (Expression 7)
where max(A, B) is a function that returns the larger of A and B, and
Thr1 is a threshold value.
FIG. 10(3c) shows a correspondence example of out_up, in_up, in_dn, out_dn, and the threshold Thr1 in the case where the target pixel is determined not to be a false color generation pixel.
Diff1 > Thr1 ... (Expression 7)
When (Expression 7) above is satisfied, determination processing using the pixel values of the W pixels in the diagonally down-right direction of the reference area is further performed.
図11(4a)は図10(3a)と同様、中心のW画素(W5)の右と上にG画素が位置する7×7画素の参照領域を示している。
out_up=(W0+W1)/2 ・・・(式8a)
in_up=(W2+W3)/2 ・・・(式8b)
center=(W4+W5)/2 ・・・(式8c)
in_dn=(W6+W7)/2 ・・・(式8d)
out_dn=(W8+W9)/2 ・・・(式8e)
out_up、in_up、center,in_dn、out_dnは、いずれも、参照領域内の斜め右上がり方向の複数のW画素の画素値に基づくW画素低周波信号である。
本処理例では、斜め右上がり方向の2つのW画素の画素値を1:1の割合で重み付け加算した値である。
in_upはcenterの上方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
in_dnはcenterの下方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
out_upはin_upの上方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
out_dnはin_dnの下方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
中心に近い3つのW画素低周波信号in_up、center,in_dnの最大値max(in_up,center,in_dn)と、
中心から遠い2つのW画素低周波信号out_up、out_dnの最大値max(out_up,out_dn)と、
これらの2つの最大値の差分値Diff2を算出する。
Diff2=max(in_up,center,in_dn)-max(out_up,out_dn)
Diff2>Thr2 ・・・・・(式9)
max(A,B,C)は、AとBとCの中で最も大きい値を返す関数、
Thr2は閾値、
である。
図11(4c)は、注目画素が偽色発生画素ではないと判定する場合のout_up、in_up、center,in_dn、out_dn、および閾値Thr2の対応例を示している。
前述の(式7)と(式9)の2つの判定式の双方を満足する場合、注目画素が偽色発生画素であると判定し、いずれか一方でも満足しない場合は、注目画素が偽色発生画素でないと判定する。
Diff1>Thr1 ・・・・・(式7)
Diff2>Thr2 ・・・・・(式9)
上記の2つの判定式を満足する場合のみ、注目画素が偽色発生画素であると判定する。
判定結果は、低域信号算出部202に出力する。
(A)中心画素がW画素であり、中心W画素の左と下にG画素が隣接する場合(図8、図9)、
(B)中心画素がW画素であり、中心W画素の右と上にG画素が隣接する場合(図10、図11)、
これらの各場合において、緑色の輝点偽色が発生しているか否かを判定する処理例を説明した。
(1)斜め右下がりラインの複数のW画素の画素値に基づいて算出する差分値Diff1と閾値Thr1との比較処理、
(2)斜め右上がりラインの複数のW画素の画素値に基づいて算出する差分値Diff2と閾値Thr2との比較処理、
これらの2つの比較結果として、いずれも、差分値が閾値より大きい場合に注目画素が輝点偽色画素であると判定する。
偽色検出部201は、これらの判定処理を実行し、判定結果を低域信号算出部202に出力する。
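上記(1)(2)の判定処理は、いずれも「内側ラインのW画素低周波信号の最大値」から「外側ラインのW画素低周波信号の最大値」を引いた差分値を閾値と比較する処理としてまとめられる。以下は、この判定の流れをPythonで示した最小限のスケッチである(関数名および信号値は説明用の仮のものであり、実際の実装を示すものではない)。

```python
def exceeds_threshold(inner, outer, thr):
    # 内側ラインのW画素低周波信号の最大値と、
    # 外側ラインのW画素低周波信号の最大値との差分を閾値と比較する
    return max(inner) - max(outer) > thr

def is_bright_spot_false_color(down_inner, down_outer,
                               up_inner, up_outer, thr1, thr2):
    # (1) 斜め右下がりライン方向の判定: Diff1 > Thr1
    # (2) 斜め右上がりライン方向の判定: Diff2 > Thr2
    # 双方を満足する場合のみ、注目画素を輝点偽色発生画素と判定する
    return (exceeds_threshold(down_inner, down_outer, thr1) and
            exceeds_threshold(up_inner, up_outer, thr2))
```

内側ラインの信号が外側ラインより十分に大きい(孤立した輝点が存在する)場合にのみ判定結果が真となる。中心画素の色によって内側・外側の信号の個数(2つ、3つ、4つなど)が変わっても、同じ枠組みで判定できる。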
図13、図14に示す処理例は、入力画素単位(7×7画素)の中心画素(注目画素)がR画素である場合の処理例である。
図13は、斜め右下がりラインの複数のW画素の画素値に基づいて算出する差分値Diff1と閾値Thr1との比較処理、
図14は、斜め右上がりラインの複数のW画素の画素値に基づいて算出する差分値Diff2と閾値Thr2との比較処理、
これらの処理例を示している。
参照領域の中心に近い位置から複数の斜め右下がりラインを設定し、各ライン上の複数のW画素値を用いてW画素の低周波信号、すなわちW低域信号を算出して、より中心に近い複数ラインの最大W低周波信号値と、中心から遠い複数ラインの最大W低周波信号値との差分値を算出する。すなわち4つの斜め右下がりラインに直交する斜め右上がり方向のW画素の差分値をDiff1として算出する。さらに、算出した差分値Diff1と予め既定した閾値Thr1とを比較する。
out_up=(W0+W1)/2 ・・・(式10a)
mid_up=(W2+W3)/2 ・・・(式10b)
in_up=(W4+W5)/2 ・・・(式10c)
in_dn=(W6+W7)/2 ・・・(式10d)
mid_dn=(W8+W9)/2 ・・・(式10e)
out_dn=(W10+W11)/2 ・・・(式10f)
out_up、mid_up,in_up、in_dn、mid_dn,out_dnは、いずれも、参照領域内の斜め右上がり方向の複数のW画素の画素値の平均値や重み付け加算によって算出するW画素低周波信号である。
本処理例では、斜め右上がり方向の2つのW画素の画素値を1:1の割合で重み付け加算した値である。
in_upは中心R画素の上方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
in_dnは中心R画素の下方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
mid_upはin_upの上方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
mid_dnはin_dnの下方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
out_upはmid_upの上方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
out_dnはmid_dnの下方向に隣接する斜め右上がりラインの2つのW画素に基づくW画素低周波信号である。
中心に近い4つのW画素低周波信号mid_up,in_up、in_dn,mid_dnの最大値max(mid_up,in_up,in_dn,mid_dn)と、
中心から遠い2つのW画素低周波信号out_up、out_dnの最大値max(out_up,out_dn)と、
これらの2つの最大値の差分値Diff2を算出する。
Diff2=max(mid_up,in_up,in_dn,mid_dn)-max(out_up,out_dn)
Diff2>Thr2 ・・・・・(式11)
max(A,B,C,D)は、A、B、C、Dの中で最も大きい値を返す関数、
Thr2は閾値、
である。
なお、閾値Thr2は、固定値としてもよいし、ユーザによって設定可能な値としてもよいし、自動的に計算されてもよい。
図14(6c)は、注目画素が偽色発生画素ではないと判定する場合のout_up、mid_up,in_up、in_dn、mid_dn,out_dn、および閾値Thr2の対応例を示している。
Diff1>Thr1
Diff2>Thr2
上記の2つの判定式を満足する場合のみ、注目画素が偽色発生画素であると判定する。
判定結果は、低域信号算出部202に出力する。
図15は、斜め右下がりラインの複数のW画素の画素値に基づいて算出する差分値Diff1と閾値Thr1との比較処理、
図16は、斜め右上がりラインの複数のW画素の画素値に基づいて算出する差分値Diff2と閾値Thr2との比較処理、
これらの処理例を示している。
Diff1>Thr1、
Diff2>Thr2、
上記の2つの式の双方を満たすか否かの判定を行い、満たす場合、注目画素は輝点偽色画素であると判定する。
上記2つの式の少なくともいずれかを満たさない場合は、注目画素は輝点偽色画素でないと判定し、判定結果を低域信号算出部202に出力する。
(1)注目画素近傍に設定した複数の斜め右下がりライン各々の複数のW画素の画素値に基づいて各ライン対応のW画素低周波成分信号を算出し、注目画素に近い複数の内側ラインのW画素低周波成分信号最大値と、注目画素から遠い複数の外側ラインのW画素低周波成分信号最大値の差分値Diff1と閾値Thr1との比較処理。
(2)注目画素近傍に設定した複数の斜め右上がりライン各々の複数のW画素の画素値に基づいて各ライン対応のW画素低周波成分信号を算出し、注目画素に近い複数の内側ラインのW画素低周波成分信号最大値と、注目画素から遠い複数の外側ラインのW画素低周波成分信号最大値の差分値Diff2と閾値Thr2との比較処理。
偽色検出部201は、これらの判定処理を実行し、判定結果を低域信号算出部202に出力する。
低域信号算出部202は、画素補間部203で使用するR画素、G画素、B画素のいずれかの低域信号(mR,mG,mB)と、W画素の低域信号(mW)をそれぞれ算出する。
この第2のローパスフィルタ(LPF)係数は、変換対象となる注目画素位置、すなわち7×7の入力画素単位である参照領域の中心に近い画素に対する係数を小さく設定した係数である。この処理は、ハイライト領域にある画素値の影響を抑えて補間画素値を算出するための処理である。
このような処理を行うことで、ハイライト領域の画素値の影響を抑制し、偽色の低減を図っている。
注目画素が偽色画素であると判定された場合は、注目画素に近い領域の参照画素の画素値寄与率を低く設定し、注目画素から遠い周辺領域の参照画素の画素値寄与率を高く設定したローパスフィルタ(LPF)係数を適用した低域信号算出処理を行う。
注目画素が偽色画素でないと判定された場合は、注目画素に近い領域の参照画素の画素値寄与率を高く設定し、注目画素から遠い周辺領域の参照画素の画素値寄与率を低く設定したローパスフィルタ(LPF)係数を適用した低域信号算出処理を行う。
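上記の2通りのLPF係数の切り替えは、例えば次のように表せる。以下は係数を3×3に簡略化した説明用のスケッチであり、実際の係数は本文の図17・図18に示す7×7の参照領域に対応する(係数値はここで仮に置いた例である)。

```python
import numpy as np

# 偽色画素でない場合: 注目画素に近い参照画素の寄与率を高く設定(仮の係数例)
LPF_NORMAL = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])
LPF_NORMAL /= LPF_NORMAL.sum()  # 係数の総和を1に正規化

# 偽色画素の場合: 注目画素に近い参照画素の寄与率を低く(ここでは0に)設定
LPF_FALSE_COLOR = np.array([[1., 1., 1.],
                            [1., 0., 1.],
                            [1., 1., 1.]])
LPF_FALSE_COLOR /= LPF_FALSE_COLOR.sum()

def low_freq_signal(ref_region, is_false_color):
    # 参照領域とLPF係数の積和(畳み込み)により低域信号を算出する
    coeff = LPF_FALSE_COLOR if is_false_color else LPF_NORMAL
    return float((ref_region * coeff).sum())
```

中心がハイライトとなっている参照領域に対して偽色用の係数を適用すると、中心画素値の影響を受けない低域信号が得られる。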
図17に示す例は、入力画素単位の中心画素がW画素であり、中心のW画素の左と下にG画素が位置する場合のG信号の低域信号mGの算出処理に適用するLPFの設定例である。
一方、中心画素である注目画素が偽色画素であると判定された場合には、G低域信号mGは、参照領域の中心に近いG画素値の反映が0または抑制された値となる。
この他の入力画素単位設定の場合の様々な低域信号mR,mG,mB,mWの算出処理においても、低域信号算出部202は同様の処理を実行する。
すなわち、参照領域の中心画素である注目画素が偽色画素でないと判定された場合には、参照領域の中心に近い画素ほど大きい係数とし、注目画素が偽色画素であると判定された場合には、参照領域の中心に近い画素ほど小さい係数としたLPF係数を設定したローパスフィルタ適用処理を実行して、低域信号mR,mG,mB,mWの算出処理を実行する。
一方、中心画素である注目画素が偽色画素であると判定された場合には、B低域信号mBは、参照領域の中心に近いB画素値の反映が0または抑制された値となる。
一方、中心画素である注目画素が偽色画素であると判定された場合には、R低域信号mRは、参照領域の中心に近いR画素値の反映が0または抑制された値となる。
W画素をG画素に変換する処理、
G画素をRまたはB画素に変換する処理、
R画素をB画素に変換する処理、
B画素をR画素に変換する処理、
これらの各処理を行う。
W画素の低域信号mWとG画素の低域信号mGとの比例関係、
W画素の低域信号mWとR画素の低域信号mRとの比例関係、
W画素の低域信号mWとB画素の低域信号mBとの比例関係、
これらの関係が成立すると仮定して、例えば、注目画素の変換先の画素色に応じて、参照領域のW画素値をその画素色の画素値に変換し、注目画素の画素値を算出する。
この処理に際して、低域信号算出部202から入力するRGBW各信号の低域信号mR,mG,mB,mWを適用する。
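この比例関係の仮定に基づく画素値変換は、例えば次のように表せる。W画素を色C(G、R、Bのいずれか)の画素に変換する場合、注目画素位置のW画素値に低域信号の比 mC/mW を乗じる、という考え方を示した説明用のスケッチである(関数名はここで仮に置いたもの)。

```python
def convert_w_to_color(w_value, m_w, m_c, eps=1e-6):
    # mW と mC(mR/mG/mB のいずれか)が局所領域で比例関係にある、
    # すなわち C ≒ W × (mC / mW) が成り立つとの仮定に基づく変換
    return w_value * m_c / max(m_w, eps)  # eps はゼロ除算回避用
```

例えば W=200、mW=180、mG=90 であれば、変換後のG画素値は 200×90/180=100 となる。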
次に、ステップS102でDiff1と閾値Thr1とを比較する。
参照領域の中心に近い位置から複数の斜め右下がりラインを設定し、各ライン上の複数のW画素値を用いてW低域信号を算出して、より中心に近い複数ラインの最大W低周波信号値と、中心から遠い複数ラインの最大W低周波信号値との差分値を算出する。すなわち4つの斜め右下がりラインに直交する斜め右上がり方向のW画素の差分値をDiff1として算出する。
さらに、算出した差分値Diff1と予め既定した閾値Thr1とを比較する。
Diff1が閾値Thr1より大きい場合(Yes)には、ステップS103へ進む。Diff1が閾値Thr1より大きくない場合(No)には、ステップS106へ進む。
例えば図17(1b)、図18(2b)に示すように、注目画素に近い参照画素のLPF係数を相対的に高く設定したローパスフィルタ(LPF)の適用処理により低域信号を算出する。
Diff1が閾値Thr1より大きいと判定された場合(Yes)は、注目画素が輝点偽色発生画素である可能性があると判定された場合であり、ステップS103へ進む。
次に、ステップS104でDiff2と閾値Thr2とを比較する。
参照領域の中心に近い位置から複数の斜め右上がりラインを設定し、各ライン上の複数のW画素値を用いてW低域信号を算出して、より中心に近い複数ラインの最大W低周波信号値と、中心から遠い複数ラインの最大W低周波信号値との差分値を算出する。すなわち例えば5つの斜め右上がりラインに直交する斜め右下がり方向のW画素の差分値をDiff2として算出する。
さらに、算出した差分値Diff2と予め既定した閾値Thr2とを比較する。
Diff2が閾値Thr2より大きい場合(Yes)には、ステップS105へ進む。Diff2が閾値Thr2より大きくない場合(No)には、ステップS106へ進む。
例えば図17(1b)、図18(2b)に示すように、注目画素に近い参照画素のLPF係数を相対的に高く設定したローパスフィルタ(LPF)の適用処理により低域信号を算出する。
Diff2が閾値Thr2より大きいと判定された場合(Yes)は、注目画素が輝点偽色発生画素であると判定された場合であり、ステップS105へ進む。
例えば図17(1c)、図18(2c)に示すように、注目画素に近い参照画素のLPF係数を相対的に低く設定したローパスフィルタ(LPF)の適用処理により低域信号を算出する。
この処理は、図5に示す画素補間部203の実行する処理である。
一方、注目画素が輝点偽色でないと判定された場合は、注目画素に近い参照画素のLPF係数を相対的に高く設定したローパスフィルタ(LPF)の適用処理により生成した低域信号を用いた補間処理を行う。
なお、図21に示す処理は、例えば図4に示すメモリ130に格納されたプログラムに従って、制御部140の制御の下で実行される。
上述した実施例において、偽色検出部201は、図21のフローにおけるステップS102の判定処理、すなわち、
Diff1>Thr1、
さらに、ステップS104の判定処理、すなわち、
Diff2>Thr2、
これらの2つの判定式が満足する場合に、注目画素が輝点偽色の発生画素であると判定していた。
図23には、G信号の低域信号mGの算出に適用するLPF係数の設定例を示している。
図23(a)~(g)は、それぞれ図22(a)~(g)のハイライト領域に対応して利用する低域信号mGの算出に適用するLPF係数の設定例である。
Lは相対的に低いLPF係数の設定画素、
Hは相対的に高いLPF係数の設定画素、
これらを示している。
例えば、図22(a)に示す4画素のハイライト領域が検出された場合、図23(a)に示すLPF係数の設定で入力画素単位を畳み込み演算することにより、所望の低域信号を得る。
図23(a)に示すLPF係数は、図22(a)に示すハイライト領域内のG画素位置の係数のみがLであり、その他のG画素位置の係数はHとなっている。
このような係数を適用したローパスフィルタを利用して低域信号mGを算出することで、ハイライト領域のG画素値の影響を低減した低域信号mGを算出できる。
図25には、B信号の低域信号mBの算出に適用するLPF係数の設定例を示している。
図25(a)~(h)は、それぞれ図24(a)~(h)のハイライト領域に対応して利用する低域信号mBの算出に適用するLPF係数の設定例である。
Lは相対的に低いLPF係数の設定画素、
Hは相対的に高いLPF係数の設定画素、
これらを示している。
例えば、図24(a)に示す4画素のハイライト領域が検出された場合、図25(a)に示すLPF係数の設定で入力画素単位を畳み込み演算することにより、所望の低域信号を得る。
図25(a)に示すLPF係数は、図24(a)に示すハイライト領域内のB画素位置の係数のみがLであり、その他のB画素位置の係数はHとなっている。
このような係数を適用したローパスフィルタを利用して低域信号mBを算出することで、ハイライト領域のB画素値の影響を低減した低域信号mBを算出できる。
この処理により、ハイライト領域の形状に応じた最適な低域信号の算出が可能となり、ハイライト領域の画素値の寄与率を低くした最適な画素補間が実現される。
図26に示すフローチャートは、先に説明した図21に示すフローと同様、RGBW配列をRGB配列に変換する場合の1つの変換対象画素に対する処理シーケンスであり、変換対象画素を含む参照画素領域、例えば7×7画素領域を入力してデータ変換処理部において実行する処理である。各処理画素に対して、図26に示すフローに従った処理が順次実行されることになる。
以下に各ステップについて説明する。
次に、ステップS202において、ステップS201で検出した高画素値のW画素配置パターンが、予めメモリに登録済みのハイライト領域のパターンに一致するか否かを判定する。メモリには、例えば図22(a)~(g)に示すハイライト領域パターンが登録されている。
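このパターン照合処理の流れは、例えば次のように表せる。以下は、高画素値W画素の位置集合を登録パターンと比較する考え方を示した説明用のスケッチであり、パターンの内容・名称はここで仮に置いたものである(実際の登録パターンは図22(a)~(g)に対応する)。

```python
# 登録ハイライト領域パターンの例(高画素値W画素の座標集合。内容は説明用の仮のもの)
REGISTERED_PATTERNS = {
    "pattern_a": frozenset({(3, 3), (3, 4), (4, 3), (4, 4)}),  # 4画素の正方形領域
}

def match_highlight_pattern(w_values, threshold):
    # 閾値を超える高画素値W画素の座標集合を求める
    detected = frozenset(pos for pos, v in w_values.items() if v > threshold)
    # 登録済みハイライト領域パターンと照合する
    for name, pattern in REGISTERED_PATTERNS.items():
        if detected == pattern:
            return name  # 一致: このパターンに対応するLPF係数を選択する
    return None  # 不一致: 偽色画素でないと判定する
```

一致した登録パターン名を低域信号算出側に渡せば、パターンごとに用意したLPF係数の選択に利用できる、という構成の例である。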
これらの処理は、図5に示す偽色検出部201の実行する処理である。
この場合は、注目画素が輝点偽色でないと判定された場合であり、ステップS204へ進み、注目画素に近い参照画素のLPF係数を相対的に高く設定したローパスフィルタ(LPF)の適用処理により、必要な低域信号を算出する。なお必要な低域信号とは、画素補間部203で実行する注目画素の変換処理に適用するために必要な低域信号である。低域信号mR,mG,mB,mWの少なくとも1つ以上の低域信号を算出する。
例えば図17(1b)、図18(2b)に示すように、注目画素に近い参照画素のLPF係数を相対的に高く設定したローパスフィルタ(LPF)の適用処理により低域信号を算出する。
この低域信号算出処理は、図5に示す低域信号算出部202の処理である。
低域信号算出部202は、偽色検出部201から、高画素値のW画素配置パターンと一致する登録パターン情報を入力し、この情報に応じて、その登録パターンに対応するLPF係数を設定したローパスフィルタ(LPF)の適用処理により低域信号を算出する。
高画素値のW画素配置パターンと一致する登録パターンが、その他の図22(b)~(g)である場合も同様であり、それぞれの場合において、図23(b)~(g)に示す注目画素に近い参照画素のLPF係数を相対的に低く設定したローパスフィルタ(LPF)の適用処理により低域信号を算出する。
この処理は、図5に示す画素補間部203の実行する処理である。
一方、注目画素が輝点偽色でないと判定された場合は、注目画素に近い参照画素のLPF係数を相対的に高く設定したローパスフィルタ(LPF)の適用処理により生成した低域信号を用いた補間処理を行う。
なお、図26に示す処理は、例えば図4に示すメモリ130に格納されたプログラムに従って、制御部140の制御の下で実行される。
上述した本開示の画像処理装置の処理を行うことで、例えば以下の効果が得られる。
(a)木漏れ日などの撮像面において小さい面積で発生する偽色を低減することができる。
(b)パープルフリンジなどのように白飛びしていない場合でも偽色を低減することが可能となる。
(c)光学ローパスフィルタを用いて解像度を落とす方法と比較して、画質の劣化が少ない補正ができる。
(d)デモザイク前のRAWデータに適用することができるため、イメージセンサ内などに組み込むことが可能である。
(e)カラーフィルタにホワイト画素を含んだ配列の画像に対して適用することが可能である。
例えば、これらの効果を奏することができる。
以上、特定の実施例を参照しながら、本開示の実施例について詳解してきた。しかしながら、本開示の要旨を逸脱しない範囲で当業者が実施例の修正や代用を成し得ることは自明である。すなわち、例示という形態で本発明を開示してきたのであり、限定的に解釈されるべきではない。本開示の要旨を判断するためには、特許請求の範囲の欄を参酌すべきである。
(1) RGBW配列画像を入力画像とし、RGB配列画像を出力画像として生成するデータ変換処理部を有し、
前記データ変換処理部は、
前記入力画像の偽色画素を検出し、検出情報を出力する偽色検出部と、
前記偽色検出部から検出情報を入力し、検出情報に応じて処理態様を変更してRGBW各色対応の低域信号を算出する低域信号算出部と、
前記低域信号算出部の算出した低域信号を適用した画素補間により、前記入力画像のRGBW配列の画素変換を実行してRGB配列画像を生成する画素補間部を有し、
前記補間処理部は、
W画素の低域信号mWと、RGB各画素の低域信号mR,mG,mBが、局所領域で比例関係にあるとの仮定に基づいて補間画素値を算出する画像処理装置。
(3)前記低域信号算出部は、前記偽色検出部から注目画素が偽色画素でないとの検出情報を入力した場合、注目画素の近傍画素の画素値寄与率を注目画素から離れた画素より相対的に高くしたローパスフィルタ係数を設定したローパスフィルタを適用して低域信号を算出する前記(1)または(2)に記載の画像処理装置。
(5)前記偽色検出部は、注目画素近傍のW画素の勾配情報を検出し、2つの直交方向のいずれにおいても注目画素近傍のW画素値が周囲W画素値より高い場合に、前記注目画素が局所的な高輝度領域である局所的ハイライト領域に含まれ、該注目画素が偽色画素であると判定する前記(1)~(4)いずれかに記載の画像処理装置。
具体的には、RGBW配列画像からRGB配列画像を生成するデータ変換処理に際して偽色画素を検出し、偽色画素であるか否かに応じて異なるRGBW各色対応の低域信号を算出し、算出した低域信号を適用した補間処理によりRGBW配列を変換してRGB配列画像を生成する。補間処理においては、W低域信号mWと、RGB各低域信号mR,mG,mBが局所領域で比例関係にあるとの仮定に基づいて各低域信号を利用して行う。低域信号は、注目画素が偽色画素である場合、注目画素近傍の画素値寄与率を離間画素より相対的に低くした係数を持つローパスフィルタを適用して算出する。
これらの処理により、RGBW配列画像をRGB配列に変換するリモザイク処理に併せて、画像の局所的ハイライト領域に発生する偽色の補正を実行し、偽色を除去または低減した高品質な画像を生成して出力することが可能となる。
105 光学レンズ
110 撮像素子(イメージセンサ)
120 信号処理部
130 メモリ
140 制御部
181 RGBW配列
182 RGB配列
183 カラー画像
200 データ変換処理部
201 偽色検出部
202 低域信号算出部
203 画素補間部
Claims (11)
- RGBW配列画像を入力画像とし、RGB配列画像を出力画像として生成するデータ変換処理部を有し、
前記データ変換処理部は、
前記入力画像の偽色画素を検出し、検出情報を出力する偽色検出部と、
前記偽色検出部から検出情報を入力し、検出情報に応じて処理態様を変更してRGBW各色対応の低域信号を算出する低域信号算出部と、
前記低域信号算出部の算出した低域信号を適用した画素補間により、前記入力画像のRGBW配列の画素変換を実行してRGB配列画像を生成する画素補間部を有し、
前記補間処理部は、
W画素の低域信号mWと、RGB各画素の低域信号mR,mG,mBが、局所領域で比例関係にあるとの仮定に基づいて補間画素値を算出する画像処理装置。 - 前記低域信号算出部は、
前記偽色検出部から注目画素が偽色画素であるとの検出情報を入力した場合、
前記注目画素の近傍画素の画素値寄与率を注目画素から離れた画素より相対的に低くしたローパスフィルタ係数を設定したローパスフィルタを適用して低域信号を算出する請求項1に記載の画像処理装置。 - 前記低域信号算出部は、
前記偽色検出部から注目画素が偽色画素でないとの検出情報を入力した場合、
注目画素の近傍画素の画素値寄与率を注目画素から離れた画素より相対的に高くしたローパスフィルタ係数を設定したローパスフィルタを適用して低域信号を算出する請求項1に記載の画像処理装置。 - 前記偽色検出部は、
前記入力画像における局所的な高輝度領域である局所的ハイライト領域の存在の有無を検出し、注目画素が局所的ハイライト領域に含まれる場合に、該注目画素を偽色画素であると判定する請求項1に記載の画像処理装置。 - 前記偽色検出部は、
注目画素近傍のW画素の勾配情報を検出し、2つの直交方向のいずれにおいても注目画素近傍のW画素値が周囲W画素値より高い場合に、前記注目画素が局所的な高輝度領域である局所的ハイライト領域に含まれ、該注目画素が偽色画素であると判定する請求項1に記載の画像処理装置。 - 前記偽色検出部は、
(a)注目画素近傍に設定した複数の斜め右下がりライン各々の複数のW画素の画素値に基づいて各ライン対応のW画素低周波成分信号を算出し、注目画素に近い複数の内側ラインのW画素低周波成分信号最大値と、注目画素から遠い複数の外側ラインのW画素低周波成分信号最大値の差分値Diff1と閾値Thr1との比較処理、
(b)注目画素近傍に設定した複数の斜め右上がりライン各々の複数のW画素の画素値に基づいて各ライン対応のW画素低周波成分信号を算出し、注目画素に近い複数の内側ラインのW画素低周波成分信号最大値と、注目画素から遠い複数の外側ラインのW画素低周波成分信号最大値の差分値Diff2と閾値Thr2との比較処理、
を実行し、
上記(a),(b)の2つの比較結果として、いずれも、差分値が閾値より大きい場合に注目画素が偽色画素であると判定する請求項1に記載の画像処理装置。 - 前記偽色検出部は、
前記入力画像における局所的な高輝度領域である局所的ハイライト領域にW画素とG画素が集中する場合、または、前記局所的ハイライト領域にW画素とR画素とB画素が集中する場合に発生する偽色の検出を行う請求項1に記載の画像処理装置。 - 前記偽色検出部は、
前記入力画像から画素値の高いW画素を検出し、
検出した高画素値のW画素構成パターンと、予めメモリに記録された局所的高輝度領域形状である登録ハイライト領域パターンとを比較し、
検出した高画素値のW画素構成パターンが、前記登録ハイライト領域パターンに一致する場合に、前記高画素値のW画素構成パターンに含まれる画素を偽色画素であると判定する請求項1に記載の画像処理装置。 - 前記低域信号算出部は、
前記偽色検出部が高画素値のW画素構成パターンに一致すると判定した登録ハイライト領域パターンに応じて、ハイライト領域の画素値寄与率をハイライト領域外の画素より相対的に低くしたローパスフィルタ係数を設定したローパスフィルタを適用して低域信号を算出する請求項8に記載の画像処理装置。 - 画像処理装置において実行する画像処理方法であり
データ変換処理部が、RGBW配列画像を入力画像とし、RGB配列画像を出力画像として生成するデータ変換処理を実行し、
前記データ変換処理において、
前記入力画像の偽色画素を検出し、検出情報を出力する偽色検出処理と、
前記検出情報を入力し、検出情報に応じて処理態様を変更してRGBW各色対応の低域信号を算出する低域信号算出処理と、
前記低域信号を適用した画素補間により、前記入力画像のRGBW配列の画素変換を実行してRGB配列画像を生成する画素補間処理を実行し、
前記補間処理において、
W画素の低域信号mWと、RGB各画素の低域信号mR,mG,mBが、局所領域で比例関係にあるとの仮定に基づく補間画素値を算出する画像処理方法。 - 画像処理装置において画像処理を実行させるプログラムであり
データ変換処理部に、RGBW配列画像を入力画像とし、RGB配列画像を出力画像として生成させるデータ変換処理ステップを実行させ、
前記データ変換処理ステップにおいて、
前記入力画像の偽色画素を検出し、検出情報を出力する偽色検出処理と、
前記検出情報を入力し、検出情報に応じて処理態様を変更してRGBW各色対応の低域信号を算出する低域信号算出処理と、
前記低域信号を適用した画素補間により、前記入力画像のRGBW配列の画素変換を実行してRGB配列画像を生成する画素補間処理を実行させ、
前記補間処理において、
W画素の低域信号mWと、RGB各画素の低域信号mR,mG,mBが、局所領域で比例関係にあるとの仮定に基づく補間画素値を算出させるプログラム。
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014512387A JP6020556B2 (ja) | 2012-04-24 | 2013-02-14 | 画像処理装置、および画像処理方法、並びにプログラム |
EP13781172.5A EP2843947B1 (en) | 2012-04-24 | 2013-02-14 | Image processing device, image processing method, and program |
US14/395,095 US9288457B2 (en) | 2012-04-24 | 2013-02-14 | Image processing device, method of processing image, and image processing program including false color correction |
CN201380020652.5A CN104247409B (zh) | 2012-04-24 | 2013-02-14 | 图像处理装置、图像处理方法以及程序 |
US15/012,935 US9542759B2 (en) | 2012-04-24 | 2016-02-02 | Image processing device, method of processing image, and image processing program including false color correction |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-098544 | 2012-04-24 | ||
JP2012098544 | 2012-04-24 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/395,095 A-371-Of-International US9288457B2 (en) | 2012-04-24 | 2013-02-14 | Image processing device, method of processing image, and image processing program including false color correction |
US15/012,935 Continuation US9542759B2 (en) | 2012-04-24 | 2016-02-02 | Image processing device, method of processing image, and image processing program including false color correction |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013161349A1 true WO2013161349A1 (ja) | 2013-10-31 |
Family
ID=49482691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/053455 WO2013161349A1 (ja) | 2012-04-24 | 2013-02-14 | 画像処理装置、および画像処理方法、並びにプログラム |
Country Status (5)
Country | Link |
---|---|
US (2) | US9288457B2 (ja) |
EP (1) | EP2843947B1 (ja) |
JP (1) | JP6020556B2 (ja) |
CN (1) | CN104247409B (ja) |
WO (1) | WO2013161349A1 (ja) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI422020B (zh) * | 2008-12-08 | 2014-01-01 | Sony Corp | 固態成像裝置 |
WO2013161349A1 (ja) | 2012-04-24 | 2013-10-31 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
JP5968944B2 (ja) * | 2014-03-31 | 2016-08-10 | 富士フイルム株式会社 | 内視鏡システム、プロセッサ装置、光源装置、内視鏡システムの作動方法、プロセッサ装置の作動方法、光源装置の作動方法 |
KR102280452B1 (ko) * | 2014-11-05 | 2021-07-23 | 삼성디스플레이 주식회사 | 표시장치 및 그의 구동방법 |
EP3043558B1 (en) | 2014-12-21 | 2022-11-02 | Production Resource Group, L.L.C. | Large-format display systems having color pixels and white pixels |
KR102329440B1 (ko) | 2015-10-01 | 2021-11-19 | 에스케이하이닉스 주식회사 | 칼라 필터 어레이의 변환 방법 및 장치 |
KR102480600B1 (ko) * | 2015-10-21 | 2022-12-23 | 삼성전자주식회사 | 이미지 처리 장치의 저조도 화질 개선 방법 및 상기 방법을 수행하는 이미지 처리 시스템의 동작 방법 |
JP6758859B2 (ja) * | 2016-03-01 | 2020-09-23 | キヤノン株式会社 | 撮像装置、撮像システム、および画像処理方法 |
JP6825617B2 (ja) * | 2016-03-09 | 2021-02-03 | ソニー株式会社 | 画像処理装置、撮像装置、および画像処理方法、並びにプログラム |
CN106970085A (zh) * | 2017-05-08 | 2017-07-21 | 广东工业大学 | 一种湿度指示纸质量检测系统及方法 |
US10337923B2 (en) * | 2017-09-13 | 2019-07-02 | Qualcomm Incorporated | Directional interpolation and cross-band filtering for hyperspectral imaging |
TWI638339B (zh) * | 2017-11-14 | 2018-10-11 | 瑞昱半導體股份有限公司 | 錯色移除方法 |
CN108711396B (zh) | 2018-05-30 | 2020-03-31 | 京东方科技集团股份有限公司 | 像素数据的处理方法及处理装置、显示装置及显示方法 |
KR20220036014A (ko) | 2020-09-15 | 2022-03-22 | 삼성전자주식회사 | 이미지 센싱 시스템 |
JP2022122682A (ja) * | 2021-02-10 | 2022-08-23 | キヤノン株式会社 | 画像符号化装置及びその制御方法及びプログラム |
US11778337B2 (en) | 2021-11-09 | 2023-10-03 | Samsung Electronics Co., Ltd. | Image sensor and method for sensing image |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01133492A (ja) * | 1987-11-19 | 1989-05-25 | Matsushita Electric Ind Co Ltd | 撮像装置 |
JP2002077928A (ja) * | 2000-08-25 | 2002-03-15 | Sharp Corp | 画像処理装置 |
JP2006014261A (ja) | 2004-05-27 | 2006-01-12 | Sony Corp | 画像処理装置、および画像処理方法、並びにコンピュータ・プログラム |
JP2009124598A (ja) | 2007-11-16 | 2009-06-04 | Canon Inc | 画像処理装置及び画像処理方法 |
JP2011182354A (ja) | 2010-03-04 | 2011-09-15 | Sony Corp | 画像処理装置、および画像処理方法、並びにプログラム |
JP2012060602A (ja) * | 2010-09-13 | 2012-03-22 | Konica Minolta Opto Inc | 撮像装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4035688B2 (ja) * | 2001-02-28 | 2008-01-23 | セイコーエプソン株式会社 | 偽色除去装置、偽色除去プログラム、偽色除去方法およびデジタルカメラ |
CN100576924C (zh) * | 2004-05-27 | 2009-12-30 | 索尼株式会社 | 图像处理设备和图像处理方法 |
JP5106870B2 (ja) * | 2006-06-14 | 2012-12-26 | 株式会社東芝 | 固体撮像素子 |
JP4930109B2 (ja) * | 2007-03-06 | 2012-05-16 | ソニー株式会社 | 固体撮像装置、撮像装置 |
JP5574615B2 (ja) * | 2009-04-20 | 2014-08-20 | キヤノン株式会社 | 画像処理装置、その制御方法、及びプログラム |
JP5326943B2 (ja) * | 2009-08-31 | 2013-10-30 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
BR112012008515B1 (pt) * | 2009-10-13 | 2021-04-27 | Canon Kabushiki Kaisha | Aparelho e método de processamento de imagem, e meios de gravação |
JP5454075B2 (ja) * | 2009-10-20 | 2014-03-26 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
CN104170376B (zh) * | 2012-03-27 | 2016-10-19 | 索尼公司 | 图像处理设备、成像装置及图像处理方法 |
WO2013161349A1 (ja) | 2012-04-24 | 2013-10-31 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP2843947A4 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019091994A (ja) * | 2017-11-13 | 2019-06-13 | キヤノン株式会社 | 撮像装置及び撮像システム |
US10567712B2 (en) | 2017-11-13 | 2020-02-18 | Canon Kabushiki Kaisha | Imaging device and imaging system |
JP2019106576A (ja) * | 2017-12-08 | 2019-06-27 | キヤノン株式会社 | 撮像装置及び撮像システム |
CN113536820A (zh) * | 2020-04-14 | 2021-10-22 | 深圳爱根斯通科技有限公司 | 位置识别方法、装置以及电子设备 |
CN113536820B (zh) * | 2020-04-14 | 2024-03-15 | 深圳爱根斯通科技有限公司 | 位置识别方法、装置以及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN104247409B (zh) | 2016-10-12 |
US20150103212A1 (en) | 2015-04-16 |
JPWO2013161349A1 (ja) | 2015-12-24 |
US20160210760A1 (en) | 2016-07-21 |
EP2843947B1 (en) | 2020-11-04 |
US9542759B2 (en) | 2017-01-10 |
CN104247409A (zh) | 2014-12-24 |
US9288457B2 (en) | 2016-03-15 |
JP6020556B2 (ja) | 2016-11-02 |
EP2843947A4 (en) | 2016-01-13 |
EP2843947A1 (en) | 2015-03-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13781172 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014512387 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013781172 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14395095 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |