WO2016183744A1 - Color correction system and method - Google Patents


Info

Publication number
WO2016183744A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
noise
psnr
color correction
evaluation image
Prior art date
Application number
PCT/CN2015/079094
Other languages
French (fr)
Inventor
Wei Chen
Zisheng Cao
Original Assignee
SZ DJI Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. filed Critical SZ DJI Technology Co., Ltd.
Priority to CN201580029947.8A priority Critical patent/CN106471567B/en
Priority to EP15874399.7A priority patent/EP3202131A1/en
Priority to PCT/CN2015/079094 priority patent/WO2016183744A1/en
Priority to CN201910240349.5A priority patent/CN109963133B/en
Priority to US15/176,037 priority patent/US9742960B2/en
Publication of WO2016183744A1 publication Critical patent/WO2016183744A1/en
Priority to US15/646,301 priority patent/US9998632B2/en
Priority to US15/977,661 priority patent/US10244146B2/en
Priority to US16/358,946 priority patent/US10560607B2/en
Priority to US16/783,478 priority patent/US20200228681A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6027Correction or control of colour gradation or colour contrast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G09G5/06Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour palettes, e.g. look-up tables
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/86Camera processing pipelines; Components thereof for processing colour signals for controlling the colour saturation of colour signals, e.g. automatic chroma control circuits
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/68Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0693Calibration of display systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/06Colour space transformation
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/12Use of DVI or HDMI protocol in interfaces along the display data pipeline
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00Specific applications
    • G09G2380/08Biomedical applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/603Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer
    • H04N1/6033Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer using test pattern analysis

Definitions

  • the disclosed embodiments relate generally to digital image processing and more particularly, but not exclusively, to systems and methods for color correction of digital images.
  • the method comprises obtaining an input color value and a reference color value for each of a plurality of color references, as well as a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space.
  • a plurality of color correction parameters are determined as optimized based on evaluating a fitness function in the non-linear color space.
  • the non-linear color space can be a CIE L*a*b* color space.
  • the fitness function can include a color correction error and a noise amplification metric so as to reduce noise amplification during color correction.
  • a method for calibrating a digital imaging device for color correction comprising obtaining an input color value and a reference color value for each of a plurality of color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space; and determining a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  • the non-linear color space is a CIE L*a*b* color space.
  • determining the plurality of color correction parameters comprises adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  • the fitness function comprises a color correction error and a noise amplification metric.
  • adjusting the plurality of color correction parameters comprises determining the color correction error by color correcting the input color values and comparing the corrected input color values with the reference color values.
  • adjusting the plurality of color correction parameters comprises determining the noise amplification metric by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  • determining the noise amplification metric comprises determining the noise amplification metric using a peak signal-to-noise ratio (PSNR).
  • determining the noise amplification metric using a PSNR comprises finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  • determining the noise amplification metric using a PSNR comprises determining a downsampled PSNR difference.
  • determining the downsampled PSNR difference comprises downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  • determining the noise amplification metric comprises determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  • determining the noise amplification metric using a PSNR comprises determining a plurality of successively downsampled PSNR differences and determining the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  • determining the noise amplification metric comprises weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  • determining the noise amplification metric comprises determining the noise amplification metric based on color variances.
  • adjusting the plurality of color correction parameters comprises determining the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  • the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
  • each color patch is weighted according to an average sensitivity of human perception to the color patch.
  • adjusting the color correction parameters comprises optimizing the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  • adjusting the color correction parameters comprises adjusting the parameters using a genetic process.
  • Some embodiments comprise further adjusting the color correction parameters using a direct search method.
  • obtaining the input color values of the color calibration images comprises imaging one or more color references comprising a plurality of color patches.
  • the color correction parameters are in the form of a matrix.
  • the color correction parameters are in the form of a look-up table.
  • adjusting the color correction parameters comprises look-up operations and interpolation operations in the look-up table.
  • the interpolation operations comprise a Shepard interpolation.
  • the noise evaluation image is a virtual noise evaluation image.
  • the virtual noise evaluation image comprises noise added to a virtual noise-free image.
  • a color correction apparatus configured for calibration for color correction based upon images of a plurality of color references each having a reference color, comprising a memory for storing a plurality of color correction parameters; and a processor for performing color correction of a digital image, wherein the processor is configured to obtain an input color value and a reference color value for each of the color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space, and determine a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  • the non-linear color space is a CIE L*a*b* color space.
  • the processor is configured to determine the plurality of color correction parameters by adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  • the fitness function comprises a color correction error and a noise amplification metric.
  • the processor is configured to adjust the plurality of color correction parameters by color correcting the input color values and comparing the corrected input color values with the reference color values.
  • the processor is configured to adjust the plurality of color correction parameters by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  • the processor is configured to determine the noise amplification metric using a peak signal-to-noise ratio (PSNR).
  • the processor is configured to determine the noise amplification metric using a PSNR by finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  • the processor is configured to determine the noise amplification metric using a PSNR by determining a downsampled PSNR difference.
  • the processor is configured to determine the downsampled PSNR difference by downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  • the processor is configured to determine the noise amplification metric by determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  • the processor is configured to determine a plurality of successively downsampled PSNR differences and determine the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  • the processor is configured to determine the noise amplification metric by weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  • the processor is configured to determine the noise amplification metric based on color variances.
  • the processor is configured to determine the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  • the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
  • each color patch is weighted according to an average sensitivity of human perception to the color patch.
  • the processor is configured to optimize the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  • the processor is configured to adjust the color correction parameters using a genetic process.
  • the processor is configured to further adjust the color correction parameters using a direct search method.
  • the processor is configured to obtain the input color values of the color calibration images by imaging one or more color references comprising a plurality of color patches.
  • the color correction parameters are in the form of a matrix.
  • the color correction parameters are in the form of a look-up table.
  • the processor is configured to adjust the color correction parameters by look-up operations and interpolation operations in the look-up table.
  • the interpolation operations comprise a Shepard interpolation.
  • the noise evaluation image is a virtual noise evaluation image.
  • the virtual noise evaluation image comprises noise added to a virtual noise-free image.
  • the color correction apparatus is mounted aboard a mobile platform.
  • the mobile platform is an unmanned aerial vehicle (UAV).
  • a digital imaging device comprising an image sensor for imaging a plurality of color references; and a processor for performing color correction of a digital image, wherein the processor is configured to obtain an input color value and a reference color value for each of the color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space, and determine a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  • the non-linear color space is a CIE L*a*b* color space.
  • the processor is configured to determine the plurality of color correction parameters by adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  • the fitness function comprises a color correction error and a noise amplification metric.
  • the processor is configured to adjust the plurality of color correction parameters by color correcting the input color values and comparing the corrected input color values with the reference color values.
  • the processor is configured to adjust the plurality of color correction parameters by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  • the processor is configured to determine the noise amplification metric using a peak signal-to-noise ratio (PSNR).
  • the processor is configured to determine the noise amplification metric using a PSNR by finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  • the processor is configured to determine the noise amplification metric using a PSNR by determining a downsampled PSNR difference.
  • the processor is configured to determine the downsampled PSNR difference by downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  • the processor is configured to determine the noise amplification metric by determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  • the processor is configured to determine a plurality of successively downsampled PSNR differences and determine the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  • the processor is configured to determine the noise amplification metric by weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  • the processor is configured to determine the noise amplification metric based on color variances.
  • the processor is configured to determine the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  • the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
  • each color patch is weighted according to an average sensitivity of human perception to the color patch.
  • the processor is configured to optimize the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  • the processor is configured to adjust the color correction parameters using a genetic process.
  • the processor is configured to further adjust the color correction parameters using a direct search method.
  • the processor is configured to obtain the input color values of the color calibration images by imaging one or more color references comprising a plurality of color patches.
  • the color correction parameters are in the form of a matrix.
  • the color correction parameters are in the form of a look-up table.
  • the processor is configured to adjust the color correction parameters by look-up operations and interpolation operations in the look-up table.
  • said interpolation operations comprise a Shepard interpolation.
  • the noise evaluation image is a virtual noise evaluation image.
  • the virtual noise evaluation image comprises noise added to a virtual noise-free image.
  • the color correction apparatus is mounted aboard a mobile platform.
  • the mobile platform is an unmanned aerial vehicle (UAV).
  • a non-transitory readable medium storing instructions for calibrating a digital imaging device for color correction, wherein the instructions comprise instructions for obtaining an input color value and a reference color value for each of a plurality of color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space; and determining a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  • the non-linear color space is a CIE L*a*b* color space.
  • said determining the plurality of color correction parameters comprises adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  • the fitness function comprises a color correction error and a noise amplification metric.
  • said adjusting the plurality of color correction parameters comprises determining the color correction error by color correcting the input color values and comparing the corrected input color values with the reference color values.
  • said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  • said determining the noise amplification metric comprises determining the noise amplification metric using a peak signal-to-noise ratio (PSNR).
  • said determining the noise amplification metric using a PSNR comprises finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  • said determining the noise amplification metric using a PSNR comprises determining a downsampled PSNR difference.
  • said determining the downsampled PSNR difference comprises downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  • said determining the noise amplification metric comprises determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  • said determining the noise amplification metric using a PSNR comprises determining a plurality of successively downsampled PSNR differences and determining the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  • said determining the noise amplification metric comprises weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  • said determining the noise amplification metric comprises determining the noise amplification metric based on color variances.
  • said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  • said noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
  • each color patch is weighted according to an average sensitivity of human perception to the color patch.
  • said adjusting the color correction parameters comprises optimizing the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  • said adjusting the color correction parameters comprises adjusting the parameters using a genetic process.
  • Some embodiments comprise further adjusting the color correction parameters using a direct search method.
  • said obtaining the input color values of the color calibration images comprises imaging one or more color references comprising a plurality of color patches.
  • the color correction parameters are in the form of a matrix.
  • the color correction parameters are in the form of a look-up table.
  • said adjusting the color correction parameters comprises look-up operations and interpolation operations in the look-up table.
  • said interpolation operations comprise a Shepard interpolation.
  • the noise evaluation image is a virtual noise evaluation image.
  • the virtual noise evaluation image comprises noise added to a virtual noise-free image.
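Across all four claim families above, the optimization objective is the same fitness function: a color correction error plus a noise amplification metric. A minimal sketch in Python (the ΔE*1976 color distance, the function names, and the weights `alpha`/`beta` are illustrative assumptions, not the patent's exact formulation):

```python
import numpy as np

def color_correction_error(corrected_lab, reference_lab):
    # Mean Euclidean distance (Delta E 1976) between corrected and
    # reference colors in CIE L*a*b* space.
    return float(np.mean(np.linalg.norm(corrected_lab - reference_lab, axis=-1)))

def psnr(reference, image, peak=255.0):
    # Peak signal-to-noise ratio, in dB, of `image` against `reference`.
    mse = np.mean((reference.astype(float) - image.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def fitness(corrected_lab, reference_lab, noise_free, noisy_pre, noisy_post,
            alpha=1.0, beta=1.0):
    # Weighted sum of the color correction error and the noise
    # amplification metric (the PSNR drop caused by color correction
    # of the noise evaluation image).
    noise_amplification = psnr(noise_free, noisy_pre) - psnr(noise_free, noisy_post)
    return alpha * color_correction_error(corrected_lab, reference_lab) \
        + beta * noise_amplification
```

Minimizing this quantity trades off color accuracy against noise amplification; setting `beta = 0` recovers the accuracy-only objective criticized in the description below.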
  • Fig. 1 is an exemplary top-level block diagram illustrating an embodiment of a color correction apparatus for color-correcting a digital image.
  • Fig. 2 is an exemplary diagram illustrating an embodiment of an imaging system including the color correction apparatus of Fig. 1, wherein the imaging system is shown imaging a color reference using an image sensor.
  • Fig. 3 is an exemplary diagram illustrating an alternative embodiment of the imaging system of Fig. 2, wherein the color correction apparatus is shown as acquiring various color values for calibration of color correction parameters.
  • Fig. 4 is an exemplary top-level flow chart illustrating an embodiment of a method for calibrating a digital imaging device.
  • Fig. 5 is an exemplary flow chart illustrating an alternative embodiment of the method of Fig. 4, wherein color correction parameters are optimized for calibrating a digital imaging device.
  • Fig. 6 is an exemplary flow chart illustrating an alternative embodiment of the method of Fig. 5, wherein the method includes a two-step optimization method.
  • Fig. 7 is an exemplary flow chart illustrating another alternative embodiment of the method of Fig. 5, wherein the method includes a genetic process.
  • Fig. 8 is an exemplary diagram illustrating an embodiment of the method of Fig. 5, wherein the method includes sampling at different spatial frequencies to determine a noise amplification metric.
  • Fig. 9 is an exemplary diagram illustrating an embodiment of the method of Fig. 8 with spatial downsampling.
  • Fig. 10 is an exemplary flow chart illustrating an embodiment of the method of Fig. 5, wherein the method includes sampling at different spatial frequencies to determine a noise amplification metric.
  • Fig. 11 is an exemplary diagram illustrating an embodiment of an imaging system installed on an unmanned aerial vehicle (UAV) .
  • Fig. 12 is an exemplary diagram illustrating a chrominance diagram showing a color error of a first experiment testing the efficacy of optimizing color correction parameters with noise regulation.
  • Fig. 13 is an exemplary diagram illustrating noise values of the experiment of Fig. 12 testing the efficacy of optimizing color correction parameters with noise regulation.
  • Fig. 14 is an exemplary diagram illustrating a chrominance diagram showing a color error of a second experiment testing the efficacy of optimizing color correction parameters without noise regulation.
  • Fig. 15 is an exemplary diagram illustrating noise values of the experiment of Fig. 14 testing the efficacy of optimizing color correction parameters without noise regulation.
  • Fig. 16 is an exemplary diagram illustrating a two-step method for optimizing color correction parameters with noise regulation.
  • Color correction is a process that transforms signals acquired by photosensors of digital imaging devices into colors that look realistic.
  • Color correction is a transformation defined by a set of color correction parameters. These parameters are typically calibrated for each individual digital imaging device to find customized parameter values that accurately reflect the color response characteristics of the individual device.
  • the calibration of color correction parameters entails an optimization process in which color values acquired by the imaging device are compared with known reference color values. Typically, the goal during the optimization process is to minimize the difference between the acquired colors post-color-correction and the known reference values.
  • a drawback of this approach is that accounting for color correction accuracy alone can often result in parameters that excessively amplify noise.
  • Image noise can include color and brightness variations in an image. These variations are not features of an original object imaged but, instead, are attributable to artifacts introduced by the acquisition and processing of the image. Sources of noise include, for example, quantum exposure noise, dark current noise, thermal noise, readout noise, and others.
  • the present disclosure sets forth systems and methods for color correction of a digital image which overcome shortcomings of existing color correction techniques by increasing color correction accuracy while limiting noise amplification.
  • color correction parameters are calibrated to increase color correction accuracy while limiting noise amplification.
  • the calibration can be performed in the CIE L*a*b*color space to more closely reflect human perception of distances between colors.
  • the calibration can be performed with reference to a virtual noisy image that can be sampled at different spatial frequencies. At each spatial frequency, a peak signal-to-noise ratio (PSNR) can be used to evaluate the amount of noise introduced by color correction.
  • the color correction parameters can be optimized by using a genetic process. A two-step parameter optimization method can be used that avoids trapping the optimization process in local optima.
  • the present systems and methods advantageously are suitable for use, for example, by unmanned aerial vehicles (UAVs) and other mobile platforms.
  • an exemplary color correction apparatus 100 is shown as including a processor 110 and a memory 120.
  • the processor 110 can comprise any type of processing system.
  • Exemplary processors 110 can include, without limitation, one or more general purpose microprocessors (for example, single or multi-core processors) , application-specific integrated circuits, application-specific instruction-set processors, graphics processing units, physics processing units, digital signal processing units, coprocessors, network processing units, audio processing units, encryption processing units, and the like.
  • the processor 110 can include an image processing engine or media processing unit, which can include specialized hardware for enhancing the speed and efficiency of certain operations for image capture, image filtering, and image processing.
  • Such operations include, for example, Bayer transformations, demosaicing operations, white balancing operations, color correction operations, noise reduction operations, and/or image sharpening/softening operations.
  • the processor 110 can include specialized hardware and/or software for performing various color correction parameter calibration functions and operations described herein. Specialized hardware can include, but are not limited to, specialized parallel processors, caches, high speed buses, and the like.
  • the memory 120 can comprise any type of memory and can be, for example, a random access memory (RAM) , a static RAM, a dynamic RAM, a read-only memory (ROM) , a programmable ROM, an erasable programmable ROM, an electrically erasable programmable ROM, a flash memory, a secure digital (SD) card, and the like.
  • the memory 120 can have any commercially-available memory capacity suitable for use in image processing applications and preferably has a storage capacity of at least 512 Megabytes, 1 Gigabyte, 2 Gigabytes, 4 Gigabytes, 16 Gigabytes, 32 Gigabytes, 64 Gigabytes, or more.
  • the memory 120 can be a non-transitory storage medium that can store instructions for performing any of the processes described herein.
  • the color correction apparatus 100 can further include any hardware and/or software desired for performing the color correction parameter calibration functions and operations described herein.
  • the color correction apparatus 100 can include one or more input/output interfaces (not shown) .
  • Exemplary interfaces include, but are not limited to, universal serial bus (USB) , digital visual interface (DVI) , display port, serial ATA (SATA) , IEEE 1394 interface (also known as FireWire) , serial, video graphics array (VGA) , super video graphics array (SVGA) , small computer system interface (SCSI) , high-definition multimedia interface (HDMI) , audio ports, and/or proprietary input/output interfaces.
  • the color correction apparatus 100 can include one or more input/output devices (not shown) , for example, buttons, a keyboard, keypad, trackball, displays, and/or a monitor.
  • the color correction apparatus 100 can include hardware for communication between components of the color correction apparatus 100 (for example, between the processor 110 and the memory 120) .
  • an exemplary embodiment of an imaging system 200 is shown as including a color correction apparatus 100, an image sensor 130, and a color filter 140.
  • the color correction apparatus 100 can be provided in the manner discussed in more detail above with reference to Fig. 1.
  • the memory 120 of the color correction apparatus 100 is shown as storing color correction parameters 125, noise generation parameters 126, pre-correction and post-correction image data 127, and intermediate values 128 produced during various color correction parameter calibration functions and operations described herein.
  • the image sensor 130 can perform the function of sensing light and converting the sensed light into electrical signals that can be rendered as an image.
  • Various image sensors 130 are suitable for use with the disclosed systems and methods, including, but not limited to, image sensors 130 used in commercially-available cameras and camcorders.
  • Suitable image sensors 130 can include analog image sensors (for example, video camera tubes) and/or digital image sensors (for example, charge-coupled device (CCD) , complementary metal-oxide-semiconductor (CMOS) , N-type metal-oxide-semiconductor (NMOS) image sensors, and hybrids/variants thereof) .
  • Digital image sensors can include, for example, a two-dimensional array of photosensor elements that can each capture one pixel of image information. The resolution of the image sensor 130 can be determined by the number of photosensor elements.
  • the image sensor 130 can support any commercially-available image resolution and preferably has a resolution of at least 0.1 Megapixels, 0.5 Megapixels, 1 Megapixel, 2 Megapixels, 5 Megapixels, 10 Megapixels, or an even greater number of pixels.
  • the image sensor 130 can have specialty functions for use in various applications such as thermography, creation of multi-spectral images, infrared detection, gamma detection, x-ray detection, and the like.
  • the image sensor 130 can include, for example, an electro-optical sensor, a thermal/infrared sensor, a color or monochrome sensor, a multi-spectral imaging sensor, a spectrophotometer, a spectrometer, a thermometer, and/or an illuminometer.
  • the color filter 140 is shown in Fig. 2 as separating and/or filtering incoming light based on color and directing the light onto the appropriate photosensor elements of the image sensor 130.
  • the color filter 140 can include a color filter array that passes red, green, or blue light to selected pixel sensors to form a color mosaic (not shown) .
  • the layout of different colors on the color mosaic can be arranged in any convenient manner, including a Bayer pattern. Once a color mosaic is formed, a color value of each pixel can be interpolated using any of various demosaicing methods that interpolate missing color values at each pixel using color values of adjacent pixels.
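  • As an illustrative sketch of such demosaicing interpolation (not a specific method of this disclosure), a missing green value at a red or blue site of a Bayer mosaic can be taken as the average of the four adjacent green values; the function name and mosaic layout below are illustrative assumptions:

```python
# Minimal bilinear demosaicing sketch: interpolate a missing green value at
# a red/blue site of a Bayer mosaic by averaging the four green neighbours.
def interpolate_green(mosaic, row, col):
    """mosaic is a 2-D list of raw sensor values; (row, col) is an interior
    red or blue photosite whose four edge-neighbours are green."""
    neighbours = [mosaic[row - 1][col], mosaic[row + 1][col],
                  mosaic[row][col - 1], mosaic[row][col + 1]]
    return sum(neighbours) / len(neighbours)
```

The same neighbour-averaging idea extends to the red and blue channels, using the diagonal and edge neighbours appropriate to the Bayer layout.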
  • the image sensor 130 can include an array of layered pixel photosensor elements that separates light of different wavelengths based on the properties of the photosensor elements. In either case, an image can be acquired by the image sensor 130 as intensity values in each of a plurality of color channels at each pixel.
  • the imaging system 200 is further shown in Fig. 2 as acquiring an image of a color reference 150 to perform calibration of color correction parameters 125.
  • the color reference 150 preferably has a reference color value C ref that is known or that can otherwise be determined in advance, making the color reference 150 suitable for use as a color standard.
  • the reference color value C ref is a property of the color reference 150 that is independent of how the color reference 150 is imaged.
  • the reference color value C ref can be designated based on an average human perception of the color reference 150.
  • the reference color value C ref can thus serve as an objective measure of how a color imaged by the image sensor 130 can be corrected so as to match the average human perception.
  • the color reference 150 is preferably, but not necessarily, homogeneous in color. Flatness of the color reference 150 is preferable, though not essential, to avoid variations attributable to differential light scattering.
  • the optical properties of the color reference 150 need not be ideal for purposes of performing color correction, so long as the optical properties do not interfere with imaging the color reference 150.
  • the color reference 150 can be made of one or more of a variety of materials such as plastic, paper, metal, wood, foam, composites thereof, and other materials.
  • the color, reflectance, and/or other optical properties of the color reference 150 can advantageously be calibrated as desired using an appropriate paint or other coating.
  • the color reference 150 can advantageously include multiple color patches 151, each of which has a different reference color value C ref .
  • This embodiment enables multiple color references 150 to be imaged at the same time, reducing the number of image capture operations for color correction. This embodiment is particularly suitable when a large number of color references 150 are to be imaged in order to calibrate color correction parameters 125 with greater accuracy.
  • Commercially available color references 150 include, for example, MacBeth ColorChecker, MacBeth ColorChecker SG, and the like.
  • although images acquired by the image sensor 130 are described above in an RGB (red, green, and blue) color space for illustrative purposes only, the images can be acquired in other color spaces, as well.
  • the color space in which images are acquired depends generally on the properties of the image sensor 130 and any color filters 140.
  • the color space in which an image is acquired need not be three-dimensional but can have any number of dimensions as desired to capture the spectral composition of the image. The number of dimensions can depend on the number of color channels of the image sensor 130.
  • the color space of an acquired image can be one-dimensional, two-dimensional, three-dimensional, four-dimensional, five-dimensional, or more.
  • an image can be converted between color spaces as desired for processing and/or calibration.
  • a conversion from a sRGB color space with coordinates (R sRGB , G sRGB , B sRGB ) to a CIE 1931 XYZ color space with coordinates (X, Y, Z) entails a linear conversion, which can be represented by the following three-dimensional matrix:
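  • A conversion consistent with this description is the standard IEC 61966-2-1 matrix for linear sRGB components under a D65 white point, which Equation (1) presumably follows:

```latex
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
=
\begin{pmatrix}
0.4124 & 0.3576 & 0.1805 \\
0.2126 & 0.7152 & 0.0722 \\
0.0193 & 0.1192 & 0.9505
\end{pmatrix}
\begin{pmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{pmatrix}
```

Note that this matrix applies to linear (gamma-expanded) sRGB components.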
  • a non-linear color space for imaging applications is a CIE L*a*b*color space (for example, a CIE 1976 L*a*b*color space) as defined by the International Commission on Illumination.
  • the color of an image in a CIE L*a*b*color space can be computed from the colors of the image in a CIE 1931 XYZ color space using the following non-linear transformation:
  • X n , Y n , and Z n are the CIE XYZ values of the color at a reference white point.
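  • The non-linear transformation referenced as Equations (2) - (5) is presumably the standard CIE 1976 L*a*b* definition, reproduced here in its standard form:

```latex
L^* = 116\, f(Y/Y_n) - 16,
\qquad
a^* = 500\,\bigl[\, f(X/X_n) - f(Y/Y_n)\,\bigr],
\qquad
b^* = 200\,\bigl[\, f(Y/Y_n) - f(Z/Z_n)\,\bigr],
```

```latex
f(t) =
\begin{cases}
t^{1/3} & \text{if } t > (6/29)^3 \\[4pt]
\dfrac{1}{3}\left(\dfrac{29}{6}\right)^{2} t + \dfrac{4}{29} & \text{otherwise}
\end{cases}
```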
  • the CIE L*a*b*color space is designed to mimic the color response of human perception.
  • the non-linearity of the transformation from the CIE XYZ color space to the CIE L*a*b*color space reflects the nonlinearity of human perception.
  • Representing a color in the CIE L*a*b*color space has the advantage that the CIE L*a*b*color space is perceptually uniform to human beings, meaning that a change of a given amount in a color value will produce a proportional change of visual significance.
  • calibration of color correction parameters 125 can advantageously be performed after converting input and reference colors into a CIE L*a*b*color space representation.
  • another suitable color space is a YUV color space (for example, a Y’UV color space) .
  • the YUV color space is represented by one luminance component Y representing image brightness and two chrominance components U and V representing image color.
  • a conversion from a RGB color space with coordinates (R, G, B) to a YUV color space with coordinates (Y, U, V) entails a linear conversion, which can be represented by the following three-dimensional matrix:
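  • A linear RGB-to-YUV conversion consistent with this description uses BT.601-derived coefficients, which Equation (6) presumably resembles:

```latex
\begin{pmatrix} Y \\ U \\ V \end{pmatrix}
=
\begin{pmatrix}
 0.299 &  0.587 &  0.114 \\
-0.147 & -0.289 &  0.436 \\
 0.615 & -0.515 & -0.100
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
```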
  • the imaging system 200 of Fig. 3 includes a color correction apparatus 100, which is shown as obtaining several inputs for calibration of color correction parameters 125.
  • an image sensor 130 is shown as acquiring an image of a color reference 150.
  • the image is then passed to the color correction apparatus 100, which can obtain an input color value C in of the image.
  • the input color value C in represents a pre-color-corrected value that reflects the image acquisition properties of the image sensor 130, filtering properties of an image filter 140, as well as any other optical properties of the imaging system 200.
  • the input color value C in can be transformed from the color space of the color reference image to a non-linear color space—for example, a CIE L*a*b*color space.
  • the transformation can be performed, for example, by first using a linear transformation from a RGB color space to an intermediate CIE XYZ color using Equation (1) shown above.
  • the color values in the intermediate CIE XYZ color space can be non-linearly transformed to a CIE L*a*b*color space as shown above in Equations (2) - (5) .
  • Such transformations can be performed on a processor 110 (shown in Fig. 1) of the color correction apparatus 100.
  • the color correction apparatus 100 can obtain a reference color value C ref that corresponds to the input color value C in for color reference 150.
  • the reference color value C ref can be transformed into a non-linear color space—for example, the CIE L*a*b*color space.
  • the reference color value C ref advantageously can be directly inputted into the color correction apparatus 100 in the CIE L*a*b*color space, thereby making the transformation step unnecessary.
  • the color correction apparatus 100 is further shown as obtaining noise evaluation color values C noise from a noise evaluation image 160.
  • the noise evaluation image 160 can be any image containing noise. As color correction tends to amplify noise, the noise evaluation image 160 can be used to calibrate color correction parameters 125 in order to limit noise amplification. Stated somewhat differently, the noise evaluation image 160 can be used to evaluate how noise is amplified with a given set of color correction parameters 125 (shown in Fig. 2) , and thereby to select a set of color correction parameters 125 with reduced noise amplification.
  • the noise evaluation color values C noise can be transformed into the YUV color space, as further described below with reference to Fig 7. The transformation can be performed, for example, using the linear transformation from the RGB color space to the YUV color space shown above in Equation (6) . This transformation can be performed using the processor 110 of the color correction apparatus 100.
  • the noise evaluation image 160 can be an image acquired by the image sensor 130 with or without filtering through the color filter 140.
  • the noise evaluation image 160 is preferably an image of the color reference 150. Imaging the color reference 150 advantageously allows the simultaneous determination of the input color values C in and the noise evaluation color values C noise .
  • the noise evaluation image 160 can be a virtual noise evaluation image 160A.
  • the virtual noise evaluation image 160A can be generated by the color correction apparatus 100 using a pre-determined set of noise generation parameters 126 (shown in Fig. 2) .
  • the noise generation parameters 126 can, for example, reflect the distribution of the noise that is generated virtually (for example, Poisson or Gaussian noise) .
  • the specific noise generation parameters 126 can reflect the types of noise that the imaging system 200 can be expected to encounter in usage.
  • a virtual noise evaluation image 160A can be used because the evaluation of noise amplification does not require information about the color of an underlying object that is imaged. Instead, an arbitrary image containing noise can be evaluated for how the noise of that image would be amplified under a given set of color correction parameters 125.
  • the noise evaluation color values C noise of the virtual noise evaluation image 160A can be represented as: C noise = C noise_free + n (Equation (7) )
  • C noise_free represents the color of the virtual noise evaluation image 160A before noise is added
  • n represents the noise added
  • once the inputs for color correction parameter calibration (for example, input color values C in , reference color values C ref , and noise evaluation color values C noise ) are obtained by the color correction apparatus 100, these inputs can be stored for later use by the color correction apparatus 100 (for example, in a memory 120 as shown in Fig. 1) .
  • the inputs for color correction parameter calibration can be obtained as part of an initialization process for a new imaging device 200 prior to usage.
  • the inputs for color correction parameter calibration can be stored in the memory 120 and called upon periodically to re-calibrate the color correction parameters 125 as desired (for example, as image response characteristics of the imaging device 200 change after wear and tear) .
  • the inputs for color correction parameter calibration can be, but do not need to be, re-obtained for each new color correction parameter calibration.
  • Turning to Fig. 4, an exemplary top-level method 400 of calibrating color correction parameters 125 is shown.
  • the method 400 advantageously can be applied to calibrating the color correction parameters 125 for a digital imaging device 200 (shown in Figs. 2 and 3) .
  • input color values C in and reference color values C ref are obtained for each of a plurality of color references 150 (shown in Figs. 2 and 3) .
  • the input color values C in and reference color values C ref are obtained or transformed into a non-linear color space—for example, a CIE L*a*b*color space—as described above with reference to Fig. 3.
  • a noise evaluation image 160 having a color noise for evaluating noise reduction is obtained.
  • a plurality of color correction parameters 125 are adjusted so as to optimize a fitness function J.
  • the fitness function J can comprise a color correction error e color and/or a noise amplification metric D noise based on the input color values C in , the reference color values C ref , and the noise evaluation image 160.
  • An exemplary embodiment of the adjusting is described in more detail below with respect to Fig. 5.
  • Turning to Fig. 5, an exemplary method 500 of calibrating color correction parameters 125 (shown in Fig. 2) for a digital imaging device 200 (shown in Figs. 2 and 3) is shown.
  • input color (or pre-correction) values C in for the color references 150 are color corrected using the current values of the color correction parameters 125 to obtain post-correction input color values Ĉ in .
  • This operation can be represented as: Ĉ in = CC (C in )
  • CC represents a color correction operation.
  • the specific implementation of the color correction operation CC depends on the underlying form of the color correction parameters 125.
  • the color correction parameters 125 can take the form of a matrix having dimensions n x m, where m is the dimensionality of the pre-correction color value and n is the dimensionality of the post-correction color value.
  • the color correction operation CC will take the form of a matrix multiplication that transforms an m-dimensional color value vector into an n-dimensional color value vector.
  • the pre-correction color value and the post-correction color value have the same dimensionality, in which case CC will take the form of a square matrix.
  • the pre- correction color value and the post-correction color value are each three-dimensional (for example, for color values in the RGB, CIE XYZ, CIE L*a*b*, and LUV color spaces) , in which case CC will take the form of a 3x3 matrix.
  • An advantage of using a matrix is that a matrix can describe a color correction operation CC using only n x m correction parameters 125, allowing decreased memory usage.
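  • A minimal sketch of matrix-based color correction is shown below; the helper name and the matrices are illustrative, not calibrated parameters from this disclosure:

```python
# Sketch of matrix-based color correction: an n x m color correction matrix
# (CCM) maps an m-dimensional pre-correction color vector to an
# n-dimensional post-correction color vector via matrix multiplication.
def apply_ccm(ccm, color):
    """Multiply an n x m matrix (list of rows) by an m-dimensional color."""
    return [sum(row[j] * color[j] for j in range(len(color))) for row in ccm]

# The identity CCM leaves colors unchanged; calibration searches for a
# matrix that maps acquired colors toward the reference colors.
identity_ccm = [[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]]
```

For the common three-channel case (RGB in, RGB out), the CCM is a 3x3 square matrix, as stated above.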
  • linear color correction using a matrix may be unsuitable for some applications.
  • the color correction parameters 125 can take the form of a look-up table (LUT) indexed in m dimensions that contains ordered m-tuples (a 1 , a 2 , ..., a m ) each mapping to an n-dimensional vector, where m is the dimensionality of the pre-correction color value and n is the dimensionality of the post-correction color value.
  • the look-up table is three-dimensional, that is, indexed in three dimensions.
  • interpolation operations can be performed when pre-correction color values fall in between discrete entries.
  • Such interpolation operations can include finding look-up table entries that have the closest distance (for example, Euclidean distance) to the pre-correction color value, and interpolating a corrected color value using the closest look-up table entries.
  • linear interpolations can be performed for one-dimensional look-up tables, and multi-linear interpolations can be performed for look-up tables in higher dimensions.
  • the color correction operation CC will take the form of a look-up operation in the look-up table, followed by an interpolation operation, if desired.
  • the color correction parameters 125 can be implemented in multiple ways simultaneously; for example, a combination of a matrix and a look-up table can be used.
  • a Shepard interpolation can be used to perform color correction where the color correction parameters 125 take the form of a look-up table (LUT) .
  • a color-corrected value for a given color p can be found as follows:
  • i is an index over the different input color values C in and their corresponding reference color values C ref
  • c i represents the ith value of the input color values C in
  • ‖p − c i ‖ represents a distance (for example, a Euclidean distance) between the given color p and c i
  • w i represents a weight of the ith input color value C in .
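  • One common form of Shepard (inverse-distance-weighted) interpolation, consistent with the quantities defined above, is sketched below; the power parameter P is an assumption, not taken from this description:

```latex
CC(p) = \sum_i w_i(p)\, C_{ref,i},
\qquad
w_i(p) = \frac{\lVert p - c_i \rVert^{-P}}{\sum_j \lVert p - c_j \rVert^{-P}}
```

With this weighting, the corrected value CC(p) reduces exactly to the reference value C ref,i when the given color p coincides with a calibration color c i .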
  • the post-correction input color values are compared with the reference color values C ref , and the color correction error e color is computed based on the comparison.
  • the color correction error e color can be expressed as: e color = √ ( Σ j ( C ref_j − Ĉ in_j ) 2 )
  • C ref_j and Ĉ in_j represent the jth components of the reference color values C ref and the post-correction input color values, respectively.
  • the color correction error e color is thus the Euclidean distance between the post-correction input color values and the reference color values C ref in the color space in which the color values are represented.
  • the color correction error e color can be taken as a weighted and/or unweighted average over the color patches 151.
  • noise evaluation color values C noise are color corrected using the current values of the color correction parameters 125 to obtain post-correction noise evaluation color values Ĉ noise . This operation can be represented as: Ĉ noise = CC (C noise )
  • CC represents a color correction operation as described above with reference to 501.
  • the specific color correction operation CC depends on the implementation of the color correction parameters 125 and, as described above with reference to 501, can take the form of a matrix or a look-up table with each form having respective advantages.
  • the post-correction noise evaluation color values are compared with pre-correction noise evaluation color values C noise , and the noise amplification metric D noise is found based on the comparison.
  • the noise amplification metric D noise can be any measure of the distance between the post-correction noise evaluation color values and the pre-correction noise evaluation color values C noise . The greater the value of the noise amplification metric D noise , the more the noise is amplified by applying the color correction.
  • the noise amplification metric D noise can be taken as a weighted and/or unweighted average over the color patches 151. In one embodiment, the noise amplification metric D noise can be taken as a weighted average over the color patches 151, for example: D noise = Σ i ω i D noise,i
  • i is an index over the color patches 151
  • N is the total number of color patches 151, over which the sum runs
  • ω i is a non-negative weight for color patch i.
  • the weights ⁇ i can be set according to the sensitivity of the average human perception to the color of each color patch 151. For example, colors having greater sensitivity of human perception can be given greater weights ⁇ i.
  • a fitness function J can be determined.
  • the fitness function J can be found as a weighted and/or unweighted sum of the color correction error e color and the noise amplification metric D noise .
  • an unweighted fitness function J can be represented as the sum: J = e color + D noise
  • a weighted fitness function J can be used to advantageously weight the color correction error e color more than the noise amplification metric D noise , or vice versa.
  • the amount of weighting for the fitness function J can be determined, for example, by repeating the color correction parameter calibration for different weights and taking the weight that gives the best (for example, the lowest) value of the fitness function J.
  • the amount of weighting for the fitness function J can be determined based on prior color correction parameter calibrations (for example, using different imaging devices) .
  • a first optimization process is applied to obtain initial values CC 0 for the color correction parameters 125.
  • the first optimization preferably samples the space of possible color correction parameter values broadly so as to avoid becoming trapped in local optima. Any of various optimization processes can be used in the first optimization at 601, including a genetic process, a simulated annealing method, and other non-greedy methods that avoid local optima.
  • a second optimization process is applied using the initial values CC 0 as a starting point to obtain further optimized values CC opt for the color correction parameters 125.
  • a goal is to find the local optimum value.
  • direct optimization methods are suitable for the second optimization at 602. Exemplary direct optimization methods include, but are not limited to, gradient descent methods.
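  • As a minimal sketch of such a second-stage refinement, a finite-difference gradient descent can polish the parameters from a starting point such as CC 0 ; the fitness function J, step size, and iteration count below are illustrative assumptions, not values from this disclosure:

```python
# Second-stage (local) optimization sketch: gradient descent on a fitness
# function J using forward-difference gradient estimates.
def gradient_descent(J, params, step=0.01, eps=1e-6, iters=500):
    """Refine a flat parameter vector by descending the gradient of J."""
    params = list(params)
    for _ in range(iters):
        base = J(params)
        grad = []
        for k in range(len(params)):
            bumped = list(params)
            bumped[k] += eps              # perturb one parameter
            grad.append((J(bumped) - base) / eps)  # forward difference
        params = [p - step * g for p, g in zip(params, grad)]
    return params
```

Because gradient descent only finds the local optimum nearest its starting point, it relies on the first (global) stage to supply a good initial value CC 0 .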
  • Turning to Fig. 7, an exemplary genetic process 700 is shown for calibrating color correction parameters 125 (shown in Fig. 2) .
  • a genetic process is an optimization method loosely based on evolutionary principles in biology, where possible solutions to a problem are generated as members of a “population, ” and the members are selected based on a fitness function over a number of selection rounds.
  • the genetic process 700 can be used to find an optimal solution to the problem of selecting a set of color correction parameters 125 to optimize (for example, minimize) the fitness function J that includes a color correction error e color and a noise amplification metric D noise .
  • a predetermined number N of initial sets of candidate color correction parameters 125A are selected as the initial “population” of solutions.
  • the predetermined number N can comprise any suitable number of initial sets and, for example, can be at least 10, 50, 100, 500, 1000, or more.
  • the initial population of the N sets of candidate color correction parameters 125A can be selected, for example, by sampling the space of possible parameters at specified intervals. Alternatively and/or additionally, the sampling can be done at random.
  • the fitness function J is evaluated for the members of the “population, ” that is, for each of the N sets of candidate color correction parameters 125A. From among the N initial sets of candidate color correction parameters 125A, the set that has the best value of the fitness function J (for example, the minimal value, if the fitness function J is to be minimized) is chosen. At 703, if the best value passes a predefined threshold, the genetic process stops at 704. Alternatively and/or additionally, at 705, if certain conditions are met (for example, the genetic process has been run for more than a certain number of rounds, or the genetic process has not produced more than a specific amount of improvement in the fitness function J from the prior round) , the genetic process stops at 704. After the genetic process stops, at 704, the candidate color correction parameters 125A giving the best value of the fitness function J are declared to be the “winner, ” and these candidate color correction parameters 125A can be outputted and/or used as a starting point for further optimization.
  • the genetic process continues, at 706, by discarding and replacing candidate color correction parameters 125A having the worst values of the fitness function J (for example, the highest values, where the fitness function J is to be minimized) .
  • a given percentile of the worst-performing candidate color correction parameters 125A can be discarded and replaced with new candidate color correction parameters 125A.
  • the new candidate color correction parameters 125A can, for example, be generated in the same way as the initial candidate color correction parameters 125A. In some embodiments, at least 10%, 20%, 30%, 40%, 50%, or more of the worst-performing candidates can be discarded.
  • “mutation” operations can be applied to the candidate color correction parameters 125A, simulating biological mutations of chromosomes between successive generations of individuals.
  • each set of candidate color correction parameters 125A can be conceptually treated as a “chromosome” that is also subject to mutation.
  • Mutations to the candidate color correction parameters 125A include, for example, “point mutations” changing individual parameters at random and/or “crossover” mutations between two sets of candidate color correction parameters 125A.
  • a crossover can be performed by swapping corresponding rows and/or columns or portions thereof between two candidate matrices.
  • a crossover can be performed by swapping one or more corresponding entries in the look-up table.
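  • The selection, crossover, and mutation steps above can be sketched as a simple elitist genetic loop; the fitness function J, population size, discard fraction, and mutation scale below are illustrative assumptions, and the parameters are treated as a flat vector (e.g. a flattened 3x3 matrix):

```python
import random

# Sketch of a genetic process minimizing a fitness function J over a
# flat parameter vector of length dim (dim >= 2).
def genetic_minimize(J, dim, pop_size=50, rounds=100, keep=0.5, scale=0.1):
    # Initial "population": random candidate parameter sets.
    pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(rounds):
        pop.sort(key=J)                          # best (lowest J) first
        survivors = pop[:int(pop_size * keep)]   # discard worst performers
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, dim)       # "crossover" of two parents
            child = a[:cut] + b[cut:]
            k = random.randrange(dim)            # "point mutation"
            child[k] += random.gauss(0, scale)
            children.append(child)
        pop = survivors + children
    return min(pop, key=J)
```

Keeping the survivors unchanged (elitism) guarantees the best fitness value never worsens from round to round, matching the stopping criteria described above.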
  • the noise amplification metric D noise can be determined using any suitable approach, including but not limited to using peak signal-to-noise ratios (PSNR) and/or color variances.
  • noise-evaluation color values C noise can be found by adding noise n, as described in Equation (7) .
  • a color correction CC can be applied to the noise-free color values C noise_free and noise-evaluation color values C noise , respectively, to find corresponding post-correction values of the noise-free color values and noise evaluation color values
  • the color correction is shown in Equation (11) and in Equation (14) below, where S = Σ_j S_j, MAX_I is the maximum value of C noise and C noise_free, and j is an index over virtual color patches.
  • determining the noise amplification metric D noise can include finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
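As a concrete illustration of such a PSNR difference, the sketch below computes PSNR between noise-free and noise-evaluation values before and after correction and subtracts the two. The `psnr` helper and all sample pixel values are hypothetical; real use would operate on the images described in the text.

```python
import math

def psnr(reference, test, max_i=255.0):
    # Peak signal-to-noise ratio between two equal-length pixel sequences:
    # PSNR = 10 * log10(MAX_I^2 / MSE).
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_i ** 2 / mse)

# Hypothetical data: noise-free patch values and their noise-evaluation
# counterparts, before and after a made-up color correction that slightly
# amplifies the noise.
noise_free_pre  = [100, 150, 200, 120]
noisy_pre       = [102, 148, 203, 119]
noise_free_post = [105, 155, 210, 126]
noisy_post      = [108, 152, 214, 124]

psnr_pre  = psnr(noise_free_pre, noisy_pre)
psnr_post = psnr(noise_free_post, noisy_post)
psnr_difference = psnr_pre - psnr_post  # positive when correction amplified noise
print(round(psnr_difference, 3))
```

A positive difference means the corrected image is noisier relative to its noise-free counterpart than the pre-correction image was, which is what the metric is meant to penalize.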
  • the noise amplification metric D noise can be determined by downsampling.
  • the downsampling can be a spatial downsampling, as illustrated in Fig. 9.
  • Fig. 9 illustrates an embodiment of downsampling in which an image (for example, images having pre-correction noise evaluation color values C noise, or pre-correction noise-free color values C noise_free ) is sampled at every other pixel in a first downsampling.
  • the downsampled image can be downsampled again, and the downsampling process can be repeated as often as desired, up to M iterations.
  • a similar downsampling process can be performed for images that have been color-corrected (for example, images having post-correction noise evaluation color values or post-correction noise-free color values ) . Since downsampling can be an iterative process, color values and PSNR values at particular iterations are denoted with a subscript from 0 to M corresponding to the iteration, as shown in Figs. 8-9.
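The every-other-pixel spatial downsampling of Fig. 9, repeated for up to M iterations, can be sketched as below. The function names and the toy 8×8 image are illustrative assumptions.

```python
def downsample(image):
    # Spatially downsample a 2-D image (list of rows) by keeping every
    # other pixel in both dimensions, as in the first downsampling of Fig. 9.
    return [row[::2] for row in image[::2]]

def downsample_pyramid(image, m):
    # Repeat the downsampling up to M iterations, keeping each level;
    # level i corresponds to the subscript-i color values in Figs. 8-9.
    levels = [image]
    for _ in range(m):
        levels.append(downsample(levels[-1]))
    return levels

# Toy 8x8 image with distinct pixel values.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
levels = downsample_pyramid(img, 2)
print([len(level) for level in levels])
```

Each iteration halves the image in both dimensions, so the PSNR computed at each level reflects noise at a successively coarser spatial frequency.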
  • the downsampled images can be used to determine one or more downsampled PSNRs as well as a downsampled PSNR difference.
  • pre-correction noise-free color values and pre-correction noise evaluation color values that have undergone one round of downsampling can be used to find a corresponding downsampled PSNR 1 .
  • post-correction noise-free color values and post-correction noise evaluation color values that have undergone one round of downsampling can be used to find a corresponding downsampled post-correction PSNR.
  • a set of pre-correction PSNR values PSNR i and corresponding post-correction PSNR values will be obtained, where i ranges from 0 to M.
  • the noise amplification metric D noise can be obtained by taking a weighted average of a PSNR difference and at least one downsampled PSNR difference. In some embodiments, the noise amplification metric D noise can be obtained by taking a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  • the weight applied to each PSNR difference and/or downsampled PSNR difference can be represented as w i , where i ranges from 0 to M.
  • M is the total number of downsampling iterations and w i is the weight given to each downsampling iteration i.
  • At least one of the weights w i is non-zero. Stated somewhat differently, PSNR differences at one or more iterations i can be given a weight of zero to effectively ignore those PSNR differences, provided that not all of the weights are zero.
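A weighted average of the PSNR difference and the downsampled PSNR differences, with weights w_i for i = 0..M, might look like the following sketch. The function name, the guard rejecting all-zero weights, and the sample PSNR values and weights are hypothetical.

```python
def noise_amplification_metric(psnr_pre, psnr_post, weights):
    # Weighted average of per-iteration PSNR differences:
    # D_noise = sum_i w_i * (PSNR_i - PSNR'_i) / sum_i w_i, i = 0..M,
    # where i indexes the downsampling iteration. A zero weight ignores
    # that iteration's difference; not all weights may be zero.
    if not any(weights):
        raise ValueError("at least one weight must be non-zero")
    diffs = [pre - post for pre, post in zip(psnr_pre, psnr_post)]
    return sum(w * d for w, d in zip(weights, diffs)) / sum(weights)

# Hypothetical PSNRs at iterations 0..2 (original image plus two downsamplings).
pre  = [38.0, 36.5, 35.0]
post = [35.0, 34.5, 34.0]
w    = [0.5, 0.3, 0.2]
print(noise_amplification_metric(pre, post, w))
```

Larger weights at the early iterations emphasize full-resolution noise; shifting weight toward later iterations emphasizes noise that survives downsampling.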
  • an exemplary method 1000 is shown for finding the noise amplification metric D noise that locates and compares peak signal-to-noise ratios (PSNR) at successively downsampled frequencies.
  • an initial value of a pre-correction PSNR PSNR 0 can be found using the pre-correction noise evaluation color values and pre-correction noise-free color values as described above with reference to Figs. 8 and 9.
  • at 1002a, the pre-correction noise evaluation color values and pre-correction noise-free color values can each be downsampled to obtain corresponding downsampled color values.
  • a downsampled PSNR 1 can be found from the downsampled pre-correction color values.
  • the process of downsampling and finding a corresponding downsampled PSNR can be repeated for M iterations, as desired.
  • the iterative downsampling process can be repeated for color-corrected images.
  • an initial value of a post-correction PSNR can be found using the post-correction noise evaluation color values and post-correction noise-free color values as described above with reference to Figs. 8 and 9.
  • At 1002b, the post-correction noise evaluation color values and post-correction noise-free color values can each be downsampled to obtain corresponding downsampled color values.
  • a downsampled post-correction PSNR can be found from the downsampled post-correction color values.
  • the process of downsampling and finding a corresponding downsampled post-correction PSNR can be repeated for M iterations, as desired.
  • the set of PSNR values and color-corrected PSNR values found at iterations 0 to M can be used to find the noise amplification metric D noise —for example, as shown above in Equation (19) .
  • the noise amplification metric D noise can be obtained based on a variance of Y, U, and V components of the pre-correction noise evaluation color values and post-correction noise evaluation color values, as shown in Equation (20).
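Since the exact form of Equation (20) is not reproduced here, the sketch below shows one plausible variance-based reading: D_noise as the summed increase in Y, U, and V component variance from the pre-correction to the post-correction noise evaluation image. All names and sample component values are assumptions.

```python
def variance(xs):
    # Population variance of a sequence of component values.
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def yuv_variance_metric(pre_yuv, post_yuv):
    # pre_yuv / post_yuv: dicts mapping 'Y', 'U', 'V' to per-pixel component
    # lists of the noise evaluation image before / after color correction.
    # One plausible variance-based D_noise: the summed increase in component
    # variance introduced by the color correction.
    return sum(variance(post_yuv[c]) - variance(pre_yuv[c]) for c in "YUV")

# Hypothetical 4-pixel noise evaluation patch in YUV.
pre  = {"Y": [100, 102, 98, 101], "U": [50, 51, 49, 50], "V": [60, 59, 61, 60]}
post = {"Y": [100, 105, 95, 102], "U": [50, 53, 47, 50], "V": [60, 57, 63, 61]}
print(yuv_variance_metric(pre, post))
```

A correction that spreads the component values further from their means raises the metric, flagging noise amplification even without a noise-free reference.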
  • Fig. 11 shows an exemplary embodiment of the imaging system 200, wherein the imaging system 200 is installed aboard an unmanned aerial vehicle (UAV) 1100.
  • a UAV 1100, colloquially referred to as a “drone,” is an aircraft without an onboard human pilot and whose flight is controlled autonomously and/or by a remote pilot.
  • the imaging system 200 is suitable for installation aboard any of various types of UAVs 1100, including, but not limited to, rotorcraft, fixed-wing aircraft, and hybrids thereof.
  • Suitable rotorcraft include, for example, single rotor, dual rotor, trirotor, quadrotor (quadcopter), hexarotor, and octorotor rotorcraft.
  • the imaging system 200 can be installed on various portions of the UAV 1100.
  • the imaging system 200 can be installed within a fuselage 1110 of the UAV 1100.
  • the imaging system 200 can be mounted onto an exterior surface 1120 (for example, on the underside 1125) of the UAV 1100.
  • the various components of the imaging system 200 can be installed on the same portion, and/or different portions, of the UAV 1100.
  • an image sensor 130 can be mounted on an exterior surface 1120 to facilitate image acquisition, while a color correction apparatus 100 advantageously can be installed within the fuselage 1110 for protection against wear and tear.
  • the various components of the color correction apparatus 100 can be installed on the same portion, and/or different portions, of the UAV 1100.
  • the imaging system 200 can include, or be mounted on, any type of mobile platform.
  • exemplary suitable mobile platforms include, but are not limited to, bicycles, automobiles, trucks, ships, boats, trains, helicopters, aircraft, various hybrids thereof, and the like.
  • Fig. 12 shows a chrominance diagram of resulting color errors in a CIE L*a*b* color space (showing a cross-section in the a* and b* dimensions), showing a mean color correction error of 16.8 with a maximum color correction error of 29.7.
  • Fig. 14 shows a chrominance diagram of resulting color errors in a CIE L*a*b* color space, showing a mean color correction error of 17.7 with a maximum color correction error of 35, both of which are significantly greater than the corresponding errors obtained with noise regulation.
  • Fig. 15 shows a plot of the corresponding noise levels of the experiment, showing that the average Y (luminance) noise is 0.86%, while the average chrominance noise in the R, G, and B components is 1.56%, 1.31%, and 1.66%, respectively, all significantly greater than the noise obtained by calibration with noise regulation. Accordingly, it can be seen from this experiment that color correction parameter calibration with noise regulation is an improvement over color correction parameter calibration without noise regulation.
  • the following example shows the process of optimizing a set of color correction parameters using the two-step method of Fig. 6.
  • a genetic process is used to find a set of initial parameters so as to avoid becoming trapped in local optima.
  • the fitness value of the parameters for the genetic process over six hundred generations is shown in Fig. 16 at the upper panel, showing that the fitness value reaches a best value of 335.134 after 600 generations.
  • a direct optimization process is used starting from the initial parameters produced at the end of step one.
  • the direct optimization method reduces the average distance between the corrected input colors and the corresponding reference colors, as shown in Fig. 16 at the lower panel. This example shows that it is advantageous to use a two-step optimization method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Image Communication Systems (AREA)

Abstract

A system and method for calibrating a digital imaging device for color correction is disclosed. The method comprises obtaining an input color value and a reference color value for each of a plurality of color references, as well as a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space. A plurality of color correction parameters are determined as optimized based on evaluating a fitness function in the non-linear color space. The non-linear color space can be a CIE L*a*b*color space. The fitness function can include a color correction error and a noise amplification metric so as to reduce noise amplification during color correction.

Description

A portion of the disclosure of this patent document contains material which is subject to (copyright or mask work) protection. The (copyright or mask work) owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all (copyright or mask work) rights whatsoever.
COLOR CORRECTION SYSTEM AND METHOD
FIELD
The disclosed embodiments relate generally to digital image processing and more particularly, but not exclusively, to systems and methods for color correction of digital images.
BACKGROUND
Because digital imaging devices acquire colors differently from the way that human eyes perceive color, images acquired by digital imaging devices typically benefit from color correction. However, the color correction process may be prone to introducing and/or amplifying different types of noise. This is the general area that embodiments of the invention are intended to address.
SUMMARY
Described herein are systems and methods that can calibrate a digital imaging device for color correction. The method comprises obtaining an input color value and a reference color value for each of a plurality of color references, as well as a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space. A plurality of color correction parameters are determined as optimized based on evaluating a fitness function in the non-linear color space. The non-linear color space can be a CIE L*a*b* color space. The fitness function can include a color correction error and a noise amplification metric so as to reduce noise amplification during color correction.
In accordance with a first aspect herein, there is disclosed a method for calibrating a digital imaging device for color correction, the method comprising obtaining an input color value and a reference color value for each of a plurality of color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space; and determining a plurality of color  correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
In some embodiments, the non-linear color space is a CIE L*a*b*color space.
In some embodiments, determining the plurality of color correction parameters comprises adjusting the color correction parameters based on the input color values, reference color values, and the noise evaluation image.
In some embodiments, the fitness function comprises a color correction error and a noise amplification metric.
In some embodiments, adjusting the plurality of color correction parameters comprises determining the color correction error by color correcting the input color values and comparing the corrected input color values with the reference color values.
In some embodiments, adjusting the plurality of color correction parameters comprises determining the noise amplification metric by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
In some embodiments, determining the noise amplification metric comprises determining the noise amplification metric using a peak signal-to-noise ratio (PSNR) .
In some embodiments, determining the noise amplification metric using a PSNR comprises finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
In some embodiments, determining the noise amplification metric using a PSNR comprises determining a downsampled PSNR difference.
In some embodiments, determining the downsampled PSNR difference comprises downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
In some embodiments, determining the noise amplification metric comprises determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
In some embodiments, determining the noise amplification metric using a PSNR comprises determining a plurality of successively downsampled PSNR differences and determining the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
In some embodiments, determining the noise amplification metric comprises weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, and at least one of the weights is not zero.
In some embodiments, determining the noise amplification metric comprises determining the noise amplification metric based on color variances.
In some embodiments, adjusting the plurality of color correction parameters comprises determining the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
In some embodiments, the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
In some embodiments, each color patch is weighted according to an average sensitivity of human perception to the color patch.
In some embodiments, adjusting the color correction parameters comprises optimizing the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
In some embodiments, adjusting the color correction parameters comprises adjusting the parameters using a genetic process. Some embodiments comprise further adjusting the color correction parameters using a direct search method.
In some embodiments, obtaining the input color values of the color calibration images comprises imaging one or more color references comprising a plurality of color patches.
In some embodiments, the color correction parameters are in the form of a matrix.
In some embodiments, the color correction parameters are in the form of a look-up table.
In some embodiments, adjusting the color correction parameters comprises look-up operations and interpolation operations in the look-up table.
In some embodiments, the interpolation operations comprise a Shepard interpolation.
In some embodiments, the noise evaluation image is a virtual noise evaluation image.
In some embodiments, the virtual noise evaluation image comprises noise added to a virtual noise-free image.
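The look-up-table embodiments above mention Shepard interpolation for input colors that fall between table nodes. A minimal inverse-distance-weighted (Shepard) interpolation sketch, with a hypothetical two-entry table, might look like the following; the function name, the `power` parameter, and the sample table entries are all illustrative assumptions.

```python
def shepard_interpolate(samples, query, power=2.0):
    # samples: list of ((r, g, b), corrected_rgb) pairs from a look-up table.
    # query: an (r, g, b) input color that may not lie on a table node.
    # Shepard's method: each table entry contributes with weight
    # 1 / distance^power, so nearer entries dominate the result.
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for point, value in samples:
        d2 = sum((p - q) ** 2 for p, q in zip(point, query))
        if d2 == 0:
            return list(value)  # exact table hit: no interpolation needed
        w = 1.0 / d2 ** (power / 2.0)
        den += w
        num = [n + w * v for n, v in zip(num, value)]
    return [n / den for n in num]

# Hypothetical two-entry look-up table mapping input RGB to corrected RGB.
lut = [((0, 0, 0), (0, 0, 0)), ((255, 255, 255), (250, 252, 248))]
mid_gray = shepard_interpolate(lut, (128, 128, 128))
print(mid_gray)
```

A query midway between the two nodes yields a corrected color close to the average of their corrected values, weighted slightly toward the nearer node.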
In another aspect herein, there is disclosed a color correction apparatus configured for calibration for color correction based upon images of a plurality of color references each having a reference color, comprising a memory for storing a plurality of color correction parameters; and a processor for performing color correction of a digital image, wherein the processor is configured to obtain an input color value and a reference color value for each of the color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space, and determine a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
In some embodiments, the non-linear color space is a CIE L*a*b*color space.
In some embodiments, the processor is configured to determine the plurality of color correction parameters by adjusting the color correction parameters based on the input color values, reference color values, and the noise evaluation image.
In some embodiments, the fitness function comprises a color correction error and a noise amplification metric.
In some embodiments, the processor is configured to adjust the plurality of color correction parameters by color correcting the input color values and comparing the corrected input color values with the reference color values.
In some embodiments, the processor is configured to adjust the plurality of color correction parameters by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
In some embodiments, the processor is configured to determine the noise amplification metric using a peak signal-to-noise ratio (PSNR) .
In some embodiments, the processor is configured to determine the noise amplification metric using a PSNR by finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
In some embodiments, the processor is configured to determine the noise amplification metric using a PSNR by determining a downsampled PSNR difference.
In some embodiments, the processor is configured to determine the downsampled PSNR difference by downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
In some embodiments, the processor is configured to determine the noise amplification metric by determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
In some embodiments, the processor is configured to determine a plurality of successively downsampled PSNR differences and determine the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
In some embodiments, the processor is configured to determine the noise amplification metric by weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, and at least one of the weights is not zero.
In some embodiments, the processor is configured to determine the noise amplification metric based on color variances.
In some embodiments, the processor is configured to determine the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
In some embodiments, the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
In some embodiments, each color patch is weighted according to an average sensitivity of human perception to the color patch.
In some embodiments, the processor is configured to optimize the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
In some embodiments, the processor is configured to adjust the color correction parameters using a genetic process.
In some embodiments, the processor is configured to further adjust the color correction parameters using a direct search method.
In some embodiments, the processor is configured to obtain the input color values of the color calibration images by imaging one or more color references comprising a plurality of color patches.
In some embodiments, the color correction parameters are in the form of a matrix.
In some embodiments, the color correction parameters are in the form of a look-up table.
In some embodiments, the processor is configured to adjust the color correction parameters by look-up operations and interpolation operations in the look-up table.
In some embodiments, the interpolation operations comprise a Shepard interpolation.
In some embodiments, the noise evaluation image is a virtual noise evaluation image.
In some embodiments, the virtual noise evaluation image comprises noise added to a virtual noise-free image.
In some embodiments, the color correction apparatus is mounted aboard a mobile platform.
In some embodiments, the mobile platform is an unmanned aerial vehicle (UAV) .
In another aspect herein, there is disclosed a digital imaging device, comprising an image sensor for imaging a plurality of color references; and a processor for performing color correction of a digital image, wherein the processor is configured to obtain an input color value and a reference color value for each of the color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space, and determine a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
In some embodiments, the non-linear color space is a CIE L*a*b*color space.
In some embodiments, the processor is configured to determine the plurality of color correction parameters by adjusting the color correction parameters based on the input color values, reference color values, and the noise evaluation image.
In some embodiments, the fitness function comprises a color correction error and a noise amplification metric.
In some embodiments, the processor is configured to adjust the plurality of color correction parameters by color correcting the input color values and comparing the corrected input color values with the reference color values.
In some embodiments, the processor is configured to adjust the plurality of color correction parameters by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
In some embodiments, the processor is configured to determine the noise amplification metric using a peak signal-to-noise ratio (PSNR) .
In some embodiments, the processor is configured to determine the noise amplification metric using a PSNR by finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
In some embodiments, the processor is configured to determine the noise amplification metric using a PSNR by determining a downsampled PSNR difference.
In some embodiments, the processor is configured to determine the downsampled PSNR difference by downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
In some embodiments, the processor is configured to determine the noise amplification metric by determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
In some embodiments, the processor is configured to determine a plurality of successively downsampled PSNR differences and determine the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
In some embodiments, the processor is configured to determine the noise amplification metric by weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, and at least one of the weights is not zero.
In some embodiments, the processor is configured to determine the noise amplification metric based on color variances.
In some embodiments, the processor is configured to determine the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
In some embodiments, the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
In some embodiments, each color patch is weighted according to an average sensitivity of human perception to the color patch.
In some embodiments, the processor is configured to optimize the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
In some embodiments, the processor is configured to adjust the color correction parameters using a genetic process.
In some embodiments, the processor is configured to further adjust the color correction parameters using a direct search method.
In some embodiments, the processor is configured to obtain the input color values of the color calibration images by imaging one or more color references comprising a plurality of color patches.
In some embodiments, the color correction parameters are in the form of a matrix.
In some embodiments, the color correction parameters are in the form of a look-up table.
In some embodiments, the processor is configured to adjust the color correction parameters by look-up operations and interpolation operations in the look-up table.
In some embodiments, said interpolation operations comprise a Shepard interpolation.
In some embodiments, the noise evaluation image is a virtual noise evaluation image.
In some embodiments, the virtual noise evaluation image comprises noise added to a virtual noise-free image.
In some embodiments, the color correction apparatus is mounted aboard a mobile platform.
In some embodiments, the mobile platform is an unmanned aerial vehicle (UAV) .
In another aspect herein, there is disclosed a non-transitory readable medium storing instructions for calibrating a digital imaging device for color correction, wherein the instructions comprise instructions for obtaining an input color value and a reference color value for each of a plurality of color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space; and determining a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
In some embodiments, the non-linear color space is a CIE L*a*b*color space.
In some embodiments, said determining the plurality of color correction parameters comprises adjusting the color correction parameters based on the input color values, reference color values, and the noise evaluation image.
In some embodiments, the fitness function comprises a color correction error and a noise amplification metric.
In some embodiments, said adjusting the plurality of color correction parameters comprises determining the color correction error by color correcting the input color values and comparing the corrected input color values with the reference color values.
In some embodiments, said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
In some embodiments, said determining the noise amplification metric comprises determining the noise amplification metric using a peak signal-to-noise ratio (PSNR) .
In some embodiments, said determining the noise amplification metric using a PSNR comprises finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
In some embodiments, said determining the noise amplification metric using a PSNR comprises determining a downsampled PSNR difference.
In some embodiments, said determining the downsampled PSNR difference comprises downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image; downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
In some embodiments, said determining the noise amplification metric comprises determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
In some embodiments, said determining the noise amplification metric using a PSNR comprises determining a plurality of successively downsampled PSNR differences and determining the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
In some embodiments, said determining the noise amplification metric comprises weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, and at least one of the weights is not zero.
In some embodiments, said determining the noise amplification metric comprises determining the noise amplification metric based on color variances.
In some embodiments, said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
In some embodiments, said noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
In some embodiments, each color patch is weighted according to an average sensitivity of human perception to the color patch.
In some embodiments, said adjusting the color correction parameters comprises optimizing the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
In some embodiments, said adjusting the color correction parameters comprises adjusting the parameters using a genetic process.
Some embodiments comprise further adjusting the color correction parameters using a direct search method.
In some embodiments, said obtaining the input color values of the color calibration images comprises imaging one or more color references comprising a plurality of color patches.
In some embodiments, the color correction parameters are in the form of a matrix.
In some embodiments, the color correction parameters are in the form of a look-up table.
In some embodiments, said adjusting the color correction parameters comprises look-up operations and interpolation operations in the look-up table.
In some embodiments, said interpolation operations comprise a Shepard interpolation.
In some embodiments, the noise evaluation image is a virtual noise evaluation image.
In some embodiments, the virtual noise evaluation image comprises noise added to a virtual noise-free image.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is an exemplary top-level block diagram illustrating an embodiment of a color correction apparatus for color-correcting a digital image.
Fig. 2 is an exemplary diagram illustrating an embodiment of an imaging system including the color correction apparatus of Fig. 1, wherein the imaging system is shown imaging a color reference using an image sensor.
Fig. 3 is an exemplary diagram illustrating an alternative embodiment of the imaging system of Fig. 2, wherein the color correction apparatus is shown as acquiring various color values for calibration of color correction parameters.
Fig. 4 is an exemplary top-level flow chart illustrating an embodiment of a method for calibrating a digital imaging device.
Fig. 5 is an exemplary flow chart illustrating an alternative embodiment of the method of Fig. 4, wherein color correction parameters are optimized for calibrating a digital imaging device.
Fig. 6 is an exemplary flow chart illustrating an alternative embodiment of the method of Fig. 5, wherein the method includes a two-step optimization method.
Fig. 7 is an exemplary flow chart illustrating another alternative embodiment of the method of Fig. 5, wherein the method includes a genetic process.
Fig. 8 is an exemplary diagram illustrating an embodiment of the method of Fig. 5, wherein the method includes sampling at different spatial frequencies to determine a noise amplification metric.
Fig. 9 is an exemplary diagram illustrating an embodiment of the method of Fig. 8 with spatial downsampling.
Fig. 10 is an exemplary flow chart illustrating an embodiment of the method of Fig. 5, wherein the method includes sampling at different spatial frequencies to determine a noise amplification metric.
Fig. 11 is an exemplary diagram illustrating an embodiment of an imaging system installed on an unmanned aerial vehicle (UAV) .
Fig. 12 is an exemplary diagram illustrating a chrominance diagram showing a color error of a first experiment testing the efficacy of optimizing color correction parameters with noise regulation.
Fig. 13 is an exemplary diagram illustrating noise values of the experiment of Fig. 12 testing the efficacy of optimizing color correction parameters with noise regulation.
Fig. 14 is an exemplary diagram illustrating a chrominance diagram showing a color error of a second experiment testing the efficacy of optimizing color correction parameters without noise regulation.
Fig. 15 is an exemplary diagram illustrating noise values of the experiment of Fig. 14 testing the efficacy of optimizing color correction parameters without noise regulation.
Fig. 16 is an exemplary diagram illustrating a two-step method for optimizing color correction parameters with noise regulation.
It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the preferred embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention is illustrated, by way of example and not by way of limitation, in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” or “some” embodiment (s) in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
Color correction is a process that transforms signals acquired by photosensors of digital imaging devices into colors that look realistic. Color correction is a transformation defined by a set of color correction parameters. These parameters are typically calibrated for each individual digital imaging device to find customized parameter values that accurately reflect the color response characteristics of the individual device.
The calibration of color correction parameters entails an optimization process in which color values acquired by the imaging device are compared with known reference color values. Typically, the goal during the optimization process is to minimize a difference between the acquired colors post-color-correction and the known reference values. A drawback of this approach, however, is that accounting for color correction accuracy alone can often result in parameters that excessively amplify noise. Image noise can include color and brightness variations in an image. These variations are not features of the original object imaged but, instead, are attributable to artifacts introduced by the acquisition and processing of the image. Sources of noise include, for example, quantum exposure noise, dark current noise, thermal noise, readout noise, and others. Since image noise is inversely proportional to the size of the imaging device, the problem of noise is especially acute for smaller imaging devices. When image acquisition is performed aboard mobile platforms such as unmanned aerial vehicles (UAVs), the problem is compounded both because of the smaller cameras used on the mobile platforms and because of noise introduced by movement of the mobile platforms. In view of the foregoing, there is a need for improved color correction systems and methods that increase color correction accuracy while limiting noise amplification.
The present disclosure sets forth systems and methods for color correction of a digital image which overcome shortcomings of existing color correction techniques by increasing color correction accuracy while limiting noise amplification. Based on color reference images, color correction parameters are calibrated to increase color correction accuracy while limiting noise amplification. The calibration can be performed in the CIE L*a*b* color space to more closely reflect human perception of distances between colors. The calibration can be performed with reference to a virtual noisy image that can be sampled at different spatial frequencies. At each spatial frequency, a peak signal-to-noise ratio (PSNR) can be used to evaluate the amount of noise introduced by color correction. The color correction parameters can be optimized by using a genetic process. A two-step parameter optimization method can be used that avoids trapping the optimization process in local optima. The present systems and methods advantageously are suitable for use, for example, by unmanned aerial vehicles (UAVs) and other mobile platforms.
Turning now to Fig. 1, an exemplary color correction apparatus 100 is shown as including a processor 110 and a memory 120. The processor 110 can comprise any type of processing system. Exemplary processors 110 can include, without limitation, one or more general purpose microprocessors (for example, single or multi-core processors) , application-specific integrated circuits, application-specific instruction-set processors, graphics processing units, physics processing units, digital signal processing units, coprocessors, network  processing units, audio processing units, encryption processing units, and the like. In certain embodiments, the processor 110 can include an image processing engine or media processing unit, which can include specialized hardware for enhancing the speed and efficiency of certain operations for image capture, image filtering, and image processing. Such operations include, for example, Bayer transformations, demosaicing operations, white balancing operations, color correction operations, noise reduction operations, and/or image sharpening/softening operations. In certain embodiments, the processor 110 can include specialized hardware and/or software for performing various color correction parameter calibration functions and operations described herein. Specialized hardware can include, but are not limited to, specialized parallel processors, caches, high speed buses, and the like.
The memory 120 can comprise any type of memory and can be, for example, a random access memory (RAM) , a static RAM, a dynamic RAM, a read-only memory (ROM) , a programmable ROM, an erasable programmable ROM, an electrically erasable programmable ROM, a flash memory, a secure digital (SD) card, and the like. Preferably, the memory 120 has a storage capacity that accommodates the needs of the color correction parameter calibration functions and operations described herein. The memory 120 can have any commercially-available memory capacity suitable for use in image processing applications and preferably has a storage capacity of at least 512 Megabytes, 1 Gigabyte, 2 Gigabytes, 4 Gigabytes, 16 Gigabytes, 32 Gigabytes, 64 Gigabytes, or more. In some embodiments, the memory 120 can be a non-transitory storage medium that can store instructions for performing any of the processes described herein.
The color correction apparatus 100 can further include any hardware and/or software desired for performing the color correction parameter calibration functions and operations described herein. For example, the color correction apparatus 100 can include one or more input/output interfaces (not shown) . Exemplary interfaces include, but are not limited to, universal serial bus (USB) , digital visual interface (DVI) , display port, serial ATA (SATA) , IEEE 1394 interface (also known as FireWire) , serial, video graphics array (VGA) , super video graphics array (SVGA) , small computer system interface (SCSI) , high-definition multimedia interface (HDMI) , audio ports, and/or proprietary input/output interfaces. As another example, the color correction apparatus 100 can include one or more input/output devices (not shown) , for example, buttons, a keyboard, keypad, trackball, displays, and/or a monitor. As yet another example, the color correction apparatus 100 can include hardware for communication between components of the color correction apparatus 100 (for example, between the processor 110 and the memory 120) .
Turning now to Fig. 2, an exemplary embodiment of an imaging system 200 is shown as including a color correction apparatus 100, an image sensor 130, and a color filter 140. The color correction apparatus 100 can be provided in the manner discussed in more detail above with reference to Fig. 1. The memory 120 of the color correction apparatus 100 is shown as storing color correction parameters 125, noise generation parameters 126, pre-correction and post-correction image data 127, and intermediate values 128 produced during various color correction parameter calibration functions and operations described herein. The image sensor 130 can perform the function of sensing light and converting the sensed light into electrical signals that can be rendered as an image. Various image sensors 130 are suitable for use with the disclosed systems and methods, including, but not limited to, image sensors 130 used in commercially-available cameras and camcorders. Suitable image sensors 130 can include analog image sensors (for example, video camera tubes) and/or digital image sensors (for example, charge-coupled device (CCD) , complementary metal-oxide-semiconductor (CMOS) , N-type metal-oxide-semiconductor (NMOS) image sensors, and hybrids/variants thereof) . Digital image sensors can include, for example, a two-dimensional array of photosensor elements that can each capture one pixel of image information. The resolution of the image sensor 130 can be determined by the number of photosensor elements. The image sensor 130 can support any commercially-available image resolution and preferably has a resolution of at least 0.1 Megapixels, 0.5 Megapixels, 1 Megapixel, 2 Megapixels, 5 Megapixels, 10 Megapixels, or an even greater number of pixels. The image sensor 130 can have specialty functions for use in various applications such as thermography, creation of multi-spectral images, infrared detection, gamma detection, x-ray detection, and the like. 
The image sensor 130 can include, for example, an electro-optical sensor, a thermal/infrared sensor, a color or monochrome sensor, a multi-spectral imaging sensor, a spectrophotometer, a spectrometer, a thermometer, and/or an illuminometer.
The color filter 140 is shown in Fig. 2 as separating and/or filtering incoming light based on color and directing the light onto the appropriate photosensor elements of the image sensor 130. For example, the color filter 140 can include a color filter array that passes red, green, or blue light to selected pixel sensors to form a color mosaic (not shown) . The layout of different colors on the color mosaic can be arranged in any convenient manner, including a Bayer pattern. Once a color mosaic is formed, a color value of each pixel can be interpolated using any of various demosaicing methods that interpolate missing color values at each pixel using color values of adjacent pixels. As an alternative to filtering and demosaicing, the image sensor 130 can include an array of layered pixel photosensor elements that separates light of different wavelengths based on the properties of the photosensor elements. In either case, an  image can be acquired by the image sensor 130 as intensity values in each of a plurality of color channels at each pixel.
The imaging system 200 is further shown in Fig. 2 as acquiring an image of a color reference 150 to perform calibration of color correction parameters 125. The color reference 150 preferably has a reference color value Cref that is known or that can be otherwise determined in advance, making the color reference 150 suitable for use as a color standard. Stated somewhat differently, the reference color value Cref is a property of the color reference 150 that is independent of how the color reference 150 is imaged. The reference color value Cref can be designated based on an average human perception of the color reference 150. The reference color value Cref can thus serve as an objective measure of how a color imaged by the image sensor 130 can be corrected so as to match the average human perception.
The color reference 150 is preferably, but not necessarily, homogeneous in color. Flatness of the color reference 150 is preferable, though not essential, to avoid variations attributable to differential light scattering. The optical properties of the color reference 150 need not be ideal for purposes of performing color correction, so long as the optical properties do not interfere with imaging the color reference 150. The color reference 150 can be made of one or more of a variety of materials such as plastic, paper, metal, wood, foam, composites thereof, and other materials. Furthermore, the color, reflectance, and/or other optical properties of the color reference 150 can advantageously be calibrated as desired using an appropriate paint or other coating. In some embodiments, the color reference 150 can advantageously include multiple color patches 151, each of which has a different reference color value Cref. This embodiment enables multiple color references 150 to be imaged at the same time, reducing the number of image capture operations for color correction. This embodiment is particularly suitable when a large number of color references 150 are to be imaged in order to calibrate color correction parameters 125 with greater accuracy. Commercially available color references 150 include, for example, MacBeth ColorChecker, MacBeth ColorChecker SG, and the like.
Although images acquired by the image sensor 130 are described above in an RGB (red, green, and blue) color space for illustrative purposes only, the images can be acquired in other color spaces, as well. The color space in which images are acquired depends generally on the properties of the image sensor 130 and any color filters 140. Furthermore, the color space in which an image is acquired need not be three-dimensional but can have any number of dimensions as desired to capture the spectral composition of the image. The number of dimensions can depend on the number of color channels of the image sensor 130. The color  space of an acquired image can be one-dimensional, two-dimensional, three-dimensional, four-dimensional, five-dimensional, or more.
Once acquired by the image sensor 130, an image can be converted between color spaces as desired for processing and/or calibration. For example, a conversion from an sRGB color space with coordinates (RsRGB, GsRGB, BsRGB) to a CIE 1931 XYZ color space with coordinates (X, Y, Z) entails a linear conversion, which can be represented by the following three-dimensional matrix:

[X]   [0.4124  0.3576  0.1805] [RsRGB]
[Y] = [0.2126  0.7152  0.0722] [GsRGB]     Equation (1)
[Z]   [0.0193  0.1192  0.9505] [BsRGB]
In some embodiments, it can be desirable to express the color of an image in a non-linear color space. One suitable non-linear color space for imaging applications is a CIE L*a*b* color space (for example, a CIE 1976 L*a*b* color space) as defined by the International Commission on Illumination. The color of an image in a CIE L*a*b* color space can be computed from the colors of the image in a CIE 1931 XYZ color space using the following non-linear transformation:

L* = 116 f(Y/Yn) - 16     Equation (2)

a* = 500 [f(X/Xn) - f(Y/Yn)]     Equation (3)

b* = 200 [f(Y/Yn) - f(Z/Zn)]     Equation (4)

where

f(t) = t^(1/3) if t > (6/29)^3; otherwise f(t) = (1/3)(29/6)^2 t + 4/29     Equation (5)
In the above equations (2) - (5), Xn, Yn, and Zn are the CIE XYZ values of the color at a reference white point. The CIE L*a*b* color space is designed to mimic the color response of human perception. The non-linearity of the transformation from the CIE XYZ color space to the CIE L*a*b* color space reflects the non-linearity of human perception. Representing a color in the CIE L*a*b* color space has the advantage that the CIE L*a*b* color space is perceptually uniform to human beings, meaning that a change of a given amount in a color value will produce a proportional change of visual significance. Accordingly, calibration of color correction parameters 125 can advantageously be performed after converting input and reference colors into a CIE L*a*b* color space representation.
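The standard CIE XYZ-to-L*a*b* formulas can be transcribed directly as a short sketch; the default white point values below (a D65-style white) are an illustrative assumption.

```python
def lab_f(t):
    """Non-linear response function of the CIE L*a*b* definition."""
    delta = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > delta ** 3 else t / (3.0 * delta ** 2) + 4.0 / 29.0

def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ coordinates to CIE L*a*b* coordinates.

    The default white point (Xn, Yn, Zn) is an illustrative D65 choice.
    """
    xn, yn, zn = white
    fx, fy, fz = lab_f(x / xn), lab_f(y / yn), lab_f(z / zn)
    lightness = 116.0 * fy - 16.0
    a_star = 500.0 * (fx - fy)
    b_star = 200.0 * (fy - fz)
    return lightness, a_star, b_star
```

At the white point itself the conversion yields L* = 100 with a* = b* = 0, a quick sanity check of the non-linear transformation.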
In some embodiments, it can be desirable to express the color of an image in a YUV color space (for example, a Y'UV color space) . The YUV color space is represented by one luminance component Y representing image brightness and two chrominance components U and V representing image color. A conversion from an RGB color space with coordinates (R, G, B) to a YUV color space with coordinates (Y, U, V) entails a linear conversion, which can be represented by the following three-dimensional matrix:

[Y]   [ 0.299   0.587   0.114] [R]
[U] = [-0.147  -0.289   0.436] [G]     Equation (6)
[V]   [ 0.615  -0.515  -0.100] [B]
Although conversions between specific color spaces are shown and described for illustrative purposes only, an image can be converted from any first predetermined color space to any other second predetermined color space as desired.
Turning now to Fig. 3, an alternative embodiment of the imaging system 200 is shown. The imaging system 200 of Fig. 3 includes a color correction apparatus 100, which is shown as obtaining several inputs for calibration of color correction parameters 125. Without limitation, an image sensor 130 is shown as acquiring an image of a color reference 150. The image is then passed to the color correction apparatus 100, which can obtain an input color value Cin of the image. The input color value Cin represents a pre-color-corrected value that reflects the image acquisition properties of the image sensor 130, the filtering properties of the color filter 140, as well as any other optical properties of the imaging system 200. In one embodiment, the input color value Cin can be transformed from the color space of the color reference image to a non-linear color space—for example, a CIE L*a*b* color space. The transformation can be performed, for example, by first using a linear transformation from an RGB color space to an intermediate CIE XYZ color space using Equation (1) shown above. The color values in the intermediate CIE XYZ color space can be non-linearly transformed to a CIE L*a*b* color space as shown above in Equations (2) - (5). Such transformations can be performed on a processor 110 (shown in Fig. 1) of the color correction apparatus 100.
Similarly, the color correction apparatus 100 can obtain a reference color value Cref that corresponds to the input color value Cin for the color reference 150. If desired, the reference color value Cref can be transformed into a non-linear color space—for example, the CIE L*a*b* color space. In some embodiments, the reference color value Cref advantageously can be directly inputted into the color correction apparatus 100 in the CIE L*a*b* color space, thereby making the transformation step unnecessary.
In Fig. 3, the color correction apparatus 100 is further shown as obtaining noise evaluation color values Cnoise from a noise evaluation image 160. The noise evaluation image 160 can be any image containing noise. As color correction tends to amplify noise, the noise evaluation image 160 can be used to calibrate color correction parameters 125 in order to limit noise amplification. Stated somewhat differently, the noise evaluation image 160 can be used to evaluate how noise is amplified with a given set of color correction parameters 125 (shown in Fig. 2), and thereby select a set of color correction parameters 125 with reduced noise amplification. In one embodiment, the noise evaluation color values Cnoise can be transformed into the YUV color space, as further described below with reference to Fig. 7. The transformation can be performed, for example, using the linear transformation from the RGB color space to the YUV color space shown above in Equation (6). This transformation can be performed using the processor 110 of the color correction apparatus 100.
In one embodiment, the noise evaluation image 160 can be an image acquired by the image sensor 130 with or without filtering through the color filter 140. In this embodiment, the noise evaluation image 160 is preferably an image of the color reference 150. Imaging the color reference 150 advantageously allows the simultaneous determination of the input color values Cin and the noise evaluation color values Cnoise.
Alternatively and/or additionally, the noise evaluation image 160 can be a virtual noise evaluation image 160A. The virtual noise evaluation image 160A can be generated by the color correction apparatus 100 using a pre-determined set of noise generation parameters 126 (shown in Fig. 2) . The noise generation parameters 126 can, for example, reflect the distribution of the noise that is generated virtually (for example, Poisson or Gaussian noise) . The specific noise generation parameters 126 can reflect the types of noise that the imaging system 200 can be expected to encounter in usage. A virtual noise evaluation image 160A can be used because the evaluation of noise amplification does not require information about the color of an underlying object that is imaged. Instead, an arbitrary image containing noise can be evaluated for how the noise of that image would be amplified under a given set of color correction parameters 125. For example, the noise evaluation color values Cnoise of the virtual noise evaluation image 160A can be represented as follows:
Cnoise=Cnoise_free+n   Equation (7)
where Cnoise_free represents the color of the virtual noise evaluation image 160A before noise is added, and n represents the noise added.
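Equation (7) can be sketched directly by adding synthetic noise to a noise-free image; here the Gaussian distribution, the sigma value, and the clipping to an 8-bit range are illustrative noise generation parameters, not prescribed ones.

```python
import numpy as np

def make_virtual_noise_image(noise_free, sigma=2.0, seed=0):
    """Generate a virtual noise evaluation image: Cnoise = Cnoise_free + n.

    Zero-mean Gaussian noise with the given sigma is an illustrative
    choice of noise model; Poisson noise could be substituted.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, np.shape(noise_free))
    return np.clip(np.asarray(noise_free, float) + noise, 0.0, 255.0)
```

Because the evaluation of noise amplification does not depend on the underlying scene, the noise-free input can be any synthetic image, such as a set of flat color patches.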
Once the inputs for color correction parameter calibration (for example, input color values Cin, reference color values Cref, and noise evaluation color values Cnoise) are obtained by the color correction apparatus 100, these inputs can be stored for later use by the color correction apparatus 100 (for example, in a memory 120 as shown in Fig. 1) . For example, the inputs for color correction parameter calibration can be obtained as part of an initialization process for a new imaging device 200 prior to usage. The inputs for color correction parameter  calibration can be stored in the memory 120 and called upon periodically to re-calibrate the color correction parameters 125 as desired (for example, as image response characteristics of the imaging device 200 change after wear and tear) . The inputs for color correction parameter calibration can be, but do not need to be, re-obtained for each new color correction parameter calibration.
Turning now to Fig. 4, an exemplary top-level method 400 of calibrating color correction parameters 125 is shown. The method 400 advantageously can be applied to calibrating the color correction parameters 125 for a digital imaging device 200 (shown in Figs. 2 and 3). At 401, input color values Cin and reference color values Cref are obtained for each of a plurality of color references 150 (shown in Figs. 2 and 3). Preferably, the input color values Cin and reference color values Cref are obtained in or transformed into a non-linear color space—for example, a CIE L*a*b* color space—as described above with reference to Fig. 3. Additionally, a noise evaluation image 160 having a color noise for evaluating noise amplification is obtained.
At 402, a plurality of color correction parameters 125 are adjusted so as to optimize a fitness function J. In some embodiments, the fitness function J can comprise a color correction error ecolor and/or a noise amplification metric Dnoise based on the input color values Cin, the reference color values Cref , and the noise evaluation image 160. An exemplary embodiment of the adjusting is described in more detail below with respect to Fig. 5.
Turning now to Fig. 5, an exemplary method 500 of calibrating color correction parameters 125 (shown in Fig. 2) for a digital imaging device 200 (shown in Figs. 2 and 3) is shown. At 501, input color (or pre-correction) values Cin for the color references 150 are color corrected using the current values of the color correction parameters 125 to obtain post-correction input color values Ĉin. This operation can be represented as:

Ĉin = CC (Cin)     Equation (8)
In the above equation (8), CC represents a color correction operation. The specific implementation of the color correction operation CC depends on the underlying form of the color correction parameters 125. In one embodiment, the color correction parameters 125 can take the form of a matrix having dimensions n x m, where m is the dimensionality of the pre-correction color value and n is the dimensionality of the post-correction color value. In this embodiment, the color correction operation CC will take the form of a matrix multiplication that transforms an m-dimensional color value vector into an n-dimensional color value vector. Preferably, the pre-correction color value and the post-correction color value have the same dimensionality, in which case CC will take the form of a square matrix. Preferably, the pre-correction color value and the post-correction color value are each three-dimensional (for example, for color values in the RGB, CIE XYZ, CIE L*a*b*, and LUV color spaces), in which case CC will take the form of a 3x3 matrix. An advantage of using a matrix is that a matrix can describe a color correction operation CC using only n x m correction parameters 125, allowing decreased memory usage. However, linear color correction using a matrix may be unsuitable for some applications.
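For the matrix form, the color correction operation CC reduces to a single matrix multiplication per pixel. The sketch below assumes three-dimensional color values arranged as an (N, 3) array of pixels.

```python
import numpy as np

def color_correct_matrix(pixels, ccm):
    """Apply an n x m color correction matrix to an (N, m) array of colors.

    For three-dimensional color values, ccm is a 3x3 matrix and each
    output pixel is ccm @ pixel.
    """
    return np.asarray(pixels, float) @ np.asarray(ccm, float).T
```

An identity matrix leaves the colors unchanged, which makes a convenient starting point before the parameters 125 are adjusted.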
In another embodiment, the color correction parameters 125 can take the form of a look-up table (LUT) indexed in m dimensions that contains ordered m-tuples (a1, a2, …, am) each mapping to an n-dimensional vector, where m is the dimensionality of the pre-correction color value and n is the dimensionality of the post-correction color value. Preferably, the look-up table is three-dimensional, that is, indexed in three dimensions. An advantage of using a look-up table to implement the color correction parameters 125 is that a look-up table can account for a non-linear relationship between a pre-correction color value and a post-correction color value. Furthermore, since the entries in the look-up table are discrete, interpolation operations can be performed when pre-correction color values fall in between discrete entries. Such interpolation operations can include finding look-up table entries that have the closest distance (for example, Euclidian distance) to the pre-correction color value, and interpolating a corrected color value using the closest look-up table entries. For example, linear interpolations can be performed for one-dimensional look-up tables, and multi-linear interpolations can be performed for look-up tables in higher dimensions. In this embodiment, the color correction operation CC will take the form of a look-up operation in the look-up table, followed by an interpolation operation, if desired. The color correction parameters 125 can be implemented in multiple ways simultaneously; for example, a combination of a matrix and a look-up table can be used.
In one embodiment, a Shepard interpolation can be used to perform color correction where the color correction parameters 125 take the form of a look-up table (LUT). In one embodiment, a color-corrected value for a given color p can be found as follows:

CC (p) = (Σi wi ĉi) / (Σi wi), where wi = 1/‖p - ci‖²     Equation (9)

In the above equation (9), i is an index over the different input color values Cin and their corresponding reference color values Cref, ci represents the ith value of the input color values Cin, ĉi represents the ith value of the reference color values Cref, ‖p - ci‖ represents a distance (for example, a Euclidian distance) between the given color p and ci, and wi represents a weight of the ith input color value Cin.
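A generic Shepard (inverse distance weighting) correction can be sketched as follows; the squared-distance weighting and the exact-match shortcut are assumptions of this sketch, since the weight exponent is an implementation choice.

```python
import numpy as np

def shepard_correct(p, input_colors, reference_colors, power=2.0):
    """Shepard-interpolated color correction of a color p against a LUT.

    input_colors holds the measured colors c_i and reference_colors the
    corresponding target colors; weights fall off with distance from p.
    The power-2 weighting is an illustrative choice.
    """
    p = np.asarray(p, float)
    inputs = np.asarray(input_colors, float)
    refs = np.asarray(reference_colors, float)
    d = np.linalg.norm(inputs - p, axis=1)
    if np.any(d == 0.0):            # p coincides with a table entry
        return refs[int(np.argmin(d))]
    w = 1.0 / d ** power
    return (w[:, None] * refs).sum(axis=0) / w.sum()
```

A color halfway between two table entries receives equal weights and maps to the average of their reference values.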
At 502, the post-correction input color values Ĉin are compared with the reference color values Cref, and the color correction error ecolor is computed based on the comparison. For example, where the post-correction input color values Ĉin and reference color values Cref are represented in a CIE L*a*b* color space, the color correction error ecolor can be expressed as:

ecolor = sqrt( Σj (Ĉin_j − Cref_j)² )   Equation (10)

In the above equation (10), Cref_j and Ĉin_j represent the jth components of the reference color values Cref and the post-correction input color values Ĉin, respectively. Stated somewhat differently, the color correction error ecolor is the Euclidean distance between the post-correction input color values Ĉin and the reference color values Cref in the color space in which the color values are represented. Where the color correction error ecolor is to be determined over multiple color references 150 (or, equivalently, over multiple color patches 151 of a given color reference 150), the color correction error ecolor can be taken as a weighted and/or unweighted average over the color patches 151.
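A minimal sketch of this error computation, assuming the colors are already expressed in the working color space (for example, CIE L*a*b*), might look like:

```python
import numpy as np

def color_correction_error(corrected, reference, weights=None):
    """Color correction error e_color: the Euclidean distance between
    corrected and reference colors, averaged over color patches.

    corrected, reference: (N, 3) arrays of per-patch color values.
    weights: optional per-patch weights; None gives an unweighted average.
    """
    corrected = np.asarray(corrected, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Per-patch Euclidean distance, as in Equation (10).
    per_patch = np.linalg.norm(corrected - reference, axis=-1)
    return float(np.average(per_patch, weights=weights))
```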
At 503, noise evaluation color values Cnoise are color corrected using the current values of the color correction parameters 125 to obtain post-correction noise evaluation color values Ĉnoise. This operation can be represented as:

Ĉnoise = CC (Cnoise)   Equation (11)
In the above equation (11) , CC represents a color correction operation as described above with reference to 501. The specific color correction operation CC depends on the implementation of the color correction parameters 125 and, as described above with reference to 501, can take the form of a matrix or a look-up table with each form having respective advantages.
At 504, the post-correction noise evaluation color values Ĉnoise are compared with the pre-correction noise evaluation color values Cnoise, and the noise amplification metric Dnoise is found based on the comparison. The noise amplification metric Dnoise can be any measure of the distance between the post-correction noise evaluation color values Ĉnoise and the pre-correction noise evaluation color values Cnoise. That is, the greater the value of the noise amplification metric Dnoise, the more noise is amplified after applying a color correction.
Where the noise amplification metric Dnoise is to be determined over multiple color references 150 (or, equivalently, over multiple color patches 151 of a given color reference 150), the noise amplification metric Dnoise can be taken as a weighted and/or unweighted average over the color patches 151. In one embodiment, the noise amplification metric Dnoise can be taken as a weighted average over the color patches 151:

Dnoise = Σi=1..N ωi Dnoise_i   Equation (12)

In the above equation (12), i is an index over the color patches 151, N is the total number of color patches 151, ωi is a non-negative weight for color patch i, and Dnoise_i is the noise amplification measured for color patch i. The weights ωi can be set according to the sensitivity of average human perception to the color of each color patch 151. For example, colors to which human perception is more sensitive can be given greater weights ωi.
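A sketch of such a weighted per-patch metric, assuming a simple Euclidean per-patch distance (the disclosure leaves the per-patch measure open), could be:

```python
import numpy as np

def noise_amplification(pre, post, patch_weights):
    """Noise amplification metric D_noise as a weighted average over N color
    patches of a per-patch distance between pre- and post-correction noise
    evaluation values. A Euclidean per-patch distance is assumed here.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    w = np.asarray(patch_weights, dtype=float)
    d = np.linalg.norm(post - pre, axis=-1)   # per-patch distance D_noise_i
    return float(np.sum(w * d) / np.sum(w))   # normalized weighted average
```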
At 505, a fitness function J can be determined. In some embodiments, the fitness function J can be found as a weighted and/or unweighted sum of the color correction error ecolor and the noise amplification metric Dnoise. For example, an unweighted fitness function J can be represented as the following sum:
J=ecolor+Dnoise   Equation (13)
In some embodiments, a weighted fitness function J can be used to advantageously weight the color correction error ecolor more than the noise amplification metric Dnoise, or vice versa. The amount of weighting for the fitness function J can be determined, for example, by repeating the color correction parameter calibration for different weights and taking the weight that gives the best (for example, the lowest) value of the fitness function J. Alternatively and/or additionally, the amount of weighting for the fitness function J can be determined based on prior color correction parameter calibrations (for example, using different imaging devices).
Turning now to Fig. 6, an exemplary method 600 for calibrating color correction parameters 125 (shown in Fig. 2) is shown as including two steps. At 601, a first optimization process is applied to obtain initial values CC0 for the color correction parameters 125. The first optimization preferably samples the space of possible color correction parameter values broadly so as to avoid becoming trapped in local optima. Any of various optimization processes can be used in the first optimization at 601, including a genetic process, a simulated annealing method, and other non-greedy methods that avoid local optima. At 602, a second optimization process is applied using the initial values CC0 as a starting point to obtain further optimized values CCopt for the color correction parameters 125. The goal of the second optimization, at 602, is to find the local optimum. Accordingly, direct optimization methods are suitable for the second optimization at 602. Exemplary direct optimization methods include, but are not limited to, gradient descent methods.
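The two-step idea can be sketched as follows; this is a toy illustration in which broad random sampling stands in for the first (non-greedy) step and a simple coordinate descent stands in for the gradient-descent second step, with all ranges and step sizes being illustrative assumptions:

```python
import numpy as np

def two_step_optimize(fitness, dim, n_samples=500, iters=200, step=0.1, seed=0):
    """Two-step optimization: a broad random search for initial values CC0,
    followed by a local coordinate-descent refinement to CCopt."""
    rng = np.random.default_rng(seed)
    # Step 1: sample the parameter space broadly and keep the best candidate.
    candidates = rng.uniform(-2.0, 2.0, size=(n_samples, dim))
    cc0 = min(candidates, key=fitness)
    # Step 2: refine locally by probing each coordinate in both directions.
    cc = cc0.copy()
    for _ in range(iters):
        improved = False
        for k in range(dim):
            for delta in (step, -step):
                trial = cc.copy()
                trial[k] += delta
                if fitness(trial) < fitness(cc):
                    cc, improved = trial, True
        if not improved:
            step *= 0.5   # shrink the step when no move helps
    return cc
```

On a smooth fitness surface, the broad first step supplies a starting point near the basin of the global optimum, and the local second step converges within that basin.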
Turning now to Fig. 7, an exemplary genetic process 700 is shown for calibrating color correction parameters 125 (shown in Fig. 2). A genetic process is an optimization method loosely based on evolutionary principles in biology, in which possible solutions to a problem are generated as members of a “population,” and the members are selected based on a fitness function over a number of selection rounds. The genetic process 700 can be used to find an optimal solution to the problem of selecting a set of color correction parameters 125 to optimize (for example, minimize) the fitness function J that includes a color correction error ecolor and a noise amplification metric Dnoise. At 701, a predetermined number N of initial sets of candidate color correction parameters 125A are selected as the initial “population” of solutions. The predetermined number N can comprise any suitable number of initial sets and, for example, can be at least 10, 50, 100, 500, 1000, or more. The initial population of the N sets of candidate color correction parameters 125A can be selected, for example, by sampling the space of possible parameters at specified intervals. Alternatively and/or additionally, the sampling can be done at random.
At 702, the fitness function J is evaluated for the members of the “population,” that is, for each of the N sets of candidate color correction parameters 125A. From among the N initial sets of candidate color correction parameters 125A, the set that has the best value of the fitness function J (for example, the minimal value, if the fitness function J is to be minimized) is chosen. At 703, if the best value passes a predefined threshold, the genetic process stops at 704. Alternatively and/or additionally, at 705, if certain conditions are met (for example, the genetic process has been run for more than a certain number of rounds, or the genetic process has not produced more than a specific amount of improvement in the fitness function J from the prior round), the genetic process stops at 704. After the genetic process stops, at 704, the set of candidate color correction parameters 125A giving the best value of the fitness function J is declared to be the “winner,” and these candidate color correction parameters 125A can be outputted and/or used as a starting point for further optimization.
If the “best” candidate color correction parameters 125A do not pass the predefined threshold and/or the conditions for stopping the genetic process, at 703, are not met, the genetic process continues, at 706, by discarding and replacing candidate color correction parameters 125A having the worst values of the fitness function J. In one embodiment, a given percentile of the candidate color correction parameters 125A having the worst fitness function values J can be discarded and replaced with new candidate color correction parameters 125A. The new candidate color correction parameters 125A can, for example, be generated in the same way as the initial candidate color correction parameters 125A. In some embodiments, at least 10%, 20%, 30%, 40%, 50%, or more of the worst-scoring candidate sets can be discarded.
At 707, “mutation” operations can be applied to the candidate color correction parameters 125A, simulating biological mutations of chromosomes between successive generations of individuals. Here, each set of candidate color correction parameters 125A can be conceptually treated as a “chromosome” that is subject to mutation. Mutations to the candidate color correction parameters 125A include, for example, “point mutations” that change individual parameters at random and/or “crossover” mutations between two sets of candidate color correction parameters 125A. For example, where the candidate color correction parameters 125A take the form of a matrix, a crossover can be performed by swapping corresponding rows and/or columns, or portions thereof, between two candidate matrices. Where the candidate color correction parameters 125A take the form of a look-up table, a crossover can be performed by swapping one or more corresponding entries in the look-up table. Once the mutations are applied to the candidate color correction parameters 125A, the method 700 can return to 702 for evaluating the fitness function J for the new round of the genetic process.
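The selection, replacement, and mutation steps of the genetic process 700 can be sketched as a toy minimizer; the population size, replacement fraction, mutation scale, and sampling range below are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

def genetic_optimize(fitness, dim, pop_size=100, rounds=60, keep=0.5,
                     mutation_scale=0.1, seed=0):
    """Minimal genetic process: keep the fittest members, replace the rest
    with freshly sampled candidates, and apply point and crossover mutations."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-2.0, 2.0, size=(pop_size, dim))
    for _ in range(rounds):
        # Rank the population so the fittest (lowest-J) members come first.
        order = np.argsort([fitness(m) for m in pop])
        pop = pop[order]
        n_keep = int(pop_size * keep)
        # Discard the worst members and replace them with new random ones.
        pop[n_keep:] = rng.uniform(-2.0, 2.0, size=(pop_size - n_keep, dim))
        # Point mutations: perturb survivors slightly (elite member 0 is kept as-is).
        pop[1:n_keep] += rng.normal(0.0, mutation_scale, size=(n_keep - 1, dim))
        # Crossover: swap one corresponding entry between two random members.
        a, b = rng.integers(1, pop_size, size=2)
        k = rng.integers(dim)
        pop[a, k], pop[b, k] = pop[b, k], pop[a, k]
    order = np.argsort([fitness(m) for m in pop])
    return pop[order[0]]
```

Keeping the single best member unmodified (elitism) guarantees the best fitness value never worsens from round to round, which mirrors the stopping test at 703.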
The noise amplification metric Dnoise can be determined using any suitable approach, including but not limited to using peak signal-to-noise ratios (PSNR) and/or color variances.
Turning now to Fig. 8, an exemplary diagram is shown for finding the noise amplification metric Dnoise using peak signal-to-noise ratios (PSNR). Beginning with noise-free color values Cnoise_free, noise evaluation color values Cnoise can be found by adding noise n, as described in Equation (7). A color correction CC can be applied to the noise-free color values Cnoise_free and the noise evaluation color values Cnoise, respectively, to find corresponding post-correction noise-free color values Ĉnoise_free and post-correction noise evaluation color values Ĉnoise. For example, the color correction is shown in Equation (11) and in Equation (14) below:

Ĉnoise_free = CC (Cnoise_free)   Equation (14)

Based on the parameters Cnoise_free, Cnoise, Ĉnoise_free, and Ĉnoise, a pair of PSNR values, a pre-correction PSNR and a post-correction PSNR′, can be found through determining a mean squared error (MSE), as shown in Equations (15) through (18):

MSE = (1/S) Σj Σp∈j (Cnoise(p) − Cnoise_free(p))²   Equation (15)

PSNR = 10 log10 (MAXI² / MSE)   Equation (16)

In the above equations (15)-(16), S, i.e. Σj Sj, is the number of pixels, MAXI is the maximum value of Cnoise and Cnoise_free, and j is an index over virtual color patches.

MSE′ = (1/S) Σj Σp∈j (Ĉnoise(p) − Ĉnoise_free(p))²   Equation (17)

PSNR′ = 10 log10 (MAXI² / MSE′)   Equation (18)

In the above equations (17)-(18), S, i.e. Σj Sj, is the number of pixels, MAXI is the maximum value of Ĉnoise and Ĉnoise_free, and j is an index over virtual color patches.

In some embodiments, determining the noise amplification metric Dnoise can include finding a PSNR difference that is a difference between the pre-correction PSNR for the pre-correction noise evaluation image and the post-correction PSNR′ for the corrected noise evaluation image.
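Equations (15) through (18) can be sketched in Python as follows; MAXI is assumed to be 255 here (8-bit images), and the per-patch summation is folded into a single mean over all pixels:

```python
import numpy as np

def psnr(noisy, clean, max_i=255.0):
    """PSNR between a noise evaluation image and its noise-free counterpart,
    computed via the mean squared error as in Equations (15)-(18)."""
    noisy = np.asarray(noisy, dtype=float)
    clean = np.asarray(clean, dtype=float)
    mse = np.mean((noisy - clean) ** 2)          # Equations (15)/(17)
    if mse == 0.0:
        return float('inf')                      # identical images
    return 10.0 * np.log10(max_i ** 2 / mse)     # Equations (16)/(18)
```

The same function serves for both the pre-correction pair (Cnoise, Cnoise_free) and the post-correction pair, since only the inputs differ.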
In some embodiments, the noise amplification metric Dnoise can be determined using downsampling. For example, the downsampling can be a spatial downsampling, as illustrated in Fig. 9. In particular, Fig. 9 illustrates an embodiment of downsampling in which an image (for example, an image having pre-correction noise evaluation color values Cnoise or pre-correction noise-free color values Cnoise_free) is sampled at every other pixel in a first downsampling. In some embodiments, the downsampled image can be downsampled again, and the downsampling process can be repeated as often as desired, up to M iterations. Although not shown in Fig. 9, a similar downsampling process can be performed for images that have been color-corrected (for example, images having post-correction noise evaluation color values Ĉnoise or post-correction noise-free color values Ĉnoise_free). Since downsampling can be an iterative process, color values and PSNR values at particular iterations are denoted with a subscript from 0 to M corresponding to the iteration, as shown in Figs. 8-9.
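The every-other-pixel spatial downsampling of Fig. 9 can be sketched as:

```python
import numpy as np

def downsample(image, iterations=1):
    """Spatially downsample an image by keeping every other pixel along each
    spatial axis, repeated for the requested number of iterations (Fig. 9)."""
    image = np.asarray(image)
    for _ in range(iterations):
        image = image[::2, ::2]   # keep every other row and column
    return image
```

Each iteration halves both spatial dimensions, so M iterations shrink an H×W image to roughly (H/2^M)×(W/2^M).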
After each round of downsampling, the downsampled images can be used to determine one or more downsampled PSNRs as well as a downsampled PSNR difference. Returning to Fig. 8, pre-correction noise-free color values Cnoise_free_1 and pre-correction noise evaluation color values Cnoise_1 that have undergone one round of downsampling can be used to find a corresponding downsampled PSNR1. Likewise, post-correction noise-free color values Ĉnoise_free_1 and post-correction noise evaluation color values Ĉnoise_1 that have undergone one round of downsampling can be used to find a corresponding downsampled post-correction PSNR′1. After M downsampling rounds, a set of PSNR values PSNRi and PSNR′i will be obtained, where i ranges from 0 to M. The set of PSNR values can be used to find corresponding PSNR differences for values of i ranging from 0 to M, where i=0 corresponds to a PSNR difference that has not been downsampled, and i=m corresponds to a PSNR difference that has been downsampled m times.
In some embodiments, the noise amplification metric Dnoise can be obtained by taking a weighted average of a PSNR difference and at least one downsampled PSNR difference. In some embodiments, the noise amplification metric Dnoise can be obtained by taking a weighted average of the PSNR difference and a plurality of successively downsampled PSNR differences. The weight applied to each PSNR difference and/or downsampled PSNR difference can be represented as wi, where i ranges from 0 to M. An exemplary method of finding the noise amplification metric Dnoise is shown as follows in Equation (19), which is reproduced in Fig. 8:

Dnoise = Σi=0..M wi (PSNRi − PSNR′i)   Equation (19)

where M is the total number of downsampling iterations, wi is the weight given to each downsampling iteration i, and PSNRi and PSNR′i are the pre-correction and post-correction PSNR values at iteration i.
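A compact sketch of Equation (19), assuming every-other-pixel downsampling and a pixel-wise color correction operation passed in as a callable, might be:

```python
import numpy as np

def _psnr(noisy, clean, max_i=255.0):
    # Mean-squared-error based PSNR, as in Equations (15)-(18).
    mse = np.mean((np.asarray(noisy, float) - np.asarray(clean, float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse) if mse > 0 else float('inf')

def noise_metric_psnr(noise_free, noisy, correct, weights):
    """D_noise of Equation (19): a weighted sum over M+1 downsampling levels
    of the difference between the pre- and post-correction PSNRs.

    correct: callable applying the color correction to an image.
    weights: M+1 weights w_i, one per downsampling level i = 0..M.
    """
    nf, nz = np.asarray(noise_free, float), np.asarray(noisy, float)
    d_noise = 0.0
    for w in weights:                           # levels i = 0 .. M
        pre = _psnr(nz, nf)                     # PSNR_i
        post = _psnr(correct(nz), correct(nf))  # PSNR'_i
        d_noise += w * (pre - post)
        nf, nz = nf[::2, ::2], nz[::2, ::2]     # every-other-pixel downsampling
    return d_noise
```

For a correction that simply doubles pixel values, the PSNR drops by 10·log10(4) ≈ 6.02 dB at every level, so with two unit weights D_noise comes out near 12.04.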
In some embodiments, at least one of the weights wi is non-zero. Stated somewhat differently, PSNR differences at one or more iterations i can be given a weight of zero to effectively ignore that PSNR difference, provided that not all of the weights are zero.
Turning now to Fig. 10, an exemplary method 1000 is shown for finding the noise amplification metric Dnoise by locating and comparing peak signal-to-noise ratios (PSNR) at successively downsampled frequencies. At 1001a, an initial value of a pre-correction PSNR, PSNR0, can be found using the pre-correction noise evaluation color values Cnoise_0 and pre-correction noise-free color values Cnoise_free_0, as described above with reference to Figs. 8 and 9. At 1002a, Cnoise_0 and Cnoise_free_0 can each be downsampled to obtain Cnoise_1 and Cnoise_free_1, respectively. At 1003a, a downsampled PSNR1 can be found from Cnoise_1 and Cnoise_free_1. Optionally, at 1004a, the process of downsampling and finding a corresponding downsampled PSNR can be repeated for M iterations, as desired.
Similarly, the iterative downsampling process can be repeated for color-corrected images. At 1001b, an initial value of a post-correction PSNR, PSNR′0, can be found using the post-correction noise evaluation color values Ĉnoise_0 and post-correction noise-free color values Ĉnoise_free_0, as described above with reference to Figs. 8 and 9. At 1002b, Ĉnoise_0 and Ĉnoise_free_0 can each be downsampled to obtain Ĉnoise_1 and Ĉnoise_free_1, respectively. At 1003b, a downsampled PSNR′1 can be found from Ĉnoise_1 and Ĉnoise_free_1. Optionally, at 1004b, the process of downsampling and finding a corresponding downsampled PSNR′ can be repeated for M iterations, as desired.

Finally, at 1005, the set of PSNR values and color-corrected PSNR values found at iterations 0 to M can be used to find the noise amplification metric Dnoise, for example, as shown above in Equation (19).
In another embodiment, the noise amplification metric Dnoise can be obtained based on a variance of the Y, U, and V components of the pre-correction noise evaluation color values Cnoise and the post-correction noise evaluation color values Ĉnoise. In an exemplary embodiment, the noise amplification metric Dnoise can be obtained using Equation (20):

Dnoise = wY (varY(Ĉnoise) − varY(Cnoise)) + wU (varU(Ĉnoise) − varU(Cnoise)) + wV (varV(Ĉnoise) − varV(Cnoise))   Equation (20)

wherein varY(Cnoise) represents the variance of the Y components of the pre-correction noise evaluation color values Cnoise, varY(Ĉnoise) represents the variance of the Y components of the post-correction noise evaluation color values Ĉnoise, and likewise for the U and V components, and wY, wU, and wV are the weights given to the respective components, each being ≥ 0.
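A minimal sketch in the spirit of Equation (20), assuming the color values are already in a YUV representation and taking the metric as a weighted sum of per-channel variance increases, could be:

```python
import numpy as np

def noise_metric_variance(pre_yuv, post_yuv, w_y=1.0, w_u=1.0, w_v=1.0):
    """Variance-based D_noise: a weighted combination of the change in
    variance of the Y, U, and V components between the pre- and
    post-correction noise evaluation values. The difference form and the
    default unit weights are assumptions for illustration."""
    pre = np.asarray(pre_yuv, dtype=float).reshape(-1, 3)
    post = np.asarray(post_yuv, dtype=float).reshape(-1, 3)
    weights = np.array([w_y, w_u, w_v])
    # Per-channel variance before and after correction.
    var_pre = pre.var(axis=0)
    var_post = post.var(axis=0)
    return float(np.sum(weights * (var_post - var_pre)))
```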
Turning now to Fig. 11, an exemplary embodiment of the imaging system 200 is shown wherein the imaging system 200 is installed aboard an unmanned aerial vehicle (UAV) 1100. A UAV 1100, colloquially referred to as a “drone,” is an aircraft without an onboard human pilot whose flight is controlled autonomously and/or by a remote pilot. The imaging system 200 is suitable for installation aboard any of various types of UAVs 1100, including, but not limited to, rotorcraft, fixed-wing aircraft, and hybrids thereof. Suitable rotorcraft include, for example, single rotor, dual rotor, trirotor, quadrotor (quadcopter), hexarotor, and octorotor rotorcraft. The imaging system 200 can be installed on various portions of the UAV 1100. For example, the imaging system 200 can be installed within a fuselage 1110 of the UAV 1100. Alternatively, the imaging system 200 can be mounted onto an exterior surface 1120 (for example, on the underside 1125) of the UAV 1100. Furthermore, the various components of the imaging system 200 can be installed on the same portion, and/or different portions, of the UAV 1100. For example, an image sensor 130 can be mounted on an exterior surface 1120 to facilitate image acquisition, while a color correction apparatus 100 advantageously can be installed within the fuselage 1110 for protection against wear and tear. Likewise, the various components of the color correction apparatus 100 can be installed on the same portion, and/or different portions, of the UAV 1100. Although shown and described with respect to a UAV 1100 for purposes of illustration only, the imaging system 200 can include, or be mounted on, any type of mobile platform. Exemplary suitable mobile platforms include, but are not limited to, bicycles, automobiles, trucks, ships, boats, trains, helicopters, aircraft, various hybrids thereof, and the like.
Example 1
The following color correction parameter calibration experiment was performed to determine the efficacy of a method of calibration with noise regulation in comparison to the method without noise regulation. First, an input image was used to calibrate color correction parameters by using a fitness function that includes the noise amplification metric in the manner described above. Fig. 12 shows a chrominance diagram of the resulting color errors in a CIE L*a*b* color space (showing a cross section in the a* and b* dimensions), showing a mean color correction error of 16.8 with a maximum color correction error of 29.7. Fig. 13 shows a plot of the resulting noise levels from the same experiment, showing that the average Y (luminance) noise is 0.83%, while the average noise levels in the R, G, and B components are 1.38%, 1.31%, and 1.63%, respectively.
In contrast, the same input image was used to calibrate color correction parameters by using a fitness function that does not include the noise amplification metric. Fig. 14 shows a chrominance diagram of the resulting color errors in a CIE L*a*b* color space, showing a mean color correction error of 17.7 with a maximum color correction error of 35, both of which are significantly greater than the corresponding errors obtained with noise regulation. Fig. 15 shows a plot of the corresponding noise levels of the experiment, showing that the average Y (luminance) noise is 0.86%, while the average noise levels in the R, G, and B components are 1.56%, 1.31%, and 1.66%, respectively, which are significantly greater than the noise obtained by calibration with noise regulation. Accordingly, it can be seen from this experiment that color correction parameter calibration with noise regulation is an improvement over color correction parameter calibration without noise regulation.
Example 2
The following color correction parameter calibration experiment was performed to determine the efficacy of a method of calibration with conversion to a CIE L*a*b* color space in comparison to the method that performs the calibration in a CIE XYZ color space. First, an input image was used to calibrate color correction parameters after the input and reference colors of the input image were converted to a CIE L*a*b* color space. Optimization yielded the following matrix of color correction parameters:
Figure PCTCN2015079094-appb-000066
having an optimized ec of 2.412169304.
Next, the same input image was used to calibrate color correction parameters after the input and reference colors of the input image were converted to a CIE XYZ color space. Optimization yielded the following matrix of color correction parameters:
Figure PCTCN2015079094-appb-000067
having an optimized ec of 3.0107447. This comparison shows that using a non-linear color space (here, a CIE L*a*b* color space) yields improved results over using a CIE XYZ color space.
Example 3
The following example shows the process of optimizing a set of color correction parameters using the two-step method of Fig. 6. In the first step of the two-step method, a genetic process is used to find a set of initial parameters so as to avoid becoming trapped in local optima. The fitness value of the parameters for the genetic process over six hundred generations is shown in the upper panel of Fig. 16, showing that the fitness value reaches a best value of 335.134 after 600 generations. In the second step of the two-step method, a direct optimization process is used starting from the initial parameters produced at the end of step one. In the second step, after another 600 iterations, the direct optimization method further reduces the average distance between the corrected input colors and the corresponding reference colors, as shown in the lower panel of Fig. 16. This example shows that it is advantageous to use a two-step optimization method.
The disclosed embodiments are susceptible to various modifications and alternative forms, and specific examples thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the disclosed embodiments are not to be limited to the particular forms or methods disclosed, but to the contrary, the disclosed embodiments are to cover all modifications, equivalents, and alternatives.

Claims (112)

  1. A method for calibrating a digital imaging device for color correction, the method comprising:
    obtaining an input color value and a reference color value for each of a plurality of color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space; and
    determining a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  2. The method of claim 1, wherein the non-linear color space is a CIE L*a*b* color space.
  3. The method of claim 1 or claim 2, wherein said determining the plurality of color correction parameters comprises adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  4. The method of claim 3, wherein the fitness function comprises a color correction error and a noise amplification metric.
  5. The method of claim 4, wherein said adjusting the plurality of color correction parameters comprises determining the color correction error by color correcting the input color values and comparing the corrected input color values with the reference color values.
  6. The method of claim 4 or claim 5, wherein said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  7. The method of claim 6, wherein said determining the noise amplification metric comprises determining the noise amplification metric using a peak signal-to-noise ratio (PSNR) .
  8. The method of claim 7, wherein said determining the noise amplification metric using a PSNR comprises finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  9. The method of claim 7 or claim 8, wherein said determining the noise amplification metric using a PSNR comprises determining a downsampled PSNR difference.
  10. The method of claim 9, wherein said determining the downsampled PSNR difference comprises:
    downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image;
    downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and
    finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  11. The method of claim 9 or claim 10, wherein said determining the noise amplification metric comprises determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  12. The method of claim 11, wherein said determining the noise amplification metric using a PSNR comprises determining a plurality of successively downsampled PSNR differences and determining the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  13. The method of claim 12, wherein said determining the noise amplification metric comprises weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  14. The method of claim 6, wherein said determining the noise amplification metric comprises determining the noise amplification metric based on color variances.
  15. The method of any one of claims 6-14, wherein said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  16. The method of any one of claims 4-15, wherein said noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of noise amplification levels of each of the color patches.
  17. The method of claim 16, wherein each color patch is weighted according to an average sensitivity of human perception to the color patch.
  18. The method of any one of claims 4-17, wherein said adjusting the color correction parameters comprises optimizing the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  19. The method of any one of the above claims, wherein said adjusting the color correction parameters comprises adjusting the parameters using a genetic process.
  20. The method of claim 19, comprising further adjusting the color correction parameters using a direct search method.
  21. The method of any one of the above claims, wherein said obtaining the input color values of the color calibration images comprises imaging one or more color references comprising a plurality of color patches.
  22. The method of any one of the above claims, wherein the color correction parameters are in the form of a matrix.
  23. The method of any one of the above claims, wherein the color correction parameters are in the form of a look-up table.
  24. The method of claim 23, wherein said adjusting the color correction parameters comprises look-up operations and interpolation operations in the look-up table.
  25. The method of claim 24, wherein said interpolation operations comprise a Shepard interpolation.
  26. The method of any one of the above claims, wherein the noise evaluation image is a virtual noise evaluation image.
  27. The method of claim 26, wherein the virtual noise evaluation image comprises noise added to a virtual noise-free image.
  28. A color correction apparatus configured for calibration for color correction based upon images of a plurality of color references each having a reference color, comprising:
    a memory for storing a plurality of color correction parameters; and
    a processor for performing color correction of a digital image,
    wherein the processor is configured to:
    obtain an input color value and a reference color value for each of the color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space, and
    determine a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  29. The color correction apparatus of claim 28, wherein the non-linear color space is a CIE L*a*b* color space.
  30. The color correction apparatus of claim 28 or claim 29, wherein the processor is configured to determine the plurality of color correction parameters by adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  31. The color correction apparatus of claim 30, wherein the fitness function comprises a color correction error and a noise amplification metric.
  32. The color correction apparatus of claim 31, wherein the processor is configured to adjust the plurality of color correction parameters by color correcting the input color values and comparing the corrected input color values with the reference color values.
  33. The color correction apparatus of claim 31 or claim 32, wherein the processor is configured to adjust the plurality of color correction parameters by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  34. The color correction apparatus of claim 33, wherein the processor is configured to determine the noise amplification metric using a peak signal-to-noise ratio (PSNR) .
  35. The color correction apparatus of claim 34, wherein the processor is configured to determine the noise amplification metric using a PSNR by finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  36. The color correction apparatus of claim 34 or claim 35, wherein the processor is configured to determine the noise amplification metric using a PSNR by determining a downsampled PSNR difference.
  37. The color correction apparatus of claim 36, wherein the processor is configured to determine the downsampled PSNR difference by:
    downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image;
    downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and
    finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  38. The color correction apparatus of claim 36 or claim 37, wherein the processor is configured to determine the noise amplification metric by determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  39. The color correction apparatus of claim 38, wherein the processor is configured to determine a plurality of successively downsampled PSNR differences and determine the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  40. The color correction apparatus of claim 39, wherein the processor is configured to determine the noise amplification metric by weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
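Claims 34-40 describe a multiscale PSNR-based noise amplification metric. The following is a minimal illustrative sketch of one way such a metric could be computed; the use of a noise-free reference image for the PSNR, block-average downsampling, and the particular weights are all assumptions not fixed by the claims:

```python
import numpy as np

def psnr(image, reference, peak=255.0):
    # Peak signal-to-noise ratio of a noisy image against a noise-free reference.
    mse = np.mean((image.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def downsample(image, factor=2):
    # Block-average downsampling of a single-channel image by an integer factor.
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].astype(np.float64)
    return blocks.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def noise_amplification_metric(pre, corrected, reference, weights=(0.5, 0.3, 0.2)):
    # Weighted average of the full-resolution PSNR difference and successively
    # downsampled PSNR differences, in the spirit of claims 35-40.
    diffs = []
    for _ in weights:
        diffs.append(psnr(pre, reference) - psnr(corrected, reference))
        pre, corrected, reference = (downsample(x) for x in (pre, corrected, reference))
    return sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
```

A negative value indicates that correction lowered the PSNR relative to the uncorrected image, i.e. amplified the noise.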
  41. The color correction apparatus of claim 33, wherein the processor is configured to determine the noise amplification metric based on color variances.
  42. The color correction apparatus of any one of claims 33-41, wherein the processor is configured to determine the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
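Claims 41 and 42 measure noise via color variances in a YUV color space. A sketch of such a chroma-variance measure, assuming the BT.601 RGB-to-YUV matrix (the claims do not fix which YUV variant is used):

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (an assumption; not specified by the claims).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def chroma_noise(rgb_patch):
    # Convert a color-corrected patch to YUV and measure its noise as the
    # variance of the chroma (U, V) channels.
    yuv = rgb_patch.reshape(-1, 3).astype(np.float64) @ RGB2YUV.T
    return float(yuv[:, 1].var() + yuv[:, 2].var())
```

Measuring variance in the chroma channels isolates color noise from luminance detail, which matches the claims' focus on color noise.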
  43. The color correction apparatus of any one of claims 31-42, wherein the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of the noise amplification levels of the color patches.
  44. The color correction apparatus of claim 43, wherein each color patch is weighted according to an average sensitivity of human perception to the color patch.
  45. The color correction apparatus of any one of claims 31-44, wherein the processor is configured to optimize the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
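Claims 43-45 combine per-patch noise levels and the color correction error into a single fitness value. A minimal sketch, in which the perceptual patch weights and the trade-off weight `k` are assumed values, not ones taken from the patent:

```python
def patch_weighted_noise(noise_levels, patch_weights):
    # Claims 43-44: noise amplification metric as a weighted sum over color
    # patches, each weight reflecting average perceptual sensitivity
    # (the weight values here are assumptions).
    return sum(w * n for w, n in zip(patch_weights, noise_levels))

def fitness(color_error, noise_metric, k=0.25):
    # Claim 45: the optimizer minimizes a weighted sum of the color correction
    # error and the noise amplification metric; k is an assumed trade-off weight.
    return color_error + k * noise_metric
```

Larger `k` favors noise suppression over color accuracy; the claims leave the relative weighting open.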
  46. The color correction apparatus of any one of claims 28-45, wherein the processor is configured to adjust the color correction parameters using a genetic process.
  47. The color correction apparatus of claim 46, wherein the processor is configured to further adjust the color correction parameters using a direct search method.
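Claims 46-47 optimize the parameters with a genetic process followed by a direct search. A toy sketch of that two-stage scheme: a coarse genetic search, then a simple coordinate-probing refinement standing in for the direct search step. All hyperparameters (population size, mutation scale, step size) are assumptions for illustration:

```python
import random

def genetic_then_direct_search(fitness, dim=9, pop=20, generations=50, step=0.01):
    # Stage 1: genetic process (selection, crossover, Gaussian mutation).
    population = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness)          # lower fitness is better
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        population = parents + children
    best = min(population, key=fitness)
    # Stage 2: direct (pattern) search, probing each coordinate in both
    # directions and shrinking the step when no probe improves.
    for _ in range(100):
        improved = False
        for i in range(dim):
            for delta in (step, -step):
                trial = best[:]
                trial[i] += delta
                if fitness(trial) < fitness(best):
                    best, improved = trial, True
        if not improved:
            step /= 2
    return best
```

The genetic stage explores globally; the direct search polishes the best candidate locally, mirroring the two-claim structure.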
  48. The color correction apparatus of any one of claims 28-47, wherein the processor is configured to obtain the input color values of the color calibration images by imaging one or more color references comprising a plurality of color patches.
  49. The color correction apparatus of any one of claims 28-48, wherein the color correction parameters are in the form of a matrix.
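Claim 49's matrix form of the color correction parameters is typically a 3x3 color correction matrix (CCM) applied to each RGB triplet. A sketch; clipping to an 8-bit display range is an assumption:

```python
import numpy as np

def apply_ccm(rgb, ccm):
    # Apply a 3x3 color correction matrix to every RGB triplet in an image.
    flat = rgb.reshape(-1, 3).astype(np.float64)
    return np.clip(flat @ np.asarray(ccm).T, 0.0, 255.0).reshape(rgb.shape)
```

CCMs whose rows each sum to 1 leave gray (R=G=B) pixels unchanged, a common white-balance-preserving constraint.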
  50. The color correction apparatus of any one of claims 28-49, wherein the color correction parameters are in the form of a look-up table.
  51. The color correction apparatus of claim 50, wherein the processor is configured to adjust the color correction parameters by look-up operations and interpolation operations in the look-up table.
  52. The color correction apparatus of claim 51, wherein said interpolation operations comprise a Shepard interpolation.
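Claims 50-52 use a look-up table queried by look-up and interpolation operations, the interpolation being a Shepard (inverse-distance-weighted) scheme. A minimal sketch; the power parameter `p` is an assumption:

```python
import numpy as np

def shepard_interpolate(query, nodes, values, p=2.0, eps=1e-12):
    # Interpolate a sparse color LUT: weight each stored node's output value
    # by the inverse p-th power of its distance from the query color.
    nodes, values = np.asarray(nodes, float), np.asarray(values, float)
    d = np.linalg.norm(nodes - np.asarray(query, float), axis=1)
    if d.min() < eps:              # query coincides with a LUT node: exact look-up
        return values[d.argmin()]
    w = 1.0 / d ** p
    return (w[:, None] * values).sum(axis=0) / w.sum()
```

Shepard interpolation needs no regular grid, which suits LUTs whose nodes are the measured color references themselves.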
  53. The color correction apparatus of any one of claims 28-52, wherein the noise evaluation image is a virtual noise evaluation image.
  54. The color correction apparatus of claim 53, wherein the virtual noise evaluation image comprises noise added to a virtual noise-free image.
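Claims 53-54 allow the noise evaluation image to be virtual: noise added to a synthetic noise-free image. A sketch in which the patch layout, noise level, and Gaussian noise model are assumptions:

```python
import numpy as np

def virtual_noise_evaluation_image(patch_colors, patch_size=16, sigma=5.0, seed=0):
    # Build a noise-free chart of uniform color patches laid out in a row,
    # then add Gaussian color noise to obtain the virtual evaluation image.
    rng = np.random.default_rng(seed)
    clean = np.concatenate(
        [np.full((patch_size, patch_size, 3), c, dtype=np.float64)
         for c in patch_colors], axis=1)
    noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 255.0)
    return clean, noisy
```

Because the clean image is known exactly, pre- and post-correction noise can be measured against a perfect reference, which a captured photograph cannot provide.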
  55. The color correction apparatus of any one of claims 28-54, wherein the color correction apparatus is mounted aboard a mobile platform.
  56. The color correction apparatus of claim 55, wherein the mobile platform is an unmanned aerial vehicle (UAV).
  57. A digital imaging device, comprising:
    an image sensor for imaging a plurality of color references; and
    a processor for performing color correction of a digital image,
    wherein the processor is configured to:
    obtain an input color value and a reference color value for each of the color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space, and
    determine a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  58. The digital imaging device of claim 57, wherein the non-linear color space is a CIE L*a*b* color space.
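Claim 58 names CIE L*a*b* as the non-linear color space in which the fitness function is evaluated. Below is a sketch of the standard sRGB-to-L*a*b* conversion (D65 white point) and a delta-E-1976-style color correction error; this is textbook colorimetry offered for illustration, not the patent's specific implementation:

```python
import numpy as np

def srgb_to_lab(rgb):
    # sRGB [0, 255] -> CIE L*a*b* under the D65 white point.
    c = rgb.astype(np.float64) / 255.0
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],    # linear RGB -> XYZ
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ M.T / np.array([0.95047, 1.0, 1.08883])   # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def color_correction_error(corrected_rgb, reference_rgb):
    # Mean Euclidean distance (delta E 1976) between the two images in Lab.
    d = srgb_to_lab(corrected_rgb) - srgb_to_lab(reference_rgb)
    return float(np.mean(np.linalg.norm(d, axis=-1)))
```

Evaluating the error in L*a*b* rather than RGB makes equal numeric distances correspond roughly to equal perceived color differences.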
  59. The digital imaging device of claim 57 or claim 58, wherein the processor is configured to determine the plurality of color correction parameters by adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  60. The digital imaging device of claim 59, wherein the fitness function comprises a color correction error and a noise amplification metric.
  61. The digital imaging device of claim 60, wherein the processor is configured to adjust the plurality of color correction parameters by color correcting the input color values and comparing the corrected input color values with the reference color values.
  62. The digital imaging device of claim 60 or claim 61, wherein the processor is configured to adjust the plurality of color correction parameters by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  63. The digital imaging device of claim 62, wherein the processor is configured to determine the noise amplification metric using a peak signal-to-noise ratio (PSNR).
  64. The digital imaging device of claim 63, wherein the processor is configured to determine the noise amplification metric using a PSNR by finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  65. The digital imaging device of claim 63 or claim 64, wherein the processor is configured to determine the noise amplification metric using a PSNR by determining a downsampled PSNR difference.
  66. The digital imaging device of claim 65, wherein the processor is configured to determine the downsampled PSNR difference by:
    downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image;
    downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and
    finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  67. The digital imaging device of claim 65 or claim 66, wherein the processor is configured to determine the noise amplification metric by determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  68. The digital imaging device of claim 67, wherein the processor is configured to determine a plurality of successively downsampled PSNR differences and determine the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  69. The digital imaging device of claim 68, wherein the processor is configured to determine the noise amplification metric by weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  70. The digital imaging device of claim 62, wherein the processor is configured to determine the noise amplification metric based on color variances.
  71. The digital imaging device of any one of claims 62-70, wherein the processor is configured to determine the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  72. The digital imaging device of any one of claims 60-71, wherein the noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of the noise amplification levels of the color patches.
  73. The digital imaging device of claim 72, wherein each color patch is weighted according to an average sensitivity of human perception to the color patch.
  74. The digital imaging device of any one of claims 60-73, wherein the processor is configured to optimize the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  75. The digital imaging device of any one of claims 57-74, wherein the processor is configured to adjust the color correction parameters using a genetic process.
  76. The digital imaging device of claim 75, wherein the processor is configured to further adjust the color correction parameters using a direct search method.
  77. The digital imaging device of any one of claims 57-76, wherein the processor is configured to obtain the input color values of the color calibration images by imaging one or more color references comprising a plurality of color patches.
  78. The digital imaging device of any one of claims 57-77, wherein the color correction parameters are in the form of a matrix.
  79. The digital imaging device of any one of claims 57-78, wherein the color correction parameters are in the form of a look-up table.
  80. The digital imaging device of claim 79, wherein the processor is configured to adjust the color correction parameters by look-up operations and interpolation operations in the look-up table.
  81. The digital imaging device of claim 80, wherein said interpolation operations comprise a Shepard interpolation.
  82. The digital imaging device of any one of claims 57-81, wherein the noise evaluation image is a virtual noise evaluation image.
  83. The digital imaging device of claim 82, wherein the virtual noise evaluation image comprises noise added to a virtual noise-free image.
  84. The digital imaging device of any one of claims 57-83, wherein the digital imaging device is mounted aboard a mobile platform.
  85. The digital imaging device of claim 84, wherein the mobile platform is an unmanned aerial vehicle (UAV).
  86. A non-transitory readable medium storing instructions for calibrating a digital imaging device for color correction, wherein the instructions comprise instructions for:
    obtaining an input color value and a reference color value for each of a plurality of color references and a noise evaluation image having a color noise for evaluating noise reduction, the input color values and reference color values being in a non-linear color space; and
    determining a plurality of color correction parameters that are optimized based on evaluating a fitness function in the non-linear color space.
  87. The non-transitory readable medium of claim 86, wherein the non-linear color space is a CIE L*a*b* color space.
  88. The non-transitory readable medium of claim 86 or claim 87, wherein said determining the plurality of color correction parameters comprises adjusting the color correction parameters based on the input color values, the reference color values, and the noise evaluation image.
  89. The non-transitory readable medium of claim 88, wherein the fitness function comprises a color correction error and a noise amplification metric.
  90. The non-transitory readable medium of claim 89, wherein said adjusting the plurality of color correction parameters comprises determining the color correction error by color correcting the input color values and comparing the corrected input color values with the reference color values.
  91. The non-transitory readable medium of claim 89 or claim 90, wherein said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by color correcting the noise evaluation image using the parameters and comparing the corrected noise evaluation image with the pre-correction noise evaluation image.
  92. The non-transitory readable medium of claim 91, wherein said determining the noise amplification metric comprises determining the noise amplification metric using a peak signal-to-noise ratio (PSNR).
  93. The non-transitory readable medium of claim 92, wherein said determining the noise amplification metric using a PSNR comprises finding a PSNR difference that is a difference between a PSNR for the pre-correction noise evaluation image and a PSNR for the corrected noise evaluation image.
  94. The non-transitory readable medium of claim 92 or claim 93, wherein said determining the noise amplification metric using a PSNR comprises determining a downsampled PSNR difference.
  95. The non-transitory readable medium of claim 94, wherein said determining the downsampled PSNR difference comprises:
    downsampling the pre-correction noise evaluation image to obtain a downsampled pre-correction noise evaluation image;
    downsampling the corrected noise evaluation image to obtain a downsampled corrected noise evaluation image; and
    finding the downsampled PSNR difference as a difference between a PSNR for the downsampled pre-correction noise evaluation image and a PSNR for the downsampled corrected noise evaluation image.
  96. The non-transitory readable medium of claim 94 or claim 95, wherein said determining the noise amplification metric comprises determining a weighted average of the PSNR difference and at least one downsampled PSNR difference.
  97. The non-transitory readable medium of claim 96, wherein said determining the noise amplification metric using a PSNR comprises determining a plurality of successively downsampled PSNR differences and determining the noise amplification metric as a weighted average of the PSNR difference and the plurality of successively downsampled PSNR differences.
  98. The non-transitory readable medium of claim 97, wherein said determining the noise amplification metric comprises weighting each of the PSNR difference and the plurality of successively downsampled PSNR differences with a respective weight, wherein at least one of the weights is not zero.
  99. The non-transitory readable medium of claim 91, wherein said determining the noise amplification metric comprises determining the noise amplification metric based on color variances.
  100. The non-transitory readable medium of any one of claims 91-99, wherein said adjusting the plurality of color correction parameters comprises determining the noise amplification metric by converting the color corrected noise evaluation image into a YUV color space.
  101. The non-transitory readable medium of any one of claims 89-100, wherein said noise evaluation image comprises a plurality of color patches, and wherein the noise amplification metric is determined as a weighted sum of the noise amplification levels of the color patches.
  102. The non-transitory readable medium of claim 101, wherein each color patch is weighted according to an average sensitivity of human perception to the color patch.
  103. The non-transitory readable medium of any one of claims 89-102, wherein said adjusting the color correction parameters comprises optimizing the fitness function by minimizing a weighted sum of the color correction error and the noise amplification metric.
  104. The non-transitory readable medium of any one of claims 86-103, wherein said adjusting the color correction parameters comprises adjusting the parameters using a genetic process.
  105. The non-transitory readable medium of claim 104, wherein the instructions further comprise instructions for adjusting the color correction parameters using a direct search method.
  106. The non-transitory readable medium of any one of claims 86-105, wherein said obtaining the input color values of the color calibration images comprises imaging one or more color references comprising a plurality of color patches.
  107. The non-transitory readable medium of any one of claims 86-106, wherein the color correction parameters are in the form of a matrix.
  108. The non-transitory readable medium of any one of claims 86-107, wherein the color correction parameters are in the form of a look-up table.
  109. The non-transitory readable medium of claim 108, wherein said adjusting the color correction parameters comprises look-up operations and interpolation operations in the look-up table.
  110. The non-transitory readable medium of claim 109, wherein said interpolation operations comprise a Shepard interpolation.
  111. The non-transitory readable medium of any one of claims 86-110, wherein the noise evaluation image is a virtual noise evaluation image.
  112. The non-transitory readable medium of claim 111, wherein the virtual noise evaluation image comprises noise added to a virtual noise-free image.
PCT/CN2015/079094 2015-05-15 2015-05-15 Color correction system and method WO2016183744A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
CN201580029947.8A CN106471567B (en) 2015-05-15 2015-05-15 Color calibration system and method
EP15874399.7A EP3202131A1 (en) 2015-05-15 2015-05-15 Color correction system and method
PCT/CN2015/079094 WO2016183744A1 (en) 2015-05-15 2015-05-15 Color correction system and method
CN201910240349.5A CN109963133B (en) 2015-05-15 2015-05-15 Color correction system and method
US15/176,037 US9742960B2 (en) 2015-05-15 2016-06-07 Color correction system and method
US15/646,301 US9998632B2 (en) 2015-05-15 2017-07-11 Color correction system and method
US15/977,661 US10244146B2 (en) 2015-05-15 2018-05-11 Color correction system and method
US16/358,946 US10560607B2 (en) 2015-05-15 2019-03-20 Color correction system and method
US16/783,478 US20200228681A1 (en) 2015-05-15 2020-02-06 Color correction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/079094 WO2016183744A1 (en) 2015-05-15 2015-05-15 Color correction system and method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/176,037 Continuation US9742960B2 (en) 2015-05-15 2016-06-07 Color correction system and method

Publications (1)

Publication Number Publication Date
WO2016183744A1 true WO2016183744A1 (en) 2016-11-24

Family

ID=57319197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/079094 WO2016183744A1 (en) 2015-05-15 2015-05-15 Color correction system and method

Country Status (4)

Country Link
US (5) US9742960B2 (en)
EP (1) EP3202131A1 (en)
CN (2) CN106471567B (en)
WO (1) WO2016183744A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017207306A1 (en) * 2016-05-20 2017-11-23 Heidelberger Druckmaschinen Ag Method of monitoring a staining standard in a printing press
US10179647B1 (en) 2017-07-13 2019-01-15 Fat Shark Technology SEZC Unmanned aerial vehicle
USD848383S1 (en) 2017-07-13 2019-05-14 Fat Shark Technology SEZC Printed circuit board
USD825381S1 (en) 2017-07-13 2018-08-14 Fat Shark Technology SEZC Unmanned aerial vehicle
CN107358926A (en) * 2017-07-24 2017-11-17 惠科股份有限公司 Display panel driving method, driving device and display device
US11120725B2 (en) 2018-04-24 2021-09-14 Advanced Micro Devices, Inc. Method and apparatus for color gamut mapping color gradient preservation
US11115563B2 (en) * 2018-06-29 2021-09-07 Ati Technologies Ulc Method and apparatus for nonlinear interpolation color conversion using look up tables
CN111785220B (en) * 2019-04-03 2022-02-08 名硕电脑(苏州)有限公司 Display correction method and system
TWI737125B (en) * 2020-01-14 2021-08-21 佳世達科技股份有限公司 Color calibrator
US11729371B2 (en) 2020-10-14 2023-08-15 Ford Global Technologies, Llc Systems and methods for improved camera color calibration
KR102661114B1 (en) 2020-11-10 2024-04-25 삼성전자주식회사 Camera module test apparatus, camera module test method and image generating device
CN113284473A (en) * 2021-05-27 2021-08-20 深圳市华星光电半导体显示技术有限公司 White point correction method and device of display panel
US11700458B2 (en) 2021-08-06 2023-07-11 Ford Global Technologies, Llc White balance and color correction for interior vehicle camera
CN117041531B (en) * 2023-09-04 2024-03-15 无锡维凯科技有限公司 Mobile phone camera focusing detection method and system based on image quality evaluation
CN117934353A (en) * 2024-03-20 2024-04-26 上海玄戒技术有限公司 Image processing method, device, equipment, storage medium and chip

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1622135A (en) * 2004-12-13 2005-06-01 中国科学院长春光学精密机械与物理研究所 Digital image color correction method
US20070263098A1 (en) 2006-04-27 2007-11-15 Shuxue Quan Weight adjustment in color correction
JP2008099069A (en) * 2006-10-13 2008-04-24 Mitsubishi Electric Corp Noise reduction device and method
KR20100079479A (en) * 2008-12-31 2010-07-08 엠텍비젼 주식회사 Apparatus for processing image signals, method for reducing chrominance noise in the image signal processing apparatus and record medium for performing method of reducing chrominance noise
US20130016082A1 (en) * 2003-09-30 2013-01-17 International Business Machines Corporation On demand calibration of imaging displays
US20140043627A1 (en) * 2012-08-09 2014-02-13 Konica Minolta, Inc. Color correcting system and image forming apparatus including same
CN104023218A (en) * 2013-02-28 2014-09-03 株式会社日立制作所 Imaging device and image signal processor

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5471324A (en) * 1994-04-05 1995-11-28 Xerox Corporation Color printer calibration with improved color mapping linearity
JP3305495B2 (en) * 1994-04-28 2002-07-22 キヤノン株式会社 Image processing apparatus and image processing method
CA2309002A1 (en) * 2000-05-23 2001-11-23 Jonathan Martin Shekter Digital film grain reduction
JP2002204374A (en) * 2000-10-23 2002-07-19 Seiko Epson Corp Creation method for color correction table, apparatus for image processing, method therefor and recording medium
JP2005110176A (en) * 2003-10-02 2005-04-21 Nikon Corp Noise removing method, noise removing processing program, and noise removing device
JP5248010B2 (en) * 2006-02-17 2013-07-31 株式会社東芝 Data correction apparatus, data correction method, magnetic resonance imaging apparatus, and X-ray CT apparatus
US8581981B2 (en) * 2006-04-28 2013-11-12 Southwest Research Institute Optical imaging system for unmanned aerial vehicle
JP2008099089A (en) 2006-10-13 2008-04-24 Konica Minolta Business Technologies Inc Color image processing method, color image processor and color image processing program
CN101193314B (en) * 2006-11-30 2012-06-27 北京思比科微电子技术有限公司 Image processing device and method for image sensor
US9830691B2 (en) * 2007-08-03 2017-11-28 The University Of Akron Method for real-time implementable local tone mapping for high dynamic range images
WO2012106797A1 (en) * 2011-02-11 2012-08-16 Canadian Space Agency Method and system of increasing spatial resolution of multi-dimensional optical imagery using sensor's intrinsic keystone
US9049334B1 (en) * 2011-02-24 2015-06-02 Foveon, Inc. Denoising images with a color matrix pyramid
JP5963009B2 (en) * 2011-12-08 2016-08-03 パナソニックIpマネジメント株式会社 Digital specimen preparation apparatus, digital specimen preparation method, and digital specimen preparation server
KR102081241B1 (en) * 2012-03-29 2020-02-25 더 유니버서티 어브 퀸슬랜드 A method and apparatus for processing patient sounds
JP5939962B2 (en) * 2012-11-19 2016-06-29 株式会社Pfu Image processing apparatus, image processing method, and computer program
JP2015097382A (en) * 2013-10-08 2015-05-21 キヤノン株式会社 Information processing device, imaging system, information processing method and program
JP6013382B2 (en) * 2014-02-27 2016-10-25 富士フイルム株式会社 Endoscope system and operating method thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130016082A1 (en) * 2003-09-30 2013-01-17 International Business Machines Corporation On demand calibration of imaging displays
CN1622135A (en) * 2004-12-13 2005-06-01 中国科学院长春光学精密机械与物理研究所 Digital image color correction method
US20070263098A1 (en) 2006-04-27 2007-11-15 Shuxue Quan Weight adjustment in color correction
JP2008099069A (en) * 2006-10-13 2008-04-24 Mitsubishi Electric Corp Noise reduction device and method
KR20100079479A (en) * 2008-12-31 2010-07-08 엠텍비젼 주식회사 Apparatus for processing image signals, method for reducing chrominance noise in the image signal processing apparatus and record medium for performing method of reducing chrominance noise
US20140043627A1 (en) * 2012-08-09 2014-02-13 Konica Minolta, Inc. Color correcting system and image forming apparatus including same
CN104023218A (en) * 2013-02-28 2014-09-03 株式会社日立制作所 Imaging device and image signal processor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP3202131A4
SHUXUE QUAN: "Analytical approach to the optimal linear matrix with comprehensive parametric", PROCEEDINGS OF SPIE-IS&T ELECTRONIC IMAGING, vol. 5292, 2004, XP002467113, DOI: 10.1117/12.527530

Also Published As

Publication number Publication date
US20200228681A1 (en) 2020-07-16
US20170318194A1 (en) 2017-11-02
EP3202131A4 (en) 2017-08-09
US20180262650A1 (en) 2018-09-13
CN106471567A (en) 2017-03-01
US20170041507A1 (en) 2017-02-09
CN106471567B (en) 2019-04-26
EP3202131A1 (en) 2017-08-09
US9742960B2 (en) 2017-08-22
US9998632B2 (en) 2018-06-12
CN109963133B (en) 2021-07-30
US10244146B2 (en) 2019-03-26
CN109963133A (en) 2019-07-02
US20190215418A1 (en) 2019-07-11
US10560607B2 (en) 2020-02-11

Similar Documents

Publication Publication Date Title
US10560607B2 (en) Color correction system and method
US11190669B2 (en) System and method for image processing
Stevens et al. Using digital photography to study animal coloration
US8767103B2 (en) Color filter, image processing apparatus, image processing method, image-capture apparatus, image-capture method, program and recording medium
US11622085B2 (en) Multispectral image decorrelation method and system
JP5067499B2 (en) Image processing device
EP1977615B1 (en) Automatic color calibration of an image sensor
US8036457B2 (en) Image processing apparatus with noise reduction capabilities and a method for removing noise from a captured image
US20090147098A1 (en) Image sensor apparatus and method for color correction with an illuminant-dependent color correction matrix
EP2846530A2 (en) Method and associated apparatus for correcting color artifact of image
US8374433B2 (en) Apparatus, method, and manufacture for correcting color shading in CMOS image sensors
US20020114533A1 (en) Adaptive process for removing streaks in multi-band digital images
JP4936686B2 (en) Image processing
JP4677699B2 (en) Image processing method, image processing device, photographing device evaluation method, image information storage method, and image processing system
Monno et al. N-to-sRGB mapping for single-sensor multispectral imaging
US20080144056A1 (en) Color metric for halo artifacts
Amziane et al. Frame-based reflectance estimation from multispectral images for weed identification in varying illumination conditions
WO2015167460A1 (en) Imager calibration via modeled responses to importance-weighted color sample data
Vaillant et al. Color correction matrix for sparse RGB-W image sensor without IR cutoff filter
JP2019165398A (en) Image processing apparatus, image processing method and program
Mou et al. Colorimetric characterization of imaging device by total color difference minimization
WO2022198436A1 (en) Image sensor, image data acquisition method and imaging device
CN117808715A (en) Image pseudo-color correction method and device
CN117710489A (en) Image color edge quantization method, apparatus, storage medium, and program product
CN117197111A (en) Display defect detection method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
REEP Request for entry into the european phase (Ref document number: 2015874399; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2015874399; Country of ref document: EP)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15874399; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)