WO2001060051A1 - Automatic color adjustment and printer calibration

Automatic color adjustment and printer calibration

Info

Publication number
WO2001060051A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
output device
information
color space
pixel
Application number
PCT/US2000/024079
Other languages
English (en)
Inventor
Allen Rush
Greg Ward
Sumit Chawla
Daniel R. Baum
Original Assignee
Shutterfly, Inc.
Application filed by Shutterfly, Inc.
Priority to AU73424/00A
Publication of WO2001060051A1


Classifications

    • H (ELECTRICITY) > H04 (ELECTRIC COMMUNICATION TECHNIQUE) > H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION):
    • H04N1/32112: Display, printing, storage or transmission of additional information (e.g. ID code, date and time or title) separate from the image data, in a separate computer file, document page or paper sheet (e.g. a fax cover sheet)
    • H04N1/00204: Connection or combination of a still picture apparatus with a digital computer or a digital computer system (e.g. an internet server)
    • H04N1/6016: Colour correction or control; conversion to subtractive colour signals
    • H04N1/603: Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer
    • H04N2201/325: Modified version of the image (e.g. part of the image, image reduced in size or resolution, thumbnail or screennail)
    • H04N2201/3277: Additional information relating to an image, stored in the same storage device as the image data

Definitions

  • TECHNICAL FIELD: This application relates to image processing, for example, processing digital images captured by a digital camera.
  • The computer system 100 illustrated in FIG. 1 represents a typical hardware setup for executing software that allows a user to perform tasks such as communicating with other computer users, accessing various computer resources, and viewing, creating, or otherwise manipulating electronic content — that is, any combination of text, images, movies, music or other sounds, animations, 3D virtual worlds, and links to other objects.
  • The system includes various input/output (I/O) devices (mouse 103, keyboard 105, display 107) and a general purpose computer 100 having a central processor unit (CPU) 121, an I/O unit 117 and a memory 109 that stores data and various programs such as an operating system 111, and one or more application programs 113.
  • The computer system 100 also typically includes nonvolatile memory 110 (e.g., flash RAM, a hard disk drive, and/or a floppy disk or other removable storage media) and a communications card or device 123 (e.g., a modem or network adapter) for exchanging data with a network 127 via a communications link 125 (e.g., a telephone line).
  • The computer 100 of FIG. 1 also can be connected to various peripheral I/O devices.
  • One of the more popular of such peripheral devices is a digital camera 108 that enables users to take pictures and save them in digital (electronic) format.
  • Digital cameras 108 typically employ a two-dimensional array of charge-coupled-devices (CCDs) or other type of sensors to capture images.
  • The analog pixel signals from the CCD array are digitized (e.g., using analog-to-digital (A/D) converters) and used to generate tristimulus data (e.g., RGB data), which can be compressed (e.g., according to the JPEG compression standard) and stored in memory of the camera 108.
  • The digital camera 108 is connected to the computer 100 only while the user is uploading images to the computer's disk drive or other nonvolatile memory 110.
  • Users also can obtain digital images, for example, of film-based prints from a traditional camera, by sending an exposed film into a photo-finishing service, which develops the film to make prints and then scans (or otherwise digitizes) the prints or negatives to generate digital image files.
  • The digital image files then can be transmitted back to the user by e-mail or on a CD-ROM, diskette, or other removable storage medium.
  • An image viewer application can be used to view the images, or a photo editor application can be used to touch up or otherwise modify the images.
  • An electronic messaging (e.g., e-mail) application can be used to transmit the digital images to other users.
  • In addition to viewing the digital images on the computer display 107, users often desire to have hard copies (physical prints) made of digital images. Such hard copies can be generated locally by the user using output devices such as an inkjet printer or a dye sublimation printer.
  • Users also can transmit digital images (e.g., either over a computer network or by using a physical storage medium such as a floppy disk) to a photo-finishing service, which can make hard copies of the digital images and send them (e.g., by U.S. Mail or courier service) back to the user.
  • Special procedures and/or processing typically must be used to calibrate the input and output equipment and/or correct and enhance the captured image.
  • For example, special procedures and/or processing are typically used to compensate for any color shift that otherwise may occur due to the difference between the illuminant used to illuminate the captured image and a reference illuminant (e.g., corresponding to a color temperature of D65) to which the digital camera is calibrated.
  • The white reference of a captured image illuminated with different illuminants will reflect light having differing wavelengths.
  • Although the human brain automatically adapts to differences in the white reference of a captured image, the CCD array used in digital cameras makes no such adaptation and instead will produce different pixel data (i.e., the CCD array will sense different colors) when the same white reference is illuminated with different illuminants.
  • A conventional approach to compensating for "color shift" requires that the spectral nature of the illuminant used to illuminate the captured image be identified and used to shift the captured image's color data to a "normal" response (i.e., the response that would occur if the subject of the captured image was illuminated with the desired reference illuminant).
  • Such color shift compensation can involve taking a careful measurement of the illuminant that is used to illuminate the captured image and using the measurement to color shift the resulting image data and/or calibrate the equipment used to capture the image.
  • Such an approach typically requires that a photographer have the skill and equipment necessary to make such illuminant measurements and adjust his or her camera accordingly.
  • Such an approach also typically requires that illuminant measurements be made before each image is captured.
  • Alternatively, color shift compensation can involve manually inspecting and editing each captured image.
  • Such an approach typically involves a user with the requisite skill inspecting the captured image to identify any color shift and using image editing software (e.g., ADOBE PHOTOSHOP®) to color shift the image in a compensating manner.
  • Conventional digital cameras typically perform a limited form of color-shift compensation at the time the image is captured. Such digital cameras typically calculate an approximation of the color temperature (or some other attribute) of the illuminant used to illuminate the captured image. The color temperature approximation is then used to color shift the resulting captured image.
  • However, conventional digital cameras typically have limited processing capability and/or time to perform color-shift compensation.
  • As a result, the resultant color-shift compensation may be less than satisfactory for the production of high-quality image prints.
  • Moreover, the compensation performed by such digital cameras is generally oriented to preparing the captured image for display on computer monitors and other CRT-type devices.
  • Conventional digital cameras also may produce other artifacts that appear in captured images.
  • Such artifacts include blooming and color aliasing artifacts, which result from limitations in the sensors, A/D converters, and/or optical components used in the digital camera.
  • Other artifacts produced by digital cameras include "blocking" due to data compression and loss of perceived sharpness due to optical and spatial undersampling.
  • Some digital cameras also perform other types of image processing that can be unsatisfactory and/or introduce other artifacts into the captured image. For example, overuse of sharpening filters by some digital cameras to compensate for undersampling can result in the creation of shadows and/or halos in the captured image.
  • To print a captured image, the image typically has to be converted from the color space in which the image was captured (typically, an RGB color space) to the color space used by the printer (typically, a CMY color space).
  • Some conventional printers convert RGB images into the printer's CMY color space using ad-hoc methods that switch between several transformation matrices and load different lookup tables; however, these ad-hoc approaches may not be suitable for the production of high-quality image prints.
  • The present inventors recognized that it would be advantageous to automatically undo portions of the image processing performed by digital cameras and automatically perform high-quality image processing to compensate for, and/or correct, artifacts introduced into captured images by digital cameras so that high-quality image prints can be produced.
  • The present inventors also recognized that it would be advantageous to use an automated image-processing process in which information about the source of the captured image (e.g., the digital camera or other device used to capture the image) and information about the particular output device on which prints will be printed is used to correct, adjust, and/or otherwise optimize the image for generating high-quality image prints from the captured image.
  • Characteristic data about the digital camera (or other device) used to capture the image and/or the particular output device on which prints will be made can be used in the image processing process, e.g., to identify, correct, and/or compensate for limited and/or unreliable source data and/or to calibrate and/or optimize the image processing for that particular output device.
  • In one aspect, a computer-implemented method of processing images may include receiving an image, receiving source information corresponding to a source of the image (e.g., an input device such as a digital camera), receiving output device information corresponding to an output device on which the image is to be printed, and processing the image using the source information and the output device information.
  • The source information may include information about an input device used to capture the image.
  • For example, the source information may include information about the model of input device used to capture the image, information about the particular input device used to capture the image, one or more settings of the input device when the image was captured (e.g., resolution, aperture, exposure, and flash information), and/or a measurement of the color temperature of the illuminant used to illuminate the captured image.
  • The source information also may include information explaining the camera's configuration for a particular captured image (e.g., information indicating that the camera was configured for a particular illuminant such as a daylight illuminant), a measurement of focal length, gamma correction information, an effective film speed of a digital camera, information relating to a compression algorithm used to compress raw image data, and information concerning physical limitations of the input device (e.g., information about the dynamic range of the input device).
  • The source information may be represented by JPEG markers and/or may be entered by a user and received with the image.
  • The source information may be provided by a digital camera that captured the image and/or a computer application that is used to create, modify or otherwise process the captured image.
  • The output device information may include information about the identity of the output device (e.g., information about which particular output device is to be used to print the captured image and information about the manufacturer and/or model of the output device), information about the printing capabilities of the output device (e.g., information about the resolution of the output device, information about the type of media used with the output device, information about the size of media used with the output device, information about the throughput of the output device, information about the dynamic range of the output device, and information about a gamma value of the output device), and information about the processing capabilities of the output device (e.g., information about image processing functions that are performed by the output device and information about a color space used by the output device).
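  • To make the shape of this metadata concrete, the records might be structured as in the following sketch (Python; every field name here is hypothetical, since the patent does not prescribe a schema):

        from dataclasses import dataclass
        from typing import Optional, Tuple

        @dataclass
        class SourceInfo:
            """Capture metadata for the input device (hypothetical field names)."""
            make: Optional[str] = None             # manufacturer of the input device
            model: Optional[str] = None            # model of the input device
            serial: Optional[str] = None           # the particular input device
            resolution: Optional[Tuple[int, int]] = None
            aperture: Optional[float] = None
            exposure_s: Optional[float] = None
            flash_fired: Optional[bool] = None
            color_temp_k: Optional[float] = None   # measured illuminant, if any
            gamma: Optional[float] = None          # gamma correction applied by camera

        @dataclass
        class OutputDeviceInfo:
            """Printing and processing capabilities of the output device."""
            manufacturer: Optional[str] = None
            model: Optional[str] = None
            device_id: Optional[str] = None        # which particular output device
            resolution_dpi: Optional[int] = None
            media_type: Optional[str] = None
            media_size: Optional[str] = None
            throughput_ppm: Optional[float] = None # prints per unit time
            dynamic_range: Optional[float] = None
            gamma: Optional[float] = None
            color_space: str = "CMY"               # color space used by the device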
  • Processing the image using the source information and the output device information may include using the source information and the output device information to compensate for undesirable artifacts included in the captured image, for example, by compensating for color shift in the captured image.
  • Processing the image using the source information and the output device information may include performing processing that is optimized based on the output device information (e.g., the output device on which the image is to be printed). For example, gamma correction processing performed by the input device may be undone and the image may be gamma corrected based on a gamma value for the output device.
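  • As an illustration, a minimal sketch of that gamma round trip, assuming simple power-law encodings and 8-bit channels (the gamma values shown are placeholders, not values from the patent):

        import numpy as np

        def regamma(img_u8, input_gamma=2.2, output_gamma=2.0):
            """Undo the input device's gamma encoding, then re-encode the
            linear values with the output device's gamma."""
            x = img_u8.astype(np.float64) / 255.0   # normalize 8-bit channels
            linear = x ** input_gamma               # undo the camera's encoding
            y = linear ** (1.0 / output_gamma)      # encode for the output device
            return np.clip(np.round(y * 255.0), 0, 255).astype(np.uint8)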
  • In addition, the image may be interpolated to the resolution of the output device.
  • Additional image processing may be performed after interpolating the image to the resolution of the output device.
  • Processing the image using the source information and the output device information also may include determining which image processing operations are to be performed by the output device.
  • The method may further include receiving characteristic data and processing the image using the source information, the output device information, and the characteristic data.
  • The characteristic data may include data about the input device and/or processing rules for qualifying an image.
  • For example, the source information may include an exposure setting for the input device, and the image may be qualified by comparing the exposure setting for the input device to a predetermined exposure setting value.
  • The characteristic data may include processing rules for qualifying pixels of the image to be used for compensating for artifacts introduced in the image.
  • For example, the characteristic data may include processing rules for qualifying pixels by determining if each pixel is in a dark region of the image, e.g., by comparing the luminance of each pixel to a predetermined luminance value.
  • The characteristic data may also include processing rules that are a function of the difference between the red and green values of a pixel.
  • For example, the characteristic data may include processing rules for qualifying pixels by comparing the difference between the red and green values of a pixel to a predetermined difference value and/or processing rules for qualifying pixels by determining if the ratio of the red value of the pixel to the green value of the pixel is within a predetermined range. Also, for each pixel that is qualified, the ratio of the red value of the pixel to the green value of the pixel may be added to an R/G ratio accumulator.
  • The method may further include normalizing the R/G ratio accumulator at some point during processing (e.g., after the ratios of the red value to the green value of all the qualified pixels have been added to the R/G ratio accumulator). Also, if the normalized R/G ratio accumulator is less than a predetermined value, the red value of each pixel in the image may be increased (e.g., by applying a piecewise linear map that is a function of the normalized R/G ratio accumulator). If the normalized R/G ratio accumulator is not less than the predetermined value, the red value of each pixel in the image may be reduced (e.g., by a factor that is a function of the normalized R/G ratio accumulator).
  • The characteristic data may also include processing rules that are a function of the difference between the blue and green values of the pixel.
  • For example, the characteristic data may include processing rules for qualifying pixels by comparing the difference between the blue and green values of the pixel to a predetermined difference value and/or processing rules for qualifying pixels by determining if the ratio of the blue value of the pixel to the green value of the pixel is within a predetermined range. Also, for each pixel that is qualified, the ratio of the blue value of the pixel to the green value of the pixel may be added to a B/G ratio accumulator.
  • The method may further include normalizing the B/G ratio accumulator at some point during processing (e.g., after the ratios of the blue value to the green value of all the qualified pixels have been added to the B/G ratio accumulator). Also, if the normalized B/G ratio accumulator is less than a predetermined value, the blue value of each pixel in the image may be increased (e.g., by applying a piecewise linear map that is a function of the normalized B/G ratio accumulator to the blue channel of the image). If the normalized B/G ratio accumulator is not less than the predetermined value, the blue value of each pixel in the image may be reduced (e.g., by a factor that is a function of the normalized B/G ratio accumulator).
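  • A sketch of this qualification-and-accumulator scheme (assuming 8-bit RGB; the luminance weights, thresholds, ratio bounds, and the knee of the piecewise map are all placeholder values standing in for model-specific characteristic data):

        import numpy as np

        def color_shift_ratios(rgb, lum_thresh=64.0, diff_thresh=40.0,
                               ratio_lo=0.5, ratio_hi=2.0):
            """Accumulate R/G and B/G ratios over qualified dark-region pixels
            and return the normalized accumulators (mean ratio per pixel)."""
            r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
            lum = 0.299 * r + 0.587 * g + 0.114 * b   # approximate luminance
            dark = lum < lum_thresh                   # dark-region qualification
            g_safe = np.maximum(g, 1.0)               # guard against division by zero
            rg, bg = r / g_safe, b / g_safe
            ok_r = dark & (np.abs(r - g) < diff_thresh) & (rg > ratio_lo) & (rg < ratio_hi)
            ok_b = dark & (np.abs(b - g) < diff_thresh) & (bg > ratio_lo) & (bg < ratio_hi)
            rg_norm = float(rg[ok_r].mean()) if ok_r.any() else 1.0
            bg_norm = float(bg[ok_b].mean()) if ok_b.any() else 1.0
            return rg_norm, bg_norm

        def correct_channel(ch, norm_ratio, thresh=1.0, knee=192.0):
            """Boost the channel with a piecewise linear map when the normalized
            ratio is below the threshold; otherwise attenuate it by the ratio."""
            x = ch.astype(np.float64)
            if norm_ratio < thresh:
                boosted = np.minimum(x / norm_ratio, 255.0)
                t = np.clip((x - knee) / (255.0 - knee), 0.0, 1.0)
                y = boosted + t * (x - boosted)       # taper to identity near white
            else:
                y = x / norm_ratio
            return np.clip(np.round(y), 0, 255).astype(np.uint8)

        # img is an (H, W, 3) uint8 RGB array; red and blue are corrected against green:
        # rg, bg = color_shift_ratios(img)
        # img[..., 0] = correct_channel(img[..., 0], rg)
        # img[..., 2] = correct_channel(img[..., 2], bg)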
  • The method may also include receiving a mapping from an intermediate, device-independent color space (e.g., a floating-point YUV color space) to the output device's color space (e.g., a 24-bit CMY color space) and converting the captured image from the device-dependent color space in which the captured image is encoded (e.g., a 24-bit RGB color space) into the intermediate, device-independent color space.
  • The image processing of the captured image may take place in the intermediate, device-independent color space.
  • The image processing may include performing at least one of the following operations on the captured image in the device-independent color space: automatic red-eye reduction or manual red-eye reduction, filtering, digital sepia toning, and color biasing.
  • The processed captured image may then be converted to the device-dependent color space of the output device using the mapping.
  • The mapping from the intermediate, device-independent color space to the output device's color space may be generated, for example, by calibrating the output device and printing a test image.
  • For example, the output device may be calibrated to approximate a gamma 2 response function in each color channel and a test image including a row of test patches for the colors red, blue, green, cyan, magenta, yellow, and gray may be printed. Then, the spectral reflectance of the test patches may be measured and the output device may be fitted to a model.
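  • Written in the standard subtractive dye-density form that is consistent with the variable definitions below (with λ the measured wavelength, R(λ) the measured spectral reflectance of a printed color, and R_p(λ) the measured spectral reflectance of unexposed, developed paper), the model is plausibly of the form shown here; this is a reconstruction, and the patent's exact formula may differ:

        R(\lambda) = R_p(\lambda)\cdot 10^{-\left[\,D_c(\lambda)\,f_c(c + K_{mc}m + K_{yc}y)\;+\;D_m(\lambda)\,f_m(m + K_{cm}c + K_{ym}y)\;+\;D_y(\lambda)\,f_y(y + K_{cy}c + K_{my}m)\,\right]}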
  • D_c(λ) is the basic spectral dye density for Cyan pigment;
  • D_m(λ) is the basic spectral dye density for Magenta pigment;
  • D_y(λ) is the basic spectral dye density for Yellow pigment;
  • f_c(i) is the Cyan pigment response as a function of stimulus;
  • f_m(i) is the Magenta pigment response as a function of stimulus;
  • f_y(i) is the Yellow pigment response as a function of stimulus;
  • K_cm, K_cy, K_mc, K_my, K_yc, and K_ym are the inter-channel cross-talk factors between cyan and magenta, between cyan and yellow, between magenta and cyan, between magenta and yellow, between yellow and cyan, and between yellow and magenta, respectively; and
  • c, m, and y are the input values of the primaries cyan, magenta, and yellow, respectively, for the particular measured test patch.
  • The output device may be fitted to the model using a chi-square minimization process to compute D_c(λ), D_m(λ), D_y(λ), f_c(i), f_m(i), f_y(i), K_cm, K_cy, K_mc, K_my, K_yc, and K_ym from λ, R(λ), R_p(λ), c, m, and y.
  • The mapping may be generated using the fitted model, for example, to create a three-dimensional lookup table.
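  • A sketch of how such a fit might be set up with an off-the-shelf least-squares routine, assuming the dye-density curves are sampled at the measured wavelengths and each pigment response is modeled as a power function of the cross-talk-corrected, normalized stimulus (both parameterizations are illustrative assumptions, as is the reconstructed reflectance formula above):

        import numpy as np
        from scipy.optimize import least_squares

        def model_reflectance(params, rp, cmy, n_wl):
            """Evaluate the dye-density model for every (patch, wavelength) pair."""
            Dc, Dm, Dy = (params[j * n_wl:(j + 1) * n_wl] for j in range(3))
            Kcm, Kcy, Kmc, Kmy, Kyc, Kym = params[3 * n_wl:3 * n_wl + 6]
            ac, am, ay = params[3 * n_wl + 6:3 * n_wl + 9]  # response exponents
            c, m, y = cmy[:, 0], cmy[:, 1], cmy[:, 2]       # patch inputs in [0, 1]
            fc = np.clip(c + Kmc * m + Kyc * y, 0.0, None) ** ac
            fm = np.clip(m + Kcm * c + Kym * y, 0.0, None) ** am
            fy = np.clip(y + Kcy * c + Kmy * m, 0.0, None) ** ay
            dens = np.outer(fc, Dc) + np.outer(fm, Dm) + np.outer(fy, Dy)
            return rp[None, :] * 10.0 ** (-dens)

        def fit_printer(R, rp, cmy):
            """Chi-square-style fit of the model to measured patch reflectances.
            R: (patches, wavelengths) array; rp: paper reflectance per wavelength."""
            n_wl = rp.size
            x0 = np.concatenate([np.ones(3 * n_wl), np.zeros(6), np.ones(3)])
            res = least_squares(
                lambda p: (model_reflectance(p, rp, cmy, n_wl) - R).ravel(), x0)
            return res.x  # sample the fitted model over CMY to build the 3-D LUT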
  • The image processing of the captured image may include calibrating the captured image to standard colors used to calibrate the output device and generate the mapping. Also, at least a portion of the image processing may occur in a device-dependent color space in which the captured image is encoded.
  • The output device's device-dependent color space may be a CMY color space, and converting the processed captured image to the output device's device-dependent color space may include using a lookup table to interpolate a CMY triplet for each pixel in the processed captured image. Also, pixels having a color that is outside of the gamut of the output device may be mapped to the gamut of the output device.
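  • A sketch of that per-pixel lookup, assuming a regular N×N×N table indexed by the intermediate color coordinates and trilinear interpolation between the eight surrounding entries (the table size, grid bounds, and clamp-based gamut handling are placeholder choices):

        import numpy as np

        def lut_to_cmy(yuv, lut, lo, hi):
            """Interpolate a CMY triplet for each pixel from a 3-D lookup table.
            yuv: (..., 3) float image; lut: (N, N, N, 3) table; lo/hi: grid bounds."""
            n = lut.shape[0]
            t = (yuv - lo) / (hi - lo) * (n - 1)
            t = np.clip(t, 0.0, n - 1 - 1e-9)   # clamping doubles as crude gamut mapping
            i = np.floor(t).astype(int)
            f = t - i
            fy, fu, fv = f[..., 0:1], f[..., 1:2], f[..., 2:3]
            def g(a, b, c):                      # fetch one corner of the grid cell
                return lut[i[..., 0] + a, i[..., 1] + b, i[..., 2] + c]
            return ((1 - fy) * ((1 - fu) * ((1 - fv) * g(0, 0, 0) + fv * g(0, 0, 1))
                                + fu * ((1 - fv) * g(0, 1, 0) + fv * g(0, 1, 1)))
                    + fy * ((1 - fu) * ((1 - fv) * g(1, 0, 0) + fv * g(1, 0, 1))
                            + fu * ((1 - fv) * g(1, 1, 0) + fv * g(1, 1, 1))))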
  • The method may also include printing the processed captured image on the output device.
  • The output device may be an output device located at a photofinisher and/or may be a local output device attached to a user's computer.
  • In the latter case, generating the mapping may include transmitting a test image to the user's computer for printing on the local output device, printing the test image on the local output device to create a test print, sending the test print to a photofinisher, having the photofinisher generate the mapping and process the captured image, and transmitting the processed captured image from the photofinisher to the user's computer for printing on the local output device.
  • In another aspect, a system for printing images may include an output device on which an image is to be printed.
  • The system may also include an image processor connected to the output device.
  • The image processor may have image processor software in a computer-readable medium comprising instructions for causing the image processor to perform the following operations: (i) receive an image, (ii) receive source information corresponding to a source of the image (e.g., information about an input device such as a digital camera used to capture the image), (iii) receive output device information about an output device on which the image is to be printed, and (iv) process the image using the source information and the output device information.
  • The source information may include information about the model of input device used to capture the image, information about the particular input device used to capture the image, and one or more settings of the input device when the image was captured (e.g., resolution, aperture, exposure, and flash information). Also, the source information may include a measurement of the color temperature of the illuminant used to illuminate the captured image, information explaining the camera's configuration for a particular captured image (e.g., information indicating that the camera was configured for a particular illuminant such as a daylight illuminant), a measurement of focal length, gamma correction information, an effective film speed of a digital camera, and information relating to a compression algorithm used to compress raw image data.
  • In addition, the source information may include information concerning physical limitations of the input device, such as information about the dynamic range of the input device.
  • The source information may be represented by JPEG markers and/or may be entered by a user and received with the image.
  • The image processor may be connected to a network such as the Internet, and at least a portion of the source information and/or the output device information may be received via the network.
  • The source information may be provided by a digital camera that captured the image and/or by a computer application that is used to create, modify or otherwise process the captured image.
  • The output device information includes information about the identity of the output device.
  • The information about the identity of the output device may include information about which particular output device is to be used to print the captured image and information about the manufacturer and/or the model of the output device.
  • The output device information may further include information about the printing capabilities of the output device.
  • The information about the printing capabilities of the output device may include information about the resolution of the output device, information about the type of media used with the output device, information about the size of media used with the output device, information about the throughput of the output device, information about the dynamic range of the output device, and information about the gamma value of the output device.
  • The output device information may include information about the processing capabilities of the output device.
  • The information about the processing capabilities of the output device may include information about image processing functions that are performed by the output device and information about a color space used by the output device.
  • The image processor software may also include instructions that cause the image processor to use the source information and the output device information to compensate for undesirable artifacts included in the captured image.
  • For example, the image processor software may include instructions that cause the image processor to compensate for color shift in the captured image.
  • The image processor software may include instructions that cause the image processor to optimize processing of the image based on the output device information.
  • For example, the image processor software may include instructions that cause the image processor to optimize processing of the image based on the output device on which the image is to be printed.
  • The image processor software may include instructions that cause the image processor to undo gamma correction processing performed by the input device and gamma correct the image based on a gamma value for the output device.
  • The image processor software may include instructions that cause the image processor to interpolate the image to the resolution of the output device and to perform additional image processing after interpolating the image to the resolution of the output device.
  • The image processor software may also include instructions that cause the image processor to determine which image processing operations are to be performed by the output device.
  • The image processor software may further include instructions that cause the image processor to receive characteristic data and process the image using the source information, the output device information, and the characteristic data.
  • The characteristic data may include characteristic data about the input device, such as processing rules for qualifying an image.
  • For example, the source information may include an exposure setting for the input device.
  • In that case, the image processor software may include instructions that cause the image processor to qualify the image by comparing the exposure setting for the input device to a predetermined exposure setting value.
  • The characteristic data may include processing rules for qualifying pixels of an image to be used for compensating for artifacts introduced in the image.
  • For example, the image processor software may include instructions that cause the image processor to qualify pixels by determining if each pixel is in a dark region of the image (e.g., by comparing the luminance of each pixel to a predetermined luminance value).
  • The characteristic data may also include processing rules that are a function of the difference between the red and green values of a pixel.
  • The image processor software may include instructions that cause the image processor to qualify pixels by comparing the difference between the red and green values of a pixel to a predetermined difference value and by determining if the ratio of the red value of the pixel to the green value of the pixel is within a predetermined range.
  • The image processor software may further include instructions that cause the image processor to, for each pixel that is qualified, add to an R/G ratio accumulator the ratio of the red value of the pixel to the green value of the pixel and to normalize the R/G ratio accumulator (e.g., after the ratios of the red value to the green value of all the qualified pixels have been added to the R/G ratio accumulator).
  • The image processor software may include instructions that cause the image processor to determine if the normalized R/G ratio accumulator is less than a predetermined value, increase the red value of each pixel in the image (e.g., by applying a piecewise linear map that is a function of the normalized R/G ratio accumulator) if the normalized R/G ratio accumulator is less than the predetermined value, and reduce the red value of each pixel in the image (e.g., by a factor that is a function of the normalized R/G ratio accumulator) if the normalized R/G ratio accumulator is not less than the predetermined value.
  • The characteristic data may also include processing rules that are a function of the difference between the blue and green values of the pixel.
  • The image processor software may include instructions that cause the image processor to qualify pixels by comparing the difference between the blue and green values of a pixel to a predetermined difference value and by determining if the ratio of the blue value of the pixel to the green value of the pixel is within a predetermined range.
  • The image processor software may further include instructions that cause the image processor to, for each pixel that is qualified, add to a B/G ratio accumulator the ratio of the blue value of the pixel to the green value of the pixel and to normalize the B/G ratio accumulator (e.g., after the ratios of the blue value to the green value of all the qualified pixels have been added to the B/G ratio accumulator).
  • The image processor software may include instructions that cause the image processor to determine if the normalized B/G ratio accumulator is less than a predetermined value, increase the blue value of each pixel in the image (e.g., by applying a piecewise linear map that is a function of the normalized B/G ratio accumulator) if the normalized B/G ratio accumulator is less than the predetermined value, and reduce the blue value of each pixel in the image (e.g., by a factor that is a function of the normalized B/G ratio accumulator) if the normalized B/G ratio accumulator is not less than the predetermined value.
  • The image processor software may also include instructions that cause the image processor to receive a mapping from an intermediate, device-independent color space (e.g., a floating-point YUV color space) to the output device's color space (e.g., a 24-bit CMY color space) and to convert the captured image from the device-dependent color space (e.g., a 24-bit RGB color space) in which the captured image is encoded into the intermediate, device-independent color space.
  • The image processor software may include instructions that cause the image processor to process the captured image in the intermediate, device-independent color space.
  • For example, the image processor software may include instructions that cause the image processor to perform at least one of the following operations on the captured image in the device-independent color space: automatic red-eye reduction or manual red-eye reduction, filtering, digital sepia toning, and color biasing.
  • The image processor software may include instructions that cause the image processor to convert the processed captured image to the device-dependent color space of the output device using the mapping.
  • The system may further include an output device processor in communication with the output device.
  • The output device processor may have output device processor software in a computer-readable medium comprising instructions for causing the output device processor to generate the mapping from the intermediate, device-independent color space to the output device's color space.
  • The output device processor may be a part of the image processor.
  • The output device processor software may include instructions that cause the output device processor to calibrate the output device.
  • For example, the output device processor software may include instructions that cause the output device processor to calibrate the output device to approximate a gamma 2 response function in each color channel and cause the output device processor to print a test image on the output device (e.g., a test image having a row of test patches for the colors red, blue, green, cyan, magenta, yellow, and gray).
  • The output device processor software may include instructions that cause the output device processor to receive measurements of the spectral reflectance of the test patches (e.g., from a spectrophotometer connected to the output device processor).
  • The output device processor software may include instructions that cause the output device processor to fit the output device to a model.
  • The model may include the following:
  • D_c(λ) is the basic spectral dye density for Cyan pigment;
  • D_m(λ) is the basic spectral dye density for Magenta pigment;
  • D_y(λ) is the basic spectral dye density for Yellow pigment;
  • f_c(i) is the Cyan pigment response as a function of stimulus;
  • f_m(i) is the Magenta pigment response as a function of stimulus;
  • f_y(i) is the Yellow pigment response as a function of stimulus;
  • K_cm, K_cy, K_mc, K_my, K_yc, and K_ym are the inter-channel cross-talk factors between cyan and magenta, between cyan and yellow, between magenta and cyan, between magenta and yellow, between yellow and cyan, and between yellow and magenta, respectively; and
  • c, m, and y are the input values of the primaries cyan, magenta, and yellow, respectively, for the particular measured test patch.
  • The output device processor software may include instructions that cause the output device processor to fit the output device to the model using a chi-square minimization process to compute D_c(λ), D_m(λ), D_y(λ), f_c(i), f_m(i), f_y(i), K_cm, K_cy, K_mc, K_my, K_yc, and K_ym from λ, R(λ), R_p(λ), c, m, and y, and to generate the mapping using the fitted model, for example, by creating a three-dimensional lookup table.
  • The image processor software may include instructions that cause the image processor to calibrate the captured image to standard colors used to calibrate the output device and generate the mapping. Also, the image processor software may include instructions that cause the image processor to perform at least a portion of the image processing in a device-dependent color space in which the captured image is encoded. Moreover, the image processor software may further include instructions that cause the image processor to receive the lookup table and convert the processed captured image to the output device's device-dependent color space using the lookup table to interpolate a CMY triplet for each pixel in the processed captured image. In addition, the image processor software may include instructions that cause the image processor to map pixels having a color that is outside of the gamut of the output device to the gamut of the output device.
  • The image processor software may also include instructions that cause the image processor to print the processed captured image on the output device.
  • The output device may be an output device located at a photofinisher or a local output device attached to a user's computer.
  • In the latter case, the image processor software may also include instructions that cause the image processor to cause a test image to be transmitted to the user's computer for printing a test print on the local output device, and the output device processor software may include instructions that cause the output device processor to generate the mapping using spectral reflectance measurements of the test print.
  • The image processor software may include instructions that cause the image processor to cause the processed captured image to be transmitted to the user's computer for printing on the local output device.
  • In another aspect, a method of processing a captured image for printing on an output device may include receiving a mapping from an intermediate, device-independent color space (e.g., a floating-point YUV color space) to the output device's color space (e.g., a 24-bit CMY color space), converting the captured image from the device-dependent color space in which the captured image is encoded (e.g., a 24-bit RGB color space) into the intermediate, device-independent color space, image processing the captured image in the intermediate, device-independent color space, and converting the processed captured image to the device-dependent color space of the output device using the mapping.
  • The image processing may include performing at least one of the following operations on the captured image in the device-independent color space: automatic red-eye reduction or manual red-eye reduction, filtering, digital sepia toning, and color biasing.
  • The method may also include generating the mapping from the intermediate, device-independent color space to the output device's color space.
  • Generating the mapping may include calibrating the output device (e.g., by calibrating the output device to approximate a gamma 2 response function in each color channel), printing a test image (e.g., a test image having a row of test patches for the colors red, blue, green, cyan, magenta, yellow, and gray), and measuring the spectral reflectance of the test patches.
  • Generating the mapping may also include fitting the output device to a model.
  • λ is the measured wavelength of the light reflected from a printed color;
  • R(λ) is the measured spectral reflectance of a printed color;
  • R_p(λ) is the measured spectral reflectance of unexposed, developed paper;
  • D_c(λ) is the basic spectral dye density for Cyan pigment;
  • D_m(λ) is the basic spectral dye density for Magenta pigment;
  • D_y(λ) is the basic spectral dye density for Yellow pigment;
  • f_c(i) is the Cyan pigment response as a function of stimulus;
  • f_m(i) is the Magenta pigment response as a function of stimulus;
  • f_y(i) is the Yellow pigment response as a function of stimulus;
  • The factors K_cm, K_cy, K_mc, K_my, K_yc, and K_ym are the inter-channel cross-talk factors between cyan and magenta, between cyan and yellow, between magenta and cyan, between magenta and yellow, between yellow and cyan, and between yellow and magenta, respectively;
  • c, m, and y are the input values of the primaries cyan, magenta, and yellow, respectively, for the particular measured test patch.
  • The output device may be fitted to the model using a chi-square minimization process to compute D_c(λ), D_m(λ), D_y(λ), f_c(i), f_m(i), f_y(i), K_cm, K_cy, K_mc, K_my, K_yc, and K_ym from λ, R(λ), R_p(λ), c, m, and y, and the mapping may be generated using the fitted model, for example, to create a three-dimensional lookup table.
  • The image processing of the captured image may include calibrating the captured image to standard colors used to calibrate the output device and generate the mapping. Also, at least a portion of the image processing may occur in a device-dependent color space in which the captured image is encoded.
  • The output device's device-dependent color space may be a CMY color space, and converting the processed captured image to the output device's device-dependent color space may include using a lookup table to interpolate a CMY triplet for each pixel in the processed captured image. Also, pixels having colors that are outside of the gamut of the output device may be mapped to the gamut of the output device.
  • The method may also include printing the processed captured image on the output device.
  • The output device may be an output device located at a photofinisher or may be a local output device attached to a user's computer.
  • In the latter case, generating the mapping may include transmitting a test image to the user's computer for printing on the local output device, printing the test image on the local output device to create a test print, sending the test print to a photofinisher, having the photofinisher generate the mapping and process the captured image, and transmitting the processed captured image from the photofinisher to the user's computer for printing on the local output device.
  • The systems and techniques described here can be used to automatically undo image processing performed by digital cameras and automatically perform high-quality image processing to compensate for, and/or correct, artifacts introduced into captured images by digital cameras so that high-quality image prints can be produced.
  • The image processing that is performed on the captured image can be adapted and/or optimized for the particular input device and output device that are used, e.g., in order to improve the quality of the resulting print and/or the efficiency with which the images are processed and/or printed.
  • For example, such processing can include a process in which color-shift is compensated for using pixels that are located in dark regions of an image.
  • Source information and characteristic data are used to generate correction parameters and criteria. If an image is qualified, then the pixels in the image that are qualified can be used to generate color-shift correction data. The color-shift correction data then can be used to compensate for color-shift in the image.
  • Processing also can include a process for performing color conversion and other image processing that is optimized based on output device information and characteristic data.
  • In such a process, a mapping from an intermediate, device-independent color space (e.g., a floating-point YUV color space) to the output device's color space (typically a 24-bit CMY color space) is received, and the captured image is converted from the device-dependent color space in which the captured image is initially encoded (typically, a 24-bit RGB color space) into the intermediate, device-independent color space.
  • Image processing can then take place in the intermediate, device-independent color space.
  • The image processing can include processing that more closely calibrates the captured image to the standard colors used to calibrate the printer and generate the mapping.
  • As a result, the image prints that are ultimately produced from the processed image will tend to contain more accurate color.
  • After the captured image has been image processed in the intermediate, device-independent color space, the captured image is converted to the device-dependent color space of the output device (e.g., a 24-bit CMY color space). Because this approach starts from a device-independent color space and converts directly to the printer's native CMY color space, the multiple gamut limitations that can result from moving through multiple RGB gamuts and transformations can be reduced.
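  • Putting the pieces together, a sketch of that conversion path (the BT.601-style RGB-to-YUV matrix is an assumed choice, since the exact YUV variant is not specified here, and lut_to_cmy is the hypothetical lookup sketched earlier):

        import numpy as np

        # BT.601-style RGB -> YUV matrix (an assumed choice of YUV variant).
        RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                            [-0.147, -0.289,  0.436],
                            [ 0.615, -0.515, -0.100]])

        def print_pipeline(rgb_u8, lut, lo, hi, ops=()):
            """24-bit device RGB -> floating-point YUV -> image-processing ops ->
            printer CMY via the 3-D lookup table, with no intermediate RGB gamuts."""
            rgb = rgb_u8.astype(np.float64) / 255.0
            yuv = rgb @ RGB2YUV.T                  # device-independent working space
            for op in ops:                         # e.g., sepia toning, filtering
                yuv = op(yuv)
            cmy = lut_to_cmy(yuv, lut, lo, hi)     # direct to the printer's color space
            return np.clip(np.round(cmy * 255.0), 0, 255).astype(np.uint8)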
  • FIG. 1 is a block diagram showing a typical computer architecture.
  • FIG. 2 is a block diagram of a system for producing high-quality image prints.
  • FIG. 3 is a flowchart of a process for automatically producing high-quality image prints from captured digital images.
  • FIGS. 4-5 are flowcharts of a process for compensating for color-shift in images captured in daylight conditions that can be used in the process of FIG. 3.
  • FIG. 6 is a flowchart of a process for performing color conversion and other image processing that is optimized based on the output device information that can be used in the process of FIG. 3.
  • One implementation of a system 10 for producing high-quality image prints is shown in FIG. 2.
  • The system 10 includes an input device 12 (e.g., a digital camera) for capturing, storing, and transferring digital images to a computer 14 in the general manner described above in connection with FIG. 1.
  • A user can send the captured images to a photofinisher 16 (also referred to as a "print lab") so that high-quality image prints can be produced from the captured digital images.
  • The captured images can be sent via a network 18 (e.g., a public network such as the Internet) to the photofinisher 16.
  • At the photofinisher 16, each captured image is processed by an image processor 20 and then printed on one or more output devices 22 (only one of which is shown in FIG. 2).
  • Although the image processor 20 is shown in FIG. 2 as being separate from the output device 22, it is to be understood that the image processor 20 and the output device 22 can be integrated into a single device, if so desired.
  • The image processor 20 and the one or more output devices 22 can be configured and programmed to implement a process 30, shown in FIG. 3, for automatically producing high-quality image prints from captured digital images.
  • In process 30, a captured image is received from a user (e.g., over the network 18 shown in FIG. 2) in step 32.
  • Information about the source of the captured image is also received in step 34.
  • The information about the source of the captured image can include the identity of the manufacturer and model of the input device used to capture the image and the settings of the input device when the image was captured (e.g., resolution, aperture, exposure, and/or flash information such as whether the flash fired).
  • Files compressed according to the JPEG compression standard typically supply some of this information in "JPEG markers" that are saved along with the image data.
  • Additional information about the captured image that is not typically included in JPEG markers can be transmitted to the photofinisher 16 along with the captured image (e.g., by having the user enter such information when the user transmits the captured image).
  • Such information can include a measurement of the color temperature of the illuminant used to illuminate the captured image, information as to why the camera was configured the way it was for taking the captured image (e.g., to convey what effect was desired to be achieved by the particular exposure or other settings), a measurement of focal length, and gamma correction information, including what gamma correction was applied to the captured image by the input device, if any, and why such gamma correction was applied to the captured image.
  • Other information about the source of the captured image can include information about the effective film speed of the digital camera, information about any compression algorithm used to compress the raw image data (e.g., whether an iterative compression algorithm was used and how many iterations were used), and information concerning the physical limitations of the sensor array, analog-to-digital converters, and/or optical components used in the input device (e.g., information about the dynamic range of the input device).
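  • Much of this per-shot metadata is what JPEG files carry in their EXIF/APP1 markers; as an illustration, it can be read with a library such as Pillow (a sketch that assumes the camera recorded these tags, not a mechanism described by the patent):

        from PIL import Image
        from PIL.ExifTags import TAGS

        def read_source_info(path):
            """Collect camera identity and capture settings from JPEG EXIF markers."""
            exif = Image.open(path).getexif()
            info = {TAGS.get(t, t): v for t, v in exif.items()}       # Make, Model, ...
            info.update({TAGS.get(t, t): v                            # capture settings
                         for t, v in exif.get_ifd(0x8769).items()})   # Exif sub-IFD
            keys = ("Make", "Model", "ExposureTime", "FNumber",
                    "Flash", "FocalLength", "ISOSpeedRatings")
            return {k: info.get(k) for k in keys}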
  • The output device information can include information about the identity of the output device (including which particular output device is to be used and its manufacturer and model), information about the printing capabilities of the output device (e.g., resolution, media size and type, throughput, dynamic range, gamma value, etc.), and information about the processing capabilities of the output device (e.g., image processing functions that are always and/or optionally performed by the printer, the color space used by the printer, etc.).
  • The output device information can include information that is specified by the manufacturer of the particular output device and/or information that is determined by testing the output device.
  • In addition, characteristic data is received in step 37.
  • The characteristic data can be developed by testing input devices and/or output devices (for example, as described below in connection with FIGS. 4-6).
  • The characteristic data can include assumptions and processing rules (and conditions for determining when the assumptions and processing rules apply) that relate, and are applied, to the captured image based on the received source information, output device information, and/or the image itself.
  • Characteristic data is especially helpful in implementations where limited or unreliable source information is available. For example, typical consumer-quality digital cameras only record limited information about the conditions under which the captured image was taken (e.g., most consumer-quality digital cameras do not record any measurements concerning lighting conditions). Moreover, information that is recorded and/or used for internal image processing by consumer-quality digital cameras may not be accurate and/or reliable enough for the generation of high-quality image prints.
  • The characteristic data also can include additional data about the particular output device on which the image is to be printed. For example, as described below, the characteristic data can include data for mapping the captured image to the color space of the particular output device on which the image is to be printed.
  • The captured image is processed using the source information, the output device information, and the characteristic data in step 38.
  • The source information and the characteristic data are used to identify, and to correct and/or compensate for, any undesirable artifacts that are introduced into the captured image due to the physical limitations of the sensors, A/D converters, and optics used in the input device and/or due to any image processing performed by the input device.
  • Compensation for any artifacts introduced into the captured image also can be optimized based on the particular output device to be used.
  • For example, any gamma correction applied to the captured image by the input device can be undone and the captured image can be re-gamma corrected based on a gamma value appropriate for the output device.
  • Also, the captured image can be interpolated to the resolution of the output device before certain enhancements (e.g., rotations, sizing, cropping, red-eye reduction, filtering, and/or assembling of image compositions from stored templates) are performed.
  • The particular output device 22 that is to be used to generate prints from the captured image typically performs various types of image processing; the output device information and/or the characteristic data can be used to determine which processing operations should be performed by the printer and which processing operations should be performed by the image processor 20.
  • The processed image data is used in step 40 to generate a high-quality image print on the output device.
  • The high-quality image prints can be produced on any suitable output device, including inkjet and dye sublimation printers.
  • FIGS. 4-5 are flowcharts of a process 50 for compensating for color-shift in images captured in daylight conditions.
  • the process 50 is one example of processing that can be performed in step 38 of process 30.
• the process 50 makes use of characteristic data that includes the following assumptions about images that are captured using commercially available, consumer-quality digital cameras. It is assumed that, if the digital camera performs some type of color-shift correction processing, any corrections that it makes are consistent from image to image (though they may not be entirely correct). In other words, it is assumed that any error produced, e.g., by the digital camera's calculation of the color temperature, will be linear (or at least monotonic).
• it is assumed that the non-saturated colors in the dark regions (i.e., the shadows) of the image are "truly gray." That is, it is assumed that the ratio of red to green (the R/G ratio) and the ratio of blue to green (the B/G ratio) for pixels in the non-saturated dark regions of the image should both have a value of "1" in the absence of any color-shift.
  • the R/G ratio and the B/G ratio of pixels in the non-saturated dark regions of the image are used to correct the color-shift.
  • the scene does not contain a preponderance of image information in the dark regions of the image that are color saturated. Although the process 50 may successfully complete if this assumption is incorrect, saturated colors in the dark regions of the captured image reduce the amount of data that can be used in determining the color shift.
• Process 50 operates by first checking if the image was taken in daylight conditions (step 52). Then, if the image was taken in daylight conditions, each pixel in the image is checked to determine if the pixel is in a dark region of the image (step 54). Each pixel that is in a dark region of the image is then checked in step 56 to determine if the difference between the red and green values for the pixel (the "red-green difference") is relatively large. If the red-green difference for the pixel is not large, then the R/G ratio for that pixel is added to an R/G ratio accumulator (step 58).
• each pixel that is in a dark region of the image is checked in step 60 to determine if the difference between the blue and green values for the pixel (the "blue-green difference") is relatively large. If the blue-green difference for the pixel is not large, the B/G ratio for that pixel is added to a B/G ratio accumulator (step 62). Steps 54-62 are repeated until each pixel in the image has been checked (step 64).
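A minimal sketch of this qualification loop (steps 52-66) follows. The exposure threshold, luminance threshold, and B/G values are the example figures given later in this description for the OLYMPUS MODEL C-2000™; the red-green discriminator and R/G range defaults are assumptions by analogy, and the Rec. 601 luma formula is one conventional stand-in for the luminance measure:

```python
import numpy as np

def accumulate_ratios(image, exposure_us,
                      exposure_threshold=5000,   # example value from this text
                      luminance_threshold=50,    # example value from this text
                      rg_discriminator=15,       # assumed by analogy with blue-green
                      bg_discriminator=15,       # example value from this text
                      rg_range=(0.9, 1.5),       # assumed by analogy with B/G range
                      bg_range=(0.9, 1.5)):      # example value from this text
    """Steps 52-66 of process 50 (a sketch): qualify the image by exposure,
    qualify pixels by luminance, and accumulate R/G and B/G ratios over
    near-gray pixels in the dark regions."""
    if exposure_us > exposure_threshold:          # step 52: not daylight, skip image
        return None, None

    rgb = image.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Rec. 601 luma used here as a stand-in for the luminance measure (assumption).
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    dark = luminance <= luminance_threshold       # step 54

    eps = 1e-6                                    # guard against division by zero
    rg = r / np.maximum(g, eps)
    bg = b / np.maximum(g, eps)

    # steps 56-58: small red-green difference and R/G ratio inside its range
    rg_ok = dark & (np.abs(r - g) <= rg_discriminator) \
                 & (rg >= rg_range[0]) & (rg <= rg_range[1])
    # steps 60-62: small blue-green difference and B/G ratio inside its range
    bg_ok = dark & (np.abs(b - g) <= bg_discriminator) \
                 & (bg >= bg_range[0]) & (bg <= bg_range[1])

    # step 66: normalize each accumulator by its own count of qualified pixels
    final_rg = rg[rg_ok].mean() if rg_ok.any() else 1.0
    final_bg = bg[bg_ok].mean() if bg_ok.any() else 1.0
    return final_rg, final_bg
```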
  • the tests performed in steps 52, 54, 56, and 60 make use of characteristic data that generally varies depending on the model of digital camera that is used. Thus, such characteristic data should be received for each model of digital camera or other input device with which process 50 is to be used.
• the characteristic data for a given camera model can be obtained by using the camera to capture several test images containing a MACBETH™ color chart or similar test target under a variety of lighting conditions, including both indoor (e.g., incandescent and fluorescent) and outdoor (e.g., for color temperatures in the range of about 3200° K to about 7200° K) lighting conditions.
  • An automatic extraction and analysis program is used to generate the characteristic data from the test images.
• the extraction and analysis program first searches for the 24 test squares of the MACBETH™ color chart using a conventional pattern matching algorithm. Then, data including the mean, standard deviation, maximum, and minimum for each channel (i.e., red, green, and blue) in each of the 24 squares of the MACBETH™ color chart is calculated along with the mean and standard deviation for each channel taken over the entire image. This data is used to compute parameters that are used in the process 50 to qualify the images and pixels so that acceptable color-shift compensation can be provided. This characteristic data also can be obtained for the particular camera that was used to capture the image (in addition to, or instead of, characteristic data for the particular model of digital camera used).
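The following sketch illustrates the per-patch statistics described above, assuming the patch regions have already been located by a separate pattern-matching step (patch_boxes is a hypothetical input; the pattern matching itself is not shown):

```python
import numpy as np

def patch_statistics(image, patch_boxes):
    """For each located chart patch, compute per-channel mean, standard
    deviation, maximum, and minimum, plus whole-image mean and standard
    deviation (a sketch; patch_boxes are (x0, y0, x1, y1) regions assumed
    to come from a separate pattern-matching step)."""
    stats = []
    for (x0, y0, x1, y1) in patch_boxes:
        patch = image[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
        stats.append({
            "mean": patch.mean(axis=0),
            "std": patch.std(axis=0),
            "max": patch.max(axis=0),
            "min": patch.min(axis=0),
        })
    whole = image.reshape(-1, 3).astype(np.float64)
    overall = {"mean": whole.mean(axis=0), "std": whole.std(axis=0)}
    return stats, overall
```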
  • a user can take pictures of a test target (e.g., a test target supplied to the user by the photofinisher) under various lighting conditions and send the resulting captured test images to the photofinisher along with the captured images from which prints are to be generated.
  • the photofinisher can then generate the characteristic data from the received test images and use that characteristic data (which is specific to the user's camera) to process and print the user's images.
• An image can be qualified (in step 52) by checking to see if the image was taken with an exposure setting that was less than or equal to a threshold exposure setting (the "exposure threshold").
• the exposure threshold corresponds to the upper limit of the range of exposure settings that are typically used for capturing images in daylight conditions. For example, it has been determined that, for an OLYMPUS MODEL C-2000™ digital camera, an exposure threshold of 5000 microseconds is acceptable for qualifying images for use with process 50. Then, if the image is qualified (i.e., was taken in daylight conditions), pixels that are in the dark regions of the image can be identified (in step 54) by checking if each pixel in the image has a luminance that is less than or equal to a threshold luminance value (the "luminance threshold").
  • Pixels that have a luminance that is greater than the luminance threshold are not qualified and are not used to generate the color-shift correction data.
• the luminance threshold is determined during characterization of a given digital camera by identifying the luminance corresponding to the upper limit of the range of detectable gray values in the test images and the lower limit of chroma contrast detection in the test images. For example, it has been determined that, for the OLYMPUS MODEL C-2000™ digital camera, a luminance threshold of 50 is acceptable for identifying pixels that are in dark regions of the image.
  • Each pixel that has a luminance less than or equal to the luminance threshold is then checked to determine if the pixel has a red-green difference that is large (in step 56).
• a red-green difference is considered to be "large" if it is greater than a specified value (referred to as a "red-green level discriminator").
  • the red-green level discriminator is determined during characterization of a given digital camera by measuring the average differences in the gray scale values between the test images and the target images. Specifically, the red-green level discriminator value is set by calculating the difference between the average red values and the average green values.
• if the red-green difference for a pixel is greater than the red-green level discriminator, then the pixel is not used to generate the color-shift correction data. If the red-green difference for a pixel is less than or equal to the red-green level discriminator, then the R/G ratio of the pixel is checked to determine if it is within a specified range (referred to as the "R/G ratio range").
• the R/G ratio range is calculated during characterization of the digital camera in a similar manner as the red-green level discriminator (i.e., by setting the upper and lower limits of the ratio ranges to coincide with those pixels that are close to gray) and is used to account for differences in a pixel's red and green values at very low levels of luminance (i.e., less than 15-20). If the R/G ratio of the pixel is within the R/G ratio range, then the R/G ratio of the pixel is added to the R/G ratio accumulator (in step 58). If the R/G ratio of the pixel is not within the R/G ratio range, then the pixel is not added to the R/G ratio accumulator.
  • each pixel that has a luminance less than or equal to the luminance threshold is also checked to determine if the blue-green difference for the pixel is large (in step 60).
• a blue-green difference is considered to be "large" if it is greater than a specified value (referred to as a "blue-green level discriminator").
  • the blue-green level discriminator is determined during characterization of a given digital camera by measuring the average differences in the gray scale values between the test images and the target images. Specifically, the blue-green level discriminator value is set by calculating the difference between the average blue values and the average green values.
• if the blue-green difference for a pixel is greater than the blue-green level discriminator, then the pixel is not used to generate the color-shift correction data. If the blue-green difference for a pixel is less than or equal to the blue-green level discriminator, then the B/G ratio of the pixel is checked to determine if it is within a specified range (referred to as the "B/G ratio range").
• the B/G ratio range is calculated during characterization of the digital camera in a similar manner as the blue-green level discriminator (i.e., by setting the upper and lower limits of the ratio ranges to coincide with those pixels that are close to gray) and is used to account for differences in a pixel's blue and green values at very low levels of luminance (i.e., less than 15-20). If the B/G ratio of the pixel is within the B/G ratio range, then the B/G ratio of the pixel is added to the B/G ratio accumulator (in step 62). If the B/G ratio of the pixel is not within the B/G ratio range, then the B/G ratio of the pixel is not added to the B/G ratio accumulator.
• for example, it has been determined that, for the OLYMPUS MODEL C-2000™ digital camera, a blue-green level discriminator of 15 and a B/G ratio range of 0.9 to 1.5 are acceptable for identifying those pixels that have a blue-green difference that is not large.
  • a pixel that is located in a dark region of the image can have a large red-green difference but not a large blue-green difference or have a large blue-green difference but not a large red-green difference.
  • a given pixel can have its R/G ratio added to the R/G ratio accumulator, but not have its B/G ratio added to the B/G ratio accumulator and vice versa.
  • the final values of the R/G ratio accumulator and the B/G ratio accumulator are normalized (i.e., the R/G ratio accumulator is divided by the number of pixels that had their R/G ratios added to the R/G ratio accumulator and the B/G ratio accumulator is divided by the number of pixels that had their B/G ratios added to the B/G ratio accumulator) in step 66, which is shown in FIG. 5.
• In step 70, a piecewise linear map is applied to the red channel of the image (i.e., is applied to the red channel of each pixel in the image).
• the target ranges for the piecewise linear mapping of the red channel are {0 : 50*(the final R/G ratio), 100*(the final R/G ratio) : 150*(the final R/G ratio), 150*(the final R/G ratio) : 200*(the final R/G ratio), 200*(the final R/G ratio) : 255*(the final R/G ratio)}.
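A sketch of one reading of this mapping follows; because the listed target ranges are ambiguous in this text, the sketch assumes that the breakpoints 0, 50, 100, 150, 200, and 255 are mapped to the same breakpoints scaled by the final ratio:

```python
import numpy as np

def piecewise_linear_map(channel: np.ndarray, final_ratio: float) -> np.ndarray:
    """Piecewise linear remapping of one 8-bit channel (step 70, a sketch).

    Assumed reading: source breakpoints 0, 50, 100, 150, 200, 255 map to the
    same breakpoints scaled by the final R/G (or B/G) ratio.
    """
    src = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 255.0])
    dst = src * final_ratio  # target breakpoints scaled by the final ratio
    mapped = np.interp(channel.astype(np.float64), src, dst)
    return np.clip(np.round(mapped), 0, 255).astype(np.uint8)
```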
• if the final R/G ratio is greater than 1, then in step 72 an exponential formula is used to reduce the red channel of the image (i.e., the exponential formula is applied to the red channel of each pixel in the image).
• the exponential formula (equation (1)) computes y, the color-shifted value of the red channel for the pixel, where a is the final R/G ratio plus an error bias.
• the error bias is empirically derived for each digital camera or other input device. For example, it has been determined that, for the OLYMPUS MODEL C-2000™ digital camera, an error bias of 0.1 is acceptable if the aperture for the image was greater than 5; otherwise, an error bias of 0.3 should be used. The resulting value of y is mapped into the range 0-255 and is gamma reduced.
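The text defines a and y for equation (1) without reproducing the formula itself. Given that the result is mapped into the range 0-255 and gamma reduced, one plausible form (an assumption, not the patent's confirmed equation) is a power-law reduction of the normalized channel value x:

$$
y \;=\; 255\left(\frac{x}{255}\right)^{a}, \qquad a \;=\; (\text{final R/G ratio}) + (\text{error bias}) \tag{1}
$$

With a > 1 (i.e., a red channel that runs hot relative to green in the dark regions), this lowers the red channel values, which is consistent with the stated goal of reducing the red channel.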
• In step 76, a piecewise linear map is applied to the blue channel of the image (i.e., is applied to the blue channel of each pixel in the image) by multiplying the blue channel by 1/(the final B/G ratio).
• the target ranges for the piecewise linear mapping of the blue channel are {0 : 50*(the final B/G ratio), 100*(the final B/G ratio) : 150*(the final B/G ratio), 150*(the final B/G ratio) : 200*(the final B/G ratio), 200*(the final B/G ratio) : 255*(the final B/G ratio)}. If the final B/G ratio is greater than 1, then in step 78 the exponential formula (equation (1)) described above is used to reduce the blue channel of the image (i.e., the exponential formula is applied to the blue channel of each pixel in the image) in the same manner as described above in connection with step 72.
• Although process 50 has been described in connection with color-shift compensation processing that is performed on images taken in daylight conditions, and although the description sets forth exemplary output device information and characteristic data for a particular model of digital camera, it is to be understood that the process 50 can be applied to perform color-shift compensation processing on images taken in other lighting conditions and/or with other types of digital cameras and input devices. Indeed, characteristic data for a wide variety of input devices and/or for a wide variety of lighting conditions and/or other operating conditions and settings can be gathered and/or generated and used to perform color-shift processing.
  • FIG. 6 is a flowchart of a process 80 for performing color conversion and other image processing that is optimized based on output device information and characteristic data.
  • the process 80 is one example of processing that can be performed in step 38 of process 30 (FIG. 3).
• the process 80 of FIG. 6 includes receiving a mapping from an intermediate, device-independent color space (e.g., a floating-point YUV color space) to the output device's color space (typically a 24-bit CMY color space) in step 82 and converting the captured image from the device-dependent color space in which the captured image is encoded (typically, a 24-bit RGB color space) into the intermediate, device-independent color space in step 84 (e.g., using conventional techniques for converting from RGB to YUV).
  • image processing can take place in the intermediate, device-independent color space (step 86).
  • automatic or manual red-eye reduction, filtering (e.g., using a sharpening filter), digital sepia toning, color biasing and/or other image processing can be performed in the floating-point, device-independent color space.
  • the captured image is converted to the device-dependent color space of the output device (e.g., a 24-bit CMY color space) in step 88.
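The overall flow of process 80 can be sketched as follows. The Rec. 601 matrix is one conventional RGB-to-YUV choice (the text says only "conventional techniques"), and yuv_to_cmy_lut is a hypothetical stand-in for the printer mapping described below:

```python
import numpy as np

# One conventional RGB -> YUV matrix (Rec. 601); the text only says
# "conventional techniques for converting from RGB to YUV".
RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])

def process_image(rgb_image: np.ndarray, yuv_to_cmy_lut) -> np.ndarray:
    """Sketch of process 80: 24-bit RGB -> floating-point YUV (step 84),
    image processing in YUV (step 86, a placeholder here), then conversion
    to the printer's CMY space (step 88) via a supplied mapping function."""
    rgb = rgb_image.astype(np.float64) / 255.0
    yuv = rgb @ RGB_TO_YUV.T                      # step 84
    yuv = enhance(yuv)                            # step 86
    return yuv_to_cmy_lut(yuv)                    # step 88

def enhance(yuv: np.ndarray) -> np.ndarray:
    # Placeholder for red-eye reduction, sharpening, sepia toning, etc.
    return yuv
```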
• process 80 can be implemented for use with a KONICA MODEL QD-21™ digital printer.
• the MODEL QD-21 digital printer is calibrated and a mapping from a floating-point YUV color space (i.e., the intermediate, device-independent color space) to the CMY color space of the printer is created.
  • the hardware device control (HDC) look-up tables (LUT) for both printing lanes of the MODEL QD-21 digital printer are set so as to approximate a gamma 2 response function in each color channel (i.e., Cyan, Magenta, and Yellow).
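As a minimal sketch, assuming "gamma 2 response" means output level proportional to the square of the normalized input (an assumption; the actual HDC LUT format is printer-specific), such a per-channel table could be generated as:

```python
# 8-bit lookup table approximating a gamma-2 response in one color channel.
gamma2_lut = [round(255 * (i / 255.0) ** 2) for i in range(256)]
```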
• a test image is created that includes a row of test patches for each of the colors Red, Blue, Green, Cyan, Magenta, Yellow, and Gray.
  • Each of the seven rows of color includes several patches (e.g., 33 patches) in which various shades of that row's color are printed, fading to white in each case.
• a test print is then printed from the test image on each of the two printing lanes of the KONICA MODEL QD-21™ printer.
• the spectral reflectance of the test patches printed on each of the test prints is then measured using an autoscan spectrophotometer (e.g., an X-RITE MODEL DTP-41™ spectrophotometer).
• λ is the measured wavelength of the light reflected from a printed color
• R(λ) is the spectral reflectance of a printed color
• R_p(λ) is the spectral reflectance of unexposed, developed paper
• the logarithm noted in equation (2) is a base-10 logarithm
• D_c(λ), D_m(λ), and D_y(λ) are the basic spectral dye densities for the Cyan, Magenta, and Yellow pigments, respectively
• the functions f_c(i), f_m(i), and f_y(i) are the Cyan, Magenta, and Yellow pigment responses, respectively, as functions of the stimulus i
• the factors K_cm, K_cy, K_mc, K_my, K_yc, and K_ym are the inter-channel cross-talk factors between Cyan and Magenta, between Cyan and Yellow, between Magenta and Cyan, between Magenta and Yellow, between Yellow and Cyan, and between Yellow and Magenta, respectively
• the inputs c, m, and y are the input values of the primaries Cyan, Magenta, and Yellow, respectively, for the test patch
• the Chi-square analysis computes, from λ, the patch measurements R(λ) and R_p(λ), and the input values c, m, and y, the actual dye density curves D_c(λ), D_m(λ), and D_y(λ), the actual pigment response functions f_c(i), f_m(i), and f_y(i), and the actual cross-talk factors K_cm, K_cy, K_mc, K_my, K_yc, and K_ym for each lane of the printer.
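The definitions above specify the quantities in equation (2) without reproducing the equation. A density model of the following general shape is consistent with those definitions (the exact placement of the cross-talk terms is an assumption, not the patent's confirmed form):

$$
-\log_{10}\frac{R(\lambda)}{R_p(\lambda)} \;=\; D_c(\lambda)\,f_c\!\left(c + K_{mc}\,m + K_{yc}\,y\right) \;+\; D_m(\lambda)\,f_m\!\left(m + K_{cm}\,c + K_{ym}\,y\right) \;+\; D_y(\lambda)\,f_y\!\left(y + K_{cy}\,c + K_{my}\,m\right) \tag{2}
$$

Here the left side is the measured spectral density of a patch relative to unexposed paper, and the Chi-square fit adjusts the dye density curves, pigment response functions, and cross-talk factors so that the model matches the measured patches.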
• the fitted model equation (2) is then used to compute an inverse mapping from floating-point YUV values to 24-bit CMY triplets that are used as the inputs to the KONICA MODEL QD-21™ printer.
• the mapping is typically implemented as a large (approximately 50×50×50) three-dimensional lookup table that maps floating-point YUV values to 24-bit CMY triplets. This three-dimensional lookup table is stored in the image processor 20.
  • printer calibration and mapping generation steps are performed twice each day (or more) so that the image processor 20 has accurate mapping data.
  • any image processing that is to be performed in the device-dependent color space in which the captured image was initially encoded can be performed (e.g., the adaptive color-shift correction process 50 can be performed in the 24-bit RGB color space).
• the captured image is converted to a floating-point, device-independent YUV color space and additional image processing is performed in the floating-point YUV color space.
• one result of the image processing (e.g., due to the use of process 50 of FIGS. 4-5) is that the captured image is more closely calibrated to the standard colors used to calibrate the printer and generate the mapping.
• Because the processed image is more closely calibrated to the standard colors used to calibrate the printer and generate the mapping, the image prints ultimately produced by the printer using the processed image will tend to contain more accurate color.
• the processed image is converted to the CMY primaries of the KONICA MODEL QD-21™ printer using the lookup table for the printer lane on which the image is to be printed.
  • a 24-bit CMY triplet for each pixel in the processed image is interpolated from the values in the lookup table.
  • a conventional simple, fast linear interpolation can be used.
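A sketch of such an interpolated lookup follows, assuming a regular (N, N, N, 3) table over a YUV bounding box (grid_min and grid_max are hypothetical parameters); this is plain trilinear interpolation, one "simple, fast linear interpolation" consistent with the text:

```python
import numpy as np

def lut_lookup(yuv, lut, grid_min, grid_max):
    """Trilinear interpolation into a 3-D lookup table (a sketch).

    yuv      -- (3,) floating-point YUV value
    lut      -- (N, N, N, 3) table of 24-bit CMY triplets
    grid_min -- (3,) lower corner of the YUV grid
    grid_max -- (3,) upper corner of the YUV grid
    """
    n = lut.shape[0]
    t = (np.asarray(yuv, dtype=np.float64) - np.asarray(grid_min)) \
        / (np.asarray(grid_max) - np.asarray(grid_min))
    pos = np.clip(t * (n - 1), 0.0, n - 1 - 1e-9)  # grid coordinates, clamped
    i0 = pos.astype(int)                           # lower lattice corner
    i1 = i0 + 1                                    # upper lattice corner
    f = pos - i0                                   # fractional offsets

    out = np.zeros(3)
    for dy in (0, 1):                              # blend the 8 surrounding points
        for du in (0, 1):
            for dv in (0, 1):
                idx = (i1[0] if dy else i0[0],
                       i1[1] if du else i0[1],
                       i1[2] if dv else i0[2])
                weight = ((f[0] if dy else 1.0 - f[0]) *
                          (f[1] if du else 1.0 - f[1]) *
                          (f[2] if dv else 1.0 - f[2]))
                out += weight * lut[idx]
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```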
• pixels having a color that is outside of the printer's gamut are mapped to the printer's gamut at this point in a conventional manner. Because this approach starts from a floating-point, device-independent YUV color space and converts directly to the printer's native CMY color space, the multiple gamut limitations that can result from moving through multiple RGB gamuts and transformations can be reduced. Then, after the image has been converted to the color space of the KONICA MODEL QD-21™ printer, the image is printed.
  • the process 80 can also be implemented so that a photofinisher can perform image processing operations using output device information and/or characteristic data about a user's local printer.
• the user can print one or more test images (e.g., test images of the type described above in connection with the KONICA MODEL QD-21™ printer) that are sent to the user by the photofinisher and send the resulting test prints to the photofinisher.
• the photofinisher can then generate characteristic data for the user's local printer (e.g., using the steps described above in connection with the KONICA MODEL QD-21™ printer), which can be stored for later retrieval.
  • the photofinisher can image process the images using the output device information and/or characteristic data specific to the user's local printer and send the processed images back to the user (e.g., over the Internet as an email attachment) for printing on the user's local printer (in addition to, or instead of, processing and printing the captured images using the photofinisher's printers).
• Although process 80 has been described in connection with an exemplary implementation of the process 80 for a KONICA MODEL QD-21™ printer, it is to be understood that the process 80 can be used with other output devices, color spaces, and calibration models.
• Although system 10 and process 30 have been described in connection with particular implementations and exemplary processing operations, it is to be understood that system 10 and process 30 can be implemented in other ways and/or to perform additional and/or alternate processing operations.
• the techniques, methods, and systems described here may find applicability in any computing or processing environment in which users desire to produce high-quality physical manifestations of captured images. For example, these techniques could be applied to the generation of other physical manifestations of an image (e.g., greeting cards and post cards).
  • a system or other apparatus that uses one or more of the techniques and methods described here may be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate on input and/or generate output in a specific and predefined manner.
  • a computer system may include one or more programmable processors that receive data and instructions from, and transmit data and instructions to, a data storage system, and suitable input and output devices.
• Each computer program may be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language may be a compiled or interpreted language.
• Suitable processors include, by way of example, both general and special purpose microprocessors.
  • a processor will receive instructions and data from a read-only memory and/or a random access memory.
• Storage devices suitable for tangibly embodying computer instructions and data include all forms of non-volatile memory, including semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks.

Abstract

A system and method for processing images uses information about the source of the images (e.g., a capture device such as a digital camera and/or a computer program used to create or modify the image) and information about an output device on which the image is to be printed. The image can also be processed using characteristic data relating to the image source, to the output device, or to the image itself. For example, color shifts in images taken by a digital camera can be compensated for using characteristic data about the digital camera. In addition, the source information, the output device information, and/or the characteristic data can be used to establish a mapping from an intermediate, device-independent color space (e.g., a floating-point YUV color space) to the color space of the output device (e.g., a 24-bit CMY color space). An image can be converted from the device-dependent color space in which it is encoded (e.g., a 24-bit RGB color space) into the intermediate, device-independent color space, and image processing can be performed in the intermediate, device-independent color space. The processed image can then be converted from the intermediate, device-independent color space into the color space of the output device. The image can also be processed so that it is more closely calibrated to the standard colors used to calibrate the output device and to generate the mapping.
PCT/US2000/024079 1999-08-31 2000-08-31 Reglage automatique des couleurs et etalonnage d'une imprimante WO2001060051A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU73424/00A AU7342400A (en) 1999-08-31 2000-09-01 Automatic color adjustment and printer calibration

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US15153399P 1999-08-31 1999-08-31
US60/151,533 1999-08-31
US15937299P 1999-10-14 1999-10-14
US60/159,372 1999-10-14
US43670499A 1999-11-09 1999-11-09
US09/436,704 1999-11-09
US16724399P 1999-11-24 1999-11-24
US60/167,243 1999-11-24
US45034799A 1999-11-29 1999-11-29
US09/450,347 1999-11-29

Publications (1)

Publication Number Publication Date
WO2001060051A1 true WO2001060051A1 (fr) 2001-08-16

Family

ID=27538398

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/024079 WO2001060051A1 (fr) 1999-08-31 2000-08-31 Reglage automatique des couleurs et etalonnage d'une imprimante

Country Status (2)

Country Link
AU (1) AU7342400A (fr)
WO (1) WO2001060051A1 (fr)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0378448A2 (fr) * 1989-01-13 1990-07-18 Electronics for Imaging, Inc. Traitement d'image en couleur
EP0501942A1 (fr) * 1991-03-01 1992-09-02 Barco Graphics N.V. Procédé et dispositif de conversion d'un ensemble de coordonnées chromatiques
EP0565283A1 (fr) * 1992-03-29 1993-10-13 Scitex Corporation Ltd. Méthode et appareil pour le contrôle de la reproduction des couleurs et des tonalités chromatiques
US5377025A (en) * 1992-11-24 1994-12-27 Eastman Kodak Company Optimal color quantization for addressing multi-dimensional color calibration look-up-table
US5771311A (en) * 1995-05-17 1998-06-23 Toyo Ink Manufacturing Co., Ltd. Method and apparatus for correction of color shifts due to illuminant changes
EP0781036A1 (fr) * 1995-12-19 1997-06-25 Hewlett-Packard Company Correction de couleur prenant en considération l'illumination de la scène
US5809213A (en) * 1996-02-23 1998-09-15 Seiko Epson Corporation Automatic color calibration of a color reproduction system
EP0961487A2 (fr) * 1998-05-26 1999-12-01 Canon Kabushiki Kaisha Procédé et appareil de traitement d'image et support d'enregistrement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MACDONALD L W: "DEVELOPMENTS IN COLOUR MANAGEMENT SYSTEMS", DISPLAYS,GB,ELSEVIER SCIENCE PUBLISHERS BV., BARKING, vol. 16, no. 4, 1 May 1996 (1996-05-01), pages 203 - 211, XP000607335, ISSN: 0141-9382 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1293954A1 (fr) * 2001-09-12 2003-03-19 Jeremy Graham Scott Dispositif d'affichage et procédé de réalisation
WO2003023742A1 (fr) * 2001-09-12 2003-03-20 Jeremy Graham Scott Dispositif d'affichage et procede de fabrication correspondant
EP1457921A4 (fr) * 2001-11-16 2007-06-06 Sharp Kk Support d'enregistrement, systeme d'enregistrement/reproduction de contenus, appareil d'enregistrement de contenu et dispositif de recodage de contenu
US7594041B2 (en) 2001-11-16 2009-09-22 Sharp Kabushiki Kaisha Recording medium, content recording/reproducing system, content reproducing apparatus, content recording apparatus, and content recoding apparatus
US7750919B2 (en) 2006-02-24 2010-07-06 Samsung Electronics Co., Ltd. Apparatus and method for enhancing device-adaptive color
US8117134B2 (en) 2008-10-16 2012-02-14 Xerox Corporation Neutral pixel correction for proper marked color printing
US10600139B2 (en) 2011-04-29 2020-03-24 American Greetings Corporation Systems, methods and apparatus for creating, editing, distributing and viewing electronic greeting cards
US8937749B2 (en) 2012-03-09 2015-01-20 Xerox Corporation Integrated color detection and color pixel counting for billing

Also Published As

Publication number Publication date
AU7342400A (en) 2001-08-20

Similar Documents

Publication Publication Date Title
EP1139653B1 (fr) Reproduction d'image en couleurs avec un tableau de conversion des couleurs préférentiel
US6594388B1 (en) Color image reproduction of scenes with preferential color mapping and scene-dependent tone scaling
US6249315B1 (en) Strategy for pictorial digital image processing
US6243133B1 (en) Method for automatic scene balance of digital images
US7715050B2 (en) Tonescales for geographically localized digital rendition of people
JP4194133B2 (ja) 画像処理方法及び装置及び記憶媒体
EP1014687A2 (fr) Correction de l'exposition et de l'échelle de luminance d'images digitalement enregistrées par un appareil de capture d'image
US20070216776A1 (en) Color image reproduction
KR20120118383A (ko) 이미지 보정 장치 및 이를 이용하는 이미지 처리 장치와 그 방법들
JP2005210526A (ja) 画像処理装置、撮像装置、画像処理方法、画像データ出力方法、画像処理プログラム及び画像データ出力プログラム
US8427722B2 (en) Color transform insensitive to process variability
EP1467555B1 (fr) Production d'une image numérique en couleurs équilibrées avec peu d'erreurs de couleurs
JP4197276B2 (ja) 画像処理装置、画像読取装置、画像形成装置、および画像処理方法
JP2005210495A (ja) 画像処理装置、画像処理方法及び画像処理プログラム
JP2005354372A (ja) 画像記録装置、画像記録方法、画像処理装置、画像処理方法、及び画像処理システム
WO2001060051A1 (fr) Reglage automatique des couleurs et etalonnage d'une imprimante
US7369273B2 (en) Grayscale mistracking correction for color-positive transparency film elements
JP4402041B2 (ja) 画像処理方法及び装置及び記憶媒体
Holm Capture color analysis gamuts
Holm A strategy for pictorial digital image processing (PDIP)
JP4034027B2 (ja) 機種色特性プロファイル作成方法
JP4909308B2 (ja) 画像処理方法及び装置
Triantaphillidou Digital colour reproduction
JP4818459B2 (ja) 画像処理方法及び装置
HOSHINO et al. for Color Hard Copy Images

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP