WO2000033207A1 - Digital image improvement through genetic image evolution - Google Patents

Digital image improvement through genetic image evolution

Info

Publication number
WO2000033207A1
Authority
WO
WIPO (PCT)
Prior art keywords
genotype
image
gene
dominant
leader
Prior art date
Application number
PCT/US1999/028676
Other languages
French (fr)
Inventor
Lucien G. Frisch
Original Assignee
Qbeo, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qbeo, Inc. filed Critical Qbeo, Inc.
Priority to AU23521/00A priority Critical patent/AU2352100A/en
Priority to EP99967186A priority patent/EP1147471A1/en
Publication of WO2000033207A1 publication Critical patent/WO2000033207A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/12 - Computing arrangements based on biological models using genetic models
    • G06N3/126 - Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Definitions

  • the present invention is in the field of computer applications and, more particularly, computer applications that improve the quality of digital images.
  • This digital image editing tool should be simple to operate, powerful in its ability to alter the digital image in beneficial ways, and efficient.
  • the present invention is directed to providing such a digital imaging tool.
  • each image has an associated genotype, with the associated genotype having a plurality of genes.
  • Each gene defines an attribute of the associated image such as brightness, color saturation, and contrast.
  • the genotype may also contain genes that define methods for enhancing the digital image. For example, the gene may define a method for making a global color shift in the digital image or applying special effects to the digital image.
  • the genotype may be a predefined template genotype or may be a genotype that is evolved through a selection process described below.
  • the selection process includes receiving a fitness rating for each of a plurality of digital images within a current generation of images.
  • Each current generation of images generally includes a leader image and a plurality of child images.
  • a copy of an original digital image is made the leader image of a first generation of images and the leader image is assigned a neutral leader genotype.
  • a plurality of child genotypes are applied to the leader image with the application of each child genotype yielding a child image.
  • the child images are sequentially compared to the leader image and a fitness rating is assigned to each child image.
  • An improved image may be chosen at any time from any of the images in the current generation of images or the evolution can continue by making the current generation of images a previous generation of images and evolving a next generation of images to become the current generation of images.
  • the method of evolution creates a next generation of images by evolving a plurality of next generation child images from a next generation leader image.
  • the next generation child images are created by applying a plurality of next generation child genotypes that are evolved from the leader and child genotypes of the previous generation of images.
  • the leader genotype from the previous generation of images is recombined with a corresponding child genotype from the previous generation of images.
  • the next generation child genotype is then mutated.
  • a child image from the previous generation that received the highest fitness rating is made the next generation leader image.
  • Each of the next generation child genotypes is applied to the next generation leader image to produce a next generation of child images. Since the next generation leader image and the next generation child images become the current generation of images for the selection process, the method of evolution in accordance with the present invention may continue through unlimited generations.
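  • The generational loop just described can be summarized in a short sketch. The sketch below is illustrative only: the genotype representation (a list of float parameters), the single brightness gene, the 0-to-1 fitness scale, and all function names are assumptions made for this example rather than details taken from the patent.

```python
import random

def recombine(leader, child, fitness):
    # Weighted average of corresponding parameters; the child parameter is
    # weighted by the fitness rating (here scaled 0..1) it received.
    return [l * (1 - fitness) + c * fitness for l, c in zip(leader, child)]

def mutate(genotype, rate=0.1):
    # Gaussian mutation: small changes are probable, large changes are rare.
    return [p + random.gauss(0, rate) for p in genotype]

def apply_genotype(image, genotype):
    # Stand-in "image": a flat list of 8-bit values; gene 0 acts as a brightness shift.
    return [min(255, max(0, round(px + genotype[0]))) for px in image]

def evolve(image, rate_child, generations=3, num_children=4):
    leader_genotype = [0.0]                                  # neutral genotype
    leader_image = list(image)                               # first-generation leader
    children = [mutate(leader_genotype, rate=20) for _ in range(num_children)]
    for _ in range(generations):
        rated = [(g, apply_genotype(leader_image, g)) for g in children]
        ratings = [rate_child(leader_image, img) for _, img in rated]
        best = max(range(num_children), key=ratings.__getitem__)
        # The highest-rated child's genotype is recombined into the new leader
        # genotype, and its image becomes the next-generation leader image.
        leader_genotype = recombine(leader_genotype, rated[best][0], ratings[best])
        leader_image = rated[best][1]
        children = [mutate(recombine(leader_genotype, g, r), rate=10)
                    for (g, _), r in zip(rated, ratings)]
    return leader_image, leader_genotype

# Example use: a fitness callback that simply prefers brighter child images.
brighter = lambda leader, child: sum(child) / (255 * len(child))
final_image, final_genotype = evolve([100, 120, 140], brighter)
```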
  • a genotype includes a plurality of genes. Each gene includes one or more parameters.
  • the genotypes, genes and parameters are evolved through methods for recombination and mutation.
  • the boundary gradation curves (and thus the interpolated gradation curves) may be linear or non-linear and may be predefined or generated specifically for the digital image being processed.
  • the gradation curve defines the replacement values for current values in the digital image when the gene is applied to the digital image.
  • any genotype may be stored independent of a digital image file and later applied to a digital image as a template genotype.
  • the stored genotype comprises a relatively small data file that may be easily transmitted between computers across a communication network.
  • Other template genotypes may be evolved by third parties or shipped with a product incorporating the present invention.
  • a printer calibration genotype may be provided as a template genotype or evolved according to a method and system of the present invention. The printer calibration genotype adjusts the digital image for the particular printing characteristics of the printer being calibrated. Before the digital image is sent to the printer, the printer calibration genotype is applied to the digital image.
  • FIGURE 1 is a block diagram of a general purpose computer system suitable for implementing the present invention
  • FIGURE 2A is a block diagram illustrating a common storage method for storing a digital image file
  • FIGURE 2B is an illustration of a color space model used by an actual embodiment of the present invention.
  • FIGURES 3A, 3B and 4 are pictorial representations of a user interface provided by an actual embodiment of the present invention
  • FIGURE 5 is a functional flow diagram of a main processing loop, in accordance with the present invention.
  • FIGURE 6 is a functional flow diagram illustrating an evolution of a digital image, in accordance with the present invention.
  • FIGURE 7 is a functional flow diagram illustrating a process of creating a set of child images from a leader image using a method of genetic evolution, in accordance with the invention
  • FIGURE 8 is a functional flow diagram illustrating a process for recombining genotypes, in accordance with the present invention.
  • FIGURE 9 is a functional flow diagram illustrating a process of recombining a leader genotype with a fitness-rated child genotype, in accordance with the present invention.
  • FIGURE 10A is a functional flow diagram illustrating the process of mutating a genotype, in accordance with the present invention.
  • FIGURE 10B is a mutation rate table containing values for mutation rates used in the process illustrated in FIGURE 10A;
  • FIGURE 11 is a functional flow diagrams illustrating the process of applying a genotype to a digital image, in accordance with the present invention.
  • FIGURES 12A-12C are a functional flow diagram illustrating the process for creating RGB gradation curves for gene 1 (global color shift) of the present invention
  • FIGURES 12D-12E are graphs of boundary gradation curves used by the process illustrated in FIGURES 12A-12C, in accordance with the present invention
  • FIGURE 12F is an illustration of interpolating an interpolated gradation curve between a boundary gradation curve and a neutral gradation curve
  • FIGURE 13A is a functional flow diagram illustrating a process of providing the RGB gradation curves for gene 2 (color temperature) in accordance with the present invention
  • FIGURES 13B-13D are graphs of boundary gradation curves used in the method illustrated in FIGURE 13A;
  • FIGURE 14 is a functional flow diagram illustrating a method for specifying the RGB gradation curves for gene 3 (effects), in accordance with the present invention
  • FIGURE 15A is a functional flow diagram illustrating the method for determining a blackpoint and whitepoint gradation curve for gene 8 (blackpoint/whitepoint), in accordance with the present invention
  • FIGURE 15B is a graph of an example blackpoint and whitepoint gradation curve
  • FIGURE 16A is a functional flow diagram illustrating process of providing the RGB gradation curves for gene 4 (brightness), in accordance with the present invention
  • FIGURES 16B-16C are graphs of boundary gradation curves referenced by the processes of FIGURES 16A and 18;
  • FIGURE 17 is a functional flow diagram illustrating the process of providing a contrast gradation curve for gene 5 (contrast), in accordance with the present invention
  • FIGURE 18 is a functional flow diagram illustrating the process of adjusting a pixel's color saturation value using gene 6 (color saturation), in accordance with the present invention
  • FIGURES 19A-19B are a functional flow diagram illustrating the method for adjusting the black level in primary colors using gene 7a (black level), in accordance with the present invention
  • FIGURES 19C-19D illustrate curves referenced by the method of FIGURES 19A-19B;
  • FIGURES 20A-20C are a functional flow diagram illustrating the method for selective color correction of a pixel using gene 7b (selective color correction), in accordance with the present invention.
  • FIGURE 21A is a functional flow diagram illustrating a process for preparing a contrast gradation curve for gene 9 (contrast gradation curve), in accordance with the present invention
  • FIGURES 21B and 21C are pictorial representations of a histogram and a gradation curve, respectively, calculated by the process illustrated in FIGURE 21A, in accordance with the present invention
  • FIGURE 22 is a functional flow diagram of a process for removing optical distortion in digital images (dewarp), in accordance with the present invention.
  • FIGURES 23A-23C are a functional flow diagram illustrating a process for calibrating a printer using a genetic algorithm, in accordance with the present invention.
  • FIGURE 1 illustrates a computer system 100 of conventional design that is suitable for practicing the present invention.
  • the computer system includes a central processing unit (CPU) 110 that executes program instructions and manipulates digital data.
  • the central processing unit 110 is coupled to a volatile memory 112 and a nonvolatile memory 114 that store the program instructions and data.
  • the central processing unit 110 is coupled to a video monitor 116, which provides a human recognizable medium for the display of digital images, text and other data to a user.
  • a keyboard 119 and a pointer device 117 (a "mouse") are coupled to the CPU 110 for providing input to the computer 100.
  • a printer 118 may be coupled to the CPU 110 to provide printouts of images, text and other data.
  • the CPU 110 may also receive input from devices such as a digital camera 120 or optical scanner 122. Both the digital camera 120 and the optical scanner 122 record photographic images as computer-readable digital image files.
  • a plurality of computers 100 may be connected to a communication network 124 such as a local area network (LAN), a wide area network (WAN), or global network such as the Internet.
  • a remote computer 126 is generally similar in configuration to that described for computer 100.
  • a digital image file 210 (FIGURE 2A), such as that created by the digital camera 120 or the optical scanner 122 (FIGURE 1), comprises data representing a plurality of pixels 212. When rendered on the video monitor 116 or the printer 118, the plurality of pixels visually form an image.
  • the digital image file 210 is assumed to be 480 x 480 pixels in size. Of course, other digital image file sizes, formats and resolutions can be used when practicing the present invention.
  • An exemplary pixel 212 is shown in a partial view in FIGURE 2A.
  • the pixel 212 is defined by a red (R) data value 214, a green (G) data value 216, and a blue (B) data value 218 (collectively the "RGB values").
  • a red (R) data value 214, a green (G) data value 216, and a blue (B) data value 218 define a color for the pixel 212 from a set of colors available in a color space.
  • a color space is a mathematical representation of a set of colors. In other words, by specifying an R value 214, a G value 216, and a B value 218, a color in the color space can be replicated for each pixel 212 on the video monitor 116, a printer 118, or other device.
  • a RGB color space is illustrated in FIGURE 2B as a hexagon 220, with each of the additive and subtractive primary colors indicated at a vertex.
  • the primary colors red 222, green 224, and yellow 226 are known as the additive primary colors, and the primary colors magenta 228, cyan 230, and blue 232 are known as the subtractive primary colors.
  • any color within the color space can be specified.
  • the number of colors that can be specified in the color space depends on the resolution of each RGB value 214, 216, and 218.
  • the resolution is defined by the number of bits of data that are used to capture and store each of the RGB values; the more bits, the more colors that can be described. In the following discussion, it will be assumed that the RGB values have a resolution of eight bits, or 256 possible values. Each pixel, therefore, can specify more than 16 million different colors in the RGB color space defined by hexagon 220. However, the resolution and size (number of pixels) of the digital image are not important to the processes of the present invention.
  • the digital image file 210 may have a resolution that is higher or lower depending on the quality requirements for the digital image, and practical matters such as storage requirements for the digital image file 210.
  • Other color space models may be used without departing from the scope of the present invention. Examples of other color space encoding include YIQ, YUV, YCbCr, HSI, and HSV. All of these color spaces can be translated to the RGB color space 220 described herein using well-known conversion methods.
  • the digital image file 210 may be stored in many different formats that are well known in the art.
  • the RGB values 214, 216, 218 can be individually stored for each pixel in the digital image file 210 in a bitmap, or these RGB values may be translated, compressed and stored in a variety of other formats such as TIFF, JPEG, or MPEG, among other known digital image file 210 formats. All file formats may be processed by the present invention.
  • A user interface 300 provided by an actual embodiment of the present invention is illustrated in FIGURES 3A-3B and 4.
  • the user interface 300 includes a menu bar 310 that contains menus named File 312, Edit 314, View 316, Window 318, Special 320, and Help 322.
  • the File menu 312 includes a drop-down menu that includes items for saving and retrieving the digital image file 210 to and from the non-volatile memory 114, including options to Open, Open Special, Close, Save, Save As, and Save Genotype As.
  • the Open Special option will be discussed further with reference to FIGURE 3B.
  • a genotype may be saved in a separate data file using the Save Genotype As option in this menu (genotypes are discussed below beginning with the discussion of image evolution in FIGURE 5).
  • the digital image file 210 may also be loaded into the application controlling the user interface 300 using file menu options to Acquire the image from a device such as scanner 122 (FIGURE 1) or from another device using a Select Source menu option.
  • the image may be printed on the printer 118 using File menu options for Print, Print Preview (on the display), Printer Setup, and Printer Calibration. Printer calibration is described below with reference to FIGURES 23A-23C.
  • the user may also select an option for Preferences or may select an Exit option from the drop-down menu to end the application.
  • the Edit menu 314 includes options to Undo the Last Action, Perform a Dewarp on the Image, to Start Evolution, and to Rotate Left, Rotate Right, Resize Image, Crop Image, and to Remove Red Eyes.
  • the Perform a Dewarp option is discussed below with reference to FIGURE 22 and the Start Evolution option is discussed below beginning with FIGURE 6.
  • the View menu 316 allows for the selection of user interface elements that the user wishes to appear in user interface 300. These user interface elements include a Toolbar menu item (toolbar 324), a Status Bar menu item (status bar 326), a Navigator window menu item (navigator 328), and Genotypes window menu item (genotypes 330).
  • the View menu 316 also provides the options to Compare to Original and to present the Optimal View of the image.
  • the Window and Help menus 318 and 322 offer menu items well known to those skilled in the art and will not be discussed further. Referring to FIGURE 3A, the user interface 300 has an image display area 332 that displays a digital image file 210 that has been opened for enhancement.
  • the digital image file 210 will be referred to as a parent image 334.
  • a miniature (thumbnail) version of the parent image 334 is displayed in a navigator 328 window.
  • By using a slider control 336, a minus command button 338, or a plus command button 340, a user can select a percentage of the total image so that only a portion of the parent image 334 is displayed in the display area 332.
  • the genotypes list box 330 contains a list of template genotypes that may be applied to the parent image 334 to change the parent image 334 according to the predefined genes and gene parameters of the template genotype selected. For instance, if the inverse (INV) template genotype 335 is selected from the list box and an Apply command button 340 is activated, the INV template genotype 335 would be applied (FIGURE 11) to the parent image 334 and would result in an evolved image replacing the parent image in the display area 332. In the example shown, the evolved image would appear as a black background with a white outline of a sailboat. If a Reject control button 342 is activated, the parent image 334 will revert to the state it was in just before the INV template genotype 335 was applied.
  • the toolbar 324 includes command buttons for many of the functions described as menu items in the menus 310 described above.
  • a Dewarp command button 344 activates the image Dewarp process illustrated in FIGURE 22
  • a color temperature button 346 adjusts the color temperature as illustrated in FIGURE 13 A
  • an image cropping command button 348 activates a function for cropping the parent image 334
  • a remove red eyes command button 350 activates a remove red eyes function.
  • the user interface presented when the Open Special option of the present invention is invoked from the drop-down File menu is illustrated in FIGURE 3B.
  • the Open Special user interface 360 includes a directory tree window 362 for selecting the directory 364 containing image data files which may be imported into the present invention.
  • Each image 366A, 366B, 366C, 366D, 366E . . . stored in the directory 364 is displayed as a thumbnail image that, when selected (as indicated by a dark border surrounding the image), is opened for editing by the present invention by activating an Open button 368.
  • a drop-down list box 370 lists available template genotypes. When an available template genotype is selected, the selected thumbnail image (or images) is changed by the selected template genotype; when the image is loaded, the loaded image is changed by the selected template genotype.
  • the present invention quickly and efficiently improves the quality of digital images through evolution.
  • the evolution process is initiated by activating the Start Evolution command button 352 (FIGURE 3A).
  • when the Start Evolution command button 352 is actuated, the image display area 332 changes to display a copy of the parent image 334 as a leader image 334A in the left-hand portion 417 of the image display area 332 and one of a set of four child images 334B, 334C, 334D, and 334E in the right-hand portion 415 of the image display area 332.
  • Thumbnail images of the leader image 334A and of the four child images 334B-E are displayed in a legend area 410 together with an indication 412 of the generation to which the images 334A-E belong.
  • Each child image 334B-E is sequentially displayed in the right-hand portion 415 of the image display area 332 for comparison with the leader image 334A that is displayed in the left-hand portion 417 of the image display area 332.
  • the child image 334B is compared to the leader image 334A and fitness rated by selecting a portion of a graduated control 414.
  • the graduated control 414 has a "no better or worse" portion 416 and a graduated portion 418 associated with values from 1% to 100% better.
  • a fitness value equal to the percentage selected from the graduated control 414 is associated with the child image 334B.
  • the next child image 334C is displayed in the right-hand portion 415 of the image display area 332 for comparison with the leader image 334A.
  • the process of assigning a fitness rating for images 334D and 334E continues in the same manner just described.
  • the status bar 326 displays an indication of the current generation and the number of the child image 334B-E that is currently being compared.
  • the evolution can be stopped by activating a Stop Evolution command button 420.
  • a user dialog is displayed asking whether the left image (leader image 334A) or the right image (child image 334B-E) should be kept.
  • the kept image can then be saved as a new digital image file 210 or otherwise manipulated by the user's computer 100.
  • a next generation leader image 422A is displayed in the left-hand portion 417 of the image display area 332.
  • the next generation leader image is created from a (new) leader genotype that is produced by recombining the (previous) leader genotype with the genotype of the highest fitness-rated child from the last generation.
  • next generation child images 422B- E are evolved and displayed in the legend area 410 with an indication 412 that the second generation is being viewed.
  • the child images 422B-E are then compared to the leader image using the graduated control 414 to assign fitness ratings as is discussed above.
  • a parent image is imported 510 from the non-volatile memory 114 (or other source such as the digital camera 120 or optical scanner 122) into a workspace available to the present invention in the volatile memory 112.
  • the encoded information describing the parent image (e.g., in formats such as bitmap, JPEG, or TIFF) is decoded into RGB (red, green, and blue) values for processing.
  • the parent image can be modified according to the present invention by image evolution 512, applying a template genotype 514, or conducting an image Dewarp 516. Any digital image 210 can be accurately printed following a printer calibration 518 according to the method of the present invention. These four processes 512, 514, 516, 518 will be discussed below. Any or all of the above methods 512, 514, 516, 518 may be conducted in any order on a parent image until the program is exited 520.
  • evolution is performed 512 beginning with the process illustrated in FIGURE 6.
  • the evolution method 512 sets 610 a leader genotype to a neutral genotype.
  • a genotype is a set of genes (described below) that are used to alter a digital image in a number of ways.
  • a neutral genotype contains genes that do not affect the digital image to which they are applied.
  • the first generation of child images may be produced using predefined genotypes or evolved genotypes. If predefined genotypes are selected 612, the method 512 selects a plurality of predefined template genotypes that are used as the initial child genotypes. Each child genotype is applied 616 to the leader image to produce a corresponding child image. The child images are displayed 618 and the method 512 receives the fitness ratings of each of the child images from the user (as was described above with reference to FIGURE 4).
  • the evolution continues 620 by recombining 622 the genotype of the child image that received the highest fitness rating with the leader genotype as a next generation leader genotype. As is illustrated beginning in FIGURE 7, a next generation of child genotypes is evolved 624 using this next generation leader genotype. Alternatively, if the evolution is ended 620, the method 512 is done 626.
  • the evolve next generation method 710 begins by recombining 712 the leader genotype with a child genotype from the previous generation into a next generation child genotype. Recombination is discussed below with reference to FIGURE 8.
  • the next generation child genotype is then mutated 714 in the manner illustrated in FIGURE 10 and described below. Having been recombined and mutated, the next generation child genotype is applied 716 (FIGURE 11) to the currently loaded image (the leader image) to produce a next generation child image.
  • the next generation child genotype is then associated with the next generation child image.
  • an actual embodiment of the present invention produces four child images in each generation for comparison with that generation's leader image. Of course, greater or fewer child images can be produced in each generation.
  • the evolve next generation method 710 repeats (starting with recombining 712 the child genotype with the leader genotype into a next generation child genotype).
  • a decision 720 directs the evolve next generation method 710 to return 722 all the next generation child images, together with their associated next generation child genotypes, as child images for presentation as a (new) current generation in the selection process described above (FIGURE 4). The method 710 is then done 724.
  • the method 810 for recombining the leader genotype with each child genotype to produce a plurality of next generation child genotypes is illustrated in FIGURE 8.
  • the method 810 retrieves 812 a leader genotype. (It is to be understood that each of the methods described herein may retrieve, create or be passed data.)
  • a child genotype is then retrieved 814.
  • the individual genes of the leader genotype are recombined 816 with the genes of the child genotype into a next generation genotype, in the manner illustrated in FIGURE 9 and described below.
  • a decision 818 returns the method 810 to the retrieval 814 of another child genotype and the process is repeated.
  • the next generation genotypes are returned 820 as the (new) child genotypes to block 712 in FIGURE 7.
  • the method 810 is then done 822.
  • the method 910 of the present invention for recombining the leader genotype with a child genotype begins by retrieving 912 a leader gene from a set of genes contained in the leader genotype. A corresponding child gene is also retrieved 912 from the child genotype. Each gene may contain one or more parameters, which are retrieved 914 one at a time from the leader gene and then the corresponding child gene. A next generation gene parameter is computed 916 by taking a weighted average of the leader gene parameter and the child gene parameter. In this computation 916, the child gene parameter is weighted by a factor that equals the fitness rating that was assigned to the child genotype (as discussed above with reference to FIGURE 4). For instance, if the resolution were eight bits (256 values), the weighting formula would be:
  • This weighted average formula produces a next generation gene parameter that is stored 918 in a next generation gene.
  • a decision 920 is made to determine if there is another gene parameter to process. If so, the method 910 returns to retrieve 914 the (next) leader gene parameter from the leader gene and a corresponding child gene parameter from the child gene. If all gene parameters have been processed, the decision 920 directs the method 910 to store 922 the next generation gene in the next generation genotype. If there is another leader gene to process, a decision 924 returns the process to the beginning of method 910 to retrieve 912 the next leader gene and a corresponding child gene. If the decision 924 determines that there is not another leader gene to process, then the method 910 is done 926.
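  • A minimal sketch of a weighted average of this kind follows, assuming 8-bit parameters and a fitness rating expressed as a value out of 256; the exact integer scaling and the function names are assumptions for illustration.

```python
def recombine_parameter(leader_param, child_param, fitness, resolution=256):
    """Weighted average of a leader and child gene parameter (illustrative).

    The child parameter is weighted by the fitness rating assigned to the child
    genotype; the leader parameter receives the remaining weight. With an 8-bit
    resolution the weights sum to 256.
    """
    return (leader_param * (resolution - fitness) + child_param * fitness) // resolution

def recombine_genotype(leader_genotype, child_genotype, fitness):
    # Genotypes here are lists of genes; each gene is a list of parameters.
    return [[recombine_parameter(lp, cp, fitness)
             for lp, cp in zip(leader_gene, child_gene)]
            for leader_gene, child_gene in zip(leader_genotype, child_genotype)]

# Example: a child rated 64 out of 256 pulls the result a quarter of the way
# from the leader value toward the child value.
print(recombine_parameter(100, 200, 64))   # -> 125
```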
  • the method 1010 for mutating a genotype is shown in FIGURE 10A.
  • the method 1010 retrieves a genotype to be mutated from a block 714 in FIGURE 7.
  • a gene from the genotype is retrieved 1012 and a mutation rate is obtained for the gene from a mutation rate table (FIGURE 10B).
  • the mutation rate table provides the mutation rate to use for the gene being processed (column) and the number of the generation currently being processed (row). For instance, if gene 4 is being mutated in generation 5, cell 1016 would be selected to obtain a value of "18" hexadecimal. If the evolution has progressed through more than 16 generations, the mutation rate is selected from the generation 16 row according to the gene's column. For instance, all mutations after the 16th generation for gene 5 will have a mutation rate found in cell 1018 that indicates a hexadecimal value of "1A".
  • the mutation rate is multiplied 1022 by a Gauss-random value to produce a mutation value from a Gauss-random function.
  • the Gauss-random function returns a value that makes small changes to the mutation rate very probable while big changes will seldom occur. Gauss-random functions are well known to those skilled in the art and will not be further discussed.
  • the mutation value is then added to the gene to produce a mutated gene. If the gene has multiple parameters, all parameters are mutated.
  • the mutated gene is then stored 1026 in a mutated genotype. If another gene in the genotype needs to be processed, the method 1010 is directed 1028 to retrieve 1012 the next gene in the genotype. If another gene is not present in the genotype to be mutated, the method 1010 is directed 1030 to return 1032 the mutated genotype as the next generation genotype. The method 1010 is then done 1034.
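  • A minimal sketch of the mutation step follows. The mutation rate table is reduced to a small placeholder dictionary and the Gaussian scaling factor is an assumption; only the overall shape (table lookup by gene and generation, reuse of the generation 16 entry thereafter, Gauss-random scaling, and mutation of every parameter) follows the description above.

```python
import random

# Placeholder mutation rate table: {gene_number: [rate for generations 1..16]}.
# The two hexadecimal values cited above are filled in; the rest are stand-ins.
MUTATION_RATES = {gene: [0x10] * 16 for gene in range(1, 10)}
MUTATION_RATES[4][4] = 0x18      # gene 4, generation 5 (example cited above)
MUTATION_RATES[5][15] = 0x1A     # gene 5, generation 16 and later

def mutate_genotype(genotype, generation):
    """Mutate every parameter of every gene (illustrative sketch)."""
    gen_index = min(generation, 16) - 1     # after generation 16, reuse the last entry
    mutated = {}
    for gene_number, params in genotype.items():
        rate = MUTATION_RATES[gene_number][gen_index]
        # Gauss-random scaling: small changes are very probable, big changes rare.
        mutated[gene_number] = [p + round(rate * random.gauss(0, 0.25)) for p in params]
    return mutated

# Example: mutate a two-gene genotype in generation 5.
print(mutate_genotype({4: [30], 5: [0, 12]}, generation=5))
```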
  • the method 1110 for applying a genotype to a digital image is shown in FIGURE 11.
  • the leader (currently loaded) image is retrieved 1112.
  • the child genotype to be applied to the leader image is also retrieved 1114.
  • a pixel is retrieved 1116 from the currently loaded image and a gene is retrieved 1118 from the genotype.
  • the gene is applied 1120 to the pixel according to a method associated with each gene, as is discussed below in detail with reference to FIGURES 12A-21.
  • a decision 1122 determines if there is another gene in the genotype that still needs to be processed. If so, the method 1110 directs 1124 the retrieval 1118 of the next gene from the genotype. If there is not another gene in the genotype to be processed, the decision 1122 directs 1126 the method 1110 to store 1128 the newly altered pixel in a child image.
  • a decision 1130 determines if there is another pixel in the currently loaded image that still needs to be processed, and directs 1132 the method 1110 to repeat starting with retrieving 1116 the next pixel from the currently loaded image. If there is not another pixel in the currently loaded image, the decision 1130 returns 1136 the child image and the associated genotype to the calling method. The method 1110 is then done 1138.
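  • The apply loop of FIGURE 11 is essentially a double loop over pixels and genes. A minimal sketch follows, assuming each gene is represented as a callable that maps an (R, G, B) tuple to a new (R, G, B) tuple; the gene representation and function names are illustrative.

```python
def apply_genotype_to_image(pixels, genotype):
    """Apply every gene in the genotype to every pixel (illustrative sketch).

    pixels   -- list of (R, G, B) tuples for the currently loaded (leader) image
    genotype -- list of callables, each taking and returning an (R, G, B) tuple
    """
    child_pixels = []
    for pixel in pixels:                 # retrieve each pixel in turn
        for gene in genotype:            # apply each gene to the pixel
            pixel = gene(pixel)
        child_pixels.append(pixel)       # store the altered pixel in the child image
    return child_pixels

# Example genes: a brightness shift and a crude red boost.
brighten = lambda rgb: tuple(min(255, v + 10) for v in rgb)
boost_red = lambda rgb: (min(255, rgb[0] + 20), rgb[1], rgb[2])
print(apply_genotype_to_image([(10, 20, 30), (200, 210, 220)], [brighten, boost_red]))
```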
  • FIGURES 12A-21 detail a set of nine genes in a genotype provided by an actual embodiment of the invention. Other genes may be added to the genotype and still be within the scope and spirit of the present invention. For instance, a gene could be developed to create artistic effects for the image using the methods and systems described below. While the order in which the genes are applied does not have to be strictly defined, the genes below are discussed in the order that they are applied in an actual embodiment of the invention.
  • genes described below include methods for defining a new gradation curve for each of the R, G, and B values.
  • the application of the gradation curves specified by these genes occurs in FIGURE 11.
  • a gradation curve (e.g., FIGURE 12D) is generated. This generated gradation curve is then referenced to assign a new value (the y-coordinate) to the R 214, G 216, or B 218 value for each pixel in the digital image.
  • gradation curves shown in the FIGURES are illustrated as continuous curves both to more clearly point out the concept of the present invention and to point out that the methods and systems of the present invention apply equally to digital images of all resolutions, e.g., a gradation curve may be applied to a digital image with 8-bit resolution equally as well as to a digital image with 32-bit resolution simply by calculating a greater range of data points for (or from) the gradation curve.
  • interpolating a gradation curve (FIGURE 12F) is actually computing a new value (y-coordinate) for each possible old value (x-coordinate).
  • predefined boundary gradation curves described for the use of the genes discussed herein have been determined empirically and may be changed as further experience dictates. The same is true for the predefined data points used by some genes to dynamically calculate boundary gradation curves.
  • the interpolated gradation curve 1206 in this example represents a weight (or parameter) value of 89. If the gene is defined by an 8-bit signed integer, giving 127 possible positive values (and 127 possible positive interpolated gradation curves), the positive boundary gradation curve 1204 represents the maximum parameter value (127).
  • the interpolated gradation curve 1206 therefore comprises 256 new values (ynew) that may be stored in any convenient data structure for use in the methods and systems described herein, such as the lookup table described above or each ynew point may be calculated as needed.
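  • A minimal sketch of the weighted interpolation between the neutral gradation curve (y = x) and a boundary gradation curve follows; the example boundary curve and the 0-to-127 weight scale are illustrative assumptions.

```python
def interpolate_curve(boundary, weight, max_weight=127):
    """Interpolate a gradation curve between the neutral curve y = x and a
    boundary curve, weighted by a gene parameter (illustrative sketch).

    boundary -- lookup table of 256 y-values for x = 0..255 (the boundary curve)
    weight   -- gene parameter magnitude; max_weight reproduces the boundary curve
    """
    w = weight / max_weight
    return [round((1 - w) * x + w * boundary[x]) for x in range(256)]

# Example boundary curve (a simple gamma-like lift) and an interpolation at weight 89.
boundary = [round(255 * (x / 255) ** 0.5) for x in range(256)]
curve = interpolate_curve(boundary, 89)
# Applying the curve to a pixel value is then a table lookup: new_value = curve[old_value].
print(curve[128])
```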
  • the global color shift of gene 1 is accomplished by calculating a gradation curve for each of the RGB values.
  • Gene 1 has three parameters, with the first parameter specifying which two of the three RGB gradation curves will be interpolated. If the gene 1 first parameter is 1, new red and green gradation curves are interpolated and the blue gradation curve is assigned a neutral gradation curve 1202. When the first parameter of gene 1 is equal to 2, new green and blue gradation curves are created and the red gradation curve is assigned the neutral gradation curve 1202. If the first parameter of gene 1 is equal to 3, new red and blue gradation curves are calculated and the green gradation curve is assigned the neutral gradation curve 1202.
  • the second and third parameters of gene 1 are specified by a signed 8-bit integer with a range of -128 to +127.
  • a gradation curve is calculated for every possible value of parameter 2 (and 3) by calculating a weighted interpolation between a predefined boundary curve and the neutral curve.
  • a decision 1212 directs 1214 the process to a decision 1216 that checks if the second parameter is positive. If the second parameter is positive, the decision 1216 directs 1218 a method 1220 to use the second parameter of the leader genotype gene 1 as a weight value to interpolate a new red global color shift gradation curve between a positive global color shift boundary gradation curve (FIGURE 12D) and the neutral gradation curve 1202 (FIGURE 12F).
  • the method 1210 continues 1222 by using the leader genotype gene 1 second parameter as the weight value to interpolate a new red global color shift gradation curve 1224 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve 1202 (FIGURE 12F).
  • a decision 1226 is made whether the third parameter of the leader genotype gene 1 is positive. If so, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new green color shift gradation curve between the positive global color shift gradation curve (FIGURE 12D) and the neutral gradation curve 1202 (FIGURE 12F).
  • the third parameter of the leader genotype gene 1 is used as the weight value to interpolate 1232 a new green global color shift gradation curve between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F).
  • a new blue gradation curve is assigned 1233 the neutral gradation curve (FIGURE 12F).
  • the method 1210 continues as illustrated in FIGURE 12B. If gene 1 of a leader genotype has a first parameter equal to two, a decision 1234 directs 1236 the process to a decision 1238 that checks if the second parameter is positive.
  • the decision 1238 directs 1240 the method 1210 to use the second parameter of the leader genotype gene 1 as a weight value to interpolate 1242 a new green global color shift gradation curve between the positive global color shift boundary gradation curve (FIGURE 12D) and the neutral gradation curve 1202 (FIGURE 12F).
  • the method 1210 continues 1244 by using the leader genotype gene 1 second parameter as the weight value to interpolate a new green global color shift gradation curve 1246 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F).
  • a decision 1248 is made whether the third parameter of the leader genotype gene 1 is positive. If so, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue color shift gradation curve 1252 between the positive global color shift gradation curve (FIGURE 12D) and the neutral gradation curve (FIGURE 12F). If in the decision 1248 it is determined that the third parameter is not positive, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue global color shift gradation curve 1256 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve 1202 (FIGURE 12F). A new red gradation curve is assigned 1257 the neutral gradation curve 1202 (FIGURE 12F). After the RGB gradation curves have been interpolated or assigned, the method 1210 continues as illustrated in FIGURE 12C.
  • a decision 1260 directs 1262 the process to a decision 1264 that checks if the second parameter is positive. If the second parameter is positive, the decision 1264 directs 1265 the method 1210 to use the second parameter of the leader genotype gene 1 as a weight value to interpolate 1266 a new red global color shift gradation curve between the positive global color shift boundary gradation curve (FIGURE 12D) and the neutral gradation curve (FIGURE 12F).
  • the method 1210 continues by using the leader genotype gene 1 second parameter as the weight value to interpolate a new red global color shift gradation curve 1268 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F).
  • a decision 1270 is made whether the third parameter of the leader genotype gene 1 is positive. If so, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue color shift gradation curve 1274 between the positive global color shift gradation curve (FIGURE 12D) and the neutral gradation curve (FIGURE 12F). If in the decision 1270 it is determined that the third parameter is not positive, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue global color shift gradation curve 1278 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F).
  • a new green gradation curve is assigned 1279 the neutral gradation curve (FIGURE 12F).
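  • The channel-pair dispatch of gene 1 can be summarized as follows. The mapping of the first parameter to channel pairs and the sign-based choice of boundary curve follow the description above; the boundary curves themselves and the interpolation helper are illustrative assumptions.

```python
NEUTRAL = list(range(256))  # neutral gradation curve: y = x

def interpolate(boundary, weight, max_weight=127):
    # Weighted interpolation between the neutral curve and a boundary curve.
    w = min(abs(weight), max_weight) / max_weight
    return [round((1 - w) * x + w * boundary[x]) for x in range(256)]

def gene1_curves(first, second, third, pos_boundary, neg_boundary):
    """Build the (red, green, blue) gradation curves for gene 1 (illustrative).

    first selects the two channels to shift (1: R+G, 2: G+B, 3: R+B); the third
    channel keeps the neutral curve. second and third are signed weights whose
    sign selects the positive or negative boundary curve."""
    def curve(weight):
        return interpolate(pos_boundary if weight >= 0 else neg_boundary, weight)

    pairs = {1: ("red", "green"), 2: ("green", "blue"), 3: ("red", "blue")}
    curves = {"red": NEUTRAL, "green": NEUTRAL, "blue": NEUTRAL}
    a, b = pairs[first]
    curves[a], curves[b] = curve(second), curve(third)
    return curves["red"], curves["green"], curves["blue"]

# Example: shift red and green (first parameter = 1), leaving blue neutral.
pos = [min(255, x + 30) for x in range(256)]
neg = [max(0, x - 30) for x in range(256)]
r, g, b = gene1_curves(1, 89, -40, pos, neg)
```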
  • Gene 2 adjusts the color temperature of the child image 334B-E. Color temperature may be thought of as simulating a "warmer" or "colder" ambient light in the digital image.
  • Gene 2 has a single parameter defined by a signed 8-bit integer (values from -128 to +127). As altered by gene 2, a negative parameter value results in the image looking warmer (lower color temperature caused by a shift toward orange-red in the image) and a positive parameter will make the image look colder (higher color temperature caused by a shift toward blue in the image).
  • The method 1310 for calculating the RGB gradation curves to adjust the color temperature of an image is shown in FIGURE 13A.
  • a single new value (y-coordinates) is calculated for each of the RGB gradation curves, as needed, to adjust a single pixel instead of calculating and storing all of the new values (y-coordinates) of an interpolated gradation curve.
  • the following will continue to refer to these new values as being part of a gradation curve to better illustrate the concept behind the method 1310 of gene 2.
  • the parameter value of gene 2 is used as a weight value to interpolate 1312 a red color temperature gradation curve ("RTemp") between a low temperature red boundary gradation curve (FIGURE 13B) and the neutral gradation curve (FIGURE 12F).
  • a green color temperature gradation curve ("GTemp") is similarly interpolated by using the gene 2 parameter as the weight value in a weighted interpolation between a low temperature green boundary gradation curve (FIGURE 13C) and the neutral gradation curve (FIGURE 12F).
  • a blue color temperature gradation curve (“BTemp") is interpolated 1316 by using the gene 2 parameter as the weight value in a weighted interpolation between a low temperature blue boundary gradation curve (FIGURE 13D) and the neutral gradation curve (FIGURE 12F).
  • the new RGB values are obtained 1322 from a set of high temperature RGB gradation curves that mirror the low temperature RGB gradation curves about the neutral gradation curve.
  • the high temperature RGB gradation curves may be expressed mathematically as:
  • the high-temperature RGB boundary gradation curves may be defined separately from the low-temperature boundary gradation curves and the high- temperature RGB gradation curves interpolated between those separately defined high-temperature RGB gradation curves and the neutral gradation curve (FIGURE 12F).
  • the new RGB values obtained in blocks 1320 or 1322 are then returned 1324 as a new pixel for the child image.
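  • A minimal sketch of mirroring a low temperature curve about the neutral gradation curve follows, assuming the mirroring is done point by point as y_high(x) = 2x - y_low(x) and clamped to the 8-bit range; that specific expression is an assumption rather than the patent's formula.

```python
def mirror_about_neutral(low_curve):
    """Mirror a low-temperature gradation curve about the neutral curve y = x.

    Illustrative assumption: at each x the high-temperature value lies as far
    above the diagonal as the low-temperature value lies below it (or vice
    versa), clamped to the 8-bit range."""
    return [max(0, min(255, 2 * x - low_curve[x])) for x in range(256)]

# Example: a low-temperature red curve that lifts red tones; its mirror lowers them.
low_red = [min(255, x + 20) for x in range(256)]
high_red = mirror_about_neutral(low_red)
print(low_red[100], high_red[100])   # 120 and 80
```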
  • Gene 3 provides RGB gradation curves that define special effects.
  • a method 1410 of gene 3 is illustrated in FIGURE 14.
  • Gene 3 has three parameters that specify a gradation curve for each red, green, and blue value.
  • the first parameter specifies 1412 a predefined gradation curve defining an effect to the red component of each pixel.
  • the second parameter of gene 3 specifies 1414 a predefined effect gradation curve that defines an effect to the green component of the pixel.
  • a third parameter specifies 1416 a predefined effect gradation curve that defines an effect to the blue value of each pixel.
  • the first, second and third parameters may each specify the same gradation curve or a different gradation curve for each of the red, green and blue values.
  • the method 1410 is then done 1418.
  • Gene 8 determines a blackpoint and whitepoint gradation curve that is applied to each of the RGB values.
  • the method 1510 of gene 8 is shown in FIGURE 15A, where the brightest color tone in the image is assigned 1512 as the whitepoint and then the darkest color tone in the image is assigned 1514 as the blackpoint. Using the coordinates (0, whitepoint) and (255, blackpoint), a linear gradation curve is computed 1516 between these coordinates. The method 1510 of gene 8 is then done 1518.
  • An example of the gradation curve produced by gene 8 is shown in FIGURE 15B with the linear gradation curve 1520 extending between the blackpoint 1522 and the whitepoint 1524.
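  • A minimal sketch of computing a straight-line gradation curve between two endpoints follows; the endpoint convention shown in the example (mapping the darkest tone to 0 and the brightest tone to 255) is one common choice and is an assumption.

```python
def linear_curve(p0, p1):
    """Straight-line gradation curve through two (x, y) points (illustrative).

    For gene 8 the endpoints are built from the darkest and brightest color tones
    found in the image (the blackpoint and whitepoint)."""
    (x0, y0), (x1, y1) = p0, p1
    slope = (y1 - y0) / (x1 - x0)
    return [max(0, min(255, round(y0 + slope * (x - x0)))) for x in range(256)]

# Example: stretch an image whose darkest tone is 16 and brightest tone is 240
# to the full 0..255 range (one common blackpoint/whitepoint convention).
curve = linear_curve((16, 0), (240, 255))
print(curve[16], curve[128], curve[240])   # 0, 128, 255
```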
  • the RGB gradation curves calculated by gene 4 affect the brightness of the image.
  • Gene 4 has one parameter that is a signed 8-bit integer, ranging from -128 (maximum brightness) to +127 (minimum brightness). In an actual embodiment of the invention, the same calculated brightness gradation curve is used for all the RGB values.
  • the method 1610 for calculating the brightness gradation curve of gene 4 is shown in FIGURE 16A and begins with a decision 1612 that determines whether the value of the gene 4 parameter is positive.
  • a brightness gradation curve is interpolated 1614 by using the value of a gene 4 parameter as the weight value in a weighted interpolation between a positive brightness/color saturation boundary curve (FIGURE 16B) and the neutral gradation curve (FIGURE 12F). If the decision 1612 determines that the value of the gene 4 parameter is not positive, the weighted interpolation is made using the gene 4 parameter as the weight value between a negative brightness/color saturation boundary gradation curve (FIGURE 16C) and the neutral gradation curve (FIGURE 12F). After calculating the brightness gradation curve 1614 or 1616, the method 1610 is done 1618.
  • the contrast gradation curve produced by gene 5 is applied equally to each of the RGB values as the red, green, and blue gradation curve, in order to alter the contrast of the image.
  • Gene 5 has one parameter that is a signed 8-bit integer ranging from -128 (lowest contrast) to +127 (maximum contrast).
  • the method 1710 for finding the contrast gradation curve of gene 5 begins by retrieving 1712 the contrast parameter of gene 5.
  • the contrast parameter is used as the weight value to interpolate 1714 a new contrast gradation curve between a boundary gradation curve generated by gene 9 (discussed below with reference to FIGURE 21) and a neutral gradation curve (FIGURE 12F).
  • a new contrast value is determined directly from the new contrast gradation curve 1716 for each of the RGB values of the pixel.
  • the method 1710 is then done 1718.
  • the method 1810 for adjusting the color saturation of the image using gene 6 is illustrated in FIGURE 18.
  • Gene 6 has a single parameter that comprises a signed 8- bit integer in an actual embodiment of the invention.
  • the RGB values of the pixel being passed to method 1810 are each translated 1814 from their additive primary color values to the equivalent subtractive primary colors using the formulas:
  • the black level (K) is assigned 1816 the minimum value of the C, M or Y:
  • the subtractive primary colors are adjusted 1820 (FIGURE 11) using the
  • the subtractive primary color values are translated back 1824 to their equivalent additive primary colors using the formulas:
  • the method 1810 then returns 1826 the new pixel for storage in the adjusted image and is done 1830.
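  • A minimal sketch of the color translation steps used by gene 6 follows. The additive-to-subtractive translation (C = 255 - R, and so on) and the black level K = min(C, M, Y) follow standard practice and the description above; the saturation adjustment itself is shown only as a placeholder scaling and is an assumption.

```python
def adjust_saturation(rgb, saturation):
    """Gene 6 style saturation adjustment (illustrative sketch).

    rgb        -- (R, G, B) tuple of 8-bit values
    saturation -- signed parameter; the scaling below is a placeholder, not the
                  patent's adjustment formula
    """
    r, g, b = rgb
    # Translate the additive primaries to the equivalent subtractive primaries.
    c, m, y = 255 - r, 255 - g, 255 - b
    # The black level K is the minimum of C, M, and Y.
    k = min(c, m, y)
    # Placeholder adjustment: scale the chromatic part (C - K, M - K, Y - K).
    scale = 1 + saturation / 128
    c, m, y = (min(255, max(0, round(k + (v - k) * scale))) for v in (c, m, y))
    # Translate back to the additive primaries.
    return (255 - c, 255 - m, 255 - y)

# Example: increase the saturation of a muted orange pixel.
print(adjust_saturation((200, 120, 80), saturation=64))
```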
  • the method 1910 of gene 7A for adjusting the black level in the primary colors of the image is illustrated in FIGURES 19A and 19B beginning with the method 1910 retrieving 1912 the RGB values of a pixel.
  • the RGB values are translated 1914 from the additive primary color values to the equivalent subtractive primary colors using the formulas:
  • the black level is temporarily removed 1918 from the C, M & Y values using the formulas:
  • Each intermediate separation value is adjusted 1924 by applying a corresponding gene parameter from gene 7A.
  • the parameters of gene 7A are changed by a regulator gene (gr) that is evolved during the evolution process.
  • the regulator gene (gr) is set by the evolution process, i.e., by mutation and recombination.
  • the six gene parameters of gene 7A have values that are calculated using a predefined weighting factor to adjust the value of the regulator gene, i.e.:
  • the weight values used in g1-g6 have been determined empirically and may be changed as experience dictates. In choosing the weight values, however, it should be considered that the reduction in the black value of color should not be equal for all colors or unpleasant artifacts may result. It has been found that green, blue and cyan are the most robust against these artifacts and may be weighted more heavily, while red and yellow are the least robust against manipulation and should have smaller weight values.
  • the intermediate color separation values are adjusted 1924 using the following formulas:
  • the method 1910 next determines 1926 a black level correction factor (kf) from a predefined curve shown in FIGURE 19D (the "kfCurve"):
  • the method 1910 is then done 1938.
  • the method 2010 for the selective color correction of the image by gene 7B is illustrated in FIGURES 20A-C.
  • the pixel passed to the method 2010 is separated 2012 into the pixel's RGB values, which are translated 2014 to the equivalent subtractive primary colors, using the formulas:
  • the method 2010 then retrieves the eighteen parameters of gene 7B which are stored as unsigned 8-bit integers for each color except black: an amplification factor for the color being processed, an amplification factor for the clockwise adjacent color on the color hexagon 220 (FIGURE 2B) to the color being processed, and an amplification factor for the counterclockwise adjacent color on the color hexagon 220 (FIGURE 2B) to the color being processed.
  • gradation curves (3 for each color except black) are computed 2024 using the points: (0,0), (xpoint, ypoint), (255,255), where ypoint is equal to xpoint if xpoint is less than 230 or ypoint is equal to 230 if xpoint is greater than 230.
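  • A minimal sketch of building one of these three-point correction curves follows; the piecewise-linear interpolation through (0, 0), (xpoint, ypoint), and (255, 255) is an assumption, while the 230 breakpoint rule follows the description above.

```python
def correction_curve(xpoint):
    """Gradation curve through (0,0), (xpoint, ypoint), (255,255) (illustrative).

    ypoint equals xpoint when xpoint is below 230 and is clipped to 230 otherwise;
    the segments between the three points are interpolated linearly here."""
    ypoint = xpoint if xpoint < 230 else 230
    curve = []
    for x in range(256):
        if x <= xpoint:
            y = ypoint * x / xpoint if xpoint else 0
        else:
            y = ypoint + (255 - ypoint) * (x - xpoint) / (255 - xpoint)
        curve.append(round(y))
    return curve

# Example: an amplification factor of 245 is clipped so the curve bends at (245, 230).
curve = correction_curve(245)
print(curve[245], curve[255])   # 230, 255
```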
  • 230 as the breakpoint has been determined empirically and may change as further experience dictates.
  • the resulting 18 gradation curves are named for the color being corrected and the component they correct; for example, BBCorr[] is the Blue correction for Blue.
  • the primary colors are selectively corrected 2028, as follows:
  • the method 2110 of gene 9 provides the contrast gradation boundary curve that is used by the method 1710 of gene 5 (FIGURE 17).
  • the contrast gradation boundary curve is calculated by first converting 2112 each pixel into gray according to a gray scale.
  • a histogram is then created 2114 that records the gray tone value of every pixel in the image with the x-axis of the histogram representing the gray scale value and the y-axis of the histogram representing the frequency that that gray tone appears in the image.
  • An example of such a histogram 2130 is illustrated in FIGURE 21B. The histogram is integrated from its minimum value until 2% of the data members in the histogram have been reached.
  • the value of the x-axis at this point is recorded 2116 as a first parameter (A) of gene 9. Similarly, the histogram is integrated from its maximum value toward its minimum value until 2% of the total data members have been reached. The value of the x-axis at this point is recorded 2118 as a second parameter (B) of gene 9.
  • the integration percentage (2%) was determined empirically and may be changed as experience dictates.
  • an S-shaped contrast gradation curve 2132 is calculated 2120 using (0,0) and (255,255) as the end points 2134, 2140 and the points (A, 12) and (B, 242) as the inflection points 2136, 2138.
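  • A minimal sketch of deriving the two gene 9 parameters from the gray histogram follows; the 2% integration thresholds come from the description above, while the cumulative-sum scan and the synthetic example data are illustrative assumptions.

```python
import random

def gene9_parameters(gray_values, fraction=0.02):
    """Find the histogram cut points A and B used by gene 9 (illustrative sketch).

    gray_values -- iterable of 0..255 gray tones (one per pixel)
    A is where the cumulative count from the dark end first reaches `fraction` of
    all pixels; B is the corresponding point scanning from the bright end."""
    histogram = [0] * 256
    for v in gray_values:
        histogram[v] += 1
    target = fraction * len(gray_values)

    total, a = 0, 0
    for x in range(256):                 # integrate from the minimum value upward
        total += histogram[x]
        if total >= target:
            a = x
            break

    total, b = 0, 255
    for x in range(255, -1, -1):         # integrate from the maximum value downward
        total += histogram[x]
        if total >= target:
            b = x
            break
    return a, b                          # curve points: (0,0), (A,12), (B,242), (255,255)

# Example with a synthetic image whose gray tones cluster around mid-gray.
grays = [min(255, max(0, int(random.gauss(128, 30)))) for _ in range(10_000)]
print(gene9_parameters(grays))
```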
  • An example of this S-shaped contrast gradation curve with the end points and inflection points 2136, 2138 just described is illustrated in FIGURE 21C. Since spline interpolation algorithms are well known in the art, they will not be discussed further here. Returning to FIGURE 5, it is also possible to apply 514 a template genotype to the image without going through the evolution process.
  • the template genotype may be predefined and shipped as part of the product incorporating the invention or may be a genotype that has been saved from a previous evolution. For example, a user may prefer a genotype that the user has evolved for photographs of mountains that the user wishes to apply to all pictures of mountains that the user works on.
  • the genotype may be selected 524 and the genotype applied 514 to the image as was discussed above with reference to FIGURE 11.
  • any saved or template genotype forms a relatively small data file. This data file may be exchanged between applications employing the present invention, for instance, by sending the genotype data file from the user's computer 100 over the communication network 124 to another user's remote computer 126 (FIGURE 1).
  • the present invention also provides the option 526 to Dewarp the image.
  • Performing 516 a Dewarp is illustrated in FIGURE 22.
  • the imperfect shape of the lens may introduce a perceptible aberration in the image.
  • the process of "Dewarping" an image minimizes the effect of this spherical aberration by redistributing the pixels in the image.
  • the method 2210 of Dewarping the image begins by retrieving 2212 a predefined set of correction coefficients A, B, C.
  • a focal point is then determined 2214 in the parent image. For each pixel in the parent image, the pixel is retrieved 2216 and a vectored radius between the location of the pixel and the focal point is determined 2218.
  • the pixel is then stored 2222 in a child image at the new vectored radius (rnew). To make the image look smoother, a bi-cubic interpolation is used. If there is another pixel to process in the parent image, a decision 2224 returns control to the retrieval 2216 of the next pixel and the method continues as described above. If there is not another pixel to process in the parent image, the decision directs the method 2210 to end 2226.
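  • A minimal sketch of the radial remapping follows. The polynomial used for the new radius (a standard radial-distortion model with coefficients A, B, and C) and the nearest-neighbor resampling are assumptions; the patent itself calls for bi-cubic interpolation to smooth the result.

```python
import math

def dewarp(pixels, width, height, A, B, C):
    """Redistribute pixels radially about a focal point (illustrative sketch).

    Each source pixel is moved to a new vectored radius computed from a standard
    polynomial radial-distortion model (the exact correction formula is assumed):
        r_new = r * (1 + A*r^2 + B*r^4 + C*r^6)
    Resampling is nearest-neighbor here; bi-cubic interpolation would give a
    smoother result."""
    cx, cy = (width - 1) / 2, (height - 1) / 2          # focal point at the center
    half_diag = math.hypot(cx, cy)
    child = [[(0, 0, 0)] * width for _ in range(height)]
    for ypos in range(height):
        for xpos in range(width):
            dx, dy = xpos - cx, ypos - cy               # vector from the focal point
            r = math.hypot(dx, dy) / half_diag          # normalized vectored radius
            scale = 1 + A * r**2 + B * r**4 + C * r**6  # r_new / r
            nx, ny = round(cx + dx * scale), round(cy + dy * scale)
            if 0 <= nx < width and 0 <= ny < height:
                child[ny][nx] = pixels[ypos][xpos]      # store pixel at the new radius
    return child

# Example: a mild barrel-distortion correction on a tiny 4x4 test image.
img = [[(x * 60, y * 60, 0) for x in range(4)] for y in range(4)]
corrected = dewarp(img, 4, 4, A=0.05, B=0.0, C=0.0)
```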
  • the last option shown in FIGURE 5 is to select 528 printer calibration.
  • a printer calibration is performed 518 according to the method shown in FIGURES 23A-C.
  • Printer calibrations are necessary because the image produced by any given printer 118 may appear different than the image as it is displayed on the video monitor 116.
  • a genotype may be developed that is associated with the printer and that is used to harmonize the image printed by the printer 118 with the image displayed on the video monitor 116.
  • the printer calibration method 2310 begins by retrieving 2312 a gray scale image.
  • a gray scale template genotype is applied 2314 to the gray scale image to create a test image.
  • the gray scale template genotype has genes that are neutral for all attributes other than brightness (altered by gene 4) and contrast (altered by gene 5).
  • a decision 2316 determines if another gray scale test image should be produced. If yes, a different gray scale template genotype is applied 2314 to the gray scale image to create another gray scale test image. In an actual embodiment of the invention, five gray scale test images are produced. Each of the gray scale test images is displayed 2318 on the video monitor 116. The same gray scale test images are then printed 2320 on the printer 118 to be calibrated.
  • the test images should be arranged on the printed page as they are displayed on the display so that the user may intuitively select 2322 the "best" test image appearing on the printed test page. (The "best” test image is a subjective determination by the user.)
  • the genotype of the image is assigned as a printer gray scale genotype.
  • a red-dominant test image containing subject matter with many red tones is retrieved 2324.
  • the printer gray scale genotype is applied 2326 to the red-dominant test image (as is described above with reference to FIGURE 11) to create a gray scale adjusted test image.
  • a red-dominant template genotype is applied 2328 to the gray scale adjusted test image to create a red-dominant test image.
  • a decision 2330 returns control to the block 2328 to apply a different red-dominant template genotype to the gray scale adjusted test image to produce another red-dominant test image.
  • the decision 2330 directs control to the retrieval 2332 of a green- dominant test image.
  • the printer gray scale genotype is apphed 2334 to the green- dominant test image to create a gray scale adjusted test image.
  • a set of green- dominant template genotypes are apphed 2336 to the gray scale adjusted test image to create a set of green-dominant test images.
  • a decision 2338 returns control to the block 2336 until aU green-dominant test images have been produced.
  • the process is next repeated for a blue-dominant test image that is retrieved 2340 and adjusted by applying 2342 the printer gray scale genotype to the blue-dominant test image to create a gray scale adjusted test image.
  • a set of blue-dominant genotypes is applied 2344 to create a set of blue-dominant test images using the control loop provided by decision 2346.
  • the decision 2346 directs control to the display 2348 of each of the red-dominant, green-dominant, and blue-dominant test images on the video monitor 116.
  • Each of the red-dominant, green-dominant and blue-dominant test images is printed 2350 on a test page by the printer 118.
  • the test images should be arranged as they are displayed on the display so that the user can intuitively match the images on the printed page with the corresponding images on the video display.
  • the user chooses 2352 a "best" red-dominant test image that appears on the printed test page and selects the corresponding red-dominant test image on the video monitor 116. This selects the genotype of the selected test image as the printer red-dominant genotype.
  • the user chooses 2354 the "best" green-dominant test image that appears on the printed test page and selects the corresponding green-dominant test image on the display. This assigns the genotype of the selected test image as the printer green-dominant genotype.
  • the user then chooses 2356 the "best" blue-dominant test image that appears on the printed test page and selects the corresponding blue-dominant test image on the display.
  • the genotype of the selected test image is assigned as the printer blue-dominant genotype.
  • the printer gray scale genotype, the printer red-dominant genotype, the printer green-dominant genotype, and the printer blue-dominant genotype are recombined 2358 to form a printer calibration genotype that is stored as a printer calibration template genotype and associated with the printer 118 being calibrated (one possible form of this recombination is sketched after this list).
  • the image to be printed can be modified using the printer calibration genotype as it is sent to the printer to be printed so that the image printed matches the image displayed as closely as possible.
  • the printer calibration method is then done 2360. While the preferred embodiment of the invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.
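The recombination 2358 of the four selected genotypes into a single printer calibration genotype is not spelled out in detail above; as a rough, non-authoritative sketch in Python (the genotype is treated as a flat list of signed gene parameters, and simple averaging stands in for the unspecified recombination rule), it might look like this:

    # Hypothetical sketch only: the genotype layout, helper name, and averaging
    # rule are assumptions, not the patent's literal method.
    def combine_printer_genotypes(gray, red, green, blue):
        # Combine the four user-selected genotypes gene-by-gene by averaging
        # their parameters into one printer calibration genotype.
        return [round((g + r + gr + b) / 4)
                for g, r, gr, b in zip(gray, red, green, blue)]

    printer_calibration_genotype = combine_printer_genotypes(
        [0, 0, 0, 12, -8, 0, 0, 0, 0],   # printer gray scale genotype (illustrative values)
        [5, 0, 0, 0, 0, 0, 0, 0, 0],     # printer red-dominant genotype
        [-3, 0, 0, 0, 0, 0, 0, 0, 0],    # printer green-dominant genotype
        [2, 0, 0, 0, 0, 0, 0, 0, 0],     # printer blue-dominant genotype
    )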


Abstract

Computer-implemented methods and systems for improving a digital image using genetic algorithms are provided. Child images are evolved from the digital image using either predefined template genotypes or genotypes created through evolution. A genotype includes a number of genes that may each alter an attribute of the digital image such as hue, brightness, contrast or color saturation. The evolution includes a selection process during which the child images are compared against a leader image and assigned a fitness rating. The fitness rating is used in the evolution of subsequent generations of child genotypes, with each child genotype producing an associated child image. The evolution process may continue through as many generations as desired and an improved image may be selected at any time during the evolution. Genotypes may be saved as independent data files and applied as template genotypes to other digital images. The methods and systems of the present invention also provide for evolving a printer calibration genotype that is used to adjust the image for the printing characteristics of a calibrated printer.

Description

DIGITAL IMAGE IMPROVEMENT THROUGH GENETIC IMAGE
EVOLUTION
Field of the Invention The present invention is in the field of computer applications and, more particularly, computer applications that improve the quality of digital images.
Background of the Invention The growing popularity of digital photography and other digital images may be based in part on the ability to alter those digital images using computer programs. For instance, with editing tools currently on the market it is possible to selectively substitute one color in a digital image for another color, to change the size and shape of the digital image, to increase or decrease the brightness of the image, or to selectively correct the color saturation of an image so that colors appear more or less intense. While it is possible in theory to use these digital image editing tools to improve the overall quality of a digital image, in practice a high level of skill is required: the user must possess a fairly complete understanding of image attributes such as hue, brightness, color saturation, resolution, and gamma. Each change to a digital image requires an understanding of what to change, how to change it, and what effect the change may have on other attributes of the digital image. The user must expend considerable effort to alter the digital image to produce an acceptable image. If the user subjectively likes some of the changes made to the image, but not the final altered digital image as a whole, the effort expended on that final image may be completely lost. Conversely, if the user is satisfied with the final image, each of the changes must be remembered and manually repeated for similar images. In some cases, a change made to one attribute of the digital image may unwittingly introduce unpleasant artifacts in the digital image. The methods of the prior art digital image editing tools often take a "one size fits all" approach to editing attributes in the image. For instance, all color, contrast or brightness values in a digital image may be increased or decreased by a set value or according to a linear relationship such as y = ax + b. Prior art digital image editing tools, therefore, may only provide acceptable results to those who are skilled in color theory.
It would be desirable to have a digital image editing tool that is easily usable by a user from novice to expert to evolve a digital image so that it progressively becomes subjectively "better" in the eyes of a user. This digital image editing tool should be simple to operate, powerful in its ability to alter the digital image in beneficial ways, and efficient. The present invention is directed to providing such a digital imaging tool.
Summary of the Invention In accordance with the present invention, computer-implemented methods and systems for enhancing a digital image are disclosed. Each image has an associated genotype, with the associated genotype having a plurality of genes. Each gene defines an attribute of the associated image such as brightness, color saturation, and contrast. The genotype may also contain genes that define methods for enhancing the digital image. For example, the gene may define a method for making a global color shift in the digital image or applying special effects to the digital image. The genotype may be a predefined template genotype or may be a genotype that is evolved through a selection process described below.
In accordance with the present invention, the selection process includes receiving a fitness rating for each of a plurality of digital images within a current generation of images. Each current generation of images generally includes a leader image and a plurality of child images. A copy of an original digital image is made the leader image of a first generation of images and the leader image is assigned a neutral leader genotype. A plurality of child genotypes are applied to the leader image with the application of each child genotype yielding a child image. After all the child images for the current generation have been created, the child images are sequentially compared to the leader image and a fitness rating is assigned to each child image. An improved image may be chosen at any time from any of the images in the current generation of images or the evolution can continue by making the current generation of images a previous generation of images and evolving a next generation of images to become the current generation of images.
In accordance with other aspects of the present invention, the method of evolution creates a next generation of images by evolving a plurality of next generation child images from a next generation leader image. The next generation child images are created by applying a plurality of next generation child genotypes that are evolved from the leader and child genotypes of the previous generation of images. To create the next generation child genotypes, the leader genotype from the previous generation of images is recombined with a corresponding child genotype from the previous generation of images. The next generation child genotype is then mutated. A child image from the previous generation that received the highest fitness rating is made the next generation leader image. Each of the next generation child genotypes is applied to the next generation leader image to produce a next generation of child images. Since the next generation leader image and the next generation child images become the current generation of images for the selection process, the method of evolution in accordance with the present invention may continue through unlimited generations.
In accordance with other aspects of the invention, a genotype includes a plurality of genes. Each gene includes one or more parameters. The genotypes, genes and parameters are evolved through methods for recombination and mutation. In many cases, the parameters are used to interpolate one or more gradation curves for altering the leader image, i.e., each parameter value may have a different gradation curve that is interpolated between a boundary gradation curve and a neutral gradation curve (y = x). The boundary gradation curves (and thus the interpolated gradation curves) may be linear or non-linear and may be predefined or generated specifically for the digital image being processed. The gradation curve defines the replacement values for current values in the digital image when the gene is applied to the digital image. In other cases, the method of the gene directly changes the current values in the digital image. In accordance with further aspects of the invention, any genotype may be stored independent of a digital image file and later applied to a digital image as a template genotype. The stored genotype comprises a relatively small data file that may be easily transmitted between computers across a communication network. Other template genotypes may be evolved by third parties or shipped with a product incorporating the present invention. In accordance with yet further aspects of the invention, a printer calibration genotype may be provided as a template genotype or evolved according to a method and system of the present invention. The printer calibration genotype adjusts the digital image for the particular printing characteristics of the printer being calibrated. Before the digital image is sent to the printer, the printer calibration genotype is applied to the digital image.
Brief Description of the Drawings The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
FIGURE 1 is a block diagram of a general purpose computer system suitable for implementing the present invention;
FIGURE 2A is a block diagram illustrating a common storage method for storing a digital image file;
FIGURE 2B is an illustration of a color space model used by an actual embodiment of the present invention;
FIGURES 3A, 3B and 4 are pictorial representations of a user interface provided by an actual embodiment of the present invention; FIGURE 5 is a functional flow diagram of a main processing loop, in accordance with the present invention;
FIGURE 6 is a functional flow diagram illustrating an evolution of a digital image, in accordance with the present invention;
FIGURE 7 is a functional flow diagram illustrating a process of creating a set of child images from a leader image using a method of genetic evolution, in accordance with the invention;
FIGURE 8 is a functional flow diagram illustrating a process for recombining genotypes, in accordance with the present invention;
FIGURE 9 is a functional flow diagram illustrating a process of recombining a leader genotype with a fitness-rated child genotype, in accordance with the present invention;
FIGURE 10A is a functional flow diagram illustrating the process of mutating a genotype, in accordance with the present invention;
FIGURE 10B is a mutation rate table containing values for mutation rates used in the process illustrated in FIGURE 10A; FIGURE 11 is a functional flow diagram illustrating the process of applying a genotype to a digital image, in accordance with the present invention;
FIGURES 12A-12C are a functional flow diagram illustrating the process for creating RGB gradation curves for gene 1 (global color shift) of the present invention; FIGURES 12D-12E are graphs of boundary gradation curves used by the process illustrated in FIGURES 12A-12C, in accordance with the present invention;
FIGURE 12F is an illustration of interpolating an interpolated gradation curve between a boundary gradation curve and a neutral gradation curve;
FIGURE 13A is a functional flow diagram illustrating a process of providing the RGB gradation curves for gene 2 (color temperature) in accordance with the present invention;
FIGURES 13B-13D are graphs of boundary gradation curves used in the method illustrated in FIGURE 13A;
FIGURE 14 is a functional flow diagram illustrating a method for specifying the RGB gradation curves for gene 3 (effects), in accordance with the present invention;
FIGURE 15A is a functional flow diagram illustrating the method for determining a blackpoint and whitepoint gradation curve for gene 8 (blackpoint/whitepoint), in accordance with the present invention; FIGURE 15B is a graph of an example blackpoint and whitepoint gradation curve;
FIGURE 16A is a functional flow diagram illustrating process of providing the RGB gradation curves for gene 4 (brightness), in accordance with the present invention; FIGURES 16B-16C are graphs of boundary gradation curves referenced by the processes of FIGURES 16A and 18;
FIGURE 17 is a functional flow diagram illustrating the process of providing a contrast gradation curve for gene 5 (contrast), in accordance with the present invention; FIGURE 18 is a functional flow diagram illustrating the process of adjusting a pixel's color saturation value using gene 6 (color saturation), in accordance with the present invention;
FIGURES 19A-19B are a functional flow diagram illustrating the method for adjusting the black level in primary colors using gene 7a (black level), in accordance with the present invention; FIGURES 19C-19D illustrate curves referenced by the method of FIGURES 19A-19B;
FIGURES 20A-20C are a functional flow diagram illustrating the method for selective color correction of a pixel using gene 7b (selective color correction), in accordance with the present invention;
FIGURE 21A is a functional flow diagram illustrating a process for preparing a contrast gradation curve for gene 9 (contrast gradation curve), in accordance with the present invention;
FIGURES 21B and 21C are pictorial representations of a histogram and a gradation curve, respectively, calculated by the process illustrated in FIGURE 21A, in accordance with the present invention;
FIGURE 22 is a functional flow diagram of a process for removing optical distortion in digital images (dewarp), in accordance with the present invention; and
FIGURES 23A-23C are a functional flow diagram illustrating a process for calibrating a printer using a genetic algorithm, in accordance with the present invention.
Detailed Description of the Preferred Embodiment FIGURE 1 illustrates a computer system 100 of conventional design that is suitable for practicing the present invention. The computer system includes a central processing unit (CPU) 110 that executes program instructions and manipulates digital data. The central processing unit 110 is coupled to a volatile memory 112 and a nonvolatile memory 114 that store the program instructions and data. The central processing unit 110 is coupled to a video monitor 116, which provides a human recognizable medium for the display of digital images, text and other data to a user. A keyboard 119 and a pointer device 117 (a "mouse") are coupled to the CPU 110 for providing input to the computer 100. A printer 118 may be coupled to the CPU 110 to provide printouts of images, text and other data. The CPU 110 may also receive input from devices such as a digital camera 120 or optical scanner 122. Both the digital camera 120 and the optical scanner 122 record photographic images as computer-readable digital image files.
A plurality of computers 100 may be connected to a communication network 124 such as a local area network (LAN), a wide area network (WAN), or global network such as the Internet. For the ease of discussion below, it will be assumed that only a user computer 100 and a remote computer 126 are connected to the communication network 124. A remote computer 126 is generally similar in configuration to that described for computer 100.
A digital image file 210 (FIGURE 2A), such as that created by the digital camera 120 or the optical scanner 122 (FIGURE 1), comprises data representing a plurality of pixels 212. When rendered on the video monitor 116 or the printer 118, the plurality of pixels visually form an image. In FIGURE 2A, the digital image file 210 is assumed to be 480 x 480 pixels in size. Of course, other digital image file sizes, formats and resolutions can be used when practicing the present invention.
An exemplary pixel 212 is shown in a partial view in FIGURE 2A. The pixel 212 is defined by a red (R) data value 214, a green (G) data value 216, and a blue (B) data value 218 (collectively the "RGB values"). Together, a red (R) data value 214, a green (G) data value 216, and a blue (B) data value 218 define a color for the pixel 212 from a set of colors available in a color space. A color space is a mathematical representation of a set of colors. In other words, by specifying an R value 214, a G value 216, and a B value 218, a color in the color space can be replicated for each pixel 212 on the video monitor 116, a printer 118, or other device.
An RGB color space is illustrated in FIGURE 2B as a hexagon 220, with each of the additive and subtractive primary colors indicated at a vertex. The primary colors red 222, green 224, and yellow 226 are known as the additive primary colors, and the primary colors magenta 228, cyan 230, and blue 232 are known as the subtractive primary colors. Using either of the additive primary colors 222, 224, 226 or the subtractive primary colors 228, 230, 232, any color within the color space can be specified. The number of colors that can be specified in the color space depends on the resolution of each RGB value 214, 216, and 218. The resolution is defined by the number of bits of data that are used to capture and store each of the RGB values: the more bits, the more colors that can be described. In the following discussion, it will be assumed that the RGB values have a resolution of eight bits, or 256 possible values. Each pixel, therefore, can specify more than 1.6 million different colors in the RGB color space defined by hexagon 220. However, the resolution and size (number of pixels) of the digital image are not important to the processes of the present invention. The digital image file 210 may have a resolution that is higher or lower depending on the quality requirements for the digital image, and practical matters such as storage requirements for the digital image file 210. Other color space models may be used without departing from the scope of the present invention. Examples of other color space encoding include YIQ, YUV, YCbCr, HSI, and HSV. All of these color spaces can be translated to the RGB color space 220 described herein using well-known conversion methods.
The digital image file 210 may be stored in many different formats that are well known in the art. For instance, the RGB values 214, 216, 218 can be individually stored for each pixel in the digital image file 210 in a bitmap, or these RGB values may be translated, compressed and stored in a variety of other formats such as TIFF, JPEG, or MPEG, among other known digital image file 210 formats. All file formats may be processed by the present invention.
A user interface 300 provided by an actual embodiment of the present invention is illustrated in FIGURES 3A-3B and 4. In FIGURE 3A, the user interface 300 includes a menu bar 310 that contains menus named File 312, Edit 314, View 316, Window 318, Special 320, and Help 322. The File menu 312 includes a drop-down menu that includes items for saving and retrieving the digital image file 210 to and from the non-volatile memory 114, including options to Open, Open Special, Close, Save, Save As, and Save Genotype As. The Open Special option will be discussed further with reference to FIGURE 3B. A genotype may be saved in a separate data file using the Save Genotype As option in this menu (genotypes are discussed below beginning with the discussion of image evolution in FIGURE 5). The digital image file 210 may also be loaded into the application controlling the user interface 300 using file menu options to Acquire the image from a device such as scanner 122 (FIGURE 1) or from another device using a Select Source menu option. The image may be printed on the printer 118 using File menu options for Print, Print Preview (on the display), Printer Setup, and Printer Calibration. Printer calibration is described below with reference to FIGURES 23A-C. The user may also select an option for Preferences or may select an Exit option from the drop-down menu to end the application.
The Edit menu 314 includes options to Undo the Last Action, Perform a Dewarp on the Image, to Start Evolution, and to Rotate Left, Rotate Right, Resize Image, Crop Image, and to Remove Red Eyes. The Perform a Dewarp option is discussed below with reference to FIGURE 22 and the Start Evolution option is discussed below beginning with FIGURE 6.
The View menu 316 allows for the selection of user interface elements that the user wishes to appear in user interface 300. These user interface elements include a Toolbar menu item (toolbar 324), a Status Bar menu item (status bar 326), a Navigator window menu item (navigator 328), and Genotypes window menu item (genotypes 330). The View menu 316 also provides the options to Compare to Original and to present the Optimal View of the image.
The Special menu 320 allows options to adjust the Color Temperature of the image (FIGURE 13A) and the Genotype Intensity (FIGURE 6). As will be better understood from the following discussion, this is accomplished by the use of a slider that goes from 0 % to 100 %. All genes are interpolated between their current value (= 100 %) and their neutral value (= 0 %); a minimal sketch of this scaling appears below. The Window and Help menus 318 and 322 offer menu items well known to those skilled in the art and will not be discussed further. Referring to FIGURE 3A, the user interface 300 has an image display area 332 that displays a digital image file 210 that has been opened for enhancement. In the remainder of this discussion, the digital image file 210, as it exists when loaded into the present invention, will be referred to as a parent image 334. A miniature (thumbnail) version of the parent image 334 is displayed in a navigator 328 window. By using a slider control 336, a minus command button 338 or a plus command button 340, a user can select a percentage of the total image and only a portion of the parent image 334 will be displayed in the display area 332.
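The Genotype Intensity slider described above can be read as a linear interpolation of every gene parameter between its neutral value (0 %) and its current value (100 %). The following Python sketch is only one plausible reading of that scaling; it assumes gene parameters are signed integers whose neutral value is 0.

    def scale_genotype(genotype, intensity_percent, neutral=0):
        # Interpolate each gene parameter between its neutral value (slider at 0 %)
        # and its current value (slider at 100 %).
        factor = intensity_percent / 100.0
        return [round(neutral + (p - neutral) * factor) for p in genotype]

    # At 50 % intensity every gene parameter has half of its current effect.
    half_strength = scale_genotype([40, -20, 0, 126], 50)   # -> [20, -10, 0, 63]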
The genotypes list box 330 contains a list of template genotypes that may be applied to the parent image 334 to change the parent image 334 according to the predefined genes and gene parameters of the template genotype selected. For instance, if the inverse (INV) template genotype 335 is selected from the list box and an Apply command button 340 is activated, the INV template genotype 335 would be applied (FIGURE 11) to the parent image 334 and would result in an evolved image replacing the parent image in the display area 332. In the example shown, the evolved image would appear as a black background with a white outline of a sailboat. If a Reject control button 342 is activated, the parent image 334 will revert to the state it was in just before the INV template genotype 335 was applied.
The toolbar 324 includes command buttons for many of the functions described as menu items in the menus 310 described above. In particular, a Dewarp command button 344 activates the image Dewarp process illustrated in FIGURE 22, a color temperature button 346 adjusts the color temperature as illustrated in FIGURE 13A, an image cropping command button 348 activates a function for cropping the parent image 334, and a remove red eyes command button 350 activates a remove red eyes function. The user interface presented when the Open Special option is invoked from the drop-down File menu is illustrated in FIGURE 3B. The Open Special user interface 360 includes a directory tree window 362 for selecting the directory 364 containing image data files which may be imported into the present invention. Each image 366A, 366B, 366C, 366D, 366E . . . stored in the directory 364 is displayed as a thumbnail image that, when selected (as indicated by a dark border surrounding the image), is opened for editing by the present invention by activating an Open button 368. A drop-down list box 370 lists available template genotypes. When an available template genotype is selected, the selected thumbnail image (or images) are changed by the selected template genotype. When loaded, the loaded image is changed by the selected template genotype.
The present invention quickly and efficiently improves the quality of digital images through evolution. The evolution process is initiated by activating the Start Evolution command button 352 (FIGURE 3A). Referring to FIGURE 4, when the Start Evolution command button 352 is actuated, the image display area 332 changes to display a copy of the parent image 334 as a leader image 334A in the left-hand portion 417 of the image display area 332 and one of a set of four child images 334B, 334C, 334D, and 334E in the right-hand portion 415 of the image display area 332. Thumbnail images of the leader image 334A and of the four child images 334B-E are displayed in a legend area 410 together with an indication 412 of the generation to which the images 334A-E belong. Each child image 334B-E is sequentially displayed in the right-hand portion 415 of the image display area 332 for comparison with the leader image 334A that is displayed in the left-hand portion 417 of the image display area 332. The child image 334B is compared to the leader image 334A and fitness rated by selecting a portion of a graduated control 414. The graduated control 414 has a "no better or worse" portion 416 and a graduated portion 418 associated with values from 1% to 100% better. By using the pointer device 117 to select a portion of the graduated control 414, a fitness value equal to the percentage selected from the graduated control 414 (and displayed 419 below the graduated control 414) is associated with the child image 334B. Once a fitness value has been associated with the child image 334B, the next child image 334C is displayed in the right-hand portion 415 of the image display area 332 for comparison with the leader image 334A. The process of assigning a fitness rating for images 334D and 334E continues in the same manner just described. The status bar 326 displays an indication of the current generation and the number of the child image 334B-E that is currently being compared.
At any point during the comparison process, the evolution can be stopped by activating a Stop Evolution command button 420. In response to activating the Stop Evolution command button 420, a user dialog is displayed asking whether the left image (leader image 334A) or the right image (child image 334B-E) should be kept. The kept image can then be saved as a new digital image file 210 or otherwise manipulated by the user's computer 100.
If the evolution of the image is allowed to continue, a next generation is evolved and a next generation leader image 422A is displayed in the left-hand portion 417 of the video display area 332. As is discussed below in more detail, the next generation leader image is created from a (new) leader genotype that is produced by recombining the (previous) leader genotype with the genotype of the highest fitness-rated child from the last generation. Four next generation child images 422B-E are evolved and displayed in the legend area 410 with an indication 412 that the second generation is being viewed. The child images 422B-E are then compared to the leader image using the graduated control 414 to assign fitness ratings as is discussed above.
It should be noted that what constitutes the "best" or "most fit" image (reflected by the value that is assigned to each image 334B-E, 422B-E, etc.) is subjective and may differ between any two users because of each user's perception of color and personal preferences. The evolution process may be continued through any number of generations until the user finds the best image according to that user's subjective tastes. As will become apparent in the following discussion, the method of the present invention for very quickly evolving a "best" image (according to the individual tastes of the user) is based in part on the user's indication of a fitness rating of each of the child images 334B-E, 422B-E, etc.
The methods of the present invention for improving a parent image through evolution will now be discussed in detail beginning with FIGURE 5. A parent image is imported 510 from the non-volatile memory 114 (or other source such as the digital camera 120 or optical scanner 122) into a workspace available to the present invention in the volatile memory 112. When imported 510, the encoded information describing the parent image (e.g., in formats such as bitmap, JPEG, TIFF, etc.) is translated into equivalent red, green, and blue (RGB) values for each pixel 212 in the digital image 210. As will be understood by those skilled in the art, the translation of the information describing the parent image into the RGB color space 220 is not absolutely necessary and the method of the present invention could be applied equally to many other color space models that are well known within the art.
Once the parent image is converted into pixels described by RGB values, the parent image can be modified according to the present invention by image evolution 512, applying a template genotype 514, or conducting an image Dewarp 516. Any digital image 210 can be accurately printed following a printer calibration 518 according to the method of the present invention. These four processes 512, 514, 516, 518 will be discussed below. Any or all of the above methods 512, 514, 516, 518 may be conducted in any order on a parent image until the program is exited 520.
If image evolution is selected 522, evolution is performed 512 beginning with the process illustrated in FIGURE 6. Referring to FIGURE 6, the evolution method 512 sets 610 a leader genotype to a neutral genotype. A genotype is a set of genes (described below) that are used to alter a digital image in a number of ways. A neutral genotype contains genes that do not affect the digital image to which they are applied. The evolution of the first generation of child images may be produced using predefined genotypes or evolved genotypes. If predefined genotypes are selected 612, then the method 512 selects a plurality of predefined template genotypes that are used as the initial child genotypes. Each child genotype is applied 616 to the leader image to produce a corresponding child image. The child images are displayed 618 and the method 512 receives the fitness ratings of each of the child images from the user (as was described above with reference to FIGURE 4).
If a request has not been received to end the evolution (e.g., by pressing the Stop Evolution command button 420), the evolution continues 620 by recombining 622 the genotype of the child image that received the highest fitness rating with the leader genotype to form a next generation leader genotype. As is illustrated beginning in FIGURE 7, a next generation of child genotypes is evolved 624 using this next generation leader genotype. Alternatively, if the evolution is ended 620, the method 512 is done 626.
Referring to FIGURE 7, the evolve next generation method 710 begins by recombining 712 the leader genotype with a child genotype from the previous generation into a next generation child genotype. Recombination is discussed below with reference to FIGURE 8. The next generation child genotype is then mutated 714 in the manner illustrated in FIGURE 10 and described below. Having been recombined and mutated, the next generation child genotype is applied 716 (FIGURE 11) to the currently loaded image (the leader image) to produce a next generation child image. The next generation child genotype is then associated with the next generation child image. As discussed above, an actual embodiment of the present invention produces four child images in each generation for comparison with that generation's leader image. Of course, greater or fewer child images can be produced in each generation. For each child genotype from the previous generation, the evolve next generation method 710 repeats (starting with recombining 712 the child genotype with the leader genotype into a next generation child genotype). Once all child genotypes have been processed, a decision 720 directs the evolve next generation method 710 to return 722 all the next generation child images, together with their associated next generation child genotypes, as child images for presentation as a (new) current generation in the selection process described above (FIGURE 4). The method 710 is then done 724.
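The control flow of the evolve next generation method 710 can be pictured with the short Python sketch below. The helpers recombine, mutate and apply_genotype stand in for the recombination 712, mutation 714 and application 716 steps (sketches of those helpers appear with the corresponding methods below); this is an illustration of the loop only, not the patent's code.

    def evolve_next_generation(leader_image, leader_genotype, rated_children,
                               generation, mutation_rate_table):
        # rated_children: list of (child_genotype, fitness_rating) pairs from the
        # previous generation, rated as described with reference to FIGURE 4.
        next_generation = []
        for child_genotype, fitness in rated_children:
            new_genotype = recombine(leader_genotype, child_genotype, fitness)      # step 712
            new_genotype = mutate(new_genotype, generation, mutation_rate_table)    # step 714
            child_image = apply_genotype(leader_image, new_genotype)                # step 716
            next_generation.append((child_image, new_genotype))
        return next_generation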
The method 810 for recombining the leader genotype with each child genotype to produce a plurality of next generation child genotypes is illustrated in FIGURE 8. The method 810 retrieves 812 a leader genotype. (It is to be understood that each of the methods described herein may retrieve, create or be passed data.) A child genotype is then retrieved 814. The individual genes of the leader genotype are recombined 816 with the genes of the child genotype into a next generation genotype, in the manner illustrated in FIGURE 9 and described below. If another child genotype is to be recombined, a decision 818 returns the method 810 to the retrieval 814 of another child genotype and the process is repeated. When all of the child genotypes have been recombined with the leader genotype, the next generation genotypes are returned 820 as the (new) child genotypes to block 712 in FIGURE 7. The method 810 is then done 822.
Referring to FIGURE 9, the method 910 of the present invention for recombining the leader genotype with a child genotype begins by retrieving 912 a leader gene from a set of genes contained in leader genotype. A corresponding child gene is also retrieved 912 from the child genotype. Each gene may contain one or more parameters which are retrieved 914 one at a time from the leader gene and then the corresponding child gene. A next generation gene parameter is computed 916 by taking a weighted average of the leader gene parameter and the child gene parameter. In this computation 916, the child gene parameter is weighted by a factor that equals the fitness rating that was assigned to the child genotype (as discussed above with reference to FIGURE 4). For instance, if the resolution was eight bits (256 values), the weighting formula would be:
NextGenerationGeneParameter = (LeaderGeneParameter + (FitnessRating * ChildGeneParameter)) / 127
This weighted average formula produces a next generation gene parameter that is stored 918 in a next generation gene. A decision 920 is made to determine if there is another gene parameter to process. If so, the method 910 returns to retrieve 914 the (next) leader gene parameter from the leader gene and a corresponding child gene parameter from the child gene. If all gene parameters have been processed, the decision 920 directs the method 910 to store 922 the next generation gene in the next generation genotype. If there is another leader gene to process, a decision 924 returns the process to the beginning of method 910 to retrieve 912 the next leader gene and a corresponding child gene. If the decision 924 determines that there is not another leader gene to process, then the method 910 is done 926.
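Read as code, the weighted average above might be sketched as follows in Python; treating the genotype as a flat list of gene parameters and the fitness rating as a value on the same 0 to 127 scale as the gene parameters are simplifying assumptions for illustration.

    def recombine_parameter(leader_param, child_param, fitness_rating):
        # Weighted average from the formula above: the child parameter is weighted
        # by the fitness rating the user assigned to the child image.
        return int((leader_param + fitness_rating * child_param) / 127)

    def recombine(leader_genotype, child_genotype, fitness_rating):
        # Apply the per-parameter rule to every corresponding pair of parameters.
        return [recombine_parameter(lp, cp, fitness_rating)
                for lp, cp in zip(leader_genotype, child_genotype)]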
The method 1010 for mutating a genotype is shown in FIGURE 10A. The method 1010 retrieves a genotype to be mutated from a block 714 in FIGURE 7. A gene from the genotype is retrieved 1012 and a mutation rate is obtained for the gene from a mutation rate table (FIGURE 10B). The mutation rate table provides the mutation rate to use for the gene being processed (column) and the number of the generation currently being processed (row). For instance, if gene 4 is being mutated from generation 5, cell 1016 would be selected to obtain a value of "18" hexadecimal. If the evolution has progressed through more than 16 generations, the mutation rate is selected from the generation 16 column according to the gene's row. For instance, all mutations after the 16th generation for gene 5 will have a mutation rate found in cell 1018 that indicates a hexadecimal value of "1A".
Returning to FIGURE 10A, the mutation rate is multiplied 1022 by a value returned by a Gauss-random function to produce a mutation value. The Gauss-random function returns a value that makes small changes to the mutation rate very probable while big changes will seldom occur. Gauss-random functions are well known to those skilled in the art and will not be further discussed. The mutation value is then added to the gene to produce a mutated gene. If the gene has multiple parameters, all parameters are mutated. The mutated gene is then stored 1026 in a mutated genotype. If another gene in the genotype needs to be processed, the method 1010 is directed 1028 to retrieve 1012 the next gene in the genotype. If another gene is not present in the genotype to be mutated, the method 1010 is directed 1030 to return 1032 the mutated genotype as the next generation genotype. The method 1010 is then done 1034.
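One way to read the mutation method 1010 in Python is sketched below; the table orientation (indexed here by gene and then by generation, capped at 16) and the use of a standard normal sample scaled by the mutation rate are assumptions drawn from the description, not the literal implementation.

    import random

    def mutate(genotype, generation, mutation_rate_table):
        # mutation_rate_table[gene_index][generation_index] holds the rate for a
        # gene in a given generation; after 16 generations the last entry is reused.
        generation_index = min(generation, 16) - 1
        mutated = []
        for gene_index, parameter in enumerate(genotype):
            rate = mutation_rate_table[gene_index][generation_index]
            mutation_value = rate * random.gauss(0.0, 1.0)  # small changes likely, large ones rare
            mutated.append(int(parameter + mutation_value))
        return mutated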
The method 1110 for applying a genotype to a digital image is shown in FIGURE 11. The leader (currently loaded) image is retrieved 1112. The child genotype to be applied to the leader image is also retrieved 1114. A pixel is retrieved 1116 from the currently loaded image and a gene is retrieved 1118 from the genotype. The gene is applied 1120 to the pixel according to a method associated with each gene, as is discussed below in detail with reference to FIGURES 12A-21.
A decision 1122 determines if there is another gene in the genotype that still needs to be processed. If so, the method 1110 directs 1124 the retrieval 1118 of the next gene from the genotype. If there is not another gene in the genotype to be processed, the decision 1122 directs 1126 the method 1110 to store 1128 the newly altered pixel in a child image. A decision 1130 determines if there is another pixel in the currently loaded image that still needs to be processed and, if so, directs 1132 the method 1110 to repeat starting with retrieving 1116 the next pixel from the currently loaded image. If there is not another pixel in the currently loaded image, the decision 1130 returns 1136 the child image and the associated genotype to the calling method. The method 1110 is then done 1138.
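The pixel-and-gene loop of FIGURE 11 can be summarized with the Python sketch below; apply_gene is an assumed stand-in for the per-gene methods described with reference to FIGURES 12A-21, and the pixel representation is illustrative.

    def apply_genotype(image_pixels, genotype):
        # image_pixels: sequence of (R, G, B) tuples; genotype: sequence of genes.
        # apply_gene(pixel, gene) is an assumed helper standing in for the
        # per-gene methods of FIGURES 12A-21.
        child_pixels = []
        for pixel in image_pixels:
            for gene in genotype:
                pixel = apply_gene(pixel, gene)
            child_pixels.append(pixel)
        return child_pixels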
As mentioned above, FIGURES 12A-21 detail a set of nine genes in a genotype provided by an actual embodiment of the invention. Other genes may be added to the genotype and still be within the scope and spirit of the present invention. For instance, a gene could be developed to create artistic effects for the image using the methods and systems described below. While the order that the genes are applied does not have to be strictly defined, the genes below are discussed in the order that they are applied in an actual embodiment of the invention.
Many of the genes described below include methods for defining a new gradation curve for each of the R, G, and B values. The application of the gradation curves specified by these genes occurs in FIGURE 11. Using the value of the gene parameter, a gradation curve (e.g., FIGURE 12D) is generated. This generated gradation curve is then referenced to assign a new value (the y-coordinate) to the R 214, G 216, or B 218 value for each pixel in the digital image. The gradation curves shown in the FIGURES are illustrated as continuous curves both to more clearly point out the concept of the present invention and to point out that the methods and systems of the present invention apply equally to digital images of all resolutions, e.g., a gradation curve may be applied to a digital image with 8-bit resolution equally as well as to a digital image with 32-bit resolution simply by calculating a greater range of data points for (or from) the gradation curve. In an actual embodiment of the invention, interpolating a gradation curve (FIGURE 12F) is actually computing a new value (y-coordinate) for each possible old value (x-coordinate). Since this discussion assumes an 8-bit resolution, 256 new values (y-coordinates) are calculated, one for each of the possible old values (256 possible values). In some circumstances it may be more efficient to calculate all of these new values at once and store the new values (y-coordinates) in a lookup table keyed by the old value (x-coordinate). In other circumstances it may be more efficient to calculate a single new value (y-coordinate) as needed. Unless described otherwise below, the new value (y-coordinate) is an interpolated value calculated between a predefined or dynamically calculated boundary curve and a neutral curve (y = x).
It should be noted that the predefined boundary gradation curves described for the use of the genes discussed herein have been determined empirically and may be changed as further experience dictates. The same is true for the predefined data points used by some genes to dynamically calculate boundary gradation curves.
In FIGURE 12F, an example of interpolating a gradation curve is illustrated including: a neutral gradation curve (y = x) 1202, an example of a positive boundary gradation curve 1204, and an interpolated gradation curve 1206. The interpolated gradation curve 1206 in this example represents a weight (or parameter) value of 89. If the gene is defined by an 8-bit signed integer, giving 127 possible positive values (and 127 possible positive interpolated gradation curves), the positive boundary gradation curve 1204 represents the maximum parameter value (127). A gradation curve for any positive parameter value may be interpolated in between the neutral gradation curve 1202 (y = x) and the positive boundary gradation curve 1204 by using a weighted interpolation formula to compute a new y-coordinate (ynew) for the interpolated gradation curve 1206 from the boundary gradation curve (yboundary) 1204 for each possible RGB value (x). For each x (256 values):
ynew = ((yboundary - x) * parameter) / 127 + x
The interpolated gradation curve 1206 therefore comprises 256 new values (ynew) that may be stored in any convenient data structure for use in the methods and systems described herein, such as the lookup table described above or each ynew point may be calculated as needed.
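Under the 8-bit assumption used here, an interpolated gradation curve is just a 256-entry lookup table. The Python sketch below builds such a table from a boundary curve and a positive parameter value using the weighted interpolation formula above; the clamping to the 0 to 255 range is an added assumption.

    def interpolate_curve(boundary_curve, parameter):
        # boundary_curve: 256 y-values of a boundary gradation curve.
        # parameter: weight in the range 0..127; 0 reproduces the neutral curve
        # y = x and 127 reproduces the boundary curve itself.
        table = []
        for x in range(256):
            y_new = (boundary_curve[x] - x) * parameter / 127 + x
            table.append(min(255, max(0, round(y_new))))
        return table

    # A boundary curve equal to the neutral curve yields y = x for any parameter.
    assert interpolate_curve(list(range(256)), 89) == list(range(256))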
The global color shift of gene 1 is accomplished by calculating a gradation curve for each of the RGB values. Gene 1 has three parameters, with the first parameter specifying which two of the three RGB gradation curves will be interpolated. If the gene 1 first parameter is 1, new red and green gradation curves are interpolated and the blue gradation curve is assigned a neutral gradation curve 1202. When the first parameter of gene 1 is equal to 2, new green and blue gradation curves are created and the red gradation curve is assigned the neutral gradation curve 1202. If the first parameter of gene 1 is equal to 3, new red and blue gradation curves are calculated and the green gradation curve is assigned the neutral gradation curve 1202. The second and third parameters of gene 1 are specified by a signed 8-bit integer with a range of -128 to +127. A gradation curve is calculated for every possible value of parameter 2 (and 3) by calculating a weighted interpolation between a predefined boundary curve and the neutral curve. Referring to FIGURE 12A, a method 1210 of gene 1 is illustrated to produce red, green and blue gradation curves for use in the global adjustment of color in the image. Two of these gradation curves are interpolated using the second and third parameters of gene 1. The remaining color is assigned the neutral gradation curve (y = x). If gene 1 of a leader genotype has a first parameter equal to one, a decision 1212 directs 1214 the process to a decision 1216 that checks if the second parameter is positive. If the second parameter is positive, the decision 1216 directs 1218 a method 1220 to use the second parameter of the leader genotype gene 1 as a weight value to interpolate a new red global color shift gradation curve between a positive global color shift boundary gradation curve (FIGURE 12D) and the neutral gradation curve 1202 (FIGURE 12F). Returning to decision 1216, if the leader genotype gene 1 has a second parameter that is negative, the method 1210 continues 1222 by using the leader genotype gene 1 second parameter as the weight value to interpolate a new red global color shift gradation curve 1224 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve 1202 (FIGURE 12F).
Following the interpolation of the new red global color shift gradation curve 1220 or 1224, a decision 1226 is made whether the third parameter of the leader genotype gene 1 is positive. If so, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new green color shift gradation curve between the positive global color shift gradation curve (FIGURE 12D) and the neutral gradation curve 1202 (FIGURE 12F). If in the decision 1226 it is determined that the third parameter is not positive, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate 1232 a new green global color shift gradation curve between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F). A new blue gradation curve is assigned 1233 the neutral gradation curve (FIGURE 12F).
After the RGB gradation curves have been interpolated or assigned 1230, 1232 or 1233, or if the first parameter of the leader genotype gene 1 is not equal to one (decision 1212), the method 1210 continues as illustrated in FIGURE 12B. If gene 1 of a leader genotype has a first parameter equal to two, a decision 1234 directs 1236 the process to a decision 1238 that checks if the second parameter is positive. If the second parameter is positive, the decision 1238 directs 1240 the method 1210 to use the second parameter of the leader genotype gene 1 as a weight value to interpolate 1242 a new green global color shift gradation curve between the positive global color shift boundary gradation curve (FIGURE 12D) and the neutral gradation curve 1202 (FIGURE 12F). Returning to decision 1238, if the leader genotype gene 1 has a second parameter that is negative, the method 1210 continues 1244 by using the leader genotype gene 1 second parameter as the weight value to interpolate a new green global color shift gradation curve 1246 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F).
Following the interpolation of the new green global color shift gradation curve 1242 or 1246, a decision 1248 is made whether the third parameter of the leader genotype gene 1 is positive. If so, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue color shift gradation curve 1252 between the positive global color shift gradation curve (FIGURE 12D) and the neutral gradation curve (FIGURE 12F). If in the decision 1248 it is determined that the third parameter is not positive, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue global color shift gradation curve 1256 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve 1202 (FIGURE 12F). A new red gradation curve is assigned 1257 the neutral gradation curve 1202 (FIGURE 12F). After the RGB gradation curves have been interpolated or assigned 1252,
1256 or 1257, or if the first parameter of the leader genotype gene 1 is not equal to two (decision 1234), the method 1210 continues as illustrated in FIGURE 12C. If gene 1 of a leader genotype has a first parameter equal to three, a decision 1260 directs 1262 the process to a decision 1264 that checks if the second parameter is positive. If the second parameter is positive, the decision 1264 directs 1265 the method 1210 to use the second parameter of the leader genotype gene 1 as a weight value to interpolate 1266 a new red global color shift gradation curve between the positive global color shift boundary gradation curve (FIGURE 12D) and the neutral gradation curve (FIGURE 12F). Returning to decision 1264, if the leader genotype gene 1 has a second parameter that is negative, the method 1210 continues by using the leader genotype gene 1 second parameter as the weight value to interpolate a new red global color shift gradation curve 1268 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F).
Following the interpolation of the new red global color shift gradation curve 1266 or 1268, a decision 1270 is made whether the third parameter of the leader genotype gene 1 is positive. If so, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue color shift gradation curve 1274 between the positive global color shift gradation curve (FIGURE 12D) and the neutral gradation curve (FIGURE 12F). If in the decision 1270 it is determined that the third parameter is not positive, the third parameter of the leader genotype gene 1 is used as the weight value to interpolate a new blue global color shift gradation curve 1278 between the negative global color shift gradation curve (FIGURE 12E) and the neutral gradation curve (FIGURE 12F). A new green gradation curve is assigned 1279 the neutral gradation curve (FIGURE 12F). Gene 2 adjusts the color temperature of the child image 334B-E. Color temperature may be thought of as simulating a "warmer" or "colder" ambient light in the digital image. Gene 2 has a single parameter defined by a signed 8-bit integer (values from -128 to +127). Altered by gene 2, a negative parameter value results in the image looking warmer (lower color temperature caused by a shift toward orange-red in the image) and a positive parameter will make the image look colder (higher color temperature caused by a shift toward blue in the image).
The method for calculating the RGB gradation curves 1310 to adjust the color temperature of an image is shown in FIGURE 13A. In an actual embodiment of the invention, a single new value (y-coordinate) is calculated for each of the RGB gradation curves, as needed, to adjust a single pixel instead of calculating and storing all of the new values (y-coordinates) of an interpolated gradation curve. However, the following will continue to refer to these new values as being part of a gradation curve to better illustrate the concept behind the method 1310 of gene 2. The parameter value of gene 2 is used as a weight value to interpolate 1312 a red color temperature gradation curve ("RTemp") 1312 between a low temperature red boundary gradation curve (FIGURE 13B) and the neutral gradation curve (FIGURE 12F). Next, a green color temperature gradation curve ("GTemp") is interpolated 1314 using the gene 2 parameter as the weight value in a weighted interpolation between a low temperature green boundary gradation curve (FIGURE 13C) and the neutral gradation curve (FIGURE 12F). Finally, a blue color temperature gradation curve ("BTemp") is interpolated 1316 by using the gene 2 parameter as the weight value in a weighted interpolation between a low temperature blue boundary gradation curve (FIGURE 13D) and the neutral gradation curve (FIGURE 12F).
A decision 1318 determines if the gene 2 parameter is positive and directs the method 1310 to find 1320 a new low temperature RGB value (y-coordinate) corresponding to the old RGB value (x-coordinate) from each of the RGB low temperature gradation curves 1312, 1314, 1316, i.e.:
R = RTemp[x]
G = GTemp[x]
B = BTemp[x]
If the decision 1318 determines that the gene 2 parameter is not positive, the new RGB values are obtained 1322 from a set of high temperature RGB gradation curves that mirror the low temperature RGB gradation curves about the neutral gradation curve. The high temperature RGB gradation curves may be expressed mathematically as:
R = x - (RTemp[x] - x)
G = x - (GTemp[x] - x)
B = x - (BTemp[x] - x)
Alternatively, the high-temperature RGB boundary gradation curves may be defined separately from the low-temperature boundary gradation curves and the high-temperature RGB gradation curves interpolated between those separately defined high-temperature RGB gradation curves and the neutral gradation curve (FIGURE 12F). The new RGB values obtained in blocks 1320 or 1322 are then returned 1324 as a new pixel for the child image.
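For a single pixel, the two branches of the color temperature method 1310 reduce to a table lookup or its mirror image about the neutral curve. The Python sketch below assumes the three low-temperature curves are 256-entry lookup tables (for example, built with the interpolation sketch above); it illustrates the formulas rather than the patent's code, and the clamping is an added assumption.

    def clamp(value):
        return min(255, max(0, value))

    def adjust_color_temperature(pixel, r_temp, g_temp, b_temp, parameter_positive):
        # pixel: (R, G, B); r_temp, g_temp, b_temp: 256-entry low-temperature curves.
        r, g, b = pixel
        if parameter_positive:
            # Positive parameter: read the low-temperature curves directly (block 1320).
            return (r_temp[r], g_temp[g], b_temp[b])
        # Otherwise mirror each curve about the neutral curve y = x (block 1322).
        return (clamp(r - (r_temp[r] - r)),
                clamp(g - (g_temp[g] - g)),
                clamp(b - (b_temp[b] - b)))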
Gene 3 provides RGB gradation curves that define special effects. A method 1410 of gene 3 is illustrated in FIGURE 14. Gene 3 has three parameters that specify a gradation curve for each red, green, and blue value. The first parameter specifies 1412 a predefined gradation curve defining an effect to the red component of each pixel. For example, a predefined effect gradation curve could provide an inverse for the red value by having the equation y = 255 - x. The second parameter of gene 3 specifies 1414 a predefined effect gradation curve that defines an effect to the green component of the pixel. A third parameter specifies 1416 a predefined effect gradation curve that defines an effect to the blue value of each pixel. Depending on the effect desired, the first, second and third parameters may each specify the same gradation curve or a different gradation curve for each of the red, green and blue values. The method 1410 is then done 1418.
Gene 8 determines a blackpoint and whitepoint gradation curve that is applied to each of the RGB values. The method 1510 of gene 8 is shown in FIGURE 15A, where the brightest color tone in the image is assigned 1512 as the whitepoint and then the darkest color tone in the image is assigned 1514 as the blackpoint. Using the coordinates (0, whitepoint) and (255, blackpoint), a linear gradation curve is computed 1516 between these coordinates. The method 1510 of gene 8 is then done 1518. An example of the gradation curve produced by gene 8 is shown in FIGURE 15B with the linear gradation curve 1520 extending between the blackpoint 1522 and the whitepoint 1524.
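Gene 8's curve is a straight line through two endpoints determined from the image's extreme tones. Because the coordinate pairing stated above is ambiguous, the Python sketch below simply builds a 256-entry linear curve between any two given points; the example endpoints shown (darkest tone mapped toward 0, brightest toward 255) are an assumption for illustration, not the patent's stated mapping.

    def linear_curve(x0, y0, x1, y1):
        # 256-entry gradation curve lying on the straight line through
        # (x0, y0) and (x1, y1), clamped to the 0..255 range.
        slope = (y1 - y0) / (x1 - x0)
        return [min(255, max(0, round(y0 + slope * (x - x0)))) for x in range(256)]

    # Illustrative endpoints only: blackpoint 12 and whitepoint 240.
    curve = linear_curve(12, 0, 240, 255)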
The RGB gradation curves calculated by gene 4 affect the brightness of the image. Gene 4 has one parameter that is a signed 8-bit integer, ranging from -128 (maximum brightness) to +127 (minimum brightness). In an actual embodiment of the invention, the same calculated brightness gradation curve is used for all the RGB values. The method 1610 for calculating the brightness gradation curve of gene 4 is shown in FIGURE 16A and begins with a decision 1612 that determines whether the value of the gene 4 parameter is positive. If yes, a brightness gradation curve is interpolated 1614 by using the value of a gene 4 parameter as the weight value in a weighted interpolation between a positive brightness/color saturation boundary curve (FIGURE 16B) and the neutral gradation curve (FIGURE 12F). If the decision 1612 determines that the value of the gene 4 parameter is not positive, the weighted interpolation is made using the gene 4 parameter as the weight value between a negative brightness/color saturation boundary gradation curve (FIGURE 16C) and the neutral gradation curve (FIGURE 12F). After calculating the brightness gradation curve 1614 or 1616, the method 1610 is done 1618.
The contrast gradation curve produced by gene 5 is applied equally to each of the RGB values to alter contrast; the same curve serves as the red, green and blue gradation curve. Gene 5 has one parameter that is a signed 8-bit integer ranging from -128 (lowest contrast) to +127 (maximum contrast). The method 1710 for finding the contrast gradation curve of gene 5 begins by retrieving 1712 the contrast parameter of gene 5. The contrast parameter is used as the weight value to interpolate 1714 a new contrast gradation curve between a boundary gradation curve generated by gene 9 (discussed below with reference to FIGURE 21) and the neutral gradation curve (FIGURE 12F). Next, a new contrast value is determined 1716 directly from the new contrast gradation curve for each of the RGB values of the pixel. The method 1710 is then done 1718.
The method 1810 for adjusting the color saturation of the image using gene 6 is illustrated in FIGURE 18. Gene 6 has a single parameter that comprises a signed 8-bit integer in an actual embodiment of the invention. The RGB values of the pixel being passed to method 1810 are each translated 1814 from their additive primary color values to the equivalent subtractive primary colors using the formulas:
C (cyan) = 255 - R (red)
M (magenta) = 255 - G (green)
Y (yellow) = 255 - B (blue)
The black level (K) is assigned 1816 the minimum value of the C, M or Y:
K = min(C,M,Y)
This black level (K) is then temporarily removed 1818 from the subtractive primary colors using the formulas: c (cyan) = C-K m (magenta) = M-K y (yellow) = Y-K
The subtractive primary colors are adjusted 1820 (FIGURE 11) using the Positive and Negative Boundary Brightness/Color Saturation Gradation Curves illustrated in FIGURES 16B and 16C (discussed above), weighted by the parameter of this gene. The black level (K) is then added back 1822 into the subtractive primary colors:
C = c+K
M = m+K
Y = y+K
The subtractive primary color values are translated back 1824 to their equivalent additive primary colors using the formulas:
R= 255 - C G = 255 - M B = 255 - Y
The method 1810 then returns 1826 the new pixel for storage in the adjusted image and is done 1830.
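A minimal sketch of the gene 6 flow just described for one pixel: convert RGB to subtractive primaries, set aside the black level, adjust the remaining c, m, y values through a gradation curve, restore the black level, and convert back. The saturation curve is assumed to be a 256-entry table already interpolated between a boundary curve and the neutral curve (see the interpolation sketch under gene 4); the clamping to 255 is an added safeguard.

```python
import numpy as np

def adjust_saturation(r: int, g: int, b: int, saturation_curve: np.ndarray) -> tuple:
    """Gene 6 sketch: adjust color saturation of one RGB pixel."""
    # Additive -> subtractive primaries.
    c, m, y = 255 - r, 255 - g, 255 - b
    # Black level is the common component of C, M and Y; remove it temporarily.
    k = min(c, m, y)
    c, m, y = c - k, m - k, y - k
    # Adjust the chromatic part through the weighted gradation curve.
    c, m, y = int(saturation_curve[c]), int(saturation_curve[m]), int(saturation_curve[y])
    # Restore the black level (clamped) and convert back to additive primaries.
    c, m, y = (min(v + k, 255) for v in (c, m, y))
    return 255 - c, 255 - m, 255 - y
```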
The method 1910 of gene 7A for adjusting the black level in the primary colors of the image is illustrated in FIGURES 19A and 19B beginning with the method 1910 retrieving 1912 the RGB values of a pixel. The RGB values are translated 1914 from the additive primary color values to the equivalent subtractive primary colors using the formulas:
C (cyan) = 255 - R (red) M (magenta) = 255 - G (green)
Y (yellow) = 255 - B (blue)
The black level is determined 1916 to be the minimum value of the three subtractive primary values C, M & Y, which is otherwise expressed as: K = min(C,M,Y)
The black level is temporarily removed 1918 from the C, M & Y values using the formulas:
c (cyan) = C-K m (magenta) = M-K y (yellow) = Y-K
and an intermediate color separation is made 1920 for each of the primary colors:
b' (blue) = min(c,m) r' (red) = min(m,y) g' (green) = min(c,y) c' (cyan) = c-(g'+b') m' (magenta) = m-(r'+b') y' (yellow) = y-(r'+g')
Using the intermediate color separation values (b', r', g', c', m', y') as the x-coordinate, a Color Separation Gradation Curve (FIGURE 19C; "Curve1") is applied 1922 to each of the intermediate color separation values:
b" (blue) = Curve1[b'] r" (red) = Curve1[r'] g" (green) = Curve1[g'] c" (cyan) = Curve1[c'] m" (magenta) = Curve1[m'] y" (yellow) = Curve1[y']
Each intermediate separation value is adjusted 1924 by applying a corresponding gene parameter from gene 7A. Unlike the parameters of other genes, which are directly evolved during the evolution process described above, the parameters of gene 7A are derived from a regulator gene (gr) that is itself evolved, i.e., set by mutation and recombination. In an actual embodiment of the invention, the six gene parameters of gene 7A are calculated by applying predefined weighting factors to the value of the regulator gene, i.e.:
g1 = 0.7 * gr g2 = 0.15 * gr g3 = 0.11 * gr g4 = 0.078 * gr g5 = 0.94 * gr g6 = 0.94 * gr
The weight values used in g1-g6 have been determined empirically and may be changed as experience dictates. In choosing the weight values, however, it should be considered that the reduction in the black value of a color should not be equal for all colors or unpleasant artifacts may result. It has been found that green, blue and cyan are the most robust against these artifacts and may be weighted more heavily, while red and yellow are the least robust against manipulation and should have smaller weight values. Using the six gene parameters of gene 7A, the intermediate color separation values are adjusted 1924 using the following formulas:
b" (blue) = (b" * g1)/127 r" (red) = (r" * g2)/127 g" (green) = (g" * g3)/127 c" (cyan) = (c" * g4)/127 m" (magenta) = (m" * g5)/127 y" (yellow) = (y" * g6)/127
The method 1910 next determines 1926 a black level correction factor (kf) from a predefined curve shown in FIGURE 19D (the "kfCurve"):
kf = kfCurve[K]
and a black level correction value is calculated 1928 for each primary color using the formulas:
c" = kf*c"/256 m" = kf*m"/256 y" = kf*y"/256 r" = kf*r"/256 g" = kf*g"/256 b" = kf*b"/256
Each of the intermediate color separation values is then enhanced 1930 with the black level correction values using the formulas:
c' = c'+c" m' = m'+m" y' = y'+y" r' = r'+r" g' = g'+g" b' = b'+b"
Black is enhanced 1932 in accordance with the formula:
K = K-c" - m" - y" - r" -g" - b"
Finally, the black enhanced RGB values for a new pixel are calculated 1934 using the formulas:
C = c'+b'+g' + K M = m'+r'+b' +K
Y = y'+r'+g' + K
R = 255-C
G = 255-M B = 255-Y
and the new black enhanced RGB values are returned 1936 as an altered pixel for the child image. The method 1910 is then done 1938.
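A condensed sketch of the gene 7A flow for one pixel, from the regulator gene to the black-enhanced result. The two lookup tables (Curve1 and kfCurve) are predefined curves shown in FIGURES 19C and 19D; identity tables stand in for them here, and the weight list mirrors the empirically chosen factors quoted above.

```python
import numpy as np

CURVE1 = np.arange(256)     # stand-in for the Color Separation Gradation Curve (FIGURE 19C)
KF_CURVE = np.arange(256)   # stand-in for the kfCurve (FIGURE 19D)
WEIGHTS = [0.7, 0.15, 0.11, 0.078, 0.94, 0.94]   # g1..g6 factors applied to b, r, g, c, m, y

def clip8(v):
    return int(max(0, min(255, round(v))))

def gene7a(r, g, b, regulator_gene):
    """One-pixel sketch of the gene 7A black level adjustment."""
    # Additive -> subtractive primaries, with the black level removed.
    c, m, y = 255 - r, 255 - g, 255 - b
    k = min(c, m, y)
    c, m, y = c - k, m - k, y - k
    # Intermediate six-color separation (b', r', g', c', m', y').
    bs, rs, gs = min(c, m), min(m, y), min(c, y)
    cs, ms, ys = c - (gs + bs), m - (rs + bs), y - (rs + gs)
    sep = [bs, rs, gs, cs, ms, ys]
    # Color Separation Gradation Curve, then the regulator-derived gene parameters.
    params = [w * regulator_gene for w in WEIGHTS]
    scaled = [CURVE1[v] * p / 127 for v, p in zip(sep, params)]
    # Black level correction factor and per-color correction values.
    kf = KF_CURVE[k]
    corr = [kf * v / 256 for v in scaled]
    # Enhance the separation values and reduce black by the total correction.
    bs, rs, gs, cs, ms, ys = (v + cv for v, cv in zip(sep, corr))
    k = k - sum(corr)
    # Recombine into C, M, Y and convert back to black-enhanced RGB.
    C, M, Y = cs + bs + gs + k, ms + rs + bs + k, ys + rs + gs + k
    return clip8(255 - C), clip8(255 - M), clip8(255 - Y)
```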
The method 2010 for the selective color correction of the image by gene 7B is illustrated in FIGURES 20A-C. The pixel passed to the method 2010 is separated 2012 into the pixel's RGB values, which are translated 2014 to the equivalent subtractive primary colors, using the formulas:
C (cyan) = 255 - R (red) M (magenta) = 255 - G (green)
Y (yellow) = 255 - B (blue)
and the black level (K) is determined 2016 from the minimum of the C, M & Y values. This black level is temporarily removed 2018 from the subtractive primary colors using the formulas:
c (cyan) = C-K m (magenta) = M-K y (yellow) = Y-K
and an intermediate color separation is found 2020 as follows:
b' (blue) = min(c,m) r' (red) = min(m,y) g' (green) = min(c,y) c' (cyan) = c-(g'+b') m' (magenta) = m-(r'+b') y' (yellow) = y-(r'+g')
The method 2010 then retrieves the eighteen parameters of gene 7B, which are stored as unsigned 8-bit integers. For each color except black there are three parameters: an amplification factor for the color being processed, an amplification factor for the color clockwise adjacent to it on the color hexagon 220 (FIGURE 2B), and an amplification factor for the color counterclockwise adjacent to it on the color hexagon 220 (FIGURE 2B).
Using a spline algorithm, eighteen gradation curves (three for each color except black) are computed 2024 using the points: (0,0), (xpoint, ypoint), (255,255), where ypoint is equal to xpoint if xpoint is less than 230, or ypoint is equal to 230 if xpoint is greater than 230. Using 230 as the breakpoint has been determined empirically and may change as further experience dictates. (A sketch of constructing one such curve follows the list below.) The resulting 18 gradation curves are: BBCorr[] BlueCorrection for Blue
BCCorr[] CyanCorrection for Blue
BMCorr[] MagentaCorrection for Blue
MMCorr[] Magenta Correction for Magenta
MBCorr[] BlueCorrection for Magenta
MRCorr[] RedCorrection for Magenta
RRCorr[] RedCorrection for Red RMCorr[] MagentaCorrection for Red RYCorr[] YellowCorrection for Red
YYCorr[] YellowCorrection for Yellow YRCorr[] RedCorrection for Yellow YGCorr[] GreenCorrection for Yellow
GGCorr[] GreenCorrection for Green GYCorr[] YellowCorrection for Green GCCorr[] CyanCorrection for Green
CCCorr[] CyanCorrection for Cyan CGCorr[] GreenCorrection for Cyan CBCorr[] BlueCorrection for Cyan
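As referenced above, here is a minimal sketch of building one of these correction curves as a 256-entry lookup table. SciPy's CubicSpline is used as a stand-in for the unspecified spline algorithm, the clamping of xpoint is an added safeguard to keep the knots strictly increasing, and the parameter name xpoint follows the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correction_curve(xpoint: int) -> np.ndarray:
    """Spline gradation curve through (0,0), (xpoint, ypoint), (255,255)."""
    xpoint = int(np.clip(xpoint, 1, 254))      # keep the three knots strictly increasing
    ypoint = xpoint if xpoint < 230 else 230   # empirically chosen breakpoint from the text
    spline = CubicSpline([0, xpoint, 255], [0, ypoint, 255])
    x = np.arange(256)
    return np.clip(spline(x), 0, 255).astype(np.uint8)
```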
Using the corresponding gradation curve listed above, the primary colors are selectively corrected 2028, as follows:
b' = BBCorr[b'] c' = BCCorr[b'] m' = BMCorr[b']
m' = MMCorr[m'] b' = MBCorr[m'] r' = MRCorr[m']
r' = RRCorr[r'] m' = RMCorr[r'] y' = RYCorr[r']
y' = YYCorr[y'] r' = YRCorr[y'] g' = YGCorr[y']
g' = GGCorr[g'] y' = GYCorr[g'] c' = GCCorr[g']
c' = CCCorr[c'] g' = CGCorr[c'] b' = CBCorr[c']
and then these selectively corrected colors are converted 2030 to their new RGB values and returned to the calling function as a new pixel for a child image using these formulas:
R = r'+m'+y'+K
G = g'+c'+y'+K
B = b'+c'+m'+K
The method 2110 of gene 9 provides the contrast gradation boundary curve that is used by the method 1710 of gene 5 (FIGURE 17). Referring to FIGURE 21A, the contrast gradation boundary curve is calculated by first converting 2112 each pixel into gray according to a gray scale. A histogram is then created 2114 that records the gray tone value of every pixel in the image, with the x-axis of the histogram representing the gray scale value and the y-axis of the histogram representing the frequency with which that gray tone appears in the image. An example of such a histogram 2130 is illustrated in FIGURE 21B. The histogram is integrated from its minimum value until 2% of the data members in the histogram have been reached. The value of the x-axis at this point is recorded 2116 as a first parameter (A) of gene 9. Similarly, the histogram is integrated from its maximum value toward its minimum value until 2% of the total data members have been reached. The value of the x-axis at this point is recorded 2118 as a second parameter (B) of gene 9. The integration percentage (2%) was determined empirically and may be changed as experience dictates.
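A minimal sketch of deriving the two gene 9 parameters from a grayscale histogram; the 2% thresholds follow the text, while the grayscale conversion weights are the common Rec. 601 luma coefficients and are an assumption here.

```python
import numpy as np

def gene9_parameters(rgb_image: np.ndarray, fraction: float = 0.02) -> tuple:
    """Return (A, B): gray values where the cumulative histogram reaches
    `fraction` of all pixels from the dark end and from the bright end."""
    # Convert to gray (Rec. 601 luma weights, an assumption; the patent only
    # says each pixel is converted "according to a gray scale").
    gray = (0.299 * rgb_image[..., 0] + 0.587 * rgb_image[..., 1]
            + 0.114 * rgb_image[..., 2]).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    total = hist.sum()
    cdf = np.cumsum(hist)
    a = int(np.searchsorted(cdf, fraction * total))          # integrated from the minimum
    b = int(np.searchsorted(cdf, (1.0 - fraction) * total))  # integrated from the maximum
    return a, b

# The S-shaped boundary curve is then splined through (0,0), (A,12), (B,242), (255,255).
```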
Using a spline interpolation, an S-shaped contrast gradation curve 2132 is calculated 2120 using (0,0) and (255,255) as the end points 2134, 2140 and the points (A, 12) and (B, 242) as the inflection points 2136, 2138. An example of this S-shaped contrast gradation curve with the end points and inflection points 2136, 2138 just described is illustrated in FIGURE 21C. Since spline interpolation algorithms are well known in the art, they will not be discussed further here. Returning to FIGURE 5, it is also possible to apply 514 a template genotype to the image without going through the evolution process. The template genotype may be predefined and shipped as part of the product incorporating the invention or may be a genotype that has been saved from a previous evolution. For example, a user may prefer a genotype that the user has evolved for photographs of mountains and wish to apply it to all pictures of mountains that the user works on. By saving the favored genotype as a template genotype, the genotype may be selected 524 and applied 514 to the image as was discussed above with reference to FIGURE 11. As mentioned above, any saved or template genotype forms a relatively small data file. This data file may be exchanged between applications employing the present invention, for instance, by sending the genotype data file from the user's computer 100 over the communication network 124 to another user's remote computer 126 (FIGURE 1).
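The patent does not specify a file format for saved genotypes; as an illustration of how compact such a data file can be, here is a hypothetical serialization of a genotype as a small JSON file that could be stored as a template or exchanged between applications. The gene names and parameter values are made up for the example.

```python
import json

# Hypothetical genotype: gene name -> parameter list (signed/unsigned 8-bit values).
genotype = {
    "gene1": [40],     # global color shift
    "gene2": [-20],    # color temperature
    "gene4": [35],     # brightness
    "gene5": [15],     # contrast
    "gene6": [25],     # color saturation
}

with open("mountains_template.genotype.json", "w") as fh:
    json.dump(genotype, fh)   # only a few dozen bytes: easy to store or send over a network

with open("mountains_template.genotype.json") as fh:
    template = json.load(fh)
```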
The present invention also provides the option 526 to Dewarp the image. Performing 516 a Dewarp is illustrated in FIGURE 22. When taking a photograph through a lens, imperfections in the shape of the lens may introduce a perceptible aberration into the image. The process of "Dewarping" an image minimizes the effect of this spherical aberration by redistributing the pixels in the image. The method 2210 of Dewarping the image begins by retrieving 2212 a predefined set of correction coefficients a, b, c. A focal point is then determined 2214 in the parent image. For each pixel in the parent image, the pixel is retrieved 2216 and a vectored radius between the location of the pixel and the focal point is determined 2218. A new vectored radius for the pixel is determined 2220 using the formula rnew = a*r + b*r^3 + c*r^5. The pixel is then stored 2222 in a child image at the new vectored radius (rnew). To make the image look smoother, a bi-cubic interpolation is used. If there is another pixel to process in the parent image, a decision 2224 returns control to the retrieval 2216 of the next pixel and the method continues as described above. If there is not another pixel to process in the parent image, the decision directs the method 2210 to end 2226.
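A minimal sketch of this radial remapping. It is written as an inverse warp (each child pixel samples the parent at a polynomially adjusted radius) rather than the forward scatter the flowchart describes, so the child image has no holes; the coefficient values, the focal point at the image center, and the use of SciPy's cubic spline sampling as the "bi-cubic interpolation" are all assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dewarp(channel: np.ndarray, a: float = 1.0, b: float = 1e-7, c: float = 0.0) -> np.ndarray:
    """Radially remap one image channel according to rnew = a*r + b*r^3 + c*r^5."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0          # focal point assumed at the image center
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dy, dx)                           # vectored radius to the focal point
    r_new = a * r + b * r**3 + c * r**5
    scale = np.ones_like(r)
    np.divide(r_new, r, out=scale, where=r > 0)    # radial scale factor per pixel
    # For each child pixel, sample the parent along the adjusted radius
    # (a practical stand-in for the forward scatter of FIGURE 22), cubically interpolated.
    coords = np.array([cy + dy * scale, cx + dx * scale])
    return map_coordinates(channel, coords, order=3, mode="nearest").astype(channel.dtype)
```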
The last option shown in FIGURE 5 is to select 528 printer calibration. A printer calibration is performed 518 according to the method shown in FIGURES 23A-C. Printer calibrations are necessary because the image produced by any given printer 118 may appear different from the image as it is displayed on the video monitor 116. With the present invention, a genotype may be developed that is associated with the printer and that is used to harmonize the image printed by the printer 118 with the image displayed on the video monitor 116.
The printer calibration method 2310 begins by retrieving 2312 a gray scale image. A gray scale template genotype is applied 2314 to the gray scale image to create a test image. The gray scale template genotype has genes that are neutral for all attributes other than brightness (altered by gene 4) and contrast (altered by gene 5). A decision 2316 determines if another gray scale test image should be produced. If yes, a different gray scale template genotype is applied 2314 to the gray scale image to create another gray scale test image. In an actual embodiment of the invention, five gray scale test images are produced. Each of the gray scale test images is displayed 2318 on the video monitor 116. The same gray scale test images are then printed 2320 on the printer 118 to be calibrated. The test images should be arranged on the printed page as they are displayed on the display so that the user may intuitively select 2322 the "best" test image appearing on the printed test page. (The "best" test image is a subjective determination by the user.) The genotype of the selected test image is assigned as the printer gray scale genotype. A red-dominant image containing subject matter with many red tones is retrieved 2324. The printer gray scale genotype is applied 2326 to the red-dominant image (as is described above with reference to FIGURE 11) to create a gray scale adjusted test image. A red-dominant template genotype is applied 2328 to the gray scale adjusted test image to create a red-dominant test image. If another red-dominant test image is desired, a decision 2330 returns control to the block 2328 to apply a different red-dominant template genotype to the gray scale adjusted test image to produce another red-dominant test image. When all the red-dominant test images are created, the decision 2330 directs control to the retrieval 2332 of a green-dominant image. The printer gray scale genotype is applied 2334 to the green-dominant image to create a gray scale adjusted test image. As above, a set of green-dominant template genotypes are applied 2336 to the gray scale adjusted test image to create a set of green-dominant test images. A decision 2338 returns control to the block 2336 until all green-dominant test images have been produced.
The process is next repeated for a blue-dominant image that is retrieved 2340 and adjusted by applying 2342 the printer gray scale genotype to the blue-dominant image to create a gray scale adjusted test image. A set of blue-dominant genotypes are applied 2344 to create a set of blue-dominant test images using the control loop provided by decision 2346. When all the blue-dominant test images have been produced, the decision 2346 directs control to the display 2348 of each of the red-dominant, green-dominant, and blue-dominant test images on the video monitor 116. Each of the red-dominant, green-dominant and blue-dominant test images is printed 2350 on a test page by the printer 118. The test images should be arranged on the printed page as they are displayed on the display so that the user can intuitively match each printed image to its on-screen counterpart. The user chooses 2352 a "best" red-dominant test image that appears on the printed test page and selects the corresponding red-dominant test image on the video monitor 116. This assigns the genotype of the selected test image as the printer red-dominant genotype. Next, the user chooses 2354 the "best" green-dominant test image that appears on the printed test page and selects the corresponding green-dominant test image on the display. This assigns the genotype of the selected test image as the printer green-dominant genotype. The user then chooses 2356 the "best" blue-dominant test image that appears on the printed test page and selects the corresponding blue-dominant test image on the display. The genotype of the selected test image is assigned as the printer blue-dominant genotype. Using the method illustrated in FIGURE 8, the printer gray scale genotype, the printer red-dominant genotype, the printer green-dominant genotype, and the printer blue-dominant genotype are recombined 2358 to form a printer calibration genotype that is stored as a printer calibration template genotype and associated with the printer 118 being calibrated. When the printer 118 is next used by the present invention, the image to be printed can be modified using the printer calibration genotype as it is sent to the printer so that the printed image matches the displayed image as closely as possible. The printer calibration method is then done 2360. While the preferred embodiment of the invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.

Claims

The embodiments of the invention in which an exclusive property or privUege is claimed are defined as foUows:
1. A computer-implemented method of improving a digital image, the method comprising:
(a) displaying a digital image and designating the digital image a leader image;
(b) applying a child genotype to the leader image to produce a child image;
(c) displaying the leader image and the child image so that a user can select which of the child image and the leader image is more desirable; and
(d) repeating (b) and (c) for a plurality of child genotypes.
2. The computer-implemented method of Claim 1, wherein the child genotypes applied to the leader image are next generation child genotypes and the child images are next generation child images.
3. The computer-implemented method of Claim 2, wherein the next generation child genotypes are evolved prior to being applied to the leader image to produce next generation child images.
4. The computer-implemented method of Claim 3, wherein the leader image has an associated leader genotype and wherein evolving the next generation child genotypes comprises recombining the leader genotype with a child genotype.
5. The computer-implemented method of Claim 4, wherein evolving the next generation child genotypes also comprises mutating the recombined leader and child genotypes prior to the evolved next generation child genotypes being applied to the leader image to produce a next generation child image.
6. The computer-implemented method of Claim 5, wherein mutating the recombined leader and child genotypes comprises:
(a) selecting a gene from the genotype;
(b) determining a mutation rate for the gene;
(c) using a random value to adjust the mutation rate; (d) applying the value of the adjusted mutation rate to the gene to produce a mutated gene;
(e) storing the mutated gene in a mutated genotype; and
(f) repeating (a) through (e) for all genes in the genotype.
7. The computer-implemented method of Claim 4, wherein recombining the leader genotype with a child genotype to evolve a next generation genotype comprises:
(a) retrieving a leader gene from the leader genotype and a corresponding child gene from the child genotype;
(b) retrieving a leader gene parameter from the leader gene and a corresponding child gene parameter from the child gene;
(c) producing a next generation gene parameter by computing a weighted average of the leader gene parameter and the child gene parameter;
(d) storing the next generation gene parameter in a next generation gene;
(e) repeating (b), (c), and (d) for all parameters included in the retrieved leader gene;
(f) storing the resulting next generation gene in the next generation child genotype; and
(g) repeating (a) through (f) for all genes in the leader genotype.
8. The computer-implemented method of Claim 7, wherein evolving the next generation child genotypes also comprises mutating the recombined leader and child genotypes prior to the evolved next generation child genotypes being applied to the leader image to produce a next generation child image.
9. The computer-implemented method of Claim 8, wherein mutating the recombined leader and child genotypes comprises:
(a) selecting a gene from the genotype;
(b) determining a mutation rate for the gene;
(c) using a random value to adjust the mutation rate;
(d) applying the value of the adjusted mutation rate to the gene to produce a mutated gene;
(e) storing the mutated gene in a mutated genotype; and
(f) repeating (a) through (e) for all genes in the genotype.
10. The computer-implemented method claimed in Claim 1, including storing a fitness value assigned to a child image by a user for each child image displayed with a leader image for a user to select which of the child image and the leader image is more desirable.
11. The computer-implemented method of Claim 10 wherein the child genotypes applied to the leader image are next generation child genotypes and the child images are next generation child images.
12. The computer-implemented method of Claim 11, wherein the next generation child genotypes are evolved prior to being applied to the leader image to produce next generation child images.
13. The computer-implemented method of Claim 12, wherein the leader image has an associated leader genotype and wherein evolving the next generation child genotypes comprises recombining the leader genotype with a child genotype.
14. The computer-implemented method of Claim 13, wherein evolving the next generation child genotypes also comprises mutating the recombined leader and child genotypes prior to the evolved next generation child genotypes being applied to the leader image to produce a next generation child image.
15. The computer-implemented method of Claim 14, wherein mutating the recombined leader and child genotypes comprises:
(a) selecting a gene from the genotype;
(b) determining a mutation rate for the gene;
(c) using a random value to adjust the mutation rate;
(d) applying the value of the adjusted mutation rate to the gene to produce a mutated gene;
(e) storing the mutated gene in a mutated genotype; and
(f) repeating (a) through (e) for all genes in the genotype.
16. The computer-implemented method of Claim 13, wherein recombining the leader genotype with a child genotype to evolve a next generation genotype comprises:
(a) retrieving a leader gene from the leader genotype and a corresponding child gene from the child genotype; (b) retrieving a leader gene parameter from the leader gene and a corresponding child gene parameter from the child gene;
(c) producing a next generation gene parameter by computing a weighted average of the leader gene parameter and the child gene parameter, the weighting factor being based on the fitness value assigned to the child image associated with the child genotype;
(d) storing the next generation gene parameter in a next generation gene;
(e) repeating (b), (c), and (d) for all parameters included in the retrieved leader gene;
(f) storing the resulting next generation gene in the next generation child genotype; and
(g) repeating (a) through (f) for all genes included in the leader genotype.
17. The computer-implemented method of Claim 16, wherein evolving the next generation child genotypes also comprises mutating the recombined leader and child genotypes prior to the evolved next generation child genotypes being applied to the leader image to produce a next generation child image.
18. The computer-implemented method of Claim 17, wherein mutating the recombined leader and child genotypes comprises:
(a) selecting a gene from the genotype;
(b) determining a mutation rate for the gene;
(c) using a random value to adjust the mutation rate;
(d) applying the value of the adjusted mutation rate to the gene to produce a mutated gene;
(e) storing the mutated gene in a mutated genotype; and
(f) repeating (a) through (e) for all genes in the genotype.
19. The computer-implemented method of Claim 1, wherein the child genotypes include at least one gene for altering an attribute of the leader image when a child genotype is applied to the leader image to produce a child image.
20. The computer-implemented method of Claim 19, wherein the child genotypes include a global color shift gene for altering the global color of the leader image.
21. The computer-implemented method of Claim 19, wherein the child genotypes include a color temperature gene for altering the color temperature of the leader image.
22. The computer-implemented method of Claim 19, wherein the child genotypes include a special effects gene for altering the leader image by introducing special effects into the leader image.
23. The computer-implemented method of Claim 19, wherein the child genotypes include a blackpoint and whitepoint gene for altering the blackpoint and whitepoint of the leader image.
24. The computer-implemented method of Claim 19, wherein the child genotypes include a brightness gene for altering the brightness of the leader image.
25. The computer-implemented method of Claim 19, wherein the child genotypes include a contrast gene for altering the contrast of the leader image.
26. The computer-implemented method of Claim 19, wherein the child genotypes include a color saturation gene for altering the color saturation of the leader image.
27. The computer-implemented method of Claim 19, wherein the child genotypes include a black level adjustment gene for altering the black level of the leader image.
28. The computer-implemented method of Claim 19, wherein the child genotypes include a selective color correction gene for selectively correcting the colors of the leader image.
29. The computer-implemented method of Claim 1, wherein applying a child genotype to the leader image to produce a child image comprises: obtaining a pixel from the leader image; obtaining a gene from the child genotype; applying the gene to the pixel to produce a modified pixel; and storing the modified pixel in the child image.
30. The computer-implemented method of Claim 29, wherein applying the gene to the pixel to produce a modified pixel comprises: obtaining a parameter from the gene; using the parameter as a weighting value to interpolate a gradation curve between a boundary gradation curve associated with the gene and a neutral gradation curve; obtaining a color value from the pixel; determining a new color value (y-coordinate) from the gradation curve by using the color value as the lookup value (x-coordinate); assigning the new color value to the modified pixel; and returning the modified pixel for inclusion in the child image.
31. The computer-implemented method of Claim 30, wherein the boundary gradation curve is chosen based on a second parameter of the gene.
32. The computer-implemented method of Claim 30, wherein the color value comprises a plurality of color component values and each of the color component values is modified by applying a gradation curve only if specified by another parameter of the gene.
33. The computer-implemented method of Claim 30, wherein the boundary gradation curve is created by a support gene based on attributes of the image.
34. The computer-implemented method of Claim 30, wherein the parameter of the gene is regulated by a regulator gene.
35. The computer-implemented method of Claim 34, wherein the regulator gene is weighted by a weighting value, the color value comprises a plurality of color component values, and the weighting value is associated with a color component value.
36. The computer-implemented method of Claim 1, wherein the child genotype is a predefined template genotype.
37. The computer-implemented method of Claim 1, further comprising: evolving a printer calibration genotype; and applying the printer calibration genotype to an image before it is printed by a printer.
38. The computer-implemented method of Claim 37, wherein evolving a printer calibration genotype comprises: applying a plurality of genotypes to a digital image, the application of each genotype producing a test image; displaying each test image; printing each test image on a printed test page using the printer; receiving a selection of a best fit test image; and assigning as the printer calibration genotype the genotype that produced the best fit test image.
39. The computer-implemented method of Claim 38, wherein evolving the printer calibration genotype includes determining the most fit gray scale genotype by: applying a plurality of gray-scale genotypes to a gray-scale dominant digital image to produce a plurality of gray scale dominant test images, each gray scale dominant test image having an associated gray scale genotype; displaying the gray scale dominant test images; printing the gray scale dominant test images on a printed test page using the printer; receiving a selection of the most fit gray scale dominant test image; and assigning the associated gray scale genotype of the most fit gray scale dominant test image as the printer gray scale genotype.
40. The computer-implemented method of Claim 39, wherein evolving the printer calibration genotype also includes determining the most fit red-dominant genotype by: applying the printer gray scale genotype to a red-dominant digital image to produce a gray scale adjusted test image; applying a plurality of red-dominant genotypes to a red-dominant digital image to produce a plurality of red-dominant test images, each red-dominant test image having an associated red-dominant genotype; displaying the red-dominant test images; printing the red-dominant test images on a printed test page using the printer; receiving a selection of the most fit red-dominant test image; and assigning the associated red-dominant genotype of the most fit red-dominant test image as the printer red-dominant genotype.
41. The computer-implemented method of Claim 40, wherein evolving the printer calibration genotype also includes determining the most fit green-dominant genotype by: applying the printer gray scale genotype to a green-dominant digital image to produce a gray scale adjusted test image; applying a plurality of green-dominant genotypes to a green-dominant digital image to produce a plurality of green-dominant test images, each green-dominant test image having an associated green-dominant genotype; displaying the green-dominant test images; printing the green-dominant images on a printed test page using the printer; receiving a selection of the most fit green-dominant test image; and assigning the associated green-dominant genotype of the most fit green-dominant test image as the printer green-dominant genotype.
42. The computer-implemented method of Claim 40, wherein evolving the printer calibration genotype also includes determining the most fit blue-dominant genotype by: applying the printer gray scale genotype to a blue-dominant digital image to produce a gray scale adjusted test image; applying a plurality of blue-dominant genotypes to a blue-dominant digital image to produce a plurality of blue-dominant test images, each blue-dominant test image having an associated blue-dominant genotype; displaying the blue-dominant test images on the video display; printing the blue-dominant test images on a printed test page using the printer; receiving a selection of the most fit blue-dominant test image; and assigning the associated blue-dominant genotype of the most fit blue-dominant test image as the printer blue-dominant genotype.
43. The computer-implemented method of Claim 42, wherein evolving the printer calibration genotype also includes recombining the printer gray scale genotype, the printer red-dominant genotype, the printer green-dominant genotype, and the printer blue-dominant genotype to form the printer calibration genotype.
44. The computer-implemented method of Claim 38, wherein evolving the printer calibration genotype also includes determining the most fit red-dominant genotype by: applying the printer gray scale genotype to a red-dominant digital image to produce a gray scale adjusted test image; applying a plurality of red-dominant genotypes to a red-dominant digital image to produce a plurality of red-dominant test images, each red-dominant test image having an associated red-dominant genotype; displaying the red-dominant test images; printing the red-dominant test images on a printed test page using the printer; receiving a selection of the most fit red-dominant test image; and assigning the associated red-dominant genotype of the most fit red-dominant test image as the printer red-dominant genotype.
45. The computer-implemented method of Claim 38, wherein evolving the printer calibration genotype also includes determining the most fit green-dominant genotype by: applying the printer gray scale genotype to a green-dominant digital image to produce a gray scale adjusted test image; applying a plurality of green-dominant genotypes to a green-dominant digital image to produce a plurality of green-dominant test images, each green-dominant test image having an associated green-dominant genotype; displaying the green-dominant test images; printing the green-dominant images on a printed test page using the printer; receiving a selection of the most fit green-dominant test image; and assigning the associated green-dominant genotype of the most fit green-dominant test image as the printer green-dominant genotype.
46. The computer-implemented method of Claim 38, wherein evolving the printer calibration genotype also includes determining the most fit blue-dominant genotype by: applying the printer gray scale genotype to a blue-dominant digital image to produce a gray scale adjusted test image; applying a plurality of blue-dominant genotypes to a blue-dominant digital image to produce a plurality of blue-dominant test images, each blue-dominant test image having an associated blue-dominant genotype; displaying the blue-dominant test images on the video display; printing the blue-dominant test images on a printed test page using the printer; receiving a selection of the most fit blue-dominant test image; and assigning the associated blue-dominant genotype of the most fit blue-dominant test image as the printer blue-dominant genotype.
47. A computer-readable medium containing computer-executable instructions for carrying out the computer-implemented method recited in any one of Claims 1-46.
PCT/US1999/028676 1998-12-03 1999-12-03 Digital image improvement through genetic image evolution WO2000033207A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU23521/00A AU2352100A (en) 1998-12-03 1999-12-03 Digital image improvement through genetic image evolution
EP99967186A EP1147471A1 (en) 1998-12-03 1999-12-03 Digital image improvement through genetic image evolution

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US11084098P 1998-12-03 1998-12-03
US60/110,840 1998-12-03
US11239198P 1998-12-15 1998-12-15
US60/112,391 1998-12-15
US12308799P 1999-03-05 1999-03-05
US60/123,087 1999-03-05
US44031799A 1999-11-12 1999-11-12
US09/440,317 1999-11-12

Publications (1)

Publication Number Publication Date
WO2000033207A1 true WO2000033207A1 (en) 2000-06-08

Family

ID=27493777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/028676 WO2000033207A1 (en) 1998-12-03 1999-12-03 Digital image improvement through genetic image evolution

Country Status (3)

Country Link
EP (1) EP1147471A1 (en)
AU (1) AU2352100A (en)
WO (1) WO2000033207A1 (en)

Also Published As

Publication number Publication date
EP1147471A1 (en) 2001-10-24
AU2352100A (en) 2000-06-19

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1999967186

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1999967186

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1999967186

Country of ref document: EP