US20050105796A1 - Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics - Google Patents


Info

Publication number
US20050105796A1
US20050105796A1 (application US10/512,730)
Authority
US
United States
Prior art keywords
user
color vision
color
deficiency
max
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/512,730
Inventor
Jin-Woo Hong
Seung-Ji Yang
Jae-Ii Song
Yong-Man Ro
Je-ho Nam
Jin-woong Kim
Jae-Joon Kim
Cheon-Seog Kim
Original Assignee
Jin-Woo Hong
Seung-Ji Yang
Jae-Ii Song
Yong-Man Ro
Nam Je-Ho
Kim Jin-Woong
Jae-Joon Kim
Cheon-Seog Kim
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to KR1020020023130A priority Critical patent/KR20030084419A/en
Priority to KR1020020041294A priority patent/KR20040008004A/en
Priority to KR20020062260 priority
Priority to KR20030004003 priority
Application filed by Jin-Woo Hong, Seung-Ji Yang, Jae-Ii Song, Yong-Man Ro, Nam Je-Ho, Kim Jin-Woong, Jae-Joon Kim, Cheon-Seog Kim filed Critical Jin-Woo Hong
Priority to PCT/KR2003/000750 priority patent/WO2003091946A1/en
Publication of US20050105796A1 publication Critical patent/US20050105796A1/en
Application status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Circuits for processing colour signals colour balance circuits, e.g. white balance circuits, colour temperature control
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0666Adjustment of display parameters for control of colour parameters, e.g. colour temperature
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/14Solving problems related to the presentation of information to be displayed

Abstract

Disclosed are a method and a system that adaptively transform visual contents inputted from a network in accordance with the visual characteristics of a terminal user. A visual characteristics descriptor that describes the information of the user's visual characteristics in a predetermined format is proposed. The descriptor includes information on the color vision deficiency type and the color vision deficiency degree. The color vision deficiency degree may be described numerically or textually. The invention adaptively transforms visual contents differently in accordance with the color vision deficiency type.

Description

    TECHNICAL FIELD
  • The present invention relates to a method and a system for transforming visual contents and, in particular, to a method and a system for adaptively transforming visual contents in accordance with the color vision characteristics of a user.
  • BACKGROUND ART
  • MPEG-21 is being established as the next-generation standard for a multimedia framework by MPEG (Moving Picture Experts Group), a working group of ISO/IEC JTC 1/SC 29. The goal of MPEG-21 is to realize a multimedia integration framework capable of freely and easily using multimedia contents, despite the wide-ranging characteristics of the networks, terminals and users existing under various environments, by unifying the standards of the existing MPEGs and of other standardization groups. Part 7 of MPEG-21, Digital Item Adaptation, relates to adaptively transforming multimedia contents (or digital items) in accordance with the characteristics of networks, terminals (video display devices) and users; its standardization is now in progress.
  • Previous research for users with a color vision deficiency is as follows. In “Computerized Simulation of Color Appearance for Dichromats” (Journal of the Optical Society of America A, v.14, no. 10, 1997, 2647-2655), H. Brettel studied an algorithm for allowing common users to experience the color vision characteristics of users with dichromacy. However, this paper discloses only an algorithm capable of simulating the color vision characteristics of users with a color vision deficiency. An adaptation algorithm for recovering information that is impossible or difficult to obtain due to the color vision deficiency is not mentioned. This method requires that contents manufacturers perform a simulation process for dichromats before selecting the colors of the contents. The object of such a method is to avoid, if possible, combinations of colors that are difficult for dichromats to distinguish, by performing a simulation process to determine whether dichromats can discriminate the selected combination of colors.
  • However, this method urges contents manufacturers to use a limited number of colors, thereby restricting the creativity of the manufacturers and possibly inducing inconvenience and monotony in the process of recognizing the color information for normal users. Therefore, it is difficult for this method to satisfy the requirements of various users. Accordingly, there is a need for adaptation not in the contents manufacturing step, but in accordance with individual vision abilities or terminal devices. Nowadays, numerous digital multimedia contents are manufactured every day. Thus, a process performed in the contents manufacturing step has the further disadvantage that it cannot adaptively transform already existing contents.
  • In order to solve these problems, improving the abilities for recognizing the color information processing of humans with a color vision deficiency by directly transforming the colors of visual contents may be considered. This method has an advantage in that it is not required to redesign a display device and it is possible to adaptively transform all existing contents.
  • A method of adaptation for users with a color vision deficiency is discussed in “Enhancing Color Representation for Anomalous Trichromats on CRT Monitors” (G. Kovacs, Color Research and Application, v.26 Supp., 2001, S273-S276), which discloses an algorithm that allows such users to see like a normal user by computing a filter to be mounted on a cathode ray tube (hereinafter “CRT”) and applying the obtained filter to the RGB spectral response values of the corresponding CRT monitor. However, this method applies a filter to the monitor as a whole and therefore cannot perform a transformation per contents item if a plurality of digital items, i.e. a number of images, exist on the screen. Furthermore, purchasing a specially manufactured CRT monitor to implement this function is a burden.
  • In the Gazette of U.S. Pat. No. 6,362,830, an equation for modeling a human with a color vision deficiency is vaguely derived. However, the process for adaptively transforming visual contents in accordance with the color vision characteristics of humans with a color vision deficiency is very complicated. Moreover, the method does not allow humans with a color vision deficiency to perceive the adaptively transformed visual contents as a normal user would, but merely allows them to discriminate the visual contents. The disclosure of U.S. Pat. No. 6,362,830 is incorporated herein by reference.
  • Humans recognize the colors and brightness of an object through visual cells sensing the light reflected from the surface of the object. The visual cells existing in the retina include rod cells and cone cells; they are specialized cells that function to sense light. Human eyes contain about seven million cone cells and about one hundred and thirty million rod cells. Humans discriminate light and darkness using the rod cells and recognize detailed appearance and colors using the cone cells. Color recognition occurs as the photopigments contained in the cone cells absorb photons. Normal humans have three types of cone cells in the retina, which absorb different portions of light within the visible wavelength range. The types are classified into L (long), M (middle) and S (short) in accordance with the peak sensitivity of the wavelength region absorbed by each type of cone cell. Humans recognize colors depending on the ratio of the signals which the three types of cone cells generate in response to light.
  • Unlike the above conditions, a color vision deficiency is the state in which any of the three types of cone cells is absent or functions abnormally. If there are only two types of cone cells, the condition is called dichromacy. If the function of the cone cells is abnormal even though all three types exist, it is called anomalous trichromacy. Worldwide, about 8% of males and about 0.5% of females have a color vision deficiency. Nevertheless, no method for treating color vision deficiencies exists at present; thus, this study has been commissioned to research a new scheme for treating color vision deficiencies.
  • It is medically impossible to make humans with a color vision deficiency see the original colors. The goal of adaptation for dichromacy is to allow humans with a color vision deficiency to obtain information from the colors of contents at the same level as a normal human, although they are not capable of seeing the original colors.
  • DISCLOSURE OF THE INVENTION
  • It is an object of the present invention to provide a user with a color vision deficiency with the semantic information of visual contents that corresponds to that obtained by a normal user, regardless of the color vision deficiency type and without any separate equipment.
  • It is another object of the present invention to provide a user with a color vision deficiency with the semantic information of visual contents that corresponds to a normal user in accordance with the digital items.
  • In order to achieve the above objects, there are provided a method and a system for adaptively transforming visual contents inputted from a network to be suitable for the color vision characteristics of a terminal user. First, a color vision characteristic descriptor is presented which describes information on the color vision characteristics of the user in a standardized format in which the characteristics of the network and the terminal are not considered. The color vision characteristic descriptor in accordance with the present invention contains information on the color vision deficiency type and degree of the user. The color vision deficiency degree is described textually or numerically. The color vision characteristic descriptor may further comprise user identification information or information indicating the existence of a color vision deficiency. In addition, the color vision characteristic descriptor may comprise information on the user environment, in particular on the illumination of the surroundings of the user.
  • The present invention adaptively transforms visual contents differently in accordance with the color vision deficiency type, i.e. depending on whether the color vision deficiency is dichromacy or anomalous trichromacy. First, if it is determined from the information on the color vision deficiency degree contained in the color vision characteristic descriptor that the user is a dichromat, the present invention detects the regions difficult for the user with dichromacy. The first method presented in accordance with the present invention detects the difficult regions by comparing the user's limited LMS region to the LMS region of a normal human and calculating the region in which the LMS value decreases. The second method presented in accordance with the present invention may be implemented in such a manner that the visual contents are transformed from the RGB color space to the CMYK color space for identification of the deficiency region, and the pixels corresponding to a predetermined region in the CMYK color space are differentiated in accordance with the color vision deficiency type. Once the deficiency region is differentiated in this manner, the visual contents are adaptively transformed to be suitable for the color vision characteristics of the user by tuning at least one of the hue, saturation and intensity of the respective pixels corresponding to the deficiency region.
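The first region-detection method above (comparing the user's reduced LMS response with that of a normal observer) might be sketched as follows. The RGB-to-LMS matrix and the threshold are illustrative assumptions for the sketch, not values specified in this disclosure.

```python
# Illustrative Hunt-Pointer-Estevez-style RGB -> LMS matrix (assumed values).
RGB_TO_LMS = [
    [0.3139, 0.6395, 0.0466],
    [0.1554, 0.7579, 0.0867],
    [0.0178, 0.1094, 0.8728],
]

def rgb_to_lms(rgb):
    """Convert a normalized (0..1) RGB triple to an LMS triple."""
    return tuple(sum(RGB_TO_LMS[i][j] * rgb[j] for j in range(3))
                 for i in range(3))

def deficiency_region(pixels, missing_cone, threshold=0.1):
    """Flag pixel indices whose response relies on the missing cone
    (0 = L, 1 = M, 2 = S) for more than `threshold` of the total LMS
    response; these are the regions difficult for the dichromat."""
    flagged = []
    for idx, rgb in enumerate(pixels):
        lms = rgb_to_lms(rgb)
        total = sum(lms)
        if total > 0 and lms[missing_cone] / total > threshold:
            flagged.append(idx)
    return flagged
```

For a protanope (missing L cone), a saturated red pixel would be flagged while a saturated blue pixel would not, matching the intent of the comparison step.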
  • Meanwhile, if it is determined that the user is an anomalous trichromat, the visual contents are transformed from the RGB color space to the LMS color space, and the substance of the visual contents is adaptively transformed by using the cone cell response function of the user's eyes.
  • The present invention provides a method for adaptively transforming visual contents to be suitable for the color vision characteristics of a user, the method comprising the steps of: receiving information on the color vision characteristics of the user; and executing the adaptation of the visual contents in accordance with the color vision characteristics, wherein the information on the color vision characteristics contains a description of the color vision deficiency type and degree.
  • In addition, the present invention provides a method for adaptively transforming visual contents to be suitable for the color vision characteristics of a user of an image display device, the method comprising the steps of: receiving information on the color vision characteristics of the user; receiving the visual contents; executing the adaptation on the visual contents in accordance with the color vision characteristics; and displaying the transformed visual contents through the image display device.
  • The present invention also provides a system for adaptively transforming visual contents to be suitable for the color vision characteristics of a user of an image display device, the system comprising: a means for receiving information on the color vision characteristics of the user; a means for receiving the visual contents; and a processing section for executing adaptation to the inputted visual contents in accordance with the information on the color vision characteristics.
  • As described above, in accordance with the present invention, a user with a color vision deficiency is able to receive semantic information from visual contents that is substantially identical to that received by a normal user, without any separate equipment, whereby the user with a color vision deficiency is able to freely and conveniently use multimedia contents. In addition, the present invention is applicable to the digital item adaptation parts of MPEG-7 and MPEG-21, which are international media standards.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an adaptation system in accordance with an embodiment of the present invention;
  • FIG. 2 is a flowchart of an adaptation process according to the present invention;
  • FIG. 3 is a structure view of a user color vision characteristic descriptor in accordance with an embodiment of the present invention;
  • FIG. 4 is a view showing an example of enumerating the degree of deficiency for color vision by using the results of the Farnsworth-Munsell test;
  • FIG. 5 is a view showing an example for enumerating the degree of deficiency for color vision by using the results of the Nagel Anomaloscope test;
  • FIG. 6 is a detailed flowchart which shows an example of the adaptation step of FIG. 2;
  • FIG. 7 is a detailed flowchart which shows an example of the adaptation step for anomalous trichromacy of FIG. 6;
  • FIG. 8 is a view showing the spectral sensitivity of LMS cone cells of a normal human;
  • FIG. 9 is a view showing an RGB emission curve of a CRT monitor with P22 phosphor;
  • FIG. 10 is a view showing the stimuli in the LMS color space;
  • FIG. 11 is a view showing the spectral sensitivity of protanomaly in which the peak sensitivity of L cone cells moves about 10 nm;
  • FIG. 12 is a detailed flowchart of an example of the dichromacy adaptation process of FIG. 6;
  • FIG. 13 is a view showing color spaces recognized by a human with a deficiency of dichromacy;
  • FIG. 14 is a detailed flowchart of an example of a method of discriminating the deficiency region in FIG. 12;
  • FIG. 15 is a view showing hues recognized by a normal human, a human with a deficiency of protanopy or deuteranopy, and a human with a deficiency of tritanopy, respectively;
  • FIG. 16 is a view showing hues recognized by a normal human, a human with a deficiency of protanopy or deuteranopy, and a human with a deficiency of tritanopy for a hue angle in the range of 0° to 360°;
  • FIG. 17 is a detailed flowchart of another example of the process for discriminating the deficiency region in FIG. 12;
  • FIG. 18 is a detailed flowchart of an example of the HSI tuning method in FIG. 12; and
  • FIG. 19 is a view showing the distribution of magenta, cyan, and yellow components in the color distribution.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Hereinbelow, the present invention will be described in detail with reference to the accompanying drawings. For the purpose of consistency in description, like reference numerals are used to indicate like components and signals in the drawings.
  • FIG. 1 is a block diagram of an adaptation system in accordance with an embodiment of the present invention. FIG. 2 is a flowchart of the adaptation method in accordance with the present invention, in which the adaptation is a processing step that is specifically executed in a processing section 102 shown in FIG. 1. As shown in FIG. 1, an adaptation system 100 is implemented to include a processing section 102, an input section 103, a database 104, a network interface 106, and an image display device 108. The processing section 102 comprises a dichromacy adaptation section 110 and an anomalous trichromacy adaptation section 112.
  • The user inputs his or her own information on color vision characteristics and environment to the processing section 102 through an input device 103 such as a keyboard (step 202). The processing section 102 receives the information on the color vision characteristics through the input device 103 and stores it in the database 104 in a predetermined format, thereby initializing the adaptation system 100. The information prepared and stored in a predetermined format for the color vision characteristics of the user is called a color vision characteristic descriptor 114. The visual contents are provided from an external network 107 to the processing section 102 through the network interface 106 such as a modem (step 204). The processing section 102 determines whether the user is an anomalous trichromat or a dichromat with reference to the color vision characteristic descriptor 114 in the database 104. If it is determined that the user is a dichromat, the processing section 102 drives the dichromacy adaptation section 110, so that the provided visual contents are adaptively transformed to be suitable for the color vision characteristics of the user by using the information on the color vision characteristics and/or environment contained in the color vision characteristic descriptor 114, and the transformed visual contents are then displayed through an image display device 108 such as a liquid crystal display device (hereinafter referred to as “LCD”) or a CRT. If the user is determined to be an anomalous trichromat, the processing section 102 drives the anomalous trichromacy adaptation section 112, so that the provided visual contents are adaptively transformed to be suitable for the color vision characteristics of the user and displayed through the image display device 108 (step 206).
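The branch between the two adaptation sections in the flow above can be sketched as follows; the descriptor keys and the degree threshold (1.0 = dichromat, per Table 1) are assumptions for illustration, not the standardized descriptor encoding.

```python
def adapt(contents, descriptor, adapt_dichromacy, adapt_anomalous):
    """Dispatch the adaptation per the FIG. 2 flow: no deficiency means
    no transformation; degree 1.0 selects the dichromacy adaptation
    section (110); anything milder selects the anomalous trichromacy
    adaptation section (112)."""
    if not descriptor.get("has_deficiency", False):
        return contents                                  # pass through
    if descriptor["deficiency_degree"] >= 1.0:
        return adapt_dichromacy(contents, descriptor)    # section 110
    return adapt_anomalous(contents, descriptor)         # section 112
```

The adaptation sections themselves are passed in as callables, mirroring how the processing section 102 drives one of its two sub-sections.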
  • In the Gazette of U.S. Pat. No. 6,362,830, a matrix [A′] describing the color vision characteristics of humans having a color vision deficiency is vaguely derived. However, the problem of the singularity of matrix [A′] is not correctly recognized there. In the case of a dichromat, an inverse transform of matrix [A′] does not exist because matrix [A′] is singular. Therefore, it was impossible to attempt the adaptation using the inverse transform of matrix [A′] in the Gazette of U.S. Pat. No. 6,362,830. The present invention uses a differentiated approach, distinguishing the anomalous trichromat from the dichromat in the adaptation process, in consideration of the fact that the inverse transform of matrix [A′] does exist in the case of an anomalous trichromat.
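The singularity argument can be checked numerically: zeroing a cone row of a hypothetical 3x3 response matrix (modeling an absent cone, i.e. dichromacy) gives a zero determinant, so no inverse exists, while merely scaling the row (a mild anomaly) keeps the matrix invertible. The matrix values below are illustrative, not the [A′] of the cited patent.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Hypothetical normal cone-response matrix (values for illustration only).
A = [[0.31, 0.64, 0.05],
     [0.16, 0.76, 0.09],
     [0.02, 0.11, 0.87]]

A_dichromat = [[0.0, 0.0, 0.0]] + A[1:]          # L cone absent
A_anomalous = [[0.5 * x for x in A[0]]] + A[1:]  # L response halved

print(det3(A_dichromat))  # 0.0 -> singular, no inverse exists
print(det3(A_anomalous))  # nonzero -> inverse exists
```

This is why the invention only applies an inverse-transform style compensation in the anomalous trichromacy branch.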
  • FIG. 3 is a structure view of a user color vision characteristic descriptor in accordance with an embodiment of the present invention. As shown in FIG. 3, a user color vision characteristic descriptor 300 comprises a user characteristic descriptor 310 and a user environment element descriptor 320. The user characteristic descriptor 310 contains a user identification number (hereinafter referred to as “ID”) 311 for identifying the user, a user name 312, and information 313 on whether to disclose individual information, for protecting personal data. In addition, the user characteristic descriptor 310 includes a descriptor 314 indicating the eyesight of the user, a color vision deficiency presence descriptor 315 describing whether the user has a color vision deficiency, a color vision deficiency type descriptor 316 describing the user's color vision deficiency type, and a color vision deficiency degree descriptor 317 describing the degree of the color vision deficiency. The user environment element descriptor 320 comprises a user surrounding illumination degree descriptor 321.
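A minimal sketch of the FIG. 3 descriptor as Python data classes follows; the field names and types are assumptions inferred from the description above, not the standardized format itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserCharacteristic:
    user_id: str                        # 311: user identification number
    user_name: str                      # 312: user name
    disclose_personal_info: bool        # 313: whether to open individual info
    eyesight: float                     # 314: eyesight of the user
    has_deficiency: bool                # 315: deficiency presence
    deficiency_type: Optional[str]      # 316: e.g. "Red-deficiency"
    deficiency_degree: Optional[float]  # 317: 0.1-0.9 mild, 1.0 severe

@dataclass
class UserEnvironment:
    surrounding_illumination: float     # 321: illumination of surroundings

@dataclass
class ColorVisionCharacteristicDescriptor:
    characteristic: UserCharacteristic  # 310
    environment: UserEnvironment        # 320
```

An instance of `ColorVisionCharacteristicDescriptor` would play the role of descriptor 114 stored in the database 104.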
  • The color vision deficiency types described by the user characteristic descriptor 310 are summarized in Table 1 below. Dichromacy is subdivided into red color blindness (protanopy), green color blindness (deuteranopy) and blue color blindness (tritanopy). For protanopy or deuteranopy, which are the most common among dichromats, the middle green of the spectrum is seen as colorless or gray, the shorter-wavelength side is seen as blue and the longer-wavelength side is seen as yellow. Therefore, the colors visible on a monitor, a television set or the like appear in only the two colors of blue and yellow, and it is difficult to discriminate signal lights well. In contrast, tritanopy is extremely rare. With tritanopy, everything is seen in the two colors of red and green, and it is unexpectedly easy to discriminate signal lights. Meanwhile, if none of the three types of cone cells exist, the condition is called achromatopsia. In such a case, the eyesight is very poor and all colors are seen as black or gray.
  • Anomalous trichromacy is subdivided into protanomaly, deuteranomaly, and tritanomaly. Protanomaly and deuteranomaly are the most common among anomalous trichromats, who can see red and green to varying degrees. The degrees of protanomaly or deuteranomaly range from severe cases, in which they are no different from protanopy or deuteranopy, to very mild cases close to normality. Like human eyesight, color vision deficiencies widely differ in degree.
    TABLE 1. Color Vision Deficiency Information

    Medical Terminology   Color Vision Deficiency Type   Textual Degree   Numerical Degree
    Protanomaly           Red-deficiency                 Mild             0.1-0.9
    Protanopy             Red-deficiency                 Severe           1.0
    Deuteranomaly         Green-deficiency               Mild             0.1-0.9
    Deuteranopy           Green-deficiency               Severe           1.0
    Tritanomaly           Blue-deficiency                Mild             0.1-0.9
    Tritanopy             Blue-deficiency                Severe           1.0
    Achromatopsia         Complete Color Blindness       NA               NA
  • For example, in the case of protanomaly, the color vision deficiency type descriptor 316 indicates red deficiency, and, since protanomaly is an anomalous trichromacy, the color vision deficiency degree descriptor 317 is expressed as Mild in the textual description, with a value from 0.1 to 0.9 in the numerical description; in the case of a dichromacy such as protanopy, the degree is expressed as Severe in the textual description, with a value of 1.0. That is, the severity of the color vision deficiency degree may be described not only by normalized numerical values, but also by a textual description. The specific necessity of such a description method will be described later.
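The Table 1 correspondence between the numerical and the textual degree descriptions could be sketched as follows; the function name and the exact threshold handling are illustrative.

```python
def describe_degree(deficiency_type: str, numerical_degree: float):
    """Return (textual degree, diagnosis) following Table 1:
    0.1-0.9 maps to Mild (anomalous trichromacy), 1.0 to Severe
    (dichromacy); complete color blindness has no degree (NA)."""
    if deficiency_type == "Complete Color Blindness":
        return ("NA", "Achromatopsia")
    if numerical_degree >= 1.0:
        return ("Severe", "dichromacy")
    return ("Mild", "anomalous trichromacy")

print(describe_degree("Red-deficiency", 0.4))  # ('Mild', 'anomalous trichromacy')
print(describe_degree("Red-deficiency", 1.0))  # ('Severe', 'dichromacy')
```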
  • The present invention provides three methods for enumerating the severity of the color vision deficiency degree. The first method is to measure the abnormal elements inducing anomalous trichromacy and to use the measured values directly. One of the abnormal elements inducing anomalous trichromacy is the case in which the response function of the corresponding cone cells is shifted from its normal position; the other is the case in which the intensity of the response value of the cone cells decreases. The severity of anomalous trichromacy is determined by combining these two phenomena. The procedures for enumerating the two cases are expressed by Equation 1 and Equation 2, respectively.
  • If the enumerated value of the shift of a cone cell among the LMS cone cells is Z, the value of Z is expressed by Equation 1. If the maximum shift limit of medically verified cone cells is α_max nanometers (nm), and the shift of the cone cells of the anomalous trichromat is α nanometers, the value of α may range from 0.0 to α_max nanometers.

    Z = α / α_max    (Equation 1)
  • Here, if the shift of the abnormal cone cells, α, exceeds α_max, or if the cone cells do not exist, the condition is determined to be dichromacy and the value of α is set equal to α_max. Therefore, the value of Z is always 1.0 in the case of dichromacy.
  • In addition, the case in which the intensity of the response value of the abnormal cone cells among the LMS cone cells decreases is handled by Equation 2. If the maximum threshold of the decrease for medically verified cone cells is β_max, and the decrease value of the cone cells of the anomalous trichromat is β, the value of β may range from 0.0 to β_max. As a result, the value of I is normalized to lie between 0.0 and 1.0 and is determined by Equation 2.

    I = β / β_max    (Equation 2)
  • Here, if the decreased intensity of the abnormal cone cells, β, exceeds β_max, or if the cone cells do not exist, the condition is determined to be dichromacy and the value of β is set equal to β_max. Therefore, the value of I is always 1.0 in the case of dichromacy.
  • As a result, the two elements for determining the severity of the color vision deficiency degree can be enumerated using Equation 1 and Equation 2. Medically, a color vision deficiency is induced through various combinations of the two elements. Therefore, it is possible to reflect and enumerate the severity of the color vision deficiency degree of a human with a color vision deficiency more correctly by giving weights to Z, the enumerated value of the shift of the abnormal cone cells, and to I, the enumerated value of the decrease in the response intensity of the abnormal cone cells, respectively.
  • Therefore, the shift phenomenon of the cone cells is expressed by Z_W in Equation 3, wherein Z_W is the product of Z and w_Z, in which Z is the value enumerated from the shift of the abnormal cone cells, expressed by Equation 1, and w_Z is a weight.

    Z_W = w_Z × Z = w_Z × (α / α_max)    (Equation 3)
  • In addition, the decrease in the intensity of the response value of the cone cells is expressed by I_W in Equation 4, wherein I_W is the product of I and w_I, in which I is the value enumerated from the decrease in the intensity of the response value of the abnormal cone cells, expressed by Equation 2, and w_I is a weight.

    I_W = w_I × I = w_I × (β / β_max)    (Equation 4)
  • As a result, in Equation 5, the severity of the color vision deficiency is obtained by combining the two weighted elements.

    N = (Z_W + I_W) / (w_Z,max + w_I,max)
      = (w_Z × Z + w_I × I) / (w_Z,max + w_I,max)
      = { w_Z × (α / α_max) + w_I × (β / β_max) } / (w_Z,max + w_I,max)    (Equation 5)
  • Here, the value of N is a numerical value indicating the degree of the color vision deficiency, normalized to lie between 0.0 and 1.0. The value of N is obtained by adding the product of Z and the weight w_Z to the product of I and the weight w_I, and then normalizing the resultant value to between 0.0 and 1.0, wherein Z is the value enumerated from the extent to which the abnormal cone cells have shifted toward the other cone cells among the LMS cone cells of a human with a color vision deficiency, and I is the value enumerated from the decrease in the response intensity of the abnormal cone cells.
  • Because the peak values of Z and I are 1.0, the normalization is executed by dividing the above resultant value by the sum of w_Z,max and w_I,max, wherein w_Z,max is the peak value of the weight w_Z, and w_I,max is the peak value of the weight w_I. Finally, the numerical description value of the color vision deficiency degree is obtained by truncating N to one decimal place. Consequently, as shown in Table 1, the numerical description value of the color vision deficiency degree is 1.0 in the case of a dichromat. In the case of an anomalous trichromat, the numerical description value is in the range of 0.0 to 0.9.
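Equations 1 through 5 can be combined into a short sketch; the α, β, and weight values used in the examples are illustrative, not medically verified constants.

```python
def severity_N(alpha, alpha_max, beta, beta_max, w_z, w_i, w_z_max, w_i_max):
    """Severity per Equations 1-5: weighted sum of the shift value Z and
    the intensity-decrease value I, normalized by the maximum weights."""
    alpha = min(alpha, alpha_max)   # dichromacy: alpha clamps to alpha_max
    beta = min(beta, beta_max)      # dichromacy: beta clamps to beta_max
    Z = alpha / alpha_max           # Equation 1
    I = beta / beta_max             # Equation 2
    return (w_z * Z + w_i * I) / (w_z_max + w_i_max)  # Equation 5

# Anomalous trichromat: 5 nm shift out of a 10 nm maximum, half the
# maximum intensity decrease, unit weights -> N = 0.5.
n_mild = severity_N(5.0, 10.0, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0)
# Dichromat: both elements at (or beyond) their maxima -> N = 1.0.
n_severe = severity_N(20.0, 10.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0)
```

The clamping of α and β mirrors the rule that values beyond the medical maxima (or absent cone cells) are treated as dichromacy, forcing N to 1.0.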
  • The second and third methods for enumerating the degree of color vision deficiency, unlike the first method, use the results of a color vision deficiency test. The methods for testing color vision deficiency are divided into pseudoisochromatic tests, color arrangement tests, and color light tests. The most representative pseudoisochromatic test is the Ishihara test. It is the most generally used of the testing methods because it is very easy and rapid. However, it has the disadvantage that it is difficult to test the degree of color vision deficiency in detail.
• The color arrangement tests have a disadvantage in that the time required for testing is long and the analysis of color vision deficiency is difficult when compared to the pseudoisochromatic tests. However, the color arrangement tests have an advantage in that it is possible to correctly test the type and degree of color vision deficiency. The most representative test among the color arrangement tests is the Farnsworth-Munsell (FM) hue test. Finally, there are anomaloscope tests that use color light. These tests are known as the most capable of accurately examining red-green anomalous trichromacy. In particular, these tests easily subdivide the degree of color vision deficiency.
• In accordance with the second method of the present invention, the FM hue test is used for enumerating the degree of color vision deficiency. The degree in the severity of color vision deficiency is enumerated by using the total error score (TES) acquired after the FM hue examination. The degrees in the severity of color vision deficiencies are enumerated from 0.1 to 1.0 in accordance with the total error score in Equation 6:

N = (E − E_min)/(E_max − E_min), when E_min < E < E_max; N = 1.0, when E ≥ E_max   Equation 6
• Here, E is the total error score. If the total error score is smaller than E_min, it is determined that the subject is normal without any color vision deficiency. If the total error score is larger than E_min but smaller than E_max, it is determined that the subject has an anomalous trichromat deficiency. In anomalous trichromat deficiencies, the numerical value N of the color vision deficiency degree is determined by the proportion occupied by the total error score of the subject in the entire range of the total error score. In this case, the numerical value N of the color vision deficiency degree has a value from 0.1 to 0.9, obtained by rounding off the lower fractions. In the case of dichromat deficiencies, the numerical value N of the color vision deficiency degree is always 1.0. FIG. 4 shows an example of the methods for enumerating color vision deficiency degrees using the FM hue test.
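Equation 6 can be sketched as follows; the E_min/E_max bounds used in the example below are invented for illustration and are not values given in the text:

```python
def severity_from_fm_tes(E, E_min, E_max):
    """Degree N from the Farnsworth-Munsell total error score (Equation 6).

    E_min/E_max bound the anomalous-trichromat range; concrete bounds are
    left open by the text and must be supplied by the examiner.
    """
    if E <= E_min:
        return 0.0                     # normal color vision
    if E >= E_max:
        return 1.0                     # dichromat
    n = (E - E_min) / (E_max - E_min)  # proportion within the TES range
    return max(0.1, min(0.9, round(n, 1)))  # anomalous trichromat, 0.1-0.9
```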
• In accordance with the third method, the present invention uses an anomaloscope for enumerating the color vision deficiency degree. Nowadays, anomaloscopes can be used only for examining red-green anomalous trichromacy. The present invention enumerates the color vision deficiency degree using the Nagel anomaloscope, which is the most representative anomaloscope. The Nagel anomaloscope consists of two parts: the first part is a test field, in which a pure yellow color is emitted, and the second part is a mixture field, in which a red color and a green color are jointly emitted and produce a yellow color. The Nagel anomaloscope is provided with two adjustment devices: the first adjustment device is used to adjust the illumination of the test field, and the second adjustment device is used to adjust the ratio of red to green in the mixture field. The subject adjusts the colors emitted from the test field and the mixture field to be identical, using the two adjustment devices while viewing the anomaloscope with both eyes. The examiner determines the degree of severity and the type of the color vision deficiency by analyzing the values of the two adjustment devices adjusted by the subject. The ratio of red to green has a value from 0 to 73: 0 indicates a pure green color and 73 indicates a pure red color. The numerical range of 1 to 72 indicates a mixed color generated by adding red to green; the proportion occupied by red in the mixed color increases as the value increases, while the proportion occupied by green increases as the value decreases. The numerical value is usually set to 43 before initiating the test, so that a yellow color is generated in the mixture field. If the value of the subject ranges from 40 to 45, the subject is determined as normal. The degree in the severity of the color vision deficiency is enumerated from 0.1 to 1.0 in Equation 7:

N = R_d/R_th, when R_d ≤ R_th; N = 1.0, when R_d > R_th   Equation 7
Here, R_d = R_max − R_min, and R_th = R_normal min for a green color vision deficiency, or R_th = 73 − R_normal max for a red color vision deficiency.
• In Equation 7, R_d indicates the range of the red/green ratio section in the mixture field that the subject recognizes as identical to the test field. That is, R_d indicates the distance between the minimum value R_min and the maximum value R_max in the red/green ratio section range. The larger the value of R_d, the more severe the degree of the color vision deficiency. A normal human has the minimum value R_normal min and the maximum value R_normal max in the red/green ratio section range; that is, for a normal human, the value of R_d is (R_normal max − R_normal min). As a result of performing the anomaloscope test, if the distance value R_d is smaller than the limit value R_th, it is determined that the user has the deficiency of anomalous trichromacy; and if R_d is larger than R_th, it is determined that the user has the deficiency of dichromacy. The limit value R_th varies in accordance with the type of color vision deficiency: in the case of a green color vision deficiency, R_th equals R_normal min, and in the case of a red color vision deficiency, R_th equals (73 − R_normal max). Using these numerical values, the numerical value N of the color vision deficiency degree is determined by the ratio between R_th and R_d in the case of anomalous trichromacy, wherein R_th is the longest distance in the red/green ratio section range beyond which the color vision deficiency is determined as dichromacy, and R_d is the distance within the red/green ratio section range of the subject. In this case, N has a value from 0.1 to 0.9, obtained by rounding off the lower fractions. In the case of dichromacy, the color vision deficiency degree N is always 1.0. FIG. 5 shows an example of the method for enumerating the color vision deficiency degree using the results of the anomaloscope test.
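A sketch of Equation 7, assuming the normal matching range of 40 to 45 described above; the function name and interface are illustrative:

```python
def severity_from_anomaloscope(r_min, r_max, deficiency,
                               r_normal_min=40, r_normal_max=45):
    """Degree N from a Nagel anomaloscope match range (Equation 7).

    r_min/r_max: subject's matching range on the 0-73 red/green scale.
    r_normal_min/r_normal_max: normal matching range (40-45 per the text).
    deficiency: "green" or "red".
    """
    r_d = r_max - r_min                # width of the subject's match range
    if deficiency == "green":
        r_th = r_normal_min            # R_th for a green deficiency
    else:
        r_th = 73 - r_normal_max       # R_th for a red deficiency
    if r_d > r_th:
        return 1.0                     # dichromat
    return max(0.1, min(0.9, round(r_d / r_th, 1)))  # anomalous trichromat
```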
Following Table 2 is an example of the color vision deficiency descriptor prepared as an XML document, in which the descriptor has the structure shown in FIG. 3.
    TABLE 2a
    <!-- ########################################### -->
    <!--   Definition of VisualImpairmentType   -->
    <!-- ########################################### -->
    <complexType name="VisualImpairmentType">
      <sequence>
        <element name="ColorVisionDeficiency" type="ColorVisionDeficiencyType" minOccurs="0"/>
      </sequence>
      <attribute name="ColorVisionDeficiencyOrNot" type="boolean" use="required"/>
    </complexType>
    TABLE 2b
    <!-- ########################################### -->
    <!--  Definition of ColorVisionDeficiency   -->
    <!-- ########################################### -->
    <complexType name="ColorVisionDeficiencyType">
      <sequence>
        <element name="ColorVisionDeficiencyType" type="ColorVisionDeficiencyTypeType"/>
        <element name="ColorVisionDeficiencyDegree" type="ColorVisionDeficiencyDegreeType"/>
      </sequence>
      <attribute name="Sight" type="float" use="optional"/>
      <attribute name="IlluminanceDegree" type="float" use="optional"/>
    </complexType>
    <simpleType name="ColorVisionDeficiencyTypeType">
      <restriction base="string">
        <enumeration value="Red-Deficiency"/>
        <enumeration value="Green-Deficiency"/>
        <enumeration value="Blue-Deficiency"/>
        <enumeration value="CompleteColorBlindness"/>
      </restriction>
    </simpleType>
    <complexType name="ColorVisionDeficiencyDegreeType">
      <choice>
        <element name="NumericDegree" type="mpeg7:zeroToOneType"/>
        <element name="TextualDegree">
          <simpleType>
            <restriction base="string">
              <enumeration value="Severe"/>
              <enumeration value="Mild"/>
            </restriction>
          </simpleType>
        </element>
      </choice>
    </complexType>
  • FIG. 6 is a detailed flowchart of the adaptation step (step 206) shown in FIG. 2.
• As shown in FIG. 6, the color vision deficiency degree of the user is determined from the color vision characteristic descriptor as described above (step 402). If the user is determined to be an anomalous trichromat as the result of the determination, an adaptation process is executed for the anomalous trichromat (step 404). If the user is determined to be a dichromat, a separate adaptation process is executed for the dichromat (step 406). If the textual description 317 of the color vision deficiency degree descriptor 316 is “Severe” (dichromat) or the numerical description 318 is 1.0 in FIG. 3, the user is a dichromat, and thus the adaptation process for the dichromat is performed. Whereas, if the textual description 317 of the color vision deficiency degree descriptor 316 is “Mild” (anomalous trichromacy) or the numerical description 318 is 0.0-0.9, the user is an anomalous trichromat, and thus the adaptation process for the anomalous trichromat is performed.
• FIG. 7 is a detailed flowchart of an example of the adaptation process for an anomalous trichromat (step 404). At first, an LMS response function expressing the vision characteristics of the user, who is an anomalous trichromat, is obtained (step 502). The method for obtaining the LMS response function will be specifically described below. Next, the externally inputted visual contents are transformed from the RGB color space to the LMS color space (step 504). Then, the inputted visual contents are transformed using the inverse function of the user LMS response function (step 506), and the visual contents transformed in this manner in the LMS space are transformed back to the RGB color space (step 508).
  • Next, the principle of the adaptation method of an anomalous trichromat in accordance with the present invention is specifically described with reference to FIGS. 8 to 11. FIG. 8 shows the spectral sensitivity of the LMS cone cells for the visible wavelengths of a normal human.
  • FIG. 9 shows the RGB emission curves of a CRT monitor with P22 phosphor. As described above, a human discriminates colors by visual cells in the eyes that recognize light reflected from an object. However, when a human recognizes colors through an image display device, unlike the case in which a human recognizes colors by directly viewing the object, the colors are recognized differently due to the characteristics of the image display device and the characteristics of each individual's eyes. Therefore, in order to allow the human to accurately grasp the finally recognized colors, the characteristics of the spectral emission function of the corresponding image display device should be considered. In general, the characteristics of the spectral emission function of an image display device can be measured by using a spectroradiometer, in which those characteristics appear differently in accordance with the characteristics and the types of image display devices. In this embodiment, the characteristics of the RGB emission function of a CRT monitor with P22 phosphor are measured using a spectroradiometer.
• FIG. 10 expresses stimuli in the LMS color space. The colors measured with a spectroradiometer are not equal to those recognized by a human. The former is merely a physical measurement of colors. The colors finally recognized by a human are a result of a composite reaction between the LMS characteristics of cone cells and the RGB characteristics of an image display device. The colors emitted from the image display device are transformed and recognized in accordance with the characteristics of the three types of cone cells. FIG. 10 expresses each of the RGB values recognized by the three types of cone cells on the LMS orthogonal coordinate system. All colors recognized using an image display device are present in the hexahedron formed by the points O, R, Y, G, B, M, W and C.
• The LMS values (L_Q, M_Q, S_Q) of an optional stimulus Q can be obtained by a transformation matrix that is derived by integrating the LMS functions of the cone cells (FIG. 8) and the RGB spectrum emission curves measured with a spectroradiometer (FIG. 9) over each wavelength. The equation for obtaining the LMS transformation matrix T_normal of a normal human is expressed in Equation 8 below:

[L, M, S]^T = T_normal · [R, G, B]^T,

T_normal = | L_r^normal  L_g^normal  L_b^normal |
           | M_r^normal  M_g^normal  M_b^normal |
           | S_r^normal  S_g^normal  S_b^normal |

where L_r = k_l ∫ E_r(λ)L(λ)dλ, L_g = k_l ∫ E_g(λ)L(λ)dλ, L_b = k_l ∫ E_b(λ)L(λ)dλ; M_r = k_m ∫ E_r(λ)M(λ)dλ, M_g = k_m ∫ E_g(λ)M(λ)dλ, M_b = k_m ∫ E_b(λ)M(λ)dλ; S_r = k_s ∫ E_r(λ)S(λ)dλ, S_g = k_s ∫ E_g(λ)S(λ)dλ, S_b = k_s ∫ E_b(λ)S(λ)dλ.   Equation 8
• In Equation 8, E_r(λ), E_g(λ), and E_b(λ) indicate the spectrum powers emitted by an image display device at a wavelength λ for the R, G, and B stimuli, respectively, and L(λ), M(λ), and S(λ) indicate the spectral response values absorbed by the cone cells at the wavelength λ. The maximum emission value of each phosphor in an image display device forms a neutral LMS response value. Each neutral response value should have an ideal emission function characteristic in order to form a white point. If an image display device has such an ideal condition, the constants k_l, k_m, and k_s are selected to satisfy ΣL = ΣM = ΣS = 1.
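The construction of T_normal in Equation 8 can be sketched numerically. The Gaussian spectra below are synthetic stand-ins; real E_r/E_g/E_b curves would be measured with a spectroradiometer, and L/M/S taken from published cone response data:

```python
import numpy as np

wl = np.arange(380.0, 781.0)                 # wavelength grid, 1 nm steps

def gaussian(peak, width):
    """Synthetic bell-shaped spectrum, a placeholder for measured curves."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Illustrative display emission spectra (R, G, B) and cone responses (L, M, S)
E = np.stack([gaussian(610, 30), gaussian(545, 35), gaussian(450, 25)])
LMS = np.stack([gaussian(565, 45), gaussian(540, 45), gaussian(445, 30)])

# T[i, j] ~ integral of E_j(lambda) * cone_i(lambda) d(lambda), 1 nm spacing
T = (LMS[:, None, :] * E[None, :, :]).sum(axis=2)

# Pick k_l, k_m, k_s so each cone's responses sum to 1, echoing the
# white-point condition "sum L = sum M = sum S = 1" stated in the text.
T_normal = T / T.sum(axis=1, keepdims=True)
```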
• Unlike dichromacy, anomalous trichromacy is the state in which all three types of cone cells exist, but they do not function normally. Because the deficiency varies with the degree of anomalous trichromacy, unlike dichromacy, it is very difficult to accurately express the colors recognized by an anomalous trichromat. However, in accordance with several papers studying eyesight, in the case of anomalous trichromacy, it is assumed that the peak sensitivity of the LMS cone cells is shifted by a certain wavelength. Because the L-cone cells are shifted in protanomaly, the M-cone cells in deuteranomaly, and the S-cone cells in tritanomaly, two types of cone cells overlap more than in a normal human. Therefore, an anomalous trichromat lacks the capability of discriminating colors when compared to a normal human. FIG. 11 shows the spectral sensitivity of protanomaly, in which the peak sensitivity of the L-cone cells is shifted about 10 nm.
• Unlike the simulation of dichromacy, the color sensation of an anomalous trichromat can be directly obtained by the transformation matrix that transforms light emitted from an image display device into the colors recognized by the defective cone cells of the anomalous trichromat. Transformation matrices are obtained in accordance with the type of anomalous trichromacy: protanomaly is given the transformation matrix T^L_abnormal in Equation 9, deuteranomaly is given the transformation matrix T^M_abnormal in Equation 10, and tritanomaly is given the transformation matrix T^S_abnormal in Equation 11. That is, it is possible to obtain direct transformation matrices by applying to Equation 8 the LMS response function of the deformed cone cells of an anomalous trichromat instead of the LMS response function of a normal human.
• However, for such an approach, the enumeration of the LMS transformation matrix T_abnormal of a human with a color vision deficiency should come first. As apparent from Equation 8, in order to enumerate T_abnormal, it is necessary to know the spectral response functions L′(λ), M′(λ), and S′(λ) of the cone cells of a human with a color vision deficiency along with the display characteristics E_r(λ), E_g(λ), and E_b(λ). However, an important problem in practice is how to obtain L′(λ), M′(λ), and S′(λ). Even if it becomes possible for an expert to measure those characteristics, a problem still remains in that a method should be devised for inputting the measured data into an adaptation system for use in adaptation.
  • As described in reference to Equation 1 through Equation 5, the present invention proposes a method for expressing the degree of anomalous trichromacy with simple numerical values by modeling the mechanism of anomalous trichromacy in consideration of the spectral transition of LMS cone cells and the variation of the response intensity. The simplified numerical values for the degree of anomalous trichromacy are very effectively used to approximate the spectral response functions L′(λ), M′(λ), and S′(λ) of the cone cells of anomalous trichromats together with the information on the types of anomalous trichromacy. Through these procedures, it becomes possible to enumerate Tabnormal and thus to very easily and effectively express color vision deficiency of an anomalous trichromat for the first time.
• Here, the response functions of the defective cone cells of an anomalous trichromat cover the cases in which one type of LMS cone cell is shifted toward another type of cone cell by several nanometers to tens of nanometers, and in which the response intensity of the LMS cone cells decreases.
• The original color image information, i.e. (R, G, B), is directly transformed to (L′, M′, S′) in the LMS space by using the LMS transformation matrix of each anomalous trichromat; in the transformation procedure, protanomaly is expressed in Equation 9, deuteranomaly in Equation 10, and tritanomaly in Equation 11:

[L′, M′, S′]^T = T^L_abnormal · [R, G, B]^T,
T^L_abnormal = | L_r^abnormal  L_g^abnormal  L_b^abnormal |
               | M_r^normal    M_g^normal    M_b^normal   |
               | S_r^normal    S_g^normal    S_b^normal   |   Equation 9

[L′, M′, S′]^T = T^M_abnormal · [R, G, B]^T,
T^M_abnormal = | L_r^normal    L_g^normal    L_b^normal   |
               | M_r^abnormal  M_g^abnormal  M_b^abnormal |
               | S_r^normal    S_g^normal    S_b^normal   |   Equation 10

[L′, M′, S′]^T = T^S_abnormal · [R, G, B]^T,
T^S_abnormal = | L_r^normal    L_g^normal    L_b^normal   |
               | M_r^normal    M_g^normal    M_b^normal   |
               | S_r^abnormal  S_g^abnormal  S_b^abnormal |   Equation 11
• A color stimulus transformed to (L′, M′, S′) in the LMS space is transformed again by the LMS inverse transformation matrix of a normal human in Equation 12, whereby it is possible to obtain the colors, in RGB values, practically recognized by an anomalous trichromat. By this method, it is possible to simulate the colors seen by anomalous trichromats so that normal humans are capable of seeing those colors. At first, the original color information, i.e. (R, G, B), is transformed to (L′, M′, S′) using the LMS transformation matrix of the anomalous trichromat in Equation 12(1), and then the transformed (L′, M′, S′) is multiplied by the LMS inverse transformation matrix of normal humans to obtain (R_simulated, G_simulated, B_simulated), which is what the anomalous trichromat recognizes, in Equation 12(2), thereby executing the simulation. If Equation 12(1) and Equation 12(2) are combined, it is possible to execute color simulation for anomalous trichromats using Equation 12(3). In general, the colors simulated for anomalous trichromats are not identical to the original colors, as indicated in Equation 12(4). The more severe the degree of anomalous trichromacy, the greater the difference between the simulated colors and the original colors.

Equation 12:
(1) [L′, M′, S′]^T = [T_abnormal] · [R, G, B]^T
(2) [R_simulated, G_simulated, B_simulated]^T = [T_normal]^-1 · [L′, M′, S′]^T
(3) [R_simulated, G_simulated, B_simulated]^T = [T_normal]^-1 · [T_abnormal] · [R, G, B]^T
(4) [R_simulated, G_simulated, B_simulated]^T ≠ [R, G, B]^T
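The simulation of Equation 12 can be sketched as follows. The two matrices are illustrative placeholders; real ones come from Equation 8 (T_normal) and Equations 9-11 (T_abnormal, with one cone row replaced by the shifted response):

```python
import numpy as np

# Illustrative normal LMS transformation matrix (placeholder values).
T_normal = np.array([[0.40, 0.55, 0.05],
                     [0.25, 0.65, 0.10],
                     [0.02, 0.10, 0.88]])

# Protanomaly-like T_abnormal: only the L-cone row differs from normal.
T_abnormal = T_normal.copy()
T_abnormal[0] = [0.33, 0.60, 0.07]

def simulate(rgb):
    """Equation 12(3): RGB_simulated = T_normal^-1 . T_abnormal . RGB."""
    return np.linalg.inv(T_normal) @ T_abnormal @ np.asarray(rgb, float)
```

For a pure red input the simulated color differs from the original, reflecting Equation 12(4).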
• An adaptation process for anomalous trichromats is performed in such a manner that the color discriminating capability of anomalous trichromats is enhanced by emphasizing the brightness and saturation of a color that is difficult for an anomalous trichromat of a given anomalous trichromacy type to discriminate. That is, this is a method to compensate for the decrease in the color discrimination capability of an anomalous trichromat of a given anomalous trichromacy type due to shifted cone cells, and is expressed in Equation 13. Specifically, the adaptively transformed colors, i.e. (R_adapted, G_adapted, B_adapted), are first obtained by multiplying the original colors (R, G, B) by the adaptation matrix [A] in Equation 13(1). Here, the adaptation matrix [A] is applied so that the result of simulating the adaptively transformed colors (R_adapted, G_adapted, B_adapted) to the colors (R_simulated, G_simulated, B_simulated) recognized by the anomalous trichromat is equal to the original colors (R, G, B) in Equation 13(2).

Equation 13:
(1) [R_adapted, G_adapted, B_adapted]^T = [A] · [R, G, B]^T
(2) [R_simulated, G_simulated, B_simulated]^T = [T_normal]^-1 · [T_abnormal] · [R_adapted, G_adapted, B_adapted]^T = [R, G, B]^T
  • That is, the goal of the contents adaptation for anomalous trichromats is to adaptively transform the RGB colors of the original contents, so that a corresponding type of anomalous trichromat can see the contents as a normal human sees the contents. Here, the contents adaptive matrix [A] for anomalous trichromats can be expressed in Equation 14 below. Although the adaptively transformed contents may be very factitious to normal humans, anomalous trichromats can see the adaptively transformed contents at the same or approximate level as normal humans see the original contents.
A = [T_abnormal]^-1 · [T_normal]   Equation 14
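The round trip of Equations 13-14 can be sketched as follows: the content is pre-compensated by A so that, after the viewer's deficiency acts on it, the perceived colors match the originals. The matrices are the same illustrative placeholders used for the simulation sketch:

```python
import numpy as np

T_normal = np.array([[0.40, 0.55, 0.05],
                     [0.25, 0.65, 0.10],
                     [0.02, 0.10, 0.88]])
T_abnormal = T_normal.copy()
T_abnormal[0] = [0.33, 0.60, 0.07]     # protanomaly-like L-cone row

A = np.linalg.inv(T_abnormal) @ T_normal          # Equation 14

rgb = np.array([0.8, 0.4, 0.2])
rgb_adapted = A @ rgb                             # Equation 13(1)

# Equation 13(2): simulating the adapted colors recovers the originals.
rgb_seen = np.linalg.inv(T_normal) @ T_abnormal @ rgb_adapted
```

Note that in a practical system the adapted values would also need clipping back into the displayable RGB range, a step the equations leave implicit.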
  • FIG. 12 is a detailed flowchart of an example of the adaptation process for dichromacy shown in FIG. 6. As shown in the drawing, a deficiency region, which is difficult for the user to detect, is first discriminated in accordance with the color vision deficiency type extracted from the color vision characteristic descriptor (step 1002). Next, at least one of hue, saturation or intensity of the pixels corresponding to the deficiency region is tuned (step 1004). Thereby, the visual contents are adaptively transformed to be suitable for the color vision characteristics of a user with the deficiency of dichromacy. The specific transformation process is described in detail below.
• FIG. 13 is a view displaying the color spaces recognized by a human with the deficiency of dichromacy, in which FIG. 13 a shows that for protanopy or deuteranopy and FIG. 13 b shows that for tritanopy. Expressing the colors recognized by humans with color vision deficiencies is essential for an adaptation process for a color vision deficiency. Several papers have already verified simulation processes for expressing colors recognized by dichromats. Humans with the deficiency of protanopy or deuteranopy recognize a color of short wavelength as blue and a color of long wavelength as yellow. Therefore, the colors for humans with the deficiency of protanopy or deuteranopy can be expressed by two colors with various degrees of intensity and saturation. Although it is very rare, humans with the deficiency of tritanopy recognize a color of short wavelength as cyan and a color of long wavelength as red. Therefore, the colors for humans with the deficiency of tritanopy can also be expressed by two colors with various degrees of intensity and saturation. These two colors will be seen as identical colors by both humans with color vision deficiencies and normal humans. Medically, it is possible to assume that these two colors are a blue of 475 nm and a yellow of 575 nm for protanopy or deuteranopy, and a cyan of 485 nm and a red of 660 nm for tritanopy.
  • FIG. 13 expresses the colors recognized by humans with the deficiency of dichromacy. In FIG. 13, point E (LE, ME, SE) is the brightest metamer among the equal-energy stimuli of a corresponding image display device. Therefore, OE indicates neutral stimuli equally recognized by a normal human and a dichromat. Two limited stimulus planes are formed centering on these stimuli. In other words, these planes form two unchangeable colors for given dichromat types. A certain color stimulus Q in the LMS space is substituted with a color on the two planes in accordance with the wavelength thereof. In FIG. 13, the color stimuli of points P1 and P2 of protanopy are all substituted with the color stimulus of point P, and the color stimuli of points D1 and D2 of deuteranopy are all substituted with the color stimulus of point D. Similarly, the color stimuli of points T1 and T2 of tritanopy are all substituted with the color stimulus of point T.
• It is assumed that the color stimulus of a dichromat substituted from the certain stimulus Q is Q′(L_Q′, M_Q′, S_Q′), and that the color stimulus forming the two unchangeable color planes is A(L_A, M_A, S_A). The substituted stimulus Q′ always lies on the plane whose normal vector is E×A; that is, Q′ is always orthogonal to that normal vector. Therefore, Q′ can be expressed in Equation 15. In addition, Equation 15 can be expressed by the linear equation of the L_Q′, M_Q′, and S_Q′ values in Equation 16.
(E × A) · Q′ = 0   Equation 15

a·L_Q′ + b·M_Q′ + c·S_Q′ = 0   Equation 16

Here, a = M_E·S_A − S_E·M_A, b = S_E·L_A − L_E·S_A, c = L_E·M_A − M_E·L_A
• Therefore, the transformation equations from stimulus Q to Q′ are finally expressed in Equation 17 (protanopy), Equation 18 (deuteranopy), and Equation 19 (tritanopy):

[L_Q′, M_Q′, S_Q′]^T = | 0  −b/a  −c/a |
                       | 0   1     0   | · [L, M, S]^T = [L_p, M_p, S_p]^T
                       | 0   0     1   |
Here, λ_A = 575 nm when S_Q/M_Q < S_E/M_E, 475 nm otherwise.   Equation 17

[L_Q′, M_Q′, S_Q′]^T = |  1     0    0   |
                       | −a/b   0  −c/b  | · [L, M, S]^T = [L_d, M_d, S_d]^T
                       |  0     0    1   |
Here, λ_A = 575 nm when S_Q/L_Q < S_E/L_E, 475 nm otherwise.   Equation 18

[L_Q′, M_Q′, S_Q′]^T = |  1     0    0 |
                       |  0     1    0 | · [L, M, S]^T = [L_t, M_t, S_t]^T
                       | −a/c  −b/c  0 |
Here, λ_A = 660 nm when M_Q/L_Q < M_E/L_E, 485 nm otherwise.   Equation 19
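The projection of Equations 15-19 can be sketched as follows: the normal (a, b, c) of Equation 16 is the cross product E×A, and one LMS coordinate is recomputed so the stimulus lands on the invariant plane. The E and A values used in the test are illustrative LMS triples, not figures from the text:

```python
import numpy as np

def dichromat_projection(q, e, a, kind="protan"):
    """Project stimulus q onto the plane spanned by e (neutral axis E) and
    a (invariant stimulus A), so that (E x A) . Q' = 0 (Equation 15)."""
    a_, b_, c_ = np.cross(e, a)       # plane normal (a, b, c) of Equation 16
    L, M, S = q
    if kind == "protan":              # Equation 17: recompute L from M, S
        return np.array([(-b_ * M - c_ * S) / a_, M, S])
    if kind == "deutan":              # Equation 18: recompute M from L, S
        return np.array([L, (-a_ * L - c_ * S) / b_, S])
    return np.array([L, M, (-a_ * L - b_ * M) / c_])   # Eq. 19: tritan
```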
• FIG. 14 is a detailed flowchart of an example of the method for discriminating a deficiency region in FIG. 12. As shown in FIG. 14, the visual contents are first transformed from the RGB color space to the CMYK color space in Equation 20 (step 1202). Next, a region to be adaptively transformed is determined (step 1204). This is executed by discriminating the pixels corresponding to a predetermined region of the CMYK space in accordance with the color vision deficiency type. In the case of protanopy or deuteranopy, the deficiency region is determined in Equation 21, and in the case of tritanopy, in Equation 22.

[C, M, Y]^T = [c, m, y]^T − K   Equation 20
• Here, c, m, y are values obtained as the complements of R, G, B, respectively: [c, m, y]^T = 1 − [R, G, B]^T
• In addition, K indicates the minimum value among (c, m, y). The color deficiency regions R_adaptation(x, y) for protanopy or deuteranopy distributed in the space are detected in Equation 21:

R_adaptation(x, y) = 1, when M(x, y) ≥ Th1; 0, otherwise   Equation 21
• Here, (x, y) indicates positions of pixels in an image, M(x, y) indicates the magenta values distributed in the space, and Th1 indicates the threshold of values determined as magenta. In the case of tritanopy, the color deficiency region R_adaptation(x, y) is detected as follows:

R_adaptation(x, y) = 1, when Y(x, y) ≥ Th2; 0, otherwise   Equation 22
  • Here, Y(x,y) indicates the yellow values distributed in the space. Th2 indicates the threshold of the yellow values for finding a blue that is the complementary of yellow using the yellow values.
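Equations 20-22 can be sketched as a vectorized region detector. The thresholds are illustrative, since the text leaves Th1/Th2 open:

```python
import numpy as np

def deficiency_region(rgb, kind="protan_deutan", th=0.3):
    """Flag pixels whose magenta (protanopy/deuteranopy) or yellow
    (tritanopy) component exceeds a threshold (Equations 20-22).

    rgb: float array of shape (..., 3) with values in 0.0-1.0.
    """
    cmy = 1.0 - rgb                        # c, m, y complements (Equation 20)
    k = cmy.min(axis=-1, keepdims=True)    # K = min(c, m, y)
    C, M, Y = np.moveaxis(cmy - k, -1, 0)  # CMY after black removal
    if kind == "protan_deutan":
        return (M >= th).astype(np.uint8)  # Equation 21
    return (Y >= th).astype(np.uint8)      # Equation 22
```

A red pixel has a high magenta component and is flagged for protanopy/deuteranopy, while a green pixel is not; both carry yellow and are flagged for tritanopy.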
• The adaptation processes for dichromats are divided into an adaptation process for protanopy or deuteranopy and an adaptation process for tritanopy. Humans with protanopy or deuteranopy see all of the colors viewed through an image display device as blue or yellow. That is, the red of long wavelength in the red color region is seen as yellow and the red of short wavelength is seen as blue. Similarly, the green of long wavelength in the green color region is seen as yellow and the green of short wavelength is seen as blue. Therefore, the goal of the adaptation for dichromats is to find the red and green color regions that are indistinguishable by humans with protanopy or deuteranopy and to make those regions distinguishable. If only one of the two, either red or green, is changed into a color that is distinguishable by humans with a deficiency of protanopy or deuteranopy, the two colors become distinguishable from each other. In general, the pixels of the visual contents consist of three values, RGB (Red, Green, Blue), and these values have hue, saturation, and intensity. Therefore, the inherent color of a pixel is just its hue. Even if pixels have the same hue, they are expressed differently by intensity or saturation.
  • In the process of contents adaptation for dichromats, the HSI (Hue, Saturation, Intensity) color space is used in order to tune the hues and intensities of colors. The HSI color space is known to be useful to divide an object of an image. Therefore, the adaptation process is performed in such a manner that the RGB colors are transformed into the HSI color space to obtain object information on an image, and the colors indistinguishable by the dichromats are changed.
• FIG. 15 a indicates the hues (1302) of colors recognized by normal humans. Here, Θ means the hue angle; red (R) is positioned at 0°, and the hues are distributed counterclockwise from 0° to 360°. Typically, yellow (Y) is positioned at 60°, green (G) at 120°, cyan (C) at 180°, blue (B) at 240°, and magenta (M) at 300°.
  • However, unlike normal humans, dichromats recognize all colors recognized by normal humans as two hues. FIG. 15 b indicates the hues (1304) recognized by protanopy or deuteranopy. FIG. 15 c indicates the hues (1306) recognized by tritanopy. That is, dichromats discriminate colors based on the difference in saturation and intensity because they are able to recognize only two hues. As a result, dichromats have extremely poor capability for recognizing information from colors of an image.
  • FIG. 16 shows a simulation of the hues recognized by protanopy, deuteranopy and tritanopy in comparison to the hues, from 0° to 360°, recognized by normal humans. In FIG. 16, the horizontal axis indicates the hue angles from 0° to 360° and the vertical axis indicates the hue values obtained by normalizing the hues from 0° to 360° to have values in the range of 0.0 to 1.0. As shown in FIG. 16, the hues recognized by protanopy, deuteranopy and tritanopy are divided into two types of hues.
  • FIG. 17 is a detailed flowchart of another example for the method of discriminating a deficiency region in FIG. 12. As shown in FIG. 17, the pixels of the inputted visual contents are first transformed from the RGB color space to the LMS color space (step 1502). Next, the LMS values are transformed to the limited LMS space of a user with a deficiency of dichromacy (step 1504). Then, the L-value decrease region is detected in the case of protanopy, the M-value decrease region is detected in the case of deuteranopy, and the S-value decrease region is detected in the case of tritanopy (step 1506). It is also possible to detect a color deficiency region Radaptation(x,y) by this method.
  • After detecting the color deficiency region Radaptation(x,y), the color correction is performed in the detected color deficiency region as follows. FIG. 18 is a detailed flowchart of an example of the HSI tuning method in FIG. 12. As shown in FIG. 18, the RGB values of the pixels corresponding to the detected deficiency region are first transformed into HSI values in Equation 23, and then the HSI values are corrected in Equation 24 (step 1602). Then, the corrected HSI values are transformed into RGB values again in Equation 25 (step 1604). [ R ( x , y ) G ( x , y ) B ( x , y ) ] [ H ( x , y ) S ( x , y ) I ( x , y ) ] , Equation 23
  • Here, H, S, I values are normalized values in the range of 0.0 to 1.0.

      [H′(x,y), S′(x,y), I′(x,y)]ᵀ = [H(x,y), S(x,y), I(x,y)]ᵀ + Radaptation(x,y) × [h, s, i]ᵀ,   Equation 24
  • Here, h, s, i values are adaptively transformed values in the range of 0.0 to 1.0.

      [H′(x,y), S′(x,y), I′(x,y)]ᵀ → [R′(x,y), G′(x,y), B′(x,y)]ᵀ   Equation 25
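Per pixel, Equations 23 to 25 amount to a round trip through a hue/saturation/intensity space. A sketch using Python's colorsys, with HSV standing in for HSI (an assumption, since the HSI conversion formulas are not spelled out in this passage):

```python
import colorsys

def hsi_tune(pixel_rgb, r_adapt, h=0.0, s=0.0, i=0.0):
    """Equations 23-25 per pixel; HSV stands in for HSI (assumption).
    All channels and offsets are normalized to the range 0.0-1.0."""
    H, S, V = colorsys.rgb_to_hsv(*pixel_rgb)        # Equation 23
    H = (H + r_adapt * h) % 1.0                      # Equation 24: hue wraps
    S = min(1.0, max(0.0, S + r_adapt * s))          # saturation clamped
    V = min(1.0, max(0.0, V + r_adapt * i))          # intensity clamped
    return colorsys.hsv_to_rgb(H, S, V)              # Equation 25
```

With Radaptation(x, y) = 0 outside the detected region a pixel passes through unchanged; inside the region the (h, s, i) offsets are applied in full.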
  • Another method for adaptively transforming colors in accordance with the present invention is to determine the deficiency region and deficiency degree at the same time by using proportions of cyan, magenta, and yellow instead of detecting the deficiency region in Equation 21 and Equation 22. Protanopy or deuteranopy is expressed in Equation 26 and tritanopy is expressed in Equation 27. In this case, Radaptation(x, y) is always 1 and the deficiency region and deficiency degree are determined with (h, s, i) at the same time.
      • (1) hue adaptive transformation:

            h = 0,                if H ∈ blue region
            h = θmax × M(x,y),    otherwise

      • (2) saturation adaptive transformation:

            s = α1 × M(x,y) + α2 × C(x,y)   (Equation 26)
  • Here, M(x,y) indicates the magenta values distributed in the space and C(x,y) indicates the cyan values distributed in the space. In Equation 26, h is the amount of change in hue for protanopy or deuteranopy and s is the amount of change in saturation for protanopy or deuteranopy. In the hue adaptation, if the hues of the original pixels are included in the blue region, the hue adaptation is not performed. The blue region is excluded from the object of the hue adaptation because that region is normally recognizable by protanopy or deuteranopy. θmax is the maximum value of the amount of change in hue, that is, the maximum angle by which the hue angle can move. Here, α1 and α2 are the maximum amounts of change in saturation using the magenta ratio and the cyan ratio, respectively, and have values in the range of 0.0 to 1.0.
  • In the hue and saturation adaptation for dichromats, the magenta ratio, the cyan ratio and the yellow ratio are used in Equation 26. The magenta, cyan and yellow ratios are values obtained by transforming RGB values of pixels into values in the CMYK color space and normalizing the transformed CMY values to have values in the range of 0.0 to 1.0; and the magenta, cyan and yellow ratios indicate the proportions of magenta, cyan and yellow components contained in corresponding pixels, respectively.
  • FIGS. 19a, 19b and 19c indicate a magenta ratio 1702, a cyan ratio 1704 and a yellow ratio 1706 in color distribution, respectively. First, the magenta ratio 1702 has the maximum value, the product of saturation and intensity, for a hue angle in the range of 240° to 360°. For example, if both saturation and intensity have the maximum values, that is, if both the saturation value and the intensity value are 1.0, the magenta ratio is 1.0, that is, the product of the saturation value 1.0 multiplied by the intensity value 1.0. In another example, if the saturation value is 0.5 and the intensity value is 0.5, the magenta ratio is 0.25, that is, the product of the saturation value 0.5 multiplied by the intensity value 0.5. Furthermore, the magenta ratio is always 0 for a hue angle in the range of 60° to 180°. For a hue angle in the range of 0° to 60°, the magenta ratio linearly decreases from the maximum magenta ratio at a hue angle of 0° to the minimum magenta ratio at a hue angle of 60°. For a hue angle in the range of 180° to 240°, the magenta ratio linearly increases from the minimum magenta ratio at a hue angle of 180° to the maximum magenta ratio at a hue angle of 240°.
  • The cyan ratio 1704 has the maximum value of the product of saturation multiplied by intensity for a hue angle in the range of 120° to 240°. In addition, the cyan ratio is always 0 for a hue angle in the range of 0° to 60° and for a hue angle in the range of 300° to 360°. For a hue angle in the range of 60° to 120°, the cyan ratio linearly increases from the minimum cyan ratio with a hue angle of 60° to the maximum cyan ratio with a hue angle of 120°. For a hue angle in the range of 240° to 300°, the cyan ratio linearly decreases from the maximum cyan ratio with a hue angle of 240° to the minimum cyan ratio with a hue angle of 300°.
  • The yellow ratio 1706 has the maximum value, the product of saturation multiplied by intensity, for a hue angle in the range of 0° to 120°. In addition, the yellow ratio is always 0 for a hue angle in the range of 180° to 300°. For a hue angle in the range of 120° to 180°, the yellow ratio linearly decreases from the maximum yellow ratio at a hue angle of 120° to the minimum yellow ratio at a hue angle of 180°. For a hue angle in the range of 300° to 360°, the yellow ratio linearly increases from the minimum yellow ratio at a hue angle of 300° to the maximum yellow ratio at a hue angle of 360°.
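The piecewise-linear ratios of FIGS. 19a and 19b, and their use in Equation 26, can be sketched as follows; theta_max, a1, a2 and the blue-region bounds are illustrative assumptions (this passage fixes explicit bounds only for the tritanopy cyan region):

```python
def magenta_ratio(h, s, i):
    """Magenta ratio 1702 (FIG. 19a): full strength S*I for 240-360 deg,
    zero for 60-180 deg, linear ramps on 0-60 and 180-240 deg."""
    if h >= 240.0:
        w = 1.0
    elif 60.0 <= h <= 180.0:
        w = 0.0
    elif h < 60.0:
        w = 1.0 - h / 60.0          # ramp down, 0 -> 60 deg
    else:
        w = (h - 180.0) / 60.0      # ramp up, 180 -> 240 deg
    return w * s * i

def cyan_ratio(h, s, i):
    """Cyan ratio 1704 (FIG. 19b): full strength for 120-240 deg,
    zero below 60 and above 300 deg, linear ramps in between."""
    if 120.0 <= h <= 240.0:
        w = 1.0
    elif h <= 60.0 or h >= 300.0:
        w = 0.0
    elif h < 120.0:
        w = (h - 60.0) / 60.0       # ramp up, 60 -> 120 deg
    else:
        w = (300.0 - h) / 60.0      # ramp down, 240 -> 300 deg
    return w * s * i

def equation_26(h_deg, s, i, theta_max=120.0, a1=0.5, a2=0.5,
                blue=(225.0, 255.0)):
    """Hue shift h and saturation shift s of Equation 26 for protanopy or
    deuteranopy. theta_max, a1, a2 and the blue-region bounds are
    illustrative assumptions, not values fixed by the text."""
    if blue[0] <= h_deg <= blue[1]:
        dh = 0.0                    # blue region: no hue adaptation
    else:
        dh = theta_max * magenta_ratio(h_deg, s, i)
    ds = a1 * magenta_ratio(h_deg, s, i) + a2 * cyan_ratio(h_deg, s, i)
    return dh, ds
```

For a saturated pure red (hue 0°, S = I = 1.0) the magenta ratio is 1.0, so the hue moves by the full theta_max toward magenta while the saturation shift separates it from original blues.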
  • The magenta ratio is used in the process of hue adaptation for protanopy or deuteranopy due to the following reasons. The first reason is to exclude the yellow region normally distinguishable by protanopy or deuteranopy from the objects of hue adaptation. The second reason is to simultaneously adaptively transform not only the red region indistinguishable from green, but also the magenta region indistinguishable from blue. The third reason is to gradually change the hue because an abrupt transformation of the hue may deteriorate the quality of an image. The fourth reason to use the magenta ratio in the process of saturation adaptation for protanopy or deuteranopy is to provide a difference in saturation as a measure for differentiating the color changed to blue after the adaptation from the original blue. The fifth reason to use the cyan ratio is to provide a difference in saturation as a measure for differentiating the green region seen as yellow to protanopy or deuteranopy, from the original yellow region.
  • Unlike protanopy or deuteranopy, tritanopy has a principal problem in that blue (adjacent to violet) is recognized as red and is thus indistinguishable from the original red. Tritanopy normally recognizes only blue-green (cyan) and red. Therefore, using a method similar to that used for protanopy or deuteranopy, if the hue angle of a pixel of the original image falls in the blue-green region, the hue adaptation is not performed. In general, hue angles of 165° to 195° are used as the blue-green region.
      • (1) hue adaptive transformation:

            h = 0,                 if H ∈ cyan region
            h = θmax × Y′(x,y),    otherwise

      • (2) saturation adaptive transformation:

            s = β1 × Y′(x,y) + β2 × M′(x,y)   (Equation 27)
  • Here, Y′(x,y) indicates the yellow component in the color changed by H′, that is, the H value of the original color plus 0.5, and M′(x,y) indicates the magenta value in the color changed to that HSI value. In Equation 27, h and s are the amount of change in hue and the amount of change in saturation for tritanopy, respectively. θmax is the maximum value of the amount of change in hue, that is, the maximum angle by which the hue angle can move. In the process of adaptation for tritanopy, the blue ratio and the green ratio are used; in order to use these ratios, the yellow ratio, which is the complementary-color ratio of the blue ratio, and the magenta ratio, which is the complementary-color ratio of the green ratio, are used instead of the blue and green ratios. Here, β1 and β2 are the maximum amounts of change in saturation using the blue ratio and the green ratio, respectively, and have values in the range of 0.0 to 1.0. The blue ratio is used in the process of hue adaptation for tritanopy in order to exclude the red region from the object to be adaptively transformed, and the yellow ratio complementary to blue is used in order to obtain the blue ratio.
  • In the process of adaptation for tritanopy, the green ratio is also used in addition to the blue ratio. The yellow ratio, complementary to the blue ratio, is used to obtain the blue ratio; and the magenta ratio, complementary to the green ratio, is used to obtain the green ratio. The reason to use the blue ratio is to provide a difference in saturation between the colors changed to red after the adaptation and the original red, thereby differentiating these two colors. The reason to use the green ratio is to provide a difference in saturation in order to differentiate the green region, seen as blue-green to tritanopy, from the original blue-green region.
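A corresponding sketch of Equation 27 for tritanopy; theta_max, b1 and b2 are illustrative assumptions, and the complementary ratios Y′ and M′ are taken at H′ = H + 0.5 (a 180° hue rotation), per the definitions above:

```python
def yellow_ratio(h, s, i):
    """Yellow ratio 1706 (FIG. 19c): full strength S*I for 0-120 deg,
    zero for 180-300 deg, linear ramps on 120-180 and 300-360 deg."""
    if h <= 120.0:
        w = 1.0
    elif 180.0 <= h <= 300.0:
        w = 0.0
    elif h < 180.0:
        w = (180.0 - h) / 60.0      # ramp down, 120 -> 180 deg
    else:
        w = (h - 300.0) / 60.0      # ramp up, 300 -> 360 deg
    return w * s * i

def magenta_ratio(h, s, i):
    """Magenta ratio 1702 (FIG. 19a), used here as the green complement."""
    if h >= 240.0:
        w = 1.0
    elif 60.0 <= h <= 180.0:
        w = 0.0
    elif h < 60.0:
        w = 1.0 - h / 60.0
    else:
        w = (h - 180.0) / 60.0
    return w * s * i

def equation_27(h_deg, s, i, theta_max=120.0, b1=0.5, b2=0.5):
    """Hue shift h and saturation shift s of Equation 27 for tritanopy.
    The 165-195 deg cyan region is exempted, as in the text; theta_max,
    b1 and b2 are illustrative assumptions."""
    h_prime = (h_deg + 180.0) % 360.0      # H' = H + 0.5, in degrees
    y_p = yellow_ratio(h_prime, s, i)      # Y'(x, y), blue complement
    m_p = magenta_ratio(h_prime, s, i)     # M'(x, y), green complement
    dh = 0.0 if 165.0 <= h_deg <= 195.0 else theta_max * y_p
    ds = b1 * y_p + b2 * m_p
    return dh, ds
```

A saturated blue (hue 240°) rotates to hue 60° for the ratio lookup, where the yellow ratio is maximal, so the blue moves by the full theta_max toward green, while the cyan region is left untouched.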
  • Table 3 below is a color table of adaptation for dichromats in accordance with the present embodiment.
    TABLE 3
    Type of Dichromacy         Indistinguishable Colors   Recognizable Color   Adaptively Transformed Color
    Protanopy or Deuteranopy   Red and Green              Yellow               Red → Magenta
    Tritanopy                  Blue and Yellow            Red                  Blue → Green
  • The embodiments described above are not intended to limit the scope of the present invention, but are merely provided so that those skilled in the art can readily understand and embody the present invention. Therefore, it should be appreciated that various modifications and changes can be made within the scope of the present invention. In principle, the scope of the present invention is determined by the accompanying claims.
  • INDUSTRIAL APPLICABILITY
  • In accordance with the present invention as described above, a user with a color vision deficiency is able to receive semantic information that is almost the same as that of a normal human from visual contents without a separate apparatus. As a result, the user with a color vision deficiency can freely and conveniently use multimedia contents. In addition, the present invention is applicable to the digital item adaptive parts of MPEG-7 and MPEG-21 that are international standards in media.

Claims (34)

1. A method for adaptively transforming visual contents to be suitable for color vision characteristics of a user, the method comprising the steps of:
receiving information on color vision characteristics of a user; and
transforming adaptively the visual contents in accordance with the information on color vision characteristics,
wherein the information on color vision characteristics includes descriptions as to the color vision deficiency type and the color vision deficiency degree of the user.
2. The method according to claim 1, further comprising the step of:
receiving information on a user environment,
wherein the adaptive transforming is executed in accordance with the information on color vision characteristics and the user environment.
3. The method according to claim 2, wherein the user environment is described with the illumination of the surroundings of the user.
4. The method according to claim 1, wherein the color vision deficiency degree is described numerically or textually, and the color vision deficiency degree is described with a numerical value in the range of 0.0 to 1.0 when numerically described, and wherein, in the event the user is a dichromat, the color vision deficiency degree is described as 1.0.
5. The method according to claim 1, wherein the adaptive transforming is executed by distinguishing between a dichromat and an anomalous trichromat according to the color vision deficiency degree, and approaching the dichromat and the anomalous trichromat differently.
6. The method according to claim 5, wherein the adaptive transforming for a dichromat is executed by the steps of:
differentiating a deficiency region which is difficult for the user to detect, from the visual contents according to the color vision deficiency type; and
adjusting at least one of hue, saturation and intensity of pixels in the deficiency region.
7. The method according to claim 6, wherein the differentiating of the deficiency region is executed by transforming the visual contents from RGB color space to CMYK color space, and discriminating pixels in the deficiency region by using the values of cyan, magenta, and yellow in accordance with the color vision deficiency type.
8. The method according to claim 6, wherein the differentiating of the deficiency region is executed by transforming the visual contents from RGB color space to LMS color space, transforming the transformed visual contents with LMS response function of the user, which is determined with the color vision deficiency type and the color vision deficiency degree, and measuring the degree of decrease of the respective LMS values.
9. The method according to claim 6, wherein the adjusting is executed by changing the hue and the saturation of the pixels in the deficiency region.
10. The method according to claim 5, wherein the adaptive transforming for an anomalous trichromat is executed by the steps of:
transforming the visual contents from RGB color space to LMS color space;
transforming the visual contents in LMS color space with an LMS response function of the user, which is determined with the color vision deficiency type and the color vision deficiency degree; and
transforming again the transformed visual contents from LMS color space to RGB color space.
11. A method for adaptively transforming visual contents to be suitable for the color vision characteristics of a user of an image display device, the method comprising the steps of:
receiving information on the color vision characteristics of the user;
receiving visual contents;
transforming adaptively the visual contents in accordance with the information on the color vision characteristics; and
displaying the transformed visual contents through the image display device.
12. The method according to claim 11, wherein the information on the color vision characteristics contains descriptions as to the color vision deficiency type and the color vision deficiency degree of the user.
13. The method according to claim 12, wherein the color vision deficiency degree is described by a normalized numerical value from 0.0 to 1.0, and in the event the user is a dichromat, the color vision deficiency degree is described as 1.0.
14. The method according to claim 12, wherein the numerical description of the color vision deficiency degree is determined in accordance with the shift or the intensity decrease of a response function of the user's cone cells.
15. The method according to claim 12, wherein the numerical description of the color vision deficiency degree is determined by using the total error score obtained from the Farnsworth-Munsell hue test for the user.
16. The method according to claim 12, wherein the numerical description for the color vision deficiency degree is determined by using the area of the red/green ratio section in a mixture field that is recognized by the user as identical to a test field after anomaloscope testing for the user.
17. The method according to claim 14, wherein the numerical description for the color vision deficiency degree is determined by the following equation
{ωz × (α/αmax) + ωl × (β/βmax)} / (ωz max + ωl max)
wherein α is the shift value of the user's cone cells, αmax is the maximum shift value of the user's cone cells, β is the intensity decrease value of the user's cone cells, βmax is the maximum intensity decrease value of the user's cone cells, ωz is a weighting value for the shift value, ωl is the weighting value for the intensity decrease value, ωz max is the maximum value of ωz, and ωl max is the maximum value of ωl.
18. The method according to claim 12, wherein the numerical description of the color vision deficiency degree is determined by the following equation:
{ (E − Emin)/(Emax − Emin),   Emin < E < Emax
  1.0,                        E ≥ Emax
wherein E is the total error score of the user, Emin is the minimum threshold value where the user is determined as an anomalous trichromat, and Emax is the maximum threshold value where the user is determined as an anomalous trichromat.
19. The method according to claim 12, wherein the numerical description for the color vision deficiency degree is determined by the following equation:
{ Rd/Rth,   Rd ≤ Rth
  1.0,      Rd > Rth,
where Rd = Rmax − Rmin, and
Rth = { Rnormal min,        for green color vision deficiency
        73 − Rnormal max,   for red color vision deficiency
wherein Rd is the range of a red/green ratio section in a mixture field that is recognized by the user as identical to the test field, Rnormal min and Rnormal max are the minimum and maximum values of the range of the red/green ratio section of a normal human, and Rth is the minimum threshold value of Rd where the user is determined as an anomalous trichromat.
20. The method according to claim 11, wherein the information on the color vision characteristics further comprises identification information on the user.
21. The method according to claim 11, further comprising the step of receiving information on the user's environment, wherein the visual contents are transformed in accordance with the information on the color vision characteristics and the user's environment.
22. The method according to claim 21, wherein the information on the user's environment comprises description as to the illumination of the user's surroundings.
23. A system for adaptively transforming visual contents to be suitable for the color vision characteristics of a user of an image display device, the system comprising:
means for receiving information on the color vision characteristics of the user;
means for receiving visual contents; and
a processing section for transforming adaptively the received visual contents in accordance with the information on the color vision characteristics of the user.
24. The system according to claim 23, further comprising:
means for storing the information on the color vision characteristics and supplying the information on the color vision characteristics to the processing section in a standardized XML specification.
25. The system according to claim 23, wherein the information on the color vision characteristics contains descriptions as to the color vision deficiency type and the color vision deficiency degree of the user, and the color vision deficiency degree is described with a normalized numerical value.
26. The system according to claim 25, wherein the numerical description for the color vision deficiency degree is determined by the following equation:
{ωz × (α/αmax) + ωl × (β/βmax)} / (ωz max + ωl max)
wherein α is the shift value of the user's cone cells, αmax is the maximum shift value of the user's cone cells, β is the intensity decrease value of the user's cone cells, βmax is the maximum intensity decrease value of the user's cone cells, ωz is a weighting value for the shift value, ωl is the weighting value for the intensity decrease value, ωz max is the maximum value of ωz, and ωl max is the maximum value of ωl.
27. The system according to claim 25, wherein the numerical description of the color vision deficiency degree is determined by the following equation:
{ (E − Emin)/(Emax − Emin),   Emin < E < Emax
  1.0,                        E ≥ Emax
wherein E is the total error score of the user, Emin is the minimum threshold value where the user is determined as an anomalous trichromat, and Emax is the maximum threshold value where the user is determined as an anomalous trichromat.
28. The system according to claim 25, wherein the numerical description for the color vision deficiency degree is determined by the following equation:
{ Rd/Rth,   Rd ≤ Rth
  1.0,      Rd > Rth,
Here, Rd = Rmax − Rmin, and
Rth = { Rnormal min,        for green color vision deficiency
        73 − Rnormal max,   for red color vision deficiency
wherein Rd is the range of a red/green ratio section in a mixture field that is recognized by the user as identical to the test field, Rnormal min and Rnormal max are the minimum and maximum values of the range of the red/green ratio section of a normal human, and Rth is the minimum threshold value of Rd where the user is determined as an anomalous trichromat.
29. The system according to claim 25, wherein the processing section executes adaptive transforming for dichromat on the received visual contents in accordance with the color vision deficiency type if the user is determined to be a dichromat from the information on the color vision deficiency degree, and executes adaptation for anomalous trichromat on the received visual contents in accordance with the color vision deficiency type if the user is determined to be an anomalous trichromat from the information on the color vision deficiency degree.
30. A system according to claim 29, wherein the adaptive transforming for dichromat is executed by differentiating a deficiency region, which is difficult for the user to detect, from the visual contents in accordance with the color vision deficiency type; and transforming at least one of hue, saturation and intensity of pixels in the deficiency region.
31. A system according to claim 30, wherein the differentiating of the deficiency region is executed by transforming the visual contents from RGB color space to CMYK color space, and discriminating pixels corresponding to a predetermined region in the CMYK color space in accordance with the color vision deficiency type.
32. A system according to claim 30, wherein the differentiating of the deficiency region is executed by transforming the visual contents from RGB color space to LMS color space, and measuring the degree of decrease of the respective LMS values during the process of transforming the transformed visual contents with an LMS response function of the user, in which the response function is determined in accordance with the color vision deficiency type and the color vision deficiency degree.
33. A system according to claim 29, wherein the adaptive transforming for dichromat is executed by determining the color vision deficiency region and the color vision deficiency degree of the user at the same time by using a CMY ratio of the visual contents.
34. A system according to claim 25, wherein the adaptive transforming for anomalous trichromat is executed by transforming the visual contents from RGB color space to LMS color space, transforming the visual contents in LMS color space by using the inverse function of an LMS response function of the user, in which the LMS response function is determined in accordance with the color vision deficiency type and the color vision deficiency degree, and transforming again the transformed visual contents from LMS color space to RGB color space.
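The three numerical descriptions of the color vision deficiency degree in claims 17 to 19 (repeated in claims 26 to 28) can be sketched as small functions; the 0.0 branch for E ≤ Emin is an assumption, since the claims specify only the other two branches:

```python
def degree_from_cones(alpha, alpha_max, beta, beta_max,
                      w_z, w_l, w_z_max, w_l_max):
    """Claim 17: weighted combination of cone-response shift (alpha)
    and intensity decrease (beta), each normalized by its maximum."""
    return (w_z * (alpha / alpha_max)
            + w_l * (beta / beta_max)) / (w_z_max + w_l_max)

def degree_from_fm100(E, E_min, E_max):
    """Claim 18: normalized Farnsworth-Munsell total error score."""
    if E >= E_max:
        return 1.0
    if E <= E_min:
        return 0.0        # assumption: below-threshold scores map to 0.0
    return (E - E_min) / (E_max - E_min)

def degree_from_anomaloscope(R_d, R_th):
    """Claim 19: width R_d of the red/green matching range from
    anomaloscope testing, saturating at 1.0 beyond the threshold R_th."""
    return 1.0 if R_d > R_th else R_d / R_th
```

All three return a degree in 0.0 to 1.0, with 1.0 denoting a dichromat, consistent with claim 13.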
US10/512,730 2002-04-26 2003-04-14 Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics Abandoned US20050105796A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020020023130A KR20030084419A (en) 2002-04-26 2002-04-26 Input description scheme and the method transforming feature of contents for adapting visual characteristic of user
KR1020020041294A KR20040008004A (en) 2002-07-15 2002-07-15 Adaptation method and description scheme of input information for accessibility according to user's color vision variation
KR20020062260 2002-10-12
KR20030004003 2003-01-21
PCT/KR2003/000750 WO2003091946A1 (en) 2002-04-26 2003-04-14 Method and system for transforming adaptively visual contents according to terminal user’s color vision characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/189,708 US7737992B2 (en) 2002-04-26 2008-08-11 Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/189,708 Continuation US7737992B2 (en) 2002-04-26 2008-08-11 Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics

Publications (1)

Publication Number Publication Date
US20050105796A1 true US20050105796A1 (en) 2005-05-19

Family

ID=29273744

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/512,730 Abandoned US20050105796A1 (en) 2002-04-26 2003-04-14 Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics
US12/189,708 Active US7737992B2 (en) 2002-04-26 2008-08-11 Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/189,708 Active US7737992B2 (en) 2002-04-26 2008-08-11 Method and system for transforming adaptively visual contents according to terminal user's color vision characteristics

Country Status (7)

Country Link
US (2) US20050105796A1 (en)
EP (1) EP1563453A4 (en)
JP (1) JP4723856B2 (en)
KR (1) KR100712644B1 (en)
CN (1) CN1323374C (en)
AU (1) AU2003221142A1 (en)
WO (1) WO2003091946A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080318634A1 (en) * 2007-06-22 2008-12-25 Hong Fu Jin Precision Industry (Shenzhen)Co., Ltd Wireless communication apparatus and method for replacing background color of display for same
US20100033679A1 (en) * 2008-08-08 2010-02-11 Taku Kodama Image processing apparatus and method of controlling image processing apparatus
US20120293530A1 (en) * 2011-05-20 2012-11-22 Sony Corporation Image processing method and an image processing device
US20130050234A1 (en) * 2011-08-31 2013-02-28 Microsoft Corporation Image rendering filter creation
US20130222414A1 (en) * 2010-10-12 2013-08-29 Panasonic Corporation Color signal processing device
US20130226008A1 (en) * 2012-02-21 2013-08-29 Massachusetts Eye & Ear Infirmary Calculating Conjunctival Redness
CN103310736A (en) * 2012-03-16 2013-09-18 联想(北京)有限公司 Display method and device
US20140015850A1 (en) * 2012-07-11 2014-01-16 Korea University Research And Business Foundation Color transformation method and apparatus for person with color vision defect
CN104536709A (en) * 2014-12-12 2015-04-22 深圳市金立通信设备有限公司 Display control method
CN104536710A (en) * 2014-12-12 2015-04-22 深圳市金立通信设备有限公司 Terminal
US20160071470A1 (en) * 2014-09-05 2016-03-10 Samsung Display Co., Ltd. Display apparatus, display control method, and display method
US20160071468A1 * 2014-09-04 2016-03-10 Samsung Display Co., Ltd. Display device
US20160365017A1 (en) * 2015-06-12 2016-12-15 Samsung Life Public Welfare Foundation Display apparatus and method of driving the same
CN106598533A (en) * 2016-12-09 2017-04-26 北京奇虎科技有限公司 Display method, device and mobile terminal
US9710931B2 (en) 2012-09-19 2017-07-18 Kagoshima University Image processing system with hue rotation processing
US9916786B2 (en) 2015-04-09 2018-03-13 Boe Technology Group Co., Ltd. Display driving method, driving circuit and display device
US10004395B2 (en) 2014-05-02 2018-06-26 Massachusetts Eye And Ear Infirmary Grading corneal fluorescein staining

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101661710A (en) * 2002-04-26 2010-03-03 韩国电子通信研究院 Visual data adjusting device and method
US20090135266A1 (en) * 2004-11-19 2009-05-28 Koninklijke Philips Electronics, N.V. System for scribing a visible label
JP3867988B2 (en) * 2004-11-26 2007-01-17 株式会社両備システムソリューションズ Pixel processing device
US8429549B2 (en) * 2009-06-30 2013-04-23 Sap Ag System and method for automated color scheme transformation
US20120169756A1 (en) * 2009-09-09 2012-07-05 Kagoshima University Image processing device, image processing method, and program
JP5707099B2 (en) * 2010-11-05 2015-04-22 浅田 一憲 Color vision assist device, color vision aid METHOD AND PROGRAM
JP2012252382A (en) * 2011-05-31 2012-12-20 Fujifilm Corp Imaged content correction device, method, and program, and imaged content distribution system
US8792138B2 (en) * 2012-02-08 2014-07-29 Lexmark International, Inc. System and methods for automatic color deficient vision correction of an image
US20140066196A1 (en) * 2012-08-30 2014-03-06 Colin William Crenshaw Realtime color vision deficiency correction
JP5924289B2 (en) * 2013-02-20 2016-05-25 富士ゼロックス株式会社 The color conversion coefficient generation device, a color processing apparatus and program
US9370299B2 (en) 2013-05-16 2016-06-21 Successfactors, Inc. Display accessibility for color vision impairment
KR20150088587A (en) 2014-01-24 2015-08-03 삼성전자주식회사 Method and apparatus for compensating color of electronic devices
CN106550280A (en) * 2015-09-17 2017-03-29 中兴通讯股份有限公司 Image display method and device
KR101716304B1 (en) * 2015-12-18 2017-03-14 주식회사 한글과컴퓨터 Color Sensitivity compensation system and compensation method using the same
CN106504713A (en) * 2016-11-21 2017-03-15 四川长虹电器股份有限公司 Television set color sense obstacle mode adjustment system and method
US20180182161A1 (en) * 2016-12-27 2018-06-28 Samsung Electronics Co., Ltd Method and apparatus for modifying display settings in virtual/augmented reality
CN107948627A (en) * 2017-11-27 2018-04-20 腾讯科技(深圳)有限公司 Video playing method and device, computing device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4765731A (en) * 1986-03-21 1988-08-23 John J. Ferlazzo Solid state color vision testing system
US6229916B1 (en) * 1997-09-30 2001-05-08 Fuji Photo Film Co., Ltd. Color transformation look-up table
US6704024B2 (en) * 2000-08-07 2004-03-09 Zframe, Inc. Visual content browsing using rasterized representations
US7124375B1 (en) * 1999-05-11 2006-10-17 California Institute Of Technology Color monitoring and analysis for color vision deficient individuals

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5341917B2 (en) * 1974-03-12 1978-11-07
JPH0682385B2 (en) * 1987-05-15 1994-10-19 日本放送協会 Color vision conversion device
DE3854753D1 (en) * 1987-08-03 1996-01-18 American Film Tech Means and method for color image enhancement.
JPH0416171B2 (en) * 1989-08-15 1992-03-23 Nat Science Council
JP3332492B2 (en) * 1993-08-30 2002-10-07 三洋電機株式会社 Visual simulation apparatus
US5589898A (en) 1995-06-07 1996-12-31 Reuters Limited Method and system for color vision deficiency correction
NL1007531C2 (en) * 1997-11-12 1999-06-02 Tno Method and apparatus for displaying a color image.
JPH11175050A (en) * 1997-12-16 1999-07-02 Hitachi Information Technology Co Ltd Display system for person having difficulty in color sensation
JP4147655B2 (en) * 1998-12-07 2008-09-10 ソニー株式会社 Image processing apparatus and image processing method
JP2000181426A (en) * 1998-12-16 2000-06-30 Nikon Corp Image information presenting device
JP2000306074A (en) * 1999-04-20 2000-11-02 Ntt Data Corp Method and system for generating color pallet for person having difficulty in the color sense
JP2001154655A (en) * 1999-11-29 2001-06-08 Ibm Japan Ltd Color converting system
JP2001293926A (en) * 2000-04-17 2001-10-23 Seiko Epson Corp Printer, printer host, printer system having the same and memory medium containing operating program for printer host
JP4310030B2 (en) * 2000-06-30 2009-08-05 ヤフー株式会社 Web document customize server to deliver a Web document that has been designated by the requesting person to the client's computer to fix easy to see in accordance with the wishes of the client's
JP3761773B2 (en) * 2000-08-11 2006-03-29 三菱電機株式会社 Color display system and a color display method
US6309117B1 (en) * 2000-08-17 2001-10-30 Nortel Networks Limited System and method for adjustment of color presentation in networked media
JP2002063277A (en) * 2000-08-18 2002-02-28 Prop Station:Kk System and method for information provision
JP2003223635A (en) * 2002-01-29 2003-08-08 Nippon Hoso Kyokai <Nhk> Video display device and photographing device


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080318634A1 (en) * 2007-06-22 2008-12-25 Hong Fu Jin Precision Industry (Shenzhen)Co., Ltd Wireless communication apparatus and method for replacing background color of display for same
US20100033679A1 (en) * 2008-08-08 2010-02-11 Taku Kodama Image processing apparatus and method of controlling image processing apparatus
US8007108B2 (en) * 2008-08-08 2011-08-30 Ricoh Company Limited Image processing apparatus and method of controlling image processing apparatus
US20130222414A1 (en) * 2010-10-12 2013-08-29 Panasonic Corporation Color signal processing device
US9430986B2 (en) * 2010-10-12 2016-08-30 Godo Kaisha Ip Bridge 1 Color signal processing device
US20120293530A1 (en) * 2011-05-20 2012-11-22 Sony Corporation Image processing method and an image processing device
US8842128B2 (en) * 2011-05-20 2014-09-23 Sony Corporation Image processing method and an image processing device for changing a saturation of image data
US20130050234A1 (en) * 2011-08-31 2013-02-28 Microsoft Corporation Image rendering filter creation
US9520101B2 (en) * 2011-08-31 2016-12-13 Microsoft Technology Licensing, Llc Image rendering filter creation
US9854970B2 (en) * 2012-02-21 2018-01-02 Massachusetts Eye & Ear Infirmary Calculating conjunctival redness
US20130226008A1 (en) * 2012-02-21 2013-08-29 Massachusetts Eye & Ear Infirmary Calculating Conjunctival Redness
CN103310736A (en) * 2012-03-16 2013-09-18 联想(北京)有限公司 Display method and device
US20140015850A1 (en) * 2012-07-11 2014-01-16 Korea University Research And Business Foundation Color transformation method and apparatus for person with color vision defect
US9710931B2 (en) 2012-09-19 2017-07-18 Kagoshima University Image processing system with hue rotation processing
US10004395B2 (en) 2014-05-02 2018-06-26 Massachusetts Eye And Ear Infirmary Grading corneal fluorescein staining
US20160071468A1 (en) * 2014-09-04 2016-03-10 Samsung Display Co., Ltd. Display device
US9779647B2 (en) * 2014-09-04 2017-10-03 Samsung Display Co., Ltd. Display device
US20160071470A1 (en) * 2014-09-05 2016-03-10 Samsung Display Co., Ltd. Display apparatus, display control method, and display method
US10078988B2 (en) * 2014-09-05 2018-09-18 Samsung Display Co., Ltd. Display apparatus, display control method, and display method
CN104536710A (en) * 2014-12-12 2015-04-22 深圳市金立通信设备有限公司 Terminal
CN104536709A (en) * 2014-12-12 2015-04-22 深圳市金立通信设备有限公司 Display control method
US9916786B2 (en) 2015-04-09 2018-03-13 Boe Technology Group Co., Ltd. Display driving method, driving circuit and display device
US20160365017A1 (en) * 2015-06-12 2016-12-15 Samsung Life Public Welfare Foundation Display apparatus and method of driving the same
US9799254B2 (en) * 2015-06-12 2017-10-24 Samsung Display Co., Ltd. Display apparatus and method of driving the same
CN106598533A (en) * 2016-12-09 2017-04-26 北京奇虎科技有限公司 Display method, device and mobile terminal

Also Published As

Publication number Publication date
AU2003221142A1 (en) 2003-11-10
JP2005524154A (en) 2005-08-11
CN1662932A (en) 2005-08-31
KR100712644B1 (en) 2007-05-02
US20090066720A1 (en) 2009-03-12
US7737992B2 (en) 2010-06-15
WO2003091946A1 (en) 2003-11-06
AU2003221142A8 (en) 2003-11-10
EP1563453A4 (en) 2009-04-29
CN1323374C (en) 2007-06-27
JP4723856B2 (en) 2011-07-13
EP1563453A1 (en) 2005-08-17
KR20050023244A (en) 2005-03-09

Similar Documents

Publication Publication Date Title
Luo et al. Quantifying colour appearance. Part I. LUTCHI colour appearance data
Rasche et al. Detail preserving reproduction of color images for monochromats and dichromats
Li et al. CMC 2000 chromatic adaptation transform: CMCCAT2000
Brown The influence of luminance level on visual sensitivity to color differences
Delahunt et al. Does human color constancy incorporate the statistical regularity of natural daylight?
Zhang et al. Color image fidelity metrics evaluated using image distortion maps
US6744544B1 (en) Method and apparatus for forming and correcting color image
US20110298837A1 (en) Methods and apparatus for calibrating a color display
EP0946050B1 (en) Method and apparatus for converting color data
DE69915225T2 (en) An image processing apparatus and image processing method
Sankeralli et al. Postreceptoral chromatic detection mechanisms revealed by noise masking in three-dimensional cone contrast space
US7314283B2 (en) Color correction method and device for projector
Brainard et al. Bayesian model of human color constancy
US6567543B1 (en) Image processing apparatus, image processing method, storage medium for storing image processing method, and environment light measurement apparatus
CA2322219C (en) Method and arrangement for objective assessment of video quality
US8086030B2 (en) Method and system for visually presenting a high dynamic range image
KR100649338B1 (en) Color quantization and similarity measure for content based image retrieval
US6987567B2 (en) Color evaluation apparatus and method
EP0540313B1 (en) Color adjustment for smoothing a boundary between color images
Wang et al. An optimized tongue image color correction scheme
US8760578B2 (en) Quality assessment of high dynamic range, visual dynamic range and wide color gamut image and video
US20110305386A1 (en) Color Indication Tool for Colorblindness
Busin et al. Color spaces and image segmentation
US6888648B2 (en) Method and apparatus for extracting color signal values, method and apparatus for creating a color transformation table, method and apparatus for checking gradation maintainability, and record medium in which programs therefor are recorded
Jefferson et al. Accommodating color blind computer users

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION