CN1945627A - Method for correcting digital tongue picture colour cast - Google Patents

Method for correcting digital tongue picture colour cast

Info

Publication number
CN1945627A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN 200610113870
Other languages
Chinese (zh)
Other versions
CN100412906C (en)
Inventor
白净 (Bai Jing)
张永红 (Zhang Yonghong)
吴佳 (Wu Jia)
崔珊珊 (Cui Shanshan)
孙晓静 (Sun Xiaojing)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CNB2006101138705A
Publication of CN1945627A
Application granted
Publication of CN100412906C
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Processing Of Color Television Signals (AREA)

Abstract

This invention relates to a method for digitally correcting the color cast of tongue pictures. Photographs of a Macbeth color checker card taken under forced flash are first white-balance corrected and converted to the hue, saturation, value (HSV) space, where adjustment parameters are obtained by comparison with the standard values of the color card. A tongue picture is then taken with a digital camera, white-balance corrected, and converted to HSV space, where it is adjusted with the parameters obtained above and converted back to the red, green, blue (RGB) space. Finally, the RGB image is adjusted with a gamma value for curve correction, which is obtained by least-squares fitting of the standard lightness values of the Macbeth card against the average lightness values of the photographs.

Description

Digital tongue picture color cast correction method
Technical Field
The present invention belongs to the field of digital methods for correcting the color cast of tongue pictures.
Background
Since the mid-to-late 1980s, traditional Chinese medicine practitioners and digital image researchers have attempted to study tongue diagnosis objectively using computer technology. Research has focused primarily on calibrating the color of tongue photographs, on their storage and output, and on processing them with modern image analysis techniques. Among these, the color of the tongue is the most important information in tongue diagnosis. Shooting the tongue in the forced flash mode of a digital camera shields the image from interference by natural light, but the strong flash light striking the tongue surface shifts the overall color of the image toward a warm red-yellow tone, so this deviation needs to be corrected.
The method uses a Macbeth 24-patch standard color checker card, which is commonly used for correcting the color of digital photographs. The card is produced by a special process so that its surface has a reflection spectrum similar to that of skin. By comparing the colors of the Macbeth card in a photograph with the colors of the real card, a series of calibration parameters is designed for calibrating the photographed tongue picture.
Disclosure of Invention
The invention aims to correct the color cast caused by a strong flash lamp when a tongue picture is taken, and provides a general computer-based method for correcting tongue picture colors.
The invention is characterized in that:
Step (1): under forced flash, N photographs of the Macbeth color card are taken with a digital camera, where N = 10 to 15, and the photographs are input into a computer;
Step (2): the computer corrects the white balance of the N Macbeth color card photographs:
Let the N Macbeth color card photographs be denoted f_1(x,y), f_2(x,y), ..., f_N(x,y), where each image is represented as f_i(x,y) = [R_i(x,y), G_i(x,y), B_i(x,y)], i = 1~N, and R_i(x,y), G_i(x,y), B_i(x,y) are the red, green and blue values at sampling point (x,y); the image corresponding to each photograph after white balance correction is expressed as:
f_i' = [R_i'(x,y), G_i'(x,y), B_i'(x,y)],
R_i'(x,y) = \frac{255}{Max(R_i(x,y))} \cdot R_i(x,y)
G_i'(x,y) = \frac{255}{Max(G_i(x,y))} \cdot G_i(x,y)
B_i'(x,y) = \frac{255}{Max(B_i(x,y))} \cdot B_i(x,y)
where Max(R_i(x,y)), Max(G_i(x,y)) and Max(B_i(x,y)) are the maximum values of the red, green and blue channels, respectively, in the image f_i(x,y);
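As an illustrative sketch only (not part of the patent text), the white balance correction of step (2) could be written in Python/NumPy roughly as follows; the function name white_balance and the use of NumPy arrays are assumptions made for the sketch.

```python
import numpy as np

def white_balance(img):
    """Scale each channel so its maximum maps to 255, as in step (2).

    img: H x W x 3 array with red, green, blue channels in [0, 255].
    Returns the white-balance-corrected image f_i'(x, y).
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):                      # red, green, blue channels
        channel_max = img[..., c].max()     # Max(R_i), Max(G_i), Max(B_i)
        out[..., c] = 255.0 / channel_max * img[..., c]
    return out
```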
Step (3): the f_i'(x,y) expressed in RGB space and the Macbeth color card colors are converted into the HSV space representation, where H, S and V denote hue, saturation and value (lightness), respectively:
Step (3.1): convert f_i'(x,y) = [R_i'(x,y), G_i'(x,y), B_i'(x,y)] to HSV space:
H_i'(x,y) = \cos^{-1}\left[ \frac{(R_i'(x,y) - G_i'(x,y)) + (R_i'(x,y) - B_i'(x,y))}{2\sqrt{(R_i'(x,y) - G_i'(x,y))^2 + (R_i'(x,y) - B_i'(x,y)) \cdot (G_i'(x,y) - B_i'(x,y))}} \right]
S_i'(x,y) = \frac{Max(R_i'(x,y), G_i'(x,y), B_i'(x,y)) - Min(R_i'(x,y), G_i'(x,y), B_i'(x,y))}{Max(R_i'(x,y), G_i'(x,y), B_i'(x,y))}
V_i'(x,y) = Max(R_i'(x,y), G_i'(x,y), B_i'(x,y))
where H_i'(x,y), S_i'(x,y), V_i'(x,y) are the values of the hue, saturation and value channels of the i-th image at sampling point (x,y) after conversion, and R_i'(x,y), G_i'(x,y), B_i'(x,y) are the values of the red, green and blue channels of the i-th image at sampling point (x,y) after white balance adjustment; the HSV representation of the white-balance-corrected image is:
f_i'(x,y) = [H_i'(x,y), S_i'(x,y), V_i'(x,y)]
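The following is an illustrative sketch only (not part of the patent text) of the conversion written directly from the formulas of step (3.1); the function name rgb_to_hsv_patent, the NumPy dependency, and the small epsilon guards against division by zero are assumptions added for the sketch.

```python
import numpy as np

def rgb_to_hsv_patent(rgb):
    """Convert a white-balanced RGB image to H, S, V using the step (3.1) formulas."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = (r - g) + (r - b)
    den = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12  # guard against 0/0 on gray pixels (added, not in the patent)
    h = np.arccos(np.clip(num / den, -1.0, 1.0))                   # hue in [0, pi]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    s = (mx - mn) / (mx + 1e-12)                                   # saturation
    v = mx                                                         # value (lightness)
    return h, s, v
```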
Step (3.2): convert the standard values M_j of the Macbeth color card into HSV (hue, saturation, value) space,
M_j = [MR_j, MG_j, MB_j], j = 1~24
MH_j = \cos^{-1}\left[ \frac{(MR_j - MG_j) + (MR_j - MB_j)}{2\sqrt{(MR_j - MG_j)^2 + (MR_j - MB_j) \cdot (MG_j - MB_j)}} \right]
MS_j = \frac{Max(MR_j, MG_j, MB_j) - Min(MR_j, MG_j, MB_j)}{Max(MR_j, MG_j, MB_j)}
MV_j = Max(MR_j, MG_j, MB_j)
where MR_j, MG_j, MB_j are the values of the red, green and blue channels of the j-th color patch of the color card,
and MH_j, MS_j, MV_j are the values of the hue, saturation and value channels of the j-th patch after conversion;
this yields the HSV space expression of the color card standard values: M_j = [MH_j, MS_j, MV_j], j = 1~24;
Step (4): obtain each adjustment parameter from the values of the standard Macbeth color card and of the color card photographs, as follows:
Step (4.1): calculate the average value of the hue (H) channel of the Macbeth color card photographs after white balance adjustment, and compare it with the average hue of the standard color card to obtain the hue adjustment parameter h:
the average hue value of the N Macbeth photographs is Hav = \frac{1}{N}\sum_{i=1}^{N} Hav_i, where Hav_i is the average value of the hue (H) channel of the i-th image after white balance adjustment,
the standard hue average of the Macbeth color card is Hst = \frac{1}{24}\sum_{j=1}^{24} MH_j,
and the hue adjustment parameter is h = Hav - Hst;
Step (4.2): calculate the average value of the value (V) channel of the Macbeth color card photographs after white balance adjustment, and compare it with the average lightness of the standard color card to obtain the lightness adjustment parameter v:
the lightness average of the N Macbeth photographs is Vav = \frac{1}{N}\sum_{i=1}^{N} Vav_i, where Vav_i is the average value of the value (V) channel of the i-th image after white balance adjustment,
the standard lightness average of the Macbeth color card is Vst = \frac{1}{24}\sum_{j=1}^{24} MV_j,
and the lightness adjustment parameter is v = Vav - Vst;
Step (4.3): calculate the average value of the saturation (S) channel of the Macbeth color card photographs after white balance adjustment, compare it with the saturation of the color card standard values, and obtain the saturation adjustment parameter s by fitting:
over the N Macbeth color card photographs, the average saturation of color patch j is Sav_j = \frac{1}{N}\sum_{i=1}^{N} fS_{ij}, where fS_{ij} is the saturation of patch j in the i-th photograph, i = 1~N, j = 1~24,
the standard saturation of each patch in the color card is MS_1, MS_2, ..., MS_24,
and the saturation adjustment parameter s is: s = \frac{\sum_{j=1}^{24} Sav_j \cdot MS_j}{\sum_{j=1}^{24} Sav_j^2};
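As an illustrative sketch only (not part of the patent text), the three adjustment parameters of step (4) could be computed as follows; the function name and the use of NumPy arrays are assumptions, and the inputs are assumed to have been measured as described above.

```python
import numpy as np

def fit_adjustment_parameters(Hav, Vav, Sav, MH, MS, MV):
    """Compute the hue, saturation and lightness adjustment parameters of step (4).

    Hav, Vav   : length-N arrays of per-photo mean hue / value after white balance.
    Sav        : length-24 array of per-patch mean saturation over the N photos.
    MH, MS, MV : length-24 arrays of the color card standard hue / saturation / value.
    """
    h = np.mean(Hav) - np.mean(MH)            # step (4.1): h = Hav - Hst
    v = np.mean(Vav) - np.mean(MV)            # step (4.2): v = Vav - Vst
    s = np.sum(Sav * MS) / np.sum(Sav ** 2)   # step (4.3): least-squares scale factor
    return h, s, v
```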
Step (5): calculate the curve correction value gamma:
over the N Macbeth color card photographs, the average lightness of color patch j is Vav_j = \frac{1}{N}\sum_{i=1}^{N} fV_{ij}, where fV_{ij} is the lightness of patch j in the i-th photograph, i = 1~N, j = 1~24,
the standard lightness of each patch in the color card is MV_1, MV_2, ..., MV_24,
and the curve correction value gamma is: gamma = \frac{\sum_{j=1}^{24} \ln(Vav_j) \cdot \ln(MV_j)}{\sum_{j=1}^{24} \ln(Vav_j)^2};
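A minimal sketch (not from the patent) of the gamma fit of step (5), assuming the per-patch mean lightness values Vav_j and the standard lightness values MV_j are available as arrays:

```python
import numpy as np

def fit_gamma(Vav_patch, MV):
    """Fit MV_j = Vav_j ** gamma by least squares in the log domain, as in step (5)."""
    log_v = np.log(Vav_patch)                                  # ln(Vav_j)
    return np.sum(log_v * np.log(MV)) / np.sum(log_v ** 2)     # gamma
```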
Step (6): take a tongue picture with the digital camera to obtain an image It(x,y) = [IR(x,y), IG(x,y), IB(x,y)], where IR(x,y), IG(x,y) and IB(x,y) are the red, green and blue values at the image sampling point (x,y); Step (7): correct the white in the tongue picture It(x,y) to pure white, obtaining a corrected tongue picture It'(x,y):
It′(x,y)=[IR′(x,y),IG′(x,y),IB′(x,y)]
IR'(x,y) = \frac{255}{Max(IR(x,y))} \cdot IR(x,y)
IG'(x,y) = \frac{255}{Max(IG(x,y))} \cdot IG(x,y)
IB'(x,y) = \frac{255}{Max(IB(x,y))} \cdot IB(x,y)
where Max(IR(x,y)), Max(IG(x,y)) and Max(IB(x,y)) are the maximum values of the red, green and blue channels in the image It(x,y);
Step (8): convert the tongue picture represented in RGB space to the hue (H), saturation (S), value (V) space representation:
IH'(x,y) = \cos^{-1}\left[ \frac{(IR'(x,y) - IG'(x,y)) + (IR'(x,y) - IB'(x,y))}{2\sqrt{(IR'(x,y) - IG'(x,y))^2 + (IR'(x,y) - IB'(x,y)) \cdot (IG'(x,y) - IB'(x,y))}} \right]
IS'(x,y) = \frac{Max(IR'(x,y), IG'(x,y), IB'(x,y)) - Min(IR'(x,y), IG'(x,y), IB'(x,y))}{Max(IR'(x,y), IG'(x,y), IB'(x,y))}
IV'(x,y) = Max(IR'(x,y), IG'(x,y), IB'(x,y));
Step (9): judge whether the tongue picture is underexposed after white balance correction, and perform lightness equalization on underexposed pictures:
Step (9.1): compute the lightness histogram of the photograph:
the histogram is HistV(t) = Num(IV'(x,y) = t), t = 0~255, where the t-th component of the histogram is the number of points whose value is t in the lightness map IV'(x,y);
Step (9.2): find the peak PeakV of the lightness histogram:
PeakV=Max(HistV(t)),t=0~255;
Step (9.3): judge whether PeakV is less than ThresholdV,
where ThresholdV is a set value of 100; if PeakV ≥ ThresholdV, IV'(x,y) is not processed,
otherwise, go to the next step;
Step (9.4): perform histogram equalization on the image IV'(x,y) to obtain IV'(x,y) = \sum_{k=1}^{IV'(x,y)} HistV(k), where k is a counter over lightness values;
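The following sketch (not part of the patent text) illustrates steps (9.1) to (9.4); the function name is an assumption, and the final rescaling of the cumulative sum back to the 0~255 range is an added assumption for 8-bit display, since the patent writes the unscaled cumulative sum.

```python
import numpy as np

def equalize_if_underexposed(V, threshold_v=100):
    """Steps (9.1)-(9.4): equalize the lightness channel when its histogram peak
    is below ThresholdV (set to 100 in the patent)."""
    V = np.round(V).astype(int)
    hist = np.bincount(V.ravel(), minlength=256)   # HistV(t), t = 0..255
    if hist.max() >= threshold_v:                  # PeakV >= ThresholdV: leave unchanged
        return V
    cdf = np.cumsum(hist)                          # sum of HistV(k) for k <= t
    mapped = cdf[V]                                # cumulative count up to each pixel's value
    # Rescaling to 0..255 is an added assumption, not stated in the patent.
    return np.round(mapped * 255.0 / V.size).astype(int)
```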
Step (10): correct the image in HSV space using the adjustment parameters h, s and v obtained in step (4):
IHd(x,y)=IH′(x,y)-h
ISd(x,y)=IS′(x,y)*s;
IVd(x,y)=IV′(x,y)-v
Step (11): convert the image from HSV space back to RGB space:
Step (11.1): set four temporary variables f, aa, bb and cc to aid the transformation:
where f = IH_d - floor(IH_d); the function floor(IH_d) takes the largest integer not exceeding IH_d, so f is the fractional part of IH_d,
aa=IVd*(1-ISd)
bb=IVd*(1-(ISd*f))
cc=IVd*(1-(ISd*(1-f)))
Step (11.2): determine the values of IR_d, IG_d and IB_d according to the range of IH_d:
if IH_d \in [0, \pi/6), then IR_d = IV_d, IG_d = cc, IB_d = aa
if IH_d \in [\pi/6, \pi/3), then IR_d = bb, IG_d = IV_d, IB_d = aa
if IH_d \in [\pi/3, \pi/2), then IR_d = aa, IG_d = IV_d, IB_d = cc
if IH_d \in [\pi/2, 2\pi/3), then IR_d = aa, IG_d = bb, IB_d = IV_d
if IH_d \in [2\pi/3, 5\pi/6), then IR_d = cc, IG_d = aa, IB_d = IV_d
if IH_d \in [5\pi/6, 2\pi), then IR_d = IV_d, IG_d = aa, IB_d = bb
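A literal, single-pixel transcription of steps (11.1) and (11.2) is sketched below for illustration only; the function name is an assumption.

```python
import math

def hsv_to_rgb_patent(IHd, ISd, IVd):
    """Literal transcription of steps (11.1)-(11.2) for a single pixel."""
    f = IHd - math.floor(IHd)          # fractional part of IH_d
    aa = IVd * (1 - ISd)
    bb = IVd * (1 - ISd * f)
    cc = IVd * (1 - ISd * (1 - f))
    pi = math.pi
    if 0 <= IHd < pi / 6:
        return IVd, cc, aa
    elif IHd < pi / 3:
        return bb, IVd, aa
    elif IHd < pi / 2:
        return aa, IVd, cc
    elif IHd < 2 * pi / 3:
        return aa, bb, IVd
    elif IHd < 5 * pi / 6:
        return cc, aa, IVd
    else:                              # [5*pi/6, 2*pi)
        return IVd, aa, bb
```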
Step (12): adjust the image with the curve correction value gamma:
IR_d'(x,y) = IR_d(x,y)^{gamma}
IG_d'(x,y) = IG_d(x,y)^{gamma}
IB_d'(x,y) = IB_d(x,y)^{gamma}
The resulting adjusted image It_d(x,y) = [IR_d'(x,y), IG_d'(x,y), IB_d'(x,y)] is the adjusted tongue picture.
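Purely as an illustration of how steps (10) and (12) fit together (not part of the patent text), the sketch below shifts and scales the HSV channels, converts back to RGB by reusing the hsv_to_rgb_patent sketch above, and applies the gamma curve; normalizing to [0, 1] before exponentiation and rescaling to 0~255 afterwards is an assumption made for 8-bit output.

```python
import numpy as np

def apply_correction(IH, IS, IV, h, s, v, gamma):
    """Steps (10) and (12): adjust the HSV channels, convert back to RGB
    (using hsv_to_rgb_patent above), then apply the gamma curve per channel."""
    IHd = IH - h                       # step (10): hue shift
    ISd = IS * s                       #            saturation scale
    IVd = IV - v                       #            lightness shift
    rgb = np.array([hsv_to_rgb_patent(hh, ss, vv)
                    for hh, ss, vv in zip(IHd.ravel(), ISd.ravel(), IVd.ravel())])
    rgb = rgb.reshape(IH.shape + (3,))
    # Step (12): gamma curve; normalization to [0, 1] is an added assumption.
    rgb01 = np.clip(rgb, 0, 255) / 255.0
    return np.round((rgb01 ** gamma) * 255.0).astype(np.uint8)
```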
In practical use, visual assessment by physicians showed that the adjusted image reflects the real color of the tongue more accurately than the unadjusted image, and the adjusted images were accepted by the physicians.
Drawings
FIG. 1: an algorithm flow chart;
FIG. 2: a photograph of a real Macbeth color card;
FIG. 3: a schematic diagram of the Macbeth color card patch ordering;
FIG. 4: a table of the Macbeth color card standard values;
FIG. 5: histograms of a photograph: (a) the lightness histogram of an underexposed photograph, (b) the lightness histogram after histogram equalization;
FIG. 6: the captured tongue picture;
FIG. 7: the tongue picture after white balance adjustment and lightness equalization;
FIG. 8: the final adjusted tongue picture.
Detailed Description
The hardware for the method consists of a digital camera and a computer. The adjustment algorithm is implemented in Matlab. The input tongue picture is taken by a traditional Chinese medicine doctor with the digital camera; the patient extends the tongue during shooting so that the whole tongue can be photographed conveniently.
The Macbeth color card consists of 24 color patches arranged in four rows and six columns, ordered from left to right and top to bottom, as shown in FIG. 2. They are denoted sequentially as M_j = [MR_j, MG_j, MB_j], j = 1~24; the ordering is shown schematically in FIG. 3. Here MR_j, MG_j and MB_j are the values of the red, green and blue channels of the j-th patch, and the standard values of the 24 patches of the card are listed in FIG. 4.
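As an illustration only (not described in this form in the patent), the 24 patch colors could be sampled from a tightly cropped photograph of the card with a simple grid; the function name, the tight-crop assumption and the margin parameter are all assumptions, and the standard values of FIG. 4 are not reproduced here.

```python
import numpy as np

def patch_means(card_img, rows=4, cols=6, margin=0.2):
    """Return the mean [R, G, B] of the 24 patches of a tightly cropped card image,
    ordered left to right, top to bottom as in FIG. 3.

    margin trims the border of each grid cell so only the patch interior is averaged.
    """
    H, W, _ = card_img.shape
    means = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = int((r + margin) * H / rows), int((r + 1 - margin) * H / rows)
            x0, x1 = int((c + margin) * W / cols), int((c + 1 - margin) * W / cols)
            means.append(card_img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
    return np.array(means)              # shape (24, 3): per-patch estimates from the photo
```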
A Sony Cybershot DSC-W1 digital camera is used to take the pictures indoors, avoiding strong ambient light, in forced flash mode and at a distance of about 30 centimeters. Ten photographs of the Macbeth color card are taken and input into the computer.
The white balance of the ten images is adjusted: the brightest color in each image is mapped to (255, 255, 255) and the other colors are scaled linearly. The images are then transformed into HSV space.
The images converted to HSV space are compared with the colors of the standard color card and the parameters are solved. In our experiments the four parameters obtained were h = 0.02, s = 5, v = 6 and gamma = 1.1.
A photograph of the tongue is taken and stored in the computer, as shown in FIG. 6. After the white balance of the picture is adjusted, it is converted into HSV space. The histogram of the V channel is computed; the image is found to be underexposed, so its lightness is equalized, giving the picture shown in FIG. 7.
The image is then adjusted with the parameters h = 0.02, s = 5, v = 6 and gamma = 1.1 obtained by the system; the final result is shown in FIG. 8.
The method comprises the following specific implementation steps:
1. Shooting N Macbeth color card photographs
The shooting conditions are: indoors, avoiding strong ambient light, with the forced flash mode enabled and at a distance of about 30 centimeters. N photographs of the Macbeth color card are taken, where N = 10 to 15, and the photographs are input into a computer.
2. Correcting the white balance of the N Macbeth color chart photos
The photographs stored in the computer are represented as f_1(x,y), f_2(x,y), ..., f_N(x,y), where each image is f_i(x,y) = [R_i(x,y), G_i(x,y), B_i(x,y)], i = 1~N; (x,y) denotes a sampling point of the image, and R_i(x,y), G_i(x,y), B_i(x,y) are the red, green and blue values at that point.
In a digital photograph pure white has red, green and blue values of 255, 255 and 255; it is the brightest color in the image and the maximum of each channel. To correct the brightness color cast of the shot image, the white in the picture must first be corrected to pure white before the colors are corrected, as follows:
R_i'(x,y) = \frac{255}{Max(R_i)} \cdot R_i(x,y)
G_i'(x,y) = \frac{255}{Max(G_i)} \cdot G_i(x,y)
B_i'(x,y) = \frac{255}{Max(B_i)} \cdot B_i(x,y)
where Max(R_i), Max(G_i) and Max(B_i) are the maxima of the red, green and blue channels of the image f_i(x,y). The adjusted picture is denoted f_i' = [R_i'(x,y), G_i'(x,y), B_i'(x,y)]. By adjusting the white balance we obtain a series of adjusted images f_1'(x,y), f_2'(x,y), ..., f_N'(x,y).
3. Converting f_i'(x,y) represented in RGB (red, green, blue) space and the color card colors M_j = [MR_j, MG_j, MB_j], j = 1~24, into the HSV (hue, saturation, value) space representation
An image whose white balance has been corrected, f_i'(x,y) = [R_i'(x,y), G_i'(x,y), B_i'(x,y)], is transformed into HSV space by the following formulas.
H_i'(x,y) = \cos^{-1}\left[ \frac{(R_i'(x,y) - G_i'(x,y)) + (R_i'(x,y) - B_i'(x,y))}{2\sqrt{(R_i'(x,y) - G_i'(x,y))^2 + (R_i'(x,y) - B_i'(x,y)) \cdot (G_i'(x,y) - B_i'(x,y))}} \right]
S_i'(x,y) = \frac{Max(R_i'(x,y), G_i'(x,y), B_i'(x,y)) - Min(R_i'(x,y), G_i'(x,y), B_i'(x,y))}{Max(R_i'(x,y), G_i'(x,y), B_i'(x,y))}
V_i'(x,y) = Max(R_i'(x,y), G_i'(x,y), B_i'(x,y))
where R_i'(x,y), G_i'(x,y), B_i'(x,y) are the red, green and blue channel values of the i-th image at sampling point (x,y) after white balance adjustment, and H_i'(x,y), S_i'(x,y), V_i'(x,y) are the hue, saturation and value channel values of the i-th image at sampling point (x,y) after conversion. The HSV representation of the white-balance-corrected image is thus
f_i'(x,y) = [H_i'(x,y), S_i'(x,y), V_i'(x,y)]
In addition, the standard values of the color card, M_j = [MR_j, MG_j, MB_j], j = 1~24, are also converted to HSV space.
MH_j = \cos^{-1}\left[ \frac{(MR_j - MG_j) + (MR_j - MB_j)}{2\sqrt{(MR_j - MG_j)^2 + (MR_j - MB_j) \cdot (MG_j - MB_j)}} \right]
MS_j = \frac{Max(MR_j, MG_j, MB_j) - Min(MR_j, MG_j, MB_j)}{Max(MR_j, MG_j, MB_j)}
MV_j = Max(MR_j, MG_j, MB_j)
where MR_j, MG_j, MB_j are the red, green and blue channel values of the j-th patch of the color card, and MH_j, MS_j, MV_j are the hue, saturation and value channel values of the j-th patch after conversion. This gives the HSV space expression of the color card standard values: M_j = [MH_j, MS_j, MV_j], j = 1~24.
4. Calculating the average value of the H (hue) channel of the Macbeth color card photographs after white balance adjustment, and comparing it with the average hue of the color card to obtain the hue adjustment parameter h
Let Hav_1, Hav_2, ..., Hav_N denote the average H channel value of each adjusted image; the average hue over the N images is Hav = \frac{1}{N}\sum_{i=1}^{N} Hav_i. The average of the standard hue values of the color card is Hst = \frac{1}{24}\sum_{j=1}^{24} MH_j. The difference h = Hav - Hst is the hue adjustment parameter.
5. Calculating the average value of the V (lightness) channel of the Macbeth color card photographs after white balance adjustment, and comparing it with the average lightness of the color card to obtain the lightness adjustment parameter v
Let Vav_1, Vav_2, ..., Vav_N denote the average V channel value of each adjusted image; the average lightness over the N images is Vav = \frac{1}{N}\sum_{i=1}^{N} Vav_i. The average of the standard lightness values of the color card is Vst = \frac{1}{24}\sum_{j=1}^{24} MV_j. The difference v = Vav - Vst is the lightness adjustment parameter.
6. Calculating the average value of the S (saturation) channel of the Macbeth color card photographs after white balance adjustment, comparing it with the standard saturation of the color card, and obtaining the saturation adjustment parameter s by fitting
From each white-balance-adjusted photograph f_i'(x,y), the 24 rectangular color patches are extracted by position, as shown in FIG. 3, and the saturation in each patch is found, giving fS_i1, fS_i2, ..., fS_i24. After the saturations of the 24 patches are obtained for all collected photographs, the average of each patch over the N pictures is computed:
Sav_j = \frac{1}{N}\sum_{i=1}^{N} fS_{ij}
The computed saturation means Sav_1, Sav_2, ..., Sav_24 and the color card standard saturations MS_1, MS_2, ..., MS_24 are fitted by least squares: choose the parameter s in the model MS_j = s \cdot Sav_j such that \delta = \sqrt{\sum_{j=1}^{24} (s \cdot Sav_j - MS_j)^2} is minimal.
s is then computed as: s = \frac{\sum_{j=1}^{24} Sav_j \cdot MS_j}{\sum_{j=1}^{24} Sav_j^2}
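For completeness, a short derivation (not in the original text) of this closed form: setting the derivative of \delta^2 with respect to s to zero under the model MS_j = s \cdot Sav_j gives

\frac{d}{ds}\sum_{j=1}^{24}(s \cdot Sav_j - MS_j)^2 = 2\sum_{j=1}^{24} Sav_j (s \cdot Sav_j - MS_j) = 0
\;\Rightarrow\; s = \frac{\sum_{j=1}^{24} Sav_j \cdot MS_j}{\sum_{j=1}^{24} Sav_j^2}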
7. Calculating the average value of the V (lightness) channel of the color card photographs after white balance adjustment, comparing it with the standard lightness of the color card, and obtaining the curve correction value gamma by fitting
Gamma derives from the response curve of a CRT (display/television), whose brightness is non-linear in the input voltage;
gamma correction processes the image accordingly; since a change in gamma changes the lightness, the value of gamma is obtained from the lightness (V) channel:
photograph f after adjustment of white balancei′(x, y), respectively taking out 24 rectangular color blocks according to positions, and obtaining that the lightness value in each color block is fVi1,fVi2....fVi24. After the brightness of 24 color blocks is respectively obtained for all the collected pictures, the average value of each color block for the N pictures is obtained.
<math> <mrow> <msub> <mi>Vav</mi> <mi>j</mi> </msub> <mo>=</mo> <mfrac> <mn>1</mn> <mi>N</mi> </mfrac> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <msub> <mi>fV</mi> <mi>ij</mi> </msub> <mo>&CenterDot;</mo> </mrow> </math>
For the obtained brightness average value Vav1,Vav2...Vav24And color chart standard lightness MV1,MV2...MV24Fitting by using a least square method standard, wherein the concrete method comprises the following steps: selecting parameters gamma, MVj=Vavj gammaSo that <math> <mrow> <mi>&sigma;</mi> <mo>=</mo> <msqrt> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mn>24</mn> </munderover> <msup> <mrow> <mo>(</mo> <msubsup> <mi>Vav</mi> <mi>j</mi> <mi>gamma</mi> </msubsup> <mo>-</mo> <msub> <mi>MV</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> </msqrt> </mrow> </math> The value of (c) is minimal.
The gamma is calculated by <math> <mrow> <mi>gamma</mi> <mo>=</mo> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mn>24</mn> </munderover> <mi>ln</mi> <mrow> <mo>(</mo> <msub> <mi>Vav</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mo>*</mo> <mi>ln</mi> <mrow> <mo>(</mo> <msub> <mi>MV</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> </mrow> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mn>24</mn> </munderover> <mi>ln</mi> <msup> <mrow> <mo>(</mo> <msub> <mi>Vav</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </mfrac> <mo>&CenterDot;</mo> </mrow> </math>
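For completeness, a short derivation (not in the original text): taking logarithms of MV_j = Vav_j^{gamma} linearizes the model, and the least-squares fit of a line through the origin in the log domain gives the stated formula (minimizing the log-domain residual is an approximation to minimizing \sigma itself):

\ln(MV_j) = gamma \cdot \ln(Vav_j)
\;\Rightarrow\; gamma = \frac{\sum_{j=1}^{24} \ln(Vav_j) \cdot \ln(MV_j)}{\sum_{j=1}^{24} \ln(Vav_j)^2}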
8. Taking the tongue picture with the digital camera and storing it in the computer
The picture is expressed as It(x,y) = [IR(x,y), IG(x,y), IB(x,y)], where (x,y) is a sampling point of the image and IR(x,y), IG(x,y), IB(x,y) are the red, green and blue values at that point.
9. Correcting white balance of tongue picture
The white in the tongue picture is corrected to pure white as follows:
It′(x,y)=[IR′(x,y),IG′(x,y),IB′(x,y)]
IR'(x,y) = \frac{255}{Max(IR)} \cdot IR(x,y)
IG'(x,y) = \frac{255}{Max(IG)} \cdot IG(x,y)
IB'(x,y) = \frac{255}{Max(IB)} \cdot IB(x,y)
where Max(IR), Max(IG) and Max(IB) are the maximum values of the red, green and blue channels in the image It(x,y). By adjusting the white balance we obtain the adjusted tongue picture It'(x,y).
10. Converting It'(x,y) represented in RGB space into the HSV space representation
IH'(x,y) = \cos^{-1}\left[ \frac{(IR'(x,y) - IG'(x,y)) + (IR'(x,y) - IB'(x,y))}{2\sqrt{(IR'(x,y) - IG'(x,y))^2 + (IR'(x,y) - IB'(x,y)) \cdot (IG'(x,y) - IB'(x,y))}} \right]
IS'(x,y) = \frac{Max(IR'(x,y), IG'(x,y), IB'(x,y)) - Min(IR'(x,y), IG'(x,y), IB'(x,y))}{Max(IR'(x,y), IG'(x,y), IB'(x,y))}
IV'(x,y) = Max(IR'(x,y), IG'(x,y), IB'(x,y))
11. Lightness equalization for underexposed photographs
After the white balance is corrected, the distribution of the lightness histogram is analyzed; if the histogram peak is smaller than a predetermined threshold, the tongue picture is marked and lightness equalization is carried out before the next adjustment.
The histogram of the lightness channel IV'(x,y) of the image is computed. The lightness channel takes values 0~255, and its histogram HistV(t) is counted for t = 0~255.
The histogram is defined as HistV(t) = Num(IV'(x,y) = t); that is, the t-th component of the histogram is the number of points with value t in the lightness map IV'(x,y). FIG. 5(a) shows the histogram of one image, whose peak is PeakV = Max(HistV(t)), t = 0~255. The abscissa is the gray value, from 0 to 255, and the ordinate is the number of pixels in the image whose gray value equals that abscissa.
If PeakV < ThresholdV, histogram equalization is performed on the image IV'(x,y). The equalization is expressed as IV'(x,y) = \sum_{k=1}^{IV'(x,y)} HistV(k); that is, if the gray level of a pixel (x,y) before equalization is IV'(x,y), then the lightness of this point after equalization is the sum of the histogram from gray value 1 to gray value IV'(x,y). In the equation, k represents a lightness value; FIG. 5(b) shows the histogram after histogram equalization.
If PeakV ≥ ThresholdV, IV'(x,y) is not processed.
12. Correcting the image in the HSV space according to the set parameters h, s and v;
IHd(x,y)=IH′(x,y)-h
ISd(x,y)=IS′(x,y)*s
IVd(x,y)=IV′(x,y)-v
13. Converting the image from HSV space to RGB space and applying the gamma coefficient
The procedure for transferring HSV to RGB space is:
four temporary variables f, aa, bb, cc are set to assist in the operation, where
f = IH_d - floor(IH_d); the function floor(IH_d) takes the largest integer not exceeding IH_d, so f is the fractional part of IH_d,
aa=IVd*(1-ISd)
bb=IVd*(1-(ISd*f))
cc=IVd*(1-(ISd*(1-f)))
The values of IR_d, IG_d and IB_d are determined according to the range of IH_d:
if IH_d \in [0, \pi/6), then IR_d = IV_d, IG_d = cc, IB_d = aa
if IH_d \in [\pi/6, \pi/3), then IR_d = bb, IG_d = IV_d, IB_d = aa
if IH_d \in [\pi/3, \pi/2), then IR_d = aa, IG_d = IV_d, IB_d = cc
if IH_d \in [\pi/2, 2\pi/3), then IR_d = aa, IG_d = bb, IB_d = IV_d
if IH_d \in [2\pi/3, 5\pi/6), then IR_d = cc, IG_d = aa, IB_d = IV_d
if IH_d \in [5\pi/6, 2\pi), then IR_d = IV_d, IG_d = aa, IB_d = bb
After the conversion into the RGB space, the image in the RGB space is adjusted by the curve correction value gamma.
IR_d'(x,y) = IR_d(x,y)^{gamma}
IG_d'(x,y) = IG_d(x,y)^{gamma}
IB_d'(x,y) = IB_d(x,y)^{gamma}
The resulting adjusted image It_d(x,y) = [IR_d'(x,y), IG_d'(x,y), IB_d'(x,y)] is the adjusted tongue picture.

Claims (1)

1. A digital tongue picture color cast correction method, characterized by sequentially comprising the following steps:
Step (1): under forced flash, N photographs of the Macbeth color card are taken with a digital camera, where N = 10 to 15, and the photographs are input into a computer;
Step (2): the computer corrects the white balance of the N Macbeth color card photographs:
Let the N Macbeth color card photographs be denoted f_1(x,y), f_2(x,y), ..., f_N(x,y);
each image is represented as f_i(x,y) = [R_i(x,y), G_i(x,y), B_i(x,y)], i = 1~N,
where R_i(x,y), G_i(x,y), B_i(x,y) are the red, green and blue values at sampling point (x,y);
the corresponding picture of each photo after white balance correction is expressed as:
f_i' = [R_i'(x,y), G_i'(x,y), B_i'(x,y)],
R_i'(x,y) = \frac{255}{Max(R_i(x,y))} \cdot R_i(x,y)
G_i'(x,y) = \frac{255}{Max(G_i(x,y))} \cdot G_i(x,y)
B_i'(x,y) = \frac{255}{Max(B_i(x,y))} \cdot B_i(x,y)
where Max(R_i(x,y)), Max(G_i(x,y)) and Max(B_i(x,y)) are the maximum values of the red, green and blue channels, respectively, in the image f_i(x,y);
Step (3): the f_i'(x,y) expressed in RGB space and the Macbeth color card colors are converted into the HSV space representation, where H, S and V denote hue, saturation and value (lightness), respectively:
Step (3.1): convert f_i'(x,y) = [R_i'(x,y), G_i'(x,y), B_i'(x,y)] to HSV space:
H_i'(x,y) = \cos^{-1}\left[\frac{(R_i'(x,y)-G_i'(x,y))+(R_i'(x,y)-B_i'(x,y))}{2\sqrt{(R_i'(x,y)-G_i'(x,y))^2+(R_i'(x,y)-B_i'(x,y))(G_i'(x,y)-B_i'(x,y))}}\right]
S_i'(x,y) = \frac{\mathrm{Max}(R_i'(x,y),G_i'(x,y),B_i'(x,y))-\mathrm{Min}(R_i'(x,y),G_i'(x,y),B_i'(x,y))}{\mathrm{Max}(R_i'(x,y),G_i'(x,y),B_i'(x,y))}
V_i'(x,y) = \mathrm{Max}(R_i'(x,y),G_i'(x,y),B_i'(x,y))
wherein H_i'(x,y), S_i'(x,y) and V_i'(x,y) denote the values of the hue, saturation and lightness channels of the i-th image at the sampling point (x,y) after conversion, and R_i'(x,y), G_i'(x,y) and B_i'(x,y) denote the values of the red, green and blue channels of the i-th image at the sampling point (x,y) after white balance adjustment; the HSV representation of the white-balanced image is:
f_i'(x,y) = [H_i'(x,y), S_i'(x,y), V_i'(x,y)]
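A minimal numpy sketch of this arccos-based conversion follows (the small eps guard against division by zero and the clipping of the arccos argument are my additions; as written, the formula yields a hue in [0, pi]):

```python
import numpy as np

def rgb_to_hsv_arccos(r, g, b, eps=1e-12):
    """arccos-based RGB -> HSV conversion used in steps (3.1) and (8).

    r, g, b: float arrays of white-balanced channel values.
    Returns hue in radians, saturation in [0, 1], and value (channel maximum).
    """
    num = (r - g) + (r - b)
    den = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    s = (mx - mn) / (mx + eps)
    v = mx
    return h, s, v
```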
Step (3.2): convert the standard values M_j of the Macbeth color chart into HSV space, where
M_j = [MR_j, MG_j, MB_j], j = 1~24
MH_j = \cos^{-1}\left[\frac{(MR_j-MG_j)+(MR_j-MB_j)}{2\sqrt{(MR_j-MG_j)^2+(MR_j-MB_j)(MG_j-MB_j)}}\right]
MS_j = \frac{\mathrm{Max}(MR_j,MG_j,MB_j)-\mathrm{Min}(MR_j,MG_j,MB_j)}{\mathrm{Max}(MR_j,MG_j,MB_j)}
MV_j = \mathrm{Max}(MR_j,MG_j,MB_j)
wherein MR_j, MG_j and MB_j denote the values of the red, green and blue channels of the j-th color patch of the color chart, and MH_j, MS_j and MV_j denote the values of the hue, saturation and lightness channels of the j-th color patch after conversion;
this yields the HSV expression of the color chart standard values: M_j = [MH_j, MS_j, MV_j], j = 1~24;
Step (4): compute each adjustment parameter from the standard Macbeth color chart values and the color chart photographs, as follows:
Step (4.1): compute the average value of the hue (H) channel of the white-balanced Macbeth color chart photographs and compare it with the average hue of the standard chart to obtain the hue adjustment parameter h:
the average hue of the N Macbeth photographs is Hav = \frac{1}{N}\sum_{i=1}^{N} Hav_i, where Hav_i is the average of the hue (H) channel of the i-th white-balanced image,
the standard average hue of the Macbeth color chart is Hst = \frac{1}{24}\sum_{j=1}^{24} MH_j,
and the hue adjustment parameter is h = Hav - Hst;
Step (4.2): compute the average value of the lightness (V) channel of the white-balanced Macbeth color chart photographs and compare it with the average lightness of the standard chart to obtain the lightness adjustment parameter v:
the average lightness of the N Macbeth photographs is Vav = \frac{1}{N}\sum_{i=1}^{N} Vav_i, where Vav_i is the average of the lightness (V) channel of the i-th white-balanced image,
the standard average lightness of the Macbeth color chart is Vst = \frac{1}{24}\sum_{j=1}^{24} MV_j,
and the lightness adjustment parameter is v = Vav - Vst;
Step (4.3): compute the average value of the saturation (S) channel of the white-balanced Macbeth color chart photographs, compare it with the saturation of the color chart standard values, and obtain the saturation adjustment parameter s by fitting:
over the N Macbeth color chart photographs, the average saturation of color patch j is Sav_j = \frac{1}{N}\sum_{i=1}^{N} fS_{ij}, where fS_{ij} is the saturation of color patch j in the i-th photograph, i = 1~N, j = 1~24,
the standard saturation of each color patch in the color chart is MS_1, MS_2, ..., MS_{24},
and the saturation adjustment parameter is s = \frac{\sum_{j=1}^{24} Sav_j \cdot MS_j}{\sum_{j=1}^{24} Sav_j^2};
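A minimal numpy sketch of step (4) follows; the function name and the input array shapes are my assumptions, while the three formulas are the ones given above (the saturation gain is the least-squares solution of MS_j ≈ s · Sav_j):

```python
import numpy as np

def fit_adjustment_parameters(Hav_i, Vav_i, Sav_j, MH_j, MV_j, MS_j):
    """Hue, lightness and saturation adjustment parameters of step (4).

    Hav_i, Vav_i: length-N arrays of per-photo hue and lightness means.
    Sav_j: length-24 array of per-patch saturation means over the N photos.
    MH_j, MV_j, MS_j: length-24 arrays of Macbeth standard HSV values.
    """
    h = np.mean(Hav_i) - np.mean(MH_j)              # h = Hav - Hst
    v = np.mean(Vav_i) - np.mean(MV_j)              # v = Vav - Vst
    s = np.sum(Sav_j * MS_j) / np.sum(Sav_j ** 2)   # least-squares gain MS_j ~ s * Sav_j
    return h, s, v
```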
Step (5): compute the curve correction value gamma:
over the N Macbeth color chart photographs, the average lightness of color patch j is Vav_j = \frac{1}{N}\sum_{i=1}^{N} fV_{ij}, where fV_{ij} is the lightness of color patch j in the i-th photograph, i = 1~N, j = 1~24,
the standard lightness of each color patch in the color chart is MV_1, MV_2, ..., MV_{24},
and the curve correction value is gamma = \frac{\sum_{j=1}^{24} \ln(Vav_j)\,\ln(MV_j)}{\sum_{j=1}^{24} \ln(Vav_j)^2};
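This is the least-squares fit of MV_j ≈ Vav_j^gamma in log space. A minimal sketch, assuming both lightness arrays have been normalized to (0, 1] so the power law behaves as a gamma curve (the patent does not state the value range):

```python
import numpy as np

def fit_gamma(Vav_j, MV_j):
    """Least-squares gamma of step (5): fit ln(MV_j) ~ gamma * ln(Vav_j).

    Vav_j: length-24 array of measured patch lightness means, in (0, 1].
    MV_j: length-24 array of standard patch lightness values, in (0, 1].
    """
    lv = np.log(Vav_j)
    lm = np.log(MV_j)
    return np.sum(lv * lm) / np.sum(lv ** 2)
```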
Step (6): photograph the tongue with a digital camera to obtain an image It(x,y) = [IR(x,y), IG(x,y), IB(x,y)], wherein IR(x,y), IG(x,y) and IB(x,y) denote, respectively, the red, green and blue values at the image sampling point (x,y);
Step (7): correct the white color in the tongue image It(x,y) to pure white, obtaining the corrected tongue image It'(x,y):
It'(x,y) = [IR'(x,y), IG'(x,y), IB'(x,y)]
IR'(x,y) = \frac{255}{\mathrm{Max}(IR(x,y))} \cdot IR(x,y)
IG'(x,y) = \frac{255}{\mathrm{Max}(IG(x,y))} \cdot IG(x,y)
IB'(x,y) = \frac{255}{\mathrm{Max}(IB(x,y))} \cdot IB(x,y)
wherein Max(IR(x,y)), Max(IG(x,y)) and Max(IB(x,y)) denote, respectively, the maximum values of the red, green and blue channels of the image It(x,y);
Step (8): convert the tongue image from RGB space to the hue (H), saturation (S) and lightness (V) space:
IH'(x,y) = \cos^{-1}\left[\frac{(IR'(x,y)-IG'(x,y))+(IR'(x,y)-IB'(x,y))}{2\sqrt{(IR'(x,y)-IG'(x,y))^2+(IR'(x,y)-IB'(x,y))(IG'(x,y)-IB'(x,y))}}\right]
IS'(x,y) = \frac{\mathrm{Max}(IR'(x,y),IG'(x,y),IB'(x,y))-\mathrm{Min}(IR'(x,y),IG'(x,y),IB'(x,y))}{\mathrm{Max}(IR'(x,y),IG'(x,y),IB'(x,y))}
IV'(x,y) = \mathrm{Max}(IR'(x,y),IG'(x,y),IB'(x,y));
Step (9): determine whether the white-balanced tongue image is underexposed, and perform lightness equalization on underexposed images:
Step (9.1): compute the lightness histogram of the image:
HistV(t) = Num(IV'(x,y) = t), t = 0~255, wherein the t-th component of the histogram is the number of points in the lightness image IV'(x,y) whose value equals t;
Step (9.2): find the peak PeakV of the lightness histogram:
PeakV = Max(HistV(t)), t = 0~255;
Step (9.3): compare PeakV with the threshold ThresholdV, a preset value of 100: if PeakV >= ThresholdV, IV'(x,y) is left unprocessed; otherwise proceed to the next step;
Step (9.4): perform histogram equalization on the image IV'(x,y), obtaining IV'(x,y) = \sum_{k=1}^{IV'(x,y)} HistV(k), where k is a counter over lightness values;
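A minimal numpy sketch of step (9) follows. The rescaling of the cumulative sum to the 0–255 range is my addition for a usable result; the patent writes only the running sum of HistV(k), and the function and variable names are illustrative:

```python
import numpy as np

def equalize_if_underexposed(v_channel, threshold=100):
    """Lightness histogram, peak test, and histogram equalization (step 9).

    v_channel: 2-D array of integer lightness values in 0..255.
    threshold: the patent's ThresholdV (100); images whose histogram peak
    reaches it are returned unchanged.
    """
    v = v_channel.astype(np.int64)
    hist = np.bincount(v.ravel(), minlength=256)   # HistV(t), t = 0..255
    if hist.max() >= threshold:                    # PeakV >= ThresholdV: no processing
        return v_channel
    cdf = np.cumsum(hist)                          # running sum of HistV(k)
    # Map each lightness value through the cumulative histogram, rescaled to 0..255.
    return np.round(255.0 * cdf[v] / cdf[-1]).astype(v_channel.dtype)
```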
Step (10): correct the image in HSV space using the adjustment parameters h, s and v obtained in step (4):
IH_d(x,y) = IH'(x,y) - h
IS_d(x,y) = IS'(x,y) \cdot s
IV_d(x,y) = IV'(x,y) - v;
Step (11): convert the image from HSV space back to RGB space:
Step (11.1): define four temporary variables f, aa, bb and cc to assist the transformation:
f = IH_d - floor(IH_d), where floor(IH_d) is the largest integer not greater than IH_d, so f is the fractional part of IH_d,
aa = IV_d \cdot (1 - IS_d)
bb = IV_d \cdot (1 - IS_d \cdot f)
cc = IV_d \cdot (1 - IS_d \cdot (1 - f))
Step (11.2): determine the values of IR_d, IG_d and IB_d according to the range of IH_d (a code sketch of steps (10)–(11) follows this list):
if IH_d \in [0, \pi/6), then IR_d = IV_d, IG_d = cc, IB_d = aa;
if IH_d \in [\pi/6, \pi/3), then IR_d = bb, IG_d = IV_d, IB_d = aa;
if IH_d \in [\pi/3, \pi/2), then IR_d = aa, IG_d = IV_d, IB_d = cc;
if IH_d \in [\pi/2, 2\pi/3), then IR_d = aa, IG_d = bb, IB_d = IV_d;
if IH_d \in [2\pi/3, 5\pi/6), then IR_d = cc, IG_d = aa, IB_d = IV_d;
if IH_d \in [5\pi/6, 2\pi), then IR_d = IV_d, IG_d = aa, IB_d = bb;
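A minimal numpy sketch of steps (10) and (11) together, under a few assumptions of mine: adjusted hues below 0 fall into the first sector, the sector lookup uses np.digitize, and all array names are illustrative:

```python
import numpy as np

def adjust_and_convert_to_rgb(IH, IS, IV, h, s, v):
    """Apply the h, s, v adjustments (step 10) and the sector-based
    HSV -> RGB mapping of step (11)."""
    IHd = IH - h                     # step (10)
    ISd = IS * s
    IVd = IV - v

    f = IHd - np.floor(IHd)          # fractional part of the adjusted hue
    aa = IVd * (1 - ISd)
    bb = IVd * (1 - ISd * f)
    cc = IVd * (1 - ISd * (1 - f))

    # Sector boundaries pi/6, pi/3, pi/2, 2*pi/3, 5*pi/6; everything at or
    # beyond 5*pi/6 falls into the last (wider) sector of step (11.2).
    edges = np.pi / 6 * np.arange(1, 6)
    sector = np.digitize(IHd, edges)
    conds = [sector == k for k in range(6)]
    IRd = np.select(conds, [IVd, bb, aa, aa, cc, IVd])
    IGd = np.select(conds, [cc, IVd, IVd, bb, aa, aa])
    IBd = np.select(conds, [aa, aa, cc, IVd, IVd, bb])
    return IRd, IGd, IBd
```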
Step (12): adjust the image using the curve correction value gamma:
IR_d'(x,y) = IR_d(x,y)^{gamma}
IG_d'(x,y) = IG_d(x,y)^{gamma}
IB_d'(x,y) = IB_d(x,y)^{gamma}
The resulting adjusted image It_d(x,y) = [IR_d'(x,y), IG_d'(x,y), IB_d'(x,y)] is the corrected tongue picture.
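A minimal sketch of this final power-law adjustment; normalizing the channels to [0, 1] before exponentiation and rescaling back to 0–255 is my assumption, since the patent only writes the power itself:

```python
import numpy as np

def apply_gamma(IRd, IGd, IBd, gamma):
    """Per-channel gamma correction of step (12)."""
    def correct(channel):
        c = np.clip(channel / 255.0, 0.0, 1.0)   # assumed normalization
        return 255.0 * c ** gamma
    return correct(IRd), correct(IGd), correct(IBd)
```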
CNB2006101138705A 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast Expired - Fee Related CN100412906C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101138705A CN100412906C (en) 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101138705A CN100412906C (en) 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast

Publications (2)

Publication Number Publication Date
CN1945627A true CN1945627A (en) 2007-04-11
CN100412906C CN100412906C (en) 2008-08-20

Family

ID=38045022

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101138705A Expired - Fee Related CN100412906C (en) 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast

Country Status (1)

Country Link
CN (1) CN100412906C (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050270383A1 (en) * 2004-06-02 2005-12-08 Aiptek International Inc. Method for detecting and processing dominant color with automatic white balance
GB0504520D0 (en) * 2005-03-04 2005-04-13 Chrometrics Ltd Reflectance spectra estimation and colour space conversion using reference reflectance spectra

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101615385B (en) * 2008-06-23 2012-05-30 索尼株式会社 Image display apparatus and image display apparatus assembly and driving method thereof
CN101947101A (en) * 2010-07-15 2011-01-19 哈尔滨工业大学 Method for making tongue colour reproduction colour card
CN102419861A (en) * 2010-09-27 2012-04-18 上海中医药大学 Color image correcting method based on topology subdivision of uniform color space
CN102693671B (en) * 2011-03-23 2014-03-26 长庚医疗财团法人林口长庚纪念医院 Manufacturing method of tongue picture classification card
CN102693671A (en) * 2011-03-23 2012-09-26 长庚医疗财团法人林口长庚纪念医院 Tongue picture classification card and manufacturing method thereof
CN104412299A (en) * 2012-06-28 2015-03-11 夏普株式会社 Image processing device and recording medium
CN103106669B (en) * 2013-01-02 2015-10-28 北京工业大学 Chinese medicine tongue picture is as environmental suitability color reproduction method
CN103106669A (en) * 2013-01-02 2013-05-15 北京工业大学 Tongue image environment adaptive color reproduction method of traditional Chinese medicine
CN103248793A (en) * 2013-05-14 2013-08-14 旭曜科技股份有限公司 Skin tone optimization method and device for color gamut transformation system
CN103248793B (en) * 2013-05-14 2016-08-10 旭曜科技股份有限公司 The colour of skin optimization method of gamut conversion system and device
CN103308517B (en) * 2013-05-21 2015-09-30 谢绍鹏 Chinese medicine color objectifies method and Chinese medicine image acquiring device
CN103308517A (en) * 2013-05-21 2013-09-18 谢绍鹏 Traditional Chinese medicine color objectification method and traditional Chinese medicine image acquisition device
CN104297469A (en) * 2013-07-15 2015-01-21 艾博生物医药(杭州)有限公司 Immune reading device and calibration method of the same
CN104297469B (en) * 2013-07-15 2017-02-08 艾博生物医药(杭州)有限公司 Immune reading device and calibration method of the same
CN104700363A (en) * 2013-12-06 2015-06-10 富士通株式会社 Removal method and device of reflecting areas in tongue image
CN103957396A (en) * 2014-05-14 2014-07-30 姚杰 Image processing method and device used when tongue diagnosis is conducted with intelligent device and equipment
CN104572538A (en) * 2014-12-31 2015-04-29 北京工业大学 K-PLS regression model based traditional Chinese medicine tongue image color correction method
CN104572538B (en) * 2014-12-31 2017-08-25 北京工业大学 A kind of Chinese medicine tongue image color correction method based on K PLS regression models
CN106127675B (en) * 2016-06-20 2019-05-24 深圳市利众信息科技有限公司 Colour consistency method and apparatus
CN106127675A (en) * 2016-06-20 2016-11-16 深圳市中识创新科技有限公司 Colour consistency method and apparatus
CN107689031A (en) * 2016-08-03 2018-02-13 天津慧医谷科技有限公司 Color restoration method based on illumination compensation in tongue picture analysis
CN107689031B (en) * 2016-08-03 2021-05-28 天津慧医谷科技有限公司 Color restoration method based on illumination compensation in tongue picture analysis
CN107833620A (en) * 2017-11-28 2018-03-23 北京羽医甘蓝信息技术有限公司 Image processing method and image processing apparatus
CN108320272A (en) * 2018-02-05 2018-07-24 电子科技大学 The method that image delusters
CN108742519A (en) * 2018-04-02 2018-11-06 上海中医药大学附属岳阳中西医结合医院 Machine vision three-dimensional reconstruction technique skin ulcer surface of a wound intelligent auxiliary diagnosis system
WO2020103570A1 (en) * 2018-11-19 2020-05-28 Oppo广东移动通信有限公司 Image color correction method, device, storage medium and mobile terminal
CN109815860A (en) * 2019-01-10 2019-05-28 中国科学院苏州生物医学工程技术研究所 TCM tongue diagnosis image color correction method, electronic equipment, storage medium
CN110009588A (en) * 2019-04-09 2019-07-12 成都品果科技有限公司 A kind of portrait image color enhancement method and device
CN110009588B (en) * 2019-04-09 2022-12-27 成都品果科技有限公司 Portrait image color enhancement method and device
CN114073494A (en) * 2020-08-19 2022-02-22 京东方科技集团股份有限公司 Leukocyte detection method, system, electronic device, and computer-readable medium
WO2022037328A1 (en) * 2020-08-19 2022-02-24 京东方科技集团股份有限公司 White blood cell detection method and system, electronic device, and computer readable medium

Also Published As

Publication number Publication date
CN100412906C (en) 2008-08-20

Similar Documents

Publication Publication Date Title
CN1945627A (en) Method for correcting digital tongue picture colour cast
CN100345160C (en) Histogram equalizing method for controlling average brightness
CN100342709C (en) Image recording apparatus, image recording method, image processing apparatus, image processing method and image processing system
CN100341045C (en) Image display device, image processing method, program and storage medium
CN1314266C (en) Image pickup apparatus
CN1245033C (en) Equipment and method for regulating colour image colour saturation
CN1153444C (en) Image processing apparatus and method
CN1941843A (en) Image processing apparatus and an image processing program
CN1252991C (en) Method and equipment for changing brightness of image
CN101032159A (en) Image processing device, method, and image processing program
CN1717006A (en) Be used for improving the equipment and the method for picture quality at imageing sensor
CN1943248A (en) Imaging device and imaging element
CN1925562A (en) Apparatus, method for taking an image and apparatus, method for processing an image and program thereof
CN1642220A (en) Image processing device, image display device, image processing method, and image processing program
CN101061704A (en) Color adjusting device and method
CN1645914A (en) Image processing method, image processing apparatus, and computer program used therewith
CN101039439A (en) Method and apparatus for realizing correction of white balance
CN1832583A (en) Equipment, medium and method possessing white balance control
CN1714372A (en) Image signal processing
CN1662071A (en) Image data processing in color spaces
CN1822660A (en) Image processing system, projector, and image processing method
CN1909671A (en) Method and apparatus for generating user preference data
CN101035190A (en) Apparatus, method, and program product for color correction
CN1867080A (en) Image process apparatus and image pickup apparatus
CN101079954A (en) Method and device for realizing white balance correction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080820

Termination date: 20091120