CN100412906C - Method for correcting digital tongue picture colour cast - Google Patents

Method for correcting digital tongue picture colour cast

Publication number
CN100412906C
Authority
CN
China
Legal status
Expired - Fee Related
Application number
CNB2006101138705A
Other languages
Chinese (zh)
Other versions
CN1945627A (en)
Inventor
白净
张永红
吴佳
崔珊珊
孙晓静
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNB2006101138705A priority Critical patent/CN100412906C/en
Publication of CN1945627A publication Critical patent/CN1945627A/en
Application granted granted Critical
Publication of CN100412906C publication Critical patent/CN100412906C/en

Landscapes

  • Processing Of Color Television Signals (AREA)

Abstract

This invention relates to a method for digitally correcting color cast in tongue images, characterized by the following steps: photographs of a Macbeth color calibration card taken under flash illumination are white-balance corrected and converted into hue, saturation, and value (HSV) space, where adjustment parameters are obtained by comparison with the standard values of the color card. A tongue image is then shot with a digital camera, white-balance corrected, converted into hue, saturation, and value space, adjusted with the parameters obtained above, and converted back into red, green, and blue (RGB) space. Finally, the RGB image is adjusted with a gamma value; this curve-correction gamma is obtained by a least-squares fit between the standard lightness values of the Macbeth card and the average lightness values measured from the photographs.

Description

Digital tongue picture color cast correction method
Technical Field
The present invention belongs to the field of digital correction methods for color cast in tongue images.
Background
Since the mid-to-late 1980s, traditional Chinese medicine practitioners and digital image researchers have jointly attempted to study tongue diagnosis objectively with computer technology. Research has focused primarily on the calibration, storage, and output of tongue photograph color, and on processing tongue photographs with modern image processing and analysis techniques. Among these, the color of the tongue is the most important information in tongue diagnosis. Although shooting in the forced flash mode of a digital camera lets the strong flash light mask interference from natural light, the flash illuminating the tongue surface shifts the overall color of the image toward a warm red-yellow tone, so this deviation must be corrected by an appropriate method.
The method uses a Macbeth 24-patch standard color card, which is commonly used for correcting the color of digital photographs. The card is manufactured by a special process so that its surface has a reflection spectrum similar to that of skin. A series of calibration parameters is derived by comparing the colors of the Macbeth card as they appear in the photographs with the colors of the real card, and these parameters are used to calibrate the captured tongue images.
Disclosure of Invention
The invention aims to correct the color cast caused by a strong flash lamp when a tongue picture is shot, and provides a general computer-based method for correcting tongue image colors.
The invention is characterized in that:
Step (1): under forced-flash conditions, N photographs of the Macbeth color card are shot with a digital camera, where N = 10~15, and the photographs are input into a computer;
Step (2): the computer corrects the white balance of the N Macbeth color-card photographs:
Let the N Macbeth color-card photographs be denoted f1(x,y), f2(x,y) … fN(x,y), where each image is represented as fi(x,y) = [Ri(x,y), Gi(x,y), Bi(x,y)], i = 1…N, and Ri(x,y), Gi(x,y), Bi(x,y) denote the red, green, and blue values at sampling point (x,y), respectively; the corresponding picture of each photograph after white-balance correction is expressed as:
fi′ = [Ri′(x,y), Gi′(x,y), Bi′(x,y)],
Ri′(x,y) = (255/Max(Ri(x,y))) * Ri(x,y)
wherein Gi′(x,y) = (255/Max(Gi(x,y))) * Gi(x,y)
Bi′(x,y) = (255/Max(Bi(x,y))) * Bi(x,y)
wherein Max(Ri(x,y)), Max(Gi(x,y)) and Max(Bi(x,y)) denote, respectively, the maximum values of the red, green, and blue channels in the image fi(x,y);
Step (3): convert fi′(x,y), expressed in RGB space, and the Macbeth color-card colors into an HSV space representation, where H, S, V denote hue, saturation, and value, respectively:
Step (3.1): convert fi′(x,y) = [Ri′(x,y), Gi′(x,y), Bi′(x,y)] into HSV space:
Hi′(x,y) = cos⁻¹[((Ri′(x,y) - Gi′(x,y)) + (Ri′(x,y) - Bi′(x,y))) / (2*sqrt((Ri′(x,y) - Gi′(x,y))^2 + (Ri′(x,y) - Bi′(x,y))*(Gi′(x,y) - Bi′(x,y))))]
Si′(x,y) = (Max(Ri′(x,y), Gi′(x,y), Bi′(x,y)) - Min(Ri′(x,y), Gi′(x,y), Bi′(x,y))) / Max(Ri′(x,y), Gi′(x,y), Bi′(x,y))
Vi′(x,y)=Max(Ri′(x,y),Gi′(x,y),Bi′(x,y))
where Hi′(x,y), Si′(x,y), Vi′(x,y) denote the values of the hue, saturation, and value channels of the i-th image at sampling point (x,y) after conversion, and Ri′(x,y), Gi′(x,y), Bi′(x,y) denote the values of the red, green, and blue channels of the i-th image at sampling point (x,y) after white-balance adjustment; the HSV representation of the white-balance-corrected image is expressed as:
fi′(x,y)=[Hi′(x,y),Si′(x,y),Vi′(x,y)]
Step (3.2): convert the standard values Mj of the Macbeth color card into HSV (hue, saturation, value) space,
Mj=[MRj,MGj,MBj],j=1~24
MHj = cos⁻¹[((MRj - MGj) + (MRj - MBj)) / (2*sqrt((MRj - MGj)^2 + (MRj - MBj)*(MGj - MBj)))]
MSj = (Max(MRj, MGj, MBj) - Min(MRj, MGj, MBj)) / Max(MRj, MGj, MBj)
MVj=Max(MRj,MGj,MBj)
wherein MRj, MGj, MBj denote the values of the red, green, and blue channels of the j-th color patch of the color card, and MHj, MSj, MVj denote the values of the hue, saturation, and value channels of the j-th patch after conversion;
this gives the HSV space expression of the color-card standard values: Mj = [MHj, MSj, MVj], j = 1~24;
Step (4): solve for each adjustment parameter from the values of the standard Macbeth color card and the color-card photographs, as follows:
Step (4.1): compute the average value of the hue (H) channel of the white-balance-adjusted Macbeth color-card photographs and compare it with the average hue of the standard color card to obtain the hue adjustment parameter h:
The average hue of the N Macbeth photographs is Hav = (1/N) Σ(i=1~N) Havi, where Havi is the average of the hue (H) channel of the i-th image after white-balance adjustment,
the standard hue average of the Macbeth color card is Hst = (1/24) Σ(j=1~24) MHj,
and the hue adjustment parameter is h = Hav - Hst;
Step (4.2): compute the average value of the value (lightness, V) channel of the white-balance-adjusted Macbeth color-card photographs and compare it with the average lightness of the standard color card to obtain the lightness adjustment parameter v:
The average lightness of the N Macbeth photographs is Vav = (1/N) Σ(i=1~N) Vpavi, where Vpavi is the average of the value (V) channel of the i-th image after white-balance adjustment,
the standard lightness average of the Macbeth color card is Vst = (1/24) Σ(j=1~24) MVj,
and the lightness adjustment parameter is v = Vav - Vst;
Step (4.3): compute the average value of the saturation (S) channel of the white-balance-adjusted Macbeth color-card photographs, compare it with the standard saturations of the color card, and obtain the saturation adjustment parameter s by fitting:
In the N Macbeth color-card photographs, the average saturation of color patch j is Savj = (1/N) Σ(i=1~N) fSij, where fSij is the saturation of patch j in the i-th photograph, i = 1~N, j = 1~24,
the standard saturation of each color patch in the card is MS1, MS2 … MS24,
and the saturation adjustment parameter s is: s = Σ(j=1~24) Savj*MSj / Σ(j=1~24) Savj^2;
Step (5): calculate the curve correction value gamma:
In the N Macbeth color-card photographs, the average lightness of color patch j is Vbavj = (1/N) Σ(i=1~N) fVij, where fVij is the lightness of color patch j in the i-th picture, i = 1~N, j = 1~24,
so the average lightness of each color patch over the photographs is Vbav1, Vbav2 … Vbav24,
and the curve correction value gamma is: gamma = Σ(j=1~24) ln(Vbavj)*ln(MVj) / Σ(j=1~24) ln(Vbavj)^2;
Step (6): take a tongue photograph with the digital camera to obtain an image It(x,y) = [IR(x,y), IG(x,y), IB(x,y)], where IR(x,y), IG(x,y), IB(x,y) denote the red, green, and blue values at image sampling point (x,y), respectively;
Step (7): correct the white in the tongue image It(x,y) to pure white, obtaining the corrected tongue image It′(x,y):
It′(x,y)=[IR′(x,y),IG′(x,y),IB′(x,y)]
IR′(x,y) = (255/Max(IR(x,y))) * IR(x,y)
IG′(x,y) = (255/Max(IG(x,y))) * IG(x,y)
IB′(x,y) = (255/Max(IB(x,y))) * IB(x,y)
where Max (IR (x, y)), Max (IG (x, y)), and Max (IB (x, y)) respectively represent the maximum values of the red, green, and blue channels in the image It (x, y);
Step (8): convert the tongue image from its RGB space representation into the hue (H), saturation (S), and value (V) space representation:
IH′(x,y) = cos⁻¹[((IR′(x,y) - IG′(x,y)) + (IR′(x,y) - IB′(x,y))) / (2*sqrt((IR′(x,y) - IG′(x,y))^2 + (IR′(x,y) - IB′(x,y))*(IG′(x,y) - IB′(x,y))))]
IS′(x,y) = (Max(IR′(x,y), IG′(x,y), IB′(x,y)) - Min(IR′(x,y), IG′(x,y), IB′(x,y))) / Max(IR′(x,y), IG′(x,y), IB′(x,y))
IV′(x,y)=Max(IR′(x,y),IG′(x,y),IB′(x,y));
Step (9): judge whether the tongue image is underexposed after white-balance correction, and perform lightness equalization on underexposed pictures:
Step (9.1): compute the lightness histogram of the photograph:
The histogram is HistV(t) = Num(IV′(x,y) = t), t = 0~255; the t-th component of the histogram indicates the number of points with value t in the lightness map IV′(x,y);
Step (9.2): find the peak PeakV of the lightness histogram:
PeakV = Max(HistV(t)), t = 0~255;
Step (9.3): judge whether PeakV is less than the threshold thresholdV,
where thresholdV is a preset value of 100; if PeakV ≥ thresholdV, IV′(x,y) is not processed; otherwise, proceed to the next step;
Step (9.4): perform histogram equalization on the image IV′(x,y), obtaining IV′(x,y) = Σ(k=1~IV′(x,y)) HistV(k), where k is a counter over lightness values;
Step (10): correct the image in HSV space according to the adjustment parameters h, s, and v obtained in step (4):
IHd(x,y)=IH′(x,y)-h
ISd(x,y)=IS′(x,y)*s;
IVd(x,y)=IV′(x,y)-v
step (11), converting the image in the HSV space into the RGB space:
step (11.1), four temporary variables f, aa, bb, cc are set to aid in transformation:
where f = IHd - floor(IHd); the function floor(IHd) takes the largest integer not exceeding IHd, so f denotes the fractional part of IHd,
aa=IVd*(1-ISd)
bb=IVd*(1-(ISd*f))
cc=IVd*(1-(ISd*(1-f)))
Step (11.2): determine the values of IRd, IGd, IBd according to the range of IHd:
If IHd ∈ [0, π/6), then IRd = IVd, IGd = cc, IBd = aa
If IHd ∈ [π/6, π/3), then IRd = bb, IGd = IVd, IBd = aa
If IHd ∈ [π/3, π/2), then IRd = aa, IGd = IVd, IBd = cc
If IHd ∈ [π/2, 2π/3), then IRd = aa, IGd = bb, IBd = IVd
If IHd ∈ [2π/3, 5π/6), then IRd = cc, IGd = aa, IBd = IVd
If IHd ∈ [5π/6, 2π), then IRd = IVd, IGd = aa, IBd = bb
Step (12): adjust the image using the curve correction value gamma:
IRd′(x,y)=IRd(x,y)^gamma
IGd′(x,y)=IGd(x,y)^gamma
IBd′(x,y)=IBd(x,y)^gamma
The resulting adjusted image Itd(x,y) = [IRd′(x,y), IGd′(x,y), IBd′(x,y)] is the adjusted tongue image.
Results of use show that, on visual inspection by physicians, the adjusted image reflects the true color of the tongue more accurately than the unadjusted image, and the adjusted images have been accepted by physicians.
Drawings
FIG. 1: an algorithm flow chart;
FIG. 2: a photograph of the physical Macbeth color card;
FIG. 3: a schematic of the Macbeth color-card patch layout;
FIG. 4: a table of the Macbeth color-card standard values;
FIG. 5: histograms of a photograph: (a) lightness histogram of an underexposed photograph, (b) lightness histogram after histogram equalization;
FIG. 6: a captured tongue photograph;
FIG. 7: the tongue image after white-balance adjustment and lightness equalization;
FIG. 8: the final adjusted tongue image.
Detailed Description
The hardware required by the method consists of a digital camera and a computer. The adjustment algorithm software is implemented in Matlab. The input tongue photograph is taken by a traditional Chinese medicine physician with a digital camera; the patient extends the tongue during shooting so that the physician can conveniently photograph the whole tongue.
The Macbeth color card consists of 24 color patches arranged in four rows and six columns, ordered from left to right and top to bottom, as shown in FIG. 2. The patches are represented in sequence as Mj = [MRj, MGj, MBj], j = 1~24; the ordering is shown schematically in FIG. 3. Here MRj, MGj, MBj denote the values of the red, green, and blue channels of the j-th patch, and the standard values of the 24 patches of the card are listed in FIG. 4.
A Sony Cybershot DSC-W1 digital camera is used to take pictures indoors; strong light interference is avoided during shooting, the forced flash mode is used, and the shooting distance is about 30 centimeters. Ten photographs of the Macbeth color card were taken and entered into the computer.
The white balance of the ten images is adjusted: the brightest color in each image is adjusted to (255, 255, 255), and the other colors are linearly adjusted accordingly. The images are then transformed into HSV space.
The images converted into HSV space are compared with the colors of the standard color card, and the parameters are solved. In our experiments, four parameters were obtained: h = 0.02, s = 5, v = 6, and gamma = 1.1.
A photograph of the tongue is taken and stored in the computer, as shown in FIG. 6. After the white balance of the picture is adjusted, the picture is converted into HSV space. The histogram of the V channel is computed; the image is found to be underexposed, so its lightness is equalized, and the resulting picture is shown in FIG. 7.
The image is then adjusted according to the parameters h = 0.02, s = 5, v = 6, and gamma = 1.1 obtained by the system, and the final adjustment result is shown in FIG. 8.
The method comprises the following specific implementation steps:
step (1) shooting N Macbeth color cards
The shooting conditions are as follows: the indoor photographing method has the advantages that strong light interference is avoided during photographing, and meanwhile, a forced flash lamp mode is used, and the distance is about 30 centimeters. And shooting N Macbeth color cards, wherein N is 10-15. The photograph is entered into a computer.
Step (2) correcting the white balance of the N Macbeth color chart photos
The N photographs stored in the computer are denoted f1(x,y), f2(x,y) … fN(x,y), where each image is fi(x,y) = [Ri(x,y), Gi(x,y), Bi(x,y)], i = 1…N; (x,y) denotes a sampling point of the image, and Ri(x,y), Gi(x,y), Bi(x,y) denote the red, green, and blue values at that point, respectively.
Since pure white in a digital photograph has red, green, and blue values of 255, 255, and 255, it is the brightest color in the image and the maximum of each channel. To correct the brightness color cast of the shot image, the white in the picture must be corrected to pure white before the image colors are corrected, as follows:
Ri′(x,y) = (255/Max(Ri)) * Ri(x,y)
Gi′(x,y) = (255/Max(Gi)) * Gi(x,y)
Bi′(x,y) = (255/Max(Bi)) * Bi(x,y)
where Max(Ri), Max(Gi) and Max(Bi) denote the maxima of the red, green, and blue channels in the image fi(x,y). The corresponding adjusted picture is denoted fi′ = [Ri′(x,y), Gi′(x,y), Bi′(x,y)]. By adjusting the white balance, a series of adjusted images f1′(x,y), f2′(x,y) … fN′(x,y) is obtained.
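As an illustration only (the patent's software is implemented in Matlab; the Python/NumPy function below and its name are assumptions introduced here, not the original code), this per-channel white-balance step can be sketched as:

```python
import numpy as np

def white_balance(img):
    """Scale each channel so that its maximum maps to 255, as in step (2).

    img: H x W x 3 array with channels ordered red, green, blue.
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):                           # 0: red, 1: green, 2: blue
        channel_max = img[..., c].max()          # Max(Ri), Max(Gi), Max(Bi)
        out[..., c] = 255.0 / channel_max * img[..., c]   # assumes the channel is not all zero
    return out
```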
Step (3): convert fi′(x,y), i = 1…N, expressed in RGB (red, green, blue) space, and the color-card standard values Mj = [MRj, MGj, MBj], j = 1~24, into an HSV (hue, saturation, value) space representation
An image fi′(x,y) = [Ri′(x,y), Gi′(x,y), Bi′(x,y)] whose white balance has been corrected is transformed into HSV space by the following formulas.
Hi′(x,y) = cos⁻¹[((Ri′(x,y) - Gi′(x,y)) + (Ri′(x,y) - Bi′(x,y))) / (2*sqrt((Ri′(x,y) - Gi′(x,y))^2 + (Ri′(x,y) - Bi′(x,y))*(Gi′(x,y) - Bi′(x,y))))]
Si′(x,y) = (Max(Ri′(x,y), Gi′(x,y), Bi′(x,y)) - Min(Ri′(x,y), Gi′(x,y), Bi′(x,y))) / Max(Ri′(x,y), Gi′(x,y), Bi′(x,y))
Vi′(x,y)=Max(Ri′(x,y),Gi′(x,y),Bi′(x,y))
where Ri′(x,y), Gi′(x,y), Bi′(x,y) denote the values of the red, green, and blue channels of the i-th image at sampling point (x,y) after white-balance adjustment, and Hi′(x,y), Si′(x,y), Vi′(x,y) denote the values of the hue, saturation, and value channels of the i-th image at sampling point (x,y) after conversion. The HSV representation of the white-balance-corrected image is denoted
fi′(x,y) = [Hi′(x,y), Si′(x,y), Vi′(x,y)]
In addition, the color-card standard values Mj = [MRj, MGj, MBj], j = 1~24, are also converted into HSV space.
MHj = cos⁻¹[((MRj - MGj) + (MRj - MBj)) / (2*sqrt((MRj - MGj)^2 + (MRj - MBj)*(MGj - MBj)))]
MSj = (Max(MRj, MGj, MBj) - Min(MRj, MGj, MBj)) / Max(MRj, MGj, MBj)
MVj=Max(MRj,MGj,MBj)
where MRj, MGj, MBj denote the values of the red, green, and blue channels of the j-th color patch of the card, and MHj, MSj, MVj denote the values of the hue, saturation, and value channels of the j-th patch after conversion. This gives the HSV space expression of the color-card standard values, Mj = [MHj, MSj, MVj], j = 1~24.
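For illustration, a NumPy sketch of this arccos-based RGB-to-HSV conversion is given below. It follows the formulas of step (3) directly; the small epsilon and the clipping of the arccos argument are added numerical safeguards, not part of the patent text:

```python
import numpy as np

def rgb_to_hsv_arccos(r, g, b, eps=1e-12):
    """Convert red/green/blue channels to (H, S, V) using the formulas of step (3)."""
    r, g, b = (np.asarray(c, dtype=np.float64) for c in (r, g, b))
    num = (r - g) + (r - b)
    den = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    h = np.arccos(np.clip(num / den, -1.0, 1.0))   # hue in radians
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    s = (mx - mn) / (mx + eps)                     # saturation
    v = mx                                         # value (lightness)
    return h, s, v
```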
Step (4): solve for each adjustment parameter from the values of the standard Macbeth color card and the color-card photographs:
Step (4.1): obtain the average value of the hue (H) channel of the white-balance-adjusted Macbeth color-card photographs, and compare it with the average hue of the standard color card to obtain the hue adjustment parameter h
Let Hav1, Hav2, … HavN denote the average of the H channel of each adjusted image; the average hue of the N images is Hav = (1/N) Σ(i=1~N) Havi. In addition, the average of the standard hue values of the color card is Hst = (1/24) Σ(j=1~24) MHj. The difference h = Hav - Hst is the hue adjustment parameter.
Step (4.2): compute the average value of the value (lightness, V) channel of the white-balance-adjusted Macbeth color-card photographs, and compare it with the average lightness of the standard color card to obtain the lightness adjustment parameter v
Let Vpav1, Vpav2, … VpavN denote the average of the V channel of each adjusted image; the average lightness of the N images is Vav = (1/N) Σ(i=1~N) Vpavi. In addition, the average of the standard lightness values of the color card is Vst = (1/24) Σ(j=1~24) MVj. The difference v = Vav - Vst is the lightness adjustment parameter.
Step (4.3): compute the average value of the saturation (S) channel of the white-balance-adjusted Macbeth color-card photographs, compare it with the standard saturations of the color card, and obtain the saturation adjustment parameter s by fitting
For each white-balance-adjusted photograph fi′(x,y), the 24 rectangular color patches are extracted by position as shown in FIG. 3, and the saturation within each patch is determined as fSi1, fSi2 … fSi24. After the saturations of the 24 patches are obtained for all collected photographs, the average of each patch over the N pictures is computed:
Savj = (1/N) Σ(i=1~N) fSij.
The computed saturation means Sav1, Sav2 … Sav24 and the color-card standard saturations MS1, MS2 … MS24 are fitted by the least-squares criterion; specifically, the parameter s in MSj = s*Savj is chosen so that δ = sqrt(Σ(j=1~24) (s*Savj - MSj)^2) is minimal.
s is then calculated as: s = Σ(j=1~24) Savj*MSj / Σ(j=1~24) Savj^2
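The closed-form expression above is the standard least-squares solution of the one-parameter model MSj ≈ s*Savj: setting the derivative of δ^2 with respect to s to zero gives Σ Savj*(s*Savj - MSj) = 0, hence the ratio shown. As an illustrative check with made-up numbers (not values from the patent), it agrees with a generic least-squares solver:

```python
import numpy as np

Sav = np.array([0.35, 0.52, 0.48, 0.61])   # hypothetical patch saturation means
MS = np.array([0.40, 0.55, 0.50, 0.66])    # hypothetical card standard saturations

s_closed_form = np.sum(Sav * MS) / np.sum(Sav ** 2)
s_lstsq = np.linalg.lstsq(Sav[:, None], MS, rcond=None)[0][0]
assert np.isclose(s_closed_form, s_lstsq)  # both give the same fitted slope
```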
Step (5): compute the average value of the value (lightness, V) channel of the white-balance-adjusted color-card photographs, compare it with the standard lightness of the color card, and obtain the curve correction value gamma by fitting
gamma is derived from the response curve of a CRT (display/television), i.e. its brightness is non-linear with respect to the input voltage;
Gamma correction is applied as a processing step on the image; since a change in gamma changes the lightness, the value of gamma is obtained from the lightness (V) channel:
For each white-balance-adjusted photograph fi′(x,y), the 24 rectangular color patches are extracted by position, and the lightness within each patch is obtained as fVi1, fVi2 … fVi24. After the lightness of the 24 patches is obtained for all collected pictures, the average of each patch over the N pictures is computed:
Vbavj = (1/N) Σ(i=1~N) fVij.
The obtained lightness means Vbav1, Vbav2 … Vbav24 and the color-card standard lightness values MV1, MV2 … MV24 are fitted by the least-squares criterion; specifically, the parameter gamma in MVj = Vbavj^gamma is chosen so that σ = sqrt(Σ(j=1~24) (Vbavj^gamma - MVj)^2) is minimal.
gamma is then calculated as: gamma = Σ(j=1~24) ln(Vbavj)*ln(MVj) / Σ(j=1~24) ln(Vbavj)^2.
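Likewise, taking logarithms of MVj = Vbavj^gamma gives ln(MVj) ≈ gamma*ln(Vbavj), and the expression above is the least-squares slope of that linearized model. For illustration, the whole parameter-fitting stage of steps (4)-(5) could be sketched as follows (a Python/NumPy version under the same caveat as before; the function name and argument layout are assumptions, not the original Matlab code):

```python
import numpy as np

def fit_adjustment_parameters(Hav, Vpav, Sav, Vbav, MH, MS, MV):
    """Return (h, s, v, gamma) as defined in steps (4.1)-(4.3) and (5).

    Hav, Vpav: per-photo means of the H and V channels (length N).
    Sav, Vbav: per-patch means of S and V over the N photos (length 24).
    MH, MS, MV: Macbeth card standard values converted to HSV (length 24).
    """
    h = np.mean(Hav) - np.mean(MH)                  # h = Hav - Hst
    v = np.mean(Vpav) - np.mean(MV)                 # v = Vav - Vst
    s = np.sum(Sav * MS) / np.sum(Sav ** 2)         # least-squares slope of MSj = s*Savj
    gamma = np.sum(np.log(Vbav) * np.log(MV)) / np.sum(np.log(Vbav) ** 2)
    return h, s, v, gamma
```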
Step (6): take a tongue photograph with the digital camera and store it in the computer.
The image is expressed as It(x,y) = [IR(x,y), IG(x,y), IB(x,y)], where (x,y) denotes a sampling point of the image and IR(x,y), IG(x,y), IB(x,y) denote the red, green, and blue values at that point, respectively.
Step (7) correcting the white balance of the tongue picture
The white in the tongue photograph is corrected to pure white as follows:
It′(x,y)=[IR′(x,y),IG′(x,y),IB′(x,y)]
IR′(x,y) = (255/Max(IR)) * IR(x,y)
IG′(x,y) = (255/Max(IG)) * IG(x,y)
IB′(x,y) = (255/Max(IB)) * IB(x,y)
where Max(IR), Max(IG), and Max(IB) denote the maximum values of the red, green, and blue channels in the image It(x,y). By adjusting the white balance, the adjusted tongue image It′(x,y) is obtained.
Step (8): transform It′(x,y), represented in RGB space, into an HSV (hue, saturation, value) space representation
IH′(x,y) = cos⁻¹[((IR′(x,y) - IG′(x,y)) + (IR′(x,y) - IB′(x,y))) / (2*sqrt((IR′(x,y) - IG′(x,y))^2 + (IR′(x,y) - IB′(x,y))*(IG′(x,y) - IB′(x,y))))]
IS′(x,y) = (Max(IR′(x,y), IG′(x,y), IB′(x,y)) - Min(IR′(x,y), IG′(x,y), IB′(x,y))) / Max(IR′(x,y), IG′(x,y), IB′(x,y))
IV′(x,y)=Max(IR′(x,y),IG′(x,y),IB′(x,y))
Step (9) carrying out lightness balance on the underexposed photos
After the white balance is corrected, the distribution of the lightness histogram is analyzed; if the position at which the peak appears is smaller than a predetermined threshold, the tongue image is marked, lightness equalization is performed, and the next adjustment then proceeds.
The histogram of the lightness channel IV′(x,y) of the image is computed. The lightness channel takes values 0~255, and its histogram is HistV(t), t = 0~255.
The histogram is defined as HistV(t) = Num(IV′(x,y) = t); that is, the t-th component of the histogram represents the number of points with value t in the lightness map IV′(x,y). FIG. 5(a) shows the histogram of one image, with PeakV = Max(HistV(t)), t = 0~255. The abscissa is the gray value, from 0 to 255, and the ordinate is the number of pixels in the image whose gray value equals the abscissa value.
If PeakV < thresholdV, histogram equalization is performed on the image IV′(x,y). The equalization is expressed as IV′(x,y) = Σ(k=1~IV′(x,y)) HistV(k); that is, if the gray level of a pixel (x,y) before equalization is IV′(x,y), then the lightness of this point after equalization is the sum of the histogram from gray value 1 to gray value IV′(x,y). In the equation, k represents a lightness value; FIG. 5(b) shows the histogram after histogram equalization.
If PeakV ≧ threshold V, no treatment is performed on IV' (x, y).
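A possible NumPy sketch of this exposure check and equalization is given below. Following the description above, PeakV is read as the lightness value at which the histogram peaks ("the position where the peak value appears"), and the cumulative sum is rescaled to the 0~255 range; the rescaling and the function name are assumptions added for illustration:

```python
import numpy as np

def equalize_if_underexposed(V, threshold=100):
    """V: lightness channel with integer values in 0..255 (step (9))."""
    V = np.asarray(V, dtype=np.int64)
    hist = np.bincount(V.ravel(), minlength=256)     # HistV(t), t = 0~255
    peak_position = int(np.argmax(hist))             # lightness value where the peak occurs
    if peak_position >= threshold:                   # bright enough: leave unchanged
        return V
    cdf = np.cumsum(hist).astype(np.float64)         # cumulative histogram
    cdf = cdf / cdf[-1] * 255.0                      # rescale to 0~255 (an added assumption)
    return np.rint(cdf[V]).astype(np.int64)          # equalized lightness map
```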
Step (10): correct the image in HSV space according to the set parameters h, s, and v:
IHd(x,y)=IH′(x,y)-h
ISd(x,y)=IS′(x,y)*s
IVd(x,y)=IV′(x,y)-v
Step (11): convert the image in HSV space into RGB space, after which the gamma coefficient is applied
The procedure for transferring HSV to RGB space is:
four temporary variables f, aa, bb, cc are set to assist in the operation, where
f = IHd - floor(IHd); the function floor(IHd) takes the largest integer not exceeding IHd, so f denotes the fractional part of IHd
aa=IVd*(1-ISd)
bb=IVd*(1-(ISd*f))
cc=IVd*(1-(ISd*(1-f)))
The values of IRd, IGd, IBd are determined according to the range of IHd:
If IHd ∈ [0, π/6), then IRd = IVd, IGd = cc, IBd = aa
If IHd ∈ [π/6, π/3), then IRd = bb, IGd = IVd, IBd = aa
If IHd ∈ [π/3, π/2), then IRd = aa, IGd = IVd, IBd = cc
If IHd ∈ [π/2, 2π/3), then IRd = aa, IGd = bb, IBd = IVd
If IHd ∈ [2π/3, 5π/6), then IRd = cc, IGd = aa, IBd = IVd
If IHd ∈ [5π/6, 2π), then IRd = IVd, IGd = aa, IBd = bb
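For illustration, the mapping above can be transcribed directly into code. The sketch below reproduces the six hue intervals and the temporaries f, aa, bb, cc exactly as listed; it is a transcription of the rule in this step, not the textbook HSV-to-RGB conversion, and the function name is an assumption:

```python
import math

def hsv_to_rgb_sectors(IHd, ISd, IVd):
    """Map one (IHd, ISd, IVd) triple to (IRd, IGd, IBd) per step (11)."""
    f = IHd - math.floor(IHd)              # fractional part of the adjusted hue
    aa = IVd * (1 - ISd)
    bb = IVd * (1 - ISd * f)
    cc = IVd * (1 - ISd * (1 - f))
    pi = math.pi
    if 0 <= IHd < pi / 6:
        return IVd, cc, aa
    elif IHd < pi / 3:
        return bb, IVd, aa
    elif IHd < pi / 2:
        return aa, IVd, cc
    elif IHd < 2 * pi / 3:
        return aa, bb, IVd
    elif IHd < 5 * pi / 6:
        return cc, aa, IVd
    else:                                  # [5*pi/6, 2*pi)
        return IVd, aa, bb
```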
After the image is transferred into the RGB space, the image in the RGB space is adjusted by the curve correction value gamma.
IRd′(x,y)=IRd(x,y)^gamma
IGd′(x,y)=IGd(x,y)^gamma
IBd′(x,y)=IBd(x,y)^gamma
The resulting adjusted image Itd(x,y) = [IRd′(x,y), IGd′(x,y), IBd′(x,y)] is the adjusted tongue image.
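As a worked example (using the gamma = 1.1 obtained in the embodiment above, and assuming the channel values are first normalized to the range 0~1 so that the power curve stays in range, an assumption not stated in the text): a mid-tone channel value of 0.5 becomes 0.5^1.1 ≈ 0.466 after curve correction, so mid-tones are slightly darkened while the endpoints 0 and 1 remain unchanged.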

Claims (1)

1. A digital tongue image color cast correction method, characterized by sequentially comprising the following steps:
Step (1): under forced-flash conditions, N photographs of the Macbeth color card are shot with a digital camera, where N = 10~15, and the photographs are input into a computer;
Step (2): the computer corrects the white balance of the N Macbeth color-card photographs:
Let the N Macbeth color-card photographs be denoted f1(x,y), f2(x,y) … fN(x,y);
each image is represented as fi(x,y) = [Ri(x,y), Gi(x,y), Bi(x,y)], i = 1…N;
Ri(x,y), Gi(x,y), Bi(x,y) denote the red, green, and blue values at sampling point (x,y), respectively;
the corresponding picture of each photo after white balance correction is expressed as:
fi′=[Ri′(x,y),Gi′(x,y),Bi′(x,y)],
Ri′(x,y) = (255/Max(Ri(x,y))) * Ri(x,y)
wherein Gi′(x,y) = (255/Max(Gi(x,y))) * Gi(x,y)
Bi′(x,y) = (255/Max(Bi(x,y))) * Bi(x,y)
wherein Max(Ri(x,y)), Max(Gi(x,y)) and Max(Bi(x,y)) denote, respectively, the maximum values of the red, green, and blue channels in the image fi(x,y);
Step (3): convert fi′(x,y), expressed in RGB space, and the Macbeth color-card colors into an HSV space representation, where H, S, V denote hue, saturation, and value, respectively:
Step (3.1): convert fi′(x,y) = [Ri′(x,y), Gi′(x,y), Bi′(x,y)] into HSV space:
Hi′(x,y) = cos⁻¹[((Ri′(x,y) - Gi′(x,y)) + (Ri′(x,y) - Bi′(x,y))) / (2*sqrt((Ri′(x,y) - Gi′(x,y))^2 + (Ri′(x,y) - Bi′(x,y))*(Gi′(x,y) - Bi′(x,y))))]
Si′(x,y) = (Max(Ri′(x,y), Gi′(x,y), Bi′(x,y)) - Min(Ri′(x,y), Gi′(x,y), Bi′(x,y))) / Max(Ri′(x,y), Gi′(x,y), Bi′(x,y))
Vi′(x,y)=Max(Ri′(x,y),Gi′(x,y),Bi′(x,y))
where $H_i'(x,y)$, $S_i'(x,y)$ and $V_i'(x,y)$ denote the values of the hue, saturation and lightness channels of the i-th image at sampling point $(x,y)$ after conversion, and $R_i'(x,y)$, $G_i'(x,y)$ and $B_i'(x,y)$ denote the values of the red, green and blue channels of the i-th image at sampling point $(x,y)$ after white-balance adjustment; the white-balance-corrected image expressed in HSV channels is then
$$f_i'(x,y) = [H_i'(x,y), S_i'(x,y), V_i'(x,y)]$$
Step (3.2): convert the standard values $M_j$ of the Macbeth color card into HSV space:
$$M_j = [MR_j, MG_j, MB_j], \quad j = 1 \sim 24$$
$$MH_j = \cos^{-1}\!\left[\frac{(MR_j - MG_j) + (MR_j - MB_j)}{2\sqrt{(MR_j - MG_j)^2 + (MR_j - MB_j)\,(MG_j - MB_j)}}\right]$$
$$MS_j = \frac{\mathrm{Max}(MR_j, MG_j, MB_j) - \mathrm{Min}(MR_j, MG_j, MB_j)}{\mathrm{Max}(MR_j, MG_j, MB_j)}$$
$$MV_j = \mathrm{Max}(MR_j, MG_j, MB_j)$$
wherein $MR_j$, $MG_j$ and $MB_j$ denote the values of the red, green and blue channels of the j-th color block of the color card, and $MH_j$, $MS_j$ and $MV_j$ denote the values of the hue, saturation and lightness channels of the j-th color block after conversion; this gives the HSV-space expression of the color-card standard values: $M_j = [MH_j, MS_j, MV_j]$, $j = 1 \sim 24$;
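The following Python sketch applies the conversion of steps (3.1)/(3.2) to a single RGB triple. It is illustrative only: the clipping of the arccos argument and the completion of the hue to the full $[0, 2\pi)$ range when $B' > G'$ are conventional safeguards assumed here, not written out in the text above.

```python
import numpy as np

def rgb_to_hsv_triple(r, g, b):
    """HSV conversion following the formulas above:
    H from the arccos expression, S = (Max - Min) / Max, V = Max."""
    r, g, b = float(r), float(g), float(b)
    num = (r - g) + (r - b)
    den = 2.0 * np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = float(np.arccos(np.clip(num / den, -1.0, 1.0))) if den > 0 else 0.0
    if b > g:                 # conventional completion to [0, 2*pi);
        h = 2.0 * np.pi - h   # assumed, not stated in the text
    mx, mn = max(r, g, b), min(r, g, b)
    s = (mx - mn) / mx if mx > 0 else 0.0
    return h, s, mx           # (H, S, V)
```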
Step (4): from the standard Macbeth color-chart values and the color-chart photos, compute each adjustment parameter as follows:
Step (4.1): compute the average value of the hue (H) channel of the white-balance-adjusted Macbeth color-chart photos and compare it with the average hue of the standard color chart to obtain the hue adjustment parameter h:

The average hue of the N Macbeth photos is
$$Hav = \frac{1}{N}\sum_{i=1}^{N} Hav_i,$$
where $Hav_i$ is the average value of the hue (H) channel of the i-th image after white-balance adjustment;

the standard hue average of the Macbeth color chart is
$$Hst = \frac{1}{24}\sum_{j=1}^{24} MH_j;$$

the hue adjustment parameter is $h = Hav - Hst$;
Step (4.2): compute the average value of the lightness (V) channel of the white-balance-adjusted Macbeth color-chart photos and compare it with the average lightness of the standard color chart to obtain the lightness adjustment parameter v:

The average lightness of the N Macbeth photos is
$$Vav = \frac{1}{N}\sum_{i=1}^{N} Vpav_i,$$
where $Vpav_i$ is the average value of the lightness (V) channel of the i-th image after white-balance adjustment;

the standard lightness average of the Macbeth color chart is
$$Vst = \frac{1}{24}\sum_{j=1}^{24} MV_j;$$

the lightness adjustment parameter is $v = Vav - Vst$;
Step (4.3): compute the average value of the saturation (S) channel of the white-balance-adjusted Macbeth color-chart photos, compare it with the standard saturation values of the color chart, and obtain the saturation adjustment parameter s by fitting:

Over the N Macbeth color-chart photos, the average saturation of color block j is
$$Sav_j = \frac{1}{N}\sum_{i=1}^{N} fS_{ij},$$
where $fS_{ij}$ is the saturation of color block j in the i-th photo, $i = 1 \sim N$, $j = 1 \sim 24$;

the standard saturation of each color block in the color card is $MS_1, MS_2, \ldots, MS_{24}$;

the saturation adjustment parameter s is
$$s = \frac{\sum_{j=1}^{24} Sav_j \cdot MS_j}{\sum_{j=1}^{24} Sav_j^2};$$
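A compact sketch of how the three adjustment parameters of step (4) could be computed with NumPy is given below; the function name and argument names are illustrative assumptions, with the per-photo and per-patch averages assumed to be precomputed.

```python
import numpy as np

def adjustment_parameters(Hav_i, Vpav_i, Sav_j, MH_j, MV_j, MS_j):
    """Hav_i, Vpav_i: per-photo hue / lightness means (N values each);
    Sav_j: per-patch saturation means over the N photos (24 values);
    MH_j, MV_j, MS_j: Macbeth standard hue / lightness / saturation (24 values)."""
    Hav_i = np.asarray(Hav_i, dtype=float)
    Vpav_i = np.asarray(Vpav_i, dtype=float)
    Sav_j = np.asarray(Sav_j, dtype=float)
    MH_j, MV_j, MS_j = (np.asarray(a, dtype=float) for a in (MH_j, MV_j, MS_j))
    h = Hav_i.mean() - MH_j.mean()                     # step (4.1): h = Hav - Hst
    v = Vpav_i.mean() - MV_j.mean()                    # step (4.2): v = Vav - Vst
    s = np.sum(Sav_j * MS_j) / np.sum(Sav_j ** 2)      # step (4.3): fitted gain
    return h, s, v
```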
Step (5): compute the curve-correction value gamma:

Over the N Macbeth color-chart photos, the average lightness of color block j is
$$Vbav_j = \frac{1}{N}\sum_{i=1}^{N} fV_{ij},$$
where $fV_{ij}$ is the lightness of color block j in the i-th photo, $i = 1 \sim N$, $j = 1 \sim 24$;

the average lightness values of the color blocks in the color-card photos are therefore $Vbav_1, Vbav_2, \ldots, Vbav_{24}$;

the curve-correction value gamma is
$$gamma = \frac{\sum_{j=1}^{24} \ln(Vbav_j)\cdot\ln(MV_j)}{\sum_{j=1}^{24} \ln(Vbav_j)^2};$$
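The gamma of step (5) is the least-squares slope, through the origin, of $\ln(MV_j)$ against $\ln(Vbav_j)$; a sketch with hypothetical array names follows. All lightness values are assumed strictly positive, otherwise the logarithm is undefined.

```python
import numpy as np

def fit_gamma(Vbav_j, MV_j):
    """Least-squares fit of MV_j ≈ Vbav_j ** gamma in the log domain:
    gamma = sum(ln(Vbav_j) * ln(MV_j)) / sum(ln(Vbav_j) ** 2)."""
    x = np.log(np.asarray(Vbav_j, dtype=float))
    y = np.log(np.asarray(MV_j, dtype=float))
    return float(np.sum(x * y) / np.sum(x ** 2))
```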
Step (6): take a tongue picture with a digital camera to obtain an image It(x,y) = [IR(x,y), IG(x,y), IB(x,y)], where IR(x,y), IG(x,y) and IB(x,y) denote the red, green and blue values at the image sampling point (x,y);

Step (7): correct the white in the tongue picture It(x,y) to pure white, obtaining the corrected tongue picture It'(x,y):
It′(x,y)=[IR′(x,y),IG′(x,y),IB′(x,y)]
$$IR'(x,y) = \frac{255}{\mathrm{Max}(IR(x,y))}\, IR(x,y)$$
$$IG'(x,y) = \frac{255}{\mathrm{Max}(IG(x,y))}\, IG(x,y)$$
$$IB'(x,y) = \frac{255}{\mathrm{Max}(IB(x,y))}\, IB(x,y)$$
where Max (IR (x, y)), Max (IG (x, y)), and Max (IB (x, y)) respectively represent the maximum values of the red, green, and blue channels in the image It (x, y);
Step (8): convert the tongue picture from its RGB-space representation to a representation in the hue (H), saturation (S), lightness (V) space:
$$IH'(x,y) = \cos^{-1}\!\left[\frac{(IR'(x,y)-IG'(x,y)) + (IR'(x,y)-IB'(x,y))}{2\sqrt{(IR'(x,y)-IG'(x,y))^2 + (IR'(x,y)-IB'(x,y))\,(IG'(x,y)-IB'(x,y))}}\right]$$
$$IS'(x,y) = \frac{\mathrm{Max}(IR'(x,y),IG'(x,y),IB'(x,y)) - \mathrm{Min}(IR'(x,y),IG'(x,y),IB'(x,y))}{\mathrm{Max}(IR'(x,y),IG'(x,y),IB'(x,y))}$$
$$IV'(x,y) = \mathrm{Max}(IR'(x,y),IG'(x,y),IB'(x,y));$$
Step (9): judge, after white-balance correction, whether the tongue picture is properly exposed, and apply brightness equalization to underexposed pictures:
Step (9.1): compute the lightness histogram of the photo:

the histogram is HistV(t) = Num(IV'(x,y) = t), t = 0 ~ 255, where the t-th component of the histogram is the number of points in the lightness map IV'(x,y) whose value equals t;
Step (9.2): find the peak PeakV of the lightness histogram:
PeakV=Max(HistV(t)),t=0~255;
Step (9.3): judge whether PeakV is less than the threshold ThresholdV, where ThresholdV is a set value of 100; if PeakV ≥ ThresholdV, IV'(x,y) is left unprocessed; otherwise, proceed to the next step;
Step (9.4): apply histogram equalization to the image IV'(x,y), obtaining
$$IV'(x,y) = \sum_{k=1}^{IV'(x,y)} \mathrm{HistV}(k),$$
where k is a counter over lightness values;
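Step (9) can be sketched as below. Two points are assumptions rather than statements of the text: PeakV is interpreted as the lightness value at which the histogram peaks (comparing a raw pixel count against a lightness threshold of 100 would not be meaningful), and the cumulative sum is rescaled back to the 0–255 range, which the text leaves implicit.

```python
import numpy as np

def equalize_if_underexposed(IV, threshold_v=100):
    """IV: lightness channel as integers in 0..255.  Build HistV, locate the
    lightness value at which it peaks, and equalize only dark images."""
    IV = np.asarray(IV, dtype=np.int64)
    hist = np.bincount(IV.ravel(), minlength=256)     # HistV(t), t = 0..255
    peak_v = int(np.argmax(hist))                     # lightness at the histogram peak (assumption)
    if peak_v >= threshold_v:                         # step (9.3): leave unprocessed
        return IV
    cdf = np.cumsum(hist)                             # step (9.4): cumulative histogram
    out = cdf[IV]
    # rescaling back to 0..255 is an assumption; the text leaves it implicit
    return np.round(out / out.max() * 255.0).astype(np.int64)
```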
Step (10): correct the image in HSV space with the adjustment parameters h, s and v obtained in step (4):
IHd(x,y)=IH′(x,y)-h
ISd(x,y)=IS′(x,y)*s;
IVd(x,y)=IV′(x,y)-v
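Applied to whole arrays, the step (10) correction is one operation per channel; in this sketch the hue wrap-around and the saturation clipping are added safeguards assumed for illustration, not stated in the text.

```python
import numpy as np

def apply_hsv_adjustment(IH, IS, IV, h, s, v):
    """Hue shift, saturation gain and lightness shift of step (10)."""
    IHd = np.mod(np.asarray(IH, dtype=float) - h, 2.0 * np.pi)  # wrap-around assumed
    ISd = np.clip(np.asarray(IS, dtype=float) * s, 0.0, 1.0)    # clipping assumed
    IVd = np.asarray(IV, dtype=float) - v
    return IHd, ISd, IVd
```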
step (11), converting the image in the HSV space into the RGB space:
Step (11.1): set four temporary variables f, aa, bb and cc to aid the transformation:

where f = IHd − floor(IHd); the function floor(IHd) is the largest integer not greater than IHd, so f is the fractional part of IHd,
$$aa = IV_d \cdot (1 - IS_d)$$
$$bb = IV_d \cdot (1 - IS_d \cdot f)$$
$$cc = IV_d \cdot (1 - IS_d \cdot (1 - f))$$
Step (11.2): determine the values of IRd, IGd and IBd according to the range of IHd:
If $IH_d \in [0, \frac{\pi}{6})$, then $IR_d = IV_d$, $IG_d = cc$, $IB_d = aa$;

If $IH_d \in [\frac{\pi}{6}, \frac{\pi}{3})$, then $IR_d = bb$, $IG_d = IV_d$, $IB_d = aa$;

If $IH_d \in [\frac{\pi}{3}, \frac{\pi}{2})$, then $IR_d = aa$, $IG_d = IV_d$, $IB_d = cc$;

If $IH_d \in [\frac{\pi}{2}, \frac{2\pi}{3})$, then $IR_d = aa$, $IG_d = bb$, $IB_d = IV_d$;

If $IH_d \in [\frac{2\pi}{3}, \frac{5\pi}{6})$, then $IR_d = cc$, $IG_d = aa$, $IB_d = IV_d$;

If $IH_d \in [\frac{5\pi}{6}, 2\pi)$, then $IR_d = IV_d$, $IG_d = aa$, $IB_d = bb$.
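A sketch of the HSV-to-RGB reconstruction follows. It uses the same (R, G, B) assignment table as step (11.2), but note the assumptions: the hue circle is split into six conventional 60-degree ($\pi/3$) sectors and f is taken as the position within the sector, whereas the boundaries printed above are multiples of $\pi/6$ and f is defined literally as the fractional part of IHd; the sketch therefore shows the conventional conversion that step (11) resembles, not the claims verbatim.

```python
import numpy as np

def hsv_to_rgb_sketch(H, S, V):
    """Sector-based HSV -> RGB reconstruction; H in radians, S in [0, 1],
    V on the working lightness scale."""
    sector = (H % (2.0 * np.pi)) / (np.pi / 3.0)   # hue measured in sectors (assumption)
    i = int(np.floor(sector)) % 6                  # sector index 0..5
    f = sector - np.floor(sector)                  # fraction within the sector (assumption)
    aa = V * (1.0 - S)
    bb = V * (1.0 - S * f)
    cc = V * (1.0 - S * (1.0 - f))
    table = [(V, cc, aa), (bb, V, aa), (aa, V, cc),
             (aa, bb, V), (cc, aa, V), (V, aa, bb)]
    return table[i]                                # (IRd, IGd, IBd)
```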
Step (12): adjust the image with the curve-correction value gamma:
$$IR_d'(x,y) = IR_d(x,y)^{gamma}$$
$$IG_d'(x,y) = IG_d(x,y)^{gamma}$$
$$IB_d'(x,y) = IB_d(x,y)^{gamma}$$
The resulting adjusted image $It_d(x,y) = [IR_d'(x,y), IG_d'(x,y), IB_d'(x,y)]$ is the corrected tongue picture.
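The step (12) curve correction is a per-channel power law. In this sketch the channel is assumed to be normalized to [0, 1] before exponentiation and rescaled to 0..255 afterwards, which the text leaves implicit.

```python
import numpy as np

def apply_gamma(channel, gamma):
    """Per-channel power-law correction I' = I ** gamma of step (12)."""
    x = np.clip(np.asarray(channel, dtype=float) / 255.0, 0.0, 1.0)
    return np.round(x ** gamma * 255.0).astype(np.uint8)
```

Applied to IRd, IGd and IBd in turn, this yields the final adjusted tongue picture described above.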
CNB2006101138705A 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast Expired - Fee Related CN100412906C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101138705A CN100412906C (en) 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101138705A CN100412906C (en) 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast

Publications (2)

Publication Number Publication Date
CN1945627A CN1945627A (en) 2007-04-11
CN100412906C true CN100412906C (en) 2008-08-20

Family

ID=38045022

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101138705A Expired - Fee Related CN100412906C (en) 2006-10-20 2006-10-20 Method for correcting digital tongue picture colour cast

Country Status (1)

Country Link
CN (1) CN100412906C (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5386211B2 (en) * 2008-06-23 2014-01-15 株式会社ジャパンディスプレイ Image display device and driving method thereof, and image display device assembly and driving method thereof
CN101947101B (en) * 2010-07-15 2012-04-25 哈尔滨工业大学 Method for making tongue colour reproduction colour card
CN102419861A (en) * 2010-09-27 2012-04-18 上海中医药大学 Color image correcting method based on topology subdivision of uniform color space
CN102693671B (en) * 2011-03-23 2014-03-26 长庚医疗财团法人林口长庚纪念医院 Manufacturing method of tongue picture classification card
JP5547243B2 (en) * 2012-06-28 2014-07-09 シャープ株式会社 Image processing apparatus, program, and recording medium
CN103106669B (en) * 2013-01-02 2015-10-28 北京工业大学 Chinese medicine tongue picture is as environmental suitability color reproduction method
CN103248793B (en) * 2013-05-14 2016-08-10 旭曜科技股份有限公司 The colour of skin optimization method of gamut conversion system and device
CN103308517B (en) * 2013-05-21 2015-09-30 谢绍鹏 Chinese medicine color objectifies method and Chinese medicine image acquiring device
CN104297469B (en) * 2013-07-15 2017-02-08 艾博生物医药(杭州)有限公司 Immune reading device and calibration method of the same
CN104700363A (en) * 2013-12-06 2015-06-10 富士通株式会社 Removal method and device of reflecting areas in tongue image
CN103957396A (en) * 2014-05-14 2014-07-30 姚杰 Image processing method and device used when tongue diagnosis is conducted with intelligent device and equipment
CN104572538B (en) * 2014-12-31 2017-08-25 北京工业大学 A kind of Chinese medicine tongue image color correction method based on K PLS regression models
CN106127675B (en) * 2016-06-20 2019-05-24 深圳市利众信息科技有限公司 Colour consistency method and apparatus
CN107689031B (en) * 2016-08-03 2021-05-28 天津慧医谷科技有限公司 Color restoration method based on illumination compensation in tongue picture analysis
CN107833620A (en) * 2017-11-28 2018-03-23 北京羽医甘蓝信息技术有限公司 Image processing method and image processing apparatus
CN108320272A (en) * 2018-02-05 2018-07-24 电子科技大学 The method that image delusters
CN108742519A (en) * 2018-04-02 2018-11-06 上海中医药大学附属岳阳中西医结合医院 Machine vision three-dimensional reconstruction technique skin ulcer surface of a wound intelligent auxiliary diagnosis system
CN109523485B (en) * 2018-11-19 2021-03-02 Oppo广东移动通信有限公司 Image color correction method, device, storage medium and mobile terminal
CN109815860A (en) * 2019-01-10 2019-05-28 中国科学院苏州生物医学工程技术研究所 TCM tongue diagnosis image color correction method, electronic equipment, storage medium
CN110009588B (en) * 2019-04-09 2022-12-27 成都品果科技有限公司 Portrait image color enhancement method and device
CN114073494A (en) * 2020-08-19 2022-02-22 京东方科技集团股份有限公司 Leukocyte detection method, system, electronic device, and computer-readable medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050270383A1 (en) * 2004-06-02 2005-12-08 Aiptek International Inc. Method for detecting and processing dominant color with automatic white balance
WO2006092559A1 (en) * 2005-03-04 2006-09-08 Chrometrics Limited Reflectance spectra estimation and colour space conversion using reference reflectance spectra

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hong-Zhi Zhang, Kuan-Quan Wang, Xue-Song Jin, David Zhang. SVR based color calibration for tongue image. Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Vol. 8, 2005. *

Also Published As

Publication number Publication date
CN1945627A (en) 2007-04-11

Similar Documents

Publication Publication Date Title
CN100412906C (en) Method for correcting digital tongue picture colour cast
US8947549B2 (en) Spectral synthesis for image capturing device processing
CN105959662B (en) Self-adapted white balance method of adjustment and device
CN111292246B (en) Image color correction method, storage medium, and endoscope
CN105933617B (en) A kind of high dynamic range images fusion method for overcoming dynamic problem to influence
EP1592227B1 (en) Apparatus and method for determining an image processing parameter from image data at a user-selected image position
US8861878B2 (en) Image processing method and device for performing grayscale conversion, and image processing program
JP2005117612A (en) Image processing method and apparatus
CN108600723A (en) A kind of color calibration method and evaluation method of panorama camera
CN101841631B (en) Shadow exposure compensation method and image processing device using same
CN110213556B (en) Automatic white balance method and system in monochrome scene, storage medium and terminal
Kao et al. Design considerations of color image processing pipeline for digital cameras
Wang et al. Fast automatic white balancing method by color histogram stretching
US20110205400A1 (en) Method and apparatus for applying tonal correction to images
JP2978615B2 (en) Apparatus and method for adjusting color balance
US20090244316A1 (en) Automatic white balance control
Barnard Computational color constancy: taking theory into practice
CN109451292A (en) Color temp bearing calibration and device
CN117156289A (en) Color style correction method, system, electronic device, storage medium and chip
Kuang et al. A psychophysical study on the influence factors of color preference in photographic color reproduction
CN110726536B (en) Color correction method for color digital reflection microscope
KR101441380B1 (en) Method and apparatus of detecting preferred color and liquid crystal display device using the same
WO2009091500A1 (en) A method for chromatic adaptation of images
Post et al. Fast radiometric compensation for nonlinear projectors
JP4881325B2 (en) Profiling digital image input devices

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080820

Termination date: 20091120