CN113132693A - Color correction method - Google Patents

Color correction method

Info

Publication number
CN113132693A
Authority
CN
China
Prior art keywords
image
color
color block
target
standard color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911424392.3A
Other languages
Chinese (zh)
Other versions
CN113132693B (en)
Inventor
戴骥
吕格莉
戴昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Yunzhijian Information Technology Co ltd
Original Assignee
Changsha Yunzhijian Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Yunzhijian Information Technology Co ltd filed Critical Changsha Yunzhijian Information Technology Co ltd
Priority to CN201911424392.3A priority Critical patent/CN113132693B/en
Publication of CN113132693A publication Critical patent/CN113132693A/en
Application granted granted Critical
Publication of CN113132693B publication Critical patent/CN113132693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/02 Diagnosis, testing or measuring for television systems or their details for colour television signals
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a color correction method, which comprises the following steps: identifying a bar code image with a bar code recognition algorithm to obtain bar code data; according to the configuration scheme obtained from the bar code data, obtaining the position coordinates and sizes of each standard color block, each white check color block and the target, and then locating the position and outline of each standard color block, each white check color block and the target object in the original complete detection area image; based on these, preprocessing each original color image to obtain the image color values of each original standard color block image and each white check color block as the correction color block color values; and obtaining the image color value of the target object. The method solves the problem of color deviation among different cameras with unknown parameters when acquiring images of different target objects, and has the advantages of wide adaptability, simple operation, low computational cost, and low-cost implementation based on general-purpose hardware and software.

Description

Color correction method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a color correction method.
Background
A camera is an electronic device for acquiring images: it converts optical image signals into electrical signals, thereby realizing the acquisition and storage of images. The color of an object seen by human eyes is the true color of the object; the color observed by human eyes is not influenced by ambient light, a property called color constancy. Different cameras have different parameters, so the colors of images of the same object acquired by different cameras deviate from one another in different ways. To realize accurate color acquisition and recognition with cameras of different parameters, a practical color correction method and device are designed to reduce the color deviation among different cameras when target object images are acquired.
At present, mainstream color correction methods either need to know the parameter values of the camera in advance, or need to simultaneously acquire approximately identical images of the target object with different cameras, recognize them through an algorithm, and select enough pixel points at the same positions for analysis; they therefore suffer from limited applicability, difficult operation, long running time, and large resource consumption.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention is directed to a color correction method to solve the above-mentioned problems.
In order to achieve the purpose, the invention provides the following technical scheme:
an existing standard color card for color correction comprises a substrate, a card holding area and a detection area, wherein the card holding area and the detection area are distributed on the substrate; the detection area comprises a product parameter area, a standard color area, a positioning area, a verification area and a sample area; the parameter area is pasted or printed with detection parameters which can be bar codes or codes in other forms; the standard color area comprises a plurality of standard color blocks with different colors; the checking area comprises a plurality of white checking blocks; moreover, each check block is distributed among the standard color blocks in the detection area; the positioning area is used for assisting the image acquisition equipment in carrying out image acquisition and color block positioning on the standard color card; the sample area is provided with a sample positioning quadrilateral or polygonal frame line for placing a target object to be color-collected.
A color correction method comprising the steps of:
step 1, placing a target object in the sample area on a standard color card; using two different cameras, respectively acquiring an image of the standard color card and a complete color image containing the standard color card and the target object;
step 2, the original complete image is a color image; converting the colored original complete image into a binary image, i.e. a black-and-white image;
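A minimal sketch of the binarization in step 2, in Python. The fixed threshold of 128 is an assumed example; the patent does not specify which binarization method is used:

```python
def binarize(gray, threshold=128):
    """Convert a grayscale image (rows of 0-255 intensities) into a
    black-and-white binary image: 1 at or above the threshold, 0 below."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```

In practice an adaptive threshold (e.g. Otsu's method) would be more robust to uneven lighting than a fixed value.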
step 3, judging whether the final four positioning marks can be identified in the complete black-and-white image or not based on the binarization mathematical characteristics of the four positioning marks, if not, abandoning the original complete color image, returning to the step 1, and re-collecting the complete color image comprising the standard color card and the target object; if yes, executing step 4;
step 4, acquiring the identified mark points of each positioning identifier by using a detection analysis module of the detection equipment, and sequentially connecting the mark points of each positioning identifier to form a quadrilateral profile, wherein the internal area of the quadrilateral profile is the candidate standard color card; thereby obtaining the shape and the size of the candidate standard color card detection area;
step 5, reading the pre-stored real shape and real size of the real detection area of a standard color card of the same specification; then judging whether the deviation in shape and size between the candidate standard color card detection area and the real standard color card detection area is within the design threshold; if not, returning to step 1 and re-acquiring a complete color image containing the standard color card and the target object; if so, the shapes and sizes of the candidate and real standard color card detection areas closely match, and step 6 is executed;
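The deviation test of step 5 can be sketched as follows; the function name, the width/height representation and the 5% relative threshold are illustrative assumptions, since the patent speaks only of a "design threshold":

```python
def within_tolerance(candidate_wh, reference_wh, rel_tol=0.05):
    """Accept the candidate detection area only if its width and height
    each deviate from the pre-stored reference by at most rel_tol."""
    return all(abs(c - r) <= rel_tol * r
               for c, r in zip(candidate_wh, reference_wh))
```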
step 6, according to the candidate standard color card detection area, an original complete detection area image is segmented from the original complete color image; wherein, the original complete detection area image is a color image;
step 7, reading the position coordinates and the size of the bar code area of the pre-stored standard color card with the same specification; then, accurately positioning the outline of the bar code area in the original complete detection area image based on the position coordinate of the bar code area and the size of the bar code area;
step 8, obtaining a complete bar code image according to the outline of the positioned bar code area; then, identifying the bar code image by using a bar code identification algorithm to obtain bar code data;
step 9, obtaining a configuration scheme according to the bar code data in the step 8; according to the configuration scheme, position coordinates and sizes of each standard color block, each white check color block and the target are respectively obtained, and then the positions and the outlines of each standard color block, each white check color block and the target object are positioned in the original complete detection area image;
step 10, obtaining color images of each standard color block, each white check color block and the target object from the original complete detection area image based on the positions and the contours of each standard color block, each white check color block and the target object; then, preprocessing each original color image to obtain image color values of each original standard color block image and each white check color block as correction color block color values; obtaining an image color value of a target object;
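The "image color value" of a color block in step 10 can be obtained, for example, as the mean RGB of the patch; averaging is an assumed choice, since the patent only says each original color image is "preprocessed":

```python
def patch_color_value(pixels):
    """Average a list of (R, G, B) triples into one representative color
    value for a color block patch."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))
```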
step 11, taking the standard color chart shot by the camera A as a reference image, taking the image shot by the camera B and comprising the standard color chart and the target object as a target image, and then correcting the color of the image of the target object collected by the camera B to be consistent with the color of the image of the target object collected by the camera A;
the parameters affecting the color of the image taken by the camera are mainly three, namely gain, offset and gamma; the mathematical representation of the image pixel values under each parameter is as follows:
gain: P_ref = P_tar × C_gain
offset: P_ref = P_tar + C_offset
gamma: P_ref = 2^bitdepth × (P_tar / 2^bitdepth)^C_Gamma
wherein P_ref and P_tar represent the pixel values of corresponding points of the reference image and the target image, respectively; C_gain, C_offset and C_Gamma represent the gain, offset and gamma parameters of the target camera, respectively; and 2^bitdepth represents the total number of gray levels of the image color space; ordinary images use 8-bit gray levels. Combining the three parameters, the mathematical representation of the pixel values of the image acquired by the camera is as follows:
P_ref = 2^bitdepth × ((P_tar × C_gain + C_offset) / 2^bitdepth)^C_Gamma
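A sketch of the combined pixel model, assuming (as the step-wise relations suggest) that gain and offset are applied before gamma and that values are normalized by the total number of gray levels; the clamping of the normalized value is an added safeguard, not stated in the patent:

```python
def camera_model(p_tar, c_gain, c_offset, c_gamma, bitdepth=8):
    """Map a target-camera pixel value to the reference value:
    P_ref = 2^bitdepth * ((P_tar*C_gain + C_offset) / 2^bitdepth)^C_Gamma."""
    levels = 2 ** bitdepth                      # total gray levels, e.g. 256
    x = (p_tar * c_gain + c_offset) / levels    # normalize to [0, 1]
    x = min(max(x, 0.0), 1.0)                   # clamp before the power
    return levels * (x ** c_gamma)
```

With c_gain = 1, c_offset = 0, c_gamma = 1 the model reduces to the identity, as expected.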
step 12, using the pixel points at the corresponding positions of the reference correction color block image and the target correction color block image, fitting with the Levenberg-Marquardt optimization algorithm to calculate the three parameters gain, offset and gamma of the target camera relative to the reference camera, in the following specific steps:
step (1), first define the error function:
e_i(β) = y_i − f(x_i, β), i = 1, …, m
wherein y_i and x_i are the RGB values of the pixel points at corresponding positions of the reference correction color block image and the target correction color block image, respectively; β = {β_0, β_1, β_2} is the vector formed by the three target-camera parameters C_gain, C_offset and C_Gamma; and the expression of the function f is:
f(x, β) = 2^bitdepth × ((x × β_0 + β_1) / 2^bitdepth)^β_2
step (2), empirically select an initial vector β^(0), e.g. {1, 0, 1} or other empirical values; calculate the residuals e_i(β^(0)) for the RGB values of the pixel points at all corresponding positions of the reference correction color block image and the target correction color block image, and calculate the sum of squares using the following formula:
S(β) = Σ_{i=1..m} e_i(β)^2
step (3), iteratively calculate the correction step δ from the damped normal equations:
(JᵀJ + λI) δ = Jᵀe
wherein e = (e_1, …, e_m)ᵀ is the vector formed by the functions e_i, and J is an m × 3 Jacobian matrix whose i-th row is ∂f(x_i, β)/∂β. The damping coefficient λ is initially an empirical value λ_0; from the second iteration onward, λ is divided by a factor ν (any number greater than 0) each time; if the value of S(β) calculated after adjusting λ does not become smaller, ν is doubled repeatedly and λ adjusted again until the calculated S(β) becomes smaller. Then calculate the new vector using the formula:
β_new = β + δ
and iterate β ← β_new, recalculating δ and S(β), until S(β) approaches its minimum; then stop the loop and carry out step (4);
step (4), remove noise points: using the parameter vector β finally calculated in step (3), compute the fitted values x_e = f(x_i, β) for the pixel points at all corresponding positions of the target correction color block image, where x_i are the RGB values of the pixel points at all corresponding positions of the target correction color block image and the parameter a is a constant, e.g. 1.5; calculate the mean deviation h by the following formula:
h = (a / m) × Σ_{i=1..m} |y_i − x_e,i|
then calculate the difference between the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the fitted target correction color block image, and remove as noise points the pixel points whose difference is larger than 2h;
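The noise-point rejection of step (4) can be sketched as follows; the criterion h = a × (mean absolute deviation) is a reconstruction of the formula lost in the figure reference, so treat it as an assumption:

```python
def remove_noise_points(refs, fits, a=1.5):
    """Drop pixel pairs whose |reference - fitted| deviation exceeds 2*h,
    where h is a times the mean absolute deviation over all pairs."""
    m = len(refs)
    h = a * sum(abs(r - f) for r, f in zip(refs, fits)) / m
    kept = [(r, f) for r, f in zip(refs, fits) if abs(r - f) <= 2 * h]
    return kept, h
```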
step (5), using the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the target correction color block image after removing the noise points, re-execute steps (2) to (4) until the number of noise points found is 0, then enter step (6);
step (6), using the parameter vector β obtained in step (5) as the parameters, correct the color deviation of the RGB values of all pixel points of the target object image with the following formula, obtaining the corrected RGB values of the pixel points of the target object image:
P_corrected = 2^bitdepth × ((P × β_0 + β_1) / 2^bitdepth)^β_2
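The Levenberg-Marquardt fit of step 12 can be sketched with a simplified two-parameter version: gamma is fixed at 1, so only gain and offset are fitted, whereas the patent fits all three parameters. The damping schedule follows the text: lambda shrinks after an accepted step and grows after a rejected one. Function name, initial lambda and iteration cap are illustrative assumptions:

```python
def lm_fit_gain_offset(xs, ys, n_iter=50, lam0=1e-3, nu=2.0):
    """Levenberg-Marquardt fit of y = x*g + o (gain g, offset o)."""
    g, o = 1.0, 0.0                 # empirical initial vector, cf. {1, 0, 1}
    lam = lam0                      # damping coefficient lambda_0

    def sse(g, o):                  # sum of squared residuals S(beta)
        return sum((y - (x * g + o)) ** 2 for x, y in zip(xs, ys))

    s = sse(g, o)
    for _ in range(n_iter):
        # Jacobian rows are (df/dg, df/do) = (x, 1); build J^T J and J^T e.
        e = [y - (x * g + o) for x, y in zip(xs, ys)]
        jtj00, jtj01, jtj11 = sum(x * x for x in xs), sum(xs), float(len(xs))
        jte0 = sum(x * ei for x, ei in zip(xs, e))
        jte1 = sum(e)
        # Solve the damped normal equations (J^T J + lam*I) delta = J^T e.
        a00, a01, a11 = jtj00 + lam, jtj01, jtj11 + lam
        det = a00 * a11 - a01 * a01
        dg = (jte0 * a11 - jte1 * a01) / det
        do = (jte1 * a00 - jte0 * a01) / det
        s_new = sse(g + dg, o + do)
        if s_new < s:               # accepted step: relax damping
            g, o, s = g + dg, o + do, s_new
            lam /= nu
        else:                       # rejected step: increase damping
            lam *= nu
        if s < 1e-12:               # S(beta) has approached its minimum
            break
    return g, o
```

For the patent's full model the Jacobian gains a third column for the gamma derivative and the 2 × 2 solve becomes 3 × 3; the structure of the loop is unchanged.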
compared with the prior art, the method solves the problem of color deviation among different cameras with unknown parameters when images of different target objects are acquired, and has the advantages of wide adaptability, simple operation, low computational cost, and low-cost implementation based on general-purpose hardware and software.
Drawings
Fig. 1 is a schematic structural diagram of a conventional standard color chart.
Detailed Description
The technical solution of the present patent will be described in further detail with reference to the following embodiments.
As shown in fig. 1, a conventional standard color card for color correction includes a substrate, and a card holding area 1 and a detection area 2 distributed on the substrate; the detection area 2 comprises a product parameter area 3, a standard color area 4, a positioning area 5, a verification area 6 and a sample area 7; the parameter area 3 bears pasted or printed detection parameters, which can be bar codes or codes in other forms; the standard color area 4 comprises a plurality of standard color blocks of different colors; the checking area 6 comprises a plurality of white checking blocks, each distributed among the standard color blocks in the detection area; the positioning area 5 is used to assist the image acquisition equipment in image acquisition and color block positioning on the standard color card; the sample area 7 is provided with a sample-positioning quadrilateral or polygonal frame line for placing the target object whose color is to be collected.
The color correction method of this embodiment is then carried out by steps 1 to 12, and by steps (1) to (6) of the parameter fitting, exactly as set forth above.
although the preferred embodiments of the present patent have been described in detail, the present patent is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present patent within the knowledge of those skilled in the art.

Claims (2)

1. A color correction method, comprising the steps of:
step 1, placing a target object in a sample area on a standard color card, and respectively acquiring the standard color card and a complete color image comprising the standard color card and the target object by using two different cameras;
step 2, the original complete image is a color image; converting the colored original complete image into a binary image, wherein the binary image is a black-and-white image;
step 3, judging whether the final four positioning marks can be identified in the complete black-and-white image or not based on the binarization mathematical characteristics of the four positioning marks, if not, abandoning the original complete color image, returning to the step 1, and re-collecting the complete color image comprising the standard color card and the target object; if yes, executing step 4;
step 4, acquiring the identified mark points of each positioning identifier by using a detection analysis module of the detection equipment, and sequentially connecting the mark points of each positioning identifier to form a quadrilateral profile, wherein the internal area of the quadrilateral profile is the candidate standard color card; thereby obtaining the shape and the size of the candidate standard color card detection area;
step 5, reading the real shape and the real size of a real detection area of a standard color card with the same pre-stored specification; then judging whether the deviation of the candidate standard color card detection area and the real standard color card detection area in shape and size is within a design threshold value; if not, returning to the step 1, and re-acquiring a complete color image comprising the standard color card and the target object; if so, indicating that the shapes and the sizes of the candidate standard color card detection area and the real standard color card detection area are very approximate, and then executing the step 6;
step 6, according to the candidate standard color card detection area, an original complete detection area image is segmented from the original complete color image; wherein, the original complete detection area image is a color image;
step 7, reading the position coordinates and the size of the bar code area of the pre-stored standard color card with the same specification; then, accurately positioning the outline of the bar code area in the original complete detection area image based on the position coordinate of the bar code area and the size of the bar code area;
step 8, obtaining a complete bar code image according to the outline of the positioned bar code area; then, identifying the bar code image by using a bar code identification algorithm to obtain bar code data;
step 9, obtaining a configuration scheme according to the bar code data in the step 8; according to the configuration scheme, position coordinates and sizes of each standard color block, each white check color block and the target are respectively obtained, and then the positions and the outlines of each standard color block, each white check color block and the target object are positioned in the original complete detection area image;
step 10, obtaining color images of each standard color block, each white check color block and the target object from the original complete detection area image based on the positions and the contours of each standard color block, each white check color block and the target object; then, preprocessing each original color image to obtain image color values of each original standard color block image and each white check color block as correction color block color values; obtaining an image color value of a target object;
step 11, taking the standard color chart shot by the camera A as a reference image, taking the image shot by the camera B and comprising the standard color chart and the target object as a target image, and then correcting the color of the image of the target object collected by the camera B to be consistent with the color of the image of the target object collected by the camera A;
the parameters affecting the color of the image taken by the camera are mainly three, namely Gain (Gain), Offset (Offset) and Gamma (Gamma); the mathematical representation of the image pixel values with three parameters is as follows:
gain (Gain): pref=Ptar×Cgain
Offset (Offset): pref=Ptar+Coffset
Gamma (Gamma):
Figure RE-FDA0002556334770000021
wherein P isrefAnd PtarPixel values, C, representing points of the reference and target images, respectivelygain,Coffset,CGammaRepresenting the Gain (Gain), Offset (Offset) and Gamma (Gamma) parameters of the target camera, respectively, 2bitdepthRepresenting the total number of gray levels in the color space of the image. The general image adopts 8bit gray scale;
the mathematical representation of the pixel values of the image acquired by the camera can be obtained by combining the three parameters as follows:
Figure RE-FDA0002556334770000022
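The combined gain/offset/gamma mapping can be sketched as follows, assuming the combined form $P_{ref} = 2^{bitdepth}\,((P_{tar} C_{gain} + C_{offset})/2^{bitdepth})^{C_{Gamma}}$; the function name `camera_model` and the clamping of the linear term are illustrative choices, not specified by the patent:

```python
import numpy as np

def camera_model(p_tar, c_gain, c_offset, c_gamma, bitdepth=8):
    """Map a target-camera pixel value to the reference camera's value
    via the combined gain/offset/gamma model."""
    levels = 2.0 ** bitdepth
    # Clamp the linear stage so the fractional power stays well-defined.
    lin = np.clip(p_tar * c_gain + c_offset, 0.0, levels)
    return levels * (lin / levels) ** c_gamma
```

With the identity parameters (gain 1, offset 0, gamma 1) the mapping leaves pixel values unchanged, which is why those values serve as the initial guess in the fitting of claim 2.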
and step 12, using the RGB values of the pixel points at corresponding positions of the reference correction color block image and the target correction color block image, calculating the correction parameters of the target camera relative to the reference camera.
2. The method of claim 1, wherein the three parameters Gain, Offset and Gamma of the target camera relative to the reference camera are calculated by fitting with the Levenberg-Marquardt optimization algorithm, the fitting comprising the following steps:
step (1), first defining the error function:

$e_i = y_i - f(x_i, \hat{\beta})$

wherein $y_i$ and $x_i$ are the RGB values of the pixel points at corresponding positions of the reference correction color block image and the target correction color block image, respectively; $\hat{\beta}$ is the vector composed of the three target-camera parameters $C_{gain}$, $C_{offset}$ and $C_{Gamma}$, denoted $\{\beta_0, \beta_1, \beta_2\}$; the expression of the function $f$ is:

$f(x_i, \hat{\beta}) = 2^{bitdepth} \times \left(\frac{x_i \times \beta_0 + \beta_1}{2^{bitdepth}}\right)^{\beta_2}$
step (2), selecting an initial vector $\hat{\beta}$ empirically, e.g. $\{1, 0, 1\}$ or other empirical values; calculating the errors $e_i$ for the RGB values of the pixel points at all corresponding positions of the reference correction color block image and the target correction color block image, and calculating their sum of squares using the following formula:

$S = \sum_{i=1}^{m} e_i^{2}$
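The sum-of-squares objective of step (2) can be sketched as below, assuming the model $f$ defined in step (1); the function name `sum_of_squares` is illustrative:

```python
import numpy as np

def sum_of_squares(x_tar, y_ref, beta, bitdepth=8):
    """S = sum_i e_i^2 with e_i = y_i - f(x_i, beta) for the
    gain/offset/gamma model."""
    levels = 2.0 ** bitdepth
    pred = levels * ((x_tar * beta[0] + beta[1]) / levels) ** beta[2]
    e = y_ref - pred
    return float(e @ e)
```

Evaluated at the initial vector {1, 0, 1} this gives the starting objective value that the Levenberg-Marquardt loop of step (3) then drives down.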
step (3), continuously and circularly calculating the correction parameter $\delta$ from

$(J^{T}J + \lambda\,\mathrm{diag}(J^{T}J))\,\delta = J^{T}e$

wherein $e$ is the vector formed by the functions $e_i$, and $J$ is an $m \times 3$ Jacobian matrix whose $i$th row is $\partial f(x_i, \hat{\beta})/\partial \hat{\beta}$; the damping coefficient $\lambda$ initially takes an empirical value $\lambda_0$, and from the second iteration on takes the value $\lambda/v$ each time, $v$ being any number greater than 1; if the sum of squares $S$ calculated after adjusting $\lambda$ does not become smaller, $v$ is doubled continuously and $\lambda$ adjusted again until the calculated $S$ becomes smaller; the new vector is then calculated using the formula

$\hat{\beta}_{new} = \hat{\beta} + \delta$

and using $\hat{\beta}_{new}$, a new $S$ is calculated iteratively until $S$ approaches its minimum value, at which point the loop stops and step (4) is carried out;
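Steps (2)-(3) amount to a Levenberg-Marquardt loop. A minimal NumPy sketch follows, assuming the standard Marquardt update $(J^{T}J + \lambda\,\mathrm{diag}(J^{T}J))\delta = J^{T}e$ with a numerical Jacobian; the acceptance/rejection damping schedule here (divide $\lambda$ by $v$ on success, multiply on failure) is a common variant and may differ in detail from the claim's schedule:

```python
import numpy as np

BITDEPTH = 8
LEVELS = 2.0 ** BITDEPTH

def f(x, b):
    """Model f(x, beta) = 2^bitdepth * ((x*b0 + b1) / 2^bitdepth) ** b2."""
    lin = np.clip(x * b[0] + b[1], 1e-6, LEVELS)  # keep base positive
    return LEVELS * (lin / LEVELS) ** b[2]

def jacobian(x, b, eps=1e-6):
    """Central-difference Jacobian of f with respect to beta (m x 3)."""
    J = np.empty((x.size, 3))
    for k in range(3):
        d = np.zeros(3)
        d[k] = eps
        J[:, k] = (f(x, b + d) - f(x, b - d)) / (2 * eps)
    return J

def lm_fit(x, y, beta0=(1.0, 0.0, 1.0), lam=1e-3, v=2.0, iters=200):
    """Levenberg-Marquardt: solve (J^T J + lam*diag(J^T J)) delta = J^T e;
    accept the step if the sum of squares decreases (then lam /= v),
    otherwise raise lam and retry."""
    b = np.asarray(beta0, float)
    for _ in range(iters):
        e = y - f(x, b)
        S = e @ e
        J = jacobian(x, b)
        JtJ = J.T @ J
        while True:
            A = JtJ + lam * np.diag(np.diag(JtJ))
            delta = np.linalg.solve(A, J.T @ e)
            e_new = y - f(x, b + delta)
            if e_new @ e_new < S:
                b = b + delta
                lam /= v
                break
            lam *= v                    # step rejected: increase damping
            if lam > 1e12:
                return b                # cannot improve further
        if S - e_new @ e_new < 1e-12:   # S has effectively converged
            break
    return b
```

All names (`f`, `jacobian`, `lm_fit`) are illustrative; in practice a library routine such as SciPy's `least_squares` could replace the hand-rolled loop.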
step (4), removing noise points: using the final $\hat{\beta}$ calculated in step (3), obtaining the corrected RGB values $x_e = f(x_i, \hat{\beta})$ of the pixel points at all corresponding positions of the target correction color block image, wherein $x_i$ are the RGB values of the pixel points at all corresponding positions of the target correction color block image and the parameter $a$ is a constant, e.g. 1.5; calculating the mean deviation $h$ by the following formula:

$h = \frac{a}{m}\sum_{i=1}^{m}\left|y_i - x_{e,i}\right|$

then calculating the difference between the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the corrected target correction color block image, and removing, as noise points, the pixel points whose difference is larger than $2h$;
step (5), re-executing steps (2)-(4) with the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the target correction color block image after the noise points are removed, until the number of noise points is 0, and then entering step (6);
step (6), using the $\hat{\beta} = \{\beta_0, \beta_1, \beta_2\}$ obtained in step (5) as parameters, correcting the color deviation of the RGB values of all pixel points of the target object image using the following formula to obtain the corrected RGB values of the pixel points of the target object image:

$P_{cor} = 2^{bitdepth} \times \left(\frac{P \times \beta_0 + \beta_1}{2^{bitdepth}}\right)^{\beta_2}$

wherein $P$ is an original pixel value of the target object image and $P_{cor}$ the corrected value.
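Step (6) applied to a whole image can be sketched as follows, assuming an 8-bit H×W×3 NumPy image and the fitted parameter triple from the earlier steps; the function name `correct_image` and the clamping choices are illustrative:

```python
import numpy as np

def correct_image(img, beta, bitdepth=8):
    """Apply the fitted {beta0, beta1, beta2} to every pixel of the
    target object image, per channel, clamping to the valid range."""
    levels = 2.0 ** bitdepth
    # Linear stage (gain + offset), clamped before the fractional power.
    lin = np.clip(img.astype(float) * beta[0] + beta[1], 0.0, levels)
    out = levels * (lin / levels) ** beta[2]
    # Quantize back to the displayable integer range.
    return np.clip(out, 0, levels - 1).astype(np.uint8)
```

With the identity parameters {1, 0, 1} the image is returned unchanged, which is a convenient sanity check before applying a fitted $\hat{\beta}$.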
CN201911424392.3A 2019-12-31 2019-12-31 Color correction method Active CN113132693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911424392.3A CN113132693B (en) 2019-12-31 2019-12-31 Color correction method

Publications (2)

Publication Number Publication Date
CN113132693A true CN113132693A (en) 2021-07-16
CN113132693B CN113132693B (en) 2024-10-01

Family

ID=76769843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911424392.3A Active CN113132693B (en) 2019-12-31 2019-12-31 Color correction method

Country Status (1)

Country Link
CN (1) CN113132693B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392670A (en) * 2021-07-30 2021-09-14 新疆金牛能源物联网科技股份有限公司 Cassette, cassette reading device, device configuration apparatus, and configuration method
CN115115609A (en) * 2022-07-18 2022-09-27 中国农业科学院蔬菜花卉研究所 Image analysis method and system for plant leaf positive phenotypic characters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006303783A (en) * 2005-04-19 2006-11-02 Fuji Photo Film Co Ltd Image processing method, image processing system, and image processing program
JP2011259047A (en) * 2010-06-07 2011-12-22 For-A Co Ltd Color correction device, color correction method, and video camera system
WO2017046829A1 (en) * 2015-09-17 2017-03-23 株式会社Elan Color measuring device and color measuring method
CN109805891A (en) * 2019-01-08 2019-05-28 中南大学湘雅医院 Post-operative recovery state monitoring method, device, system, readable medium and colour atla
CN110400278A (en) * 2019-07-30 2019-11-01 广东工业大学 A kind of full-automatic bearing calibration, device and the equipment of color of image and geometric distortion


Similar Documents

Publication Publication Date Title
US20200364849A1 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
CN111932504B (en) Edge contour information-based sub-pixel positioning method and device
CN108921057B (en) Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device
CN103048331B (en) Printing defect detection method based on flexible template registration
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
CN111223133A (en) Registration method of heterogeneous images
CN111784778A (en) Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization
CN111145205B (en) Pig body temperature detection method based on infrared image under multiple pig scenes
CN113132693A (en) Color correction method
CN114692991B (en) Deep learning-based wolfberry yield prediction method and system
CN111724354A (en) Image processing-based method for measuring spike length and small spike number of multiple wheat
CN111323125A (en) Temperature measurement method and device, computer storage medium and electronic equipment
CN108510477A (en) The localization method and device of test paper color lump
CN112561986A (en) Secondary alignment method, device, equipment and storage medium for inspection robot holder
CN117893457B (en) PCB intelligent detection method, device and computer equipment
CN111369455B (en) Highlight object measuring method based on polarization image and machine learning
CN112700488A (en) Living body long blade area analysis method, system and device based on image splicing
Tu et al. 2D in situ method for measuring plant leaf area with camera correction and background color calibration
CN117475373A (en) Tea garden pest and disease damage identification and positioning method and system based on binocular vision
CN112215304A (en) Gray level image matching method and device for geographic image splicing
CN111986266A (en) Photometric stereo light source parameter calibration method
CN113628182B (en) Automatic fish weight estimation method and device, electronic equipment and storage medium
CN113449638B (en) Pig image ideal frame screening method based on machine vision technology
CN114170319A (en) Method and device for adjusting test target
CN103279953A (en) Machine vision calibration system based on LabVIEW platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 4121, Floor 4, Main Building, No. 15 Lutian Road, High-tech Development Zone, Changsha, Hunan Province, 410221

Applicant after: CHANGSHA YUNZHIJIAN INFORMATION TECHNOLOGY CO.,LTD.

Address before: Room 604, scientific research building, 229 tongzipo West Road, Changsha hi tech Development Zone, Hunan 410205

Applicant before: CHANGSHA YUNZHIJIAN INFORMATION TECHNOLOGY CO.,LTD.

GR01 Patent grant