CN113870146A - Method for correcting false color of image edge of color camera - Google Patents

Method for correcting false color of image edge of color camera

Info

Publication number
CN113870146A
Authority
CN
China
Prior art keywords
channel
inter
edge
calibration
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111204681.XA
Other languages
Chinese (zh)
Other versions
CN113870146B (en)
Inventor
易天格
宋伟铭
周中亚
刘敏
高晓阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Daheng Image Vision Co ltd
China Daheng Group Inc Beijing Image Vision Technology Branch
Original Assignee
Beijing Daheng Image Vision Co ltd
China Daheng Group Inc Beijing Image Vision Technology Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Daheng Image Vision Co ltd, China Daheng Group Inc Beijing Image Vision Technology Branch filed Critical Beijing Daheng Image Vision Co ltd
Priority to CN202111204681.XA priority Critical patent/CN113870146B/en
Publication of CN113870146A publication Critical patent/CN113870146A/en
Application granted granted Critical
Publication of CN113870146B publication Critical patent/CN113870146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10024 — Color image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The application discloses a method for correcting false color at the image edges of a color camera, which comprises the following steps: step 1, calculating the pixel distance between the current pixel point and the image principal point in the image to be corrected, and the horizontal angle of the line connecting the current pixel point and the image principal point; step 2, calculating the R-channel correction coordinate and the B-channel correction coordinate corresponding to the pixel distance and the horizontal angle according to a calibration function; step 3, calculating the R-channel corrected gray value and the B-channel corrected gray value of the current pixel point by interpolation according to the R-channel correction coordinate and the B-channel correction coordinate, the corrected gray values being used to correct the current pixel point. With this technical solution, the false color at the edges of the captured color image is corrected, reducing the degree of edge false color and improving the image quality of the color camera.

Description

Method for correcting false color of image edge of color camera
Technical Field
The application relates to the technical field of image processing, in particular to a method for correcting false color at the image edges of a color camera.
Background
Automatic optical inspection is an effective method of industrial automated inspection. It is a standard machine-vision inspection technology built around an industrial camera and is widely applied in manufacturing fields such as printing and packaging quality control, PCB inspection, and rapid prototyping. The common mode of application is to acquire images with a camera and then compute and identify the features of the photographed object through image-processing techniques such as positioning, recognition, and classification.
For a color industrial camera, influenced by components such as the optical system and the sensor, the captured color image of an object exhibits the following phenomenon at edges with a near black-to-white transition: because light of different wavelengths has different refractive indices in the lens, blue or red light spreads to the two sides of the edge. The light spreading toward the darker side of the object edge is superimposed on the original black, producing a "cool side, warm side" color shift, so a "false color" appears at the edge transition of the color image. The false color becomes more obvious as the lens resolution and the sensor pixel size decrease.
False color at the edges of a color image distorts its visual appearance on the one hand, and on the other hand causes edge-positioning errors during automatic optical inspection, which increases the difficulty of later image processing, raises the probability of false detection and missed detection of defects, and harms the robustness of the automatic optical inspection system.
Disclosure of Invention
The purpose of this application is to correct the false color at the edges of the captured color image, so as to reduce the degree of edge false color and improve the image quality of the color camera.
The technical solution of the application is as follows: a method for correcting false color at the image edges of a color camera is provided, comprising: step 1, calculating the pixel distance between the current pixel point and the image principal point in the image to be corrected, and the horizontal angle of the line connecting the current pixel point and the image principal point; step 2, calculating the R-channel correction coordinate and the B-channel correction coordinate corresponding to the pixel distance and the horizontal angle according to a calibration function; step 3, calculating the R-channel corrected gray value and the B-channel corrected gray value of the current pixel point by interpolation according to the R-channel correction coordinate and the B-channel correction coordinate, the corrected gray values being used to correct the current pixel point.
In any of the above technical solutions, further, in step 3, the process of calculating the R-channel corrected gray value specifically includes: step 301, determining the coordinates of the four control points corresponding to the current pixel point according to the R-channel correction coordinate, and calculating the interpolation ratios from the difference between the R-channel correction coordinate and the coordinate of the first control point, wherein the coordinate of the first control point is obtained from the R-channel correction coordinate by a rounding operation; step 302, performing interpolation according to the interpolation ratios and the initial gray values of the R channel, and calculating the R-channel corrected gray value of the current pixel point.
In any of the above technical solutions, further, the calculation formula of the R-channel corrected gray value is:

H_r = H_x1 × e_yR + H_x2 × (1 − e_yR)

H_x1 = I_R(x_interR, y_interR) × e_xR + I_R(x_interR + 1, y_interR) × (1 − e_xR)

H_x2 = I_R(x_interR, y_interR + 1) × e_xR + I_R(x_interR + 1, y_interR + 1) × (1 − e_xR)

where H_x1 is a first intermediate parameter, H_x2 is a second intermediate parameter, I_R is the initial gray value of the current pixel point in the R channel, (x_interR, y_interR) is the coordinate of the first control point, (x_interR + 1, y_interR) is the coordinate of the second control point, (x_interR, y_interR + 1) is the coordinate of the third control point, (x_interR + 1, y_interR + 1) is the coordinate of the fourth control point, e_xR is the interpolation ratio in the x direction, and e_yR is the interpolation ratio in the y direction.
In any of the above technical solutions, further, in step 2, the calibration function is obtained by fitting, for each edge position in the calibration image, the difference between its distance to the calibration principal point in the R channel or the B channel and its distance to the calibration principal point in the G channel, wherein the calibration function comprises at least an R-channel calibration function and a B-channel calibration function.
In any of the above technical solutions, further, the fitting process of the calibration function specifically includes: step 201, acquiring a calibration image of a calibration board, generating a plurality of edge measurement lines based on the calibration principal point of the calibration image, and extracting the edge positions of the calibration image on each edge measurement line in each of the three RGB channels; step 202, respectively calculating the first distance between each edge position and the calibration principal point in the three RGB channels, the second distance ΔL_i^R between the R channel and the G channel at any edge position, and the third distance ΔL_i^B between the B channel and the G channel; step 203, performing function fitting according to the first distances of the three RGB channels, the second distances ΔL_i^R and the third distances ΔL_i^B to generate the calibration function.
In any one of the above technical solutions, further, the edge measurement lines are equiangularly distributed.
In any one of the above technical solutions, further, an included angle between two adjacent edge measurement lines is greater than or equal to 8 °.
The beneficial effect of this application is:
according to the technical scheme, the R channel correction coordinate and the B channel correction coordinate of the current pixel point in the image to be corrected are calculated through the calibration function, the R channel correction gray value and the B channel correction gray value are calculated respectively in an interpolation mode, edge pseudo color correction is carried out on the current pixel point, pseudo color at the edge can be eliminated under the condition that the overall image quality is guaranteed, the quality of images shot by a color camera is improved, the robustness of an automatic optical detection system is further improved, and the difficulty of image processing at the later stage is reduced.
In a preferred implementation of the application, the G channel is selected as the reference, and the R-channel calibration function and the B-channel calibration function are determined from the distances between the edge positions and the calibration principal point in the R channel and the B channel respectively, which improves the accuracy of the corrected R-channel and B-channel gray values in the image to be corrected and thus the accuracy and reliability of the false-color correction.
Drawings
The advantages of the above and/or additional aspects of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a method for correcting edge false color in a color camera image according to one embodiment of the present application;
FIG. 2 is a schematic diagram of edge measurement lines and edge positions in a calibration image according to one embodiment of the present application;
FIG. 3 is a schematic diagram of the coordinates of the i-th edge position in the three RGB channels according to one embodiment of the present application;
FIG. 4 is a schematic before-and-after comparison of edge false-color correction according to one embodiment of the present application;
FIG. 5 is a graph of the variation trend of the RGB three-channel gray values at the same edge position before and after correction according to one embodiment of the present application;
FIG. 6 is a schematic comparison of image saturation before and after correction according to one embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; however, the present application may be practiced in ways other than those described herein, and therefore the scope of the present application is not limited by the specific embodiments disclosed below.
As shown in fig. 1, the present embodiment provides a method for correcting false color at an edge of a color camera image, which is suitable for performing false color correction on an image captured by a calibrated color camera, where parameters involved in the correction process are related to hardware parameters of the color camera itself. The method comprises the following steps:
Step 1, calculating the pixel distance between the current pixel point and the image principal point in the image to be corrected, and the horizontal angle of the line connecting the current pixel point and the image principal point;
specifically, the position coordinates (x, y) of each pixel point in the image to be corrected are determined in a traversal mode, then, the image principal point is determined by adopting a calculation method of camera internal parameter, such as a 'Zhang calibration' method, so as to calculate the pixel distance between each pixel point and the image principal point and the horizontal angle of a connecting line between the pixel point and the image principal point, wherein the coordinates of the image principal point are (x, y)s,ys)。
It should be noted that, in this embodiment, a basic image coordinate system is adopted, that is, the pixel point at the upper-left corner of the image is the origin (0, 0), the horizontal rightward direction is the positive X-axis direction of the image coordinates, and the vertical downward direction is the positive Y-axis direction.
For any pixel point, the distance L between the pixel point and the image principal point can be calculated from its position coordinates (x, y), the corresponding calculation formula being:

L = sqrt((x − x_s)² + (y − y_s)²)
A horizontal line is established with the image principal point as the starting point, and the horizontal angle α of the line connecting the image principal point and the pixel point is calculated, the corresponding calculation formula being:

α = arctan((y − y_s) / (x − x_s))
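For illustration, a minimal NumPy sketch of Step 1 is given below. It assumes the principal point (x_s, y_s) has already been obtained from an intrinsic calibration such as Zhang's method; arctan2 is used instead of a plain arctangent so that all quadrants (and the column x = x_s) are handled, which is an implementation choice of this sketch rather than a requirement of the patent.

```python
import numpy as np

def pixel_distance_and_angle(height, width, xs, ys):
    """Per-pixel distance L and horizontal angle alpha (radians) relative to
    the image principal point (xs, ys).

    Image convention as described above: origin at the top-left pixel,
    x to the right, y downward.
    """
    y_grid, x_grid = np.mgrid[0:height, 0:width]   # row index = y, column index = x
    dx = x_grid - xs
    dy = y_grid - ys
    L = np.sqrt(dx ** 2 + dy ** 2)                 # pixel distance to the principal point
    alpha = np.arctan2(dy, dx)                     # angle of the connecting line vs. horizontal
    return L, alpha
```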
Step 2, calculating the R-channel correction coordinate and the B-channel correction coordinate corresponding to the pixel distance and the horizontal angle according to a calibration function, wherein the calibration function is obtained by fitting, for each edge position in the calibration image, the difference between its distance to the calibration principal point in the R channel or the B channel and its distance to the calibration principal point in the G channel.
Specifically, considering that the human eye is most sensitive to green, an error introduced when adjusting green would be the most noticeable. Moreover, among the three RGB channels the refractive index of the R channel is the smallest, that of the B channel is the largest, and the G channel lies between them, so when the data of the R channel and the B channel are corrected with the G channel as the reference, the adjustment amount is the smallest and the error is also small. Therefore, the calibration function is formed by the combination of the R-channel calibration function and the B-channel calibration function.
Now, taking the R channel as an example, a calculation process of the R channel correction coordinate will be described.
First, the pixel distance L is substituted into the calibration function to calculate the distance correction ΔR of the R channel. Then, from the R-channel distance correction ΔR and the horizontal angle α, the horizontal distance correction component ΔR_col and the vertical distance correction component ΔR_row are calculated, the corresponding formulas being:

ΔR_col = ΔR × cos(α)

ΔR_row = ΔR × sin(α)

Finally, the corresponding correction coordinate (x_R, y_R) of the pixel point in the R channel is:

x_R = x + ΔR_col

y_R = y + ΔR_row
Correspondingly, the distance correction ΔB of the B channel is obtained from the B-channel calibration function in the same way, and the corresponding correction coordinate (x_B, y_B) of the pixel point in the B channel is:

x_B = x + ΔB_col

y_B = y + ΔB_row
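Continuing the sketch, Step 2 could be realized as follows, assuming the fitted calibration functions are available as callables f_r and f_b that map a pixel distance to the radial corrections ΔR and ΔB; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def correction_coordinates(x, y, L, alpha, f_r, f_b):
    """R- and B-channel correction coordinates for a pixel at (x, y).

    f_r, f_b : fitted calibration functions returning the radial corrections
               dR = f_r(L) and dB = f_b(L) for a pixel distance L.
    """
    dR = f_r(L)
    dB = f_b(L)
    # Split each radial correction into horizontal and vertical components.
    xR = x + dR * np.cos(alpha)
    yR = y + dR * np.sin(alpha)
    xB = x + dB * np.cos(alpha)
    yB = y + dB * np.sin(alpha)
    return (xR, yR), (xB, yB)
```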
the embodiment shows an implementation process of fitting a calibration function, which specifically includes:
step 201, obtaining a calibration image of a calibration board, generating a plurality of edge measurement lines based on a calibration principal point of the calibration image, and extracting edge positions of the calibration image on each edge measurement line in three channels of RGB respectively.
The edge measurement lines are equiangularly distributed, and the included angle between two adjacent edge measurement lines is greater than or equal to 8°.
It should be noted that this embodiment does not limit the implementation of the calibration board; it is only required that the captured calibration image be a black-and-white image and that the black-and-white edges of the calibration board cover the field of view of the industrial color camera as much as possible.
Specifically, as shown in fig. 2, taking a calibration board with a plurality of concentric circles as an example, an industrial color camera is used to capture a calibration image of the calibration board, and white-balance processing is performed on the captured image so that the gray values of the different channels in the black and white regions away from edges are substantially consistent; this embodiment does not limit the implementation of the white-balance processing.
Then, the calibration principal point 201 in the calibration image is determined by a method similar to that used for the image principal point, the coordinates of the calibration principal point 201 being (x_s0, y_s0). After the calibration principal point 201 is determined, 16 edge measurement lines are determined along different directions, starting from the calibration principal point 201 with an included angle 202 of 22.5°, as shown by the dotted line 204 in fig. 2. In each of the three RGB channels, the gray-value variation trend is extracted along the direction of the corresponding edge measurement line, and the edge positions of the calibration image on each edge measurement line in each channel are calculated according to the principle that the first-order gradient of the gray value is maximal and the second-order gradient crosses zero (i.e., the position where the gray-value gradient is large and the gray value changes fastest), as shown by the triangles Δ 203 in fig. 2. The coordinates of the i-th edge position in the three RGB channels, shown in sequence in fig. 3(a), 3(b) and 3(c), are (x_i^R, y_i^R), (x_i^G, y_i^G) and (x_i^B, y_i^B), i = 1, 2, …, N.
In addition, methods such as filtering and interpolation may also be adopted in the edge-position calculation to improve its accuracy; details are not repeated in this embodiment.
It should be noted that the smaller the included angle 202 between edge measurement lines (and thus the larger the number of measurement lines), the higher the calibration (correction) accuracy.
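One possible realization of this edge extraction is sketched below: the gray profile is sampled along a measurement line and the edge is placed where the first-order gradient magnitude peaks, with a parabolic refinement approximating the zero crossing of the second-order gradient. The sampling step, the nearest-neighbour sampling and the refinement are assumptions of this sketch, and the caller is assumed to keep the line inside the image.

```python
import numpy as np

def edge_position_on_line(channel, x0, y0, angle, length, step=0.25):
    """Locate one edge along a measurement line that starts at the calibration
    principal point (x0, y0) and runs in direction `angle`, on one gray channel.

    Returns the (x, y) image coordinates of the edge, or None if no interior
    gradient peak is found. `length` must keep the line inside the image.
    """
    t = np.arange(0.0, length, step)
    xs = x0 + t * np.cos(angle)
    ys = y0 + t * np.sin(angle)
    # Nearest-neighbour sampling of the gray profile (bilinear sampling would be more accurate).
    profile = channel[np.round(ys).astype(int), np.round(xs).astype(int)].astype(float)

    grad = np.gradient(profile)                    # first-order gradient along the line
    k = int(np.argmax(np.abs(grad)))               # position of maximum gradient magnitude
    if k == 0 or k == len(t) - 1:
        return None

    # Parabolic refinement around the peak approximates the zero crossing of
    # the second-order gradient, giving a sub-sample edge location.
    g0, g1, g2 = np.abs(grad[k - 1:k + 2])
    denom = g0 - 2.0 * g1 + g2
    offset = 0.5 * (g0 - g2) / denom if denom != 0 else 0.0
    te = t[k] + offset * step
    return x0 + te * np.cos(angle), y0 + te * np.sin(angle)
```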
Step 202, respectively calculating the first distance between each edge position and the calibration principal point in the three RGB channels, the second distance ΔL_i^R between the R channel and the G channel at any edge position, and the third distance ΔL_i^B between the B channel and the G channel.

Specifically, the first distance includes three components, namely the first distance L_i^R of the R channel, the first distance L_i^G of the G channel, and the first distance L_i^B of the B channel, with the corresponding calculation formulas:

L_i^R = sqrt((x_i^R − x_s0)² + (y_i^R − y_s0)²)

L_i^G = sqrt((x_i^G − x_s0)² + (y_i^G − y_s0)²)

L_i^B = sqrt((x_i^B − x_s0)² + (y_i^B − y_s0)²)
In the present embodiment, the second distance ΔL_i^R is the difference between the first distance L_i^R of the R channel and the first distance L_i^G of the G channel, calculated by the following formula:

ΔL_i^R = L_i^R − L_i^G

Correspondingly, the calculation formula of the third distance ΔL_i^B is:

ΔL_i^B = L_i^B − L_i^G

Note that, in this embodiment, the second distance ΔL_i^R and the third distance ΔL_i^B are signed quantities (they can be positive or negative).
Step 203, performing function fitting according to the first distances of the three RGB channels, the second distances ΔL_i^R and the third distances ΔL_i^B, to generate the calibration function, wherein the calibration function comprises at least an R-channel calibration function and a B-channel calibration function.
Specifically, the first distance L_i^R of the R channel is taken as the first independent variable X1, the second distance ΔL_i^R is taken as the first dependent variable Y1, and the R-channel calibration function fitting is performed according to the first independent variable X1 and the first dependent variable Y1 to obtain the R-channel calibration function f_r(X1).

In the fitting process, a linear function may be selected, such as f_r(X) = aX + b; other types of functions are also possible, e.g. f_r(X) = a·sin(bX + c) + d. The specific fitting process is not described in detail.

Similarly, the first distance L_i^B of the B channel is taken as the second independent variable X2, the third distance ΔL_i^B is taken as the second dependent variable Y2, and the B-channel calibration function fitting is performed according to the second independent variable X2 and the second dependent variable Y2 to obtain the B-channel calibration function f_b(X2). The fitted R-channel calibration function f_r(X1) and B-channel calibration function f_b(X2) together constitute the calibration function.
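The linear variant of the fit could, for example, be reproduced with an ordinary least-squares polynomial fit as sketched below; LR, dLR, LB and dLB stand for the arrays of first, second and third distances from Step 202, and the helper name is illustrative. A sinusoidal form such as a·sin(bX + c) + d would instead call for a non-linear fitter (e.g. scipy.optimize.curve_fit).

```python
import numpy as np

def fit_calibration_functions(LR, dLR, LB, dLB):
    """Fit linear calibration functions f_r and f_b by least squares.

    LR, LB : first distances of the R / B channel edge positions (independent variables).
    dLR    : second distances (R minus G), the R-channel dependent variable.
    dLB    : third distances (B minus G), the B-channel dependent variable.
    """
    a_r, b_r = np.polyfit(LR, dLR, deg=1)          # f_r(X) = a_r * X + b_r
    a_b, b_b = np.polyfit(LB, dLB, deg=1)          # f_b(X) = a_b * X + b_b

    def f_r(X):
        return a_r * X + b_r

    def f_b(X):
        return a_b * X + b_b

    return f_r, f_b
```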
Step 3, calculating the R-channel corrected gray value and the B-channel corrected gray value of the current pixel point respectively by interpolation according to the R-channel correction coordinate and the B-channel correction coordinate, the corrected gray values being used to correct the current pixel point.
The calculation of the R-channel corrected gray value is now taken as an example; the process specifically includes:
step 301, determining coordinates of four control points corresponding to a current pixel point according to an R channel correction coordinate, and calculating an interpolation proportion according to a difference value between the R channel correction coordinate and a coordinate of a first control point, wherein the coordinate of the first control point is determined by the R channel correction coordinate through rounding operation;
step 302, performing interpolation operation according to the interpolation ratio and the initial gray value of the R channel of the current pixel point, and calculating the R channel correction gray value of the current pixel point.
Specifically, since the calculated R-channel correction coordinate (x_R, y_R) may contain decimal places, the coordinate (x_interR, y_interR) of the first control point is obtained by rounding down, the corresponding formulas being:

x_interR = floor(x_R)

y_interR = floor(y_R)

Then, the interpolation ratios are calculated from the R-channel correction coordinate (x_R, y_R) and the corresponding first control point coordinate (x_interR, y_interR), the corresponding formulas being:

e_xR = x_R − x_interR

e_yR = y_R − y_interR

where e_xR is the interpolation ratio in the x direction and e_yR is the interpolation ratio in the y direction.
Finally, the other three control points adjacent to the first control point are selected by adding one to the coordinate values, interpolation is performed using the selected control points and the initial gray values of the R channel of the current pixel point, and the R-channel corrected gray value H_r of the current pixel point is calculated, the corresponding formulas being:

H_r = H_x1 × e_yR + H_x2 × (1 − e_yR)

H_x1 = I_R(x_interR, y_interR) × e_xR + I_R(x_interR + 1, y_interR) × (1 − e_xR)

H_x2 = I_R(x_interR, y_interR + 1) × e_xR + I_R(x_interR + 1, y_interR + 1) × (1 − e_xR)

where H_x1 is a first intermediate parameter, H_x2 is a second intermediate parameter, I_R is the initial gray value of the current pixel point in the R channel, (x_interR, y_interR) is the first control point coordinate, (x_interR + 1, y_interR) is the second control point coordinate, (x_interR, y_interR + 1) is the third control point coordinate, and (x_interR + 1, y_interR + 1) is the fourth control point coordinate.
It should be noted that in this embodiment, rounding up may also be selected when calculating the coordinate of the first control point; correspondingly, the remaining three control points adjacent to the first control point are then selected by subtracting one from the coordinate values. The specific process is not described again.
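A sketch of the interpolation of Steps 301-302 for a single channel is given below; it transcribes the formulas above as written (the floor control point is weighted by e_x and e_y, whereas a conventional bilinear interpolation would use 1 − e_x and 1 − e_y there), and assumes the corrected coordinate leaves all four control points inside the image.

```python
import numpy as np

def corrected_gray_value(I, xc, yc):
    """Interpolated corrected gray value of one channel at the corrected
    coordinate (xc, yc), following the interpolation formulas given above.

    I : 2-D array of the channel's initial gray values, indexed as I[y, x].
    """
    x0 = int(np.floor(xc))        # first control point: rounded-down coordinates
    y0 = int(np.floor(yc))
    ex = xc - x0                  # interpolation ratio in the x direction
    ey = yc - y0                  # interpolation ratio in the y direction

    # Weighting transcribed verbatim from the formulas above.
    Hx1 = I[y0, x0] * ex + I[y0, x0 + 1] * (1 - ex)
    Hx2 = I[y0 + 1, x0] * ex + I[y0 + 1, x0 + 1] * (1 - ex)
    return Hx1 * ey + Hx2 * (1 - ey)
```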
Likewise, for the B-channel corrected gray value H_b, the corresponding formulas are:

H_b = H_x3 × e_yB + H_x4 × (1 − e_yB)

H_x3 = I_B(x_interB, y_interB) × e_xB + I_B(x_interB + 1, y_interB) × (1 − e_xB)

H_x4 = I_B(x_interB, y_interB + 1) × e_xB + I_B(x_interB + 1, y_interB + 1) × (1 − e_xB)

e_xB = x_B − x_interB

e_yB = y_B − y_interB

x_interB = floor(x_B)

y_interB = floor(y_B)
Then, the R-channel corrected gray value and the B-channel corrected gray value of each pixel point in the image to be corrected are calculated by traversal and used to replace the initial gray values of the pixel points, thereby realizing the correction of the edge false color of the image.
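Putting the pieces together, the whole per-pixel correction can be sketched as below; pixel_distance_and_angle, correction_coordinates and corrected_gray_value are the illustrative helpers sketched earlier (not functions defined in the patent), and the image is assumed to be an H × W × 3 array in R, G, B order.

```python
def correct_false_color(img, xs, ys, f_r, f_b):
    """Correct edge false color of an RGB image `img` (H x W x 3, channels in R, G, B order).

    xs, ys   : image principal point coordinates.
    f_r, f_b : fitted R- and B-channel calibration functions.
    """
    h, w, _ = img.shape
    out = img.astype(float)
    L, alpha = pixel_distance_and_angle(h, w, xs, ys)
    for y in range(h):
        for x in range(w):
            (xR, yR), (xB, yB) = correction_coordinates(
                x, y, L[y, x], alpha[y, x], f_r, f_b)
            # Only correct where all four control points lie inside the image.
            if 0 <= xR < w - 1 and 0 <= yR < h - 1:
                out[y, x, 0] = corrected_gray_value(img[:, :, 0], xR, yR)
            if 0 <= xB < w - 1 and 0 <= yB < h - 1:
                out[y, x, 2] = corrected_gray_value(img[:, :, 2], xB, yB)
    return out.astype(img.dtype)
```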
As shown in fig. 4, fig. 4(a) is a schematic diagram of false color at an edge position. Although it is not obvious after conversion to a black-and-white image (the width of the false color is usually 3-5 pixels), in the color image the upper region 401 at the black-and-white edge shows an obvious warm-toned false color and the lower region 402 an obvious cool-toned false color. After the false-color correction is applied, the transition at the black-and-white edges in the corrected image of fig. 4(b) is visually essentially gray, i.e., no false color appears, and no additional subsequent processing of the image is required.
Accordingly, the pixel gray values are analysed as shown in fig. 5. As shown in fig. 5(a), before correction the variation trends of the three channels at the edge position are misaligned, with the decrease of the R-channel gray value lagging behind, producing a color-cast phenomenon (misalignment of the three channels' pixel gray values), i.e., a warm false color corresponding to the false color of the upper region 401 in fig. 4(a). As shown in fig. 5(b), after correction the gray-value variation trends at the edge position essentially coincide and the gray values of the three channels are consistent, which proves that no color cast occurs.
As shown in fig. 6, image saturation is introduced for verification. Since the saturation is the difference between the maximum and minimum gray values in the three RGB channels, a smaller saturation value indicates more consistent gray values across the three channels. For a given edge position, before correction, as shown in fig. 6(a), the saturation image is obviously brightened at the edge position, indicating that the region contains color information, i.e., false color is present; after correction, as shown in fig. 6(b), the saturation image is essentially black, indicating no false color.
The technical solution of the present application has been described in detail above with reference to the accompanying drawings. The present application provides a method for correcting false color at the edges of a color camera image, the method comprising: step 1, calculating the pixel distance between the current pixel point and the image principal point in the image to be corrected, and the horizontal angle of the line connecting the current pixel point and the image principal point; step 2, calculating the R-channel correction coordinate and the B-channel correction coordinate corresponding to the pixel distance and the horizontal angle according to the calibration function; step 3, calculating the R-channel corrected gray value and the B-channel corrected gray value of the current pixel point by interpolation according to the R-channel correction coordinate and the B-channel correction coordinate, the corrected gray values being used to correct the current pixel point. With this technical solution, the false color at the edges of the captured color image is corrected, reducing the degree of edge false color and improving the image quality of the color camera.
The steps in the present application may be sequentially adjusted, combined, and subtracted according to actual requirements.
The units in the device can be merged, divided and deleted according to actual requirements.
Although the present application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and not restrictive of the application of the present application. The scope of the present application is defined by the appended claims and may include various modifications, adaptations, and equivalents of the invention without departing from the scope and spirit of the application.

Claims (7)

1. A method for correcting edge pseudo color of a color camera image, the method comprising:
step 1, calculating the pixel distance between a current pixel point and the image principal point in an image to be corrected, and the horizontal angle of the line connecting the current pixel point and the image principal point;
step 2, calculating R channel correction coordinates and B channel correction coordinates corresponding to the pixel distance and the horizontal angle according to a calibration function;
step 3, respectively calculating the R channel correction gray value and the B channel correction gray value of the current pixel point by adopting an interpolation operation mode according to the R channel correction coordinate and the B channel correction coordinate,
and the R channel correction gray value and the B channel correction gray value are used for correcting the current pixel point.
2. The method for correcting the edge pseudo color of the color camera image according to claim 1, wherein the step 3 of calculating the R-channel correction gray scale value specifically comprises:
step 301, determining the coordinates of four control points corresponding to the current pixel point according to the R-channel correction coordinate, and calculating the interpolation ratio according to the difference between the R-channel correction coordinate and the coordinate of a first control point, wherein the coordinate of the first control point is determined from the R-channel correction coordinate by a rounding operation;
step 302, performing interpolation operation according to the interpolation ratio and the initial gray value of the R channel of the current pixel point, and calculating the R channel correction gray value of the current pixel point.
3. The method for correcting the edge pseudo color of the color camera image according to claim 2, wherein the formula for calculating the R-channel corrected gray value is:

H_r = H_x1 × e_yR + H_x2 × (1 − e_yR)

H_x1 = I_R(x_interR, y_interR) × e_xR + I_R(x_interR + 1, y_interR) × (1 − e_xR)

H_x2 = I_R(x_interR, y_interR + 1) × e_xR + I_R(x_interR + 1, y_interR + 1) × (1 − e_xR)

where H_x1 is a first intermediate parameter, H_x2 is a second intermediate parameter, I_R is the initial gray value of the current pixel point in the R channel, (x_interR, y_interR) is the coordinate of the first control point, (x_interR + 1, y_interR) is the coordinate of the second control point, (x_interR, y_interR + 1) is the coordinate of the third control point, (x_interR + 1, y_interR + 1) is the coordinate of the fourth control point, e_xR is the interpolation ratio in the x direction, and e_yR is the interpolation ratio in the y direction.
4. The method for correcting the edge pseudo color of the color camera image according to claim 1, wherein in the step 2, the calibration function is obtained by fitting, for each edge position in the calibration image, the difference between its distance to the calibration principal point in the R channel or the B channel and its distance to the calibration principal point in the G channel, wherein the calibration function comprises at least an R-channel calibration function and a B-channel calibration function.
5. The method for correcting the edge pseudo color of the color camera image according to claim 4, wherein the fitting process of the calibration function specifically comprises:
step 201, acquiring a calibration image of a calibration board, generating a plurality of edge measurement lines based on the calibration principal point of the calibration image, and extracting the edge positions of the calibration image on each edge measurement line in each of the three RGB channels;
step 202, respectively calculating the first distance between each edge position and the calibration principal point in the three RGB channels, the second distance ΔL_i^R between the R channel and the G channel at any edge position, and the third distance ΔL_i^B between the B channel and the G channel;
step 203, performing function fitting according to the first distances of the three RGB channels, the second distances ΔL_i^R and the third distances ΔL_i^B, to generate the calibration function.
6. The method for correcting the edge pseudo color of the color camera image according to claim 5, wherein the edge measurement lines are equiangularly distributed.
7. The method for correcting the edge pseudo color of the color camera image according to claim 6, wherein the included angle between two adjacent edge measurement lines is greater than or equal to 8°.
CN202111204681.XA 2021-10-15 2021-10-15 Correction method for false color of color camera image edge Active CN113870146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111204681.XA CN113870146B (en) 2021-10-15 2021-10-15 Correction method for false color of color camera image edge

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111204681.XA CN113870146B (en) 2021-10-15 2021-10-15 Correction method for false color of color camera image edge

Publications (2)

Publication Number Publication Date
CN113870146A true CN113870146A (en) 2021-12-31
CN113870146B CN113870146B (en) 2024-06-25

Family

ID=78999833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111204681.XA Active CN113870146B (en) 2021-10-15 2021-10-15 Correction method for false color of color camera image edge

Country Status (1)

Country Link
CN (1) CN113870146B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1964425A (en) * 2006-11-28 2007-05-16 北京中星微电子有限公司 A device to restrain pseudo color and its method to restrain pseudo color
CN101815220A (en) * 2009-02-20 2010-08-25 华晶科技股份有限公司 Method for correcting image color distortion
US20110052053A1 (en) * 2009-08-25 2011-03-03 Stmicroelectronics S.R.L. Digital image processing apparatus and method
US20110115942A1 (en) * 2009-09-18 2011-05-19 Teppei Kurita Image processing device, imaging apparatus, imaging processing method, and program
US20210076017A1 (en) * 2016-03-09 2021-03-11 Sony Corporation Image processing apparatus, imaging apparatus, image processing method, and program
CN109345597A (en) * 2018-09-27 2019-02-15 四川大学 A kind of camera calibration image-pickup method and device based on augmented reality
WO2020097851A1 (en) * 2018-11-15 2020-05-22 深圳市大疆创新科技有限公司 Image processing method, control terminal and storage medium
CN112652027A (en) * 2020-12-30 2021-04-13 凌云光技术股份有限公司 Pseudo-color detection algorithm and system
CN113160095A (en) * 2021-05-25 2021-07-23 烟台艾睿光电科技有限公司 Infrared detection signal pseudo-color processing method, device and system and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GANDHI PRIYA PRASHANT, et al.: "Information fusion for images on FPGA: Pixel level with pseudo color", 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM), 6 October 2017 (2017-10-06), pages 185-188, XP033270728, DOI: 10.1109/ICISIM.2017.8122171 *
ZHAO Ming, et al.: "Design and implementation of a pseudo-color image processing system based on Zynq-7000" (基于Zynq-7000的伪彩色图像处理系统设计与实现), Electronic Measurement Technology (电子测量技术), vol. 41, no. 6, 23 March 2018 (2018-03-23), pages 120-123 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474028A (en) * 2022-08-26 2022-12-13 中国大恒(集团)有限公司北京图像视觉技术分公司 Quick color correction device and method for industrial camera
CN115474028B (en) * 2022-08-26 2023-10-17 中国大恒(集团)有限公司北京图像视觉技术分公司 Industrial camera color correction device and method

Also Published As

Publication number Publication date
CN113870146B (en) 2024-06-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant