CN113284189A - Distortion parameter calibration method, device, equipment and storage medium - Google Patents
Distortion parameter calibration method, device, equipment and storage medium
- Publication number
- CN113284189A (application number CN202110515355.4A)
- Authority
- CN
- China
- Prior art keywords
- distortion
- pixel
- sub
- point
- linear equation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a distortion parameter calibration method, device, equipment and storage medium, belonging to the technical field of parameter calibration. The distortion parameter calibration method comprises the steps of obtaining a target image; extracting features of the target image to obtain sub-pixel-level angular points; respectively calculating a first linear equation and a second linear equation according to the sub-pixel-level angular points; calibrating a distortion central point according to the first linear equation and the second linear equation; and calibrating a distortion coefficient according to the distortion central point, the sub-pixel-level angular points and a preset distortion correction model. The distortion parameter calibration method can accurately calibrate distortion parameters such as the distortion center and the distortion coefficient.
Description
Technical Field
The present invention relates to the field of parameter calibration technologies, and in particular, to a distortion parameter calibration method, apparatus, device, and storage medium.
Background
At present, when the distortion parameters of an image are calibrated, the center of the image is often used as the distortion center, which often degrades the accuracy of the distortion parameter calibration.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a distortion parameter calibration method, which has higher accuracy in distortion parameter calibration.
The invention also provides a distortion parameter calibration device with the distortion parameter calibration method.
The invention also provides electronic equipment with the distortion parameter calibration method.
The invention also provides a computer readable storage medium.
The distortion parameter calibration method according to the embodiment of the first aspect of the invention comprises the following steps:
acquiring a target image;
extracting the features of the target image to obtain sub-pixel level angular points;
respectively calculating a first linear equation and a second linear equation according to the sub-pixel level angular points;
calibrating a distortion central point according to the first linear equation and the second linear equation;
and calibrating a distortion coefficient according to the distortion central point, the sub-pixel level angular point and a preset distortion correction model.
The distortion parameter calibration method provided by the embodiment of the invention at least has the following beneficial effects: the distortion parameter calibration method comprises the steps of obtaining a target image, extracting features of the target image to obtain sub-pixel-level angular points, respectively calculating a first linear equation and a second linear equation according to the sub-pixel-level angular points, calibrating a distortion central point according to the first linear equation and the second linear equation, calibrating a distortion coefficient according to the distortion central point, the sub-pixel-level angular points and a preset distortion correction model, and accurately calibrating distortion parameters such as the distortion center and the distortion coefficient.
According to some embodiments of the present invention, the extracting features of the target image to obtain sub-pixel-level corner points includes:
extracting pixel-level angular points in the target image according to a Harris angular point detection algorithm;
and calculating the sub-pixel level angular points according to the pixel level angular points and a Gaussian surface fitting algorithm.
According to some embodiments of the present invention, the calculating a first linear equation and a second linear equation according to the sub-pixel level corner points respectively comprises:
traversing all the sub-pixel-level angular points in rows to respectively obtain a first coordinate value of the sub-pixel-level angular point at the leftmost end of each row and a second coordinate value of the sub-pixel-level angular point at the rightmost end of each row;
obtaining a first endpoint linear equation of each line according to the first coordinate value and the second coordinate value;
calculating a first distance sum from all the sub-pixel-level corner points of each line to a first end point line of the line according to the first end point line equation;
determining a first target distance sum and its corresponding first endpoint straight line according to the magnitude relation of all the first distance sums;
and obtaining the first linear equation according to the first target distance sum and the corresponding first endpoint straight line.
According to some embodiments of the present invention, the calculating a first linear equation and a second linear equation according to the sub-pixel level corner points respectively comprises:
traversing all the sub-pixel-level angular points in columns to respectively obtain a third coordinate value of the sub-pixel-level angular point at the head end of each column and a fourth coordinate value of the sub-pixel-level angular point at the tail end of each column;
obtaining a second endpoint linear equation of each column according to the third coordinate value and the fourth coordinate value;
calculating a second distance sum from all the sub-pixel-level corner points of each column to a second endpoint line of the column according to the second endpoint line equation;
determining a second target distance sum and its corresponding second endpoint straight line according to the magnitude relation of all the second distance sums;
and obtaining the second linear equation according to the second target distance sum and the corresponding second endpoint straight line.
According to some embodiments of the invention, the calibrating the distortion center point according to the first line equation and the second line equation comprises:
and calculating the intersection point of the first straight line and the second straight line according to the first straight line equation and the second straight line equation, wherein the intersection point is the distortion central point.
According to some embodiments of the present invention, the calibrating distortion coefficients according to the distortion central points, the sub-pixel-level corner points and a preset distortion correction model includes:
calculating a corrected sub-pixel-level angular point according to the preset distortion correction model, the distortion central point and the sub-pixel-level angular point;
fitting to obtain a third straight line according to the corrected sub-pixel-level angular points and a least square method, and calculating a third distance sum from the corrected sub-pixel-level angular points to the third straight line;
equally dividing the target image into three image regions;
and optimizing each of the image regions according to the third straight line, the third distance sum and an LM algorithm to obtain the distortion coefficient of each image region.
According to some embodiments of the present invention, after calibrating the distortion coefficient according to the distortion central point, the sub-pixel-level corner point and a preset distortion correction model, the method further includes:
calculating the coordinate value of the target pixel point according to the distortion coefficient;
and carrying out distortion correction processing on the target image according to the coordinate value of the target pixel point and a bilinear interpolation algorithm.
The distortion parameter calibration apparatus according to an embodiment of a second aspect of the present invention includes:
the image acquisition module is used for acquiring a target image;
the characteristic extraction module is used for extracting the characteristics of the target image to obtain sub-pixel level angular points;
the first calculation module is used for respectively calculating a first linear equation and a second linear equation according to the sub-pixel level angular points;
the parameter calibration module is used for calibrating a distortion central point according to the first linear equation and the second linear equation;
and the second calculation module is used for calibrating a distortion coefficient according to the distortion central point, the sub-pixel level angular point and a preset distortion correction model.
The distortion parameter calibration device provided by the embodiment of the invention has at least the following beneficial effects: the device acquires a target image through the image acquisition module; the feature extraction module performs feature extraction on the target image to obtain sub-pixel-level angular points; the first calculation module then calculates a first linear equation and a second linear equation respectively according to the sub-pixel-level angular points; the parameter calibration module calibrates the distortion central point according to the first linear equation and the second linear equation; and the second calculation module accordingly calibrates the distortion coefficient according to the distortion central point, the sub-pixel-level angular points and a preset distortion correction model, so that distortion parameters such as the distortion center and the distortion coefficient can be calibrated accurately.
An electronic device according to an embodiment of the third aspect of the present invention includes:
at least one processor, and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions that are executed by the at least one processor to cause the at least one processor to implement the distortion parameter calibration method as described in the first aspect embodiment when executing the instructions.
According to the electronic equipment provided by the embodiment of the invention, at least the following beneficial effects are achieved: the electronic equipment adopts the distortion parameter calibration method, a target image is obtained, features of the target image are extracted to obtain sub-pixel level angular points, then a first linear equation and a second linear equation are respectively calculated according to the sub-pixel level angular points, and then distortion central points are calibrated according to the first linear equation and the second linear equation, so that distortion coefficients are calibrated according to the distortion central points, the sub-pixel level angular points and a preset distortion correction model, and distortion parameters such as the distortion center, the distortion coefficients and the like can be calibrated more accurately.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the distortion parameter calibration method as described in the first aspect.
The computer-readable storage medium according to the embodiment of the invention has at least the following advantages: the computer-readable storage medium executes the distortion parameter calibration method, and the distortion parameter calibration method comprises the steps of obtaining a target image, extracting features of the target image to obtain sub-pixel level angular points, respectively calculating a first linear equation and a second linear equation according to the sub-pixel level angular points, calibrating a distortion central point according to the first linear equation and the second linear equation, calibrating a distortion coefficient according to the distortion central point, the sub-pixel level angular points and a preset distortion correction model, and calibrating distortion parameters such as the distortion center and the distortion coefficient more accurately.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The invention is further described with reference to the following figures and examples, in which:
FIG. 1 is a flow chart of a distortion parameter calibration method according to an embodiment of the present invention;
FIG. 2 is a flowchart of step S200 in FIG. 1;
FIG. 3 is a flowchart of step S300 in FIG. 1;
FIG. 4 is another flowchart of step S300 in FIG. 1;
FIG. 5 is a flowchart of step S500 in FIG. 1;
FIG. 6 is a partial flow chart of a distortion parameter calibration method according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of a distortion parameter calibration apparatus according to an embodiment of the present invention.
Reference numerals: 710. an image acquisition module; 720. a feature extraction module; 730. a first calculation module; 740. a parameter calibration module; 750. and a second calculation module.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding" and the like are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including the stated number. If "first" and "second" are used, they serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In a first aspect, referring to fig. 1, a distortion parameter calibration method according to an embodiment of the present invention includes:
s100, acquiring a target image;
s200, extracting the characteristics of the target image to obtain sub-pixel level angular points;
s300, respectively calculating a first linear equation and a second linear equation according to the sub-pixel level angular points;
s400, calibrating a distortion central point according to the first linear equation and the second linear equation;
and S500, calibrating a distortion coefficient according to the distortion central point, the sub-pixel level angular point and a preset distortion correction model.
In the process of calibrating the distortion parameters, a target image first needs to be acquired. Specifically, a line laser scanner, a customized calibration plate and a supplementary light source in the same wave band as the laser, arranged according to the Scheimpflug principle, may be used: the customized calibration plate is placed so that its plane coincides with the laser plane, the supplementary light source is turned on, and an image of the calibration plate is captured to obtain the target image. It should be noted that the Scheimpflug principle states that when the extension planes of the object plane, the image plane and the lens plane intersect in a single straight line, a fully sharp image can be obtained. The target image may be a checkerboard image. Feature extraction is then performed on the target image to obtain sub-pixel-level corner points: pixel-level corner points in the target image are extracted with the Harris corner detection algorithm, and the extracted pixel-level corner points are refined with a Gaussian surface fitting algorithm to obtain the sub-pixel-level corner points.
A first linear equation and a second linear equation are then calculated from the sub-pixel-level corner points. All sub-pixel-level corner points are traversed by rows to obtain, for each row, a first coordinate value of the leftmost corner point and a second coordinate value of the rightmost corner point; a first endpoint line equation of each row is obtained from the first and second coordinate values; a first distance sum from all corner points of each row to that row's first endpoint line is calculated from the first endpoint line equation; a first target distance sum and its corresponding first endpoint straight line are determined from the magnitude relation of all the first distance sums; and the first linear equation is obtained from the first target distance sum and the corresponding first endpoint straight line. Likewise, all sub-pixel-level corner points are traversed by columns to obtain, for each column, a third coordinate value of the corner point at the head end and a fourth coordinate value of the corner point at the tail end; a second endpoint line equation of each column is obtained from the third and fourth coordinate values; a second distance sum from all corner points of each column to that column's second endpoint line is calculated; a second target distance sum and its corresponding second endpoint straight line are determined from the magnitude relation of all the second distance sums; and the second linear equation is obtained from the second target distance sum and the corresponding second endpoint straight line. It should be noted that the first distance sum characterizes the degree of bending of a row's endpoint line and the second distance sum characterizes the degree of bending of a column's endpoint line; the first target distance sum with its first endpoint straight line and the second target distance sum with its second endpoint straight line therefore correspond to the straight lines with the smallest degree of bending among all the candidate lines. The first linear equation and the second linear equation can thus be calculated from these two lines, and the intersection point of the first straight line and the second straight line is calculated from the two equations; this intersection point is the distortion central point.
The distortion coefficient can then be calibrated conveniently from the distortion central point, the sub-pixel-level corner points and a preset distortion correction model: corrected sub-pixel-level corner points are calculated from the preset distortion correction model, the distortion central point and the sub-pixel-level corner points; a third straight line is fitted to the corrected corner points by the least square method, and a third distance sum from the corrected corner points to the third straight line is calculated; the target image is divided equally into three image regions; and each image region is optimized according to the third straight line, the third distance sum and the LM algorithm to obtain the distortion coefficient of each image region. Distortion parameters such as the distortion center and the distortion coefficient can thus be calibrated accurately.
Referring to fig. 2, in some embodiments, step S200 includes:
s210, extracting pixel-level corners in the target image according to a Harris corner detection algorithm;
and S220, calculating sub-pixel-level angular points according to the pixel-level angular points and a Gaussian surface fitting algorithm.
In the process of calculating the sub-pixel-level corner points, the pixel positions of the checkerboard corner points in the target image are first extracted with the Harris corner detection algorithm to obtain the pixel-level corner points. The core of the Harris algorithm is to slide a local window over the image and judge whether the gray level changes strongly: if the gray values (on the gradient map) inside the window vary greatly, the window region contains a corner point. The Harris corner detection algorithm can therefore be divided into three steps: calculate the pixel value variation E(x, y) inside the window (local area) when the window is moved in the x (horizontal) and y (vertical) directions; for each window, calculate the corresponding corner response function R; and then threshold this function, where R > threshold means the window corresponds to a corner feature. Specifically, the approximate Hessian matrix M of the local gray autocorrelation function E(x, y) of the target image is first calculated, M(x, y) = [ A(x, y), C(x, y); C(x, y), B(x, y) ], and the corner detector is then computed as R(x, y) = det[M(x, y)] − k·trace²[M(x, y)], where det[M(x, y)] = A(x, y)B(x, y) − C²(x, y), trace[M(x, y)] = A(x, y) + B(x, y), and k = 0.04. Furthermore, a threshold value is set according to the actual situation, and R(x, y) at each position of the target image is compared with this threshold; if R(x, y) at some point exceeds the threshold, that point is determined to be a pixel-level corner point, so the pixel-level corner points in the target image are obtained conveniently. A Gaussian surface fitting algorithm is then applied to the extracted pixel-level corner points to calculate the sub-pixel-level corner points, the Gaussian surface fitting function being a two-dimensional Gaussian of the form f(x, y) = c·exp(−((x − x₀)² + (y − y₀)²)/σ²), where c is a constant and the peak (x₀, y₀) gives the sub-pixel corner position. The sub-pixel-level corner points can thus be calculated conveniently, and the calculation is simple and fast.
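For concreteness, the following Python sketch mirrors the Harris detection and sub-pixel refinement described above. It is a minimal illustration, not the patent's implementation: the 5x5 averaging window, the response threshold, the absence of non-maximum suppression, and the log-domain quadratic peak fit (used here as a stand-in for the Gaussian surface fit, whose exact form is not reproduced above) are all assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def harris_subpixel_corners(img, k=0.04, threshold=1e6, win=5):
    """Pixel-level Harris corners refined to sub-pixel positions.

    Minimal sketch: no non-maximum suppression, and a log-domain quadratic
    peak fit stands in for the patent's Gaussian surface fit.
    """
    img = img.astype(np.float64)
    ix = sobel(img, axis=1)                      # gradient in x (horizontal)
    iy = sobel(img, axis=0)                      # gradient in y (vertical)
    # Elements of the local autocorrelation matrix M = [[A, C], [C, B]]
    a = uniform_filter(ix * ix, win)
    b = uniform_filter(iy * iy, win)
    c = uniform_filter(ix * iy, win)
    # Corner response R = det(M) - k * trace(M)^2, with k = 0.04
    r = (a * b - c * c) - k * (a + b) ** 2

    corners = []
    for y, x in zip(*np.where(r > threshold)):   # pixel-level corners
        if 0 < x < img.shape[1] - 1 and 0 < y < img.shape[0] - 1:
            patch = np.clip(r[y - 1:y + 2, x - 1:x + 2], 1e-12, None)
            z = np.log(patch)                    # Gaussian surface -> quadratic in log domain
            dx_den = z[1, 0] - 2 * z[1, 1] + z[1, 2]
            dy_den = z[0, 1] - 2 * z[1, 1] + z[2, 1]
            if dx_den == 0 or dy_den == 0:       # flat response: keep the integer position
                corners.append((float(x), float(y)))
                continue
            dx = (z[1, 0] - z[1, 2]) / (2 * dx_den)
            dy = (z[0, 1] - z[2, 1]) / (2 * dy_den)
            corners.append((x + dx, y + dy))     # sub-pixel-level corner
    return np.array(corners)
```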
Referring to fig. 3, in some embodiments, step S300 includes:
s311, traversing all the sub-pixel-level angular points according to rows, and respectively obtaining a first coordinate value of the sub-pixel-level angular point at the leftmost end of each row and a second coordinate value of the sub-pixel-level angular point at the rightmost end of each row;
s312, obtaining a first endpoint linear equation of each line according to the first coordinate value and the second coordinate value;
s313, calculating a first distance sum from all sub-pixel level corner points of each line to a first end point line of the line according to a first end point line equation;
s314, determining a first target distance sum and its corresponding first endpoint straight line according to the magnitude relation of all the first distance sums;
and S315, obtaining a first linear equation according to the first target distance sum and the corresponding first endpoint straight line.
In the process of calibrating the distortion center, all sub-pixel-level corner points need to be traversed by rows, obtaining for each row a first coordinate value of the leftmost sub-pixel-level corner point and a second coordinate value of the rightmost sub-pixel-level corner point. A first endpoint line equation of each row is obtained from the first and second coordinate values, and a first distance sum from all sub-pixel-level corner points of each row to that row's first endpoint line is calculated from the first endpoint line equation; the first distance sum characterizes the degree of bending of the row. A first target distance sum and its corresponding first endpoint straight line are then determined from the magnitude relation of all the first distance sums: the two first endpoint straight lines with the smallest first distance sums are selected and denoted the first distortion straight line and the second distortion straight line; from these two distortion straight lines the coordinates of a point on a third distortion straight line are obtained, so that the first distance sum of the third distortion straight line can be calculated; the combination of the third distortion straight line with whichever of the first and second distortion straight lines has the smaller first distance sum is then selected, and the above steps are repeated iteratively until the termination condition is met, whereupon the line equation at termination is recorded and taken as the first linear equation. The termination condition may be that the number of iterations N exceeds 50, or that the calculated first distance sum d_i satisfies d_i < 1e-06 pixel. For example, the coordinates of the left end point (x1, y1) and of the right end point (x2, y2) of the i-th row are recorded, and the line l_i determined by these two end points, i.e. the first endpoint line equation of the i-th row, is a_i·x + b_i·y + c_i = 0, where a, b and c are constants. The first distance sum d_i from all corner points of row i to the line l_i is then d_i = Σ_j |a_i·x_j + b_i·y_j + c_i| / sqrt(a_i² + b_i²), where (x_j, y_j) are the sub-pixel-level corner points of row i. According to the magnitude relation of all the first distance sums, the two first endpoint straight lines with the smallest first distance sums are selected and denoted the first distortion straight line and the second distortion straight line, their first distance sums being d_i and d_(i+1) respectively; from these two distortion straight lines the coordinates of a point on the third distortion straight line are obtained, the combination of the third distortion straight line with the smaller of the two first distance sums is selected, and the above steps are repeated iteratively until the number of iterations N exceeds 50 or the calculated first distance sum d_i satisfies d_i < 1e-06 pixel, at which point the recorded line equation is a_1·x + b_1·y + c_1 = 0. The first linear equation can thus be obtained conveniently, and the calculation is simple and fast.
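As a rough illustration of the row-wise selection just described, the sketch below forms the end-point line of each row and scores it by the summed point-to-line distance, so the smallest sums identify the least-bent rows. The input format (a list of corner rows ordered left to right) and the helper names are assumptions, and the subsequent iterative construction of a third candidate line is omitted.

```python
import numpy as np

def endpoint_line(p1, p2):
    """Line a*x + b*y + c = 0 through two end points, normalised so a^2 + b^2 = 1."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y1 - y2, x2 - x1
    c = x1 * y2 - x2 * y1
    n = np.hypot(a, b)
    return a / n, b / n, c / n

def distance_sum(points, line):
    """Sum of distances from a row's corners to its end-point line:
    d_i = sum_j |a*x_j + b*y_j + c|, a measure of how strongly the row is bent."""
    a, b, c = line
    pts = np.asarray(points, dtype=float)
    return float(np.abs(a * pts[:, 0] + b * pts[:, 1] + c).sum())

def straightest_rows(rows, n_best=2):
    """rows: list of corner-point lists, each ordered from leftmost to rightmost.
    Returns the n_best (distance sum, line, points) entries with the smallest
    distance sum, i.e. the least-bent rows used to seed the iteration."""
    scored = []
    for pts in rows:
        line = endpoint_line(pts[0], pts[-1])
        scored.append((distance_sum(pts, line), line, pts))
    scored.sort(key=lambda item: item[0])
    return scored[:n_best]
```

The same routine applies unchanged to the column-wise traversal of the next step; the iteration that refines a third candidate line until d_i < 1e-06 pixel or 50 iterations is left out of this sketch.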
Referring to fig. 4, in some embodiments, step S300 further includes:
s321, traversing all the sub-pixel-level angular points according to columns, and respectively obtaining a third coordinate value of the sub-pixel-level angular point at the head end of each column and a fourth coordinate value of the sub-pixel-level angular point at the tail end of each column;
s322, obtaining a second endpoint linear equation of each column according to the third coordinate value and the fourth coordinate value;
s323, calculating a second distance sum from all sub-pixel level corner points of each column to a second endpoint line of the column according to a second endpoint line equation;
s324, determining a second target distance sum and its corresponding second endpoint straight line according to the magnitude relation of all the second distance sums;
and S325, obtaining a second linear equation according to the second target distance sum and the corresponding second endpoint straight line.
In the process of calibrating the distortion center, all sub-pixel-level corner points are likewise traversed by columns, obtaining for each column a third coordinate value of the sub-pixel-level corner point at the head end and a fourth coordinate value of the sub-pixel-level corner point at the tail end. A second endpoint line equation of each column is obtained from the third and fourth coordinate values, a second distance sum from all sub-pixel-level corner points of each column to that column's second endpoint line is calculated from the second endpoint line equation, and a second target distance sum and its corresponding second endpoint straight line are determined from the magnitude relation of all the second distance sums. It should be noted that the second distance sum characterizes the degree of bending of the column: the two second endpoint straight lines with the smallest second distance sums are selected and denoted the fourth distortion straight line and the fifth distortion straight line; from these two distortion straight lines the coordinates of a point on a sixth distortion straight line are obtained, so that the second distance sum of the sixth distortion straight line can be calculated; the combination of the sixth distortion straight line with whichever of the fourth and fifth distortion straight lines has the smaller second distance sum is selected, and the above steps are repeated iteratively; when the termination condition is met, the line equation at termination is recorded and taken as the second linear equation. The termination condition may be that the number of iterations N exceeds 50, or that the calculated second distance sum d_i satisfies d_i < 1e-06 pixel, so the second linear equation obtained is a_2·x + b_2·y + c_2 = 0. The intersection point (u_0, v_0) of the first straight line and the second straight line is then calculated from the first linear equation and the second linear equation; this intersection point is the distortion central point, so the distortion center can be calibrated accurately.
In some embodiments, step S400 includes:
and calculating the intersection point of the first straight line and the second straight line according to the first straight line equation and the second straight line equation, wherein the intersection point is the distortion central point.
Through the iterative computation above, the two straight lines at which the iteration terminates, namely the first straight line and the second straight line, are obtained conveniently; the intersection point (u_0, v_0) of the first straight line and the second straight line is then calculated from the first linear equation and the second linear equation, and this intersection point is the distortion central point, so the distortion center can be calibrated accurately.
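A small sketch of this step: given the two line equations a1·x + b1·y + c1 = 0 and a2·x + b2·y + c2 = 0, the distortion centre (u0, v0) is their intersection, computed here by Cramer's rule.

```python
def line_intersection(l1, l2):
    """Intersection (u0, v0) of a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:                     # parallel lines: no unique distortion centre
        raise ValueError("the two lines do not intersect")
    u0 = (b1 * c2 - b2 * c1) / det
    v0 = (a2 * c1 - a1 * c2) / det
    return u0, v0
```

Calling line_intersection((a1, b1, c1), (a2, b2, c2)) on the two terminating line equations returns the distortion centre used in the following step.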
Referring to fig. 5, in some embodiments, step S500 includes:
s510, calculating a corrected sub-pixel-level corner according to a preset distortion correction model, a distortion central point and a sub-pixel-level corner;
s520, fitting to obtain a third straight line according to the corrected sub-pixel-level angular points and a least square method, and calculating a third distance sum from the corrected sub-pixel-level angular points to the third straight line;
s530, equally dividing the target image into three image areas;
and S540, optimizing each image area according to the third straight line, the third distance sum and the LM algorithm to obtain the distortion coefficient of each image area.
After the distortion center is calibrated, the distortion coefficient needs to be calibrated. Specifically, the corrected sub-pixel-level corner points are calculated from a preset distortion correction model, the distortion central point and the sub-pixel-level corner points, where the preset distortion correction model may adopt a polynomial model and an initial value of its coefficient k_1 may be preset according to the actual situation. The corrected sub-pixel-level corner points are computed from the distortion correction model, the distortion central point and the sub-pixel-level corner points; a third straight line L_i is then fitted to the corrected corner points by the least square method, and a third distance sum D_i from the corrected sub-pixel-level corner points to the third straight line is calculated. Because the degrees of distortion in the upper and lower parts of the image differ under the Scheimpflug condition, the target image is divided equally into three image regions, and each image region is optimized according to the third straight line, the third distance sum and the LM algorithm. To improve the calibration accuracy, the iteration may be terminated when the number of iterations N exceeds 50 or when the third distance sum D_i falls below a preset convergence threshold, thereby obtaining the distortion coefficients k_up, k_mid and k_down of the three image regions, so that the distortion coefficients can be calibrated accurately.
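The sketch below illustrates one way such a per-region refinement could look; it is only a sketch under stated assumptions. The single-coefficient radial polynomial in undistort_points is an assumed stand-in for the patent's correction model (whose exact formula is not reproduced above), scipy's least_squares with method="lm" stands in for the LM algorithm, the straight-line fit assumes roughly horizontal corner rows, and the function and parameter names are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort_points(pts, k, center):
    """Assumed single-coefficient radial model about the distortion centre:
    p_corrected = center + (p - center) * (1 + k * r^2)."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d * (1.0 + k * r2)

def straightness_residuals(k, rows, center):
    """For each row of corners in the region, fit a least-squares line to the
    corrected points and return the point-to-line distances (the 'third distance' terms)."""
    res = []
    for pts in rows:
        und = undistort_points(np.asarray(pts, dtype=float), k[0],
                               np.asarray(center, dtype=float))
        m, q = np.polyfit(und[:, 0], und[:, 1], 1)      # least-squares line y = m*x + q
        res.append(np.abs(m * und[:, 0] - und[:, 1] + q) / np.hypot(m, 1.0))
    return np.concatenate(res)

def calibrate_region_coefficient(rows, center, k_init=0.0):
    """Levenberg-Marquardt refinement of the distortion coefficient of one image
    region; called once each for the upper, middle and lower thirds of the image."""
    sol = least_squares(straightness_residuals, x0=[k_init],
                        args=(rows, center), method="lm", max_nfev=50)
    return float(sol.x[0])
```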
Referring to fig. 6, in some embodiments, the distortion parameter calibration method further comprises:
s600, calculating coordinate values of target pixel points according to the distortion coefficients;
s700, distortion correction processing is carried out on the target image according to the coordinate value of the target pixel point and the bilinear interpolation algorithm.
In order to perform distortion correction processing on the target image, after the distortion central point and the distortion coefficient are calibrated, the coordinate value of the target pixel point, i.e. the position of the ideal pixel point, can be calculated from the distortion coefficient, and distortion correction processing is then performed on the target image according to the coordinate value of the target pixel point and a bilinear interpolation algorithm. The target pixel point is computed from the corresponding distortion coefficient with x_u = u − u_0 and y_u = v − v_0 denoting the pixel coordinates relative to the distortion center (u_0, v_0). In this way, distortion correction processing can be performed on the target image accurately, which improves the accuracy of image recognition.
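For illustration only, the following sketch builds the corrected image by bilinear sampling. It assumes a forward radial model that maps each ideal output pixel to a distorted source position using the coefficient of its image region; the patent's actual target-pixel formula is not reproduced above, and in practice the fitted model may need to be inverted.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinearly interpolated intensity of img at the non-integer point (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    if x0 < 0 or y0 < 0 or x0 + 1 >= img.shape[1] or y0 + 1 >= img.shape[0]:
        return 0.0                                         # outside the source image
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
    bottom = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bottom

def correct_image(img, coeff_for_row, center):
    """Distortion-corrected image: every output pixel samples the source image at
    the position predicted by the assumed radial model. coeff_for_row maps an
    output row index to the coefficient of its region (upper / middle / lower)."""
    u0, v0 = center
    out = np.zeros(img.shape, dtype=np.float64)
    for v in range(img.shape[0]):
        k = coeff_for_row(v)
        for u in range(img.shape[1]):
            xu, yu = u - u0, v - v0                        # coordinates about the distortion centre
            r2 = xu * xu + yu * yu
            xd = u0 + xu * (1.0 + k * r2)                  # assumed source position
            yd = v0 + yu * (1.0 + k * r2)
            out[v, u] = bilinear_sample(img, xd, yd)
    return out
```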
In a second aspect, referring to fig. 7, a distortion parameter calibration apparatus according to an embodiment of the present invention includes:
an image acquisition module 710 for acquiring a target image;
the feature extraction module 720 is configured to perform feature extraction on the target image to obtain sub-pixel-level corner points;
the first calculating module 730 is configured to calculate a first linear equation and a second linear equation according to the sub-pixel-level corner points;
the parameter calibration module 740 is configured to calibrate a distortion center point according to the first linear equation and the second linear equation;
and a second calculating module 750, configured to calibrate a distortion coefficient according to the distortion central point, the sub-pixel-level corner point, and a preset distortion correction model.
In the process of calibrating the distortion parameters, the image acquisition module 710 first needs to acquire a target image. Specifically, a line laser scanner, a customized calibration plate and a supplementary light source in the same wave band as the laser, arranged according to the Scheimpflug principle, may be used: the customized calibration plate is placed so that its plane coincides with the laser plane, the supplementary light source is turned on, and an image of the calibration plate is captured to obtain the target image. The feature extraction module 720 then performs feature extraction on the target image to obtain sub-pixel-level corner points, which may be realized by first extracting pixel-level corner points in the target image with the Harris corner detection algorithm and then refining the extracted pixel-level corner points with a Gaussian surface fitting algorithm. The first calculation module 730 calculates a first linear equation and a second linear equation from the sub-pixel-level corner points: all sub-pixel-level corner points are traversed by rows to obtain, for each row, a first coordinate value of the leftmost corner point and a second coordinate value of the rightmost corner point; a first endpoint line equation of each row is obtained from the first and second coordinate values; a first distance sum from all corner points of each row to that row's first endpoint line is calculated from the first endpoint line equation; a first target distance sum and its corresponding first endpoint straight line are determined from the magnitude relation of all the first distance sums; and the first linear equation is obtained from them. Likewise, all sub-pixel-level corner points are traversed by columns to obtain, for each column, a third coordinate value of the corner point at the head end and a fourth coordinate value of the corner point at the tail end; a second endpoint line equation of each column is obtained from the third and fourth coordinate values; a second distance sum from all corner points of each column to that column's second endpoint line is calculated; a second target distance sum and its corresponding second endpoint straight line are determined from the magnitude relation of all the second distance sums; and the second linear equation is obtained from them. It should be noted that the first distance sum and the second distance sum characterize the degree of bending of the corresponding row and column endpoint lines, so the selected first and second target distance sums correspond to the straight lines with the smallest degree of bending among all the candidate lines.
The parameter calibration module 740 can therefore calculate the first linear equation and the second linear equation from these two lines and obtain the intersection point of the first straight line and the second straight line, which is the distortion central point. The second calculation module 750 can then conveniently calibrate the distortion coefficient from the distortion central point, the sub-pixel-level corner points and a preset distortion correction model: the corrected sub-pixel-level corner points are calculated from the preset distortion correction model, the distortion central point and the sub-pixel-level corner points; a third straight line is fitted to the corrected corner points by the least square method, and a third distance sum from the corrected corner points to the third straight line is calculated; the target image is divided equally into three image regions; and each image region is optimized according to the third straight line, the third distance sum and the LM algorithm to obtain the distortion coefficient of each image region, so that distortion parameters such as the distortion center and the distortion coefficient can be calibrated accurately.
In a third aspect, an electronic device of an embodiment of the invention includes at least one processor, and a memory communicatively coupled to the at least one processor; the memory stores instructions, and the instructions are executed by the at least one processor, so that the at least one processor, when executing the instructions, implements the distortion parameter calibration method as described in the first aspect.
According to the electronic equipment provided by the embodiment of the invention, at least the following beneficial effects are achieved: the electronic equipment adopts the distortion parameter calibration method, a target image is obtained, features of the target image are extracted to obtain sub-pixel level angular points, then a first linear equation and a second linear equation are respectively calculated according to the sub-pixel level angular points, and then distortion central points are calibrated according to the first linear equation and the second linear equation, so that distortion coefficients are calibrated according to the distortion central points, the sub-pixel level angular points and a preset distortion correction model, and distortion parameters such as the distortion center, the distortion coefficients and the like can be calibrated more accurately.
In a fourth aspect, the present invention further provides a computer-readable storage medium. The computer-readable storage medium stores computer-executable instructions for causing a computer to perform the distortion parameter calibration method as described in the embodiments of the first aspect.
The computer-readable storage medium according to the embodiment of the invention has at least the following advantages: the computer-readable storage medium executes the distortion parameter calibration method, the target image is obtained, the features of the target image are extracted, the sub-pixel level angular points are obtained, then the first linear equation and the second linear equation are respectively calculated according to the sub-pixel level angular points, the distortion central point is calibrated according to the first linear equation and the second linear equation, the distortion coefficient is calibrated according to the distortion central point, the sub-pixel level angular points and a preset distortion correction model, and therefore the distortion parameters such as the distortion center, the distortion coefficient and the like can be accurately calibrated.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention. Furthermore, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.
Claims (10)
1. A distortion parameter calibration method, characterized by comprising the following steps:
acquiring a target image;
extracting the features of the target image to obtain sub-pixel level angular points;
respectively calculating a first linear equation and a second linear equation according to the sub-pixel level angular points;
calibrating a distortion central point according to the first linear equation and the second linear equation;
and calibrating a distortion coefficient according to the distortion central point, the sub-pixel level angular point and a preset distortion correction model.
2. The distortion parameter calibration method according to claim 1, wherein the extracting the features of the target image to obtain sub-pixel-level corner points comprises:
extracting pixel-level angular points in the target image according to a Harris angular point detection algorithm;
and calculating the sub-pixel level angular points according to the pixel level angular points and a Gaussian surface fitting algorithm.
3. A distortion parameter calibration method as claimed in claim 1, wherein the calculating a first linear equation and a second linear equation respectively according to the sub-pixel level corner points comprises:
traversing all the sub-pixel-level angular points in rows to respectively obtain a first coordinate value of the sub-pixel-level angular point at the leftmost end of each row and a second coordinate value of the sub-pixel-level angular point at the rightmost end of each row;
obtaining a first endpoint linear equation of each line according to the first coordinate value and the second coordinate value;
calculating a first distance sum from all the sub-pixel-level corner points of each line to a first end point line of the line according to the first end point line equation;
determining a first target distance sum and its corresponding first endpoint straight line according to the magnitude relation of all the first distance sums;
and obtaining the first linear equation according to the first target distance sum and the corresponding first endpoint straight line.
4. A distortion parameter calibration method as claimed in claim 1, wherein the calculating a first linear equation and a second linear equation respectively according to the sub-pixel level corner points comprises:
traversing all the sub-pixel-level angular points in columns to respectively obtain a third coordinate value of the sub-pixel-level angular point at the head end of each column and a fourth coordinate value of the sub-pixel-level angular point at the tail end of each column;
obtaining a second endpoint linear equation of each column according to the third coordinate value and the fourth coordinate value;
calculating a second distance sum from all the sub-pixel-level corner points of each column to a second endpoint line of the column according to the second endpoint line equation;
determining a second target distance sum and its corresponding second endpoint straight line according to the magnitude relation of all the second distance sums;
and obtaining the second linear equation according to the second target distance sum and the corresponding second endpoint straight line.
5. A distortion parameter calibration method as set forth in claim 1, wherein said calibrating the distortion center point based on said first line equation and said second line equation comprises:
and calculating the intersection point of the first straight line and the second straight line according to the first straight line equation and the second straight line equation, wherein the intersection point is the distortion central point.
6. The distortion parameter calibration method according to claim 1, wherein calibrating the distortion coefficient according to the distortion central point, the sub-pixel-level corner points and a preset distortion correction model comprises:
calculating a corrected sub-pixel-level angular point according to the preset distortion correction model, the distortion central point and the sub-pixel-level angular point;
fitting to obtain a third straight line according to the corrected sub-pixel-level angular points and a least square method, and calculating a third distance sum from the corrected sub-pixel-level angular points to the third straight line;
equally dividing the target image into three image regions;
and optimizing each image area according to the third straight line, the third distance sum and the LM algorithm to obtain the distortion coefficient of each image area.
7. The distortion parameter calibration method according to any one of claims 1 to 6, wherein after calibrating the distortion coefficients according to the distortion center points, the sub-pixel-level corner points and a preset distortion correction model, the method further comprises:
calculating the coordinate value of the target pixel point according to the distortion coefficient;
and carrying out distortion correction processing on the target image according to the coordinate value of the target pixel point and a bilinear interpolation algorithm.
8. A distortion parameter calibration device, characterized by comprising:
the image acquisition module is used for acquiring a target image;
the characteristic extraction module is used for extracting the characteristics of the target image to obtain sub-pixel level angular points;
the first calculation module is used for respectively calculating a first linear equation and a second linear equation according to the sub-pixel level angular points;
the parameter calibration module is used for calibrating a distortion central point according to the first linear equation and the second linear equation;
and the second calculation module is used for calibrating a distortion coefficient according to the distortion central point, the sub-pixel level angular point and a preset distortion correction model.
9. An electronic device, comprising:
at least one processor, and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions for execution by the at least one processor to cause the at least one processor, when executing the instructions, to implement a distortion parameter calibration method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores computer-executable instructions for causing a computer to execute the distortion parameter calibration method as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110515355.4A CN113284189B (en) | 2021-05-12 | 2021-05-12 | Distortion parameter calibration method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110515355.4A CN113284189B (en) | 2021-05-12 | 2021-05-12 | Distortion parameter calibration method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113284189A true CN113284189A (en) | 2021-08-20 |
CN113284189B CN113284189B (en) | 2024-07-19 |
Family
ID=77278632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110515355.4A Active CN113284189B (en) | 2021-05-12 | 2021-05-12 | Distortion parameter calibration method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113284189B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116152350A (en) * | 2022-12-09 | 2023-05-23 | 禾多科技(北京)有限公司 | Internal reference evaluation method, device, terminal and storage medium |
CN116823681A (en) * | 2023-08-31 | 2023-09-29 | 尚特杰电力科技有限公司 | Method, device and system for correcting distortion of infrared image and storage medium |
CN117111046A (en) * | 2023-10-25 | 2023-11-24 | 深圳市安思疆科技有限公司 | Distortion correction method, system, device and computer readable storage medium |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130101204A1 (en) * | 2011-10-19 | 2013-04-25 | Lee F. Holeva | Generating a composite score for a possible pallet in an image scene |
CN103116889A (en) * | 2013-02-05 | 2013-05-22 | 海信集团有限公司 | Positioning method and electronic device |
KR20150002995A (en) * | 2013-06-28 | 2015-01-08 | (주) 세인 | Distortion Center Correction Method Applying 2D Pattern to FOV Distortion Correction Model |
US20160140713A1 (en) * | 2013-07-02 | 2016-05-19 | Guy Martin | System and method for imaging device modelling and calibration |
CN104820973A (en) * | 2015-05-07 | 2015-08-05 | 河海大学 | Image correction method for distortion curve radian detection template |
CN104881874A (en) * | 2015-06-04 | 2015-09-02 | 西北工业大学 | Double-telecentric lens calibration method based on binary quartic polynomial distortion error compensation |
CN106780628A (en) * | 2016-12-24 | 2017-05-31 | 大连日佳电子有限公司 | High Precision Camera Calibration method based on mixing distortion model |
CN107610199A (en) * | 2017-09-11 | 2018-01-19 | 常州新途软件有限公司 | Real-time backing track display methods, system and wide trajectory method for drafting |
CN108876749A (en) * | 2018-07-02 | 2018-11-23 | 南京汇川工业视觉技术开发有限公司 | A kind of lens distortion calibration method of robust |
CN108986172A (en) * | 2018-07-25 | 2018-12-11 | 西北工业大学 | A kind of single-view linear camera scaling method towards small depth of field system |
CN111667536A (en) * | 2019-03-09 | 2020-09-15 | 华东交通大学 | Parameter calibration method based on zoom camera depth estimation |
CN110738707A (en) * | 2019-10-16 | 2020-01-31 | 北京华捷艾米科技有限公司 | Distortion correction method, device, equipment and storage medium for cameras |
CN110910456A (en) * | 2019-11-22 | 2020-03-24 | 大连理工大学 | Stereo camera dynamic calibration algorithm based on Harris angular point mutual information matching |
Non-Patent Citations (6)
Title |
---|
WAECEO: "Detailed Derivation of Camera Distortion" (相机畸变详细推导), 《HTTPS://BLOG.CSDN.NET/WAECEO/ARTICLE/DETAILS/51024396》, 31 March 2016 (2016-03-31) *
WEN-YIN CHEN et al.: "Perspective Preserving Style Transfer for Interior Portraits", IEEE ACCESS, 1 January 2021 (2021-01-01) *
姚文韬; 沈春锋; 董文生: "An Optimized Adaptive Camera Calibration Algorithm" (一种优化的自适应摄像机标定算法), Control Engineering of China (控制工程), no. 1, 20 November 2017 (2017-11-20) *
王富治; 黄大贵: "Research on Precision Distortion Correction Based on Image Difference" (基于图像差分的精密畸变校正研究), Chinese Journal of Scientific Instrument (仪器仪表学报), no. 02, 15 February 2010 (2010-02-15) *
王晓华; 董煜文; 李; 张蕾; 张森宇: "A Method for Improving the Parameter Calibration Accuracy of a Robot Vision System" (一种提高机器人视觉系统参数标定精度的方法), Machine Tool & Hydraulics (机床与液压), no. 15, 15 August 2018 (2018-08-15) *
赵漫丹 et al.: "A Single-Image Distortion Correction Method Using Straight-Line Features" (利用直线特征进行单幅图像畸变校正方法), Geomatics and Information Science of Wuhan University (武汉大学学报(信息科学版)), 5 January 2018 (2018-01-05) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116152350A (en) * | 2022-12-09 | 2023-05-23 | 禾多科技(北京)有限公司 | Internal reference evaluation method, device, terminal and storage medium |
CN116152350B (en) * | 2022-12-09 | 2023-12-29 | 禾多科技(北京)有限公司 | Internal reference evaluation method, device, terminal and storage medium |
CN116823681A (en) * | 2023-08-31 | 2023-09-29 | 尚特杰电力科技有限公司 | Method, device and system for correcting distortion of infrared image and storage medium |
CN116823681B (en) * | 2023-08-31 | 2024-01-26 | 尚特杰电力科技有限公司 | Method, device and system for correcting distortion of infrared image and storage medium |
CN117111046A (en) * | 2023-10-25 | 2023-11-24 | 深圳市安思疆科技有限公司 | Distortion correction method, system, device and computer readable storage medium |
CN117111046B (en) * | 2023-10-25 | 2024-01-12 | 深圳市安思疆科技有限公司 | Distortion correction method, system, device and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113284189B (en) | 2024-07-19 |
Similar Documents
Publication | Title |
---|---|
CN113284189B (en) | Distortion parameter calibration method, device, equipment and storage medium | |
CN110689579B (en) | Rapid monocular vision pose measurement method and measurement system based on cooperative target | |
CN101699313B (en) | Method and system for calibrating external parameters based on camera and three-dimensional laser radar | |
CN110443879B (en) | Perspective error compensation method based on neural network | |
CN101673397B (en) | Digital camera nonlinear calibration method based on LCDs | |
CN106960449B (en) | Heterogeneous registration method based on multi-feature constraint | |
CN111488874A (en) | Method and system for correcting inclination of pointer instrument | |
CN111263142B (en) | Method, device, equipment and medium for testing optical anti-shake of camera module | |
CN112132907B (en) | Camera calibration method and device, electronic equipment and storage medium | |
CN104820973B (en) | The method for correcting image of distortion curve radian detection template | |
CN109583365A (en) | Method for detecting lane lines is fitted based on imaging model constraint non-uniform B-spline curve | |
CN112862895B (en) | Fisheye camera calibration method, device and system | |
CN112381847A (en) | Pipeline end head space pose measuring method and system | |
CN109859101A (en) | The recognition methods of corps canopy thermal infrared images and system | |
CN103413319A (en) | Industrial camera parameter on-site calibration method | |
CN112529792B (en) | Distortion correction method for distortion-free model camera | |
CN111667429B (en) | Target positioning correction method for inspection robot | |
CN111462216B (en) | Method for determining circle center pixel coordinates in circular array calibration plate | |
CN117372498A (en) | Multi-pose bolt size measurement method based on three-dimensional point cloud | |
CN117237424A (en) | Registering method, feature matching method and equipment for infrared and visible light images | |
CN112116644A (en) | Vision-based obstacle detection method and device and obstacle distance calculation method and device | |
CN106228593B (en) | A kind of image dense Stereo Matching method | |
CN114004949B (en) | Airborne point cloud-assisted mobile measurement system placement parameter checking method and system | |
CN112489065B (en) | Chessboard standard point sub-pixel extraction method | |
CN114792342A (en) | Line structure light positioning method, device, equipment and storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |