Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Due to the optical structure of the lens and the layout of the image sensor in image acquisition equipment, geometric distortion occurs when the equipment acquires an image, so that the acquired image is distorted. Geometric distortion includes radial distortion and tangential distortion. Radial distortion is caused by the shape or symmetry of the lens; its presence causes straight lines in the acquired image to bend, and the bending is more severe for straight lines near the edge of the image. Radial distortion is divided into barrel distortion (barrel distortion) and pincushion distortion (pincushion distortion). Barrel distortion causes straight lines in the image to bulge outward, away from the center of the image, as if the image were wrapped around a barrel. Pincushion distortion causes straight lines in the image to bend inward toward the center of the image, as if the image were pinched like a pincushion. Tangential distortion is caused by the lens not being perfectly parallel to the image plane; its presence causes straight lines in the acquired image to be compressed or skewed in certain directions. Therefore, before the image acquisition equipment is used for image acquisition, its distortion parameters need to be determined, so that the equipment can be calibrated according to those parameters and the influence of geometric distortion on the acquired image is reduced. How to determine the distortion parameters accurately is therefore important.
The present application provides a solution to the above problem. Specifically, the application obtains a first bitmap produced by image acquisition of a calibration plate by an image acquisition device, determines the centroid coordinates of the lattice in the first bitmap as the first lattice centroid coordinates according to the centroid coordinates of each calibration object in the first bitmap, constructs a second bitmap according to the first lattice centroid coordinates and a preset point distance, and obtains accurate distortion parameters of the image acquisition device according to the position differences between the calibration objects in the first and second bitmaps. When the image acquisition device is then subjected to distortion correction based on these accurate distortion parameters, the influence of geometric distortion on the images it acquires is reduced and the accuracy of the acquired images is improved. It should be noted that the distortion parameters are not limited to correcting distortion of the image acquisition device and may be used in other application scenarios, which is not limited in the present application.
In some embodiments, as shown in fig. 1, a method for determining distortion parameters is provided. The method is described here, for illustration, as applied to a computer device, and includes the following steps:
Step 101, acquiring a first bitmap obtained by image acquisition of a calibration plate by an image acquisition device; the plurality of calibration objects in the calibration plate form a lattice in the first bitmap.
The image acquisition device is a device provided with a lens and used for acquiring images.
Alternatively, the image acquisition device may be any of various cameras, such as a digital camera, a single-lens reflex camera, or a video camera, and may also be any of various devices provided with a lens, such as a lens-equipped cellular phone, a lens-equipped tablet computer, or an AOI (Automatic Optical Inspection) device.
The calibration plate is a standardized reference tool for performing distortion recognition on the image acquisition equipment. Alternatively, the calibration plate includes, but is not limited to, a checkerboard calibration plate, a dot calibration plate, and the like. The calibration object is a marker in the calibration plate that provides the image acquisition device with a known geometry and known features, to be used as a reference object in the distortion identification process.
For example, when the calibration plate is a dot calibration plate, each dot in the dot calibration plate is a calibration object. As shown in fig. 2A, a first bitmap is provided in which the calibration objects are dots; each dot is a calibration object in the calibration plate.
Step 102, determining the centroid coordinates of the lattice in the first bitmap as the first lattice centroid coordinates according to the centroid coordinates of each calibration object in the first bitmap.
The computer device preprocesses the original first bitmap acquired in step 101 to obtain a preprocessed bitmap, calculates the centroid coordinates of each calibration object in the first bitmap based on the preprocessed bitmap, and determines the centroid coordinates of the lattice in the first bitmap as the first lattice centroid coordinates according to the centroid coordinates of each calibration object.
Optionally, preprocessing the acquired original first bitmap to obtain a preprocessed bitmap includes: converting the original first bitmap into a gray map, and performing binarization processing on the gray map to obtain a binarized bitmap; filtering the binarized bitmap to obtain a filtered bitmap, and determining the filtered bitmap as the preprocessed bitmap. It can be appreciated that preprocessing removes some impurity interference and facilitates a more accurate subsequent calculation of the distortion parameters.
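For reference, a minimal sketch of this preprocessing pipeline is given below, using OpenCV. It is illustrative only: the file path, the choice of median filtering, and the kernel size are assumptions of the sketch, not requirements of the present application.

```python
# Preprocessing sketch: gray conversion, Otsu binarization, median filtering.
import cv2

def preprocess(path: str):
    img = cv2.imread(path)                        # original first bitmap
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to a gray map
    # Otsu's method selects the gray threshold automatically (described below)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    filtered = cv2.medianBlur(binary, 5)          # remove impurity interference
    return filtered                               # the preprocessed bitmap
```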
For ease of understanding, the preprocessing will now be schematically described with reference to figs. 2B1 and 2B2. Fig. 2B1 is the binarized bitmap obtained by binarizing the gray map, and fig. 2B2 is the bitmap obtained by filtering the binarized bitmap. The original image of fig. 2A also contains stains, which have been removed in figs. 2B1 and 2B2 after the preprocessing.
Alternatively, binarization of the gray map may be performed using Otsu's method.
In one embodiment, after the first bitmap is converted into a gray map, the gray map has M rows and N columns, the row counter i starts at 0, the column counter j starts at 0, the gray value of the pixel point in row i and column j is I(i, j), and the minimum and maximum gray values in the gray map are denoted minI and maxI, respectively. The gray value of each pixel is then normalized, the normalized gray value being denoted newI(i, j), where newI(i, j) = 255 * (I(i, j) - minI) / (maxI - minI). It will be appreciated that the normalization is an iterative process: the row counter i and the column counter j are updated iteratively, and while j < N or i < M, j is updated to j+1 or i to i+1 and the normalization continues.
Then, starting from gray level K = 1 (K is also called the gray threshold), let there be L different gray values in the normalized gray map (L is the pixel level of the whole picture, generally 255), and divide the normalized gray map into two parts according to the gray level K. One part is the region with gray values 0 to K (i.e., gray values less than or equal to K), called the first region; the sum P1 of the occurrence probabilities of the gray values in the first region is calculated and denoted the first total probability, where P1 = Σ_{i=0}^{K} p_i. Here the occurrence probability p_i of gray value i is the ratio between the number of pixel points with gray value i and the total number of pixel points in the image. The average gray value of the first region is denoted the first average gray value m1, where m1 = (Σ_{i=0}^{K} i * p_i) / P1; that is, summing the product of each gray value i and its occurrence probability over the region and dividing by the total occurrence probability P1 of that region gives the average gray value m1 of the first region. The other part is the region with gray values K+1 to L (i.e., gray values greater than K), called the second region for short; the sum P2 of the occurrence probabilities of the gray values in the second region is calculated and denoted the second total probability, where P2 = Σ_{i=K+1}^{L} p_i = 1 - P1. Similarly, the average gray value of the second region is denoted the second average gray value m2, where m2 = (Σ_{i=K+1}^{L} i * p_i) / P2. The computer device may then, based on the first total probability P1, the second total probability P2, the first average gray value m1, and the second average gray value m2, calculate the inter-class variance S²(K), i.e., S²(K) = P1 * P2 * (m1 - m2)². K is updated iteratively between 0 and L = 255, and the K that maximizes S²(K) is the target gray threshold Kopt. Then, the gray value of each pixel point is compared with the target gray threshold Kopt: if the gray value is less than or equal to Kopt, the gray value of the pixel point is set to 0; if the gray value is greater than Kopt, the gray value of the pixel point is set to 255. In this way the gray value of each pixel point is binarized. It will be appreciated that the binarization processing is likewise an iterative process.
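The threshold search described above can be sketched directly, for ease of understanding, as follows; this is an illustrative NumPy sketch assuming an 8-bit gray map already normalized to 0-255, not the application's own implementation.

```python
# Otsu sketch: exhaustive search for the K maximizing the inter-class variance.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / hist.sum()               # occurrence probability p_i of each gray value
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        p1 = p[:k + 1].sum()            # first total probability P1
        p2 = 1.0 - p1                   # second total probability P2
        if p1 == 0.0 or p2 == 0.0:
            continue
        m1 = (np.arange(k + 1) * p[:k + 1]).sum() / p1       # first average gray value
        m2 = (np.arange(k + 1, 256) * p[k + 1:]).sum() / p2  # second average gray value
        var = p1 * p2 * (m1 - m2) ** 2  # inter-class variance S^2(K)
        if var > best_var:
            best_k, best_var = k, var
    return best_k                       # target gray threshold Kopt

# binarization: pixels <= Kopt become 0, pixels > Kopt become 255, e.g.
# binary = np.where(gray > otsu_threshold(gray), 255, 0).astype(np.uint8)
```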
Optionally, the filtering of the binarized bitmap includes at least one of median filtering, Gaussian filtering, or mean filtering.
In some embodiments, calculating centroid coordinates of each calibration object in the first bitmap based on the preprocessed bitmap comprises: detecting calibration objects in the preprocessed bitmap, performing contour extraction processing on each calibration object to obtain the contour of the calibration object, and calculating the centroid coordinates of the calibration object based on the image moment of the contour of the calibration object to obtain the centroid coordinates of each calibration object in the first bitmap. As shown in fig. 2C, a schematic diagram of each calibration object after contour detection is provided. As shown in fig. 2D, a schematic diagram of the centroid of each calibration object is provided, the calibration objects in fig. 2D are circles, and the white point in each circle represents the centroid.
Optionally, performing contour extraction processing on each calibration object in the detected bitmap to obtain the contour of the calibration object includes: marking the pixels in the image area where each calibration object is located in the binarized bitmap as foreground pixels (marked 1) or background pixels (marked 0) according to their connected domains, so as to distinguish the foreground and background of the image area where the calibration object is located; then traversing each pixel in that image area, and, for each foreground pixel, marking it as a contour point pixel if the pixel above it or to its left is a background pixel (marked 0). The contour point pixels together form the contour of the calibration object.
Alternatively, the Suzuki contour tracing algorithm (Suzuki's Contour Tracing Algorithm) may be employed to detect the calibration objects in the bitmap and to extract the contour of each detected calibration object.
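OpenCV's findContours function is based on this Suzuki algorithm, so the contour detection and the centroid calculation of step 102 can be sketched together as follows; the sketch assumes `binary` is the preprocessed binarized bitmap and is illustrative rather than the application's implementation.

```python
# Contour detection (Suzuki-based) plus centroid calculation from image moments.
import cv2

def calibration_centroids(binary):
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)                 # image moments of the contour
        if m["m00"] > 0:                   # m00: zero-order mixed origin moment
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids                       # centroid of each calibration object
```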
In some embodiments, for the binarized image, let NBD be equal to 1, let the pixel point in row i and column j be denoted (i, j), and let its value be denoted v(i, j), where v(i, j) = 1 denotes a white point and v(i, j) = 0 denotes a black point. The row number i increases from top to bottom and the column number j increases from left to right, i.e., the coordinate zero point or starting point (i, j) is in the upper left corner of the image. The computer device may scan row by row from left to right, starting from the starting point (i, j), as follows:
(1) Check which of the following 3 cases applies:
(a) If v(i, j-1) = 0 and v(i, j) = 1, the pixel point (i, j) is determined to be an outer boundary start point; NBD is then incremented by 1, i.e., NBD = NBD + 1, and (i, j-1) is assigned to (i2, j2). It should be understood that (i2, j2) characterizes the point found just before the current point; in this case that point is (i, j-1). It is then judged whether the previous boundary is an outer boundary: if so, the parent of the current boundary is set to the parent boundary of the previous boundary; if not, the parent of the current boundary is set to the previous boundary. Here, an outer boundary is the set of boundary points between any 1-connected domain and the 0-connected domain that directly surrounds it. A connected domain is a region composed of pixel points with the same value; in this embodiment it is called a 1-connected domain (also called a 1-component) if the pixel points of the region are 1, and a 0-connected domain (also called a 0-component) if they are 0. To ease understanding of the parent boundary, it is defined as follows: for a 1-connected domain S1, if there is a 0-connected domain S2 directly surrounding S1, two cases arise. If S2 is a hole, the parent boundary of S1 is the hole boundary of S2; if S2 is the background of the image (i.e., the connected domain formed at the outermost border of the image), the parent boundary of S1 is the frame of the picture.
(b) If v(i, j) ≥ 1 and v(i, j+1) = 0, the pixel point (i, j) is determined to be a hole boundary start point; NBD is then incremented by 1, i.e., NBD = NBD + 1, and (i, j+1) is assigned to (i2, j2); in this case, the point found just before the current point is (i, j+1). If v(i, j) > 1, LNBD is set to v(i, j). Here, a hole boundary is the set of boundary points between a hole and the 1-connected domain that directly surrounds it.
(c) If neither (a) nor (b) applies, go to (5).
(2) After (a) or (b) applies and has been executed, the non-zero pixels in the neighborhood of (i, j) may be examined clockwise starting from (i2, j2). If there is a non-zero pixel in the neighborhood, the first one found is marked (i1, j1), (i1, j1) is assigned to (i2, j2), and (i, j) is assigned to (i3, j3); it should be understood that (i3, j3) characterizes the current point. If there is no non-zero pixel in the neighborhood, go to (5). The neighborhood may be an 8-neighborhood, i.e., the eight pixels adjacent to a pixel.
(3) Starting from (i2, j2), the non-zero pixel points in the neighborhood of the current point (i3, j3) are searched in the counterclockwise direction, and the first non-zero pixel point found is marked (i4, j4).
(4) During the counterclockwise search, if v(i3, j3+1) = 0, v(i3, j3) is marked −NBD; if v(i3, j3+1) ≠ 0 and v(i3, j3) = 1, v(i3, j3) is marked NBD; otherwise, v(i3, j3) remains unchanged. It is then judged whether (i4, j4) = (i, j) and (i3, j3) = (i1, j1); if not, (i3, j3) is assigned to (i2, j2) and (i4, j4) to (i3, j3), i.e., (i2, j2) ← (i3, j3) and (i3, j3) ← (i4, j4), and execution returns to (3): the non-zero pixels in the neighborhood of the current point (i3, j3) are again searched counterclockwise starting from (i2, j2), and the first non-zero pixel found is marked (i4, j4).
(5) If v(i, j) ≠ 1, LNBD is set to |v(i, j)|, and scanning restarts from pixel point (i, j+1); the procedure terminates when the scan reaches the bottom-right corner of the picture.
It should be noted that (i2, j2) represents the point found just before the current point, (i3, j3) represents the current point, and (i4, j4) represents the first non-zero pixel point found in the neighborhood of the current point; as the current point changes and the first non-zero pixel found in its neighborhood changes during the iteration, these points are reassigned continually. NBD is the sequence number of the boundary being traced, used to identify edge points, and LNBD is the sequence number of the most recently encountered boundary, used to identify the dependency relationships between edges.
In some embodiments, the image moments may include a zero-order mixed origin moment, a first mixed origin moment, and a second mixed origin moment. Calculating the centroid coordinates of the calibration object based on the image moments of its contour includes: determining the coordinate of the centroid on the first coordinate axis from the ratio of the first mixed origin moment to the zero-order mixed origin moment, and determining the coordinate of the centroid on the second coordinate axis from the ratio of the second mixed origin moment to the zero-order mixed origin moment.
It will be appreciated that the coordinates of the centroid on the first coordinate axis and the coordinates of the centroid on the second coordinate axis form the centroid coordinates of the calibration object.
The first coordinate axis and the second coordinate axis are the two coordinate axes of a rectangular coordinate system. The zero-order mixed origin moment may be used to represent the area of the contour of the calibration object. The first and second mixed origin moments represent the first moments of the image about the abscissa and the ordinate, respectively.
In some embodiments, the centroid coordinates of the calibration object may be calculated by the following formula:
x_c = M10 / M00; y_c = M01 / M00;
wherein (x_c, y_c) are the centroid coordinates of the calibration object, M00 is the zero-order mixed origin moment, M10 is the first mixed origin moment, and M01 is the second mixed origin moment. For a binary image, M00 = Σ I(x, y), M10 = Σ x * I(x, y), and M01 = Σ y * I(x, y), the sums being taken over the pixel points of the contour region.
For ease of understanding, an example will now be described. As shown in fig. 4, assuming that the calibration object is a circle, an arc is sampled on the outline of the circle as a sampled image (the image in the white box is the sampled image), and the gray values corresponding to the sampled image are shown in table 1, that is, the gray values corresponding to the pixels in the white box in fig. 4.
TABLE 1 (binarized gray values of the sampled image; row i from top to bottom, column j from left to right)
0 0 0 0 0
1 1 1 0 0
0 0 1 1 0
0 0 0 1 1
0 0 0 0 1
Further, the centroid of the sampled image may be calculated according to the following formula:
centroid of the sampled image = (M10 / M00, M01 / M00);
wherein
M00 = (0+0+0+0+0) + (1+1+1+0+0) + (0+0+1+1+0) + (0+0+0+1+1) + (0+0+0+0+1) = 8;
M10 = 1*(0+0+0+0+0) + 2*(1+1+1+0+0) + 3*(0+0+1+1+0) + 4*(0+0+0+1+1) + 5*(0+0+0+0+1) = 25;
M01 = 1*(0+1+0+0+0) + 2*(0+1+0+0+0) + 3*(0+1+1+0+0) + 4*(0+0+1+1+0) + 5*(0+0+0+1+1) = 27.
The centroid of this segment of arc = (25/8, 27/8) = (3.125, 3.375), where the origin of the coordinate system is the upper-left corner of the sample. Similarly, once the gray values corresponding to the whole calibration object are determined, the centroid of the whole circular calibration object can be calculated according to the same formula.
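The worked example can be re-checked with a few lines of NumPy; the 5×5 binary sample below is the one given in Table 1, with row and column indices starting at 1 as in the text.

```python
import numpy as np

g = np.array([[0, 0, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1]])
rows = np.arange(1, 6)[:, None]    # row index i
cols = np.arange(1, 6)[None, :]    # column index j
M00 = g.sum()                      # 8
M10 = (rows * g).sum()             # 25
M01 = (cols * g).sum()             # 27
print(M10 / M00, M01 / M00)        # 3.125 3.375 -> centroid of the sampled arc
```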
Optionally, determining the centroid coordinates of the lattice in the first bitmap as the first lattice centroid coordinates according to the centroid coordinates of each calibration object in the first bitmap includes: averaging the centroid coordinates of the calibration objects in the first bitmap to obtain the average centroid coordinates of the calibration objects, and determining these average centroid coordinates as the centroid coordinates of the lattice in the first bitmap, i.e., the first lattice centroid coordinates.
Step 103, constructing a second bitmap according to the first lattice centroid coordinates and a preset point distance; the distance between adjacent calibration objects in the second bitmap conforms to the preset point distance.
The preset point distance is the distance between adjacent calibration objects in the calibration plate. It will be appreciated that, in order to determine the distortion parameters more accurately, the distance between adjacent calibration objects in the calibration plate should conform to the preset point distance.
The second bitmap is a bitmap in which the distance between adjacent calibration objects accords with a preset point distance and the problem of image distortion does not exist. It can be understood that, because the image acquisition device has geometric distortion, the acquired first bitmap has distortion problem, so that the distortion parameters of the image acquisition device need to be determined according to the second bitmap without image distortion problem and the first bitmap with image distortion problem.
The computer device determines centroid coordinates of the reference calibration objects in the second bitmap according to the centroid coordinates of the first bitmap, and determines centroid coordinates of each calibration object in the second bitmap according to the centroid coordinates of the reference calibration objects and a preset point distance, so as to construct the second bitmap.
Optionally, when the calibration plate is a dot calibration plate, each calibration object is correspondingly a dot. The computer device may determine the centroid coordinates of a reference dot in the second bitmap according to the first lattice centroid coordinates, determine the centroid coordinates of each dot in the second bitmap according to the centroid coordinates of the reference dot and the preset point distance, and generate a plurality of dots according to the centroid coordinates of each dot and the radius of the dots, so as to obtain the second bitmap.
In some embodiments, when the preset point distance is expressed in length units, then, since the first lattice centroid coordinates are coordinates in the image coordinate system and are expressed in pixel points (i.e., quantified at the pixel level), the preset point distance needs to be converted to the pixel level, so that the length it represents is measured in a number of pixel points; the second bitmap can then be constructed according to the first lattice centroid coordinates and the preset point distance. For example, when the preset point distance is 1 cm, it needs to be converted into a number of pixel points. In this case, for the calculation to be accurate, the real resolution of the image acquisition device needs to be calculated, and the preset point distance is converted into a specific number of pixel points based on it: the number of pixel points corresponding to the preset point distance = the preset point distance × the real resolution of the image acquisition device. For example, if the preset point distance is 100 mm and the real resolution of the image acquisition device is 10 pixels/mm, the preset point distance corresponds to 100 × 10 = 1000 pixel points. It will be appreciated that when the preset point distance is already expressed as a number of pixel points, the real resolution does not need to be calculated and no conversion is required.
In some embodiments, after the outlines of the calibration objects are obtained, for each calibration object, the area of the outline of the calibration object may be calculated, and the real resolution of the image acquisition device may be calculated according to the area of the outline of the calibration object and the geometric dimensions of the calibration object.
In some embodiments, the calibration object is circular, and the area of its outline is the area of the region formed by the pixel points enclosed by the outline. The area of the outline of each calibration object can be calculated by the following formula:
S = (1/2) |Σ_{i=1}^{n} (x_i * y_{i+1} − x_{i+1} * y_i)|, with (x_{n+1}, y_{n+1}) = (x_1, y_1);
wherein x and y denote the abscissa and ordinate respectively, x_i denotes the abscissa of the i-th sampling point on the outline of the calibration object, and y_i denotes its ordinate. In this embodiment, n sampling points are taken on the outline of each calibration object, and the area of the outline is calculated from the abscissas and ordinates of the n sampling points using the above formula.
For ease of understanding, an example will now be described. As shown in fig. 3, assuming the calibration object is circular, 3 sampling points (x1, y1) = (1842, 1591), (x2, y2) = (1913, 1701) and (x3, y3) = (1780, 1681) are taken on the outline. According to the above formula, the area of the triangle formed by the 3 sampling points is
1/2 * ((x1*y2 − x2*y1) + (x2*y3 − x3*y2) + (x3*y1 − x1*y3))
= 1/2 * ((1842*1701 − 1913*1591) + (1913*1681 − 1780*1701) + (1780*1591 − 1842*1681))
= 6605.
That is, a region of 6605 pixel points is the area of the outline of the triangle formed by the 3 sampling points. As the number of sampling points increases, the number of sides of the polygon enclosed by the points increases, until, when the number of sampling points equals n, the polygon approaches the circular outline; the area of the outline of each calibration object can then be calculated by the same method. In the case of 42 circular calibration objects, the areas of the outlines of the 42 calibration objects can each be calculated in this way.
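The area formula above is the shoelace formula, and the worked example can be verified with a short sketch; the three points are the sample points used above.

```python
# Shoelace sketch: area of the polygon enclosed by n sampling points.
def polygon_area(points):
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to the first sampling point
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(1842, 1591), (1913, 1701), (1780, 1681)]))  # 6605.0
```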
Optionally, when the calibration plate is a dot calibration plate, the calibration objects are dots, and calculating the real resolution of the image acquisition device according to the area of the outline of each calibration object and the geometric dimensions of the calibration object includes: calculating a first radius of each dot in the first bitmap according to the area of the outline of the dot; determining the average of the first radii of the dots to obtain the average radius of the dots; acquiring the second radius of a dot in the dot calibration plate; and determining the real resolution of the image acquisition device according to the second radius of a dot in the dot calibration plate and the average radius of the dots. It should be noted that the dots in the dot calibration plate are of uniform size, so the second radius is the same for every dot. It will be appreciated that the second radius is the physical radius that a dot actually has in the dot calibration plate, while the first radius is the radius of a dot calculated from the area of its outline by the method of the present application.
In some embodiments, calculating the first radius of a dot in the first bitmap from the area of the outline of the dot is accomplished by the following formula: first radius = (area of the outline of the dot / π)^0.5.
It will be appreciated that determining an average of the first radii corresponding to each dot, resulting in an average radius of the dot, includes: summing the first radiuses corresponding to the dots to obtain a radius sum, and dividing the radius sum by the total number of the dots to obtain the average radius of the dots.
Optionally, the true resolution of the image acquisition device is calculated by the following formula:
real resolution of the image acquisition device = average radius of the dots (in pixel points) / second radius of a dot in the dot calibration plate (in length units), so that the resolution is expressed in pixel points per unit length, consistent with the example above.
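A sketch of this resolution estimate is given below; `areas` stands for the contour areas of the dots in the first bitmap (in pixel points) and `physical_radius` for the known second radius of a dot in the calibration plate, both names being assumptions of the sketch.

```python
# Resolution sketch: first radius from contour area, then pixels per unit length.
import math

def true_resolution(areas, physical_radius):
    radii = [math.sqrt(a / math.pi) for a in areas]  # first radius of each dot, in pixels
    avg_radius = sum(radii) / len(radii)             # average radius of the dots
    return avg_radius / physical_radius              # e.g. pixels per mm

# e.g. true_resolution(areas_of_42_dots, physical_radius=5.0)
```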
Step 104, determining the distortion parameters of the image acquisition device according to the position differences between the calibration objects in the first bitmap and the second bitmap.
Optionally, the computer device determines the position difference according to the centroid coordinates of the calibration objects in the first bitmap and the centroid coordinates of the calibration objects in the second bitmap, and determines the distortion parameter of the image acquisition device according to the position difference.
Optionally, the computer device determines a position difference according to the average centroid coordinates of the calibration objects in the first bitmap and the average centroid coordinates of the calibration objects in the second bitmap, and determines a distortion parameter of the image acquisition device according to the position difference.
Optionally, before performing step 104, the computer device may perform rotation correction on the first bitmap to obtain a rotation-corrected first bitmap, and perform step 104 based on the rotation-corrected first bitmap, i.e., determine the distortion parameters of the image acquisition device according to the position differences between the calibration objects in the rotation-corrected first bitmap and in the second bitmap.
As shown in fig. 2E, a schematic diagram of the geometric distortion of each calibration object is provided. The white dots represent the centroids of the calibration objects in the original first bitmap as acquired, the cross marks represent the centroids of the calibration objects in the constructed second bitmap (i.e., the ideal bitmap), and the X-shaped marks represent the centroids of the calibration objects in the rotation-corrected first bitmap. It can be seen that the rotational distortion caused by rotation of the device during image acquisition has been removed from the rotation-corrected first bitmap, which is therefore closer to the second bitmap, so that the distortion parameters of the image acquisition device can be determined more accurately. The single diamond mark in the center of the figure represents the centroid coordinates of the lattice in the first bitmap (i.e., the first lattice centroid coordinates).
In the above method for determining distortion parameters, a first bitmap obtained by image acquisition of a calibration plate by an image acquisition device is acquired; the centroid coordinates of the lattice in the first bitmap are determined as the first lattice centroid coordinates according to the centroid coordinates of the calibration objects in the first bitmap; a second bitmap is constructed according to the first lattice centroid coordinates and a preset point distance, the distance between adjacent calibration objects in the second bitmap conforming to the preset point distance; and accurate distortion parameters of the image acquisition device are obtained according to the position differences between the calibration objects in the first bitmap and the second bitmap.
Further, if the accurate distortion parameters are used for carrying out distortion correction on the image acquisition equipment, the influence of geometric distortion on the image acquired by the image acquisition equipment can be reduced, and the accuracy of the acquired image is improved.
In some embodiments, before determining the distortion parameters of the image acquisition device from the position differences between the calibration objects in the first and second bitmaps, the method further includes: determining the rotation direction of the image acquisition device during image acquisition according to the centroid coordinates of the calibration objects in the first bitmap; determining the rotation angle of the image acquisition device during image acquisition according to the centroid coordinates of the calibration objects in the first bitmap; and performing rotation correction on the first bitmap according to the rotation direction and the rotation angle. In this embodiment, determining the distortion parameters of the image acquisition device according to the position differences between the calibration objects in the first and second bitmaps includes: determining the distortion parameters of the image acquisition device according to the position differences between the calibration objects in the rotation-corrected first bitmap and in the second bitmap.
It can be understood that, during image acquisition, the image acquisition device may be rotated by a certain rotation angle about a coordinate axis of its own coordinate system, so that rotation correction needs to be performed on the acquired first bitmap.
In some embodiments, determining a rotation direction of the image acquisition device when performing image acquisition according to the centroid coordinates of each calibration object in the first bitmap includes: determining a first coordinate of each calibration object on a first coordinate axis and a second coordinate of each calibration object on a second coordinate axis aiming at each calibration object in the first bitmap; determining calibration objects with adjacent coordinates on a first coordinate axis based on first coordinates corresponding to the calibration objects respectively; and determining the rotation direction of the image acquisition equipment when the image acquisition is carried out according to the magnitude relation between the second coordinates of the calibration objects with adjacent coordinates on the second coordinate axis.
It can be understood that, because the lattice in the first bitmap is unordered, the calibration objects need to be sorted based on their respective first coordinates in order to determine which calibration objects are coordinate-adjacent on the first coordinate axis.
Optionally, the first coordinate axis is the abscissa axis and the second coordinate axis is the ordinate axis; correspondingly, the first coordinate is an abscissa and the second coordinate is an ordinate. For each calibration object in the first bitmap, the abscissa of the calibration object on the abscissa axis and its ordinate on the ordinate axis are determined; the calibration objects that are coordinate-adjacent on the abscissa axis are determined based on the abscissas of the calibration objects; and the rotation direction of the image acquisition device during image acquisition is determined according to the magnitude relation between the ordinates of the coordinate-adjacent calibration objects.
Optionally, determining the rotation direction of the image acquisition device during image acquisition according to the magnitude relation between the ordinates of the coordinate-adjacent calibration objects includes: when, of two coordinate-adjacent calibration objects, the ordinate of the one with the smaller abscissa is smaller than the ordinate of the one with the larger abscissa, determining that the rotation direction of the image acquisition device during image acquisition is clockwise; and when, of two coordinate-adjacent calibration objects, the ordinate of the one with the smaller abscissa is greater than or equal to the ordinate of the one with the larger abscissa, determining that the rotation direction of the image acquisition device during image acquisition is counterclockwise.
It can be understood that an accurate rotation direction of the image acquisition device during image acquisition can be obtained from the magnitude relation between the second coordinates of the calibration objects that are coordinate-adjacent on the first coordinate axis.
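The direction test can be sketched as follows; the sketch mirrors the rule as stated, examining one pair of coordinate-adjacent centroids, and assumes image coordinates with the y axis pointing down.

```python
# Rotation-direction sketch: compare ordinates of coordinate-adjacent objects.
def rotation_direction(centroids):
    pts = sorted(centroids)                 # order by abscissa (first coordinate)
    (x0, y0), (x1, y1) = pts[0], pts[1]     # a pair of coordinate-adjacent objects
    return "clockwise" if y0 < y1 else "counterclockwise"
```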
In some embodiments, determining the rotation angle of the image acquisition device during image acquisition according to the centroid coordinates of the calibration objects in the first bitmap includes: determining the row number of each calibration object in the first bitmap according to the centroid coordinates of the calibration objects; and determining the rotation angle of the image acquisition device during image acquisition according to the centroid coordinates of the calibration objects that have the same row number.
That is, the row number of each calibration object in the first bitmap is determined according to the centroid coordinates of the calibration objects, i.e., the calibration objects in the same row are identified; and the rotation angle of the image acquisition device during image acquisition is determined according to the centroid coordinates of the calibration objects with the same row number, i.e., according to the centroid coordinates of the calibration objects in the same row.
In some embodiments, determining the row number of each calibration object in the first bitmap according to the centroid coordinates of the calibration objects includes: sorting the calibration objects according to the second coordinates of their centroid coordinates; determining the first calibration object among the sorted calibration objects as the current calibration object, and, for the current calibration object, calculating the second-coordinate difference between the current calibration object and the coordinate-adjacent calibration object; if the second-coordinate difference is less than or equal to a preset difference, determining that the current calibration object and the coordinate-adjacent calibration object are in the same row (initially, the first row); if the second-coordinate difference is greater than the preset difference, determining that they are in different rows, with the coordinate-adjacent calibration object in the next row; then taking the coordinate-adjacent calibration object in turn as the current calibration object and returning to the step of calculating the second-coordinate difference between the current calibration object and its coordinate-adjacent calibration object, until the row number of every calibration object in the first bitmap is determined.
Optionally, sorting the calibration objects according to the second coordinates of their centroid coordinates includes: sorting the calibration objects in ascending order of the second coordinates of their centroid coordinates, or sorting them in descending order of those second coordinates.
In some embodiments, determining the column number of each calibration object in the first bitmap according to the centroid coordinates of the calibration objects includes: sorting the calibration objects according to the first coordinates of their centroid coordinates; determining the first calibration object among the sorted calibration objects as the current calibration object, and, for the current calibration object, calculating the first-coordinate difference between the current calibration object and the coordinate-adjacent calibration object; if the first-coordinate difference is less than or equal to the preset difference, determining that the current calibration object and the coordinate-adjacent calibration object are in the same column (initially, the first column); if the first-coordinate difference is greater than the preset difference, determining that they are in different columns, with the coordinate-adjacent calibration object in the next column; then taking the coordinate-adjacent calibration object in turn as the current calibration object and returning to the step of calculating the first-coordinate difference, until the column number of every calibration object in the first bitmap is determined.
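The row-grouping walk (and, symmetrically, the column grouping) can be sketched as follows; `tol` stands for the preset difference value and is an assumption of the sketch.

```python
# Row-grouping sketch: sort by ordinate, start a new row at each large gap.
def group_rows(centroids, tol):
    pts = sorted(centroids, key=lambda p: p[1])  # sort by second coordinate
    row_of = {pts[0]: 1}                         # first object is in the first row
    row = 1
    for prev, cur in zip(pts, pts[1:]):
        if cur[1] - prev[1] > tol:               # gap exceeds the preset difference:
            row += 1                             # the next row starts here
        row_of[cur] = row
    return row_of                                # centroid tuple -> row number
```

For column numbers, the same walk is applied to the first coordinates of the centroids.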
It should be noted that, in the present application, the first axis is the abscissa axis (i.e., X axis) and the second axis is the ordinate axis (i.e., Y axis).
In some embodiments, as shown in fig. 6, a schematic diagram of the lattice in the first bitmap is provided. As can be seen from fig. 6, the lattice has 42 calibration objects. The abscissas of the 42 calibration objects are determined, the 42 calibration objects are sorted according to their abscissas, and the first calibration object A among the sorted calibration objects is determined as the current calibration object; for the current calibration object, the abscissa difference between the current calibration object A and the coordinate-adjacent calibration object B is calculated. If the abscissa difference is less than or equal to the preset difference, it is determined that the current calibration object A and the coordinate-adjacent calibration object B are in the first column; if the abscissa difference is greater than the preset difference, it is determined that they are in different columns, with the coordinate-adjacent calibration object B in the next column, i.e., the second column. The coordinate-adjacent calibration object is then taken in turn as the current calibration object, and the step of calculating the abscissa difference between the current calibration object and the adjacent calibration object is repeated until the column number of every calibration object in the first bitmap is determined; the number of columns of the lattice is thus obtained as 7.
In some embodiments, as shown in fig. 7, an enlarged view of the dashed area of fig. 6 is provided. As can be seen from fig. 7, x1 is the abscissa of the first calibration object, and x3 − x2 is the abscissa difference between the third and the second calibration object. It can be understood that the distance between two coordinate-adjacent calibration objects in the X-axis direction is their abscissa difference x_{i+1} − x_i. If the abscissa difference is greater than the preset difference, the two calibration objects are not in the same column; for example, the two calibration objects corresponding to distance 2 are not in the same column. If the abscissa difference is less than or equal to the preset difference, the two calibration objects are in the same column; for example, the two calibration objects corresponding to distance 1 are in the same column.
It can be understood that the number of rows and columns where each calibration object is located can be determined according to the magnitude relation between the coordinate difference value between the calibration objects with adjacent coordinates and the preset difference value.
In some embodiments, determining the rotation angle of the image acquisition device during image acquisition according to the centroid coordinates of the calibration objects with the same row number includes: constructing the straight line on which the calibration objects with the same row number lie, according to their centroid coordinates; and determining the rotation angle of the image acquisition device during image acquisition according to the slope of that straight line.
Optionally, when the number of rows is even, the centroid coordinates of the calibration objects on either of the two middle rows of the lattice are selected to construct the straight line.
Optionally, when the number of rows is odd, the centroid coordinates of the calibration objects on the middle row are selected to construct the straight line.
Optionally, constructing the straight line on which the calibration objects with the same row number lie, according to their centroid coordinates, includes: setting up a linear model of the straight line, and substituting the centroid coordinates of the calibration objects with the same row number into the linear model to obtain a system of linear equations; solving the system to obtain the parameters of the linear model; and determining the straight line on which the calibration objects with the same row number lie from those parameters.
It can be understood that the calibration objects in the first bitmap are first sorted according to the rotation direction of the image acquisition device during image acquisition, so that the first bitmap becomes ordered; the straight line on which the calibration objects with the same row number lie is then constructed from their centroid coordinates, and the rotation angle of the image acquisition device during image acquisition is determined from the slope of that line. Specifically, if the rotation direction is counterclockwise, the unordered calibration objects in the first bitmap may be sorted in ascending order of their ordinates; if the rotation direction is not counterclockwise, they may be sorted in ascending order of their abscissas.
As shown in fig. 5, a schematic diagram of the calibration objects in the first bitmap after sorting is provided.
Optionally, when the linear model of the straight line is the linear function y = kx + b and the number of calibration objects with the same row number is m, the centroid coordinates of the m calibration objects are substituted into the linear function to obtain m equations, which are solved simultaneously to obtain the values of k and b. Here k is the slope of the linear function, and the rotation angle can be obtained from the slope as θ = arctan(k). In the above embodiment, an accurate rotation angle of the image acquisition device during image acquisition can be obtained from the centroid coordinates of the calibration objects with the same row number.
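A sketch of the angle estimate is given below; it fits the line with a least-squares fit via numpy.polyfit, one common way of solving the system when m > 2, and takes the angle from the slope.

```python
# Rotation-angle sketch: fit y = k*x + b to one row of centroids, then arctan.
import math
import numpy as np

def rotation_angle(row_centroids):
    xs = np.array([p[0] for p in row_centroids])
    ys = np.array([p[1] for p in row_centroids])
    k, b = np.polyfit(xs, ys, 1)   # slope k and intercept b of the fitted line
    return math.atan(k)            # rotation angle theta, in radians
```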
Optionally, performing rotation correction on the first bitmap according to the rotation direction and the rotation angle includes: determining the first lattice centroid coordinates according to the centroid coordinates of the calibration objects in the first bitmap; constructing a rotation matrix according to the first lattice centroid coordinates, the rotation direction, and the rotation angle; and performing rotation correction on each calibration object in the first bitmap according to the rotation matrix to obtain the rotation-corrected centroid coordinates of each calibration object, which form the rotation-corrected first bitmap.
Optionally, the first lattice centroid coordinates include the centroid abscissa c_x of the first lattice and the centroid ordinate c_y of the first lattice, and the rotation matrix M is constructed as follows:
M =
[ cos θ   −sin θ   (1 − cos θ)·c_x + sin θ·c_y ]
[ sin θ    cos θ   (1 − cos θ)·c_y − sin θ·c_x ]
[ 0        0       1 ]
wherein θ is the rotation angle, whose sign is determined by the rotation direction, c_x is the centroid abscissa of the first lattice, and c_y is the centroid ordinate of the first lattice; this is the standard homogeneous matrix for rotating a point about (c_x, c_y) by the angle θ.
Optionally, the centroid coordinates UC(x, y) of each calibration object in the rotation-corrected first bitmap are obtained by multiplying the rotation matrix by the centroid coordinates AC(x, y) of the calibration object in the first bitmap, where AC_x(x, y) and AC_y(x, y) are the centroid abscissa and ordinate of each calibration object in the first bitmap, and UC_x(x, y) and UC_y(x, y) are the centroid abscissa and ordinate of each calibration object in the rotation-corrected first bitmap. Namely, in homogeneous coordinates:
[UC_x(x, y), UC_y(x, y), 1]^T = M · [AC_x(x, y), AC_y(x, y), 1]^T.
It can be understood that rotational distortion is caused by rotation during shooting rather than by distortion of the camera itself. In the above embodiment, rotation correction is performed on the first bitmap according to the rotation direction and the rotation angle, so that a rotation-corrected first bitmap free of rotational distortion is obtained; determining the distortion parameters based on the position differences between the calibration objects in the rotation-corrected first bitmap and in the second bitmap therefore eliminates the interference of the distortion caused by rotation during shooting, and the distortion parameters of the image acquisition device can be determined more accurately.
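A sketch of this correction is given below; it builds the standard rotate-about-a-point matrix and applies the opposite of the estimated rotation to each centroid, the sign convention being an assumption of the sketch.

```python
# Rotation-correction sketch: UC = M . AC in homogeneous coordinates.
import numpy as np

def rotation_matrix(theta, cx, cy):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, (1 - c) * cx + s * cy],
                     [s,  c, (1 - c) * cy - s * cx],
                     [0.0, 0.0, 1.0]])

def rotation_correct(centroids, theta, cx, cy):
    M = rotation_matrix(-theta, cx, cy)  # undo the estimated rotation (assumed sign)
    pts = np.column_stack([np.asarray(centroids, dtype=float),
                           np.ones(len(centroids))])
    return (M @ pts.T).T[:, :2]          # rotation-corrected centroid coordinates
```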
In some embodiments, constructing the second bitmap according to the first lattice centroid coordinates and the preset point distance includes: determining the centroid coordinates of a reference calibration object in the second bitmap according to the first lattice centroid coordinates, the preset point distance, the number of rows of the lattice in the first bitmap, and the number of columns of the lattice in the first bitmap; and generating a lattice matching the number of the plurality of calibration objects according to the centroid coordinates of the reference calibration object and the preset point distance, to obtain the second bitmap.
Optionally, the reference calibration object is the first calibration object in the second bitmap. In this embodiment, determining the centroid coordinates of the reference calibration object in the second bitmap according to the first lattice centroid coordinates, the preset point distance, and the numbers of rows and columns of the lattice in the first bitmap includes: determining a first coordinate distance, on the first coordinate axis, between the centroid coordinates of the reference calibration object and the first lattice centroid coordinates, according to the number of columns of the lattice in the first bitmap and the preset point distance; determining a second coordinate distance, on the second coordinate axis, between the centroid coordinates of the reference calibration object and the first lattice centroid coordinates, according to the number of rows of the lattice in the first bitmap and the preset point distance; and determining the centroid coordinates of the reference calibration object in the second bitmap according to the first coordinate distance, the second coordinate distance, and the first lattice centroid coordinates.
It can be appreciated that in the embodiment of the present application, the first axis is the abscissa axis, and the second axis is the ordinate axis.
Optionally, when the number of rows of the lattice in the first bitmap is even and the number of columns is odd, the first coordinate distance = ((number of columns − 1)/2) × the preset point distance, and the second coordinate distance = (number of rows/2 − 0.5) × the preset point distance.
Optionally, when the numbers of rows and columns of the lattice in the first bitmap are both even, the first coordinate distance = (number of columns/2 − 0.5) × the preset point distance, and the second coordinate distance = (number of rows/2 − 0.5) × the preset point distance.
Optionally, when the number of rows of the lattice in the first bitmap is odd and the number of columns is even, the first coordinate distance = (number of columns/2 − 0.5) × the preset point distance, and the second coordinate distance = ((number of rows − 1)/2) × the preset point distance.
Optionally, when the numbers of rows and columns of the lattice in the first bitmap are both odd, the first coordinate distance = ((number of columns − 1)/2) × the preset point distance, and the second coordinate distance = ((number of rows − 1)/2) × the preset point distance.
It can be noted that, since n/2 − 0.5 = (n − 1)/2, all four cases reduce to: first coordinate distance = ((number of columns − 1)/2) × the preset point distance, and second coordinate distance = ((number of rows − 1)/2) × the preset point distance.
It may be understood that the first lattice centroid coordinates include the centroid abscissa and the centroid ordinate of the first lattice, and the centroid coordinates of the reference calibration object include its centroid abscissa and centroid ordinate. Therefore: centroid abscissa of the reference calibration object = centroid abscissa of the first lattice − first coordinate distance; centroid ordinate of the reference calibration object = centroid ordinate of the first lattice − second coordinate distance.
Optionally, generating a second bitmap matching the number of the plurality of calibration objects according to the centroid coordinates of the reference calibration object and the preset point distance includes: determining the centroid coordinates of each calibration object in the second bitmap according to the centroid coordinates of the reference calibration object and the preset point distance; and generating a calibration object at the centroid coordinates of each calibration object, to obtain the second bitmap.
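The construction can be sketched as follows, using the unified distances noted above ((columns − 1)/2 and (rows − 1)/2 times the preset point distance); `d` stands for the preset point distance in pixel points and is an assumption of the sketch.

```python
# Second-bitmap sketch: ideal centroid grid around the first lattice centroid.
import numpy as np

def second_lattice(lattice_centroid, n_rows, n_cols, d):
    cx, cy = lattice_centroid
    x0 = cx - (n_cols - 1) / 2 * d   # centroid abscissa of the reference object
    y0 = cy - (n_rows - 1) / 2 * d   # centroid ordinate of the reference object
    return np.array([(x0 + c * d, y0 + r * d)
                     for r in range(n_rows) for c in range(n_cols)])
```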
For ease of understanding, a schematic diagram of a second bitmap is provided, as shown in fig. 8. In fig. 8, for differentiated display, the calibration objects in the second bitmap are represented by white circles, each white point being the centroid of a white circle, while the calibration objects in the first bitmap are represented by black circles. As can be seen from fig. 8, the reference calibration object is generated first, the centroid coordinates of each calibration object in the second bitmap are determined in turn according to the preset point distance, and a second bitmap matching the number of calibration objects in the first bitmap is generated; that is, 42 calibration objects are also generated in the second bitmap. The distances between adjacent calibration objects in the second bitmap are uniform and all conform to the preset point distance.
In the above embodiment, the second bitmap matching the number of the plurality of calibration objects can be accurately generated according to the reference calibration object centroid coordinates and the preset dot distance.
In some embodiments, determining distortion parameters of the image acquisition device from the difference in position between the calibration objects in the first and second bitmaps comprises: for each calibration object in the first bitmap, calculating a first distance difference between the centroid coordinates of the calibration object and the first lattice centroid coordinates; calculating a second distance difference between the centroid coordinates of the corresponding calibration object in the second bitmap and the first lattice centroid coordinates; calculating a geometric distortion parameter of each calibration object according to the first distance difference and the second distance difference; and determining the distortion parameters of the image acquisition device according to the geometric distortion parameters of the calibration objects.
If rotation correction is performed on the first bitmap obtained by image acquisition, the step of determining the distortion parameter may be performed based on the rotation-corrected first bitmap. The geometric distortion parameter of each calibration object is calculated from the first distance difference and the second distance difference corresponding to that calibration object. It will be appreciated that each calibration object has its own geometric distortion parameter, first distance difference and second distance difference.
Since the geometric distortion of the image acquisition device affects pixels at different positions in the image to different extents, the geometric distortion parameters of the calibration objects may differ. Thus, determining the distortion parameter of the image acquisition device from the geometric distortion parameters of the calibration objects comprises: determining the calibration objects located at the edge of the first bitmap; determining the geometric distortion parameters corresponding to the calibration objects located at the edge of the first bitmap as candidate geometric distortion parameters; and determining the largest of the candidate geometric distortion parameters as the distortion parameter of the image acquisition device.
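A brief sketch of this selection, assuming the per-object geometric distortion parameters are arranged in a rows-by-columns grid and reading "largest" as largest in magnitude (barrel distortion yields negative GD values); both assumptions are illustrative:

    def device_distortion_parameter(gd):
        """gd: 2-D list where gd[r][c] is the geometric distortion parameter
        of the calibration object in row r, column c of the first bitmap."""
        rows, cols = len(gd), len(gd[0])
        # Candidate parameters: calibration objects on the outer ring of the lattice.
        candidates = [gd[r][c]
                      for r in range(rows) for c in range(cols)
                      if r in (0, rows - 1) or c in (0, cols - 1)]
        return max(candidates, key=abs)  # largest-magnitude candidate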
Optionally, the first lattice centroid coordinates include the centroid abscissa of the first lattice and the centroid ordinate of the first lattice; the centroid coordinates of each calibration object in the second bitmap include the centroid abscissa and the centroid ordinate of that calibration object in the second bitmap. The distortion parameters of the image acquisition device are calculated by the following formulas:
H'(x, y) = √((UC_x(x, y) − COM_x(x, y))² + (UC_y(x, y) − COM_y(x, y))²);
H(x, y) = √((IC_x(x, y) − COM_x(x, y))² + (IC_y(x, y) − COM_y(x, y))²);
GD(x, y) = 100 × (H'(x, y) − H(x, y)) / H(x, y);
wherein: GD(x, y) is the geometric distortion parameter corresponding to each calibration object; H'(x, y) is the first distance difference; H(x, y) is the second distance difference; UC_x(x, y) is the abscissa in the centroid coordinates of each calibration object; UC_y(x, y) is the ordinate in the centroid coordinates of each calibration object; COM_x(x, y) is the centroid abscissa of the first lattice; COM_y(x, y) is the centroid ordinate of the first lattice; IC_x(x, y) is the centroid abscissa of the corresponding calibration object in the second bitmap; IC_y(x, y) is the centroid ordinate of the corresponding calibration object in the second bitmap.
It can be appreciated that if rotation correction is performed on the first bitmap obtained by image acquisition, UC_x(x, y) is the abscissa and UC_y(x, y) is the ordinate in the centroid coordinates of each calibration object in the rotation-corrected first bitmap.
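The three formulas can be sketched as follows; math.hypot computes the Euclidean distances, and the variable names mirror the symbols above:

    import math

    def geometric_distortion(uc, ic, com):
        """uc:  (UC_x, UC_y), centroid of a calibration object in the
                (rotation-corrected) first bitmap
        ic:  (IC_x, IC_y), centroid of the corresponding calibration object
                in the second bitmap
        com: (COM_x, COM_y), first lattice centroid coordinates"""
        h_prime = math.hypot(uc[0] - com[0], uc[1] - com[1])  # first distance difference H'
        h = math.hypot(ic[0] - com[0], ic[1] - com[1])        # second distance difference H
        # The formula is undefined for an object whose ideal position coincides
        # with the lattice centroid (H = 0, e.g. the central dot of an
        # odd-by-odd lattice); such an object would have to be skipped.
        return 100 * (h_prime - h) / h  # geometric distortion parameter GD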
In the above embodiment, the geometric distortion parameter of each calibration object is calculated according to the first distance difference and the second distance difference, so that an accurate distortion parameter of the image acquisition device can be obtained according to the geometric distortion parameters of the calibration objects.
In some embodiments, as shown in fig. 9, a flowchart of another method for determining distortion parameters is provided, and the method is applied to a computer device for illustration, and includes the following steps:
step 901, acquiring a first bitmap obtained by image acquisition of a calibration plate by image acquisition equipment; the plurality of calibration objects in the calibration plate form a lattice in the first lattice diagram.
Step 902, determining the first lattice centroid coordinates according to the centroid coordinates of each calibration object in the first bitmap.
Step 903, determining the rotation direction of the image acquisition device when the image acquisition is carried out according to the centroid coordinates of each calibration object in the first bitmap.
Step 904, determining the rotation angle of the image acquisition device when the image acquisition is carried out according to the centroid coordinates of each calibration object in the first bitmap.
Step 905, performing rotation correction on the first bitmap according to the rotation direction and the rotation angle to obtain a rotation-corrected first bitmap.
Step 906, constructing a second bitmap according to the first lattice centroid coordinates and the preset dot distance; the distance between adjacent calibration objects in the second bitmap conforms to the preset dot distance.
Step 907, for each calibration object, calculating a first distance difference between the centroid coordinates of the calibration object in the rotation-corrected first bitmap and the first lattice centroid coordinates, and calculating a second distance difference between the centroid coordinates of the corresponding calibration object in the second bitmap and the first lattice centroid coordinates.
Step 908, for each calibration object, calculating the geometric distortion parameter of the calibration object according to the first distance difference and the second distance difference.
Step 909, determining the largest of the geometric distortion parameters corresponding to the calibration objects as the distortion parameter of the image acquisition device.
According to the above method for determining distortion parameters, a first bitmap is obtained by image acquisition of the calibration plate by the image acquisition device; the centroid coordinates of the lattice in the first bitmap are determined as the first lattice centroid coordinates according to the centroid coordinates of each calibration object in the first bitmap; a second bitmap is constructed according to the first lattice centroid coordinates and the preset dot distance, the distance between adjacent calibration objects in the second bitmap conforming to the preset dot distance; and accurate distortion parameters of the image acquisition device are obtained according to the position difference between the calibration objects in the first bitmap and the second bitmap.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts related to the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the execution order of these sub-steps or stages is also not necessarily sequential, as they may be performed in turn or alternately with at least some of the other steps or the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application further provides a distortion parameter determining apparatus for implementing the above-mentioned method for determining distortion parameters. The implementation of the solution provided by the apparatus is similar to that described in the above method, so for the specific limitations in the embodiments of the distortion parameter determining apparatus provided below, reference may be made to the limitations of the method for determining distortion parameters above, which are not repeated here.
In some embodiments, as shown in fig. 10, there is provided a distortion parameter determining apparatus, including: an acquisition module 1001, a first determination module 1002, a second determination module 1003, and a third determination module 1004, wherein:
an acquisition module 1001, configured to acquire a first bitmap obtained by image acquisition of the calibration board by the image acquisition device; the plurality of calibration objects in the calibration plate form a lattice in the first lattice diagram.
The first determining module 1002 is configured to determine, according to the centroid coordinates of each calibration object in the first bitmap, the centroid coordinates of the lattice in the first bitmap as the first lattice centroid coordinates.
A second determining module 1003 is configured to construct a second bitmap according to the first lattice centroid coordinates and the preset dot distance; the distance between adjacent calibration objects in the second bitmap conforms to the preset dot distance.
A third determining module 1004 is configured to determine the distortion parameters of the image acquisition device according to the position difference between the calibration objects in the first and second bitmaps.
In some embodiments, the third determining module 1004 is configured to: for each calibration object in the first bitmap, calculate a first distance difference between the centroid coordinates of the calibration object and the first lattice centroid coordinates; calculate a second distance difference between the centroid coordinates of the corresponding calibration object in the second bitmap and the first lattice centroid coordinates; calculate the geometric distortion parameter of each calibration object according to the first distance difference and the second distance difference; and determine the distortion parameters of the image acquisition device according to the geometric distortion parameters of the calibration objects.
In some embodiments, the apparatus further comprises:
a rotation correction module (not shown in the figure), configured to determine the rotation direction of the image acquisition device when the image acquisition is carried out according to the centroid coordinates of each calibration object in the first bitmap; determine the rotation angle of the image acquisition device when the image acquisition is carried out according to the centroid coordinates of each calibration object in the first bitmap; and perform rotation correction on the first bitmap according to the rotation direction and the rotation angle. The third determining module 1004 is further configured to determine the distortion parameters of the image acquisition device according to the position difference between the calibration objects in the rotation-corrected first bitmap and the second bitmap.
In some embodiments, the first determining module 1002 is specifically configured to determine, for each calibration object in the first bitmap, a first coordinate of the calibration object on a first coordinate axis and a second coordinate of the calibration object on a second coordinate axis; determine calibration objects whose coordinates are adjacent on the first coordinate axis based on the first coordinates respectively corresponding to the calibration objects; and determine the rotation direction of the image acquisition device when the image acquisition is carried out according to the magnitude relation between the second coordinates of the coordinate-adjacent calibration objects.
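As one possible reading of this step, the sketch below takes the centroids of the calibration objects in a single row (see the row-numbering sketch further below), orders them along the first coordinate axis and averages the differences of their second coordinates; the mapping from sign to direction is an assumption for image coordinates with y growing downward:

    def rotation_direction(row_centroids):
        """row_centroids: (x, y) centroids of the calibration objects that
        share one row of the first bitmap."""
        pts = sorted(row_centroids)  # coordinate-adjacent along the first axis
        diffs = [b[1] - a[1] for a, b in zip(pts, pts[1:])]
        mean_diff = sum(diffs) / len(diffs)
        if mean_diff > 0:
            return "clockwise"          # the row drops toward the right
        if mean_diff < 0:
            return "counterclockwise"   # the row rises toward the right
        return "none"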
In some embodiments, the first determining module 1002 is specifically configured to determine, according to the centroid coordinates of each calibration object in the first bitmap, the row number of each calibration object in the first bitmap; and determine the rotation angle of the image acquisition device when the image acquisition is carried out according to the centroid coordinates corresponding to the calibration objects with the same row number.
In some embodiments, the first determining module 1002 is specifically configured to sort the calibration objects according to the second coordinate in the centroid coordinates of each calibration object in the first bitmap; determine the first calibration object among the sorted calibration objects as the current calibration object, and calculate, for the current calibration object, a second coordinate difference between the current calibration object and the coordinate-adjacent calibration object; if the second coordinate difference is smaller than or equal to a preset difference, determine that the current calibration object and the coordinate-adjacent calibration object are in the same row; if the second coordinate difference is larger than the preset difference, determine that the current calibration object and the coordinate-adjacent calibration object are in different rows and that the coordinate-adjacent calibration object is in the next row; then take the coordinate-adjacent calibration object as the new current calibration object, and return to the step of calculating the second coordinate difference between the current calibration object and the coordinate-adjacent calibration object, until the row number of each calibration object in the first bitmap is determined.
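This traversal can be sketched as follows; the preset difference value max_gap (for example, half the preset dot distance) is an illustrative assumption:

    def assign_row_numbers(centroids, max_gap):
        """Return (centroid, row_number) pairs, numbered from 1, following
        the row-determination procedure described above."""
        ordered = sorted(centroids, key=lambda p: p[1])  # sort by second coordinate
        numbered = [(ordered[0], 1)]                     # first object is the current one
        row = 1
        for prev, cur in zip(ordered, ordered[1:]):
            if cur[1] - prev[1] > max_gap:  # second coordinate difference
                row += 1                    # the adjacent object starts the next row
            numbered.append((cur, row))
        return numbered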
In some embodiments, the first determining module 1002 is specifically configured to construct, according to the centroid coordinates corresponding to the calibration objects with the same row number, a straight line on which those calibration objects lie; and determine the rotation angle of the image acquisition device when the image acquisition is carried out according to the slope of the straight line on which the calibration objects with the same row number lie.
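A sketch of this step, using an ordinary least-squares line fit through one row as an illustrative way to obtain the slope (the application only requires the slope of the straight line through the row), with the angle recovered via the arctangent:

    import math

    def rotation_angle_degrees(row_centroids):
        """Fit a straight line through the centroids of calibration objects
        sharing a row number and convert its slope to a rotation angle."""
        n = len(row_centroids)
        mean_x = sum(x for x, _ in row_centroids) / n
        mean_y = sum(y for _, y in row_centroids) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in row_centroids)
                 / sum((x - mean_x) ** 2 for x, _ in row_centroids))
        return math.degrees(math.atan(slope))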
In some embodiments, the second determining module 1003 is configured to construct the second bitmap according to the first lattice centroid coordinates and the preset dot distance by: determining the centroid coordinates of the reference calibration object in the second bitmap according to the first lattice centroid coordinates, the preset dot distance, the number of rows of the lattice in the first bitmap and the number of columns of the lattice in the first bitmap; and generating a lattice diagram matching the number of the plurality of calibration objects according to the reference calibration object centroid coordinates and the preset dot distance, to obtain the second bitmap.
The respective modules in the above distortion parameter determining apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, the processor in the computer device, or may be stored in software form in the memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In some embodiments, a computer device is provided, which may be a server or a terminal, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of determining distortion parameters.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In some embodiments, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, the RAM may take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, the combinations should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.