CN115830134A - Camera calibration method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN115830134A
CN115830134A
Authority
CN
China
Prior art keywords
target
edge
determining
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211063168.8A
Other languages
Chinese (zh)
Inventor
刘斯宁
赵昌华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Astatine Technology Co ltd
Original Assignee
Shanghai Astatine Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Astatine Technology Co ltd filed Critical Shanghai Astatine Technology Co ltd
Priority to CN202211063168.8A priority Critical patent/CN115830134A/en
Publication of CN115830134A publication Critical patent/CN115830134A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure provides a camera calibration method, a camera calibration apparatus, an electronic device, and a medium. The method includes: acquiring a target image captured by a camera to be calibrated, in which a plurality of marker patterns are displayed; extracting, from the target image, the edge curves belonging to the same marker pattern; determining the target coordinates and normal vectors of the marker patterns' center points in the camera coordinate system according to the image positions of the marker patterns' edge curves in the target image; and determining the external parameters of the camera to be calibrated according to those target coordinates and normal vectors. External parameter calibration of the camera can thus be completed from an image shot by the camera itself, which simplifies user operation and improves user experience.

Description

Camera calibration method and device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of camera calibration technologies, and in particular, to a camera calibration method and apparatus, an electronic device, and a medium.
Background
Camera calibration is a very important technique in image processing. In image measurement and machine vision applications, determining the correlation between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in the image requires establishing a geometric model of camera imaging; the parameters of that model are the camera parameters. The camera parameters can be obtained through experiment and calculation, and the process of solving for them is called camera calibration. Calibration accuracy determines whether a machine vision system can effectively locate, detect, segment, and range regions of interest in an image.
Camera calibration can be divided into two sub-processes: internal parameter (intrinsic) calibration and external parameter (extrinsic) calibration. The camera external parameters, also called the camera pose, are generally expressed in matrix form and can be decomposed into a translation matrix (position) and a rotation matrix (orientation). External parameter calibration means determining the rotation matrix and translation matrix by some method so that the complete external parameter matrix can be computed. Multiplying a point in the world coordinate system by the external parameter matrix maps it into the camera coordinate system; conversely, multiplying a point in the camera coordinate system by the inverse of the external parameter matrix maps it back into the world coordinate system. That is, the camera external parameters describe the mapping between the camera coordinate system and the world coordinate system.
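As a minimal NumPy illustration of this mapping (not taken from the patent — the rotation and translation values below are invented for the example), a point can be moved between the world and camera coordinate systems with a 4 × 4 homogeneous external parameter matrix and its inverse:

```python
import numpy as np

# Hypothetical external parameters: a 30-degree rotation about z plus a translation.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.2, 1.5])

T = np.eye(4)          # homogeneous external parameter matrix [R | t]
T[:3, :3] = R
T[:3, 3] = t

p_world = np.array([0.3, 0.4, 2.0, 1.0])   # world point (homogeneous coordinates)
p_cam = T @ p_world                        # world -> camera
p_back = np.linalg.inv(T) @ p_cam          # camera -> world, via the inverse matrix

assert np.allclose(p_back, p_world)        # round trip recovers the original point
```

The round trip confirms the relationship stated above: the external parameter matrix and its inverse carry points back and forth between the two coordinate systems.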
Therefore, how to calibrate a camera to determine its external parameters is very important.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
The disclosure provides a camera calibration method, a camera calibration apparatus, an electronic device, and a medium, so that external parameter calibration of a camera to be calibrated can be completed according to an image shot by the camera itself, simplifying user operation and improving user experience.
An embodiment of a first aspect of the present disclosure provides a camera calibration method, including:
acquiring a target image acquired by a camera to be calibrated; wherein a plurality of marker patterns are displayed in the target image;
extracting edge curves belonging to the same mark pattern from the target image;
determining target coordinates and normal vectors of center points of the plurality of mark patterns in a camera coordinate system according to image positions of edge curves of the plurality of mark patterns in the target image;
and determining external parameters of the camera to be calibrated according to the target coordinates and normal vectors of the central points of the plurality of marking patterns.
An embodiment of a second aspect of the present disclosure provides a camera calibration apparatus, including:
the acquisition module is used for acquiring a target image acquired by a camera to be calibrated; wherein a plurality of marker patterns are displayed in the target image;
the extraction module is used for extracting edge curves belonging to the same mark pattern from the target image;
the first determining module is used for determining target coordinates and normal vectors of center points of the plurality of mark patterns in a camera coordinate system according to image positions of edge curves of the plurality of mark patterns in the target image;
and the second determining module is used for determining the external parameters of the camera to be calibrated according to the target coordinates and the normal vectors of the central points of the plurality of marking patterns.
An embodiment of a third aspect of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the camera calibration method provided by the embodiment of the first aspect of the disclosure.
In an embodiment of a fourth aspect of the present disclosure, a computer-readable storage medium is provided, where computer instructions are stored, and the computer instructions are configured to cause the computer to execute the camera calibration method provided in the embodiment of the first aspect of the present disclosure.
An embodiment of a fifth aspect of the present disclosure provides a computer program product, which includes a computer program that, when executed by a processor, implements the camera calibration method described in the embodiment of the first aspect of the present disclosure.
One embodiment of the present disclosure described above has at least the following advantages or benefits:
External parameter calibration of the camera to be calibrated can be completed according to an image shot by the camera itself, which simplifies user operation and improves user experience.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a camera calibration method according to a first embodiment of the disclosure;
fig. 2 is a schematic flowchart of a camera calibration method according to a second embodiment of the disclosure;
fig. 3 is a schematic flowchart of a camera calibration method according to a third embodiment of the disclosure;
fig. 4 is a schematic flowchart of a camera calibration method according to a fourth embodiment of the disclosure;
fig. 5 is a schematic structural diagram of a camera calibration apparatus according to a fifth embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present disclosure, and should not be construed as limiting the present disclosure.
A television top-mounted camera is a camera mounted on top of a television set. Besides the cameras that some televisions ship with, consumers can buy aftermarket cameras on the open market and install them themselves. Such self-installed cameras currently serve two typical purposes. The first is video interaction, such as video conferencing and interactive games, where the camera's subject is the television viewer. The second is analyzing the content of the television picture itself, as in viewing-atmosphere improvement systems, where the camera's subject is the television picture.
For the second application, an existing external parameter calibration method works as follows: the consumer is asked to paste 7 square markers along the television's frame — 3 markers on each of the left and right side frames and 1 marker in the middle of the bottom frame. From these markers, the calibration software establishes the television's outer contour and dimensional relationships and then computes the external parameters between the camera coordinate system and the television coordinate system (the coordinate system defined by the 7 markers).
However, this external parameter calibration method has the following disadvantages:
First, a large number of markers is required. Because the markers are pasted by ordinary consumers without any training and without a reliable positioning aid, some markers end up with large position errors, and calibration accuracy cannot be guaranteed.
Second, the material, shape, and color of the markers are specially designed and, in practice, can only be supplied by the manufacturer. When the camera pose changes for any reason, the consumer has to recalibrate; if some of the markers have been lost, the consumer must seek technical support from the manufacturer, which increases the consumer's time cost.
Third, the markers must be attached to the television set and removed once calibration is complete, and every such operation increases the risk of physical damage to and surface contamination of the set.
Fourth, the calibration process is time-consuming and laborious. To ensure a good calibration result, the consumer may need to adjust the marker positions many times while taking care not to damage the television through accidental mishandling; pasting and removing the markers therefore demands care, the time cost is high, and the user experience is poor.
In view of at least one of the above problems, the present disclosure provides a camera calibration method, an apparatus, an electronic device, and a storage medium.
The following describes a camera calibration method, apparatus, electronic device, and storage medium according to embodiments of the present disclosure with reference to the drawings.
Fig. 1 is a schematic flowchart of a camera calibration method according to a first embodiment of the disclosure.
The camera calibration method of the embodiment of the disclosure can be applied to any electronic device, so that the electronic device can execute a camera calibration function.
The electronic device may be any device with computing capability, for example, a personal computer, a mobile terminal, a server, and the like, and the mobile terminal may be a hardware device with various operating systems, touch screens, and/or display screens, such as a mobile phone, a tablet computer, a personal digital assistant, and a wearable device.
As shown in fig. 1, the camera calibration method may include the steps of:
step 101, acquiring a target image acquired by a camera to be calibrated; wherein, a plurality of mark patterns are displayed in the target image.
In the embodiment of the disclosure, the camera to be calibrated refers to a camera that needs to be calibrated with external reference.
In the disclosed embodiment, the number of marker patterns may be, but is not limited to, two.
In the embodiment of the present disclosure, the shape and color of the marker patterns are not limited; for example, the shape may be a circle, ellipse, square, rectangle, and so on, and the color may be black, white, red, and so on.
In the embodiment of the disclosure, the camera to be calibrated can shoot a plurality of marking patterns to obtain a target image, so that the target image acquired by the camera to be calibrated can be acquired in the disclosure.
Step 102, extracting edge curves belonging to the same mark pattern from the target image.
In the embodiment of the present disclosure, edge curves belonging to the same marker pattern are extracted from the target image based on an image recognition technique. For example, edge detection may be performed on the target image to obtain the edge curves belonging to the same marker pattern.
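To make the edge-detection step concrete, here is a self-contained sketch — an illustration only, since the patent does not specify which edge detector is used — that computes a Sobel gradient-magnitude edge map in NumPy:

```python
import numpy as np

def sobel_edges(gray, thresh=0.5):
    """Boolean edge map from the Sobel gradient magnitude (illustration only)."""
    gray = np.asarray(gray, float)
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(gray.shape[0]):          # naive convolution, fine for a sketch
        for j in range(gray.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros_like(mag, dtype=bool)
    return mag > thresh * mag.max()

# Synthetic target image: a bright square whose border should register as edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
edges = sobel_edges(img)
```

On the synthetic image, edge pixels appear along the border of the bright square while its flat interior stays unmarked; in practice, a production detector such as Canny would typically be used instead.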
And 103, determining target coordinates and normal vectors of the central points of the plurality of mark patterns in a camera coordinate system according to the image positions of the edge curves of the plurality of mark patterns in the target image.
In the embodiment of the present disclosure, the origin O of the camera coordinate system is the optical center of the camera to be calibrated, and the z-axis is its optical axis. The x-axis and y-axis span a plane perpendicular to the z-axis, with the x-axis pointing along the horizontal direction and the y-axis perpendicular to the x-axis. Equivalently, the origin of the camera coordinate system is the optical center of the camera to be calibrated, the x-axis and y-axis of the camera coordinate system are parallel to the x-axis and y-axis of the target image's coordinate system, and the z-axis is the optical axis of the camera to be calibrated. The intersection of the optical axis with the target image plane is the origin of the image coordinate system.
The image coordinate system is a two-dimensional rectangular coordinate system whose origin is the center of the target image; its x-axis and y-axis are parallel to two sides of the target image, with the x-axis pointing horizontally to the right and the y-axis pointing vertically downward.
In the embodiment of the present disclosure, for each of the plurality of marker patterns, the coordinates of its center point in the camera coordinate system (referred to as the target coordinates in the present disclosure) and the normal vector of its center point in the camera coordinate system may be determined according to the image position of that marker pattern's edge curve in the target image.
And step 104, determining external parameters of the camera to be calibrated according to the target coordinates and normal vectors of the central points of the plurality of marked patterns.
In the embodiment of the disclosure, the external parameters of the camera to be calibrated can be determined according to the target coordinates and normal vectors of the central points of the plurality of marked images.
The camera calibration method of the embodiment of the disclosure acquires a target image acquired by a camera to be calibrated; wherein a plurality of marker patterns are displayed in the target image; extracting edge curves belonging to the same mark pattern from the target image; determining target coordinates and normal vectors of center points of the plurality of mark patterns in a camera coordinate system according to image positions of edge curves of the plurality of mark patterns in a target image; and determining external parameters of the camera to be calibrated according to the target coordinates and normal vectors of the central points of the plurality of marked patterns. Therefore, external reference calibration of the camera to be calibrated can be completed according to an image shot by the camera to be calibrated, user operation can be simplified, and user experience is improved.
In order to clearly illustrate how the above embodiments extract the edge curves belonging to the same mark pattern from the target image, the present disclosure also proposes a camera calibration method.
Fig. 2 is a schematic flowchart of a camera calibration method according to a second embodiment of the disclosure.
As shown in fig. 2, the camera calibration method may include the steps of:
step 201, acquiring a target image acquired by a camera to be calibrated; wherein, a plurality of mark patterns are displayed in the target image.
For the explanation of step 201, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
In a possible implementation manner of the embodiment of the present disclosure, in order to improve the accuracy and reliability of the external reference calculation result, the target image may be preprocessed, where the preprocessing includes at least one of color space transformation processing, noise reduction smoothing processing, binarization processing, and erosion processing.
As an example, when the preprocessing includes a color space transform, the target image may be converted from a three-channel (R (red), G (green), B (blue)) color image into a single-channel grayscale image. For example, when performing the color space transform on the target image, assuming that the input and output values are all normalized to the [0, 1] interval, the transform formula may be:
Y=CLIP(0.299*R+0.587*G+0.114*B); (1)
wherein R, G, and B are the values of the pixel's color channels before conversion, Y is the pixel's value after conversion, and the CLIP() function limits the output value Y to the interval [0, 1].
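Formula (1) can be written directly in NumPy (a straightforward sketch of the stated transform):

```python
import numpy as np

def rgb_to_luma(r, g, b):
    """Formula (1): Y = CLIP(0.299*R + 0.587*G + 0.114*B), values in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(y, 0.0, 1.0)

y_white = rgb_to_luma(1.0, 1.0, 1.0)   # pure white maps to (approximately) 1.0
y_black = rgb_to_luma(0.0, 0.0, 0.0)   # pure black maps to 0.0
```

The three weights sum to 1, so the clip only acts when rounding pushes the result fractionally outside the interval.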
As an example, when the preprocessing includes a noise reduction smoothing process, the noise reduction smoothing process includes, but is not limited to, filtering processes such as gaussian filtering and median filtering.
As an example, when the preprocessing includes binarization processing, binarization operation algorithms may be employed to perform binarization processing on the target image.
As an example, when the preprocessing includes erosion processing, an erosion operation algorithm may be employed to perform erosion processing on the target image.
Step 202, performing edge detection on the target image to obtain an edge curve set, where the edge curve set includes at least one edge curve.
In the embodiment of the disclosure, edge detection may be performed on a target image based on an edge detection algorithm to obtain an edge curve set, where the edge curve set includes at least one edge curve.
Step 203, for any edge curve in the edge curve set, determining the length of the any edge curve.
In the disclosed embodiment, for any one of the set of edge curves, the length of the edge curve may be determined. For example, the length of the edge curve may be determined according to the coordinates of each pixel point on the edge curve.
And 204, under the condition that the length is greater than the first length threshold, segmenting any one edge curve to obtain a plurality of sub-edge curves, deleting any one edge curve from the edge curve set, and adding the plurality of sub-edge curves to the edge curve set.
The first length threshold is a preset length threshold.
In the embodiment of the present disclosure, when the length of a certain edge curve is greater than a first length threshold, the edge curve may be segmented to obtain a plurality of sub-edge curves, where the length of each sub-edge curve is less than or equal to the first length threshold. Thereafter, the edge curve may be deleted from the set of edge curves and a plurality of sub-edge curves may be added to the set of edge curves.
In a possible implementation manner of the embodiment of the present disclosure, in order to reduce the influence of very short curves on subsequent calculation, an edge curve whose length is smaller than a second length threshold may be deleted from the edge curve set. The second length threshold is a preset length threshold that is smaller than the first length threshold.
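Steps 203–204 and the short-curve filtering above can be sketched as follows. The patent does not specify the exact segmentation rule, so the recursive-halving split is an assumption made for illustration:

```python
import numpy as np

def polyline_length(pts):
    """Length of a curve: sum of distances between consecutive pixel coordinates."""
    pts = np.asarray(pts, float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def refine_curves(curves, t_long, t_short):
    """Split curves longer than t_long (halving recursively, an assumed rule)
    and drop curves shorter than t_short."""
    out = []
    for c in curves:
        n = polyline_length(c)
        if n < t_short:
            continue                   # too short: delete from the set
        if n > t_long:
            mid = len(c) // 2
            # split at the midpoint and recurse until every piece is <= t_long
            out.extend(refine_curves([c[:mid + 1], c[mid:]], t_long, t_short))
        else:
            out.append(c)
    return out

# A straight 100-pixel curve is split under t_long = 30 into four 25-pixel pieces.
curve = [(x, 0) for x in range(101)]
pieces = refine_curves([curve], t_long=30.0, t_short=5.0)
```

Sharing the midpoint between the two halves keeps the pieces connected, so the total length of the refined set equals that of the original curve.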
And step 205, clustering the curves in the updated edge curve set to obtain a plurality of clusters.
In the embodiment of the present disclosure, each curve in the updated edge curve set may be clustered based on a boundary clustering algorithm to obtain a plurality of clusters.
Step 206, determining a target cluster to which each mark pattern belongs from the plurality of clusters according to the sizes of the plurality of mark patterns and the distances between the plurality of mark patterns, wherein the target cluster comprises edge curves belonging to the same mark pattern.
It should be noted that the plurality of marker patterns are preset, so the size of each marker pattern (for example, all patterns may have the same shape and size, such as circles of radius r) and the distance between the patterns can be obtained.
In the embodiment of the present disclosure, a target cluster to which each mark pattern belongs may be determined from a plurality of clusters according to sizes of the plurality of mark patterns and distances between the plurality of mark patterns.
For example, suppose there are two marker patterns, both circles of radius r. If clustering yields 3 clusters — cluster 1, cluster 2, and cluster 3 — and the radius of the circle indicated by the edge curves in cluster 1 equals the radius of the circle indicated by the edge curves in cluster 2, then cluster 1 and cluster 2 may be taken as the target clusters.
And step 207, determining target coordinates and normal vectors of the central points of the plurality of mark patterns in a camera coordinate system according to the image positions of the edge curves of the plurality of mark patterns in the target image.
In the embodiment of the present disclosure, the target coordinates and normal vectors of the central point of the mark pattern corresponding to each target cluster in the camera coordinate system may be determined according to the image position of the edge curve in the target image in each target cluster.
And 208, determining external parameters of the camera to be calibrated according to the target coordinates and normal vectors of the central points of the plurality of marked patterns.
For the explanation of step 208, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
With the camera calibration method of this embodiment of the disclosure, the edge curves belonging to the same marker pattern can be effectively extracted from the target image by means of edge detection and clustering.
In order to clearly illustrate how the target coordinates and normal vectors of the central points of the plurality of marker patterns in the camera coordinate system are determined in any embodiment of the present disclosure, the present disclosure further provides a camera calibration method.
Fig. 3 is a schematic flowchart of a camera calibration method provided in the third embodiment of the disclosure.
As shown in fig. 3, the camera calibration method may include the steps of:
step 301, acquiring a target image acquired by a camera to be calibrated; wherein, a plurality of mark patterns are displayed in the target image.
Step 302, performing edge detection on the target image to obtain an edge curve set, where the edge curve set includes at least one edge curve.
Step 303, for any edge curve in the edge curve set, determining the length of any edge curve.
And 304, under the condition that the length is greater than the first length threshold value, segmenting any edge curve to obtain a plurality of sub-edge curves, deleting any edge curve from the edge curve set, and adding the plurality of sub-edge curves to the edge curve set.
And 305, clustering the curves in the updated edge curve set to obtain a plurality of clusters.
Step 306, determining a target cluster to which each mark pattern belongs from the plurality of clusters according to the sizes of the plurality of mark patterns and the distances between the plurality of mark patterns.
For the explanation of steps 301 to 306, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 307, for any one of the plurality of marker patterns, extracting a plurality of target edge points from the target cluster to which that marker pattern belongs.
In the embodiment of the present disclosure, for any one of the plurality of marker patterns, a plurality of target edge points may be extracted from the target cluster to which that marker pattern belongs.
As a possible implementation manner, in order to take account of the calculation speed and the accuracy of the calculation result, at least one target edge point may be extracted from each edge curve in the target cluster.
As an example, for each curve in the target cluster to which the marker pattern belongs, at least one candidate edge point may be extracted from that curve, and the image position of each candidate edge point in the target image determined. The distances between candidate edge points can then be computed from those image positions, and the target edge points selected from the candidates according to those distances.
For example, candidate edge points whose distance is greater than a set distance threshold may be set as the target edge points.
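One plausible reading of this selection rule — greedily keeping candidates that are at least a set distance apart — can be sketched as follows (the greedy strategy is an assumption, not stated in the patent):

```python
import numpy as np

def pick_spread_points(candidates, min_dist):
    """Greedily keep candidate edge points at least min_dist apart in the image."""
    kept = []
    for p in np.asarray(candidates, float):
        if all(np.linalg.norm(p - q) >= min_dist for q in kept):
            kept.append(p)
    return kept

# Tightly clustered candidates collapse to three well-separated target edge points.
pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (10.0, 1.0), (20.0, 0.0)]
sel = pick_spread_points(pts, min_dist=5.0)
```

Spreading the retained points across the curve helps condition the ellipse fit in the next step while keeping the point count, and hence the computation, small.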
And 308, determining the target coordinates and normal vectors of the central point of any mark pattern in the camera coordinate system according to the image positions of the edge points of the targets in the target image.
In the embodiment of the present disclosure, the target coordinates and normal vectors of the central point of the mark pattern in the camera coordinate system may be determined according to the image positions of the plurality of target edge points in the target image (for example, the position coordinates of the plurality of target edge points in the image coordinate system).
As a possible implementation manner, a target coefficient matrix may be determined according to image positions of the plurality of target edge points in the target image (for example, position coordinates of the plurality of target edge points in the image coordinate system), where the target coefficient matrix is used to indicate a shape of the mark pattern.
As an example, take a circular marker pattern. Owing to perspective projection, it appears as an ellipse in the target image, and the position coordinates (x, y) of the target edge points in the image coordinate system can be substituted into the following ellipse matrix equation:
GX=b; (2)
wherein, with the coefficient of x² normalized to 1 (a reconstruction from context — the original matrix figures are not reproduced — consistent with the coefficients B, C, D, E, F below):
G = [ x₀y₀ y₀² x₀ y₀ 1 ; x₁y₁ y₁² x₁ y₁ 1 ; … ; xₘ₋₁yₘ₋₁ yₘ₋₁² xₘ₋₁ yₘ₋₁ 1 ], X = [B, C, D, E, F]ᵀ, b = −[x₀², x₁², …, xₘ₋₁²]ᵀ;
where m represents the number of target edge points, and (x₀, y₀), (x₁, y₁), …, (xₘ₋₁, yₘ₋₁) represent the position coordinates of the m target edge points in the image coordinate system.
Then, the ellipse coefficients can be estimated by least squares: X = (GᵀG)⁻¹Gᵀb. A corresponding target coefficient matrix Q can then be constructed for the marker pattern from the coefficients X. For example, Q may be a 3 × 3 symmetric matrix containing 6 distinct elements, 5 of which are B, C, D, E, and F from X, and the other of which is a fixed value (e.g., 1).
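The least-squares fit X = (GᵀG)⁻¹Gᵀb and the construction of a symmetric Q can be sketched as follows. The placement of B…F inside Q follows the standard conic-matrix convention with halved cross terms, which is an assumption — the patent does not give Q explicitly:

```python
import numpy as np

def fit_ellipse_coeffs(xs, ys):
    """Least-squares fit of x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0,
    with the coefficient of x^2 normalized to 1: X = (G^T G)^-1 G^T b."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    G = np.column_stack([xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    b = -xs**2
    X, *_ = np.linalg.lstsq(G, b, rcond=None)  # numerically stable least squares
    B, C, D, E, F = X
    # Assumed construction: standard symmetric conic matrix (cross terms halved).
    Q = np.array([[1.0,     B / 2.0, D / 2.0],
                  [B / 2.0, C,       E / 2.0],
                  [D / 2.0, E / 2.0, F      ]])
    return X, Q

# Edge points sampled from the circle x^2 + y^2 = 4 (an ellipse with B=D=E=0).
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
X, Q = fit_ellipse_coeffs(2.0 * np.cos(t), 2.0 * np.sin(t))
```

On exact circle samples the fit recovers B = D = E = 0, C = 1, F = −4, i.e. x² + y² − 4 = 0.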
In this disclosure, the target coefficient matrix may be decomposed into a diagonal matrix and an intermediate matrix; for example, denoting the diagonal matrix by Λ and the intermediate matrix by V:
Q=VΛVᵀ; (3)
wherein Λ = diag(λ₁, λ₂, λ₃) holds the eigenvalues of the symmetric matrix Q, and V is the corresponding eigenvector matrix. [The original equation image giving their explicit form is not reproduced.]
then, an intermediate vector may be generated according to the value of each diagonal element in the diagonal matrix, for example, the intermediate vector may be
Figure BDA0003827107340000102
An intermediate coefficient is also generated from the values of the diagonal elements and the size of the marker pattern; for example, if the marker pattern is a circle of radius r, the intermediate coefficient σ is given by: [Equation image not reproduced: the formula for σ.]
Therefore, in the present disclosure, the target coordinate of the center point of the mark pattern in the camera coordinate system may be determined according to the intermediate coefficient, the intermediate vector and the intermediate matrix; for example, the target coordinate C of the center point may be:

C = s3·σ·V·( s1·λ3·w1,  0,  s2·λ1·w3 )^T; (4)
wherein s1 ~ s3 represent signs (±1), and the correct signs can be retained according to the actual situation.
Furthermore, a normal vector of the center point of the mark pattern in the camera coordinate system may be determined according to the intermediate vector and the intermediate matrix; for example, the normal vector N of the center point may be:

N = V·( s1·w1,  0,  s2·w3 )^T; (5)
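The decomposition in equation (3) can be sketched with NumPy's symmetric eigensolver (a minimal illustration of the decomposition step only, not the full center/normal recovery; the descending eigenvalue ordering is an assumption noted in the comments):

```python
import numpy as np

def decompose_conic(Q):
    """Eigendecompose a symmetric conic matrix so that Q = V @ diag(lams) @ V.T.

    Returns the eigenvalues sorted in descending order (assumed ordering
    lam1 >= lam2 >= lam3) and the matrix V whose columns are the
    corresponding unit eigenvectors.
    """
    lams, V = np.linalg.eigh(Q)        # eigh returns ascending eigenvalues
    order = np.argsort(lams)[::-1]     # reorder to descending
    return lams[order], V[:, order]
```

The returned Λ = diag(λ1, λ2, λ3) and V are then the inputs to the intermediate-vector and intermediate-coefficient steps.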
step 309, determining external parameters of the camera to be calibrated according to the target coordinates and normal vectors of the central points of the plurality of marking patterns.
For the explanation of step 309, reference may be made to the related description in any embodiment of the present disclosure, and details are not described herein.
The camera calibration method disclosed by the embodiment of the disclosure can effectively calculate the coordinates and normal vectors of the center point of each mark pattern in the camera coordinate system according to the image position of the target edge point on the edge curve of each mark pattern in the target image.
In order to clearly illustrate how the external parameters of the camera to be calibrated are determined according to the target coordinates and normal vectors of the center points of the plurality of marking patterns in any embodiment of the present disclosure, the present disclosure further proposes a camera calibration method.
Fig. 4 is a schematic flowchart of a camera calibration method according to a fourth embodiment of the disclosure.
As shown in fig. 4, the camera calibration method may include the steps of:
step 401, acquiring a target image acquired by a camera to be calibrated; wherein, a plurality of mark patterns are displayed in the target image.
Step 402, edge curves belonging to the same marking pattern are extracted from the target image.
And step 403, determining target coordinates and normal vectors of the central points of the plurality of mark patterns in a camera coordinate system according to the image positions of the edge curves of the plurality of mark patterns in the target image.
For the explanation of steps 401 to 403, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 404, determining a first unit vector of a horizontal axis of the world coordinate system according to the difference between the target coordinates of the center points of the marker patterns.
In the embodiment of the present disclosure, the world coordinate system may be a coordinate system previously established from a plurality of marker patterns. For example, taking the number of the mark patterns as 2 and the shape of the mark patterns as a circle as an example, the midpoint of the connecting line of the centers of the circles of the two mark patterns may be used as the origin of the world coordinate system, the direction of the connecting line of the centers of the circles is used as the x-axis direction, the direction perpendicular to the x-axis on the plane of the mark patterns is used as the y-axis direction, and the z-axis is perpendicular to the plane of the mark patterns.
In the embodiment of the present disclosure, the first unit vector of the horizontal axis (x-axis) of the world coordinate system may be determined according to the difference between the target coordinates of the center point of each marker pattern.
Taking the number of marking patterns as 2 and the target coordinates of the center points of the two marking patterns as C1 and C2, the first unit vector i_w of the horizontal axis (x-axis) of the world coordinate system can be:

i_w = (C2 - C1) / |C2 - C1|; (6)

wherein i_w is the projection coordinate of the x-axis unit vector of the world coordinate system in the camera coordinate system.
Step 405, determining a second unit vector of the vertical axis of the world coordinate system according to the normal vector.
In the disclosed embodiment, the second unit vector of the vertical axis (z-axis) of the world coordinate system may be determined from the normal vector.
Taking the number of marker patterns as 2 as an example, the second unit vector k_w of the vertical axis (z-axis) of the world coordinate system can be:

k_w = N; (7)

wherein k_w is the projection coordinate of the z-axis unit vector of the world coordinate system in the camera coordinate system.
Step 406, determining a third unit vector of the longitudinal axis of the world coordinate system according to the first unit vector and the second unit vector.

In the disclosed embodiment, the third unit vector of the longitudinal axis (y-axis) may be determined from the first unit vector of the horizontal axis (x-axis) and the second unit vector of the vertical axis (z-axis) of the world coordinate system.
Continuing the above example, the third unit vector j_w of the longitudinal axis (y-axis) of the world coordinate system can be:

j_w = k_w × i_w; (8)

wherein j_w is the projection coordinate of the y-axis unit vector of the world coordinate system in the camera coordinate system.
Step 407, determining external parameters of the camera to be calibrated according to the first unit vector, the second unit vector and the third unit vector.
In the embodiment of the disclosure, the external parameters of the camera to be calibrated can be determined according to the first unit vector, the second unit vector and the third unit vector.
For example, if the external parameter of the camera to be calibrated is the matrix M, then:

M = ( i_w  j_w  k_w ); (9)
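Steps 404 to 407 can be sketched as follows (a hypothetical helper; it assumes the center coordinates C1, C2 and the unit normal N have already been recovered, and it renormalizes j_w in case N is not exactly orthogonal to i_w due to noise):

```python
import numpy as np

def extrinsics_from_circles(C1, C2, N):
    """Build the rotation part M = (i_w j_w k_w) of the extrinsics.

    C1, C2: camera-frame coordinates of the two circle centers.
    N: normal of the calibration plane in the camera frame.
    Returns M and the world origin expressed in the camera frame.
    """
    C1, C2, N = map(np.asarray, (C1, C2, N))
    i_w = (C2 - C1) / np.linalg.norm(C2 - C1)   # world x-axis, eq. (6)
    k_w = N / np.linalg.norm(N)                 # world z-axis, eq. (7)
    j_w = np.cross(k_w, i_w)                    # world y-axis, eq. (8)
    j_w /= np.linalg.norm(j_w)                  # guard against noisy inputs
    M = np.column_stack([i_w, j_w, k_w])        # eq. (9)
    origin = (C1 + C2) / 2                      # midpoint of the two centers
    return M, origin
```

With noise-free inputs the columns of M are orthonormal, as expected of a rotation-like extrinsic matrix.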
the camera calibration method disclosed by the embodiment of the disclosure can effectively calculate the external parameters of the camera to be calibrated according to the unit vectors of all axes under the world coordinate system.
As an application scenario, take the camera to be calibrated as a camera mounted on top of a television, used for analyzing the television picture content. Assuming that the number of marking patterns is two and the marking patterns are drawn on calibration cards, the two calibration cards can be attached to the television screen so that the calibration cards are coplanar with the television screen, and a world coordinate system is established with the two calibration cards as reference.
The position of the camera to be calibrated is adjusted so that the two calibration cards appear completely and symmetrically in the captured picture, and the camera is then locked so that its position no longer changes. A camera coordinate system is established with the optical center and optical axis of the camera as reference.
The camera then captures a target image, from which the calibration card information is extracted to obtain the rotation matrices of the two calibration cards and the position coordinates of their center points in the camera coordinate system. A group of reference points in the world coordinate system is taken (for example, the positions of the unit vectors on the coordinate axes of the world coordinate system); the world coordinates of these reference points serve as source values, the camera coordinates of the reference points calculated from the target image serve as target values, and the transformation matrix from the source values to the target values is solved. This transformation matrix constitutes the external parameters of the camera.
Optionally, the two calibration cards are identical in style and are both square cards.
Optionally, the calibration card has suitable hardness, thickness and flatness. When the bottom of the calibration card contacts the lower frame of the television, the card naturally enters a stress-balanced state under its own gravity: its lower edge rests against the lower frame of the television, its side edge against the side frame, and its back against the television screen, so that the card does not fall off when no external force is applied. One calibration card is placed at each of the lower-left and lower-right corners of the television screen.
Optionally, the calibration card has an adsorption design, and when the position of the calibration card is properly adjusted, the front surface of the calibration card is lightly pressed by a hand or a tool, so that the calibration card and the television screen form a vacuum adsorption effect.
Alternatively, the calibration card uses a white background and is printed with a black perfect circle (denoted as the mark pattern in the present disclosure), and the diameter of the perfect circle is the same as or close to the side length of the calibration card. Alternatively, the calibration card may adopt a pure black background printed with a white perfect circle, and the diameter of the perfect circle is the same as or close to the side length of the calibration card.
Optionally, the calibration card is designed symmetrically, the front and back patterns are completely the same, and the calibration card can be rotated by 90 °, 180 ° and 270 ° for use, with completely the same efficacy.
Optionally, the world coordinate system uses a midpoint of a connection line between centers of circles of the two calibration cards as an origin of the coordinate system, uses a direction of the connection line between the centers of circles as an x-axis direction (a right-hand direction is positive when facing a television), uses a direction perpendicular to the x-axis on a plane of the calibration card as a y-axis direction (upward is positive), and uses a z-axis perpendicular to the plane of the calibration card to point to a television viewer.
Alternatively, the camera coordinate system uses an optical center (optical center for short) of the camera as a coordinate system origin O, an optical axis of the camera as a z-axis direction (positive direction away from the television), a plane formed by an x-axis and a y-axis is perpendicular to the z-axis, a horizontal direction is an x-axis direction (positive direction toward right hand when facing the television), and the y-axis is perpendicular to the x-axis (positive direction toward upward).
Alternatively, the target image captured by the camera is called image I0, where I0 includes the two calibration cards. Due to the perspective principle, the circle on each calibration card appears in image I0 as an ellipse; the left one is called E1 and the right one E2.
Optionally, before using the image I0 to calculate the E1 and E2 information, the image I0 may first be preprocessed, and the preprocessed image is used when calculating the ellipse information.
As an example, the preprocessing of image I0 includes a color space transformation that converts I0 from a three-channel color image into a single-channel grayscale image, called I1.
For example, when performing the color space transformation on image I0, assuming that the input and output are normalized to the [0, 1] interval, the transformation formula is: Y = CLIP(0.299·R + 0.587·G + 0.114·B), where the CLIP() function limits the output pixel value Y to the [0, 1] interval.
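The color space transformation can be sketched with NumPy as follows (a minimal example; it assumes the input is an RGB image already normalized to the [0, 1] interval):

```python
import numpy as np

def to_gray(rgb):
    """Y = CLIP(0.299*R + 0.587*G + 0.114*B) on a normalized RGB image.

    rgb: (H, W, 3) float array with values in [0, 1].
    Returns the (H, W) single-channel grayscale image I1.
    """
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(y, 0.0, 1.0)    # the CLIP() step
```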
As an example, the preprocessing of image I0 includes a noise-reduction smoothing process, whose input is the grayscale image I1 and whose output is a grayscale image I2.
For example, the grayscale image I1 may be denoised and smoothed using filtering algorithms such as Gaussian filtering or median filtering.
As an example, the preprocessing of image I0 includes an edge detection and screening step, whose input is the grayscale image I2 and whose output is an edge map I3; the pixel values of I3 represent the edge intensity at each pixel position. Assuming that the screened image I3 contains n qualified edge curves, define the set Ω = { λi, i ∈ [1, n] }, where λi represents the i-th edge curve, whose length is denoted li.
For example, the edge detection and screening of image I2 may proceed as follows: an initial edge map is obtained with the Canny edge detection operator; the edges in the initial edge map are screened with a connected-domain marking algorithm; over-long continuous edges are split into several shorter continuous edges; and over-short edges are deleted, so that the output image I3 contains only continuous edges of acceptable length.
As an example, the preprocessing of image I0 includes a binarization operation and an erosion operation, whose input is the edge map I3 and whose output is the preprocessing result image I4.
Optionally, possible ellipse points (denoted candidate edge points in this disclosure) may also be extracted from the preprocessed image I4; the extraction process determines how many distinct ellipses exist in the image, retaining at most Max1 points per ellipse and at most Max2 points in total. For example, a boundary clustering algorithm may be employed to extract the edge curves belonging to the same ellipse from the preprocessed image I4, and ellipse points are then extracted from each edge curve. Here, Max1 may take the value 100 and Max2 may take the value 2000.
Optionally, the extracted ellipse points may also be subjected to pairing analysis; the points that conform to the calibration ellipse features (denoted target edge points in the present disclosure) are screened out according to preset constraint conditions (such as the distance between adjacent candidate edge points), and the other unrelated ellipse points are deleted.
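The distance constraint on adjacent candidate edge points can be sketched as a greedy spacing filter (a hypothetical illustration; the actual pairing analysis in the disclosure may use different constraint conditions):

```python
import math

def screen_edge_points(points, min_dist):
    """Greedily keep candidate edge points that lie at least min_dist
    away from every point already kept, dropping the rest.

    points: list of (x, y) tuples; returns the retained target edge points.
    """
    kept = []
    for x, y in points:
        if all(math.hypot(x - kx, y - ky) >= min_dist for kx, ky in kept):
            kept.append((x, y))
    return kept
```

This enforces a minimum spacing between the retained points, which keeps the later least-squares fit from being dominated by dense point clusters.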
After the ellipse points corresponding to E1 and E2 are found, the ellipse equation coefficients of E1 and E2 are solved twice, one ellipse at a time. Specifically, the coordinate values (x, y) of all ellipse points of one ellipse in the image coordinate system are substituted into the matrix equation GX = b, where the subscript m represents the number of qualified ellipse points (i.e., the number of target edge points), and:
G = [ x0·y0          y0^2          x0        y0        1
      x1·y1          y1^2          x1        y1        1
      ...
      x(m-1)·y(m-1)  y(m-1)^2      x(m-1)    y(m-1)    1 ],

X = ( B, C, D, E, F )^T,  b = ( -x0^2, -x1^2, ..., -x(m-1)^2 )^T,

with the ellipse equation written as x^2 + B·x·y + C·y^2 + D·x + E·y + F = 0 (the coefficient of x^2 normalized to 1).
Then, the coefficient X of the ellipse equation is estimated by the least squares method, giving: X = (G^T·G)^(-1)·G^T·b.
After the coefficients X of the calibration ellipses E1 and E2 are extracted from the image, a corresponding coefficient matrix is constructed for each ellipse using X, giving two matrices Q1 and Q2; each matrix (Q1 and Q2) is then decomposed into the product of three matrices: Q = V·Λ·V^T, wherein
Λ = diag(λ1, λ2, λ3),  V = ( v1  v2  v3 ),

where λ1 ≥ λ2 > 0 > λ3 are the eigenvalues of Q (ordered) and v1, v2, v3 are the corresponding unit eigenvectors.
After the matrices Q1 and Q2 are computed, the coordinates C and normal vectors N of the center points of the calibration ellipses E1 and E2 in the camera coordinate system are calculated from the decomposition results as follows:
C = s3·σ·V·( s1·λ3·w1,  0,  s2·λ1·w3 )^T

N = V·( s1·w1,  0,  s2·w3 )^T

wherein,

σ = r / sqrt(-λ1·λ3),  w = ( sqrt((λ1-λ2)/(λ1-λ3)),  0,  sqrt((λ2-λ3)/(λ1-λ3)) )^T
s1 ~ s3 represent signs (±1), the correct signs being retained according to the actual situation, and r represents the radius of the perfect circle on the calibration card, a known constant.
With the center-point coordinates C1 and C2 of the calibration ellipses obtained, the coordinates of the origin of the world coordinate system in the camera coordinate system are (C1 + C2)/2, and the three coordinate-axis unit vectors of the world coordinate system can also be obtained.
wherein i_w = (C2 - C1) / |C2 - C1| is the projection coordinate of the world x-axis unit vector in the camera coordinate system;
j_w = k_w × i_w is the projection coordinate of the world y-axis unit vector in the camera coordinate system;
k_w = N is the projection coordinate of the world z-axis unit vector in the camera coordinate system.
The external parameter matrix is then M = ( i_w  j_w  k_w ).
In summary, in the present disclosure, the calibration of the external parameters of the camera can be completed with only a single image. Specifically, two calibration cards of the same specification are used, each printed with a calibration circle; during calibration, a camera captures a target image; the target image is preprocessed to obtain an edge image; the edge image is analyzed algorithmically to screen out the ellipse points that conform to the calibration circle features (denoted target edge points in the present disclosure); the coefficient matrix of the ellipse equation is solved with the screened ellipse points and eigendecomposed, and the center coordinates of the calibration circles (in the camera coordinate system) and their normal vectors are calculated;
and solving the external parameter matrix of the camera by using the center coordinates and normal vectors of the calibration circle.
Optionally, the calibration card is square, and the front and back sides of the calibration card are respectively provided with a same perfect circle; the calibration card has structural rigidity and can keep self-stability when three supporting points exist; the calibration card is a plane and can form a vacuum adsorption effect with another plane.
Optionally, during calibration, the two calibration cards are located on the same plane, and each calibration circle needs to have no less than 1/4 of the circular arc visible in the target image.
Optionally, the process of preprocessing the target image is: color space transformation, using formula Y = CLIP (0.299R + 0.587G + 0.114B), to change the three-channel color image into a single-channel gray image; carrying out noise reduction and filtering on the gray level image; performing edge detection, edge processing and edge screening on the noise-reduced and filtered image; and carrying out binarization and corrosion operation on the screened edges.
Optionally, analyzing the screened edge image, keeping the edge which may be a circular arc, and deleting the edge which is not a circular arc; analyzing the arcs by using a boundary clustering algorithm to determine that a plurality of independent ellipses exist in the image, wherein each independent ellipse comprises which arcs, and each arc selects a coordinate point as a representative; analyzing the screened ellipses, reserving ellipses relevant to the calibration circles, and deleting other interference ellipses, for example, selecting target clusters matched with the number of the calibration circles from a plurality of clusters according to the radius of the calibration circles and the distance between the calibration circles; and optimizing the selected ellipses in the target cluster related to the calibration circle, wherein each ellipse only reserves a certain number of high-quality elliptic points (namely target edge points), for example, the elliptic points with smaller distance can be deleted.
Optionally, calculating a coefficient matrix of an elliptic equation by using the high-quality elliptic points obtained by the boundary clustering algorithm; performing characteristic decomposition on the coefficient matrix of the elliptic equation; calculating the center of the calibration circle and a normal vector by using the characteristic value and the characteristic vector obtained by decomposition; and calculating an external parameter matrix of the camera by using the center of the calibration circle and the normal vector.
Alternatively, the number of edge curves in the target image may be not less than 100 and not more than 10000.
Optionally, the number of the continuous edge curves which are screened out from the edge curves and meet the circular arc characteristics can be not less than 20, and not more than 2000.
Optionally, the continuous edge curves conforming to the circular arc features are classified or clustered, and the number of the obtained clusters can be not less than 2 and not more than 20.
Optionally, the clusters are screened according to prior knowledge, and 2 target clusters conforming to the calibration circle features are output, where each target cluster outputs not less than 10 and not more than 100 ellipse points (denoted as target edge points in the present disclosure).
Alternatively, the external reference matrix of the camera may be calculated from the information of the elliptical points (i.e., target edge points) in the 2 target clusters and the a priori knowledge about the calibration circle.
In summary, the camera calibration method provided by the present disclosure has the following advantages:
firstly, calibrating external parameters of the camera can be completed only by a single image;
secondly, the calibration of external parameters of the camera can be completed only by using two calibration cards;
thirdly, the consumer can be allowed to self-make a calibration card to finish the calibration of external parameters of the camera;
fourthly, the calibration of the external parameters of the camera can be completed without manually measuring the position of the camera.
The camera calibration method provided by the disclosure can simplify the operation steps of a consumer and reduce the risks of physical damage and surface fouling on a television.
Corresponding to the camera calibration method provided in the embodiments of fig. 1 to 4, the present disclosure also provides a camera calibration device, and since the camera calibration device provided in the embodiments of the present disclosure corresponds to the camera calibration method provided in the embodiments of fig. 1 to 4, the implementation of the camera calibration method is also applicable to the camera calibration device provided in the embodiments of the present disclosure, and will not be described in detail in the embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of a camera calibration apparatus according to a fifth embodiment of the present disclosure.
As shown in fig. 5, the camera calibration apparatus 500 may include: an acquisition module 501, an extraction module 502, a first determination module 503, and a second determination module 504.
The acquiring module 501 is configured to acquire a target image acquired by a camera to be calibrated; wherein, a plurality of mark patterns are displayed in the target image.
An extracting module 502 is configured to extract edge curves belonging to the same mark pattern from the target image.
The first determining module 503 is configured to determine target coordinates and normal vectors of center points of the plurality of marker patterns in the camera coordinate system according to image positions of edge curves of the plurality of marker patterns in the target image.
The second determining module 504 is configured to determine an external parameter of the camera to be calibrated according to the target coordinates and the normal vector of the central point of the plurality of marker patterns.
In a possible implementation manner of the embodiment of the present disclosure, the extracting module 502 is configured to: performing edge detection on a target image to obtain an edge curve set, wherein the edge curve set comprises at least one edge curve; determining the length of any edge curve in the edge curve set; under the condition that the length is larger than a first length threshold value, segmenting any edge curve to obtain a plurality of sub-edge curves, deleting any edge curve from an edge curve set, and adding the plurality of sub-edge curves to the edge curve set; clustering all curves in the updated edge curve set to obtain a plurality of clusters; a target cluster to which each of the mark patterns belongs is determined from the plurality of clusters according to the sizes of the plurality of mark patterns and the distances between the plurality of mark patterns.
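The length-threshold logic of the extraction module can be sketched as follows (a minimal illustration; representing curves as point lists, and splitting into fixed-size chunks, are assumptions — the disclosure does not specify the splitting strategy):

```python
def screen_curves(curves, min_len, max_len):
    """Split curves longer than max_len (the first length threshold) into
    sub-curves of at most max_len points, and drop curves shorter than
    min_len (the second length threshold).

    curves: list of curves, each a list of (x, y) edge points.
    """
    result = []
    for curve in curves:
        if len(curve) > max_len:
            # over the first threshold: replace the curve with sub-curves
            result.extend(curve[i:i + max_len]
                          for i in range(0, len(curve), max_len))
        else:
            result.append(curve)
    # curves (and sub-curves) below the second threshold are deleted
    return [c for c in result if len(c) >= min_len]
```

The retained curves would then be handed to the clustering step that groups them into candidate mark patterns.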
In a possible implementation manner of the embodiment of the present disclosure, the camera calibration apparatus 500 may further include:
a deleting module, configured to delete any edge curve from the edge curve set when the length is smaller than a second length threshold; wherein the second length threshold is less than the first length threshold.
In a possible implementation manner of the embodiment of the present disclosure, the first determining module 503 is configured to: for any mark pattern of the plurality of mark patterns, extract a plurality of target edge points from the target cluster to which the mark pattern belongs; and determine the target coordinates and normal vector of the center point of the mark pattern in the camera coordinate system according to the image positions of the target edge points in the target image.
In a possible implementation manner of the embodiment of the present disclosure, the first determining module 503 is configured to: aiming at any curve in a target cluster to which any mark pattern belongs, at least one candidate edge point is extracted from any curve; determining the distance between candidate edge points according to the image positions of the candidate edge points on the curves in the target cluster in the target image; and determining the target edge points from the candidate edge points according to the distance between the candidate edge points.
In a possible implementation manner of the embodiment of the present disclosure, the first determining module 503 is configured to: determining a target coefficient matrix according to the image positions of the plurality of target edge points in the target image, wherein the target coefficient matrix is used for indicating the shape of any mark pattern; decomposing the target coefficient matrix to obtain a diagonal matrix and a middle matrix; generating a middle vector according to the value of each diagonal element in the diagonal matrix; generating an intermediate coefficient according to the value of each diagonal element in the diagonal matrix and the size of any mark pattern; determining the target coordinate of the central point of any mark pattern in the camera coordinate system according to the intermediate coefficient, the intermediate vector and the intermediate matrix; and determining a normal vector of the central point of any mark pattern in a camera coordinate system according to the intermediate vector and the intermediate matrix.
In a possible implementation manner of the embodiment of the present disclosure, the second determining module 504 is configured to: determining a first unit vector of a horizontal axis of a world coordinate system according to the difference between the target coordinates of the central point of each marking pattern; determining a second unit vector of a vertical axis of the world coordinate system according to the normal vector; determining a third unit vector of a longitudinal axis of the world coordinate system according to the first unit vector and the second unit vector; and determining external parameters of the camera to be calibrated according to the first unit vector, the second unit vector and the third unit vector.
In a possible implementation manner of the embodiment of the present disclosure, the camera calibration apparatus 500 may further include:
the processing module is used for preprocessing the target image; wherein the preprocessing comprises at least one of color space transformation processing, noise reduction smoothing processing, binarization processing and corrosion processing.
The camera calibration device of the embodiment of the disclosure acquires a target image acquired by a camera to be calibrated; wherein a plurality of marker patterns are displayed in the target image; extracting edge curves belonging to the same mark pattern from the target image; determining target coordinates and normal vectors of center points of the plurality of mark patterns in a camera coordinate system according to image positions of edge curves of the plurality of mark patterns in a target image; and determining external parameters of the camera to be calibrated according to the target coordinates and normal vectors of the central points of the plurality of marked patterns. Therefore, external reference calibration of the camera to be calibrated can be completed according to an image shot by the camera to be calibrated, user operation can be simplified, and user experience is improved.
In order to implement the above embodiment, the present disclosure further provides an electronic device, including: the camera calibration method includes a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor executes the program to implement the camera calibration method as set forth in any one of the foregoing embodiments of the disclosure.
In order to implement the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the camera calibration method as proposed in any of the foregoing embodiments of the present disclosure.
In order to implement the foregoing embodiments, the present disclosure further provides a computer program product, wherein when instructions in the computer program product are executed by a processor, the camera calibration method as set forth in any one of the foregoing embodiments of the present disclosure is performed.
FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 6 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown in FIG. 6, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes programs stored in the system memory 28, thereby performing various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic descriptions using these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, without contradiction, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Moreover, the scope of the preferred embodiments of the present disclosure includes alternate implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art of the embodiments of the present disclosure.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (18)

1. A camera calibration method, comprising:
acquiring a target image captured by a camera to be calibrated, wherein a plurality of marker patterns are displayed in the target image;
extracting, from the target image, edge curves belonging to the same marker pattern;
determining target coordinates and normal vectors of center points of the plurality of marker patterns in a camera coordinate system according to image positions of the edge curves of the plurality of marker patterns in the target image; and
determining extrinsic parameters of the camera to be calibrated according to the target coordinates and the normal vectors of the center points of the plurality of marker patterns.
2. The method according to claim 1, wherein extracting the edge curves belonging to the same marker pattern from the target image comprises:
performing edge detection on the target image to obtain an edge curve set, wherein the edge curve set comprises at least one edge curve;
for any edge curve in the edge curve set, determining a length of the edge curve;
in a case that the length is greater than a first length threshold, segmenting the edge curve to obtain a plurality of sub-edge curves, deleting the edge curve from the edge curve set, and adding the plurality of sub-edge curves to the edge curve set;
clustering the curves in the updated edge curve set to obtain a plurality of clusters; and
determining, from the plurality of clusters, a target cluster to which each marker pattern belongs according to sizes of the plurality of marker patterns and distances among the plurality of marker patterns.
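The curve-set maintenance described in claims 2 and 3 — split over-long edge curves, discard under-short ones, then cluster what remains — can be sketched as follows. This is a minimal illustration, assuming curves are NumPy `(N, 2)` point arrays, using a single midpoint split and a simple centroid-distance grouping; the helper names and thresholds are illustrative, not from the patent.

```python
import numpy as np

def curve_length(curve):
    """Arc length of a polyline given as an (N, 2) array of pixel coordinates."""
    return float(np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1)))

def refine_curve_set(curves, first_threshold, second_threshold):
    """Split curves longer than first_threshold into two sub-edge curves and
    drop curves shorter than second_threshold (second < first)."""
    refined = []
    for curve in curves:
        length = curve_length(curve)
        if length > first_threshold:
            mid = len(curve) // 2
            refined.append(curve[:mid + 1])   # sub-edge curves replace the original
            refined.append(curve[mid:])
        elif length >= second_threshold:
            refined.append(curve)
        # curves shorter than second_threshold are deleted from the set
    return refined

def cluster_by_centroid(curves, radius):
    """Greedy grouping: a curve joins the first cluster whose seed centroid
    lies within `radius` of the curve's own centroid."""
    clusters = []
    for curve in curves:
        c = curve.mean(axis=0)
        for cluster in clusters:
            if np.linalg.norm(c - cluster[0].mean(axis=0)) < radius:
                cluster.append(curve)
                break
        else:
            clusters.append([curve])
    return clusters
```

A production version would iterate the split until all curves fall below the first threshold and would pick the target cluster per marker using the known pattern sizes and spacings, as the claim states.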
3. The method according to claim 2, wherein after determining the length of any edge curve, the method further comprises:
in a case that the length is less than a second length threshold, deleting the edge curve from the edge curve set;
wherein the second length threshold is less than the first length threshold.
4. The method according to claim 2, wherein determining the target coordinates and the normal vectors of the center points of the plurality of marker patterns in the camera coordinate system according to the image positions of the edge curves of the plurality of marker patterns in the target image comprises:
for any marker pattern of the plurality of marker patterns, extracting a plurality of target edge points from the target cluster to which the marker pattern belongs; and
determining a target coordinate and a normal vector of the center point of the marker pattern in the camera coordinate system according to image positions of the plurality of target edge points in the target image.
5. The method according to claim 4, wherein extracting the plurality of target edge points from the target cluster to which the marker pattern belongs comprises:
extracting at least one candidate edge point from each curve in the target cluster to which the marker pattern belongs;
determining distances between the candidate edge points according to image positions, in the target image, of the candidate edge points on the curves in the target cluster; and
determining the target edge points from the candidate edge points according to the distances between the candidate edge points.
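One plausible reading of "determining a target edge point from the candidate edge points according to the distance between the candidate edge points" is a spacing filter that keeps edge points only when they are sufficiently far apart, so the later conic fit is not dominated by one stretch of the contour. A minimal greedy sketch, with the spacing rule as an assumption:

```python
import numpy as np

def select_target_edge_points(candidates, min_spacing):
    """Keep a candidate edge point only if it lies at least `min_spacing`
    pixels from every point already selected (greedy spacing filter)."""
    selected = []
    for p in candidates:
        if all(np.linalg.norm(p - q) >= min_spacing for q in selected):
            selected.append(p)
    return np.array(selected)
```

For example, ten collinear candidates one pixel apart with `min_spacing=3.0` reduce to the points at offsets 0, 3, 6, and 9.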
6. The method according to claim 4, wherein determining the target coordinate and the normal vector of the center point of the marker pattern in the camera coordinate system according to the image positions of the plurality of target edge points in the target image comprises:
determining a target coefficient matrix according to the image positions of the plurality of target edge points in the target image, wherein the target coefficient matrix is used for indicating a shape of the marker pattern;
decomposing the target coefficient matrix to obtain a diagonal matrix and an intermediate matrix;
generating an intermediate vector according to values of diagonal elements in the diagonal matrix;
generating an intermediate coefficient according to the values of the diagonal elements in the diagonal matrix and a size of the marker pattern;
determining the target coordinate of the center point of the marker pattern in the camera coordinate system according to the intermediate coefficient, the intermediate vector, and the intermediate matrix; and
determining the normal vector of the center point of the marker pattern in the camera coordinate system according to the intermediate vector and the intermediate matrix.
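For a circular marker imaged by a camera, the edge points lie on a conic, so the "target coefficient matrix" of claim 6 can be read as the symmetric 3×3 conic matrix fitted to the target edge points, and "decomposing the target coefficient matrix to obtain a diagonal matrix and an intermediate matrix" as its eigendecomposition. The sketch below covers only these two steps; recovering the center coordinate and normal from the eigenvalues and the marker size additionally needs the camera intrinsics and is omitted. Function names are illustrative, not from the patent.

```python
import numpy as np

def fit_conic_matrix(points):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2D edge points; returns the symmetric 3x3 coefficient matrix Q such
    that [x, y, 1] Q [x, y, 1]^T = 0 for points on the conic."""
    x, y = points[:, 0], points[:, 1]
    M = np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)
    _, _, vt = np.linalg.svd(M)
    a, b, c, d, e, f = vt[-1]          # null-space vector = conic coefficients
    return np.array([[a,     b / 2, d / 2],
                     [b / 2, c,     e / 2],
                     [d / 2, e / 2, f    ]])

def decompose_conic(Q):
    """Eigendecomposition Q = V diag(w) V^T of the symmetric conic matrix.
    diag(w) plays the role of the diagonal matrix and V the intermediate
    matrix in the claim language."""
    w, V = np.linalg.eigh(Q)
    return w, V
```

As a sanity check, fitting points sampled from the circle of radius 1 centered at (2, 3) yields a matrix Q that vanishes on any point of that circle and is exactly reconstructed from its eigenpairs.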
7. The method according to any one of claims 1 to 5, wherein determining the extrinsic parameters of the camera to be calibrated according to the target coordinates and the normal vectors of the center points of the plurality of marker patterns comprises:
determining a first unit vector of a horizontal axis of a world coordinate system according to differences between the target coordinates of the center points of the marker patterns;
determining a second unit vector of a vertical axis of the world coordinate system according to the normal vectors;
determining a third unit vector of a longitudinal axis of the world coordinate system according to the first unit vector and the second unit vector; and
determining the extrinsic parameters of the camera to be calibrated according to the first unit vector, the second unit vector, and the third unit vector.
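The three unit vectors of claim 7 can be assembled into the rotation part of the extrinsics: the first axis from the difference of two marker centers, the second from the common plane normal, and the third from their cross product, with one re-orthogonalization. A minimal sketch, assuming a particular axis-naming convention (the patent's horizontal/vertical/longitudinal mapping may differ):

```python
import numpy as np

def world_axes_from_markers(center_a, center_b, plane_normal):
    """Build an orthonormal world frame from two marker center points and the
    shared plane normal, all expressed in camera coordinates. Returns the
    rotation matrix whose columns are the world axes in the camera frame."""
    e_x = center_b - center_a                           # first unit vector: horizontal axis
    e_x = e_x / np.linalg.norm(e_x)
    e_z = plane_normal / np.linalg.norm(plane_normal)   # second unit vector: vertical axis
    e_y = np.cross(e_z, e_x)                            # third unit vector: longitudinal axis
    e_y = e_y / np.linalg.norm(e_y)
    e_x = np.cross(e_y, e_z)                            # re-orthogonalize the first axis
    return np.stack([e_x, e_y, e_z], axis=1)
```

The returned matrix is orthonormal with determinant +1, i.e. a proper rotation; combined with a translation (e.g., one marker center), it gives the camera extrinsics.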
8. The method according to any one of claims 1 to 5, wherein after acquiring the target image captured by the camera to be calibrated, the method further comprises:
preprocessing the target image;
wherein the preprocessing comprises at least one of color space transformation, noise-reduction smoothing, binarization, and erosion.
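Three of the preprocessing steps in claim 8 — color space transformation, binarization, and erosion — can be sketched in plain NumPy (noise-reduction smoothing is omitted). A real pipeline would typically use OpenCV (`cvtColor`, `threshold`, `erode`); the luma weights are the standard BT.601 coefficients and the threshold value is illustrative.

```python
import numpy as np

def preprocess(image_rgb, threshold=128):
    """Grayscale conversion, fixed-threshold binarization, and 3x3 erosion."""
    # color space transformation: RGB -> luma (ITU-R BT.601 weights)
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2])
    # binarization: foreground = 1, background = 0
    binary = (gray >= threshold).astype(np.uint8)
    # erosion with a 3x3 square structuring element: a pixel survives
    # only if its entire 3x3 neighborhood is foreground
    h, w = binary.shape
    padded = np.pad(binary, 1, mode="constant")
    eroded = np.ones_like(binary)
    for dy in range(3):
        for dx in range(3):
            eroded &= padded[dy:dy + h, dx:dx + w]
    return eroded
```

For instance, a 3×3 white square on a 5×5 black background erodes to a single center pixel.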
9. A camera calibration apparatus, comprising:
an acquisition module, configured to acquire a target image captured by a camera to be calibrated, wherein a plurality of marker patterns are displayed in the target image;
an extraction module, configured to extract, from the target image, edge curves belonging to the same marker pattern;
a first determining module, configured to determine target coordinates and normal vectors of center points of the plurality of marker patterns in a camera coordinate system according to image positions of the edge curves of the plurality of marker patterns in the target image; and
a second determining module, configured to determine extrinsic parameters of the camera to be calibrated according to the target coordinates and the normal vectors of the center points of the plurality of marker patterns.
10. The apparatus according to claim 9, wherein the extraction module is configured to:
perform edge detection on the target image to obtain an edge curve set, wherein the edge curve set comprises at least one edge curve;
for any edge curve in the edge curve set, determine a length of the edge curve;
in a case that the length is greater than a first length threshold, segment the edge curve to obtain a plurality of sub-edge curves, delete the edge curve from the edge curve set, and add the plurality of sub-edge curves to the edge curve set;
cluster the curves in the updated edge curve set to obtain a plurality of clusters; and
determine, from the plurality of clusters, a target cluster to which each marker pattern belongs according to sizes of the plurality of marker patterns and distances among the plurality of marker patterns.
11. The apparatus according to claim 10, further comprising:
a deleting module, configured to delete the edge curve from the edge curve set in a case that the length is less than a second length threshold;
wherein the second length threshold is less than the first length threshold.
12. The apparatus according to claim 10, wherein the first determining module is configured to:
for any marker pattern of the plurality of marker patterns, extract a plurality of target edge points from the target cluster to which the marker pattern belongs; and
determine a target coordinate and a normal vector of the center point of the marker pattern in the camera coordinate system according to image positions of the plurality of target edge points in the target image.
13. The apparatus according to claim 12, wherein the first determining module is configured to:
extract at least one candidate edge point from each curve in the target cluster to which the marker pattern belongs;
determine distances between the candidate edge points according to image positions, in the target image, of the candidate edge points on the curves in the target cluster; and
determine the target edge points from the candidate edge points according to the distances between the candidate edge points.
14. The apparatus according to claim 12, wherein the first determining module is configured to:
determine a target coefficient matrix according to the image positions of the plurality of target edge points in the target image, wherein the target coefficient matrix is used for indicating a shape of the marker pattern;
decompose the target coefficient matrix to obtain a diagonal matrix and an intermediate matrix;
generate an intermediate vector according to values of diagonal elements in the diagonal matrix;
generate an intermediate coefficient according to the values of the diagonal elements in the diagonal matrix and a size of the marker pattern;
determine the target coordinate of the center point of the marker pattern in the camera coordinate system according to the intermediate coefficient, the intermediate vector, and the intermediate matrix; and
determine the normal vector of the center point of the marker pattern in the camera coordinate system according to the intermediate vector and the intermediate matrix.
15. The apparatus according to any one of claims 9 to 13, wherein the second determining module is configured to:
determine a first unit vector of a horizontal axis of a world coordinate system according to differences between the target coordinates of the center points of the marker patterns;
determine a second unit vector of a vertical axis of the world coordinate system according to the normal vectors;
determine a third unit vector of a longitudinal axis of the world coordinate system according to the first unit vector and the second unit vector; and
determine the extrinsic parameters of the camera to be calibrated according to the first unit vector, the second unit vector, and the third unit vector.
16. The apparatus according to any one of claims 9 to 13, further comprising:
a processing module, configured to preprocess the target image;
wherein the preprocessing comprises at least one of color space transformation, noise-reduction smoothing, binarization, and erosion.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A computer-readable storage medium having computer instructions stored thereon for causing a computer to perform the method of any one of claims 1-8.
CN202211063168.8A 2022-08-31 2022-08-31 Camera calibration method and device, electronic equipment and medium Pending CN115830134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211063168.8A CN115830134A (en) 2022-08-31 2022-08-31 Camera calibration method and device, electronic equipment and medium


Publications (1)

Publication Number Publication Date
CN115830134A (en) 2023-03-21

Family

ID=85523335




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination