CN113822948A - Method for calibrating digital camera by using texture of object elevation - Google Patents


Info

Publication number
CN113822948A
CN113822948A (application CN202111365835.3A)
Authority
CN
China
Prior art keywords
image
camera
straight line
adjustment model
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111365835.3A
Other languages
Chinese (zh)
Inventor
谢文寒
薛玉彩
Current Assignee
Chinese Academy of Surveying and Mapping
Original Assignee
Chinese Academy of Surveying and Mapping
Priority date
Filing date
Publication date
Application filed by Chinese Academy of Surveying and Mapping filed Critical Chinese Academy of Surveying and Mapping
Priority to CN202111365835.3A
Publication of CN113822948A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/13 — Edge detection

Abstract

The invention provides a method for calibrating a digital camera using the facade texture of a target, comprising the following steps: S1, acquiring surface images of a target having three-dimensional line features; S2, extracting X-, Y-, and Z-axis texture features and grouping them into orthogonal straight lines; S3, taking the straight-line information in the image as observations, establishing an adjustment model for the vanishing points, and calibrating the single image according to the adjustment model; S4, based on the single-image calibration parameters obtained in S3, establishing an adjustment model directly relating the straight-line information of the multiple images to the calibration parameters; and S5, solving the adjustment model to obtain the camera calibration parameters. The invention calibrates ordinary digital cameras and requires no control field or control-point information during calibration; all interior orientation elements of the camera can be calibrated simply by photographing a man-made building.

Description

Method for calibrating digital camera by using texture of object elevation
Technical Field
The invention relates to digital camera calibration methods, and in particular to a method for calibrating a digital camera using object facade texture.
Background
A three-dimensional computer vision system computes geometric information, such as the position and shape of objects in a three-dimensional environment, from the image information acquired by a camera, and thereby recognizes objects in the environment. The brightness of each point in the image reflects the intensity of light reflected from a point on the surface of a space object, and the position of that image point is related to the geometric position of the corresponding surface point. This positional relationship is determined by the camera's imaging geometric model, whose parameters are the camera's orientation elements. These parameters must be determined by experiment and computation, and that process is called camera calibration.
Camera calibration plays an important role in the following applications:
First, deriving three-dimensional geometric information from computer image coordinates: given the image coordinates of a target point, the camera model and parameters determined by calibration define the ray in space on which the target point must lie (the collinearity condition); with two such images, the spatial position of the target point can be determined by intersecting the two rays. This information is applied in important fields such as stereo measurement, three-dimensional reconstruction, automatic assembly of electromechanical components, visual inspection of automobile bodies, and robot vision.
Second, deriving two-dimensional computer image coordinates from three-dimensional information: in model-driven inspection and assembly applications of machine vision, a hypothesis about the three-dimensional position and orientation of an object can be translated, using the camera model and parameters, into a hypothesis about its resulting image; comparing the hypothesized image with the actually captured image then confirms or rejects the hypothesis about the object and its spatial position.
In camera calibration, the model parameters are obtained by establishing a correspondence model between known object points and their image points and solving for the internal and external geometric and optical parameters of the imaging system. Once this correspondence is established, the three-dimensional world coordinates of object points can be deduced from two-dimensional image-point coordinates, or conversely, two-dimensional information can be deduced from known three-dimensional information. Camera calibration is therefore a prerequisite and fundamental problem for photogrammetry and computer vision, and has attracted wide attention from scholars at home and abroad.
The traditional camera calibration method is carried out in a high-precision control field. Such a field generally consists of an indoor group of high-precision three-dimensional control points, measured with precision surveying instruments; the bases of the control-point rods are oil-sealed to prevent position changes caused by thermal expansion and contraction. At present, only a few institutions in China, such as Wuhan University and Shandong University of Science and Technology, have established high-precision indoor control fields specifically for camera calibration; the Chinese Academy of Surveying and Mapping has also built a high-precision outdoor control field dedicated to camera calibration. High-precision control fields have demanding accuracy requirements and high maintenance costs, and they are regionally limited, which makes camera calibration very inconvenient; in some places there is no control field at all, so calibration is impossible. Although many two-dimensional control fields for camera calibration appeared later, these also require high-precision measurement and are inconvenient for calibrating a camera anytime, anywhere.
Disclosure of Invention
Aiming at the problems in the background art, the invention provides a method for calibrating a digital camera using the facade texture of a target, comprising the following steps: S1, acquiring surface images of a target having three-dimensional line features; S2, extracting X-, Y-, and Z-axis texture features and grouping them into orthogonal straight lines; S3, taking the straight-line information in the image as observations, establishing an adjustment model for the vanishing points, and calibrating the single image according to the adjustment model; S4, based on the single-image calibration parameters obtained in S3, establishing an adjustment model directly relating the straight-line information of the multiple images to the calibration parameters; and S5, solving the adjustment model to obtain the camera calibration parameters.
Preferably, in step S1, the surface images are captured at different angles by rotating the camera about its optical axis.
Preferably, step S2 includes: performing edge detection on each image to obtain edge points; fitting straight-line segments to the edge points; and grouping the straight-line segments into orthogonal sets.
Preferably, in step S3, after the interior and exterior orientation elements are obtained from the three vanishing points in the single image, the target is three-dimensionally reconstructed according to the known geometric relationships among object-space points, lines, and planes, realizing calibration and modeling from a single image.
Preferably, step S3 further includes: calibrating each single image to obtain initial values of the calibration parameters and of the three-axis vanishing points.
Preferably, in step S4, the interior and exterior orientation elements are incorporated into the adjustment model, and the multi-azimuth images generated by rotational shooting of the target are calibrated.
Preferably, in step S4, an optical distortion model is used in a joint adjustment calibration of the three-axis orthogonal line features of the multiple images, yielding an adjustment model with distortion correction.
Preferably, the method of establishing the adjustment model in step S4 includes: 1) for one axis, obtaining the functional relation between the vanishing point and the exterior orientation angle elements according to projective geometry; 2) in the adjustment, linearizing this functional relation without treating the vanishing point as an unknown parameter; and 3) obtaining the linearized functional relations between the vanishing points of all three axes and the exterior orientation angle elements.
Preferably, the adjustment model in step S4 includes: the coordinate information of the start points of the three-axis orthogonal straight lines, and the five camera parameters to be calibrated: the focal length, the principal point x coordinate, the principal point y coordinate, and the optical distortion parameters K1 and K2.
Preferably, in step S5, equations are constructed from the three-axis orthogonal straight-line information of each image and the five camera parameters according to the adjustment model of step S4, and the camera calibration parameters are solved.
The invention can calibrate an ordinary digital camera without any control field or control-point information; all interior orientation elements of the camera can be calibrated simply by photographing a man-made building.
Drawings
In order that the invention may be more readily understood, it will be described in more detail with reference to specific embodiments thereof that are illustrated in the accompanying drawings. These drawings depict only typical embodiments of the invention and are not therefore to be considered to limit the scope of the invention.
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
Fig. 2 shows 4 original images captured by a rotating camera, wherein (A), (B), (C), and (D) represent different rotation angles.
Fig. 3 is a schematic diagram of the extraction and grouping of straight lines in the three axis directions, where (A) denotes the X-axis direction, (B) the Y-axis direction, and (C) the Z-axis direction.
FIG. 4 is a diagram of the geometric relationship between vanishing points and orientation elements.
FIG. 5 is a diagram of a cube model.
Fig. 6 shows the results of the first set of experiments.
Fig. 7 shows the results of the second set of experiments.
Fig. 8 shows the results of the third set of experiments.
Fig. 9 shows the results of the fourth set of experiments.
Fig. 10 shows the results of the fifth set of experiments.
Fig. 11 shows the results of the sixth set of experiments.
Detailed Description
Embodiments of the invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand and practice the invention. The illustrated embodiments are not intended to limit the invention, and the technical features of the embodiments below may be combined with one another where no conflict arises; like parts are denoted by like reference numerals.
As shown in fig. 1, the method of the invention comprises: S1, acquiring data, i.e. surface images of a target having three-dimensional line features (such as a building); S2, extracting and grouping the X-, Y-, and Z-axis feature data; S3, calibrating the geometric data model of a single image; S4, constructing the geometric calibration model relating the target line features to the camera calibration parameters; and S5, solving the adjustment model and outputting the camera calibration parameters.
The data acquisition of step S1 is described in detail below. As shown in fig. 2, four original images are taken with a rotating camera: (A) rotated 0 degrees, (B) 90 degrees, (C) 180 degrees, and (D) 270 degrees. The shooting target is a building with two orthogonal planes, each plane carrying two straight contour lines in the vertical direction. If the camera lies within the larger angle between the two planes, the configuration is called convex; if within the smaller angle, concave. Either configuration can be photographed as long as the above conditions are met. The imaging method is rotational imaging, that is, shooting while rotating the camera approximately about its optical axis. The rotation angle is not critical; in general a rotation of about 90 degrees between exposures is sufficient, as in the four images of fig. 2.
The line feature data extraction and grouping of step S2 will be described in detail below. The line feature extraction and grouping step comprises: (1) performing edge detection on each image to obtain edge points; (2) performing straight-line segment fitting on the edge points; (3) and carrying out orthogonal straight line grouping on the straight line segments.
(1) The edge detection process is as follows. In an image, an edge has two attributes, direction and magnitude. The grey-level variation along an edge is gentle, while the variation perpendicular to it is sharp, taking a step-like or ridge-like form. Real images, however, are often complex, and the grey-level variation does not necessarily follow these standard forms, so the choice of detection operator should depend on the type of image. With a building as the shooting target, the image edges have pronounced directionality, whereas many operators are non-directional. For this reason the invention adopts the Canny edge detection operator, which is widely used for its effectiveness in edge detection and its reliable localization. The optimal form of the Canny detector differs for each type of edge. In the two-dimensional case, the oriented Canny operator gives very good edge localization and good edge-strength estimation, and it produces both the gradient direction and the gradient strength of an edge, which is convenient for subsequent processing.
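To illustrate the directional edge information described above, the following sketch extracts edge points together with gradient directions. It is a simplified stand-in for the Canny operator (Sobel gradients and a magnitude threshold only, without Canny's non-maximum suppression and hysteresis); the function name, threshold, and test image are illustrative, not from the patent.

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Canny-style edge extraction, simplified: Sobel gradients give both
    magnitude and direction; points above a magnitude threshold are kept.
    (A full Canny adds non-maximum suppression and hysteresis linking.)"""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    # Correlate with the two Sobel kernels.
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    mag = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)     # gradient direction per pixel
    ys, xs = np.nonzero(mag > thresh)
    return list(zip(xs.tolist(), ys.tolist())), direction

# A vertical step edge, standing in for a building contour.
img = np.zeros((32, 32))
img[:, 16:] = 255
points, direction = sobel_edges(img)
```

The gradient direction returned here is what makes the detected edge points usable for the directional grouping of the later steps.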
(2) The multi-line-segment (polyline) fitting procedure is as follows. A polyline is a sequence of line segments in which the endpoint of one segment is the start point of the next; the connection points are called vertices. The input to the polyline algorithm is an ordered list of edge points {(x_i, y_i), i = 1, …, n}. The edge point coordinates may be computed to sub-pixel accuracy. Since the two endpoints of a fitted segment correspond to two edge points — the fit is performed between two edge points — only the coordinates of those two endpoint edge points need to be computed accurately. Fitting the edge with the straight-line segment joining the first edge point (x_1, y_1) and the last edge point (x_n, y_n) gives the line

Ax + By + C = 0

where A = y_1 − y_n, B = x_n − x_1, C = x_1·y_n − x_n·y_1. Note that the distance between the two endpoints is L = sqrt(A² + B²). For any edge point (x_i, y_i), the distance from that point to the fitted segment is

d_i = (A·x_i + B·y_i + C) / sqrt(A² + B²)

and the normalized maximum error is

ε = max_i |d_i| / L

The normalized maximum error measures how well the straight-line segment fits the edge. The invention uses interleaved line-splitting and line-merging: after a splitting pass, if a single new segment fits the edge with a very small normalized error, it replaces several adjacent segments; after segments are merged, the new segments may split at different points. The two passes alternate until no segment is merged or split.
(3) Orthogonal straight-line grouping is carried out with a statistical test. Let n_1, n_2, n_3 be the normal vectors of the interpretation planes formed by three parallel space lines and the projection centre. By the principles of perspective geometry, in the ideal case the three normals satisfy the coplanarity constraint

(n_1 × n_2) · n_3 = 0

In the presence of noise the constraint equation is not exactly zero and a closure error remains. The test value is computed for every triple of straight lines in the image, and a clustering method then groups the lines automatically, yielding the mutually orthogonal line groups in the image.
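A minimal sketch of the grouping test (assuming principal-point-centred image coordinates, the projection centre at the origin, and the image plane at z = f; these conventions and all sample coordinates are assumptions for illustration):

```python
import numpy as np

def line_normal(p1, p2, f):
    """Unit normal of the interpretation plane spanned by the projection
    centre and an image line segment with endpoints p1, p2."""
    r1 = np.array([p1[0], p1[1], f])
    r2 = np.array([p2[0], p2[1], f])
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def coplanarity_test(n1, n2, n3):
    """For three image lines that are parallel in space, the three
    interpretation-plane normals are coplanar: (n1 x n2) . n3 = 0.
    Noise makes the value nonzero; small values vote for one group."""
    return abs(np.dot(np.cross(n1, n2), n3))

# Three horizontal image lines (projections of parallel space lines)
# give a near-zero test value; a vertical line does not.
n1 = line_normal((-1.0, 0.5), (1.0, 0.5), 1.0)
n2 = line_normal((-1.0, 1.0), (1.0, 1.0), 1.0)
n3 = line_normal((-1.0, 2.0), (1.0, 2.0), 1.0)
n4 = line_normal((0.5, -1.0), (0.5, 1.0), 1.0)
```

Clustering the pairwise-small test values then separates the X-, Y-, and Z-direction groups automatically.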
As shown in fig. 3, 295 straight-line segments were extracted from the original image by the automatic line detection and grouping method described above, among which three groups of mutually orthogonal lines were found: lines parallel to the X axis, lines parallel to the Y axis, and lines parallel to the Z axis. In this example there are 62 lines in the X direction, 44 in the Y direction, and 138 in the Z direction; the remainder are ungrouped, unordered line segments.
The single-image calibration of step S3 is described in detail below. As shown in fig. 4, by projective geometry the projections of parallel space lines on the image intersect at a point called the vanishing point. The interior and exterior orientation elements of the camera can be resolved from the vanishing points. From the standpoint of photogrammetry, the invention takes the straight-line information in the image as observations and establishes an adjustment model for the vanishing points, realizing single-image camera calibration and modeling. The geometric model is shown in fig. 4.
Let the vanishing points of the three orthogonal directions in the image be v_x, v_y, v_z — the three perspective points — with projection centre S, camera principal point o, focal length f, and exterior orientation angle elements φ, ω, κ.
Solving the interior orientation elements:
Since the line joining a vanishing point and the projection centre is parallel to the parallel space lines forming that vanishing point, the vanishing points formed by the straight lines in the three orthogonal directions X, Y, Z correspond to mutually orthogonal rays from S. Thus S-v_x v_y v_z is a rectangular tetrahedron with its right-angle apex at S, So is perpendicular to the plane v_x v_y v_z, and the camera principal point o is the orthocentre of triangle v_x v_y v_z. The focal length f satisfies:

f² = −(v_x − o)·(v_y − o)    (1)

(and likewise for the other two pairs of vanishing points).
Derivation: the rays S v_x and S v_y are perpendicular, so (v_x − S)·(v_y − S) = 0. Writing S = o − f·k, with k the unit normal of the image plane, and noting that v_x − o and v_y − o lie in the image plane and are therefore perpendicular to k, expanding the dot product gives (v_x − o)·(v_y − o) + f² = 0, which is formula (1).
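The orthocentre and formula (1) can be computed directly; the sketch below (coordinate values in the comments are synthetic examples, not from the patent's experiments) solves the two altitude conditions for o and then evaluates f:

```python
import numpy as np

def calibrate_from_vanishing_points(vx, vy, vz):
    """Principal point o = orthocentre of the vanishing-point triangle;
    focal length from f^2 = -(vx - o).(vy - o).
    Assumes three finite vanishing points (three-point perspective)."""
    vx = np.asarray(vx, float)
    vy = np.asarray(vy, float)
    vz = np.asarray(vz, float)
    # Orthocentre: the altitude through vx is perpendicular to edge vy-vz,
    # and the altitude through vy is perpendicular to edge vz-vx.
    A = np.array([vy - vz, vz - vx])
    b = np.array([vx @ (vy - vz), vy @ (vz - vx)])
    o = np.linalg.solve(A, b)
    f = np.sqrt(-(vx - o) @ (vy - o))   # the dot product is negative
    return o, f
```

With synthetic vanishing points generated from o = (10, 20) and f = 1000, the function recovers both exactly.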
Solving the exterior orientation angle elements:
The three exterior orientation angle elements φ, ω, κ follow from the vanishing-point geometry of fig. 4: in the right triangle where SP is the altitude, and in the right triangle where So is the altitude, the trigonometric relations fix the angles. Equivalently, once o and f are known, the direction of each space axis in the camera frame is proportional to (v_i − o, f) for the corresponding vanishing point v_i; stacking the three normalized directions yields the rotation matrix, from which, in a three-point perspective image, the three exterior orientation angle elements are obtained:

φ, ω, κ = g(v_x, v_y, v_z; o, f)    (2)

(The explicit trigonometric expressions appear in the source as equation images.)
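The rotation-matrix route just described can be sketched as follows. Note the sign of each column is recoverable only up to a reflection from the vanishing point alone (an assumption that must be resolved by scene knowledge); any Euler convention for (φ, ω, κ) can then be extracted from R:

```python
import numpy as np

def rotation_from_vanishing_points(vx, vy, vz, o, f):
    """Each space-axis direction in the camera frame is proportional to
    (v - o, f); normalizing and stacking the three gives the rotation
    matrix (columns determined up to sign)."""
    cols = []
    for v in (vx, vy, vz):
        d = np.array([v[0] - o[0], v[1] - o[1], f], float)
        cols.append(d / np.linalg.norm(d))
    return np.column_stack(cols)

# Synthetic check: vanishing points built from o = (10, 20), f = 1000
# and the orthonormal axes (2,1,2)/3, (1,2,-2)/3, (-2,2,1)/3.
R = rotation_from_vanishing_points((1010.0, 520.0), (-490.0, -980.0),
                                   (-1990.0, 2020.0), (10.0, 20.0), 1000.0)
```

The recovered columns are orthonormal, confirming that the vanishing points were consistent with a single rotation.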
Solving the exterior orientation line elements:
After the interior orientation elements and the exterior orientation angle elements of the image are obtained, the exterior orientation line elements — the spatial position (X_S, Y_S, Z_S) of the camera station — can be estimated using the photogrammetric collinearity equations. Clearly, the spatial position of the camera cannot be solved without some known object-space condition. For this purpose, the camera coordinates may be resolved using the spatial information provided by a cube of known absolute position. Fig. 5 shows the cube model. Suppose the image coordinates (x_i, y_i) (i = 1, 2, …, 7) of the 7 visible cube vertices are known. By a similarity transformation, the spatial coordinate system can be taken as the cube's local coordinate system, with vertex 2 as the origin, edge 23 as the X axis, edge 21 as the Y axis, and edge 26 as the Z axis. If the object-space length of cube edge 23 is L, the object coordinates of the 7 vertices are multiples of L in this local frame. From the photogrammetric collinearity equations:

x_i − x_0 = −f · [a_1(X_i − X_S) + b_1(Y_i − Y_S) + c_1(Z_i − Z_S)] / [a_3(X_i − X_S) + b_3(Y_i − Y_S) + c_3(Z_i − Z_S)]
y_i − y_0 = −f · [a_2(X_i − X_S) + b_2(Y_i − Y_S) + c_2(Z_i − Z_S)] / [a_3(X_i − X_S) + b_3(Y_i − Y_S) + c_3(Z_i − Z_S)]

where x_0, y_0, f are the interior orientation elements estimated from the vanishing points, and a_j, b_j, c_j are the elements of the rotation matrix composed from the exterior orientation angle elements estimated from the vanishing points. The exterior orientation line elements (X_S, Y_S, Z_S) are obtained by solving these equations.
After the interior and exterior orientation elements have been obtained from the three vanishing points in a single image, and if the geometric relationships among object-space points, lines, and planes (such as angle and distance conditions) are known, a simple regular building can be three-dimensionally reconstructed, realizing calibration and modeling from a single image.
Step S4, constructing the geometric calibration model based on multi-image three-axis orthogonality, is described in detail below. Geometric calibration of the camera from a single image is theoretically feasible, but practical application shows that it can only weakly calibrate the camera. Although it yields good results for the focal length, the image principal point is highly sensitive to vanishing-point error; in many cases the principal point is simply assumed to lie at the image centre, which greatly reduces calibration accuracy. Therefore, building on the single-image geometric calibration principle, the invention proposes a camera calibration method based on multi-image three-axis orthogonality. The method incorporates the interior and exterior orientation elements into a unified adjustment model, and the error of single-image calibration is suppressed by calibrating over the multi-azimuth images generated by rotational shooting of the target (as shown in fig. 2).
From the steps above, the interior and exterior orientation elements of the camera are functions of the vanishing points and, conversely, the vanishing points are functions of the orientation elements. From fig. 4, the functional relationship between a vanishing point and the exterior orientation angle elements can be written as

v = v(φ, ω, κ; x_0, y_0, f)    (3)

Since computing the vanishing points is not the purpose of calibration — they are only a bridge between the target straight-line information and the calibration parameters — the vanishing points need not be treated as unknown parameters in the adjustment. Linearizing formula (3) gives

δv = (∂v/∂φ)δφ + (∂v/∂ω)δω + (∂v/∂κ)δκ + (∂v/∂x_0)δx_0 + (∂v/∂y_0)δy_0 + (∂v/∂f)δf    (4)

In this way an adjustment model is established in which the straight lines are directly associated with the calibration parameters. Taking the direction corresponding to v_x as an example, each straight line of that group contributes one observation equation; the equations for the v_y and v_z directions are obtained in the same way. (The explicit observation equations appear in the source as equation images.)
This geometric calibration model can greatly improve the accuracy of the camera's interior parameters. However, the algorithm so far is based on a linear pinhole camera model — that is, it assumes the image is distortion-free, so that the calibration parameters reduce to a three-parameter model and a straight line in object space projects to a straight line in the image. In fact, every optical lens exhibits some degree of distortion, and if image distortion is not corrected, calibration accuracy degrades. The model therefore takes the effect of image distortion into account.
The point-position error by which an image point deviates from its ideal position, caused by the design, manufacture, and assembly of the camera objective system, is called optical distortion. Optical distortion is an important error affecting the quality of point coordinates, and it is dominated by radial distortion. Research also shows that under ordinary conditions only the first two terms of radial distortion need be considered in calibration: introducing too many nonlinear parameters (such as the third and higher terms of the radial distortion model) not only fails to improve accuracy but also destabilizes the solution. Two-term radial distortion is therefore sufficient to describe the nonlinear distortion. According to the radial distortion formula, the corrected image-point coordinates can be written as

x' = x + (x − x_0)(K_1·r² + K_2·r⁴)
y' = y + (y − y_0)(K_1·r² + K_2·r⁴)    (6)

where (x', y') are the distortion-corrected image-point coordinates, (x, y) are the measured image-point coordinates, K_1 and K_2 are the distortion coefficients, and r is the radial distance of the image point from the principal point, with r² = (x − x_0)² + (y − y_0)². Linearizing the model gives the matrix form

V = A·X − L

where the matrix A contains the coefficients (partial derivatives) of each parameter in the linearization.
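The two-term radial correction is simple enough to state directly in code (a minimal sketch; the function name is illustrative):

```python
def undistort_point(x, y, x0, y0, K1, K2):
    """Two-term radial distortion correction, with the radial distance r
    measured from the principal point (x0, y0):
        x' = x + (x - x0) * (K1*r^2 + K2*r^4), and likewise for y."""
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy
    corr = K1 * r2 + K2 * r2 * r2
    return x + dx * corr, y + dy * corr
```

With K1 = K2 = 0 the correction is the identity, which is exactly the initial state of the adjustment in step S5.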
With the optical distortion model introduced, the parameters to be calibrated expand from the simplified three parameters to the full five, and combining with model (5) yields the final five-parameter adjustment model. [The model appears in the source as an equation image.]
The model has two major parts: the first is the coordinate information of the start points of the three-axis orthogonal straight lines; the second is the five camera parameters to be calibrated, namely the focal length, the principal point x coordinate, the principal point y coordinate, and the optical distortion parameters K1 and K2.
Observing the three-axis orthogonal straight lines of each image, equations in the five camera parameters are constructed according to this model, and the camera calibration parameters are solved from the simultaneous equations.
Step S5, solving the adjustment model and outputting the camera calibration parameters, is described in detail below. The three-axis orthogonal line features of all images are substituted into the calibration adjustment model above, with the initial values of the optical distortion parameters K1 and K2 set to zero. Iterative adjustment computation then outputs all calibration parameters of the camera.
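Structurally, the iterative adjustment of step S5 is a nonlinear least-squares solve started from approximate values (K1 = K2 = 0 for the distortion terms). A minimal, generic Gauss-Newton driver of this kind might look as follows; it is illustrative only — the actual residual and Jacobian functions would come from the five-parameter adjustment model, which the sketch leaves abstract:

```python
import numpy as np

def gauss_newton(residuals, jacobian, x0, iters=20, tol=1e-10):
    """Generic Gauss-Newton iteration: start from initial parameter
    values x0 and repeatedly solve the linearized least-squares
    update J * dx = -r until the update is negligible."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residuals(x)
        J = jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```

As a sanity check, fitting the slope of y = 2t from noiseless samples converges in one update.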
The feasibility of the invention has been verified experimentally, with the following results. The calibration experiments used a Rollei d30-metric-10mm digital metric camera with a frame size of 1280×1024 pixels and a pixel size of 0.007 mm/pixel; its intrinsic parameters are shown in Table 1:
TABLE 1 measurement of camera intrinsic parameters
[Table 1 is given in the source as an image.]
The parameter values are in millimetres, and the principal-point coordinates are given in a coordinate system with the image centre as origin, X positive to the right, and Y positive upward. To verify the feasibility of the method, 6 camera calibration experiments are listed (see figs. 6-11); each experiment was computed both with and without the distortion model. Statistics for the six experiments are shown in Table 2.
TABLE 2 statistics of six groups of experiments
[Table 2 is given in the source as an image.]
The data analysis shows that when distortion is taken into account, calibration accuracy is slightly higher than when it is ignored. The mean error of the calibration results with the distortion correction is much smaller than without it, which shows, by comparison with the true values, that the calibration values in these tests are stable and the calibration accuracy is high.
The embodiments described above are merely preferred specific embodiments of the invention. The phrases "in one embodiment," "in another embodiment," "in yet another embodiment," and "in other embodiments" in this specification may each refer to one or more of the same or different embodiments in accordance with the present disclosure. Routine changes and substitutions made by those skilled in the art within the technical scope of the invention shall fall within its protection scope.

Claims (10)

1. A method for calibrating a digital camera using the facade texture of a target, characterized by comprising the following steps:
S1, acquiring surface images of a target having three-dimensional line features;
S2, extracting X-, Y-, and Z-axis texture features and grouping them into orthogonal straight lines;
S3, taking the straight-line information in the image as observations, establishing an adjustment model for the vanishing points, and calibrating the single image according to the adjustment model;
S4, based on the single-image calibration parameters obtained in S3, establishing an adjustment model directly relating the straight-line information of the multiple images to the calibration parameters;
and S5, solving the adjustment model to obtain the camera calibration parameters.
2. The method of claim 1, wherein in step S1, the surface images are captured at different angles by rotating the optical axis of the camera.
3. The method according to claim 1, wherein step S2 comprises: performing edge detection on each image to obtain edge points; fitting straight line segments to the edge points; and performing orthogonal straight line grouping on the straight line segments.
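The orthogonal straight line grouping of claim 3 can be pictured as clustering fitted segments by image orientation. A minimal numpy sketch under stated assumptions (the function names and the k-means-on-doubled-angles clustering are illustrative choices, not the patented procedure; doubling the angle identifies directions 0 and π as the same orientation):

```python
import numpy as np

def segment_angle(seg):
    # orientation of a 2-D line segment in [0, pi)
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1) % np.pi

def group_segments(segments, n_groups=3, iters=20):
    # cluster segments into n_groups orientation families via a
    # small k-means on doubled angles mapped to the unit circle
    ang = np.array([segment_angle(s) for s in segments])
    pts = np.stack([np.cos(2 * ang), np.sin(2 * ang)], axis=1)
    # spread the initial centers over the input
    centers = pts[np.linspace(0, len(pts) - 1, n_groups).astype(int)].copy()
    for _ in range(iters):
        # assign each segment to the closest center (max cosine similarity)
        labels = np.argmax(pts @ centers.T, axis=1)
        for k in range(n_groups):
            if np.any(labels == k):
                c = pts[labels == k].mean(axis=0)
                centers[k] = c / np.linalg.norm(c)
    return labels
```

In practice the three families would correspond to the image projections of the X, Y and Z axis textures; a real implementation would also reject short or outlier segments before grouping.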
4. The method according to claim 1, wherein in step S3, after the interior and exterior orientation elements are obtained from the three vanishing points in a single image, the target is three-dimensionally reconstructed according to the geometric relationships among the known object-space points, lines and planes, thereby realizing calibration and modeling based on a single image.
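The recovery of interior orientation elements from three orthogonal vanishing points referred to in claim 4 follows a classical projective-geometry result: assuming zero skew and unit aspect ratio, the principal point is the orthocenter of the vanishing-point triangle, and the focal length follows from any pair of vanishing points. A hedged numpy sketch (the function name is hypothetical):

```python
import numpy as np

def calibrate_from_vanishing_points(v1, v2, v3):
    """Principal point p and focal length f from three finite vanishing
    points of mutually orthogonal directions (zero skew, unit aspect)."""
    v1, v2, v3 = map(np.asarray, (v1, v2, v3))
    # Each pair satisfies (vi - p).(vj - p) = -f^2; subtracting pairs
    # eliminates f and leaves two linear equations in p, i.e. p is the
    # orthocenter of the triangle (v1, v2, v3).
    A = np.array([v2 - v3, v1 - v3], dtype=float)
    b = np.array([v1 @ (v2 - v3), v2 @ (v1 - v3)], dtype=float)
    p = np.linalg.solve(A, b)
    f = np.sqrt(-(v1 - p) @ (v2 - p))
    return p, f
```

The sketch assumes all three vanishing points are finite, i.e. none of the world axes is parallel to the image plane; a degenerate triangle makes the linear system singular.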
5. The method according to claim 4, wherein step S3 further comprises: calibrating each single image to obtain initial values of the calibration parameters and initial values of the three-axis vanishing points.
6. The method of claim 5, wherein in step S4, the interior and exterior orientation elements are incorporated into the adjustment model, and the multi-azimuth images generated by rotating around the target are calibrated.
7. The method of claim 6, wherein in step S4, an optical distortion model is used to jointly adjust and calibrate the three-axis orthogonal straight line features of the multiple images, yielding an adjustment model with distortion correction.
8. The method of claim 7, wherein the step of building the adjustment model in step S4 comprises:
1) for one axial direction, deriving from projective geometry the functional relation between the vanishing point and the exterior orientation angle elements;
2) in the adjustment, linearizing the functional relation without taking the vanishing point as an unknown parameter;
3) and obtaining the linearized functional relations between the vanishing points in the three axial directions and the exterior orientation angle elements.
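The functional relation of step 1) in claim 8 — a vanishing point as the image of a world axis direction, v ~ K·R·e_axis — and its linearization with respect to the exterior orientation angle elements can be illustrated numerically. In this sketch the angle convention (omega, phi, kappa about X, Y, Z) and the finite-difference Jacobian are illustrative assumptions; the patent's adjustment would linearize analytically:

```python
import numpy as np

def rotation(omega, phi, kappa):
    # R = Rz(kappa) @ Ry(phi) @ Rx(omega), one common angle convention
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def vanishing_point(f, cx, cy, angles, axis):
    # image of the ideal point of one world axis: v ~ K @ R @ e_axis
    d = rotation(*angles) @ np.eye(3)[:, axis]
    return np.array([cx + f * d[0] / d[2], cy + f * d[1] / d[2]])

def jacobian_wrt_angles(f, cx, cy, angles, axis, h=1e-6):
    # numerical linearization of the vanishing point with respect to
    # the exterior orientation angle elements (central differences)
    J = np.zeros((2, 3))
    for k in range(3):
        a1 = np.array(angles, dtype=float)
        a0 = np.array(angles, dtype=float)
        a1[k] += h
        a0[k] -= h
        J[:, k] = (vanishing_point(f, cx, cy, a1, axis)
                   - vanishing_point(f, cx, cy, a0, axis)) / (2 * h)
    return J
```

For the Z axis and zero angles the vanishing point sits at the principal point, and a rotation kappa about the optical axis leaves it unchanged, so the corresponding Jacobian column vanishes; this is the kind of relation that the linearized adjustment of steps 2) and 3) stacks over all three axes.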
9. The method according to claim 8, wherein the adjustment model in step S4 comprises: the coordinate information of the starting points of the three-axis orthogonal straight lines, and five camera parameters to be calibrated: focal length, principal point x coordinate, principal point y coordinate, optical distortion parameter K1, and optical distortion parameter K2.
10. The method of claim 9, wherein in step S5, the three-axis orthogonal straight line information of each image and the five camera parameters are used to construct equations according to the adjustment model of step S4, and the camera calibration parameters are solved.
CN202111365835.3A 2021-11-18 2021-11-18 Method for calibrating digital camera by using texture of object elevation Pending CN113822948A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111365835.3A CN113822948A (en) 2021-11-18 2021-11-18 Method for calibrating digital camera by using texture of object elevation

Publications (1)

Publication Number Publication Date
CN113822948A true CN113822948A (en) 2021-12-21

Family

ID=78919321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111365835.3A Pending CN113822948A (en) 2021-11-18 2021-11-18 Method for calibrating digital camera by using texture of object elevation

Country Status (1)

Country Link
CN (1) CN113822948A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130287318A1 (en) * 2012-04-27 2013-10-31 Adobe Systems Incorporated Automatic Adjustment of Images using a Homography
CN108267854A (en) * 2016-12-30 2018-07-10 王政 The zoom lens geometry calibration method of model is relied on based on EXIF focal lengths

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIE Wenhan, ZHANG Zuxun: "Camera Calibration Based on Vanishing Points of Multiple Images", Acta Geodaetica et Cartographica Sinica *
XIE Wenhan: "Research on Camera Calibration Methods Based on Vanishing Points of Multiple Images", China Doctoral Dissertations Full-text Database, Basic Sciences (Monthly) *

Similar Documents

Publication Publication Date Title
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
JP6967715B2 (en) Camera calibration method, camera calibration program and camera calibration device
Luhmann et al. Sensor modelling and camera calibration for close-range photogrammetry
CN107507235B (en) Registration method of color image and depth image acquired based on RGB-D equipment
Zhang Camera calibration with one-dimensional objects
CN109163657B (en) Round target pose detection method based on binocular vision three-dimensional reconstruction
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN109974618B (en) Global calibration method of multi-sensor vision measurement system
Kannala et al. Geometric camera calibration.
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN115830103A (en) Monocular color-based transparent object positioning method and device and storage medium
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
TW201310004A (en) Correlation arrangement device of digital images
CN116051659B (en) Linear array camera and 2D laser scanner combined calibration method
Wang et al. Complete calibration of a structured light stripe vision sensor through a single cylindrical target
JP2012198031A (en) Image correction method and image correction device
CN105678088B (en) A kind of adjustment optimization algorithm of target gauge head
CN111754584A (en) Remote large-field-of-view camera parameter calibration system and method
Cauchois et al. Calibration of the omnidirectional vision sensor: SYCLOP
He et al. A new camera calibration method from vanishing points in a vision system
TW565736B (en) Method for determining the optical parameters of a camera
CN112163309A (en) Method for quickly extracting space circle center of single plane circular image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211221
