CN111445535A - Camera calibration method, device and equipment - Google Patents

Camera calibration method, device and equipment

Info

Publication number
CN111445535A
CN111445535A (application CN202010300244.7A)
Authority
CN
China
Prior art keywords
camera
parameter
dimensional
parameters
target
Prior art date
Legal status
Pending
Application number
CN202010300244.7A
Other languages
Chinese (zh)
Inventor
刘凯正
俞蔚
Current Assignee
Zhejiang Kelan Information Technology Co ltd
Original Assignee
Zhejiang Kelan Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Kelan Information Technology Co., Ltd.
Priority to CN202010300244.7A
Publication of CN111445535A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a camera calibration method that constructs an error function from the internal and external parameters of a camera and optimizes the real number parameters of that function using pre-selected feature point pairs; the optimal real number parameters are taken as the camera calibration result. The calibration process runs automatically in response to a calibration instruction, so the camera does not need to be adjusted manually and calibration efficiency is improved; because the internal and external parameters are calibrated simultaneously, scene adaptability is improved and the accuracy of the calibration result is ensured. The application also provides a camera calibration device, a camera calibration apparatus and a readable storage medium, whose technical effects correspond to those of the method.

Description

Camera calibration method, device and equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a camera calibration method, device, and apparatus, and a readable storage medium.
Background
Current camera calibration methods generally adjust the position, orientation and other parameters of a camera manually through a visualization tool while observing whether the video fuses with the three-dimensional scene, until an acceptable fusion effect is reached. This approach has low accuracy and a poor fusion effect; moreover, manual adjustment of the camera is inconvenient, and it is very difficult to find accurate camera parameters by trial, so the whole calibration process is inefficient.
Compared with such coarse manual adjustment, finer methods exist that solve for the camera parameters with mathematical tools, but they generally fix the camera's internal parameters and solve only for its external parameters. In practice, cameras from different manufacturers have different parameters, so calibration with fixed internal parameters is not very accurate and the fusion of the video with the three-dimensional scene is poor.
Therefore, current camera calibration schemes either require manual adjustment of the camera or cannot calibrate the internal parameters; calibration accuracy and efficiency are consequently low, which degrades the fusion of the video with the three-dimensional scene.
Disclosure of Invention
The application aims to provide a camera calibration method, device, apparatus and readable storage medium that solve the low calibration accuracy and efficiency of current schemes, which either require manual adjustment of the camera or cannot calibrate its internal parameters.
The specific scheme is as follows:
in a first aspect, the present application provides a camera calibration method, including:
determining the position coordinates of the target three-dimensional point on a camera projection plane according to the calibration instruction, based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinates of the target three-dimensional point in a world coordinate system;
constructing an error function between the position coordinates of the camera projection plane and the position coordinates in the video according to the position coordinates of the target three-dimensional point in the video, wherein real number parameters of the error function comprise the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter;
optimizing real number parameters of the error function according to a preset number of feature point pairs until an error value corresponding to the feature point pairs is minimum, and obtaining optimal real number parameters to serve as camera internal and external parameters, wherein the feature point pairs comprise three-dimensional points in a three-dimensional scene and two-dimensional points which are uniquely corresponding to the three-dimensional points in a video;
and calibrating the current camera according to the internal and external parameters of the camera.
Preferably, the determining, based on the orientation parameter, the upward orientation parameter, the position parameter, and the focal length parameter of the current camera, the position coordinate of the target three-dimensional point on the camera projection plane according to the position coordinate of the target three-dimensional point in the world coordinate system includes:
determining a camera coordinate system according to the orientation parameter and the upper orientation parameter of the current camera, and determining a transformation matrix from a world coordinate system to the camera coordinate system;
determining the position coordinates of the target three-dimensional point in a camera coordinate system according to the position parameters of the current camera, the transformation matrix and the position coordinates of the target three-dimensional point in a world coordinate system;
and determining the position coordinates of the target three-dimensional point on a camera projection plane according to the focal length parameters of the current camera.
Preferably, before the optimizing the real number parameter of the error function according to the preset number of feature point pairs until an error value corresponding to the feature point pair is minimum to obtain an optimal real number parameter, the method further includes:
and selecting a preset number of characteristic point pairs which are not in the same plane, wherein the preset number is more than or equal to 4.
Preferably, before the optimizing the real number parameter of the error function according to the preset number of feature point pairs until an error value corresponding to the feature point pair is minimum to obtain an optimal real number parameter, the method further includes:
determining the central points of the three-dimensional points in the preset number of characteristic point pairs;
determining a difference vector between the central point and the origin of coordinates;
and translating the three-dimensional points in the preset number of characteristic point pairs according to the difference vector until the translated central point is positioned at the origin of coordinates.
Preferably, the optimizing the real number parameter of the error function according to a preset number of feature point pairs until an error value corresponding to the feature point pair is minimum to obtain an optimal real number parameter includes:
and optimizing the real number parameters of the error function by using a Gauss-Newton method according to a preset number of feature point pairs until the error value corresponding to the feature point pairs is minimum, so as to obtain the optimal real number parameters.
Preferably, after the calibrating the current camera according to the internal and external parameters of the camera, the method further includes:
judging whether the content of the three-dimensional scene is consistent with the content of the video;
and if they are inconsistent, generating prompt information indicating a parameter error and generating a recalibration instruction.
Preferably, after the calibrating the current camera according to the internal and external parameters of the camera, the method further includes:
according to the fusion instruction, performing intersection operation on the view cone of the current camera and the three-dimensional scene to obtain a target grid;
setting image frames in a video to the target grid;
and displaying the target grid to realize the fusion of the video and the three-dimensional scene.
In a second aspect, the present application provides a camera calibration apparatus, including:
a projection module, configured to determine, according to the calibration instruction, the position coordinates of the target three-dimensional point on the camera projection plane based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinates of the target three-dimensional point in a world coordinate system;
an error function building module, configured to construct an error function between the position coordinates on the camera projection plane and the position coordinates in the video according to the position coordinates of the target three-dimensional point in the video, wherein the real number parameters of the error function comprise the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter;
a parameter optimization module, configured to optimize the real number parameters of the error function according to a preset number of feature point pairs until the error value corresponding to the feature point pairs is minimum, so as to obtain the optimal real number parameters as the camera internal and external parameters, wherein a feature point pair comprises a three-dimensional point in the three-dimensional scene and the two-dimensional point uniquely corresponding to it in the video;
and a camera calibration module, configured to calibrate the current camera according to the camera internal and external parameters.
In a third aspect, the present application provides a camera calibration apparatus, including:
a memory: for storing a computer program;
a processor: for executing said computer program for carrying out the steps of the camera calibration method as described above.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program for implementing the steps of the camera calibration method as described above when executed by a processor.
The application provides a camera calibration method, which comprises the following steps: determining the position coordinates of the target three-dimensional point on a camera projection plane according to the calibration instruction, based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinates of the target three-dimensional point in a world coordinate system; constructing an error function between the position coordinate of the camera projection plane and the position coordinate of the target three-dimensional point in the video according to the position coordinate of the target three-dimensional point in the video, wherein real number parameters of the error function comprise an orientation parameter, an upper orientation parameter, a position parameter and a focal length parameter; optimizing real number parameters of the error function according to a preset number of feature point pairs until the error value corresponding to the feature point pairs is minimum, and obtaining optimal real number parameters to be used as internal and external parameters of the camera; and calibrating the current camera according to the internal and external parameters of the camera. The characteristic point pairs comprise three-dimensional points in a three-dimensional scene and two-dimensional points which are uniquely corresponding to the three-dimensional points in the video.
Therefore, the method can construct an error function based on the internal and external parameters of the camera, and optimize the real number parameters of the error function by using the pre-selected feature points to obtain the optimal real number parameters which are used as the camera calibration result. The calibration process is automatically realized in response to the calibration instruction, the camera does not need to be adjusted manually, the camera calibration efficiency is improved, the internal parameter and the external parameter of the camera can be calibrated simultaneously, the scene adaptability is improved, and the accuracy of the camera calibration result is ensured.
In addition, the application also provides a camera calibration device, equipment and a readable storage medium, and the technical effect of the camera calibration device corresponds to that of the method, and the details are not repeated here.
Drawings
To explain the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a first implementation of a camera calibration method according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating an implementation of a second camera calibration method according to an embodiment of the present disclosure;
fig. 3 is a functional block diagram of an embodiment of a camera calibration apparatus provided in the present application;
fig. 4 is a schematic structural diagram of an embodiment of a camera calibration device provided in the present application.
Detailed Description
The core of the application is to provide a camera calibration method, a camera calibration device, equipment and a readable storage medium, which can automatically realize calibration of internal parameters and external parameters of a camera according to a calibration instruction, do not need to manually adjust the camera, improve the camera calibration efficiency and ensure the accuracy of a camera calibration result.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a first embodiment of a camera calibration method provided in the present application is described as follows, where the first embodiment includes:
s101, determining the position coordinates of a target three-dimensional point on a camera projection plane according to a calibration instruction, based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinates of the target three-dimensional point in a world coordinate system;
the present embodiment is directed to calibrating a camera, so-called camera calibration, that is, a process of obtaining internal and external parameters of the camera. In the scene of fusing the video and the three-dimensional scene, if a better fusion effect is to be achieved, the most important thing is to accurately calculate the internal and external parameters of the camera which meet the requirements. The internal reference refers to the distance from the camera to the projection screen, namely the focal length; the external parameters include three parameters of the position, orientation and upward direction of the camera.
Specifically, the step S101 includes the steps of: firstly, determining a camera coordinate system according to orientation parameters and upper orientation parameters of a current camera, and determining a transformation matrix from a world coordinate system to the camera coordinate system; then, determining the position coordinates of the target three-dimensional point in a camera coordinate system according to the position parameters of the current camera, the transformation matrix and the position coordinates of the target three-dimensional point in a world coordinate system; and finally, determining the position coordinates of the target three-dimensional point on a camera projection plane according to the focal length parameters of the current camera. The target three-dimensional point refers to an arbitrarily designated point in a three-dimensional space.
S102, constructing an error function between the position coordinate of the projection plane and the position coordinate of the target three-dimensional point in the video according to the position coordinate of the target three-dimensional point in the video;
the error function is used to describe the error of two position coordinates, wherein the position coordinates in the camera projection plane are based on the position coordinates of the target three-dimensional point in the world coordinate system, and are obtained through the conversion process as described in S101, which can be understood as an estimate; the position coordinates in the video are the position coordinates which are directly acquired and uniquely correspond to the target three-dimensional point on the video interface, and can be understood as actual values. The error function is used to measure the deviation between the guessed value and the actual value.
It can be understood that, in the process of actually calculating the error according to the error function, the orientation parameter, the upward orientation parameter, the position parameter and the focal length parameter are real number parameters of the error function, that is, the internal and external parameters of the camera are real number parameters of the error function; and the position coordinates of the target three-dimensional point in the world coordinate system and the position coordinates of the target three-dimensional point in the video are input values of the error function.
S103, optimizing real number parameters of the error function according to a preset number of feature point pairs until an error value corresponding to the feature point pairs is minimum, and obtaining optimal real number parameters to serve as internal and external parameters of the camera;
the feature point pair is a pair of feature points, which is a pair of three-dimensional points and two-dimensional points in a video, where one corresponding two-dimensional point exists for any three-dimensional point in a three-dimensional scene. In order to find the internal and external parameters of the camera, a certain number of feature point pairs are selected in advance to serve as the input of the subsequent process. In order to reduce the influence of errors caused by each feature point pair, the feature point pairs should be selected as many as possible, so that the preset number should be greater than or equal to a preset threshold, and the value of the preset threshold is determined according to an actual scene. In addition, in order to improve the accuracy of the camera calibration result, when the feature point pairs are selected, the feature point pairs which are not on the same plane are selected as much as possible.
In the parameter optimization process, the coordinates of the three-dimensional point of each feature point pair are used as the position coordinates of the target three-dimensional point in the world coordinate system, the coordinates of the two-dimensional point are used as its position coordinates in the video, and both are substituted into the error function to obtain the error value corresponding to that feature point pair. An optimization algorithm then adjusts the real number parameters of the error function to minimize the error values of the feature point pairs, and the resulting optimal real number parameters are taken as the internal and external parameters of the camera.
And S104, calibrating the current camera according to the internal and external parameters of the camera.
The camera calibration method provided by the embodiment can construct an error function based on internal and external parameters of the camera, and optimize real number parameters of the error function by using pre-selected feature points to obtain optimal real number parameters, namely, the optimal real number parameters are used as a camera calibration result. The calibration process is automatically realized in response to the calibration instruction, the camera does not need to be adjusted manually, the camera calibration efficiency is improved, the internal parameter and the external parameter of the camera can be calibrated simultaneously, the scene adaptability is improved, and the accuracy of the camera calibration result is ensured.
The second embodiment of the camera calibration method provided by the present application is described in detail below; it is implemented on the basis of the first embodiment and extends it to a certain degree.
Referring to fig. 2, the second embodiment specifically includes:
s201, selecting a preset number of characteristic point pairs which are not on the same plane, wherein the preset number is more than or equal to 4;
in order to facilitate observation and comparison, two windows are arranged, wherein one window is used for displaying a three-dimensional scene, and the other window is used for displaying a video. And then selecting characteristic feature pairs by comparing the three-dimensional scene with the video.
When selecting feature point pairs, several principles are followed: first, to avoid unnecessary errors, choose locations with obvious features, such as the vertices of the rectangle formed by a zebra crossing or the vertices of a window; second, choose feature points that are not in the same plane as far as possible, to prevent the Jacobian matrix from becoming singular and the parameters from being computed incorrectly; third, to speed up the optimization, make the orientation of the software camera as consistent as possible with that of the real camera when entering the initial values; fourth, to reduce the influence of the error contributed by each feature point pair, select as many feature point pairs as possible, and no fewer than 4.
S202, determining the central points of the three-dimensional points in the preset number of characteristic point pairs, and determining a difference vector between the central points and the origin of coordinates;
s203, translating the three-dimensional points in the preset number of feature point pairs according to the difference vector;
the numerical value of the position coordinate of the three-dimensional point in the feature point pair is generally large, and in order to avoid causing a corresponding error due to numerical accuracy, the embodiment translates the three-dimensional point in the feature point pair to be near the origin coordinate after selecting the feature point pair.
As a specific implementation, as described in S202 and S203, this embodiment first finds the central point of the three-dimensional points in the feature point pairs, then computes the difference vector from that central point to the coordinate origin (0,0,0), and finally translates the three-dimensional point of each feature point pair by that difference vector, so that the translated central point lies at the origin.
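As an illustration only (not part of the patent text), a minimal numpy sketch of this centering step could look as follows; the function name center_points and the array layout are assumptions:

    import numpy as np

    def center_points(points_3d):
        """Translate the selected 3D points so that their central point lies at the origin.
        Returns the shifted points and the difference vector, so the shift can be undone later."""
        pts = np.asarray(points_3d, dtype=float)   # shape (m, 3): one row per feature point pair
        diff = pts.mean(axis=0) - np.zeros(3)      # difference vector from the central point to (0, 0, 0)
        return pts - diff, diff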
S204, according to the calibration instruction, determining a camera coordinate system according to the orientation parameter and the upper orientation parameter of the current camera, and determining a transformation matrix from a world coordinate system to the camera coordinate system;
s205, determining the position coordinates of the target three-dimensional point in a camera coordinate system according to the position parameters of the current camera, the transformation matrix and the position coordinates of the target three-dimensional point in a world coordinate system;
s206, determining the position coordinates of the target three-dimensional point on a camera projection plane according to the focal length parameters of the current camera;
s207, constructing an error function between the position coordinate of the projection plane and the position coordinate of the target three-dimensional point in the video according to the position coordinate of the target three-dimensional point in the video;
it should be noted that, the processes of selecting and processing the feature point pairs are described in S201 to S203, and the processes of constructing the error function are described in S204 to S207, and in practical applications, the order of the two processes can be freely adjusted.
S208, optimizing real number parameters of the error function by using a Gauss-Newton method according to a preset number of feature point pairs until an error value corresponding to the feature point pairs is minimum, and obtaining optimal real number parameters to be used as internal and external parameters of the camera;
the characteristic point pairs comprise three-dimensional points in a three-dimensional scene and two-dimensional points which are uniquely corresponding to the three-dimensional points in a video; the real parameters of the error function include the orientation parameter, the upper orientation parameter, the position parameter, and the focal length parameter.
S209, calibrating the current camera according to the internal and external parameters of the camera.
After the parameters of the main camera in the three-dimensional scene have been set to the computed camera internal and external parameters, one can further observe whether the picture in the three-dimensional scene is consistent with the video content; this consistency indicates whether the computed internal and external parameters are accurate. That is, it is determined whether the content of the three-dimensional scene is consistent with the content of the video; if they are inconsistent, prompt information indicating that the parameters are wrong or inaccurate is generated together with a recalibration instruction.
After accurate internal and external camera parameters have been obtained, a corresponding projection screen needs to be generated in order to project the video content into the three-dimensional scene. The method adopted by this embodiment is as follows: according to the fusion instruction, intersect the view frustum of the current camera with the three-dimensional scene to obtain a grid; then paste each frame of the video onto the grid as its texture image; and finally display the textured grid in the three-dimensional scene to obtain the effect of fusing the video with the three-dimensional scene. If the fusion effect is not good, the above steps are repeated.
Next, the construction process and the parameter optimization process of the error function in the second embodiment are further described in detail by taking a specific application scenario as an example.
(1) Determining a camera coordinate system, and determining a transformation matrix from a world coordinate system to the camera coordinate system.
Assume the camera's viewing direction makes an angle α with the xy-plane and its horizontal projection makes an angle β with the x-axis; the viewing direction is then:
v1 = (cos(α)cos(β), cos(α)sin(β), sin(α));
A feasible vector in the plane perpendicular to v1 is:
v2t = (sin(α)cos(β), sin(α)sin(β), -cos(α));
Rotating v2t about v1 by an angle γ gives v2:
[The equation image is not preserved in this text. Since v2t is perpendicular to v1, the rotation can be written as v2 = cos(γ)·v2t + sin(γ)·(v1 × v2t); v3, whose definition is likewise not preserved, completes the orthogonal camera frame, e.g. v3 = v1 × v2.]
Taking v1, v2, v3 as the x, y and z axes of the camera coordinate system, the transformation matrix from the world coordinate system to the camera coordinate system is:
M = [v1′, v2′, v3′];
where v1′, v2′ and v3′ denote the transposes of v1, v2 and v3 respectively, i.e. the axis vectors written as columns.
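As an illustration (not part of the patent), the construction of the camera axes and of M might be sketched in numpy as follows; the reduced rotation formula and v3 = v1 × v2 are assumptions, since the patent's equation image is not preserved:

    import numpy as np

    def camera_axes(alpha, beta, gamma):
        """Camera axes from the viewing-direction angles (alpha, beta) and the roll angle gamma."""
        v1 = np.array([np.cos(alpha) * np.cos(beta),
                       np.cos(alpha) * np.sin(beta),
                       np.sin(alpha)])                                    # viewing direction
        v2t = np.array([np.sin(alpha) * np.cos(beta),
                        np.sin(alpha) * np.sin(beta),
                        -np.cos(alpha)])                                  # perpendicular to v1
        v2 = np.cos(gamma) * v2t + np.sin(gamma) * np.cross(v1, v2t)      # v2t rotated about v1 by gamma
        v3 = np.cross(v1, v2)                                             # assumed third axis of the frame
        return v1, v2, v3

    def world_to_camera_matrix(alpha, beta, gamma):
        """M = [v1', v2', v3']: the camera axes written as columns of a 3x3 orthogonal matrix."""
        return np.column_stack(camera_axes(alpha, beta, gamma))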
(2) Determine the position coordinates of the target three-dimensional point in the camera coordinate system from its position coordinates in the world coordinate system.
Since M is an orthogonal matrix, its inverse is M′. Let the position of the camera be t = (x_t, y_t, z_t) and let the position coordinates of the target three-dimensional point in world coordinates be:
a = (x_a, y_a, z_a);
then the position coordinates of the target three-dimensional point in the camera coordinate system are:
b = (x_b, y_b, z_b) = (M′(a′ - t′))′ = (a - t)M.
(3) Determine the position coordinates of the target three-dimensional point on the camera projection plane from its position coordinates in the camera coordinate system.
Assuming that the focal length is f, the coordinates of the target three-dimensional point on the projection plane are:
[The equation image is not preserved in this text. It gives the pinhole projection of b at focal length f: the two components of b lying in the image plane, each divided by the component of b along the viewing direction and multiplied by f. Denote the result by p = (p1, p2).]
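Continuing the sketch above (again not part of the patent), steps (2) and (3) could be written as follows; treating v1 as the depth axis and using the standard pinhole division are assumptions, since the projection equation image is not preserved:

    import numpy as np

    def project(h, a):
        """Coordinates of world point a on the camera projection plane for the parameter
        vector h = (alpha, beta, gamma, xt, yt, zt, f)."""
        alpha, beta, gamma, xt, yt, zt, f = h
        M = world_to_camera_matrix(alpha, beta, gamma)                    # from the sketch above
        b = M.T @ (np.asarray(a, dtype=float) - np.array([xt, yt, zt]))   # b = (M'(a' - t'))'
        return f * b[1:] / b[0]                                           # in-plane components over the depth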
(4) Construct an error function from the position coordinates of the target three-dimensional point on the camera projection plane and its corresponding position coordinates in the video.
Denote the corresponding position coordinates of the target three-dimensional point in the video by:
v = (v1, v2);
(here v1 and v2 are the two components of the video coordinates, not the camera axes above). Comparing these with the projection-plane coordinates, the error function is then:
[The equation image is not preserved in this text. It defines the error d between the projection-plane coordinates p and the video coordinates v, e.g. their Euclidean distance d = sqrt((p1 - v1)² + (p2 - v2)²).]
in summary, this embodiment defines an error function with a total of 7 real parameters, including 6 camera parameters α, γ, xt,yt,ztAnd 1 external parameter f of the camera. As follows:
d=F(x|α,β,γ,xt,yt,zt,f)。
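Again as an illustration only, the error function F and the residual vector r(d) used below could be wrapped like this, reusing project() from the sketch above; taking d as the Euclidean distance between the two coordinates is an assumption:

    import numpy as np

    def error(h, a, v):
        """d = F(x | h): error for one feature point pair, with a the 3D point in world
        coordinates and v = (v1, v2) its position in the video."""
        return np.linalg.norm(project(h, a) - np.asarray(v, dtype=float))

    def residuals(h, points_3d, points_2d):
        """r(d) = (d1, ..., dm): one error value per pre-selected feature point pair."""
        return np.array([error(h, a, v) for a, v in zip(points_3d, points_2d)])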
(5) Optimize the real number parameters of the error function using the feature point pairs.
To find the optimal real number parameters of the error function, the pre-selected feature point pairs are substituted into the function F and the sum of the squared errors d² is minimized. As a specific embodiment, the Gauss-Newton method is used; because it descends quickly, it can find the optimal real number parameters fast. Specifically, let the vector h = (α, β, γ, x_t, y_t, z_t, f) and let h_n be its value at the n-th iteration; then:
h_(n+1) = h_n - (J′J)^(-1) J′ r(d);
r(d) = (d_1, d_2, ..., d_m);
d_1 = F(x_1 | h), d_2 = F(x_2 | h), ..., d_m = F(x_m | h);
where J is the Jacobian matrix of r with respect to h, n is the iteration index, and m is the number of feature point pairs.
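A minimal Gauss-Newton loop over these residuals might look as follows (illustrative only, reusing residuals() from the sketch above); approximating the Jacobian by forward differences and the stopping rule are assumptions, since the patent does not specify them, and note that with one scalar error per pair this sketch needs enough feature point pairs for J to have full column rank:

    import numpy as np

    def gauss_newton(h0, points_3d, points_2d, iters=50, eps=1e-6):
        """Iterate h_(n+1) = h_n - (J'J)^(-1) J' r(d) until the update becomes negligible."""
        h = np.asarray(h0, dtype=float)
        for _ in range(iters):
            r = residuals(h, points_3d, points_2d)                 # r(d) = (d1, ..., dm)
            J = np.zeros((len(r), len(h)))
            for j in range(len(h)):                                # numerical Jacobian, column by column
                hp = h.copy()
                hp[j] += eps
                J[:, j] = (residuals(hp, points_3d, points_2d) - r) / eps
            step = np.linalg.lstsq(J, r, rcond=None)[0]            # equals (J'J)^(-1) J' r when J has full rank
            h = h - step
            if np.linalg.norm(step) < 1e-10:                       # converged: update is negligible
                break
        return h                                                   # optimal real number parameters

Here h0 would be a rough initial guess (for example the manually set software-camera pose and focal length mentioned in the selection principles above), and points_3d / points_2d are the translated three-dimensional points and their corresponding video coordinates.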
In summary, with the camera calibration method of this embodiment the calibration process runs automatically in response to the calibration instruction, so the camera does not need to be adjusted manually and calibration efficiency is improved. Unlike existing camera calibration methods, the focal length is used as an optimized parameter, so the method can adapt to cameras produced by different manufacturers and scene adaptability is improved. In the parameter optimization the Gauss-Newton method converges quickly during iteration, ensuring an efficient algorithm, so the camera is calibrated quickly while the accuracy of the calibration result is guaranteed.
In the following, a camera calibration device provided in an embodiment of the present application is introduced, and a camera calibration device described below and a camera calibration method described above may be referred to correspondingly.
As shown in fig. 3, the camera calibration apparatus of the present embodiment includes:
the projection module 301: the system comprises a camera projection plane, a camera positioning plane, a calibration command and a target three-dimensional point positioning plane, wherein the camera positioning plane is used for determining the position coordinate of the target three-dimensional point on the camera projection plane according to the calibration command, based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinate of the target three-dimensional point in a world coordinate system;
error function building block 302: the real number parameters of the error function comprise the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter;
the parameter optimization module 303: the real number parameter of the error function is optimized according to a preset number of feature point pairs until an error value corresponding to the feature point pairs is minimum, so that an optimal real number parameter is obtained and is used as an internal parameter and an external parameter of the camera, wherein the feature point pairs comprise three-dimensional points in a three-dimensional scene and two-dimensional points which are uniquely corresponding to the three-dimensional points in a video;
the camera calibration module 304: and the camera calibration module is used for calibrating the current camera according to the internal and external parameters of the camera.
The camera calibration apparatus of the present embodiment is used to implement the aforementioned camera calibration method, and therefore the specific implementation of the apparatus can be seen in the foregoing embodiment parts of the camera calibration method, for example, the projection module 301, the error function construction module 302, the parameter optimization module 303, and the camera calibration module 304, which are respectively used to implement steps S101, S102, S103, and S104 in the aforementioned camera calibration method. Therefore, specific embodiments thereof may be referred to in the description of the corresponding respective partial embodiments, and will not be described herein.
In addition, since the camera calibration device of the present embodiment is used for implementing the camera calibration method, the function thereof corresponds to the function of the method, and is not described herein again.
In addition, the present application further provides a camera calibration device, as shown in fig. 4, including:
the memory 100: for storing a computer program;
the processor 200: for executing said computer program for carrying out the steps of the camera calibration method as described above.
Finally, the present application provides a readable storage medium having stored thereon a computer program for implementing the steps of the camera calibration method as described above when executed by a processor.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The solutions provided in the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the application, and the above description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A camera calibration method is characterized by comprising the following steps:
determining the position coordinates of the target three-dimensional point on a camera projection plane according to the calibration instruction, based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinates of the target three-dimensional point in a world coordinate system;
constructing an error function between the position coordinates of the camera projection plane and the position coordinates in the video according to the position coordinates of the target three-dimensional point in the video, wherein real number parameters of the error function comprise the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter;
optimizing real number parameters of the error function according to a preset number of feature point pairs until an error value corresponding to the feature point pairs is minimum, and obtaining optimal real number parameters to serve as camera internal and external parameters, wherein the feature point pairs comprise three-dimensional points in a three-dimensional scene and two-dimensional points which are uniquely corresponding to the three-dimensional points in a video;
and calibrating the current camera according to the internal and external parameters of the camera.
2. The method of claim 1, wherein the determining the position coordinates of the target three-dimensional point on the projection plane of the camera according to the position coordinates of the target three-dimensional point in the world coordinate system based on the orientation parameter, the upward orientation parameter, the position parameter and the focal length parameter of the current camera comprises:
determining a camera coordinate system according to the orientation parameter and the upper orientation parameter of the current camera, and determining a transformation matrix from a world coordinate system to the camera coordinate system;
determining the position coordinates of the target three-dimensional point in a camera coordinate system according to the position parameters of the current camera, the transformation matrix and the position coordinates of the target three-dimensional point in a world coordinate system;
and determining the position coordinates of the target three-dimensional point on a camera projection plane according to the focal length parameters of the current camera.
3. The method of claim 1, wherein before the optimizing the real number parameter of the error function according to the preset number of pairs of feature points until the error value corresponding to the pair of feature points is minimum to obtain an optimal real number parameter, the method further comprises:
and selecting a preset number of characteristic point pairs which are not in the same plane, wherein the preset number is more than or equal to 4.
4. The method of claim 3, wherein before the optimizing the real number parameter of the error function according to the preset number of pairs of feature points until the error value corresponding to the pair of feature points is minimum to obtain an optimal real number parameter, the method further comprises:
determining the central points of the three-dimensional points in the preset number of characteristic point pairs;
determining a difference vector between the central point and the origin of coordinates;
and translating the three-dimensional points in the preset number of characteristic point pairs according to the difference vector until the translated central point is positioned at the origin of coordinates.
5. The method of claim 1, wherein the optimizing the real number parameter of the error function according to a preset number of pairs of feature points until an error value corresponding to the pair of feature points is minimum to obtain an optimal real number parameter comprises:
and optimizing the real number parameters of the error function by using a Gauss-Newton method according to a preset number of feature point pairs until the error value corresponding to the feature point pairs is minimum, so as to obtain the optimal real number parameters.
6. The method according to any one of claims 1-5, wherein after said calibrating said current camera according to said camera internal and external parameters, further comprising:
judging whether the content of the three-dimensional scene is consistent with the content of the video;
and if they are inconsistent, generating prompt information indicating a parameter error and generating a recalibration instruction.
7. The method of claim 6, wherein after said calibrating said current camera according to said camera internal and external parameters, further comprising:
according to the fusion instruction, performing intersection operation on the view cone of the current camera and the three-dimensional scene to obtain a target grid;
setting image frames in a video to the target grid;
and displaying the target grid to realize the fusion of the video and the three-dimensional scene.
8. A camera calibration device is characterized by comprising:
a projection module, configured to determine, according to the calibration instruction, the position coordinates of the target three-dimensional point on the camera projection plane based on the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter of the current camera and the position coordinates of the target three-dimensional point in a world coordinate system;
an error function building module, configured to construct an error function between the position coordinates on the camera projection plane and the position coordinates in the video according to the position coordinates of the target three-dimensional point in the video, wherein the real number parameters of the error function comprise the orientation parameter, the upper orientation parameter, the position parameter and the focal length parameter;
a parameter optimization module, configured to optimize the real number parameters of the error function according to a preset number of feature point pairs until the error value corresponding to the feature point pairs is minimum, so as to obtain the optimal real number parameters as the camera internal and external parameters, wherein a feature point pair comprises a three-dimensional point in the three-dimensional scene and the two-dimensional point uniquely corresponding to it in the video;
and a camera calibration module, configured to calibrate the current camera according to the camera internal and external parameters.
9. A camera calibration apparatus, comprising:
a memory: for storing a computer program;
a processor: for executing said computer program for carrying out the steps of the camera calibration method according to any one of claims 1 to 7.
10. A readable storage medium, having stored thereon a computer program for implementing the steps of the camera calibration method according to any one of claims 1 to 7 when being executed by a processor.
CN202010300244.7A (filed 2020-04-16, priority date 2020-04-16): Camera calibration method, device and equipment. Status: Pending. Publication: CN111445535A.

Priority Applications (1)

Application CN202010300244.7A, filed 2020-04-16: Camera calibration method, device and equipment (published as CN111445535A).

Applications Claiming Priority (1)

Application CN202010300244.7A, filed 2020-04-16: Camera calibration method, device and equipment (published as CN111445535A).

Publications (1)

CN111445535A, published 2020-07-24

Family

ID=71653297

Family Applications (1)

CN202010300244.7A (Pending): CN111445535A, Camera calibration method, device and equipment

Country Status (1)

CN: CN111445535A

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101839692A (en) * 2010-05-27 2010-09-22 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN103400409A (en) * 2013-08-27 2013-11-20 华中师范大学 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
CN103716586A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Monitoring video fusion system and monitoring video fusion method based on three-dimension space scene
CN106127853A (en) * 2016-06-17 2016-11-16 中国电子科技集团公司第二十八研究所 A kind of unmanned plane Analysis of detectable region method
US20180336704A1 (en) * 2016-02-03 2018-11-22 Sportlogiq Inc. Systems and Methods for Automated Camera Calibration

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"摄影测量与遥感学", 测绘文摘, vol. 2003, no. 04, pages 39 *
邵绪强等: "GPU并行计算加速的实时可视外壳三维重建及其虚实交互", no. 01 *

Similar Documents

Publication Publication Date Title
US11704833B2 (en) Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium
US11954870B2 (en) Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium
US11107277B2 (en) Method and device for constructing 3D scene model
CN111127655B (en) House layout drawing construction method and device, and storage medium
Wei et al. Fisheye video correction
WO2021031781A1 (en) Method and device for calibrating projection image and projection device
CN112689135A (en) Projection correction method, projection correction device, storage medium and electronic equipment
CN108629810B (en) Calibration method and device of binocular camera and terminal
JP6636252B2 (en) Projection system, projector device, imaging device, and program
CN110996083A (en) Trapezoidal correction method and device, electronic equipment and readable storage medium
US20230025058A1 (en) Image rectification method and device, and electronic system
CN112991515B (en) Three-dimensional reconstruction method, device and related equipment
CN110940312A (en) Monocular camera ranging method and system combined with laser equipment
CN104200454A (en) Fisheye image distortion correction method and device
US20200342583A1 (en) Method, apparatus and measurement device for measuring distortion parameters of a display device, and computer-readable medium
JP7076097B2 (en) Image leveling device and its program, and drawing generation system
CN116823639A (en) Image distortion correction method, device, equipment and storage medium
CN111445535A (en) Camera calibration method, device and equipment
KR20100001608A (en) Apparatus and method for correcting lens distortion
CN111432117A (en) Image rectification method, device and electronic system
CN114882194A (en) Method and device for processing room point cloud data, electronic equipment and storage medium
CN113421292A (en) Three-dimensional modeling detail enhancement method and device
CN117274956B (en) Vehicle side view generation method, device, terminal equipment and storage medium
CN117611724B (en) Method, system, equipment and medium for correcting simulation image of vehicle vision sensor
JP6773982B2 (en) Information processing equipment, its control method and program

Legal Events

Code Description
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20200724)