CN114943773A - Camera calibration method, device, equipment and storage medium - Google Patents

Camera calibration method, device, equipment and storage medium

Info

Publication number
CN114943773A
Authority
CN
China
Legal status
Pending
Application number
CN202210358591.4A
Other languages
Chinese (zh)
Inventor
郭金辉
吴博剑
樊鲁斌
周昌
黄建强
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd
Publication of CN114943773A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

The embodiment of the invention provides a camera calibration method, apparatus, device and storage medium. The method comprises the following steps: acquiring a plurality of initial images comprising a plurality of preset images, and determining initial parameters of a camera according to the pixel coordinates of the matched feature point pairs between every two images in the plurality of initial images; acquiring a first image currently acquired by the camera, and determining a target preset image matched with the first image from the plurality of preset images; and determining the current parameters of the camera according to the pixel coordinates of the matched feature point pairs between the first image and the target preset image and the initial parameters of the camera. Real-time automatic calibration of the camera parameters is thus realized by combining the plurality of preset images with the camera's initial parameters. The current parameters of the camera and the interest points on the plurality of preset images can further be used to determine the marking position and rendering mode of each interest point in the first image, realizing a tracking display effect for the interest points in images collected in real time. The camera calibration method can also be applied to three-dimensional measurement scenes in the field of virtual reality.

Description

Camera calibration method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a camera calibration method, apparatus, device, and storage medium.
Background
Cameras, as efficient real-time data sensing devices, are widely used in application fields such as traffic, security and smart cities to monitor targets such as pedestrians and vehicles in a scene. There are many types of cameras; among them, the spherical camera, referred to as a dome camera for short, is low in cost, wide in field of view and flexible in control, and is used in a high proportion of many application scenarios.
When a camera such as a dome camera is used, camera calibration needs to be performed first after the camera is installed, to obtain camera-related parameters such as the camera internal parameters, camera external parameters and distortion coefficients. However, during operation of the camera, its position and focal length may change due to manual operation or natural factors (such as wind, vibration, or positional drift of the mechanisms fixing the camera), so that the previously obtained camera calibration result is no longer suitable for the current shooting angle of the camera. Therefore, how to achieve automatic camera calibration during camera operation is a problem to be solved.
Disclosure of Invention
The embodiment of the invention provides a camera calibration method, a camera calibration device, camera calibration equipment and a storage medium, which are used for realizing automatic calibration of a camera in the operation process.
In a first aspect, an embodiment of the present invention provides a camera calibration method, where the method includes:
acquiring a plurality of initial images for determining initial camera parameters, wherein the plurality of initial images comprise a plurality of preset images acquired under a plurality of preset visual angles and images acquired under a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length;
determining initial parameters of the camera according to pixel coordinates of matched feature point pairs between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to the every two images and a distortion coefficient;
acquiring a first image currently acquired by the camera and a plurality of reference images used for determining current camera parameters, wherein the plurality of reference images comprise a plurality of preset images;
determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images;
determining a first current parameter of the camera according to the pixel coordinates of the matched feature point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of a shooting visual angle of the first image relative to a shooting visual angle of the target reference image and the distortion coefficient.
In a second aspect, an embodiment of the present invention provides a camera calibration apparatus, where the apparatus includes:
the apparatus comprises an image acquisition module, an initial parameter determining module and a parameter determining module, wherein the image acquisition module is used for acquiring a plurality of initial images used for determining initial camera parameters, the plurality of initial images comprise a plurality of preset images acquired under a plurality of preset visual angles and images acquired under a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length;
the initial parameter determining module is used for determining initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to every two images and distortion coefficients;
the parameter determining module is used for acquiring a first image currently acquired by the camera and a plurality of reference images used for determining current camera parameters, wherein the reference images comprise a plurality of preset images; determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images; determining a first current parameter of the camera according to the pixel coordinates of the matched feature point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of a shooting visual angle of the first image relative to a shooting visual angle of the target reference image and the distortion coefficient.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, a communication interface; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to implement at least the camera calibration method as described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory machine-readable storage medium, on which executable code is stored, and when the executable code is executed by a processor of an electronic device, the processor is enabled to implement at least the camera calibration method according to the first aspect.
In a fifth aspect, an embodiment of the present invention provides an image rendering method, including:
acquiring a plurality of initial images for determining initial camera parameters, wherein the plurality of initial images comprise a plurality of preset images acquired under a plurality of preset visual angles and images acquired under a plurality of rotation angles, the plurality of initial images correspond to the same camera focal length, and the plurality of preset images respectively comprise different interest point information;
determining initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to the every two images and a distortion coefficient;
acquiring a first image currently acquired by the camera and a plurality of reference images used for determining current camera parameters, wherein the plurality of reference images comprise a plurality of preset images;
determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images;
determining current parameters of the camera according to pixel coordinates of a feature point pair matched between the first image and the target reference image and initial parameters of the camera, wherein the current parameters comprise current camera internal parameters and rotation matrixes of shooting visual angles of the first image relative to shooting visual angles of the plurality of preset images respectively;
determining the offset of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively according to the initial parameters and the current parameters of the camera;
and determining pixel coordinates respectively corresponding to the interest point information contained in the plurality of preset images in the first image according to the offset, and displaying the interest point information in the first image in a set rendering mode according to the pixel coordinates.
In the scheme provided by the embodiment of the invention, in addition to calibrating the camera at the initial stage of its operation, the camera still needs to be calibrated in real time during the subsequent actual operation process, so as to obtain camera-related parameters suited to the camera's real-time position and focal length. In the initialization stage, preset images acquired by the camera under a plurality of preset visual angles and images acquired under a plurality of rotation angles are acquired, so that the initial parameters of the camera (such as the initial camera internal parameters, and the rotation matrixes and distortion coefficients between the shooting visual angles corresponding to every two images) can be determined according to the pixel coordinates of the matched feature point pairs between every two acquired images. During subsequent use of the camera, the preset images can serve as reference images: based on the number of feature points matched between the first image currently collected by the camera and the preset images, the preset image whose match with the first image meets the set matching condition is determined as the target reference image for completing the current camera calibration. Then, the current parameters of the camera (for example, the current camera internal parameters and the rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image) are determined according to the pixel coordinates of the matched feature point pairs between the first image and the target reference image and the initial parameters of the camera. Determination of the initial camera parameters is thus completed based on a plurality of preset images; subsequently, based on matching between images acquired in real time and the preset images, the matched preset image is taken as the target reference image for determining the current camera parameters, and the current camera parameters are determined by combining the match between the target reference image and the currently captured image with the initial camera parameters, thereby achieving real-time automatic camera calibration.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below illustrate only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a camera calibration method according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for determining a target reference image according to an embodiment of the present invention;
FIG. 3 is a flowchart of another method for determining a target reference image according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a mark of the point of interest information in the preset image according to an embodiment of the present invention;
fig. 5 is a flowchart of another camera calibration method according to an embodiment of the present invention;
fig. 6 is a flowchart of an image rendering method according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an AR application interface according to an embodiment of the present invention;
fig. 8 is an application schematic diagram of a camera calibration method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a camera calibration apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device corresponding to the camera calibration apparatus provided in the embodiment shown in fig. 9.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The camera calibration method provided by the embodiment of the invention can be executed by an electronic device, and the electronic device can be a terminal device such as a PC (personal computer), a notebook computer, a smart phone and the like, and can also be a server. The server may be a physical server including an independent host, or may also be a virtual server, or may also be a cloud server or a server cluster.
Fig. 1 is a flowchart of a camera calibration method according to an embodiment of the present invention, as shown in fig. 1, the method may include the following steps:
101. the method comprises the steps of obtaining a plurality of initial images used for determining initial camera parameters, wherein the plurality of initial images comprise a plurality of preset images collected under a plurality of preset visual angles and images collected under a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length.
102. And determining initial parameters of the camera according to pixel coordinates of matched feature point pairs between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between corresponding shooting visual angles of every two images and a distortion coefficient.
103. The method comprises the steps of acquiring a first image currently acquired by a camera and a plurality of reference images used for determining current camera parameters, wherein the plurality of reference images comprise a plurality of preset images.
104. And determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images.
105. And determining a first current parameter of the camera according to the pixel coordinates of the matched characteristic point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image and a distortion coefficient.
In many practical application scenarios, dome cameras are deployed to provide video monitoring in fields such as traffic, security, emergency response and smart cities. For example, in a road traffic scenario, a large number of dome cameras are usually deployed on the roadside to monitor the road, buildings and other surroundings. As another example, a dome camera deployed on a building's fire passage monitors that passage.
When various types of cameras such as dome cameras are put into practical use, calibration of camera-related parameters, including camera internal parameters, camera external parameters, distortion coefficients and the like, is required.
For convenience of description, in the embodiment of the present invention, the camera calibration requirements are divided along a timeline into an initialization phase and an actual operation phase that follows it.
In practical application, in the initialization stage, it may be considered that initial camera calibration needs to be completed in a short period of time after the camera is installed at a corresponding position according to actual monitoring requirements, that is, initial parameters of the camera are determined.
In the embodiment of the invention, a plurality of preset images are adopted to assist in completing the determination of the initial parameters of the camera, and in addition, in order to ensure the accuracy of the determination result of the initial parameters, additional images acquired by the camera under a plurality of rotation angles are also used to assist in completing the determination of the initial parameters of the camera.
The plurality of preset images are obtained as follows: the user adjusts the lens orientation of the camera to determine a plurality of preset visual angles, and one frame of image is captured at each preset visual angle. In practical applications, the plurality of preset visual angles correspond to viewing the monitored environment from different angles. Generally, the number of preset images is not large, for example 2-5, and the differences between the shooting visual angles of different preset images are generally large, so as to cover the various shooting visual angles that may appear during subsequent operation of the camera.
For the images acquired at multiple rotation angles: in addition to controlling the camera to collect the preset images at the different shooting visual angles, the camera can be controlled to continuously change its rotation angle over a short period, that is, the camera is rotated continuously; the video collected by the camera during this period is recorded, and the collected video is sampled to obtain these images. It should be noted that only "pure rotation" control of the camera is involved here, without changing the camera focal length. In addition, the camera focal length corresponding to the preset images and to the images collected at the various rotation angles is the same; that is, in the initialization stage the camera focal length is kept unchanged, and the initial images are collected only by changing the camera's rotation angle. Keeping the focal length unchanged keeps the camera internal parameters unchanged.
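As a concrete illustration (not part of the patent), the initial image set could be assembled roughly as follows; the file names and the sampling stride are assumptions made for the example:

```python
import cv2

def sample_rotation_frames(video_path: str, stride: int = 15) -> list:
    """Sample every `stride`-th frame from video recorded while the camera
    performs pure rotation at a fixed focal length."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

# Initial image set: the preset-view shots plus the sampled rotation frames.
preset_images = [cv2.imread(p) for p in ("preset_0.jpg", "preset_1.jpg", "preset_2.jpg")]
initial_images = preset_images + sample_rotation_frames("rotation_clip.mp4")
```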
After the plurality of initial images used for determining the initial camera parameters are obtained, feature points are extracted from each initial image; then, based on the extracted feature points, the feature point pairs matched between every two initial images are determined, yielding the pixel coordinates of the feature point pairs matched between every two of the initial images. The input to the feature matching model is the feature points extracted from a pair of images.
In practical applications, a feature extraction model for feature point extraction and a feature matching model for feature point matching can be trained in advance. Both can be implemented with deep neural network models: for example, the feature extraction model may adopt the SuperPoint model and the feature matching model may adopt the SuperGlue model, or other convolutional neural network models, although they are not limited thereto. The feature extraction model and feature matching model in the embodiment of the present invention can be trained on a general-purpose dataset (such as the COCO dataset), are not tied to a specific application scenario, and therefore have good universality.
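For illustration only, the data flow around these two models can be organized as sketched below. SuperPointExtractor and SuperGlueMatcher are hypothetical wrappers — the patent does not specify an implementation, and public SuperPoint/SuperGlue releases expose different APIs — so only the flow (per-image keypoints and descriptors in, matched pixel-coordinate pairs out) reflects the text:

```python
import numpy as np

class SuperPointExtractor:
    """Hypothetical wrapper around a trained SuperPoint network."""
    def extract(self, image: np.ndarray):
        """Return (keypoints [N, 2] pixel coords, descriptors [N, D])."""
        raise NotImplementedError

class SuperGlueMatcher:
    """Hypothetical wrapper around a trained SuperGlue network."""
    def match(self, kpts0, desc0, kpts1, desc1):
        """Return matched pixel coordinates (pts0 [M, 2], pts1 [M, 2])."""
        raise NotImplementedError

def match_pair(extractor, matcher, img_i, img_j):
    # Feature points are extracted per image; both sets are then fed to the
    # matching model, which outputs the matched feature point pairs.
    k0, d0 = extractor.extract(img_i)
    k1, d1 = extractor.extract(img_j)
    return matcher.match(k0, d0, k1, d1)
```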
Feature points are points in a picture that do not change with ambient light or viewing angle; extracting them from a picture is a very basic task in computer vision. In order to obtain feature points with definite semantic information, such as elbows and extremities when estimating human body posture, a deep neural network model can be used to extract the feature points; through training, such a model acquires the ability to understand and learn the semantic information in an input image, which ensures the quality of the extracted feature points.
Similarly, a feature matching model implemented with a deep neural network can, through training, exploit the semantic and contextual information between feature points to complete the task of recognizing matched feature point pairs; it is more robust, though its calculation amount is larger. Taking any two initial images as an example, after the feature points corresponding to the two images are input into the feature matching model, the model outputs the pixel coordinates of the feature point pairs that match. Here, matched feature point pairs are simply feature point pairs in the two initial images that correspond to the same point in physical space, for example the same position on an object captured by the camera.
The extraction and matching processing of the feature points are completed based on the feature matching model and the feature extraction model which are formed by the deep neural network model, and the interference of environmental factors such as weather, illumination, seasons and the like on the extraction and matching processing of the feature points can be resisted by means of the natural advantages of the deep neural network model.
In an optional embodiment, after the matched feature point pairs output by the feature matching model are obtained, mis-matched feature point pairs may further be rejected, for example by using the MAGSAC++ algorithm in combination with a homography transformation model.
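OpenCV exposes MAGSAC++ through its USAC framework, so this rejection step combined with a homography model can be sketched as follows (the 3-pixel reprojection threshold is an assumed value):

```python
import cv2
import numpy as np

def reject_mismatches(pts_i: np.ndarray, pts_j: np.ndarray, thresh: float = 3.0):
    """Fit a homography mapping image-j points to image-i points with
    MAGSAC++ and keep only the inlier feature point pairs."""
    H, mask = cv2.findHomography(pts_j, pts_i, cv2.USAC_MAGSAC, thresh)
    inliers = mask.ravel().astype(bool)
    return H, pts_i[inliers], pts_j[inliers]
```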
And then, determining initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the obtained initial images, wherein the initial parameters to be determined comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to every two images and a distortion coefficient.
The determination of the initial parameters of the camera is described below:
for the sake of description, it is assumed that N initial images are acquired, and the N initial images are represented as an image sequence I i (i ═ 1,2, ·, N), pixel coordinates of a pair of feature points matching between any two images can be obtained by the above-described extraction and matching processing of the feature points, and then homography matrices corresponding to any two images can be obtained based on the pixel coordinates of the pair of feature points matching between the two images according to a homography matrix solving formula. The homography matrix obtained for the above image sequence is represented as: h ij (i, j. 1,2, N; i. noteq. j), wherein H ij Representing a homography matrix between image i and image j.
The initial camera internal reference is represented as a matrix $K$, which contains the parameters to be solved shown in the following formula (1); in addition, its initial values may be assigned in the manner given in formula (1):

$$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad f_x = f_y = 1.2 \cdot \max(\text{rows}, \text{cols}), \quad s = 0, \quad c_x = 0.5 \cdot \text{cols}, \quad c_y = 0.5 \cdot \text{rows} \tag{1}$$

where $f_x$ denotes the focal length of the camera in the x direction, $f_y$ the focal length of the camera in the y direction, $s$ the skew coefficient, $c_x$ the principal point position in the x direction, $c_y$ the principal point position in the y direction, rows the image height, cols the image column width, and $\max(\cdot)$ the maximum-value operation.

The coefficients in formula (1) (e.g., 1.2) are illustrative examples, and other coefficients may be adopted according to specific calculation requirements; the coefficient 0.5 sets the principal point position at the center of the image. In addition, it should be noted that formula (1) only gives the initial values of the initial camera internal reference $K$; the initial parameters of the camera, including $K$, still need to be solved through the loss function defined below, and the final solution is taken as the true camera initial parameters.
Since the camera focal length is not changed during the initialization phase, the initial camera internal parameters corresponding to the plurality of initial images are the same; it is a change in camera focal length that would cause the camera internal parameters to change.
Based on the definition of the initial camera internal reference and the initial value assignment result, the initial value of the rotation matrix between the shooting visual angles corresponding to every two images in the N initial images can be obtained according to the following formula (2):
$$R_{ij} = K^{-1} H_{ij} K \tag{2}$$
initial value R of rotation matrix between shooting visual angles corresponding to two images ij And after an initial value K of the camera internal parameter is initialized, based on pixel coordinates of a feature point pair matched between every two images, a reprojection error can be constructed to be used as a loss function: f ().
Regarding the reprojection error, take any pair of images, image $i$ and image $j$, as an example. Suppose the pixel coordinates of any matched feature point pair present in this image pair are expressed as $Z_k^{ij}$ and $Z_k^{ji}$, where $k \in [1, M]$ and $M$ denotes the number of matched feature point pairs present in the two images. In addition, the radial distortion coefficients of the camera are expressed as $k_1$, $k_2$, and the tangential distortion coefficients as $p_1$, $p_2$. Since an image captured by a camera generally has distortion, the pixel coordinates of the obtained feature point pairs may first be subjected to distortion removal processing.
For convenience of description, the pixel coordinate of any one feature point is represented as $Z$, and the distortion removal process can be implemented by the following formulas (3) and (4):

$$Z' = K^{-1} Z \tag{3}$$

$$\bar{x} = x'\left(1 + k_1 r^2 + k_2 r^4\right) + 2 p_1 x' y' + p_2\left(r^2 + 2 x'^2\right), \qquad \bar{y} = y'\left(1 + k_1 r^2 + k_2 r^4\right) + p_1\left(r^2 + 2 y'^2\right) + 2 p_2 x' y' \tag{4}$$

where $Z$ represents the (homogeneous) pixel coordinate of a feature point, $x'$ and $y'$ represent the x-direction and y-direction coordinates in $Z'$, $r^2 = x'^2 + y'^2$, and $\bar{Z} = (\bar{x}, \bar{y})$ represents the coordinate of the feature point after the radial and tangential distortion are removed.
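A sketch of this distortion removal under the reconstruction above. Strictly speaking, the Brown–Conrady polynomial written here is the forward distortion model; an exact removal inverts it iteratively (this is what cv2.undistortPoints computes), so the direct application below is an approximation for illustration:

```python
import numpy as np

def remove_distortion(Z, K, k1, k2, p1, p2):
    """Normalize a pixel coordinate (u, v) with K^-1 per formula (3), then
    apply the radial/tangential correction of formula (4)."""
    x, y, _ = np.linalg.inv(K) @ np.array([Z[0], Z[1], 1.0])
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_bar = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_bar = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([x_bar, y_bar])
```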
Still taking the above matched feature point pair $Z_k^{ij}$ and $Z_k^{ji}$ as an example, let $\bar{Z}_k^{ij}$ and $\bar{Z}_k^{ji}$ denote their coordinates after the above distortion removal processing, and let $\tilde{Z} = (\bar{x}, \bar{y}, 1)^{\mathsf T}$ denote the corresponding homogeneous form. A loss function may then be generated based on the reprojection error according to the following formula (5):

$$F\left(K, \{R_{ij}\}, k_1, k_2, p_1, p_2\right) = \sum_{i \neq j} \sum_{k=1}^{M} \left\| \pi\!\left(K \tilde{Z}_k^{ij}\right) - \pi\!\left(K R_{ij} \tilde{Z}_k^{ji}\right) \right\|^2 \tag{5}$$

where $\pi(\cdot)$ converts homogeneous coordinates to inhomogeneous pixel coordinates. The squared-norm term on the right side is the reprojection error between the above feature point pair; accordingly, the meaning of formula (5) is: for any pair of images $i$ and $j$, the reprojection errors between the $M$ matched feature point pairs are computed and summed, the reprojection errors of the feature point pairs of all pairwise image combinations among the initial images are aggregated, and the sum of all reprojection errors is taken as the loss function to be solved. As shown in formula (5), the variables to be solved in the loss function include the initial camera internal reference $K$, the rotation matrices $R_{ij}$ between the shooting visual angles corresponding to every two images, and the distortion coefficients $k_1, k_2, p_1, p_2$ (which enter through the distortion removal of formula (4)). In fact, solving the loss function is an optimization problem: starting from the initial values of $K$ and $R_{ij}$ given above, solving with the objective of minimizing the value of the loss function yields the solution of each variable.
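Minimizing formula (5) is an ordinary nonlinear least-squares problem. Below is a minimal sketch (not from the patent) using SciPy, with each rotation parameterized as a Rodrigues vector — an implementation choice the patent does not specify. For brevity it assumes the feature points were already undistorted per formulas (3)-(4), so the distortion coefficients are not re-optimized here; a faithful implementation would move the undistortion inside the residual so they are solved jointly:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, matches):
    """params = [fx, fy, s, cx, cy, rotvec_0, ..., rotvec_{P-1}].
    matches: list of (pair_index, pts_i, pts_j), where pts_* are [M, 3]
    homogeneous coordinates already undistorted per formulas (3)-(4)."""
    fx, fy, s, cx, cy = params[:5]
    K = np.array([[fx, s, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    rotvecs = params[5:].reshape(-1, 3)
    res = []
    for pair_idx, pts_i, pts_j in matches:
        R = Rotation.from_rotvec(rotvecs[pair_idx]).as_matrix()
        proj = (K @ R @ pts_j.T).T   # image-j points reprojected into image i
        obs = (K @ pts_i.T).T
        res.append(proj[:, :2] / proj[:, 2:] - obs[:, :2] / obs[:, 2:])
    return np.concatenate([r.ravel() for r in res])

# x0 stacks the formula-(1) intrinsics and the formula-(2) rotations:
# result = least_squares(residuals, x0, args=(matches,))
```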
The above describes a process of determining initial parameters of a camera according to pixel coordinates of a feature point pair matched between two images in a plurality of initial images.
During operation of a dome camera or similar camera after the initialization stage, the position and focal length of the camera usually change under the influence of external factors (manual control, natural factors, and the like). At that point, applications such as three-dimensional measurement of the captured scene or video AR tracking that rely on the camera initial parameters calibrated in the initialization stage would be adversely affected.
In view of this, the embodiment of the present invention provides a scheme for automatically calibrating a camera by combining initial parameters of the camera and images acquired in real time during the subsequent operation of the camera.
It should be noted that, after the initialization phase, the camera automatic calibration scheme may be configured to execute periodically at set time intervals, for example every 0.5 minute or 1 minute. Alternatively, in the case that the camera is used for video capture and display, the camera automatic calibration scheme may also be executed for each frame of image according to the actual application requirements. For convenience of description, each time at which automatic calibration needs to be performed, determined according to the above rule, is referred to as a current time; assuming the currently acquired image is the first image, the camera parameters currently determined based on the first image are referred to as the first current parameters of the camera. It will be appreciated that if the first current parameters differ from the initial parameters, the camera needs to update its parameters to the first current parameters. In fact, the difference between the first current parameters and the initial parameters is mainly reflected in the camera internal parameters and the rotation matrix; the distortion coefficients of the camera do not change.
To determine the first current parameters of the camera, first, the first image currently acquired by the camera and a plurality of reference images for determining the current camera parameters are acquired. The plurality of reference images may include the plurality of preset images acquired by the camera at the preset visual angles.
In this embodiment, a case where the plurality of reference images only include the plurality of preset images is described first, and in this case, it is necessary to select a preset image that meets a set matching condition from the plurality of preset images as a target reference image that matches the first image based on the number of matching feature points between the first image and each preset image. Wherein the matching condition may be: any one of the preset images in which the number of matched feature point pairs exceeds a set threshold, or the preset image in which the number of matched feature point pairs exceeds the set threshold and the number of matched feature point pairs is the largest.
As described above, first, the first image and each preset image may be respectively extracted with feature points based on the feature extraction model, and then, the plurality of preset images are traversed one by one, and the feature points of the first image and the feature points of one currently traversed preset image are input to the feature matching model to obtain the pixel coordinates of each feature point pair matched between the first image and the preset image. Therefore, according to the number of the feature point pairs respectively matched between the first image and the plurality of preset images, the preset image which meets the matching condition with the first image is determined as the target reference image.
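The selection rule itself is a few lines; in the sketch below, count_matches stands for whichever matching stage is in use, and the threshold of 30 matched pairs is an assumed configuration value, not one given by the patent:

```python
def select_target_reference(first_img, reference_imgs, count_matches, threshold=30):
    """Return the reference image with the most matched feature point pairs,
    provided that count exceeds the set threshold; otherwise None."""
    if not reference_imgs:
        return None
    counts = [count_matches(first_img, ref) for ref in reference_imgs]
    best = max(range(len(counts)), key=counts.__getitem__)
    if counts[best] <= threshold:
        return None   # no preset image meets the matching condition
    return reference_imgs[best]
```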
And then, determining a first current parameter of the camera according to the pixel coordinates of the matched feature point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of a shooting visual angle of the first image relative to a shooting visual angle of the target reference image and a distortion coefficient.
It should be noted that if none of the preset images meets the matching condition, this usually indicates that the camera has deviated severely from the scene it should capture (i.e., it can no longer cover that scene), or that the captured scene has changed substantially so that the previously collected preset images are no longer applicable. In that case, the current camera calibration operation is terminated, and prompt information is output to prompt the user to change the shooting angle of the camera or to collect the preset images again.
The determination process of the first current parameter of the camera is similar to the determination process of the initial parameter of the camera, and the following description is provided for the determination process of the first current parameter.
For convenience of description, the first image is represented as $I_c$, the target reference image as $I_m$, and the current camera internal reference as a matrix $K_c$. First, the current camera internal reference corresponding to the first image may be initialized from the camera internal reference corresponding to the target reference image: $K_c = K_m$, where $K_m$ is the camera internal reference corresponding to the target reference image. Since the target reference image is one of the preset images, and the camera internal reference corresponding to the preset images is the initial camera internal reference $K$ obtained in the initialization stage, the value of $K$ solved through formula (5) can be used as $K_m$ to initialize the value of $K_c$.

Then, based on the pixel coordinates of the matched feature point pairs between the first image and the target reference image, the homography matrix $H_{cm}$ corresponding to the first image and the target reference image is solved. Further, based on the calculation result of the homography matrix and the value of $K_c$, the rotation matrix $R_{cm}$ of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image may be initialized by the following formula (6):

$$R_{cm} = K_c^{-1} H_{cm} K_m \tag{6}$$
After the initial values of $K_c$ and $R_{cm}$ are obtained, the reprojection error of the feature point pairs in the two images can be constructed as a loss function based on the pixel coordinates of the matched feature point pairs between the first image and the target reference image.
Suppose the pixel coordinates of any matched feature point pair present in this image pair are expressed as $Z_k^{cm}$ and $Z_k^{mc}$, where $k \in [1, M]$ and $M$ denotes the number of matched feature point pairs present in the two images (the first image and the target reference image). The pixel coordinates of the obtained feature point pairs may each be subjected to distortion removal processing, implemented with reference to formulas (3) and (4) above, which is not repeated here.

Still taking the matched feature point pair $Z_k^{cm}$ and $Z_k^{mc}$ as an example, the coordinates obtained after the above distortion removal processing are expressed as $\bar{Z}_k^{cm}$ and $\bar{Z}_k^{mc}$, with homogeneous forms $\tilde{Z}_k^{cm}$ and $\tilde{Z}_k^{mc}$ as before. A loss function may be generated based on the reprojection errors of the matched feature point pairs between the first image and the target reference image according to the following formula (7):

$$F\left(K_c, R_{cm}\right) = \sum_{k=1}^{M} \left\| \pi\!\left(K_c \tilde{Z}_k^{cm}\right) - \pi\!\left(K_c R_{cm} \tilde{Z}_k^{mc}\right) \right\|^2 \tag{7}$$
as can be seen from equation (7), the variables to be solved in the loss function include the current camera intrinsic parameter K c A rotation matrix R of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image cm . And a distortion coefficient k 1 ,k 2 ,p 1 ,p 2 Remain unchanged and do not need to be solved again. Based on K given hereinabove c ,R cm The initial value of (a) is solved with the objective of minimizing the value of the loss function, and the above K can be obtained c ,R cm And (4) final solution results of (1).
That is, for the current time, after the first current parameters — including the current camera internal reference $K_c$, the rotation matrix $R_{cm}$ of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image, and the distortion coefficients — are obtained through the above process, the camera parameters are updated to the first current parameters, completing the automatic calibration of the camera parameters at the current moment. The distortion coefficients are taken directly from the calibration result of the initialization stage without repeated calculation.
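The per-frame solve of formula (7) mirrors the initialization solve but with fewer unknowns, the distortion coefficients being held fixed. A sketch under the same assumptions as the earlier one (Rodrigues parameterization; pre-undistorted points); normalizing the homography scale and orthonormalizing the formula-(6) initialization are implementation details added here, not steps stated by the patent:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_current_params(K_m, H_cm, pts_c, pts_m):
    """pts_c / pts_m: [M, 3] undistorted homogeneous coordinates in the
    first image and the target reference image. Returns (K_c, R_cm)."""
    H = H_cm / H_cm[2, 2]                    # homography is defined up to scale
    K_c0 = K_m.copy()                        # initialize K_c = K_m
    R0 = np.linalg.inv(K_c0) @ H @ K_m       # formula (6)
    U, _, Vt = np.linalg.svd(R0)             # project onto a proper rotation
    rv0 = Rotation.from_matrix(U @ Vt).as_rotvec()

    def residuals(p):
        fx, fy, s, cx, cy = p[:5]
        K_c = np.array([[fx, s, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
        R = Rotation.from_rotvec(p[5:]).as_matrix()
        proj = (K_c @ R @ pts_m.T).T
        obs = (K_c @ pts_c.T).T
        return (proj[:, :2] / proj[:, 2:] - obs[:, :2] / obs[:, 2:]).ravel()

    x0 = np.concatenate([[K_c0[0, 0], K_c0[1, 1], K_c0[0, 1],
                          K_c0[0, 2], K_c0[1, 2]], rv0])
    sol = least_squares(residuals, x0)
    fx, fy, s, cx, cy = sol.x[:5]
    K_c = np.array([[fx, s, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    return K_c, Rotation.from_rotvec(sol.x[5:]).as_matrix()
```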
In summary, based on the plurality of preset images and the camera initial parameters obtained from initialization, real-time, automatic calibration of the camera parameters can be completed through matching between currently acquired images and the preset images.
In the above-described embodiment, when determining the target reference image, the feature matching model may be used directly to compute the feature point pairs matched between a pair of input images. Although the feature matching model has advantages in accuracy, it may take a relatively long time. Therefore, comprehensively considering the latency and accuracy requirements of different application scenarios, the embodiment of the present invention provides a "multi-stage matching strategy" for determining the target reference image. This is explained below with reference to the embodiments shown in fig. 2 and 3.
Fig. 2 is a flowchart of a method for determining a target reference image according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
201. and respectively extracting the features of the first image and the plurality of reference images by adopting a set feature extraction model to obtain the feature points corresponding to the first image and the plurality of reference images, wherein the plurality of reference images comprise a plurality of preset images.
202. And matching the feature points corresponding to the first image and any reference image by adopting a preset feature matching algorithm to determine the number of matched feature point pairs between the first image and any reference image.
203. And if the number of the matched feature point pairs between the first image and any reference image obtained by adopting a preset feature matching algorithm is smaller than a set threshold value, inputting the feature points corresponding to the first image and any reference image into a set feature matching model so as to obtain the matched feature point pairs output by the feature matching model.
204. And determining a target reference image, of which the number of the feature point pairs matched with the first image is greater than a set threshold value, from the plurality of reference images.
In summary, in this embodiment, the multi-stage matching strategy is embodied as: and matching the feature points by adopting the preset feature matching algorithm, and if the target reference image cannot be obtained, matching the feature points by adopting a feature matching model. And the execution rate of the preset feature matching algorithm is higher than that of the feature matching model.
In practical applications, the preset feature matching algorithm may be, for example, the K-Nearest Neighbor (KNN) algorithm. The KNN algorithm uses only the information between individual feature point pairs, which are treated as independent. The feature matching model (which matches based on a graph convolutional neural network) exploits the semantic and contextual relationships between feature points and is more robust, but its calculation amount is larger.
Specifically, if the number of feature point pairs matched between the first image and each reference image obtained using the preset feature matching algorithm is smaller than the set threshold, the matching is determined to have failed. Then, the feature points of the first image and of each reference image may be input into the feature matching model to obtain the matched feature point pairs output by the feature matching model. If at least one reference image whose number of feature point pairs matched with the first image is greater than the set threshold is obtained through the feature matching model, one of the at least one reference image is randomly determined as the target reference image, or the one with the largest number of matched feature point pairs is determined as the target reference image. If the target reference image cannot be matched by either of the two matching modes, error prompt information can be returned directly.
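A sketch of the two-stage fallback described above, using OpenCV's brute-force KNN matcher with Lowe's ratio test as the fast stage (the ratio 0.8, the threshold, and the model_match callback are illustrative assumptions; descriptors are assumed to be float32 for NORM_L2):

```python
import cv2
import numpy as np

def knn_match_count(desc0: np.ndarray, desc1: np.ndarray, ratio: float = 0.8) -> int:
    """Fast stage: brute-force KNN matching with Lowe's ratio test."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc0, desc1, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def two_stage_match(first_idx, ref_idxs, descs, model_match, threshold=30):
    # Stage 1: cheap KNN matching against every reference image.
    counts = [knn_match_count(descs[first_idx], descs[r]) for r in ref_idxs]
    if max(counts) > threshold:
        return ref_idxs[int(np.argmax(counts))]
    # Stage 2: fall back to the slower, more robust feature matching model.
    counts = [len(model_match(first_idx, r)) for r in ref_idxs]
    if max(counts) > threshold:
        return ref_idxs[int(np.argmax(counts))]
    return None   # both stages failed; report an error prompt upstream
```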
In the above embodiment, it is actually assumed that the plurality of reference images consist only of the plurality of preset images. However, in another alternative embodiment of the present invention, the plurality of reference images may further include a second image that was successfully matched previously, where the acquisition time of the second image is earlier than that of the first image, and "successfully matched" means that, among the plurality of reference images obtained at that acquisition time, there was a reference image whose number of feature point pairs matched with the second image was greater than the set threshold.
To facilitate understanding of the second image, consider an example. Suppose the initialization phase is completed at time T1, that is, the initial parameters of the camera are calculated at time T1. After time T1, that is, in the actual operation phase of the camera, the second image may be a frame of image acquired at a time T2 after T1, and the first image may be a frame of image acquired at a time T3 after T2. It can be understood that, assuming the second image is the first frame acquired after time T1, when the camera parameters corresponding to the acquisition time of the second image are calculated, a plurality of reference images are also acquired; the plurality of reference images at that point consist of the plurality of preset images, and the calculated camera parameters include, for example, the camera internal reference K2 and the rotation matrix R2 corresponding to time T2, where R2 represents the rotation matrix of the shooting visual angle of the second image relative to the shooting visual angle of its corresponding target reference image. Then, when the first image corresponding to time T3 is acquired, a plurality of reference images corresponding to that acquisition time are generated; this time, the plurality of reference images include, in addition to the plurality of preset images, the previously successfully matched second image. In fact, the number of second images is not limited to one; images successfully matched in the previous several frames may be included, and the number may be set as required.
Based on this, for the first image, in the case that the obtained multiple reference images corresponding to the first image include multiple preset images and the second image successfully matched before, the method for determining the target reference image matched with the first image may be performed with reference to the embodiment shown in fig. 3.
Fig. 3 is a flowchart of another method for determining a target reference image according to an embodiment of the present invention, as shown in fig. 3, the method includes the following steps.
301. And respectively extracting the features of the first image and the plurality of reference images by adopting a set feature extraction model to obtain the feature points corresponding to the first image and the plurality of reference images, wherein the plurality of reference images comprise a plurality of preset images and second images successfully matched before.
302. And aiming at each preset image in the plurality of reference images, respectively matching the first image and the feature points corresponding to each preset image by adopting a preset feature matching algorithm so as to determine the number of the feature point pairs matched between the first image and each preset image.
303. And if the number of the feature point pairs matched between the first image and each preset image obtained by adopting the preset feature matching algorithm is smaller than a set threshold value, matching the feature points corresponding to the first image and the second image by adopting the preset feature matching algorithm so as to determine the number of the feature point pairs matched between the first image and the second image.
304. And if the number of the matched feature point pairs between the first image and the second image obtained by adopting a preset feature matching algorithm is smaller than a set threshold value, inputting the feature points corresponding to the first image and any reference image into a set feature matching model so as to obtain the matched feature point pairs output by the feature matching model.
305. And according to the output result of the feature matching model, determining a target reference image of which the number of the feature point pairs matched with the first image is larger than a set threshold value from the plurality of reference images.
In summary, in this embodiment, the multi-stage matching strategy is embodied as: the method comprises the steps of firstly adopting the preset feature matching algorithm to carry out matching processing on feature points between a first image and each preset image, then adopting the preset feature matching algorithm to carry out matching processing on the feature points between the first image and a second image if a target reference image cannot be obtained, and adopting a feature matching model to carry out matching processing on the feature points between the first image and each reference image (comprising each preset image and the second image) if the target reference image cannot be obtained. And the execution rate of the preset feature matching algorithm is higher than that of the feature matching model.
Since the preset feature matching algorithm executes faster than the feature matching model, if the target reference image is determined in the first two stages, the feature matching model does not need to be run to perform matching of the feature point pairs.
In this embodiment, the previously successfully matched second image may include only the successfully matched image of the frame immediately preceding the first image; it does not need to include all previously matched images. The introduction of the second image actually takes the temporal information of the images into account: the plurality of preset images correspond to images captured by the camera at specific shooting visual angles with large differences between them, so when the camera deviates substantially from those specific shooting visual angles during actual operation, a previously successfully matched image is introduced to avoid obstructing the automatic calibration of the camera. Since the deviation between the shooting visual angle of the current first image and that of the last successfully matched frame is not large, real-time automatic calibration of the camera is ensured.
Real-time automatic calibration of the camera has many practical uses. In the embodiment of the present invention, one application scenario based on the camera automatic calibration result is illustrated below: Augmented Reality (AR) tracking.
In brief, assuming that a camera is used to photograph a certain area on a road, the plurality of preset images may be images photographed of that area from different shooting visual angles. Within the area there are multiple points of interest, such as a university, a residential district, a road, traffic lights, and the like. Suppose the interest point information is to be marked on the video picture acquired by the camera photographing the area, so as to form an AR effect in which the interest point information is displayed superimposed on the actually acquired video picture. In practical applications, after the plurality of preset images are determined in the initialization stage, the user may mark the interest point information in each preset image. It should be noted that, in the embodiment of the present invention, it is assumed that the interest points marked on the preset images are not identical; in the simplest case, a given piece of interest point information is marked in only one preset image, specifically the preset image in which the interest point is best viewed. The marked interest point information may include position mark information and functional attribute mark information, where the position mark information marks the position of an interest point in the corresponding preset image by drawing a point, a circle, a polygon, and the like, and the functional attribute mark information may be structured data such as the name of the interest point and a functional description. As shown in fig. 4, in a certain preset image of the camera, the location of each interest point is marked with a black dot, and the corresponding name, such as a certain university, a certain street, street lamp 1 (orientation: north), street lamp 2 (orientation: south), is displayed in association.
In summary, in practical applications, optionally, after determining each preset image, the user may mark the point of interest information on each preset image, and the point of interest information marked on different preset images is different, so that the corresponding relationship between the preset images and the point of interest information may be stored for later use.
In the above AR tracking application scenario, briefly, the AR tracking effect can be understood as follows: during the subsequent actual operation of the camera, each acquired frame of image should automatically be marked with each piece of interest point information; that is, all the interest point information marked on the plurality of preset images needs to be marked on each frame of image, so that the user can clearly see each piece of interest point information on every frame of image.
Based on this, to achieve the marking purpose in this application scenario, in the camera automatic calibration stage, in addition to the above rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image (such as a certain preset image, or a second image successfully matched in a previous frame) and the current camera internal reference, the rotation matrixes of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images need to be further determined for the currently acquired first image; in short, the relationship between the camera pose at the shooting visual angle of the first image and the camera pose at the shooting visual angle of each preset image is obtained. This determination process is explained in conjunction with fig. 5.
Fig. 5 is a flowchart of a camera calibration method according to an embodiment of the present invention, as shown in fig. 5, the method may include the following steps:
501. Acquire a plurality of initial images used for determining initial camera parameters, where the plurality of initial images include a plurality of preset images collected at a plurality of preset visual angles and images collected at a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length.
502. According to pixel coordinates of matched feature point pairs between every two images in the plurality of initial images, initial parameters of the camera are determined, wherein the initial parameters comprise initial camera internal parameters, rotation matrixes between corresponding shooting visual angles of every two images and distortion coefficients.
503. Acquire a first image currently collected by the camera and a plurality of reference images used for determining current camera parameters, where the plurality of reference images include the plurality of preset images.
Optionally, as described above, a second image that is successfully matched before may be included in the plurality of reference images.
504. Determine a target reference image matched with the first image from the plurality of reference images according to the number of feature point pairs respectively matched between the first image and the plurality of reference images.
505. Determine a first current parameter of the camera according to the pixel coordinates of the matched feature point pairs between the first image and the target reference image and the initial parameters of the camera, where the first current parameter includes the current camera internal parameters, the rotation matrix of the first image's shooting visual angle relative to the target reference image's shooting visual angle, and the distortion coefficients.
506. Determine the rotation matrices of the target reference image's shooting visual angle relative to the shooting visual angles of the plurality of preset images; determine the rotation matrices of the first image's shooting visual angle relative to the shooting visual angles of the plurality of preset images, according to the rotation matrix of the first image's shooting visual angle relative to the target reference image's shooting visual angle and the rotation matrices of the target reference image's shooting visual angle relative to the shooting visual angles of the plurality of preset images; and determine a second current parameter of the camera, where the second current parameter includes the current camera internal parameters, the rotation matrices of the first image's shooting visual angle relative to the shooting visual angles of the plurality of preset images, and the distortion coefficients.
In this embodiment, the execution of steps 501-504 can refer to the related descriptions in the other embodiments, which are not repeated herein.
As described above, when the plurality of reference images include the plurality of preset images and the second image, the matched target reference image corresponding to the first image may be one of the plurality of preset images or the second image.
The following description is made for these two cases, respectively:
In the first case: the target reference image is a preset image.
If the target reference image I_m is one of the preset images (referred to as the target preset image for convenience of description), the determination process of the first current parameter of the camera can be implemented with reference to the foregoing embodiments and is not repeated here. Continuing the earlier example, the first current parameter obtained at this point includes the current camera internal parameters K_c and the rotation matrix R_cm of the first image's shooting visual angle relative to the target preset image's shooting visual angle.
Now denote any one of the preset images as I_p. First, the rotation matrix of the target preset image I_m's shooting visual angle relative to the preset image I_p's shooting visual angle, denoted R_mp, needs to be determined. This rotation matrix was in fact already determined at the initialization stage, during the determination of the initial parameters of the camera: R_mp is one of the matrices R_ij described above. Therefore, with R_cm and R_mp known, the rotation matrix R_cp of the first image I_c's shooting visual angle relative to the preset image I_p's shooting visual angle can be obtained by the following formula (8):
R_cp = R_cm R_mp (8)
Through the above calculation, based on the rotation matrices of the target preset image's shooting visual angle relative to the shooting visual angles of the plurality of preset images obtained at the initialization stage, and the computed rotation matrix of the first image's shooting visual angle relative to the target preset image's shooting visual angle, the rotation matrices of the first image's shooting visual angle relative to the shooting visual angles of all the preset images can be determined, and thereby the second current parameter of the camera, which includes the current camera internal parameters and these rotation matrices. It will be appreciated that the rotation matrix of the target preset image relative to itself is simply the identity matrix.
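Before turning to the second case, the chaining in formula (8) can be sketched in a few lines of Python. This is a minimal illustration only; the function and variable names below are assumptions for exposition, not identifiers from the patent.

import numpy as np

def chain_rotations(R_cm, R_mp_by_preset):
    # Compose R_cm (first image relative to target preset image m) with each
    # stored R_mp (target preset image m relative to preset image p, computed
    # at the initialization stage) to get R_cp = R_cm @ R_mp per formula (8).
    return {p: R_cm @ R_mp for p, R_mp in R_mp_by_preset.items()}

# The target preset image's rotation relative to itself is the identity,
# so passing R_mm = np.eye(3) for p == m simply reproduces R_cm.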
In the second case: target reference image I m Is the second image.
As described above, assuming the acquisition time of the second image is T2, the following parameters corresponding to the second image have in fact already been calculated at time T2 through the process described above: K_m and R_mp. These results are saved for later use. When the first image I_c is acquired, if the plurality of reference images obtained at that time include the second image and the target reference image is finally determined to be the second image, the stored parameter values are simply retrieved and used.
In conclusion, through the above process, the pose of the camera when shooting the first image relative to the preset images, that is, the rotation matrix of the shooting view angle of the first image relative to the shooting view angles of the preset images can be obtained.
After the second current parameter is obtained, in order to achieve the AR tracking display effect in the AR tracking application scenario, it is necessary to determine, based on the second current parameter, the pixel coordinates on the currently acquired first image that correspond to the interest point information marked on the plurality of preset images.
Specifically, the process of marking the point of interest information in the first image may include the following steps:
S1. Acquire the interest point information contained in each of the plurality of preset images.
That is, obtain for each piece of interest point information its pixel coordinate, attribute name, and similar information in the corresponding preset image.
S2. Determine the offsets of the first image's shooting visual angle relative to the shooting visual angles of the plurality of preset images, according to the rotation matrices of the first image's shooting visual angle relative to those shooting visual angles, the camera focal length contained in the current camera internal parameters, and the camera focal length contained in the initial camera internal parameters.
Alternatively, the determination of the offset may be implemented as: for any preset image, determining the modulus length of a rotation vector according to a rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the any preset image; determining the variation of the focal length of the camera according to the focal length of the camera contained in the current camera internal parameters and the focal length of the camera contained in the initial camera internal parameters; and determining the offset of the shooting visual angle of the first image relative to the shooting visual angle of any one preset image according to the rotation vector modular length and the camera focal length variation.
For example, let any preset image be I_p and the first image be I_c. The current camera internal parameters K_c in the second current parameter of the camera contain the camera focal lengths f_xc and f_yc, and the initial camera internal parameters in the initial parameters of the camera contain the camera focal lengths f_xp and f_yp (in fact, the initial internal parameters K contain the focal lengths f_x and f_y). Convert the rotation matrix R_cp of the first image I_c's shooting visual angle relative to the preset image I_p's shooting visual angle into a rotation vector, denoted r_cp. The offset of the first image I_c's shooting visual angle relative to the preset image I_p's shooting visual angle can then be determined according to the following formula (9):
offset_cp = n_1 × |f_xc − f_xp + f_yc − f_yp| + n_2 × ‖r_cp‖ (9)
where n_1 and n_2 are preset scale factors, |·| denotes the absolute value, and ‖·‖ denotes the modulus (norm) of the rotation vector.
The offset between the shooting angles of the first image and each preset image can be obtained through the formula.
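As a concrete illustration, formula (9) can be computed with numpy and OpenCV as sketched below. The function name and the default weights are illustrative assumptions, not values prescribed by the patent; cv2.Rodrigues performs the rotation-matrix-to-rotation-vector conversion.

import cv2
import numpy as np

def view_offset(R_cp, K_c, K_p, n1=1.0, n2=1.0):
    # Formula (9): weighted sum of the focal-length change and the modulus
    # of the rotation vector r_cp. K_c and K_p are 3x3 intrinsic matrices
    # with f_x at [0, 0] and f_y at [1, 1]; n1 and n2 are the scale factors.
    r_cp, _ = cv2.Rodrigues(R_cp)  # rotation matrix -> rotation vector
    focal_term = abs((K_c[0, 0] - K_p[0, 0]) + (K_c[1, 1] - K_p[1, 1]))
    return n1 * focal_term + n2 * float(np.linalg.norm(r_cp))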
S3. If the offset of the first image's shooting visual angle relative to the first preset image's shooting visual angle is smaller than a preset threshold, directly locate the first pixel coordinate corresponding to the first interest point information in the first image, so as to display the first interest point information in the first image; here the first interest point information is contained in the first preset image, and the first pixel coordinate is the pixel coordinate corresponding to the first interest point information in the first preset image.
Assuming that the offset of the shooting angle of view of the first image relative to the shooting angle of view of the first preset image is smaller than the preset threshold, and assuming that the first preset image is marked with the first interest point information, at this time, the first interest point information is directly positioned and marked on the first image according to the pixel coordinate corresponding to the first interest point information in the first preset image.
S4. If the offset of the first image's shooting visual angle relative to the second preset image's shooting visual angle is larger than the preset threshold, locate a second pixel coordinate corresponding to the second interest point information in the first image, so as to display the second interest point information in the first image; here the second interest point information is contained in the second preset image, and the second pixel coordinate is obtained by performing a coordinate transformation on the pixel coordinate of the second interest point information in the second preset image, according to the rotation matrix of the first image's shooting visual angle relative to the second preset image's shooting visual angle, the current camera internal parameters, and the initial camera internal parameters. The first preset image and the second preset image are any two of the plurality of preset images.
Here, assuming that the offset of the shooting angle of view of the first image relative to the shooting angle of view of the second preset image is greater than the preset threshold, and assuming that the second preset image is marked with the second interest point information, the corresponding pixel coordinate of the second interest point information in the second preset image needs to be transformed to obtain the corresponding pixel coordinate in the first image, and the coordinate transformation can be completed according to the following formula (10):
Z_c = K_c R_cp K_p^(-1) Z_p (10)
where K_p represents the camera internal parameters corresponding to the preset image I_p (in fact the initial camera internal parameters described above), Z_p represents the homogeneous pixel coordinate of the second interest point information in the second preset image, and Z_c represents the corresponding homogeneous pixel coordinate of the second interest point information in the first image (defined up to scale).
In fact, when the camera parameters include the distortion coefficients described above, a de-distortion calculation may further be performed on the pixel coordinate Z_c using those coefficients to obtain the final pixel coordinate. That pixel coordinate is then located in the first image, and the functional attribute information corresponding to the second interest point information is displayed in the first image in association with it, thereby completing the automatic marking of the second interest point information in the first image.
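The transformation of formula (10), together with the optional de-distortion step, can be sketched in Python as below. This is a minimal sketch under the reconstruction of formula (10) given above; the function name, argument layout, and the use of cv2.undistortPoints for de-distortion are assumptions rather than the patent's prescribed implementation.

import cv2
import numpy as np

def project_poi(z_p, R_cp, K_c, K_p, dist_coeffs=None):
    # Map a marked pixel z_p = (u, v) from preset image p into the first
    # image via the pure-rotation homography K_c @ R_cp @ inv(K_p) of
    # formula (10), working in homogeneous coordinates up to scale.
    Z_p = np.array([z_p[0], z_p[1], 1.0])
    Z_c = K_c @ R_cp @ np.linalg.inv(K_p) @ Z_p
    u, v = Z_c[0] / Z_c[2], Z_c[1] / Z_c[2]
    if dist_coeffs is not None:  # optional de-distortion step
        pts = np.array([[[u, v]]], dtype=np.float32)
        u, v = cv2.undistortPoints(pts, K_c, dist_coeffs, P=K_c)[0, 0]
    return float(u), float(v)

When the offset for a preset image falls below the preset threshold, this projection is simply skipped and the marked pixel coordinate is reused as-is, exactly as step S3 describes.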
Taking the AR application scenario illustrated in the above embodiment as an example, an AR application program may be provided to execute the camera calibration scheme provided by the embodiment of the present invention. The AR application program may offer a related configuration interface, so that the user can perform configuration operations such as changing the camera's shooting angle, setting the preset images, and setting the interest point marks.
Fig. 6 is a flowchart of an image rendering method according to an embodiment of the present invention. As shown in fig. 6, the method includes the following steps:
601. Acquire a plurality of initial images used for determining initial camera parameters, where the plurality of initial images include a plurality of preset images collected at a plurality of preset visual angles and images collected at a plurality of rotation angles, the plurality of initial images correspond to the same camera focal length, and the plurality of preset images each contain different interest point information.
602. According to pixel coordinates of matched feature point pairs between every two images in the plurality of initial images, initial parameters of the camera are determined, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between corresponding shooting visual angles of the every two images and a distortion coefficient.
603. Acquire a first image currently collected by the camera and a plurality of reference images used for determining current camera parameters, where the plurality of reference images include the plurality of preset images.
Optionally, the plurality of reference images may further include a second image that is successfully matched before, the acquisition time of the second image is earlier than that of the first image, and the successful matching refers to that, in the plurality of reference images obtained at the acquisition time of the second image, there are reference images whose number of pairs of feature points matched with the second image is greater than a set threshold.
604. Determine a target reference image matched with the first image from the plurality of reference images according to the number of feature point pairs matched between the first image and the plurality of reference images, and determine current parameters of the camera according to the pixel coordinates of the feature point pairs matched between the first image and the target reference image and the initial parameters of the camera, where the current parameters include the current camera internal parameters and the rotation matrices of the first image's shooting visual angle relative to the shooting visual angles of the plurality of preset images.
605. According to the initial parameters and the current parameters of the camera, determining the offset of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively, determining the pixel coordinates corresponding to the interest point information contained in the plurality of preset images in the first image respectively according to the offset, and displaying the interest point information in the first image in a set rendering mode according to the pixel coordinates.
In this embodiment, with respect to the process of determining the initial parameter, the current parameter (including the first current parameter and the second current parameter described above), determining the offset, and determining the corresponding pixel coordinate of the interest point information in the first image according to the offset, reference may be made to the relevant description in the foregoing embodiment, which is not repeated herein.
After the pixel coordinates of the interest point information in the first image are obtained, the interest point information may be rendered and displayed in the first image in a certain set rendering mode, chosen so as to highlight the interest point information in the first image. Such rendering modes include, for example, increasing the image resolution around the interest point information, or changing its illumination effect, such as switching to a ray-tracing effect.
In practical application, a rendering mode selection list can be provided for a user in advance, wherein the rendering mode selection list comprises a plurality of selectable rendering modes, and the user can select a certain rendering mode adapted to the actual shooting scene of the camera to highlight the rendering of the interest point information.
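For instance, a simple overlay in the spirit of fig. 4 can be drawn with OpenCV as sketched below. This is an illustrative example only; the function name, colors, and font settings are assumptions, not a rendering mode mandated by the text.

import cv2

def render_pois(frame, pois):
    # Draw each interest point on the current frame: a black dot at its
    # pixel coordinate with the associated name displayed next to it.
    # `pois` is an iterable of (u, v, name) tuples.
    for u, v, name in pois:
        cv2.circle(frame, (int(u), int(v)), 5, (0, 0, 0), -1)
        cv2.putText(frame, name, (int(u) + 8, int(v) - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1, cv2.LINE_AA)
    return frame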
Optionally, the camera calibration method provided by the embodiment of the invention can also be applied to three-dimensional measurement application scenes in the fields of virtual reality and the like.
For example, suppose the camera above is a depth camera capable of acquiring depth information, deployed to capture images of a certain spatial environment and either reconstruct that space in three dimensions from the captured images or measure quantities such as the distance between specific interest points in the environment and the camera, or the area they cover. To complete such spatial measurement and three-dimensional reconstruction tasks, accurate calibration of the camera parameters is the foundation. The camera calibration method introduced above can determine the relevant parameters of the depth camera in real time, and, based on the scheme introduced above, the positions in the depth camera's current picture corresponding to the interest points set in the spatial environment can also be obtained; then, based on the currently calibrated camera parameters and the position information of the interest points in the current picture, three-dimensional measurements such as distance and coverage area for those interest points can be completed.
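To make the measurement step concrete, the sketch below shows the standard pinhole back-projection relating a pixel, its measured depth, and a 3D camera-frame point. This relation is textbook geometry rather than a formula stated in the patent, and the function name is illustrative.

import numpy as np

def backproject(u, v, depth, K):
    # Pinhole back-projection: X = depth * inv(K) @ [u, v, 1] gives the 3D
    # camera-frame point for pixel (u, v) with a measured depth value.
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# The distance between two interest points then follows directly:
# d = np.linalg.norm(backproject(u1, v1, z1, K) - backproject(u2, v2, z2, K))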
Fig. 7 is a schematic diagram of an AR application interface according to an embodiment of the present invention, and as shown in fig. 7, the AR application interface may include an information bar 701, a camera selection list 702, a video display area 703, a function panel 704, and the like.
In this embodiment, the information bar 701 may be used for user login, logout, and help functions. The camera selection list 702 may list the cameras on line in the form of a device tree, so that after the user selects the camera identifier to be viewed, a video frame taken by the camera is displayed in the video display area 703.
The video display area 703 is used for displaying a video image captured by the selected camera in real time, and displaying the above-described interest point information in the video image in an overlapping manner. In the video display area 703, front-end interaction (e.g., addition, deletion, selection, modification, etc.) of user and point of interest information is supported.
The function panel 704 may include, for example, functional items such as a pan/tilt control function 705 and a parameter configuration function 706. The pan/tilt control function 705 may further include a camera control function 7051 and a preset location management function 7052.
When the camera control function 7051 is triggered, various camera control operation items may be displayed, such as controls for different shooting directions (up, down, left, and right as illustrated), as well as zoom-in and zoom-out controls, a return-to-preset-position control, and the like, for controlling the pose of the camera.
The preset position management function 7052 is used to manage (add, delete, query, modify, etc.) the camera's multiple preset shooting positions (i.e., the multiple preset shooting visual angles described above), the preset images, and the interest point information.
A parameter configuration function 706, configured to perform parameter configuration on related algorithms (such as a K-nearest neighbor algorithm and various thresholds) used in the camera calibration process, and may also be used to store the determined camera parameters.
As described above, the camera calibration method provided by the present invention can be executed in the cloud, where a plurality of computing nodes may be deployed, each with processing resources such as computation and storage. In the cloud, multiple computing nodes may be organized to provide a service, and of course one computing node may also provide one or more services. The cloud may provide a service by exposing a service interface, which users call to use the corresponding service. The service interface may take the form of a Software Development Kit (SDK), an Application Programming Interface (API), or other forms.
According to the scheme provided by the embodiment of the invention, the cloud end can be provided with a service interface of the camera calibration service, and a user calls the camera calibration service interface through user equipment so as to trigger a request for calling the camera calibration service interface to the cloud end. The cloud determines the compute nodes that respond to the request, and performs the following steps using processing resources in the compute nodes: acquiring a plurality of initial images for determining initial camera parameters, wherein the plurality of initial images comprise a plurality of preset images acquired under a plurality of preset visual angles and images acquired under a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length; determining initial parameters of the camera according to pixel coordinates of matched feature point pairs between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between corresponding shooting visual angles of every two images and a distortion coefficient; acquiring a first image currently acquired by a camera and a plurality of reference images for determining current camera parameters, wherein the plurality of reference images comprise a plurality of preset images; determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images; determining a first current parameter of the camera according to pixel coordinates of a matched feature point pair between the first image and the target reference image and an initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of a shooting visual angle of the first image relative to a shooting visual angle of the target reference image and a distortion coefficient; and sending the first current parameter of the camera to the camera so that the camera completes the updating of the camera parameter according to the first current parameter.
For a detailed process of the camera calibration processing executed by the camera calibration service interface using the processing resource, reference may be made to the related description in the foregoing other embodiments, which is not repeated herein.
For ease of understanding, the description is exemplified in conjunction with fig. 8. In fig. 8, a user may input configuration and control information of the camera E2 in the user device E1, the user device E1 sends the configuration and control information to the cloud by invoking the service interface, for example, the computing node E3 illustrated in the figure performs processing, and the computing node E3 may perform corresponding configuration and control operations on the camera E2 based on the configuration information and the control information. These configuration and control information are, for example: in the initialization stage, the camera E2 is controlled to be at different preset shooting angles respectively to acquire a plurality of preset images, for example, the sampling rate of the camera E2 on video frames in the subsequent actual operation process is configured, so that the camera uploads the acquired images to the cloud based on the sampling rate, and further, for example, the camera is configured to upload the acquired images to a certain service address of the cloud. Based on configuration and control operations of the user on the user device E1, the camera E2 may send a call request to the cloud computing node E3 after acquiring a plurality of initial images, where the call request includes the plurality of initial images. After receiving the call request, the cloud computing node E3 calculates and stores the initial parameters of the camera, and may also send the initial parameters to the user equipment E1. Then, in the subsequent use process of the camera, each time the camera acquires a frame of image, the acquired image can be automatically uploaded to the cloud through the service interface, the cloud computing node E3 completes the determination of the corresponding camera parameters, the determination result is fed back to the user equipment E1 for storage, and the picture marked with the interest point in the above embodiment can be fed back to the user equipment E1. The detailed implementation process refers to the description in the foregoing embodiments, and is not repeated herein.
The camera calibration apparatus according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these means can each be constructed using commercially available hardware components and by performing the steps taught in this disclosure.
Fig. 9 is a schematic structural diagram of a camera calibration apparatus according to an embodiment of the present invention. As shown in fig. 9, the apparatus includes: an image acquisition module 11, an initial parameter determining module 12, and a parameter determining module 13.
The image acquiring module 11 is configured to acquire a plurality of initial images for determining initial camera parameters, where the plurality of initial images include a plurality of preset images acquired at a plurality of preset viewing angles and images acquired at a plurality of rotation angles, and the plurality of initial images correspond to a same camera focal length.
The initial parameter determining module 12 is configured to determine initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the multiple initial images, where the initial parameters include initial camera internal parameters, a rotation matrix between shooting angles corresponding to every two images, and a distortion coefficient.
A parameter determining module 13, configured to acquire a first image currently acquired by the camera and a plurality of reference images used for determining current camera parameters, where the plurality of reference images include a plurality of preset images; determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images; and determining a first current parameter of the camera according to the pixel coordinates of the matched characteristic point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image and a distortion coefficient.
Optionally, the parameter determining module 13 is further configured to: determine rotation matrices of the target reference image's shooting visual angle relative to the shooting visual angles of the plurality of preset images respectively; determine rotation matrices of the first image's shooting visual angle relative to the shooting visual angles of the plurality of preset images, according to the rotation matrix of the first image's shooting visual angle relative to the target reference image's shooting visual angle and the rotation matrices of the target reference image's shooting visual angle relative to the shooting visual angles of the plurality of preset images; and determine second current parameters of the camera, where the second current parameters include the current camera internal parameters, the rotation matrices of the first image's shooting visual angle relative to the shooting visual angles of the plurality of preset images, and the distortion coefficients.
Optionally, the parameter determining module 13 is further configured to: acquiring interest point information contained in a plurality of preset images respectively; determining the offset of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively according to the rotation matrix of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images, the camera focal length contained in the current camera internal reference and the camera focal length contained in the initial camera internal reference; if the offset of the shooting visual angle of the first image relative to the shooting visual angle of the first preset image is smaller than a preset threshold value, directly positioning a first pixel coordinate corresponding to first interest point information in the first image so as to display the first interest point information in the first image, wherein the first interest point information is contained in the first preset image, and the first pixel coordinate is the pixel coordinate corresponding to the first interest point information in the first preset image; if the offset of the shooting visual angle of the first image relative to the shooting visual angle of the second preset image is larger than a preset threshold value, positioning a second pixel coordinate corresponding to the second interest point information in the first image so as to display the second interest point information in the first image; the second interest point information is contained in a second preset image, and according to the rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the second preset image, the current camera internal reference and the initial camera internal reference, the corresponding pixel coordinates of the second interest point information in the second preset image are subjected to coordinate transformation to obtain second pixel coordinates; the first preset image and the second preset image are any two of the plurality of preset images.
Optionally, the parameter determining module 13 is specifically configured to: for any preset image, determining the modulus length of the rotation vector according to the rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of any preset image; determining the variation of the focal length of the camera according to the focal length of the camera contained in the current camera internal parameters and the focal length of the camera contained in the initial camera internal parameters; and determining the offset of the shooting visual angle of the first image relative to the shooting visual angle of any preset image according to the rotation vector mode length and the camera focal length variation.
Optionally, the parameter determining module 13 is specifically configured to: respectively extracting the features of the first image and the plurality of reference images by adopting a set feature extraction model to obtain feature points corresponding to the first image and the plurality of reference images; inputting the characteristic points corresponding to the first image and any reference image into a set characteristic matching model aiming at any reference image so as to obtain matched characteristic point pairs output by the characteristic matching model; and determining a target reference image with the number of feature point pairs matched with the first image being larger than a set threshold value from the plurality of reference images, wherein the feature extraction model and the feature matching model are deep neural network models.
Optionally, the parameter determining module 13 is specifically configured to: matching the feature points corresponding to the first image and any reference image by adopting a preset feature matching algorithm to determine the number of matched feature point pairs between the first image and any reference image; if the number of the matched feature point pairs between the first image and any reference image obtained by adopting a preset feature matching algorithm is smaller than a set threshold value, inputting the feature points corresponding to the first image and any reference image into a set feature matching model to obtain the matched feature point pairs output by the feature matching model; and the execution rate of the preset feature matching algorithm is higher than that of the feature matching model.
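The two-stage strategy just described, running a fast classical matcher first and falling back to the slower learned matcher only when too few pairs survive, can be sketched as follows. The function name, the 0.75 ratio test, and the deep_matcher callable are illustrative assumptions standing in for the patent's preset feature matching algorithm and feature matching model.

import cv2

def match_with_fallback(desc_a, desc_b, deep_matcher, threshold=30):
    # Stage 1: fast brute-force matching with a ratio test.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc_a, desc_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) >= threshold:
        return good
    # Stage 2: fall back to the (slower) deep feature-matching model.
    return deep_matcher(desc_a, desc_b)  # hypothetical callable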
Optionally, the multiple reference images include a second image that is successfully matched before, the acquisition time of the second image is earlier than that of the first image, and successful matching means that there are reference images in the multiple reference images obtained at the acquisition time, where the number of pairs of feature points matched with the second image is greater than a set threshold.
Optionally, the parameter determining module 13 is specifically configured to: respectively extracting the features of the first image and the plurality of reference images by adopting a set feature extraction model to obtain feature points corresponding to the first image and the plurality of reference images; aiming at each preset image in a plurality of reference images, respectively matching the first image and the feature points corresponding to each preset image by adopting a preset feature matching algorithm so as to determine the number of the feature point pairs matched between the first image and each preset image; if the number of the feature point pairs matched between the first image and each preset image obtained by adopting a preset feature matching algorithm is smaller than a set threshold value, matching the feature points corresponding to the first image and the second image by adopting the preset feature matching algorithm so as to determine the number of the feature point pairs matched between the first image and the second image; if the number of the matched feature point pairs between the first image and the second image obtained by adopting a preset feature matching algorithm is smaller than a set threshold value, inputting the feature points corresponding to the first image and any reference image into a set feature matching model to obtain the matched feature point pairs output by the feature matching model; according to the output result of the feature matching model, determining a target reference image of which the number of feature point pairs matched with the first image is greater than a set threshold value from a plurality of reference images; the feature extraction model and the feature matching model are both deep neural network models, and the execution rate of the preset feature matching algorithm is higher than that of the feature matching model.
The apparatus shown in fig. 9 can perform the steps described in the foregoing embodiments, and the detailed performing process and technical effects refer to the descriptions in the foregoing embodiments, which are not described herein again.
In one possible design, the structure of the camera calibration apparatus shown in fig. 9 may be implemented as an electronic device, as shown in fig. 10, which may include: a processor 21, a memory 22, and a communication interface 23. Wherein the memory 22 has stored thereon executable code, which when executed by the processor 21, causes the processor 21 to implement at least the camera calibration method as provided in the previous embodiments.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to implement at least the camera calibration method as provided in the foregoing embodiments.
The above-described apparatus embodiments are merely illustrative, wherein the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented with the addition of a necessary general hardware platform, or of course by a combination of hardware and software. Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (14)

1. A camera calibration method is characterized by comprising the following steps:
acquiring a plurality of initial images for determining initial camera parameters, wherein the plurality of initial images comprise a plurality of preset images acquired under a plurality of preset visual angles and images acquired under a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length;
determining initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to the every two images and a distortion coefficient;
acquiring a first image currently acquired by the camera and a plurality of reference images used for determining current camera parameters, wherein the plurality of reference images comprise a plurality of preset images;
determining a target reference image matched with the first image from the plurality of reference images according to the number of matched feature point pairs between the first image and the plurality of reference images respectively;
determining a first current parameter of the camera according to the pixel coordinates of the matched feature point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of a shooting visual angle of the first image relative to a shooting visual angle of the target reference image and the distortion coefficient.
2. The method of claim 1, further comprising:
determining rotation matrixes of the shooting visual angles of the target reference images relative to the shooting visual angles of the plurality of preset images respectively;
determining rotation matrixes of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively, according to the rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the target reference image and the rotation matrixes of the shooting visual angle of the target reference image relative to the shooting visual angles of the plurality of preset images respectively;
and determining second current parameters of the camera, wherein the second current parameters comprise the current camera internal parameters, rotation matrixes of the shooting visual angles of the first images relative to the shooting visual angles of the plurality of preset images respectively and the distortion coefficients.
3. The method of claim 2, further comprising:
acquiring interest point information contained in the plurality of preset images respectively;
determining the offset of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively according to the rotation matrix of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively, the camera focal length contained in the current camera internal reference and the camera focal length contained in the initial camera internal reference;
if the offset of the shooting visual angle of the first image relative to the shooting visual angle of a first preset image is smaller than a preset threshold, directly positioning a first pixel coordinate corresponding to first interest point information in the first image so as to display the first interest point information in the first image, wherein the first interest point information is contained in the first preset image, and the first pixel coordinate is the pixel coordinate corresponding to the first interest point information in the first preset image;
if the offset of the shooting visual angle of the first image relative to the shooting visual angle of a second preset image is larger than the preset threshold, positioning a second pixel coordinate corresponding to second interest point information in the first image so as to display the second interest point information in the first image; the second interest point information is contained in the second preset image, and according to a rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the second preset image, the current camera internal reference and the initial camera internal reference, coordinate transformation is carried out on the corresponding pixel coordinate of the second interest point information in the second preset image so as to obtain a second pixel coordinate;
the first preset image and the second preset image are any two of the plurality of preset images.
4. The method of claim 3, wherein the determining the offset of the capturing view angle of the first image from the capturing view angles of the plurality of preset images according to the rotation matrix of the capturing view angle of the first image from the capturing view angles of the plurality of preset images, the camera focal length included in the current camera internal reference, and the camera focal length included in the initial camera internal reference comprises:
for any preset image, determining the modulus length of a rotation vector according to a rotation matrix of the shooting visual angle of the first image relative to the shooting visual angle of the any preset image;
determining the variation of the camera focal length according to the camera focal length contained in the current camera internal parameter and the camera focal length contained in the initial camera internal parameter;
and determining the offset of the shooting visual angle of the first image relative to the shooting visual angle of any preset image according to the rotation vector mode length and the camera focal length variation.
5. The method according to claim 1, wherein the determining a target reference image matching the first image from the plurality of reference images according to the number of pairs of feature points matching the first image with the plurality of reference images respectively comprises:
respectively extracting the features of the first image and the plurality of reference images by adopting a set feature extraction model to obtain feature points corresponding to the first image and the plurality of reference images;
inputting the feature points corresponding to the first image and any reference image into a set feature matching model aiming at any reference image so as to obtain matched feature point pairs output by the feature matching model;
and determining, from the plurality of reference images, a target reference image whose number of feature point pairs matched with the first image is greater than a set threshold value, wherein the feature extraction model and the feature matching model are deep neural network models.
6. The method according to claim 5, wherein before inputting, for any reference image, the feature points corresponding to the first image and any reference image into the set feature matching model to obtain the matched feature point pairs output by the feature matching model, the method further comprises:
matching the feature points corresponding to the first image and any reference image by adopting a preset feature matching algorithm to determine the number of matched feature point pairs between the first image and any reference image;
if the number of the feature point pairs matched between the first image and any reference image obtained by adopting the preset feature matching algorithm is smaller than the set threshold value, inputting the feature points corresponding to the first image and any reference image into a set feature matching model to obtain the matched feature point pairs output by the feature matching model;
wherein the execution rate of the preset feature matching algorithm is higher than that of the feature matching model.
7. The method according to claim 1, wherein the plurality of reference images include a second image that is successfully matched before, the second image is acquired earlier than the first image at the acquisition time, and the successfully matching is that, of the plurality of reference images obtained at the acquisition time, there are reference images whose number of pairs of feature points matched with the second image is greater than a set threshold.
8. The method according to claim 7, wherein the determining a target reference image matching the first image from the plurality of reference images according to the number of pairs of feature points matching the first image with the plurality of reference images respectively comprises:
respectively extracting the features of the first image and the plurality of reference images by adopting a set feature extraction model to obtain feature points corresponding to the first image and the plurality of reference images;
aiming at each preset image in the plurality of reference images, respectively matching the first image and the feature points corresponding to each preset image by adopting a preset feature matching algorithm so as to determine the number of the feature point pairs matched between the first image and each preset image;
if the number of the feature point pairs matched between the first image and each preset image obtained by adopting the preset feature matching algorithm is smaller than the set threshold, matching the feature points corresponding to the first image and the second image by adopting the preset feature matching algorithm to determine the number of the feature point pairs matched between the first image and the second image;
if the number of the feature point pairs matched between the first image and the second image obtained by adopting the preset feature matching algorithm is smaller than the set threshold value, inputting the feature points corresponding to the first image and any reference image into a set feature matching model to obtain the matched feature point pairs output by the feature matching model;
according to the output result of the feature matching model, determining a target reference image of which the number of feature point pairs matched with the first image is greater than a set threshold value from the plurality of reference images;
the feature extraction model and the feature matching model are both deep neural network models, and the execution rate of the preset feature matching algorithm is higher than that of the feature matching model.
9. A camera calibration device is characterized by comprising:
the system comprises an image acquisition module, a parameter calculation module and a parameter calculation module, wherein the image acquisition module is used for acquiring a plurality of initial images used for determining initial camera parameters, the plurality of initial images comprise a plurality of preset images acquired under a plurality of preset visual angles and images acquired under a plurality of rotation angles, and the plurality of initial images correspond to the same camera focal length;
the initial parameter determining module is used for determining initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to every two images and distortion coefficients;
a parameter determining module, configured to acquire a first image currently acquired by the camera and a plurality of reference images for determining current camera parameters, where the plurality of reference images include the plurality of preset images; determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images; determining a first current parameter of the camera according to the pixel coordinates of the matched feature point pair between the first image and the target reference image and the initial parameter of the camera, wherein the first current parameter comprises current camera internal parameters, a rotation matrix of a shooting visual angle of the first image relative to a shooting visual angle of the target reference image and the distortion coefficient.
10. The apparatus according to claim 9, wherein the plurality of reference images include a second image that has been successfully matched before, the second image is acquired earlier than the first image at an acquisition time, and the successfully matching is that, of the plurality of reference images obtained at the acquisition time, there are reference images whose number of pairs of feature points matched with the second image is greater than a set threshold.
11. An electronic device, comprising: a memory, a processor, a communication interface; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform a camera calibration method as claimed in any one of claims 1 to 8.
12. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform a camera calibration method as claimed in any one of claims 1 to 8.
13. An image rendering method, comprising:
acquiring a plurality of initial images for determining initial camera parameters, wherein the plurality of initial images comprise a plurality of preset images acquired at a plurality of preset viewing angles and images acquired at a plurality of rotation angles, the plurality of initial images correspond to the same camera focal length, and the plurality of preset images respectively comprise different interest point information;
determining initial parameters of the camera according to pixel coordinates of feature point pairs matched between every two images in the plurality of initial images, wherein the initial parameters comprise initial camera internal parameters, a rotation matrix between shooting visual angles corresponding to the every two images and a distortion coefficient;
acquiring a first image currently acquired by the camera and a plurality of reference images used for determining current camera parameters, wherein the plurality of reference images comprise a plurality of preset images;
determining a target reference image matched with the first image from the plurality of reference images according to the number of the feature point pairs respectively matched between the first image and the plurality of reference images;
determining current parameters of the camera according to pixel coordinates of a feature point pair matched between the first image and the target reference image and initial parameters of the camera, wherein the current parameters comprise current camera internal parameters and rotation matrixes of shooting visual angles of the first image relative to shooting visual angles of the plurality of preset images respectively;
determining the offset of the shooting visual angle of the first image relative to the shooting visual angles of the plurality of preset images respectively according to the initial parameters and the current parameters of the camera;
and determining pixel coordinates respectively corresponding to the interest point information contained in the plurality of preset images in the first image according to the offset, and displaying the interest point information in the first image in a set rendering mode according to the pixel coordinates.
14. The method according to claim 13, wherein the plurality of reference images include a second image that is successfully matched before, and an acquisition time of the second image is earlier than that of the first image, and the successfully matching is that, of the plurality of reference images obtained at the acquisition time, there are reference images whose number of pairs of feature points matched with the second image is greater than a set threshold.
CN202210358591.4A 2022-04-06 2022-04-06 Camera calibration method, device, equipment and storage medium Pending CN114943773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210358591.4A CN114943773A (en) 2022-04-06 2022-04-06 Camera calibration method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210358591.4A CN114943773A (en) 2022-04-06 2022-04-06 Camera calibration method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114943773A true CN114943773A (en) 2022-08-26

Family

ID=82906407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210358591.4A Pending CN114943773A (en) 2022-04-06 2022-04-06 Camera calibration method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114943773A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311353A (en) * 2022-08-29 2022-11-08 上海鱼微阿科技有限公司 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system
CN115311353B (en) * 2022-08-29 2023-10-10 玩出梦想(上海)科技有限公司 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system
CN115937002A (en) * 2022-09-09 2023-04-07 北京字跳网络技术有限公司 Method, apparatus, electronic device and storage medium for estimating video rotation
CN115937002B (en) * 2022-09-09 2023-10-20 北京字跳网络技术有限公司 Method, apparatus, electronic device and storage medium for estimating video rotation
CN116170576A (en) * 2022-12-22 2023-05-26 国家电投集团贵州金元威宁能源股份有限公司 Multi-element perception data fault diagnosis method and system
CN116170576B (en) * 2022-12-22 2024-04-02 国家电投集团贵州金元威宁能源股份有限公司 Multi-element perception data fault diagnosis method
CN116958271A (en) * 2023-06-06 2023-10-27 阿里巴巴(中国)有限公司 Calibration parameter determining method and device

Similar Documents

Publication Title
CN110568447B (en) Visual positioning method, device and computer readable medium
CN110866480B (en) Object tracking method and device, storage medium and electronic device
CN114943773A (en) Camera calibration method, device, equipment and storage medium
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
CN111382613B (en) Image processing method, device, equipment and medium
CN110799921A (en) Shooting method and device and unmanned aerial vehicle
CN110533694B (en) Image processing method, device, terminal and storage medium
CN109520500A (en) One kind is based on the matched accurate positioning of terminal shooting image and streetscape library acquisition method
CN111737518A (en) Image display method and device based on three-dimensional scene model and electronic equipment
WO2019037038A1 (en) Image processing method and device, and server
CN110675426B (en) Human body tracking method, device, equipment and storage medium
CN112207821A (en) Target searching method of visual robot and robot
CN109902675B (en) Object pose acquisition method and scene reconstruction method and device
JP7407428B2 (en) Three-dimensional model generation method and three-dimensional model generation device
WO2023284358A1 (en) Camera calibration method and apparatus, electronic device, and storage medium
CN114170290A (en) Image processing method and related equipment
CN113391644B (en) Unmanned aerial vehicle shooting distance semi-automatic optimization method based on image information entropy
US11166005B2 (en) Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN112750164B (en) Lightweight positioning model construction method, positioning method and electronic equipment
CN115278064A (en) Panoramic image generation method and device, terminal equipment and storage medium
CN114359425A (en) Method and device for generating ortho image, and method and device for generating ortho exponential graph
KR102146839B1 (en) System and method for building real-time virtual reality
RU2779245C1 (en) Method and system for automated virtual scene construction based on three-dimensional panoramas
CN113379853B (en) Method, device and equipment for acquiring camera internal parameters and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination