CN114445506A - Camera calibration processing method, device, equipment and storage medium
- Publication number
- CN114445506A CN202111679547.5A CN202111679547A
- Authority
- CN
- China
- Prior art keywords
- camera
- coordinate
- abnormal
- optical
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The invention relates to the field of motion capture and discloses a camera calibration processing method, apparatus, device and storage medium for improving the accuracy of camera calibration. The method comprises the following steps: acquiring first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of a motion capture space through a plurality of optical motion capture cameras; respectively calculating a projection matrix of each optical motion capture camera according to the first space coordinates and the first image coordinates; calculating first projection coordinates of the first space coordinates in a preset image coordinate system of each optical motion capture camera according to that camera's projection matrix; respectively matching the first projection coordinates corresponding to each optical motion capture camera with the first image coordinates, and determining a normal camera and an abnormal camera according to the matching result; acquiring second space coordinates through the normal camera and second image coordinates through the abnormal camera; and recalibrating the abnormal camera according to the second space coordinates and the second image coordinates.
Description
Technical Field
The present invention relates to the field of motion capture technologies, and in particular, to a camera calibration processing method, apparatus, device, and storage medium.
Background
In image measurement and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in the image, a geometric model of camera imaging must be established. The parameters of this geometric model are the camera parameters; in most cases they must be obtained through experiment and calculation, and the process of solving for them is called camera calibration. Camera calibration is a critical step in optical motion capture: the precision of the calibration result and the stability of the calibration algorithm directly affect the accuracy of the results produced by the system.
Existing optical motion capture systems generally calibrate the cameras with a calibration wand to which several reflective balls are fixed. A user waves the wand through the tracking field so that the trajectories of the reflective balls are dispersed across the field as much as possible, while multiple cameras capture images of the moving balls from different angles; the intrinsic and extrinsic parameters of the camera model can then be computed from these images. However, due to factors such as temperature variation and vibration at the camera mounting positions, the calibration parameters of some cameras may drift far from the actual camera parameters, making the final calibration results of those cameras inaccurate.
Disclosure of Invention
The main object of the invention is to provide a camera calibration processing method, apparatus, device and storage medium that improve the accuracy of camera calibration.
The invention provides a camera calibration processing method in a first aspect, which comprises the following steps:
acquiring first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of a motion capture space through a plurality of optical motion capture cameras, wherein the first space coordinates are three-dimensional coordinates of the optical tracking mark points in a preset world coordinate system, and the first image coordinates are two-dimensional coordinates of the optical tracking mark points in a preset image coordinate system of each optical motion capture camera;
respectively calculating a projection matrix of each optical motion capture camera according to the first space coordinates and the first image coordinates, wherein the projection matrix represents the mapping relation between three-dimensional coordinates in the preset world coordinate system and two-dimensional coordinates in the preset image coordinate system;
performing coordinate transformation on the first space coordinates according to the projection matrix of each optical motion capture camera to obtain first projection coordinates of the first space coordinates in the preset image coordinate system of each optical motion capture camera;
respectively matching the first projection coordinates corresponding to each optical motion capture camera with the first image coordinates, and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the matching result, wherein the normal camera is a camera that does not need to be recalibrated and the abnormal camera is a camera to be recalibrated;
acquiring second space coordinates of the optical tracking mark points through the normal camera, and acquiring second image coordinates of the optical tracking mark points through the abnormal camera;
and recalibrating the abnormal camera according to the second space coordinates and the second image coordinates.
Optionally, in a first implementation manner of the first aspect of the present invention, the step of calculating a projection matrix of each optical motion capture camera according to the first spatial coordinate and the first image coordinate, where the projection matrix is used to represent a mapping relationship between three-dimensional coordinates in the preset world coordinate system and two-dimensional coordinates in the preset image coordinate system, includes:
acquiring internal parameters of each optical dynamic capture camera;
and substituting the first space coordinate, the first image coordinate and the internal reference into a camera attitude estimation algorithm (EPNP) for calculation to respectively obtain a projection matrix of each optical motion capture camera, wherein the projection matrix is used for representing the mapping relation between the three-dimensional coordinate in the preset world coordinate system and the two-dimensional coordinate in the preset image coordinate system.
Optionally, in a second implementation manner of the first aspect of the present invention, the step of respectively matching the first projection coordinates corresponding to each optical motion capture camera with the first image coordinates, and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to a matching result, where the normal camera is a camera that does not need to be recalibrated, and the abnormal camera is a camera to be recalibrated includes:
for each optical dynamic capture camera, matching the first projection coordinate and the first image coordinate corresponding to each optical dynamic capture camera pairwise according to a matching rule that the first projection coordinate and the first image coordinate are matched if the distance between the first projection coordinate and the first image coordinate is minimum;
respectively calculating distance values between the first projection coordinates and the first image coordinates which are matched pairwise to obtain a distance value set corresponding to each optical dynamic capture camera;
and determining a normal camera and an abnormal camera from the plurality of optical dynamic capture cameras according to the distance value set, wherein the normal camera is a camera which does not need to be calibrated again, and the abnormal camera is a camera to be calibrated again.
Optionally, in a third implementation manner of the first aspect of the present invention, the step of determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the set of distance values, where the normal camera is a camera that does not need to be recalibrated and the abnormal camera is a camera to be recalibrated, includes:
judging whether the distance values in the distance value set are all smaller than or equal to a preset threshold value;
if the distance values in the distance value set are all smaller than or equal to a preset threshold value, determining the optical dynamic capture camera corresponding to the distance value set as a normal camera, wherein the normal camera is a camera which does not need to be calibrated again;
if the distance value greater than the preset threshold value exists in the distance value set, determining the optical dynamic capture camera corresponding to the distance value set as an abnormal camera, wherein the abnormal camera is a camera to be calibrated again.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the step of recalibrating the abnormal camera according to the second space coordinate and the second image coordinate includes:
according to the projection matrix of the abnormal camera, performing coordinate transformation on the second space coordinate to obtain a second projection coordinate of the second space coordinate in a preset image coordinate system of the abnormal camera;
acquiring pose conversion parameters in a projection matrix of the abnormal camera, and constructing a distance error function between the second projection coordinate and the second image coordinate according to the pose conversion parameters;
adjusting the pose conversion parameters by adopting a gradient descent algorithm until the distance error function obtains a minimum value, and acquiring target pose conversion parameters corresponding to the minimum value;
and recalibrating the abnormal camera according to the target pose conversion parameters.
Optionally, in a fifth implementation manner of the first aspect of the present invention, the pose conversion parameter includes a rotation parameter and a translation parameter, where the rotation parameter is used to represent a rotation state of the abnormal camera with respect to the preset world coordinate system, and the translation parameter is used to represent a shift state of the abnormal camera with respect to the preset world coordinate system.
Optionally, in a sixth implementation manner of the first aspect of the present invention, the recalibrating the abnormal camera according to the target pose conversion parameter includes:
and respectively calculating the position and the posture of the abnormal camera in the preset world coordinate system according to the rotation parameter and the translation parameter, and taking the calculated position and the calculated posture as new calibration parameters of the abnormal camera.
A second aspect of the present invention provides a camera calibration processing apparatus, including:
the first acquisition module is used for acquiring a first space coordinate and a first image coordinate of optical tracking mark points arranged at different positions in a dynamic capture space through a plurality of optical dynamic capture cameras, wherein the first space coordinate is a three-dimensional coordinate of the optical tracking mark points in a preset world coordinate system, and the first image coordinate is a two-dimensional coordinate of the optical tracking mark points in the preset image coordinate system of each optical dynamic capture camera;
the calculation module is used for respectively calculating a projection matrix of each optical dynamic capture camera according to the first space coordinate and the first image coordinate, wherein the projection matrix is used for expressing a mapping relation between a three-dimensional coordinate in the preset world coordinate system and a two-dimensional coordinate in the preset image coordinate system;
the coordinate transformation module is used for carrying out coordinate transformation on the first space coordinate according to the projection matrix of each optical dynamic capturing camera to obtain a first projection coordinate of the first space coordinate in a preset image coordinate system of each optical dynamic capturing camera;
the determining module is used for respectively matching the first projection coordinate corresponding to each optical dynamic capturing camera with the first image coordinate, and determining a normal camera and an abnormal camera from the plurality of optical dynamic capturing cameras according to a matching result, wherein the normal camera is a camera which does not need to be re-calibrated, and the abnormal camera is a camera to be re-calibrated;
the second acquisition module is used for acquiring a second space coordinate of the optical tracking mark point through the normal camera and acquiring a second image coordinate of the optical tracking mark point through the abnormal camera;
and the calibration module is used for recalibrating the abnormal camera according to the second space coordinate and the second image coordinate.
Optionally, in a first implementation manner of the second aspect of the present invention, the calculation module is further configured to:
acquiring internal parameters of each optical dynamic capture camera;
and substituting the first space coordinate, the first image coordinate and the internal reference into a camera attitude estimation algorithm (EPNP) for calculation to respectively obtain a projection matrix of each optical motion capture camera, wherein the projection matrix is used for representing the mapping relation between the three-dimensional coordinate in the preset world coordinate system and the two-dimensional coordinate in the preset image coordinate system.
Optionally, in a second implementation manner of the second aspect of the present invention, the determining module is further configured to:
for each optical dynamic capture camera, matching the first projection coordinate and the first image coordinate corresponding to each optical dynamic capture camera pairwise according to a matching rule that the first projection coordinate and the first image coordinate are matched if the distance between the first projection coordinate and the first image coordinate is minimum;
respectively calculating distance values between the first projection coordinates and the first image coordinates which are matched pairwise to obtain a distance value set corresponding to each optical dynamic capture camera;
and determining a normal camera and an abnormal camera from the plurality of optical dynamic capture cameras according to the distance value set, wherein the normal camera is a camera which does not need to be calibrated again, and the abnormal camera is a camera to be calibrated again.
Optionally, in a third implementation manner of the second aspect of the present invention, the determining module is further configured to:
judging whether the distance values in the distance value set are all smaller than or equal to a preset threshold value;
if the distance values in the distance value set are all smaller than or equal to a preset threshold value, determining the optical dynamic capture camera corresponding to the distance value set as a normal camera, wherein the normal camera is a camera which does not need to be calibrated again;
if the distance value greater than the preset threshold value exists in the distance value set, determining the optical dynamic capture camera corresponding to the distance value set as an abnormal camera, wherein the abnormal camera is a camera to be calibrated again.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the calibration module includes:
the coordinate transformation unit is used for carrying out coordinate transformation on the second space coordinate according to the projection matrix of the abnormal camera to obtain a second projection coordinate of the second space coordinate in a preset image coordinate system of the abnormal camera;
the construction unit is used for acquiring pose conversion parameters in a projection matrix of the abnormal camera and constructing a distance error function between the second projection coordinate and the second image coordinate according to the pose conversion parameters;
the acquisition unit is used for adjusting the pose conversion parameters by adopting a gradient descent algorithm until the distance error function obtains a minimum value, and acquiring target pose conversion parameters corresponding to the minimum value;
and the calibration unit is used for recalibrating the abnormal camera according to the target pose conversion parameter.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the pose conversion parameter includes a rotation parameter and a translation parameter, where the rotation parameter is used to represent a rotation state of the abnormal camera with respect to the preset world coordinate system, and the translation parameter is used to represent a shift state of the abnormal camera with respect to the preset world coordinate system.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the calibration unit is further configured to:
and respectively calculating the position and the posture of the abnormal camera in the preset world coordinate system according to the rotation parameter and the translation parameter, and taking the calculated position and the calculated posture as new calibration parameters of the abnormal camera.
A third aspect of the present invention provides a camera calibration processing apparatus, including: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line; the at least one processor invokes the instructions in the memory to cause the camera calibration processing device to perform the camera calibration processing method described above.
A fourth aspect of the present invention provides a storage medium having stored therein instructions that, when run on a computer, cause the computer to execute the camera calibration processing method described above.
The method comprises the steps that a plurality of optical dynamic capture cameras are used for collecting first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of a dynamic capture space; respectively calculating a projection matrix of each optical dynamic capture camera according to the first space coordinate and the first image coordinate; according to the projection matrix of each optical dynamic capturing camera, carrying out coordinate transformation on the first space coordinate to obtain a first projection coordinate of the first space coordinate in a preset image coordinate system of each optical dynamic capturing camera; respectively matching the first projection coordinate corresponding to each optical dynamic capture camera with the first image coordinate, and determining a normal camera and an abnormal camera from the plurality of optical dynamic capture cameras according to the matching result; acquiring a second space coordinate of the optical tracking mark point through the normal camera, and acquiring a second image coordinate of the optical tracking mark point through the abnormal camera; and recalibrating the abnormal camera according to the second space coordinate and the second image coordinate. By the method, the optical dynamic capture camera with abnormal calibration is identified and recalibrated, so that the calibration accuracy of the camera is improved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a camera calibration processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another embodiment of a camera calibration processing method according to the present invention;
FIG. 3 is a block diagram of a camera calibration processing apparatus according to an embodiment of the present invention;
FIG. 4 is a block diagram of another embodiment of the camera calibration processing apparatus of the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of the camera calibration processing apparatus of the present invention.
Detailed Description
The embodiments of the invention provide a camera calibration processing method, apparatus, device and storage medium that identify and recalibrate optical motion capture cameras whose calibration has become abnormal, thereby improving the accuracy of camera calibration.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For convenience of understanding, a specific flow of an embodiment of the camera calibration processing method of the present invention is described below.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of the camera calibration processing method of the present invention. The method includes the following steps.
It is to be understood that the execution subject of the present invention may be a camera calibration processing apparatus, and may also be a terminal or a server, which is not limited here. The embodiments of the present invention are described by taking a server as the execution subject.
For ease of understanding, several coordinate systems commonly used in existing camera calibration processes are briefly introduced below:
World coordinate system: since a camera can be placed at any position, a reference coordinate system is selected in three-dimensional space to describe the position of the camera and of any other object in the scene; this reference coordinate system is called the world coordinate system, also known as the absolute coordinate system of the optical motion capture system.
Camera coordinate system: a coordinate system established on the camera, defined to describe object positions from the camera's point of view; it serves as the intermediate link between the world coordinate system and the image coordinate system.
Image coordinate system: a coordinate system established on the two-dimensional image captured by the camera, used to describe the positions of pixels in that image.
101, acquiring first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of a motion capture space through a plurality of optical motion capture cameras;
In this embodiment, both the world coordinate system and the image coordinate system are set in advance. The server collects the first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of the motion capture space through the plurality of optical motion capture cameras. The optical tracking mark points may be reflective mark points or fluorescent mark points. The first space coordinates are the three-dimensional coordinates of the optical tracking mark points in the world coordinate system; specifically, the server can process images captured by different cameras at the same moment according to the principles of multi-view geometry to obtain the three-dimensional coordinates of each optical tracking mark point appearing in those images. The first image coordinates are the two-dimensional coordinates of the optical tracking mark points in the image coordinate systems of the different optical motion capture cameras, and can be read directly from the pictures captured by the corresponding cameras.
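The patent leaves the multi-view reconstruction of the first space coordinates to the prior art. As a rough illustration only, the sketch below assumes two already-calibrated views with known 3x4 projection matrices (the names P1, P2 and the helper triangulate_marker are hypothetical) and uses OpenCV's triangulation routine:

```python
import numpy as np
import cv2

def triangulate_marker(P1, P2, uv1, uv2):
    """Recover a marker's 3-D world coordinate from two views.

    P1, P2 : 3x4 projection matrices of two cameras (current calibration).
    uv1, uv2 : 2-D image coordinates (pixels) of the same marker in each view.
    """
    pts1 = np.asarray(uv1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()                 # -> (X, Y, Z)
```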
102, respectively calculating a projection matrix of each optical motion capture camera according to the first space coordinates and the first image coordinates, wherein the projection matrix represents the mapping relation between three-dimensional coordinates in a preset world coordinate system and two-dimensional coordinates in a preset image coordinate system;
specifically, this step 102 may include: acquiring internal parameters of each optical dynamic capture camera; and substituting the first space coordinate, the first image coordinate and the internal reference into a camera attitude estimation algorithm (EPNP) for calculation to respectively obtain a projection matrix of each optical motion capture camera, wherein the projection matrix is used for expressing a mapping relation between a three-dimensional coordinate in a preset world coordinate system and a two-dimensional coordinate in a preset image coordinate system.
Taking a camera A as an example, a server firstly acquires internal parameters of the camera A, wherein the internal parameters are internal parameters, the internal parameters are only related to the optical characteristics and the mechanical characteristics of the camera and are inherent attributes of the camera, and the internal parameters comprise a focal length, a principal point, a scaling factor, a distortion factor and the like; then, the server substitutes the first spatial coordinates, the two-dimensional coordinates of the optical tracking mark point in the image coordinate system of the camera a, and the internal reference of the camera a into a camera pose estimation algorithm (EPNP) for calculation, so as to obtain a projection matrix of the camera a, where the projection matrix is used for representing a mapping relationship between the three-dimensional coordinates in the world coordinate system and the two-dimensional coordinates in the image coordinate system. The specific way of calculating the projection matrix according to the camera pose estimation algorithm EPNP may refer to the prior art, which is not described herein. In this way, the projection matrix of each optical motion capture camera can be calculated in the same way.
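As a concrete illustration of this step, the following sketch (an assumption, not the patent's own code) runs OpenCV's EPnP solver on the first space and first image coordinates of one camera and assembles the projection matrix P = K[R|t] from the intrinsic matrix K and the recovered pose; lens distortion is ignored for simplicity, and the function name is hypothetical:

```python
import numpy as np
import cv2

def estimate_projection_matrix(world_pts, image_pts, K):
    """Estimate P = K [R|t] for one camera with the EPnP algorithm.

    world_pts : (N, 3) first space coordinates of the markers (world frame).
    image_pts : (N, 2) first image coordinates of the same markers (pixels).
    K         : (3, 3) intrinsic matrix of the camera.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(world_pts, dtype=np.float64),
        np.asarray(image_pts, dtype=np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                  # rotation vector -> 3x3 matrix
    Rt = np.hstack((R, tvec.reshape(3, 1)))     # 3x4 extrinsic block [R|t]
    return K @ Rt                               # 3x4 projection matrix
```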
103, performing coordinate transformation on the first space coordinates according to the projection matrix of each optical motion capture camera to obtain first projection coordinates of the first space coordinates in the preset image coordinate system of each optical motion capture camera;
In this step, the server transforms the first space coordinates using the projection matrix computed for each optical motion capture camera, obtaining the first projection coordinates of the first space coordinates in the image coordinate system of that camera. This realizes the forward projection from three-dimensional coordinates in the world coordinate system to two-dimensional coordinates in the image coordinate system.
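A minimal sketch of this forward projection, under the same assumptions as above (a 3x4 projection matrix, homogeneous coordinates, no distortion; the helper name is hypothetical):

```python
import numpy as np

def project_points(P, world_pts):
    """Project 3-D world points into a camera's image plane.

    P         : (3, 4) projection matrix of the camera.
    world_pts : (N, 3) three-dimensional coordinates in the world frame.
    Returns (N, 2) projection coordinates in the image coordinate system.
    """
    pts = np.asarray(world_pts, dtype=float)
    homog = np.hstack((pts, np.ones((pts.shape[0], 1))))  # (N, 4) homogeneous
    uvw = homog @ P.T                                      # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                        # divide by w
```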
104, respectively matching the first projection coordinates corresponding to each optical motion capture camera with the first image coordinates, and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the matching result, wherein the normal camera is a camera that does not need to be recalibrated and the abnormal camera is a camera to be recalibrated;
Taking camera A as an example, the server transforms the first space coordinates with the projection matrix of camera A to obtain the first projection coordinates in the image coordinate system of camera A. The server then matches the first projection coordinates corresponding to camera A against the first image coordinates and decides from the matching result whether camera A is a normal camera or an abnormal camera; if it is abnormal, camera A needs to be recalibrated.
Specifically, this step 104 may include: for each optical motion capture camera, matching the first projection coordinates and the first image coordinates corresponding to that camera in pairs, following the rule that a projection coordinate and an image coordinate are matched when the distance between them is minimal; respectively calculating the distance between each matched pair to obtain a set of distance values for each optical motion capture camera; and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the sets of distance values, where the normal camera is a camera that does not need to be recalibrated and the abnormal camera is a camera to be recalibrated.
In the ideal case of zero calibration error, the distance between a first projection coordinate and the first image coordinate it matches should be zero. Whether the current calibration of a camera carries a large error can therefore be judged from the distances between the matched projection and image coordinates, and a camera with a large calibration error needs to be recalibrated.
Taking camera A as an example, to establish the matching relationship between the first projection coordinates and the first image coordinates of camera A, the server may compute the distance between every first projection coordinate and every first image coordinate; for a given first projection coordinate, if its distance to a certain first image coordinate is the smallest, the two coordinates correspond to the same optical tracking mark point, and the server records them as a matched pair. In this way all first projection coordinates and first image coordinates of camera A can be matched in pairs. The server then calculates the distance value of each matched pair to obtain the set of distance values for camera A, and determines from this set whether camera A is a normal or an abnormal camera.
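The following sketch shows one possible implementation of the pairwise matching and of the resulting set of distance values for a single camera; the use of SciPy's cKDTree for the nearest-neighbour search is an assumption, as are the function and parameter names:

```python
import numpy as np
from scipy.spatial import cKDTree

def reprojection_distances(proj_pts, image_pts):
    """Match projected and detected 2-D points pairwise and return distances.

    proj_pts  : (N, 2) first projection coordinates of one camera.
    image_pts : (M, 2) first image coordinates detected by the same camera.
    Each projected point is paired with its nearest detected point; the
    returned array is that camera's set of distance values (pixels).
    """
    tree = cKDTree(np.asarray(image_pts, dtype=float))
    dists, _ = tree.query(np.asarray(proj_pts, dtype=float), k=1)
    return dists
```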
Further, the step of determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the set of distance values may include: judging whether all the distance values in the set are less than or equal to a preset threshold; if they are, determining the optical motion capture camera corresponding to that set as a normal camera, i.e. a camera that does not need to be recalibrated; if the set contains a distance value greater than the preset threshold, determining the corresponding optical motion capture camera as an abnormal camera, i.e. a camera to be recalibrated.
Taking camera A as an example, the server first checks whether all distance values in the set corresponding to camera A are less than or equal to the preset threshold. If they are, the measurement error caused by the current calibration parameters of camera A is considered to be within a controllable range, and camera A is determined to be a normal camera that does not need to be recalibrated; otherwise, if the set contains a distance value greater than the preset threshold, camera A is determined to be an abnormal camera that needs to be recalibrated.
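A minimal sketch of this check; the threshold value shown is purely illustrative and would in practice be the system's preset threshold:

```python
def is_camera_normal(distances, threshold=2.0):
    """Return True if every reprojection distance is within the threshold.

    distances : iterable of distance values (pixels) for one camera.
    threshold : preset threshold; a single larger value marks the camera
                as abnormal, i.e. in need of recalibration.
    """
    return all(d <= threshold for d in distances)
```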
In this way, the plurality of optical motion capture cameras can be divided into normal cameras and abnormal cameras. It should be noted that the server may additionally check whether the number of abnormal cameras exceeds a preset number; if it does, the calibration errors of a large proportion of the cameras are large, and the field needs to be scanned again to recalibrate all of the cameras (both normal and abnormal).
105, acquiring second space coordinates of the optical tracking mark points through the normal cameras, and acquiring second image coordinates of the optical tracking mark points through the abnormal camera;
After the plurality of optical motion capture cameras have been divided into normal cameras and abnormal cameras, the server collects second space coordinates of the optical tracking mark points through the normal cameras and, at the same time, second image coordinates of the optical tracking mark points through the abnormal camera. The second space coordinates are the three-dimensional coordinates of the optical tracking mark points in the preset world coordinate system, and the second image coordinates are their two-dimensional coordinates in the preset image coordinate system of the abnormal camera.
For example, suppose the plurality of optical motion capture cameras are A, B, C and D, where camera A is an abnormal camera and cameras B, C and D are normal. The server then captures the three-dimensional coordinates of the optical tracking mark points in the world coordinate system, i.e. the second space coordinates, through cameras B, C and D, and captures their two-dimensional coordinates in the image coordinate system of camera A, i.e. the second image coordinates, through camera A.
106, recalibrating the abnormal camera according to the second space coordinates and the second image coordinates.
The server recalibrates the abnormal camera from the collected second space coordinates and second image coordinates in combination with a preset back-projection rule, i.e. it recalculates the position and attitude of the abnormal camera in the preset world coordinate system. The abnormal camera is thus recalibrated automatically from the second space coordinates acquired by the normal cameras.
In this embodiment, first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of the motion capture space are collected through a plurality of optical motion capture cameras; a projection matrix of each optical motion capture camera is calculated from the first space coordinates and the first image coordinates; the first space coordinates are transformed with each camera's projection matrix to obtain first projection coordinates in that camera's preset image coordinate system; the first projection coordinates of each camera are matched against the first image coordinates, and normal and abnormal cameras are determined from the matching results; second space coordinates of the optical tracking mark points are collected through the normal cameras and second image coordinates through the abnormal camera; and the abnormal camera is recalibrated according to the second space coordinates and the second image coordinates. In this way the optical motion capture cameras whose calibration has become abnormal are identified and recalibrated, which improves the accuracy of camera calibration.
Referring to fig. 2, fig. 2 is a schematic flowchart of another embodiment of the camera calibration processing method according to the present invention. In this embodiment, the camera calibration processing method includes:
The detailed implementation of steps 201-205 is substantially the same as that of steps 101-105 in the first embodiment and is not repeated here.
206, performing coordinate transformation on the second space coordinates according to the projection matrix of the abnormal camera to obtain second projection coordinates of the second space coordinates in the preset image coordinate system of the abnormal camera;
Specifically, when recalibrating the abnormal camera, the server first transforms the second space coordinates using the projection matrix of the abnormal camera to obtain the second projection coordinates of the second space coordinates in the image coordinate system of the abnormal camera, again performing the forward projection from three-dimensional coordinates in the world coordinate system to two-dimensional coordinates in the image coordinate system.
207, acquiring the pose conversion parameters in the projection matrix of the abnormal camera, and constructing a distance error function between the second projection coordinates and the second image coordinates according to the pose conversion parameters;
In this step, the server obtains the pose conversion parameters contained in the projection matrix of the abnormal camera. The pose conversion parameters include a rotation parameter, representing the rotation of the abnormal camera with respect to the preset world coordinate system, and a translation parameter, representing its offset with respect to the preset world coordinate system. The server then constructs, as a function of these pose conversion parameters, a distance error function expressing the distance between the second projection coordinates and the second image coordinates.
208, adjusting the pose conversion parameters with a gradient descent algorithm until the distance error function reaches a minimum, and obtaining the target pose conversion parameters corresponding to that minimum;
Gradient descent is an optimization algorithm, commonly used in machine learning and artificial intelligence, that approaches a minimum-deviation model iteratively by stepping along the direction of the negative gradient. In this embodiment, the goal is to find the value of the argument (the pose conversion parameters) at which the distance error function reaches its minimum, which can be solved with gradient descent. Specifically, the server adjusts the pose conversion parameters with a gradient descent step, computes the value of the distance error function after each adjustment, and compares it with the value from the previous iteration; if the current value is smaller than the previous one, it continues adjusting the parameters in the same direction. The process is repeated until the distance error function reaches a minimum, and the pose conversion parameters corresponding to that minimum are taken as the target pose conversion parameters.
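The sketch below illustrates one way such an optimization could be written (an assumption rather than the patent's exact procedure): the pose conversion parameters are a Rodrigues rotation vector plus a translation vector, the distance error function is the mean squared reprojection error, and the gradient is approximated by central differences; all function names and hyperparameter values are illustrative:

```python
import numpy as np
import cv2

def refine_pose(world_pts, image_pts, K, rvec0, tvec0,
                lr=1e-6, iters=2000, eps=1e-6):
    """Refine an abnormal camera's pose by gradient descent on reprojection error.

    world_pts : (N, 3) second space coordinates from the normal cameras.
    image_pts : (N, 2) second image coordinates from the abnormal camera.
    K         : (3, 3) intrinsic matrix; rvec0, tvec0: initial pose.
    lr, iters and eps are illustrative hyperparameters only.
    """
    world_pts = np.asarray(world_pts, dtype=np.float64)
    image_pts = np.asarray(image_pts, dtype=np.float64)
    params = np.concatenate([np.ravel(rvec0), np.ravel(tvec0)]).astype(np.float64)

    def error(p):
        # distance error function: mean squared distance between the
        # second projection coordinates and the second image coordinates
        proj, _ = cv2.projectPoints(world_pts, p[:3], p[3:], K, None)
        return float(np.mean(np.sum((proj.reshape(-1, 2) - image_pts) ** 2, axis=1)))

    for _ in range(iters):
        grad = np.zeros(6)
        for i in range(6):                      # central-difference gradient
            d = np.zeros(6)
            d[i] = eps
            grad[i] = (error(params + d) - error(params - d)) / (2 * eps)
        params -= lr * grad                     # step along the negative gradient
    return params[:3], params[3:]               # refined rvec, tvec
```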
209, recalibrating the abnormal camera according to the target pose conversion parameters.
Specifically, this step 209 may include: respectively calculating the position and the attitude of the abnormal camera in the preset world coordinate system according to the rotation parameter and the translation parameter, and taking the calculated position and attitude as the new calibration parameters of the abnormal camera.
The server may calculate the position t_c and the attitude r_c of the abnormal camera in the preset world coordinate system according to the formulas t_c = -R⁻¹T and r_c = R⁻¹, where R is the rotation parameter and T is the translation parameter. The server then takes the calculated position t_c and attitude r_c as the new calibration parameters of the abnormal camera.
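Assuming the common pinhole convention in which the extrinsic parameters map a world point X to a camera-frame point RX + T, these formulas can be evaluated as in the short sketch below (for a rotation matrix R⁻¹ = Rᵀ; the function name is hypothetical):

```python
import numpy as np

def camera_pose_in_world(R, T):
    """Convert extrinsic parameters (R, T) into the camera's world-frame pose.

    R : (3, 3) rotation parameter (world -> camera).
    T : (3,)   translation parameter (world -> camera).
    Returns (t_c, r_c): camera position and attitude in the world frame.
    """
    R = np.asarray(R, dtype=float)
    T = np.asarray(T, dtype=float).reshape(3)
    t_c = -R.T @ T      # position: t_c = -R⁻¹ T  (R⁻¹ = Rᵀ for rotations)
    r_c = R.T           # attitude: r_c = R⁻¹
    return t_c, r_c
```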
In this way, optical motion capture cameras whose calibration has become abnormal are identified and recalibrated, thereby improving the accuracy of camera calibration.
The embodiment of the invention also provides a camera calibration processing device.
Referring to fig. 3, fig. 3 is a schematic block diagram of an embodiment of a camera calibration processing apparatus according to the present invention. In this embodiment, the camera calibration processing apparatus includes:
the first acquisition module 301 is configured to acquire, by a plurality of optical motion capture cameras, a first spatial coordinate and a first image coordinate of optical tracking mark points arranged at different positions in a motion capture space, where the first spatial coordinate is a three-dimensional coordinate of the optical tracking mark point in a preset world coordinate system, and the first image coordinate is a two-dimensional coordinate of the optical tracking mark point in a preset image coordinate system of each optical motion capture camera;
a calculating module 302, configured to calculate a projection matrix of each optical motion capture camera according to the first space coordinate and the first image coordinate, where the projection matrix is used to represent a mapping relationship between a three-dimensional coordinate in the preset world coordinate system and a two-dimensional coordinate in the preset image coordinate system;
the coordinate transformation module 303 is configured to perform coordinate transformation on the first space coordinate according to the projection matrix of each optical motion capture camera to obtain a first projection coordinate of the first space coordinate in a preset image coordinate system of each optical motion capture camera;
a determining module 304, configured to match the first projection coordinate corresponding to each optical motion capture camera with the first image coordinate, and determine a normal camera and an abnormal camera from the multiple optical motion capture cameras according to a matching result, where the normal camera is a camera that does not need to be recalibrated, and the abnormal camera is a camera to be recalibrated;
a second collecting module 305, configured to collect a second spatial coordinate of the optical tracking mark point through the normal camera, and collect a second image coordinate of the optical tracking mark point through the abnormal camera;
a calibration module 306, configured to recalibrate the abnormal camera according to the second space coordinate and the second image coordinate.
Further, the calculation module 302 is further configured to:
acquiring internal parameters of each optical dynamic capture camera;
and substituting the first space coordinate, the first image coordinate and the internal reference into a camera attitude estimation algorithm (EPNP) for calculation to respectively obtain a projection matrix of each optical motion capture camera, wherein the projection matrix is used for representing the mapping relation between the three-dimensional coordinate in the preset world coordinate system and the two-dimensional coordinate in the preset image coordinate system.
Further, the determining module 304 is further configured to:
for each optical dynamic capture camera, matching the first projection coordinate and the first image coordinate corresponding to each optical dynamic capture camera pairwise according to a matching rule that the first projection coordinate and the first image coordinate are matched if the distance between the first projection coordinate and the first image coordinate is minimum;
respectively calculating distance values between the first projection coordinates and the first image coordinates which are matched pairwise to obtain a distance value set corresponding to each optical dynamic capture camera;
and determining a normal camera and an abnormal camera from the plurality of optical dynamic capture cameras according to the distance value set, wherein the normal camera is a camera which does not need to be calibrated again, and the abnormal camera is a camera to be calibrated again.
Further, the determining module 304 is further configured to:
judging whether the distance values in the distance value set are all smaller than or equal to a preset threshold value;
if the distance values in the distance value set are all smaller than or equal to a preset threshold value, determining the optical dynamic capture camera corresponding to the distance value set as a normal camera, wherein the normal camera is a camera which does not need to be calibrated again;
if the distance value greater than the preset threshold value exists in the distance value set, determining the optical dynamic capture camera corresponding to the distance value set as an abnormal camera, wherein the abnormal camera is a camera to be calibrated again.
The function implementation and beneficial effects of each module in the camera calibration processing apparatus correspond to each step in the camera calibration processing method embodiment, and are not described herein again.
Referring to fig. 4, fig. 4 is a block diagram of another embodiment of the camera calibration processing apparatus of the present invention. In this embodiment, the camera calibration processing apparatus includes:
the first acquisition module 301 is configured to acquire, by a plurality of optical motion capture cameras, a first spatial coordinate and a first image coordinate of optical tracking mark points arranged at different positions in a motion capture space, where the first spatial coordinate is a three-dimensional coordinate of the optical tracking mark point in a preset world coordinate system, and the first image coordinate is a two-dimensional coordinate of the optical tracking mark point in a preset image coordinate system of each optical motion capture camera;
a calculating module 302, configured to calculate a projection matrix of each optical motion capture camera according to the first space coordinate and the first image coordinate, where the projection matrix is used to represent a mapping relationship between a three-dimensional coordinate in the preset world coordinate system and a two-dimensional coordinate in the preset image coordinate system;
the coordinate transformation module 303 is configured to perform coordinate transformation on the first space coordinate according to the projection matrix of each optical motion capture camera to obtain a first projection coordinate of the first space coordinate in a preset image coordinate system of each optical motion capture camera;
a determining module 304, configured to match the first projection coordinate corresponding to each optical motion capture camera with the first image coordinate, and determine a normal camera and an abnormal camera from the multiple optical motion capture cameras according to a matching result, where the normal camera is a camera that does not need to be recalibrated, and the abnormal camera is a camera to be recalibrated;
a second collecting module 305, configured to collect a second spatial coordinate of the optical tracking mark point through the normal camera, and collect a second image coordinate of the optical tracking mark point through the abnormal camera;
a calibration module 306, configured to recalibrate the abnormal camera according to the second space coordinate and the second image coordinate.
Wherein, the calibration module 306 comprises:
a coordinate transformation unit 3061, configured to perform coordinate transformation on the second spatial coordinate according to the projection matrix of the abnormal camera, so as to obtain a second projection coordinate of the second spatial coordinate in a preset image coordinate system of the abnormal camera;
a building unit 3062, configured to obtain pose transformation parameters in the projection matrix of the abnormal camera, and build a distance error function between the second projection coordinate and the second image coordinate according to the pose transformation parameters;
an obtaining unit 3063, configured to adjust the pose conversion parameter by using a gradient descent algorithm, until the distance error function obtains a minimum value, obtain a target pose conversion parameter corresponding to the minimum value;
the calibration unit 3064 is configured to recalibrate the abnormal camera according to the target pose conversion parameter.
Further, the pose conversion parameters include a rotation parameter for representing a rotation state of the abnormal camera with respect to the preset world coordinate system and a translation parameter for representing an offset state of the abnormal camera with respect to the preset world coordinate system.
Further, the calibration unit 3064 is further configured to:
and respectively calculating the position and the posture of the abnormal camera in the preset world coordinate system according to the rotation parameter and the translation parameter, and taking the calculated position and the calculated posture as new calibration parameters of the abnormal camera.
The function realization and the beneficial effects of each module in the camera calibration processing device correspond to each step in the camera calibration processing method embodiment, and are not described herein again.
The camera calibration processing apparatus in the embodiment of the present invention is described in detail above from the perspective of the modular functional entity, and the camera calibration processing apparatus in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a camera calibration processing apparatus according to an embodiment of the present invention. The camera calibration processing apparatus 500 may have relatively large differences due to different configurations or performances, and may include one or more processors (CPUs) 510 (e.g., one or more processors) and a memory 520, one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. Memory 520 and storage media 530 may be, among other things, transient or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the camera calibration processing apparatus 500. Still further, the processor 510 may be configured to communicate with the storage medium 530, and execute a series of instruction operations in the storage medium 530 on the camera calibration processing device 500.
The camera calibration processing apparatus 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input-output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the camera calibration processing apparatus configuration shown in fig. 5 does not constitute a limitation of the camera calibration processing apparatus and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
The present invention further provides a storage medium, which may be a non-volatile storage medium or a volatile storage medium, wherein the storage medium stores a camera calibration processing program, and the camera calibration processing program, when executed by a processor, implements the steps of the camera calibration processing method described above.
The method and the beneficial effects of the camera calibration processing program executed on the processor can refer to the embodiments of the camera calibration processing method of the present invention, and are not described herein again.
It will be appreciated by those skilled in the art that the above-described integrated modules or units, if implemented as software functional units and sold or used as separate products, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A camera calibration processing method is characterized by comprising the following steps:
acquiring first space coordinates and first image coordinates of optical tracking mark points arranged at different positions of a motion capture space through a plurality of optical motion capture cameras, wherein the first space coordinates are three-dimensional coordinates of the optical tracking mark points in a preset world coordinate system, and the first image coordinates are two-dimensional coordinates of the optical tracking mark points in a preset image coordinate system of each optical motion capture camera;
respectively calculating a projection matrix of each optical motion capture camera according to the first space coordinate and the first image coordinate, wherein the projection matrix is used for representing a mapping relation between a three-dimensional coordinate in the preset world coordinate system and a two-dimensional coordinate in the preset image coordinate system;
according to the projection matrix of each optical motion capture camera, performing coordinate transformation on the first space coordinate to obtain a first projection coordinate of the first space coordinate in a preset image coordinate system of each optical motion capture camera;
respectively matching the first projection coordinate corresponding to each optical motion capture camera with the first image coordinate, and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to a matching result, wherein the normal camera is a camera that does not need to be recalibrated, and the abnormal camera is a camera to be recalibrated;
acquiring a second space coordinate of the optical tracking mark point through the normal camera, and acquiring a second image coordinate of the optical tracking mark point through the abnormal camera;
and recalibrating the abnormal camera according to the second space coordinate and the second image coordinate.
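As a self-contained illustration of the consistency check underlying claim 1, the Python toy below projects known marker coordinates with each camera's projection matrix, compares the projections with that camera's own detections, and flags any camera whose reprojection distances exceed a threshold. All intrinsics, poses, marker positions and the 2-pixel threshold are made-up values; for visibility, the "abnormal" camera is given a deliberately stale projection matrix rather than one estimated from the current correspondences as in the claim.

```python
import numpy as np

# Two toy cameras sharing the same intrinsics; camera B's stored projection
# matrix has drifted, so its reprojections no longer agree with its detections.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_a = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
P_b_true = K @ np.hstack([np.eye(3), [[0.3], [0.0], [5.0]]])    # actual pose
P_b_stored = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])  # stale calibration

markers = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.2, 0.1],
                    [-0.3, 0.4, 0.2],
                    [0.1, -0.5, 0.3]])
homo = np.hstack([markers, np.ones((len(markers), 1))])

def project(P):
    uvw = (P @ homo.T).T
    return uvw[:, :2] / uvw[:, 2:3]

detections = {"camera A": project(P_a), "camera B": project(P_b_true)}
stored = {"camera A": P_a, "camera B": P_b_stored}

for name, P in stored.items():
    dists = np.linalg.norm(project(P) - detections[name], axis=1)
    status = "normal" if np.all(dists <= 2.0) else "abnormal"
    print(f"{name}: {status}, reprojection distances (px) = {dists.round(2)}")
```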
2. The camera calibration processing method according to claim 1, wherein the step of calculating a projection matrix of each optical motion capture camera according to the first spatial coordinates and the first image coordinates, the projection matrix being used to represent a mapping relationship between three-dimensional coordinates in the preset world coordinate system and two-dimensional coordinates in the preset image coordinate system, comprises:
acquiring intrinsic parameters of each optical motion capture camera;
and substituting the first space coordinate, the first image coordinate and the intrinsic parameters into the EPnP (Efficient Perspective-n-Point) camera pose estimation algorithm for calculation, so as to respectively obtain the projection matrix of each optical motion capture camera, wherein the projection matrix is used for representing the mapping relation between the three-dimensional coordinate in the preset world coordinate system and the two-dimensional coordinate in the preset image coordinate system.
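A minimal sketch of this step, assuming NumPy and OpenCV are available; the function name, zero-distortion default and error handling are illustrative choices, not taken from the patent:

```python
import numpy as np
import cv2

def projection_matrix_epnp(world_pts, image_pts, K, dist_coeffs=None):
    """Estimate the 3x4 projection matrix P = K [R|t] of one camera from the
    first space coordinates (world_pts, Nx3) and first image coordinates
    (image_pts, Nx2) using OpenCV's EPnP solver."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)               # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        np.ascontiguousarray(world_pts, dtype=np.float64),
        np.ascontiguousarray(image_pts, dtype=np.float64),
        np.asarray(K, dtype=np.float64),
        dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("EPnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                  # rotation vector -> 3x3 matrix
    Rt = np.hstack([R, tvec.reshape(3, 1)])     # extrinsic [R|t]
    return np.asarray(K) @ Rt                   # 3x4 projection matrix
```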
3. The camera calibration processing method according to claim 1, wherein the step of respectively matching the first projection coordinates corresponding to each optical motion capture camera with the first image coordinates and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the matching result, the normal camera being a camera that does not need to be recalibrated and the abnormal camera being a camera to be recalibrated, comprises:
for each optical motion capture camera, matching the first projection coordinates and the first image coordinates corresponding to that camera in pairs, according to a matching rule under which a first projection coordinate and a first image coordinate are matched if the distance between them is minimum;
respectively calculating distance values between the first projection coordinates and the first image coordinates that are matched in pairs, to obtain a distance value set corresponding to each optical motion capture camera;
and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the distance value set, wherein the normal camera is a camera that does not need to be recalibrated, and the abnormal camera is a camera to be recalibrated.
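A compact sketch of the projection and minimum-distance matching described in claim 3, assuming NumPy arrays for the coordinates; the greedy nearest-neighbour pairing shown here is one straightforward reading of the matching rule, not necessarily the exact procedure used in the embodiments:

```python
import numpy as np

def project_points(P, world_pts):
    """Transform first space coordinates (Nx3) into first projection
    coordinates (Nx2) with a 3x4 projection matrix P (homogeneous divide)."""
    homogeneous = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    uvw = (P @ homogeneous.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def distance_value_set(proj_pts, img_pts):
    """Pair every first projection coordinate with its nearest first image
    coordinate (minimum-distance matching rule) and return the resulting
    distance value set, one distance per projected marker."""
    diffs = proj_pts[:, None, :] - img_pts[None, :, :]    # (M, N, 2)
    dist_matrix = np.linalg.norm(diffs, axis=2)           # (M, N) pairwise distances
    return dist_matrix.min(axis=1)
```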
4. The camera calibration processing method according to claim 3, wherein the step of determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to the distance value set, the normal camera being a camera that does not need to be recalibrated and the abnormal camera being a camera to be recalibrated, comprises:
judging whether the distance values in the distance value set are all smaller than or equal to a preset threshold value;
if the distance values in the distance value set are all smaller than or equal to the preset threshold value, determining the optical motion capture camera corresponding to the distance value set as a normal camera, wherein the normal camera is a camera that does not need to be recalibrated;
if a distance value greater than the preset threshold value exists in the distance value set, determining the optical motion capture camera corresponding to the distance value set as an abnormal camera, wherein the abnormal camera is a camera to be recalibrated.
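The normal/abnormal decision in claim 4 reduces to a threshold test over the distance value set; a minimal sketch, with a 2-pixel threshold used purely as an assumed example:

```python
import numpy as np

def classify_camera(distance_values, preset_threshold=2.0):
    """Claim 4's decision rule: a camera is 'normal' only if every distance
    value in its set is within the preset threshold; otherwise it is flagged
    as 'abnormal' and scheduled for recalibration. The 2-pixel default is an
    illustrative assumption, not a value stated in the patent."""
    values = np.asarray(distance_values, dtype=float)
    return "normal" if np.all(values <= preset_threshold) else "abnormal"
```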
5. The camera calibration processing method according to any one of claims 1 to 4, wherein the step of recalibrating the abnormal camera according to the second spatial coordinates and the second image coordinates comprises:
according to the projection matrix of the abnormal camera, performing coordinate transformation on the second space coordinate to obtain a second projection coordinate of the second space coordinate in a preset image coordinate system of the abnormal camera;
acquiring pose conversion parameters in a projection matrix of the abnormal camera, and constructing a distance error function between the second projection coordinate and the second image coordinate according to the pose conversion parameters;
adjusting the pose conversion parameters by using a gradient descent algorithm until the distance error function reaches a minimum value, and acquiring target pose conversion parameters corresponding to the minimum value;
and recalibrating the abnormal camera according to the target pose conversion parameters.
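Claim 5 refines the abnormal camera's pose by minimising a distance error between the second projection coordinates and the second image coordinates with gradient descent. Below is a sketch of one possible implementation, assuming NumPy/OpenCV, a rotation-vector parameterisation and finite-difference gradients; the learning rate, iteration count and step size are illustrative assumptions, and a production system would more likely use an analytic Jacobian or an off-the-shelf optimiser:

```python
import numpy as np
import cv2

def refine_pose_by_gradient_descent(world_pts, img_pts, K, rvec0, tvec0,
                                    lr=1e-6, iters=5000, eps=1e-6):
    """Adjust the pose conversion parameters (rotation vector + translation
    vector) of the abnormal camera by plain gradient descent on the summed
    distance error between reprojected and detected marker positions."""
    x = np.concatenate([np.ravel(rvec0), np.ravel(tvec0)]).astype(np.float64)

    def distance_error(params):
        rvec, tvec = params[:3], params[3:]
        proj, _ = cv2.projectPoints(np.asarray(world_pts, dtype=np.float64),
                                    rvec, tvec,
                                    np.asarray(K, dtype=np.float64),
                                    np.zeros(5))
        return np.sum(np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1))

    for _ in range(iters):
        grad = np.zeros(6)
        for i in range(6):                      # central finite differences
            step = np.zeros(6)
            step[i] = eps
            grad[i] = (distance_error(x + step) - distance_error(x - step)) / (2 * eps)
        x = x - lr * grad                       # gradient descent update
    return x[:3], x[3:]                         # target rotation / translation
```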
6. The camera calibration processing method according to claim 5, wherein the pose conversion parameters include a rotation parameter and a translation parameter, the rotation parameter is used for representing a rotation state of the abnormal camera relative to the preset world coordinate system, and the translation parameter is used for representing a translation state of the abnormal camera relative to the preset world coordinate system.
7. The camera calibration processing method according to claim 6, wherein the step of recalibrating the abnormal camera according to the target pose conversion parameters comprises:
and respectively calculating the position and the posture of the abnormal camera in the preset world coordinate system according to the rotation parameter and the translation parameter, and taking the calculated position and the calculated posture as new calibration parameters of the abnormal camera.
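Claim 7 turns the refined rotation and translation parameters into the camera's position and attitude in the world frame. Assuming the usual world-to-camera convention x_cam = R · x_world + t (the convention itself is an assumption, since the claims do not spell it out), a short sketch:

```python
import numpy as np
import cv2

def camera_pose_in_world(rvec, tvec):
    """Convert world-to-camera rotation/translation parameters into the
    abnormal camera's position and attitude in the preset world coordinate
    system, to be stored as its new calibration parameters."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    t = np.asarray(tvec, dtype=np.float64).reshape(3)
    position = -R.T @ t      # camera centre in world coordinates
    attitude = R.T           # camera-to-world rotation (attitude)
    return position, attitude
```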
8. A camera calibration processing apparatus, characterized in that the camera calibration processing apparatus includes:
the first acquisition module is used for acquiring a first space coordinate and a first image coordinate of optical tracking mark points arranged at different positions in a motion capture space through a plurality of optical motion capture cameras, wherein the first space coordinate is a three-dimensional coordinate of the optical tracking mark points in a preset world coordinate system, and the first image coordinate is a two-dimensional coordinate of the optical tracking mark points in the preset image coordinate system of each optical motion capture camera;
the calculation module is used for respectively calculating a projection matrix of each optical motion capture camera according to the first space coordinate and the first image coordinate, wherein the projection matrix is used for representing a mapping relation between a three-dimensional coordinate in the preset world coordinate system and a two-dimensional coordinate in the preset image coordinate system;
the coordinate transformation module is used for performing coordinate transformation on the first space coordinate according to the projection matrix of each optical motion capture camera to obtain a first projection coordinate of the first space coordinate in the preset image coordinate system of each optical motion capture camera;
the determining module is used for respectively matching the first projection coordinate corresponding to each optical motion capture camera with the first image coordinate, and determining a normal camera and an abnormal camera from the plurality of optical motion capture cameras according to a matching result, wherein the normal camera is a camera that does not need to be recalibrated, and the abnormal camera is a camera to be recalibrated;
the second acquisition module is used for acquiring a second space coordinate of the optical tracking mark point through the normal camera and acquiring a second image coordinate of the optical tracking mark point through the abnormal camera;
and the calibration module is used for recalibrating the abnormal camera according to the second space coordinate and the second image coordinate.
9. A camera calibration processing device, characterized by comprising: a memory storing instructions and at least one processor, the memory and the at least one processor being interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the camera calibration processing device to perform the camera calibration processing method of any of claims 1-7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the camera calibration processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111679547.5A CN114445506A (en) | 2021-12-31 | 2021-12-31 | Camera calibration processing method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111679547.5A CN114445506A (en) | 2021-12-31 | 2021-12-31 | Camera calibration processing method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114445506A true CN114445506A (en) | 2022-05-06 |
Family
ID=81366426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111679547.5A Pending CN114445506A (en) | 2021-12-31 | 2021-12-31 | Camera calibration processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114445506A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114897997A (en) * | 2022-07-13 | 2022-08-12 | 星猿哲科技(深圳)有限公司 | Camera calibration method, device, equipment and storage medium |
CN115375772A (en) * | 2022-08-10 | 2022-11-22 | 北京英智数联科技有限公司 | Camera calibration method, device, equipment and storage medium |
CN115375772B (en) * | 2022-08-10 | 2024-01-19 | 北京英智数联科技有限公司 | Camera calibration method, device, equipment and storage medium |
WO2024164569A1 (en) * | 2023-02-10 | 2024-08-15 | 腾讯科技(深圳)有限公司 | Data processing method and apparatus, device, and storage medium |
CN116206067A (en) * | 2023-04-27 | 2023-06-02 | 南京诺源医疗器械有限公司 | Medical equipment fluorescence three-dimensional imaging method and system |
CN116206067B (en) * | 2023-04-27 | 2023-07-18 | 南京诺源医疗器械有限公司 | Medical equipment fluorescence three-dimensional imaging method and system |
CN117351091A (en) * | 2023-09-14 | 2024-01-05 | 成都飞机工业(集团)有限责任公司 | Camera array calibration device and use method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114445506A (en) | Camera calibration processing method, device, equipment and storage medium | |
KR100855657B1 (en) | System for estimating self-position of the mobile robot using monocular zoom-camara and method therefor | |
CN105818167B (en) | The method that hinged end effector is calibrated using long distance digital camera | |
KR100386090B1 (en) | Camera calibration system and method using planar concentric circles | |
EP2926543B1 (en) | A method of calibrating a camera and a system therefor | |
CN112949478B (en) | Target detection method based on tripod head camera | |
WO2013111229A1 (en) | Camera calibration device, camera calibration method, and camera calibration program | |
CN113409391B (en) | Visual positioning method and related device, equipment and storage medium | |
EP2959315A2 (en) | Generation of 3d models of an environment | |
CN110880189A (en) | Combined calibration method and combined calibration device thereof and electronic equipment | |
CN113910219A (en) | Exercise arm system and control method | |
US10652521B2 (en) | Stereo camera and image pickup system | |
CN109887002A (en) | Image feature point matching method and device, computer equipment and storage medium | |
CN106871900A (en) | Image matching positioning method in ship magnetic field dynamic detection | |
WO2016135856A1 (en) | Three-dimensional shape measurement system and measurement method for same | |
JP2010074730A (en) | Camera calibration device for zoom lens equipped camera of broadcasting virtual studio and method and program for the device | |
CN109389645B (en) | Camera self-calibration method and system, camera, robot and cloud server | |
CN114018212B (en) | Spherical camera monocular ranging-oriented pitch angle correction method and system | |
CN108253940A (en) | Localization method and device | |
CN111429530B (en) | Coordinate calibration method and related device | |
JP5267100B2 (en) | Motion estimation apparatus and program | |
KR102185329B1 (en) | Distortion correction method of 3-d coordinate data using distortion correction device and system therefor | |
CN112419427A (en) | Method for improving time-of-flight camera accuracy | |
CN116051634A (en) | Visual positioning method, terminal and storage medium | |
CN108345463B (en) | Three-dimensional measuring method and device based on robot, robot and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||