CN112308925A - Binocular calibration method and device of wearable device and storage medium - Google Patents


Info

Publication number
CN112308925A
Authority
CN
China
Prior art keywords: camera, calibration, wearable device, matrix, binocular
Prior art date
Legal status: Pending
Application number
CN201910711384.0A
Other languages
Chinese (zh)
Inventor
朱镕杰
周骥
冯歆鹏
Current Assignee
NextVPU Shanghai Co Ltd
Original Assignee
NextVPU Shanghai Co Ltd
Application filed by NextVPU Shanghai Co Ltd
Priority to CN201910711384.0A
Publication of CN112308925A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration

Abstract

The invention provides a binocular calibration method for a wearable device, together with a corresponding device and a storage medium. The method comprises: moving the wearable device relative to a calibration board, the two cameras of the wearable device taking a plurality of pictures of the board from different viewing angles and positions; calibrating the intrinsic parameters of each camera through a plane homography, the intrinsic parameters comprising the abscissa and ordinate of the camera optical center, the focal lengths of the camera in the abscissa and ordinate directions, and the distortion parameters; and calibrating the extrinsic parameters between the two cameras, namely the rotation matrix and translation vector of the right camera relative to the left camera, according to the intrinsic parameters of each camera and the position of the calibration board relative to the two cameras of the wearable device. The method and device shorten debugging time, simplify the calibration process, improve calibration efficiency, and facilitate large-scale mass production of wearable devices.

Description

Binocular calibration method and device of wearable device and storage medium
Technical Field
The invention relates to the field of binocular calibration, and in particular to a binocular calibration method and device for a wearable device, and a storage medium.
Background
Smart wearable devices are increasingly widespread. In the related art, the hardware characteristics of a wearable device such as smart glasses include, but are not limited to, the focal lengths, optical center positions and distortion coefficients of its binocular cameras. Existing wearable devices are calibrated manually: the debugging process is complicated and time-consuming, the efficiency is low, and large-scale production is therefore impossible.
Therefore, the invention provides a binocular calibration method and device of wearable equipment and a storage medium.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a binocular calibration method for a wearable device, a corresponding device and a storage medium that overcome these difficulties: they realize automatic binocular calibration of the wearable device, shorten debugging time, simplify the calibration process, improve calibration efficiency, and facilitate large-scale mass production of wearable devices.
The embodiment of the invention provides a binocular calibration method of wearable equipment, which comprises the following steps:
S100, moving the wearable device relative to a calibration board, wherein the two cameras of the wearable device take a plurality of pictures of the calibration board from different viewing angles and positions;
S200, calibrating the intrinsic parameters of each camera according to the homography between the calibration board and the camera imaging plane, wherein the intrinsic parameters comprise the abscissa and ordinate of the camera optical center, the focal lengths of the camera in the abscissa and ordinate directions, and the distortion parameters;
S300, calibrating the extrinsic parameters between the cameras according to the intrinsic parameters of each camera and the position of the calibration board relative to the two cameras of the wearable device, wherein the extrinsic parameters are the rotation matrix and translation vector of the right camera relative to the left camera;
S400, performing binocular rectification on the two cameras of the wearable device to obtain a stereo correction matrix, making the two cameras of the wearable device equivalent to two cameras with identical intrinsic parameters and aligned in the horizontal direction.
Preferably, the step S200 includes the following steps:
S201, take a space point M(X, Y, Z) in the world coordinate system, establish a camera coordinate system for the picture obtained by each camera, and obtain the pixel coordinate m(u, v) of the space point M in that coordinate system;
S202, establish the homogeneous coordinates of the space point M(X, Y, Z) and the pixel point m(u, v):

$$\tilde{M} = [X, Y, Z, 1]^T \quad\text{and}\quad \tilde{m} = [u, v, 1]^T,$$

to obtain

$$s\tilde{m} = A[R\;t]\tilde{M}, \qquad A = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

wherein A is the 3-row, 3-column camera intrinsic parameter matrix, s is a scale parameter, (u_0, v_0) are the coordinates of the projection of the camera optical center in the pixel coordinate system, f_u and f_v are the focal lengths of the camera in the x and y directions, R is the rotation matrix of the camera with column vectors [r_1 r_2 r_3], and t is the 3-row, 1-column translation vector. Setting Z = 0 for the space point M(X, Y, Z) gives

$$s\tilde{m} = A[r_1\;r_2\;t][X, Y, 1]^T;$$
S203, let M = (X, Y)^T, reducing it to a two-dimensional vector with homogeneous coordinate

$$\tilde{M} = [X, Y, 1]^T,$$

to obtain

$$s\tilde{m} = H\tilde{M},$$

wherein the homography matrix H = A[r_1, r_2, t];
S204, set H = [h_1, h_2, h_3]; since r_1 and r_2 are orthogonal, obtain

$$h_1^T A^{-T}A^{-1} h_2 = 0$$
$$h_1^T A^{-T}A^{-1} h_1 = h_2^T A^{-T}A^{-1} h_2$$

wherein A^{-T}A^{-1} is the projection matrix of the absolute conic in the image coordinate system;
S205, obtain the symmetric matrix B:

$$B = A^{-T}A^{-1} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{12} & B_{22} & B_{23} \\ B_{13} & B_{23} & B_{33} \end{bmatrix};$$
S206, define the 6-dimensional vector b = [B_{11} B_{12} B_{22} B_{13} B_{23} B_{33}]^T and take the ith column vector of the H matrix as h_i = [h_{i1}, h_{i2}, h_{i3}]^T, obtaining:

$$h_i^T B h_j = v_{ij}^T b,$$
$$v_{ij} = [h_{i1}h_{j1},\; h_{i1}h_{j2} + h_{i2}h_{j1},\; h_{i2}h_{j2},\; h_{i3}h_{j1} + h_{i1}h_{j3},\; h_{i3}h_{j2} + h_{i2}h_{j3},\; h_{i3}h_{j3}]^T;$$
S207, obtain

$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix} b = 0;$$

S208, stack these two equations for every captured image and let

$$V = \begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \\ \vdots \end{bmatrix},$$

then Vb = 0;

wherein the vector b is 6-dimensional; when V has full column rank, a solution of the vector b, unique up to a scale factor, is obtained;
S209, calculate the ordinate v_0 of the camera optical center from the symmetric matrix B;
calculate the camera normalization coefficient λ from the symmetric matrix B and the ordinate v_0 of the camera optical center;
calculate the focal lengths f_u and f_v of the camera in the abscissa and ordinate directions from the camera normalization coefficient λ and the symmetric matrix B;
calculate the coefficient γ describing the non-orthogonality (skew) of the pixels from the symmetric matrix B, the focal lengths f_u and f_v, and the camera normalization coefficient λ;
calculate the abscissa u_0 of the camera optical center from the skew coefficient γ, the symmetric matrix B, the focal lengths f_u and f_v, the ordinate v_0 of the camera optical center, and the camera normalization coefficient λ.
Preferably, the step S209 further includes the following steps:
S210, obtain a second formula group, recovering the extrinsic parameters of each view from its homography:

$$r_1 = \lambda A^{-1}h_1, \quad r_2 = \lambda A^{-1}h_2, \quad r_3 = r_1 \times r_2, \quad t = \lambda A^{-1}h_3, \quad \lambda = 1/\lVert A^{-1}h_1 \rVert;$$
S211, let m_{ij} be the coordinate value of the jth detected corner point on the ith image, M_j the three-dimensional coordinate of the jth corner point in the world coordinate system determined by the calibration board, and \hat{m}(A, R_i, t_i, M_j) the coordinate value of M_j actually projected onto the camera image; the objective function is then:

$$\min_{A, R_i, t_i} \sum_i \sum_j \lVert m_{ij} - \hat{m}(A, R_i, t_i, M_j) \rVert^2,$$

the minimization of which yields the calibration result;
S212, after the intrinsic parameters of the camera are obtained, the lens distortion is calculated: from the coordinates of the ideal undistorted point in the pixel coordinate system, its coordinates in the image coordinate system, and the distortion parameters, the coordinates of the actually distorted point in the pixel coordinate system are obtained.
Preferably, in step S211, an iterative optimization algorithm is used to obtain the optimal calibration result.
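The patent does not spell out its distortion model in step S212; a common assumption is the radial-tangential model, sketched below (the coefficients k1, k2, p1, p2 and the function names are illustrative, not the patent's):

```python
import numpy as np

def distort_point(x, y, k1, k2, p1, p2):
    """Map an ideal (undistorted) normalized image point (x, y) to its
    distorted position, using a radial-tangential model (an assumption;
    the patent only says 'distortion parameters')."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

def to_pixel(x_d, y_d, fu, fv, u0, v0):
    # Apply the intrinsic parameters to obtain pixel coordinates.
    return fu * x_d + u0, fv * y_d + v0
```

With all coefficients zero the mapping reduces to the ideal pinhole projection, which is a quick sanity check.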
Preferably, the step S300 includes the following steps:
S301, for a point P in space with coordinate P_w in the world coordinate system, the coordinates of P in the left and right camera coordinate systems of the wearable device can be expressed as:

P_l = R_l P_w + T_l

P_r = R_r P_w + T_r

wherein the two are related by P_r = R P_l + T;

S302, the rotation matrix R between the left and right cameras is calculated from the rotation matrix R_r of the right camera relative to the calibration object, obtained through monocular calibration, and the rotation matrix R_l of the left camera relative to the calibration object, likewise obtained through monocular calibration;

the translation vector T between the left and right cameras is calculated from the translation vectors T_r and T_l of the right and left cameras relative to the calibration object, both obtained through monocular calibration, and the rotation matrix R between the left and right cameras; substituting the expressions above gives R = R_r R_l^T and T = T_r - R T_l;

wherein the left camera and the right camera each undergo monocular calibration.
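Substituting P_l = R_l P_w + T_l into P_r = R P_l + T and comparing with P_r = R_r P_w + T_r yields R = R_r R_l^T and T = T_r - R T_l; a minimal NumPy sketch of this composition step (the function name is illustrative):

```python
import numpy as np

def stereo_extrinsics(R_l, T_l, R_r, T_r):
    """Rotation and translation of the right camera relative to the left,
    from each camera's monocular pose w.r.t. the calibration object:
    Pr = Rr*Pw + Tr and Pl = Rl*Pw + Tl  =>  R = Rr*Rl^T, T = Tr - R*Tl."""
    R = R_r @ R_l.T      # R_l is orthonormal, so its inverse is its transpose
    T = T_r - R @ T_l
    return R, T
```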
Preferably, the step S400 includes the following steps:
S401, let the epipole be the intersection of the line connecting the origins of the two camera coordinate systems in space with the image plane;
S402, construct a rotation matrix R_rect that maps the epipole to infinity:

$$R_{rect} = \begin{bmatrix} e_1^T \\ e_2^T \\ e_3^T \end{bmatrix};$$

S403, obtain

$$e_1 = \frac{T}{\lVert T \rVert}$$

wherein T = [T_x\ T_y\ T_z]^T, and e_2 is orthogonal both to the principal optical axis direction (0, 0, 1) and to e_1, being their normalized cross product;
S404, obtain

$$e_2 = \frac{[-T_y,\ T_x,\ 0]^T}{\sqrt{T_x^2 + T_y^2}};$$

S405, let e_3 be orthogonal to e_1 and e_2, obtaining: e_3 = e_1 × e_2;
S406, left-multiply R_rect onto the rotation matrices of the left and right camera coordinate systems obtained by decomposing R, yielding the stereo correction matrices.
Preferably, the step S406 further includes the following steps:
S407, obtain the output matrix Q of the stereo correction, a reprojection matrix realizing the conversion between the world coordinate system {world} and the pixel coordinate system {pixel};
S408, obtain

$$Q \begin{bmatrix} u \\ v \\ d \\ 1 \end{bmatrix} = \begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix}, \qquad Q = \begin{bmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 & 0 & -c_y \\ 0 & 0 & 0 & f \\ 0 & 0 & -1/T_x & (c_x - c_x')/T_x \end{bmatrix}$$

wherein d denotes the disparity, the three-dimensional coordinates are (X/W, Y/W, Z/W), and c_x and c_x' denote the abscissas of the optical centers of the left image and the right image respectively;
S409, letting c_x' = c_x, the element in the fourth row and fourth column of Q is 0, and the three-dimensional coordinates of the space object can be expressed as
(x, y, z) = (X/W, Y/W, Z/W).
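A sketch of reprojection through Q. The sign convention here follows the common practice of passing T_x as the negative baseline so that depth comes out positive, an assumption the patent leaves open:

```python
import numpy as np

def make_Q(f, cx, cy, Tx, cx_prime=None):
    """Reprojection matrix Q mapping pixel (u, v) and disparity d to
    homogeneous 3D coordinates [X, Y, Z, W]^T (steps S407-S409)."""
    if cx_prime is None:
        cx_prime = cx  # cx' = cx makes the (4, 4) element of Q zero (S409)
    return np.array([
        [1.0, 0.0, 0.0, -cx],
        [0.0, 1.0, 0.0, -cy],
        [0.0, 0.0, 0.0, f],
        [0.0, 0.0, -1.0 / Tx, (cx - cx_prime) / Tx]])

def reproject(Q, u, v, d):
    # (x, y, z) = (X/W, Y/W, Z/W) for the homogeneous result of Q [u v d 1]^T.
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return X / W, Y / W, Z / W
```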
Preferably, the surface of the calibration board bears a checkerboard pattern, and the intersection points between the squares are the corner points.
Preferably, the surface of the calibration board bears a directional checkerboard pattern in which each square contains a unique asymmetric pattern; the intersection points between the squares are the corner points, and each corner point has a unique code.
Preferably, the method further comprises: S500, judging the calibration precision by checking whether a first judgment condition and a second judgment condition are met simultaneously;
the first judgment condition is met if, after aiming the binocular module at the checkerboard, correcting the left and right images with the calibrated parameters, and detecting the checkerboard corner points in the corrected images, the difference between the ordinates of each pair of matched points is smaller than a preset first threshold;
the second judgment condition is met if, after recovering the spatial coordinates of the corner points from the disparity map using the detected matched points, the difference between the distance between the spatial coordinates of adjacent checkerboard corner points and the true spacing of the checkerboard corner points is smaller than a preset second threshold.
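The two acceptance conditions of step S500 amount to simple threshold checks; a hedged NumPy sketch (the array layouts, names, and thresholds are assumptions, not the patent's interface):

```python
import numpy as np

def calibration_ok(left_pts, right_pts, pts3d, true_spacing, t1, t2):
    """S500: accept the calibration only if (1) matched corner points in the
    rectified pair differ in ordinate by less than t1, and (2) the spacing of
    adjacent corner points reconstructed from the disparity map deviates from
    the true board spacing by less than t2. pts3d lists adjacent corners of
    one row in order (an assumed layout)."""
    dy = np.abs(left_pts[:, 1] - right_pts[:, 1])             # condition 1
    spacing = np.linalg.norm(np.diff(pts3d, axis=0), axis=1)  # condition 2
    return bool(dy.max() < t1 and np.abs(spacing - true_spacing).max() < t2)
```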
The embodiment of the invention also provides binocular calibration equipment of the wearable equipment, which comprises:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the binocular calibration method of the wearable device described above via execution of the executable instructions.
Embodiments of the present invention also provide a computer-readable storage medium for storing a program, where the program implements the steps of the binocular calibration method of the wearable device described above when executed.
The binocular calibration method, the equipment and the storage medium of the wearable equipment can realize binocular automatic calibration of the wearable equipment, shorten debugging time, simplify calibration process, improve calibration efficiency and contribute to large-scale mass production of the wearable equipment.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
Fig. 1 is a flowchart of a binocular calibration method of a wearable device of the present invention;
fig. 2 is a schematic diagram of a binocular calibration method for implementing the wearable device of the present invention based on a first calibration plate;
fig. 3 is a schematic diagram of a binocular calibration method for implementing the wearable device of the present invention based on a second calibration plate;
fig. 4 is a schematic diagram of a binocular camera before stereo correction in the binocular calibration method of the wearable device of the present invention;
fig. 5 is a schematic diagram of a binocular camera after stereo correction in the binocular calibration method of the wearable device of the present invention;
fig. 6 is a histogram of difference values of y coordinates of right and left image corner points after correction in the binocular calibration method of the wearable device of the present invention;
fig. 7 is a histogram of the average spacing of corner points recovered from depth information after correction in the binocular calibration method of the wearable device of the present invention;
fig. 8 is a schematic structural diagram of a binocular calibration device of the wearable device of the present invention; and
fig. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their repetitive description will be omitted.
Fig. 1 is a flowchart of the binocular calibration method of the wearable device of the present invention. As shown in fig. 1, the binocular calibration method of the wearable device of the present invention includes the following steps:
S100, moving the wearable device relative to a calibration board, wherein the two cameras of the wearable device take a plurality of pictures of the calibration board from different viewing angles and positions;
S200, calibrating the intrinsic parameters of each camera according to the homography between the calibration board and the camera imaging plane, wherein the intrinsic parameters comprise the abscissa and ordinate of the camera optical center, the focal lengths of the camera in the abscissa and ordinate directions, and the distortion parameters;
S300, calibrating the extrinsic parameters between the cameras according to the intrinsic parameters of each camera and the position of the calibration board relative to the two cameras of the wearable device, wherein the extrinsic parameters are the rotation matrix and translation vector of the right camera relative to the left camera;
S400, performing binocular rectification on the two cameras of the wearable device to obtain a stereo correction matrix, making the two cameras of the wearable device equivalent to two cameras with identical intrinsic parameters and aligned in the horizontal direction.
The invention can realize binocular automatic calibration of wearable equipment, shorten debugging time, simplify calibration process, improve calibration efficiency and contribute to large-scale mass production of wearable equipment.
Referring to fig. 2, the invention fixes a calibration board 2 bearing a black-and-white checkerboard 21 of the required specification on a mechanical arm 4. At least one pair of smart glasses is placed horizontally at a certain distance from the checkerboard and connected to a computer 3, so that the images seen by the glasses can be viewed on the computer 3 and the checkerboard lies within those images. The mechanical arm moves through a number of preset positions; each time it reaches a position, the program controls the left camera 12 and the right camera 11 on the two sides of the smart glasses 1 (the wearable device) to each take a picture, while the background analyzes the captured pictures. After all preset positions have been visited, the calibration computation is completed with the calibration algorithm; if the result lies within the preset threshold range the calibration passes, otherwise it fails and must be retried. The specific calibration method is as follows:
the surface of the calibration plate is provided with a checkerboard which can be a calibration object with known geometric dimension for camera calibration, and the checkerboard is adopted and generally comprises known side length, line number, column number and black and white.
The calibration board in the invention can be the most common checkerboard pattern, as shown in fig. 2, which is easy to manufacture and quick to detect; its drawbacks are that the change in board pose cannot be too large, otherwise the ordering of the detected corner positions goes wrong, and detection fails if the checkerboard is occluded.
Alternatively, the calibration board in the invention may use another checkerboard pattern, as in fig. 3, in which each corner point is determined by a unique code, so that the corner points are always detected in the correct order no matter how the checkerboard is rotated, and the corner positions are detected correctly even if the checkerboard is partially occluded. This checkerboard is, however, more troublesome to make, and its detection algorithm is more complex.
In the present invention, the parameters of the camera refer to the focal lengths, optical center position, distortion coefficients, etc. of the camera itself. The cameras are aimed at the checkerboard, which is moved over as large a range as possible while ensuring the corner points remain detectable; 15-20 groups of images are captured, after which the calibration program is run to compute the respective parameters of the left and right cameras.
During calibration, a calibration board with a dot-matrix (or checkerboard) distribution is taken as the reference object, and images of the board under different viewing angles and poses are captured by the camera. The corner positions are extracted from the images, and the intrinsic and extrinsic parameters of the camera are recovered from the obtained corner points by maximum likelihood estimation.
Let the space point in the world coordinate system be M = (X, Y, Z)^T, with projection point m = (u, v)^T on the imaging plane. Their homogeneous coordinates are:

$$\tilde{M} = [X, Y, Z, 1]^T \quad\text{and}\quad \tilde{m} = [u, v, 1]^T,$$

obtaining:

$$s\tilde{m} = A[R\;t]\tilde{M}, \qquad A = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
wherein A is the intrinsic parameter matrix of the camera, s is the scale parameter, (u_0, v_0) are the abscissa and ordinate of the camera optical center, and f_u, f_v are the focal lengths of the camera in the abscissa and ordinate directions. For convenience of calculation, set Z = 0 for the calibration-board corner point M = [X, Y, Z]^T, giving:

$$s\tilde{m} = A[r_1\;r_2\;t][X, Y, 1]^T \tag{2-15}$$
In the formula, r_1, r_2, r_3 are 3 × 1 column vectors and [r_1 r_2 r_3] is the rotation matrix of the camera. Now let M = (X, Y)^T, reducing it to a two-dimensional vector with homogeneous coordinate

$$\tilde{M} = [X, Y, 1]^T.$$

Equation (2-15) then simplifies to:

$$s\tilde{m} = H\tilde{M}$$
wherein H = A[r_1, r_2, t] is called the Homography Matrix. It represents the mapping between the corner points extracted from the calibration board and the corresponding pixel points in the image coordinate system. Let H = [h_1, h_2, h_3]; since r_1 and r_2 are orthogonal, we obtain:

$$h_1^T A^{-T}A^{-1} h_2 = 0, \qquad h_1^T A^{-T}A^{-1} h_1 = h_2^T A^{-T}A^{-1} h_2$$

In the formula, A^{-T}A^{-1} is the projection matrix of the absolute conic in the image coordinate system.
According to the properties of the absolute conic, let

    B = A^-T·A^-1 = [ B11  B12  B13 ]
                    [ B12  B22  B23 ]
                    [ B13  B23  B33 ]

This matrix is symmetric, so it is fully described by the 6-dimensional vector b = [B11 B12 B22 B13 B23 B33]^T. Taking the ith column vector of H as hi = [hi1, hi2, hi3]^T, the following can be obtained:
    hi^T·B·hj = vij^T·b

In the formula, vij = [hi1hj1, hi1hj2 + hi2hj1, hi2hj2, hi3hj1 + hi1hj3, hi3hj2 + hi2hj3, hi3hj3]^T. The two constraints above can therefore be written as:
    [ v12^T; (v11 - v22)^T ]·b = 0

Stacking these two equations for all n images into a matrix V, the above formula can be rewritten as:

    V·b = 0
In the above formula, the vector b is 6-dimensional and V is a 2n×6 matrix. When the number of acquired images is large enough (in general n ≥ 3, so that V has full column rank), b can be determined up to a scale factor, and the internal parameters of the camera are then read back out of b through the closed-form expressions of equations (2-22).
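The stacking of V and the solution of V·b = 0 can be sketched as follows (a hypothetical helper, with `solve_b` and its interface assumed); b is recovered up to scale as the right singular vector of V associated with its smallest singular value:

```python
import numpy as np

def solve_b(homographies):
    """Stack the v_ij constraints of each view's homography into V and
    solve V b = 0 for b (up to scale) via the SVD of V."""
    def v(H, i, j):
        hi, hj = H[:, i], H[:, j]
        return np.array([hi[0] * hj[0],
                         hi[0] * hj[1] + hi[1] * hj[0],
                         hi[1] * hj[1],
                         hi[2] * hj[0] + hi[0] * hj[2],
                         hi[2] * hj[1] + hi[1] * hj[2],
                         hi[2] * hj[2]])
    V = []
    for H in homographies:
        V.append(v(H, 0, 1))            # h1^T B h2 = 0
        V.append(v(H, 0, 0) - v(H, 1, 1))  # h1^T B h1 = h2^T B h2
    V = np.asarray(V)
    # b is the right singular vector for the smallest singular value.
    return np.linalg.svd(V)[2][-1]
```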
    v0 = (B12·B13 - B11·B23) / (B11·B22 - B12^2)
    λ = B33 - [B13^2 + v0·(B12·B13 - B11·B23)] / B11
    fu = sqrt(λ / B11)
    fv = sqrt(λ·B11 / (B11·B22 - B12^2))
    γ = -B12·fu^2·fv / λ
    u0 = γ·v0 / fv - B13·fu^2 / λ          (2-22)

Here u0 and v0 are the abscissa and ordinate of the camera optical center: v0 is computed from the symmetric matrix B; the camera normalization coefficient λ is computed from B and v0; the focal lengths fu and fv of the camera in the abscissa and ordinate directions are computed from λ and B; the orthogonality coefficient γ of the pixels is computed from B, fu, fv and λ; and finally the abscissa u0 of the camera optical center is computed from γ, B, fv, v0 and λ.
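The closed-form extraction of the intrinsics from b in equations (2-22) can be sketched as follows (the function name and the ordering of b are assumptions consistent with the text above):

```python
import numpy as np

def intrinsics_from_B(b):
    """Recover the intrinsic matrix A from b = [B11, B12, B22, B13, B23, B33]
    via the closed-form expressions of equations (2-22). b may carry an
    arbitrary overall scale; the scale is absorbed by lambda."""
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    fu = np.sqrt(lam / B11)
    fv = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * fu ** 2 * fv / lam
    u0 = gamma * v0 / fv - B13 * fu ** 2 / lam
    return np.array([[fu, gamma, u0],
                     [0.0, fv, v0],
                     [0.0, 0.0, 1.0]])
```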
Having obtained the intrinsic parameter matrix A through formula (2-22), the extrinsic parameters for each view follow directly:

    λ = 1/||A^-1·h1||,  r1 = λ·A^-1·h1,  r2 = λ·A^-1·h2,  r3 = r1 × r2,  t = λ·A^-1·h3
In order to obtain more accurate parameters, the maximum likelihood estimation method is commonly used. Let mij be the coordinate value of the jth detected corner on the ith image; Mj the three-dimensional coordinates of the jth corner in the world coordinate system determined by the calibration plate; and m̂(A, Ri, ti, Mj) the coordinate value of Mj actually projected onto the camera image. The objective function can then be derived:

    min Σi Σj || mij - m̂(A, Ri, ti, Mj) ||^2
An optimal calibration result is then obtained with an iterative optimization algorithm (the Levenberg-Marquardt algorithm).
After calculating the internal parameters of the camera, the distortion of the lens is computed. With the first two radial orders the expression is

    u_d = u + (u - u0)·[k1·(x^2 + y^2) + k2·(x^2 + y^2)^2]
    v_d = v + (v - v0)·[k1·(x^2 + y^2) + k2·(x^2 + y^2)^2]

where (u, v) are the ideal undistorted pixel coordinates, (u_d, v_d) are the pixel coordinates after actual distortion, (x, y) are the corresponding ideal normalized image coordinates, (u0, v0) represents the optical center, and k1 and k2 are the distortion parameters of the first two orders.
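Applying the two-order radial model above to an ideal pixel can be sketched as follows (names are hypothetical, and zero skew is assumed in the normalization step):

```python
import numpy as np

def distort_pixel(u, v, A, k1, k2):
    """Apply the two-term radial distortion model to the ideal pixel (u, v).
    A is the intrinsic matrix; (x, y) are the normalised image coordinates
    (skew gamma assumed zero for this sketch)."""
    fu, fv = A[0, 0], A[1, 1]
    u0, v0 = A[0, 2], A[1, 2]
    x, y = (u - u0) / fu, (v - v0) / fv   # normalised coordinates
    r2 = x * x + y * y
    factor = k1 * r2 + k2 * r2 * r2
    return u + (u - u0) * factor, v + (v - v0) * factor
```

At the optical center the radial factor vanishes, so the distortion is zero there, as the model requires.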
Calibrating extrinsic parameters between cameras
After the internal parameters of the left and right cameras are calibrated, the positional relation between the left and right cameras can be computed from the calculated poses of the checkerboard relative to each camera. The internal parameters calculated in the previous step are refined using the reprojection relation, the optimization goal being to minimize the reprojection error.
The main difference between the calibration of a binocular camera and that of a monocular camera is that the binocular case must also calibrate the relative relation between the left and right camera coordinate systems. This relation is usually described by a rotation matrix R and a translation vector T, which convert coordinates in the left camera frame to coordinates in the right camera frame. Assume a point P in space whose coordinates in the world coordinate system are Pw; its coordinates in the left and right camera coordinate systems can be expressed as:

    Pl = Rl·Pw + Tl
    Pr = Rr·Pw + Tr
Since Pr = R·Pl + T, combining the expressions above gives:

    R = Rr·Rl^T
    T = Tr - R·Tl
where Rl, Tl are the rotation matrix and translation vector of the left camera relative to the calibration object obtained by monocular calibration, and Rr, Tr are the rotation matrix and translation vector of the right camera relative to the calibration object obtained by monocular calibration. The left and right cameras are each calibrated monocularly to obtain Rl, Tl, Rr, Tr, which are substituted into the formulas above to obtain the rotation matrix R and translation vector T between the left and right cameras.
The obtained R and T are the results obtained by the stereo calibration.
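The two formulas above can be sketched as follows (a minimal NumPy sketch; the function name and interface are assumptions):

```python
import numpy as np

def stereo_extrinsics(Rl, Tl, Rr, Tr):
    """Combine the per-camera (monocular) board poses into the
    right-relative-to-left rotation R and translation T:
    R = Rr Rl^T, T = Tr - R Tl."""
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T
```

By construction, any point satisfies Pr = R·Pl + T whenever Pl = Rl·Pw + Tl and Pr = Rr·Pw + Tr.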
The parameters required for monocular calibration must be calibrated for both cameras, and the binocular camera has parameters beyond the monocular case: R and T, which describe the relative position of the two cameras and are very useful in stereo rectification and epipolar geometry.
Binocular rectification: the final purpose of binocular calibration is to make the left and right cameras equivalent to two cameras whose internal parameters are completely consistent and which are perfectly aligned in the horizontal direction; the process of achieving this target is called binocular rectification.
The main task of a binocular camera system is distance measurement. The parallax ranging formula is derived under the assumption of an ideal binocular system, but in a real binocular stereo vision system the two camera image planes are never perfectly coplanar and row-aligned, so stereo rectification is required. The purpose of stereo rectification is to correct two images that are in fact neither coplanar nor row-aligned into coplanar, row-aligned images. (Coplanar row alignment: the two camera image planes lie in the same plane, and when the same point is projected onto both image planes, it falls on the same row of both pixel coordinate systems.) The actual binocular system is thereby corrected into an ideal binocular system.
The ideal binocular system is one in which the two camera image planes 11 and 12 are parallel, the optical axes are perpendicular to the image planes, the epipoles are at infinity, and the epipolar line corresponding to a point (x0, y0) is y = y0.
Referring to fig. 4 and 5, during rectification each of the two image planes 11, 12 is rotated by half of R, which minimizes reprojection distortion. The two camera image planes are then coplanar (and after this correction the optical axes are also parallel), but not yet row-aligned.
The epipole is the intersection of the line connecting the origins of the two camera coordinate systems with the image plane. If the epipole is at infinity (i.e. the rows are aligned), the adjusted image planes 13, 14 of the two cameras are parallel to the line connecting the origins of the two camera coordinate systems.
A rotation matrix Rrect can be calculated such that the epipole is moved to infinity:

    Rrect = [ e1^T ]
            [ e2^T ]
            [ e3^T ]

Since the image plane must finally be parallel to the line connecting the origins of the camera coordinate systems, e1 is taken along the baseline:

    e1 = T / ||T||

wherein T = [Tx Ty Tz]^T. The vector e2 is chosen orthogonal to both the principal optical axis direction (0, 0, 1) and e1, via their cross product:

    e2 = [-Ty, Tx, 0]^T / sqrt(Tx^2 + Ty^2)

Letting e3 be orthogonal to both e1 and e2, one can obtain:

    e3 = e1 × e2
Left-multiplying Rrect onto the half-rotation matrices of the left and right camera coordinate systems obtained by decomposing R yields the final stereo rectification matrices.
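The construction of Rrect from the baseline T can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def rectification_rotation(T):
    """Build Rrect from the baseline T = [Tx, Ty, Tz]: e1 along the
    baseline, e2 orthogonal to e1 and the principal optical axis
    direction (0, 0, 1), and e3 = e1 x e2."""
    T = np.asarray(T, dtype=float)
    e1 = T / np.linalg.norm(T)
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])
    e3 = np.cross(e1, e2)
    # Rows are e1, e2, e3, so Rrect maps the baseline direction to the x-axis.
    return np.vstack([e1, e2, e3])
```

Because e1, e2, e3 form an orthonormal basis, Rrect is a rotation and sends the unit baseline to (1, 0, 0), which is exactly the "epipole at infinity" condition.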
Obtaining space object coordinate information by calibrated binocular camera
The reprojection matrix Q enables a conversion between the world coordinate system {world} and the pixel coordinate system {pixel}. Specifically:

    Q = [ 1  0   0      -cx           ]
        [ 0  1   0      -cy           ]
        [ 0  0   0       f            ]
        [ 0  0  -1/Tx   (cx - cx')/Tx ]

then

    [X Y Z W]^T = Q·[x y d 1]^T

where d denotes the disparity and cx and cx' denote the optical centers of the left and right images respectively. In general one may set cx' = cx, so that the element in the fourth row and fourth column of Q is 0, and the three-dimensional coordinates of the spatial object can be expressed as

    (x, y, z) = (X/W, Y/W, Z/W)
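Building Q and reprojecting a pixel plus disparity to 3-D can be sketched as follows (hypothetical helpers; the sign convention assumed here is that Tx is the signed x-translation of the right camera, e.g. the negative of the baseline length):

```python
import numpy as np

def make_Q(f, cx, cy, Tx, cx_prime=None):
    """Reprojection matrix Q in the form above; with cx' = cx the
    fourth-row, fourth-column element vanishes."""
    if cx_prime is None:
        cx_prime = cx
    return np.array([[1.0, 0.0, 0.0, -cx],
                     [0.0, 1.0, 0.0, -cy],
                     [0.0, 0.0, 0.0, f],
                     [0.0, 0.0, -1.0 / Tx, (cx - cx_prime) / Tx]])

def reproject(Q, x, y, d):
    """Map pixel (x, y) with disparity d to 3-D via [X Y Z W]^T = Q [x y d 1]^T."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W
```

With Tx = -b (baseline b) this reproduces the familiar parallax ranging relation z = f·b/d.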
The calibration precision determination process comprises the following steps:
A black-and-white checkerboard of known size is placed in front of the binocular camera and shot simultaneously by the left and right cameras, and the corner coordinates of the checkerboard are detected in each image. The spatial coordinates of the checkerboard corners are recovered in the left camera coordinate system, transferred into the right camera coordinate system using equations (2-23), and reprojected into the right camera image; the reprojected corner coordinates are compared with the corner coordinates detected in the right image and the reprojection error between them is calculated. The reprojection error is taken as the criterion for the quality of the calibration result: if the error is smaller than a preset threshold, the calibration is determined to have passed; otherwise it has not.
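The precision check above can be sketched as follows (a hypothetical helper, assuming the corner positions have already been recovered in the left camera frame):

```python
import numpy as np

def reprojection_error(pts_left_cam, Ar, R, T, corners_right):
    """Transfer corners known in the left-camera frame into the right
    camera (Pr = R Pl + T), project with the right intrinsics Ar, and
    return the mean pixel distance to the detected right corners."""
    errs = []
    for Pl, uv in zip(pts_left_cam, corners_right):
        Pr = R @ Pl + T
        proj = Ar @ (Pr / Pr[2])   # pinhole projection of the transferred point
        errs.append(np.linalg.norm(proj[:2] - uv))
    return float(np.mean(errs))
```

Comparing this value against the preset threshold implements the pass/fail decision described above.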
Referring to fig. 6, the histogram of the y-coordinate differences of the left and right image corners in fig. 1 is concentrated around 0 pixels, so the calibration result is good.
Referring to fig. 7, the average distance between corners recovered from the depth information peaks at about 18.3 mm, very close to the true length of 18.4 mm. If the calibration result is good, the calibration parameters are generated, the intermediate calibration data is backed up, and the parameters are burned into the glasses and backed up to the server, yielding a unique serial number for the glasses. If burning or backup fails, the user needs to retry, and can recalibrate or click the 'Burn' button to burn and back up again. After burning and backup are finished, the glasses serial number is printed by a label printer program in a preset label format so that it can be conveniently pasted onto the outer package, the product maintenance card, and so on. If printing fails, 'ReadSN' can be clicked to read the burned serial number from the glasses, and then the 'Print' button clicked to print the label paper. If any intermediate step fails, the process can be resumed from that step.
The binocular calibration method of the wearable device solves the binocular calibration problem and improves productivity. It reduces human participation in the intermediate steps and thereby reduces the occurrence of human operation errors.
The embodiment of the invention also provides binocular calibration equipment for the wearable device, comprising a processor and a memory in which executable instructions of the processor are stored, wherein the processor is configured to perform the steps of the binocular calibration method of the wearable device via execution of the executable instructions.
As shown above, this embodiment shortens the debugging time, simplifies the calibration process, improves calibration efficiency, and facilitates large-scale mass production of the wearable device.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit", "module" or "platform".
Fig. 8 is a schematic structural diagram of the binocular calibration device of the wearable device of the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 600 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 that connects the different platform components (including memory unit 620 and processing unit 610), and the like.
The storage unit stores program code executable by the processing unit 610, causing the processing unit 610 to perform the steps of the binocular calibration method of the wearable device according to the various exemplary embodiments of the present invention described above in this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiment of the invention also provides a computer-readable storage medium storing a program; when the program is executed, the steps of the binocular calibration method of the wearable device are implemented. In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps of the binocular calibration method according to the various exemplary embodiments of the present invention described above in this specification.
As shown above, this embodiment shortens the debugging time, simplifies the calibration process, improves calibration efficiency, and facilitates large-scale mass production of the wearable device.
Fig. 9 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 9, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable storage medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable storage medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In summary, the invention provides a binocular calibration method, a device and a storage medium for wearable equipment, which can realize binocular automatic calibration of the wearable equipment, shorten debugging time, simplify calibration process, improve calibration efficiency, and facilitate large-scale mass production of the wearable equipment.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (12)

1. A binocular calibration method of wearable equipment is characterized by comprising the following steps:
s100, moving the wearable device based on a calibration plate, wherein two cameras of the wearable device shoot a plurality of pictures of the calibration plate from different visual angles and positions;
S200, calibrating internal parameters of each camera according to the homography between the calibration plate and the camera imaging plane, wherein the internal parameters comprise the abscissa and ordinate of the camera optical center, the focal lengths of the camera in the abscissa and ordinate directions, and distortion parameters;
s300, calibrating extrinsic parameters between the cameras according to the intrinsic parameters of each camera and the position relation of the calibration board relative to the two cameras of the wearable device, wherein the extrinsic parameters are a rotation matrix and a translation vector of the right camera relative to the left camera;
S400, performing binocular rectification on the two cameras of the wearable device to obtain a stereo rectification matrix, so that the two cameras of the wearable device are equivalent to two cameras with identical internal parameters and aligned in the horizontal direction.
2. The binocular calibration method of the wearable device of claim 1, wherein: the step S200 includes the following steps:
s201, setting a space point M (X, Y, Z) under a world coordinate system, respectively establishing a camera coordinate system in a picture obtained by each camera, and obtaining a pixel coordinate M (u, v) of the space point M in the camera coordinate system;
S202, establishing the homogeneous coordinates of the space point M(X, Y, Z) and the pixel coordinate m(u, v) as M̃ = (X, Y, Z, 1)^T and m̃ = (u, v, 1)^T, to obtain

    s·m̃ = A·[R t]·M̃

wherein A is the 3-row, 3-column intrinsic camera parameter matrix, s is a scaling parameter, (u0, v0) are the coordinates of the projection of the camera optical center in the pixel coordinate system, fu and fv are the focal lengths of the camera in the x and y directions, R is the rotation matrix of the camera with column vectors [r1 r2 r3], and t is a 3-row, 1-column translation vector; letting Z = 0 in the space point M(X, Y, Z) gives

    s·m̃ = A·[r1 r2 t]·(X, Y, 1)^T;
S203, letting M = (X, Y)^T to reduce it to a two-dimensional vector, with M̃ = (X, Y, 1)^T, to obtain

    s·m̃ = H·M̃

wherein the homography matrix H = A·[r1, r2, t];
S204, setting H = [h1, h2, h3] and using that r1 and r2 are orthogonal, to obtain

    h1^T·A^-T·A^-1·h2 = 0
    h1^T·A^-T·A^-1·h1 = h2^T·A^-T·A^-1·h2

wherein A^-T·A^-1 is the projection matrix of the absolute conic in the image coordinate system;
S205, obtaining the symmetric matrix

    B = A^-T·A^-1 = [ B11  B12  B13 ]
                    [ B12  B22  B23 ]
                    [ B13  B23  B33 ];
S206, setting the 6-dimensional vector b = [B11 B12 B22 B13 B23 B33]^T and taking the ith column vector of H as hi = [hi1, hi2, hi3]^T, the following can be obtained:

    hi^T·B·hj = vij^T·b

    vij = [hi1hj1, hi1hj2 + hi2hj1, hi2hj2, hi3hj1 + hi1hj3, hi3hj2 + hi2hj3, hi3hj3]^T;
S207, obtaining

    [ v12^T; (v11 - v22)^T ]·b = 0;
S208, stacking these equations over all images into a matrix V, so that V·b = 0; wherein the vector b is 6-dimensional, and when V has full column rank a solution for b, unique up to scale, is obtained;
S209, calculating the ordinate v0 of the camera optical center from the symmetric matrix B; calculating the camera normalization coefficient λ from the symmetric matrix B and v0; calculating the focal lengths fu and fv of the camera in the abscissa and ordinate directions from λ and B; calculating the orthogonality coefficient γ of the pixels from B, fu, fv and λ; and calculating the abscissa u0 of the camera optical center from γ, B, fv, v0 and λ.
3. The binocular calibration method of the wearable device of claim 2, wherein: the step S209 further includes the following steps:
S210, obtaining a second formula group:

    λ = 1/||A^-1·h1||,  r1 = λ·A^-1·h1,  r2 = λ·A^-1·h2,  r3 = r1 × r2,  t = λ·A^-1·h3;
S211, setting mij as the coordinate value of the jth detected corner on the ith image, Mj as the three-dimensional coordinates of the jth corner in the world coordinate system determined by the calibration plate, and m̂(A, Ri, ti, Mj) as the coordinate value of Mj actually projected onto the camera image, the objective function can then be given:

    min Σi Σj || mij - m̂(A, Ri, ti, Mj) ||^2
then obtaining a calibration result;
s212, distortion of the lens is calculated after the internal parameters of the camera are obtained, and coordinates of the ideal undistorted point in the pixel coordinate system, coordinates of the ideal undistorted point in the image coordinate system and distortion parameters are calculated to obtain coordinates of the actually distorted point in the pixel coordinate system.
4. The binocular calibration method of the wearable device of claim 3, wherein: in step S211, an optimal calibration result is obtained by using an optimal iterative algorithm.
5. The binocular calibration method of the wearable device of claim 1, wherein: the step S300 includes the following steps:
S301, for a point P in space with coordinates Pw in the world coordinate system, its coordinates in the left and right camera coordinate systems of the wearable device can be expressed as:

    Pl = Rl·Pw + Tl
    Pr = Rr·Pw + Tr

wherein Pr = R·Pl + T;
S302, calculating the rotation matrix R between the left and right cameras from the rotation matrix Rr of the right camera relative to the calibration object obtained through monocular calibration and the rotation matrix Rl of the left camera relative to the calibration object obtained through monocular calibration; and calculating the translation vector T between the left and right cameras from the translation vector Tr of the right camera relative to the calibration object, the translation vector Tl of the left camera relative to the calibration object, and the rotation matrix R between the left and right cameras;

wherein the left camera and the right camera are each calibrated monocularly.
6. The binocular calibration method of the wearable device of claim 5, wherein: the step S400 includes the following steps:
S401, defining the intersection of the line connecting the origins of the two camera coordinate systems in space with an image plane as the epipole;

S402, setting a rotation matrix Rrect that moves the epipole to infinity:

    Rrect = [ e1^T; e2^T; e3^T ];

S403, obtaining

    e1 = T/||T||

wherein T = [Tx Ty Tz]^T;

S404, obtaining e2, orthogonal to the principal optical axis direction (0, 0, 1) and to e1 via their cross product:

    e2 = [-Ty, Tx, 0]^T / sqrt(Tx^2 + Ty^2);

S405, letting e3 be orthogonal to e1 and e2, one can obtain: e3 = e1 × e2;

S406, left-multiplying Rrect onto the matrices of the left and right camera coordinate systems obtained after decomposing R, to obtain the stereo rectification matrix.
7. The binocular calibration method of the wearable device of claim 5, wherein: the step S406 further includes the following steps:
S407, obtaining the reprojection matrix Q from the stereo rectification matrix, which realizes the conversion between the world coordinate system {world} and the pixel coordinate system {pixel};

S408, obtaining

    [X Y Z W]^T = Q·[x y d 1]^T

wherein d denotes the disparity, the three-dimensional coordinates are (X/W, Y/W, Z/W), and cx and cx' denote the optical centers of the left and right images respectively;

S409, letting cx' = cx, the element in the fourth row and fourth column of Q is 0, and the three-dimensional coordinates of the spatial object can be expressed as

    (x, y, z) = (X/W, Y/W, Z/W).
8. The binocular calibration method of the wearable device of claim 1, wherein: the surface of the calibration plate is provided with checkerboard patterns, and intersection points between the checkerboards are angular points.
9. The binocular calibration method of the wearable device of claim 1, wherein: the surface of the calibration plate is provided with directional checkerboard patterns, each grid is internally provided with a unique asymmetric pattern, cross points among the checkerboard patterns are angular points, and each angular point is provided with a unique code.
10. The binocular calibration method of the wearable device of claim 8 or 9, characterized by further comprising the step of: S500, judging the calibration precision by judging whether a first judgment condition and a second judgment condition are met simultaneously;

the first judgment condition is that, with the binocular module aligned with the checkerboard, the left and right images are rectified using the calibrated parameters, the checkerboard corners are detected in the rectified images, and the difference of the vertical coordinates of the matched points is smaller than a preset first threshold;

the second judgment condition is that the spatial coordinates of the corners are recovered from the disparity map using the detected matched points, and the difference between the distance between the spatial coordinates of adjacent checkerboard corners and the true distance between the checkerboard corners is smaller than a second preset threshold.
11. A binocular calibration device of a wearable device, characterized by comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the binocular calibration method of the wearable device of any of claims 1 to 10 via execution of the executable instructions.
12. A computer readable storage medium storing a program, wherein the program when executed implements the steps of the binocular scaling method of the wearable device of any of claims 1 to 10.
CN201910711384.0A 2019-08-02 2019-08-02 Binocular calibration method and device of wearable device and storage medium Pending CN112308925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910711384.0A CN112308925A (en) 2019-08-02 2019-08-02 Binocular calibration method and device of wearable device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910711384.0A CN112308925A (en) 2019-08-02 2019-08-02 Binocular calibration method and device of wearable device and storage medium

Publications (1)

Publication Number Publication Date
CN112308925A true CN112308925A (en) 2021-02-02

Family

ID=74485443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910711384.0A Pending CN112308925A (en) 2019-08-02 2019-08-02 Binocular calibration method and device of wearable device and storage medium

Country Status (1)

Country Link
CN (1) CN112308925A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842118A (en) * 2012-07-17 2012-12-26 刘怡光 New robust stereopair correcting method
US20130058581A1 (en) * 2010-06-23 2013-03-07 Beihang University Microscopic Vision Measurement Method Based On Adaptive Positioning Of Camera Coordinate Frame
CN104182982A (en) * 2014-08-27 2014-12-03 大连理工大学 Overall optimizing method of calibration parameter of binocular stereo vision camera
CN106981083A (en) * 2017-03-22 2017-07-25 大连理工大学 The substep scaling method of Binocular Stereo Vision System camera parameters
CN108053450A (en) * 2018-01-22 2018-05-18 浙江大学 A kind of high-precision binocular camera scaling method based on multiple constraint
CN108830905A (en) * 2018-05-22 2018-11-16 苏州敏行医学信息技术有限公司 The binocular calibration localization method and virtual emulation of simulating medical instrument cure teaching system
CN109685851A (en) * 2018-10-08 2019-04-26 上海肇观电子科技有限公司 Hand and eye calibrating method, system, equipment and the storage medium of walking robot

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802127A (en) * 2021-03-31 2021-05-14 深圳中科飞测科技股份有限公司 Calibration method and device, calibration equipment and storage medium
CN112802127B (en) * 2021-03-31 2021-07-20 深圳中科飞测科技股份有限公司 Calibration method and device, calibration equipment and storage medium
CN113084827A (en) * 2021-04-01 2021-07-09 北京飞影科技有限公司 Method and device for calibrating optical center position of camera device
CN113084827B (en) * 2021-04-01 2022-06-14 北京飞影科技有限公司 Method and device for calibrating optical center position of camera device
CN113112553A (en) * 2021-05-26 2021-07-13 北京三快在线科技有限公司 Parameter calibration method and device for binocular camera, electronic equipment and storage medium
CN113112553B (en) * 2021-05-26 2022-07-29 北京三快在线科技有限公司 Parameter calibration method and device for binocular camera, electronic equipment and storage medium
CN113894055A (en) * 2021-09-06 2022-01-07 电子科技大学 Hardware surface defect detection and classification system and method based on machine vision
CN114067002A (en) * 2022-01-17 2022-02-18 江苏中云筑智慧运维研究院有限公司 Binocular camera external parameter determination method and system
CN114205483A (en) * 2022-02-17 2022-03-18 杭州思看科技有限公司 Scanner precision calibration method and device and computer equipment

Similar Documents

Publication Publication Date Title
CN112308925A (en) Binocular calibration method and device of wearable device and storage medium
EP2202686B1 (en) Video camera calibration method and device thereof
CN105096329B (en) Method for accurately correcting image distortion of ultra-wide-angle camera
CN111127422A (en) Image annotation method, device, system and host
CN109993798B (en) Method and equipment for detecting motion trail by multiple cameras and storage medium
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN112767542A (en) Three-dimensional reconstruction method of multi-view camera, VR camera and panoramic camera
EP3547260B1 (en) System and method for automatic calibration of image devices
JP2006252473A (en) Obstacle detector, calibration device, calibration method and calibration program
CN111540004A (en) Single-camera polar line correction method and device
CN114200430A (en) Calibration method, system, equipment and storage medium for laser radar and camera
CN111461963B (en) Fisheye image stitching method and device
CN113920206B (en) Calibration method of perspective tilt-shift camera
CN111145271A (en) Method and device for determining accuracy of camera parameters, storage medium and terminal
CN112288826A (en) Calibration method and device of binocular camera and terminal
CN112233189A (en) Multi-depth camera external parameter calibration method and device and storage medium
CN113496503B (en) Point cloud data generation and real-time display method, device, equipment and medium
CN114792343B (en) Calibration method of image acquisition equipment, method and device for acquiring image data
CN115965697A (en) Projector calibration method, calibration system and device based on Samm's law
CN116188594A (en) Calibration method, calibration system, calibration device and electronic equipment of camera
CN115567781A (en) Shooting method and device based on smart camera and computer equipment
CN114926542A (en) Mixed reality fixed reference system calibration method based on optical positioning system
CN109801312B (en) Multi-lens motion track monitoring method, system, equipment and storage medium
JP2007034964A (en) Method and device for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter, and program for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter
CN113160059A (en) Underwater image splicing method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination