CN109345595B - Stereoscopic vision sensor calibration method based on spherical lens - Google Patents

Stereoscopic vision sensor calibration method based on spherical lens

Info

Publication number
CN109345595B
CN109345595B CN201811071566.8A
Authority
CN
China
Prior art keywords
camera
lens
spherical lens
vision sensor
lenses
Prior art date
Legal status
Active
Application number
CN201811071566.8A
Other languages
Chinese (zh)
Other versions
CN109345595A (en)
Inventor
刘震
阎峰
胡杨
李若铭
吴穗宁
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN201811071566.8A
Publication of CN109345595A
Application granted
Publication of CN109345595B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence

Abstract

The invention discloses a stereoscopic vision sensor calibration method based on spherical lenses, which comprises the following steps: placing a checkerboard target at suitable positions multiple times and shooting the target images refracted by the multiple lenses; extracting the checkerboard corner points refracted by each lens in the images; calibrating initial values of the structural parameters with an analytical method; and obtaining the optimal solution of the structural parameters through nonlinear optimization. Compared with a perspective projection camera, the sensor has a large field of view and achieves three-dimensional reconstruction with a single camera, making it suitable for large-field-of-view measurement in confined spaces.

Description

Stereoscopic vision sensor calibration method based on spherical lens
Technical Field
The invention relates to a sensor calibration technology and a vision measurement technology, in particular to a stereoscopic vision sensor calibration method based on a spherical lens.
Background
Stereoscopic vision sensors based on catadioptric elements are compact, easy to build, and low in cost, and they are free of synchronization errors, so they have attracted wide attention from scholars in recent years. A stereoscopic vision sensor based on spherical lenses combines one camera with two or more spherical lenses; it has a larger field of view and is suitable for completing large-field-of-view measurement in a narrow space.
Calibration of a spherical-lens-based stereoscopic vision sensor comprises two parts: calibration of the camera's internal parameters and calibration of the structural parameters. Since camera internal-parameter calibration is already well covered in the literature, the discussion here focuses on structural-parameter calibration, for which there are mainly two approaches. One is the virtual-camera method proposed by Kah et al., which treats the system as several virtual cameras and calibrates the structural parameters between them. The other is the analytical method, which establishes an analytical model of the refracted light path to obtain an analytical solution of the structural parameters. Agrawal et al. made a prominent contribution in this field: by establishing coordinates in the meridian plane of a spherical lens, and using as constraints the coplanarity of the incident ray, the object point, and the optical axis, together with the fact that different lenses refract the same target, they calibrated the optical axis direction, the axis length, and the rotation matrix and translation vector from the world coordinate system to the camera coordinate system.
However, the existing calibration methods for this kind of refraction-array stereoscopic vision sensor either impose strict requirements on the relative position of the lenses and the camera (for example, the camera and the lenses must be parallel), which is difficult to realize in practice; or require manual intervention (for example, manually marking the spherical lens contour), which makes the calibration process complex; or lack precision, especially at the edge of the spherical lens, where larger aberrations and insufficient depth of field degrade imaging quality, so feature points are extracted inaccurately, calibration precision is poor, and the calibration result cannot support accurate measurement.
Disclosure of Invention
In view of this, the main objective of the present invention is to provide a calibration method for a stereoscopic vision sensor based on a spherical lens, which can realize high-precision calibration of the stereoscopic vision sensor and complete accurate measurement based on the same.
In order to achieve the purpose, the invention adopts the technical scheme that:
a stereoscopic vision sensor calibration method based on a spherical lens comprises the following steps:
a. calibrating the camera in the spherical-lens-based stereoscopic vision sensor; placing a checkerboard target at suitable positions multiple times, at least once such that both lenses can image the checkerboard target, and shooting the target images refracted by the multiple lenses with the spherical-lens-based stereoscopic vision sensor; extracting the checkerboard corner points refracted by each lens in the images;
b. calibrating the initial values of the structural parameters by using an analytical method;
c. and obtaining the optimal solution of the structural parameters through nonlinear optimization.
The step a of shooting the target image refracted by the plurality of lenses comprises the following steps:
(11) the spherical lens array consists of two spherical lenses;
(12) adjusting the distance between the spherical lenses and the distance between the spherical lens array and the camera, so that both lenses are imaged within the field of view and are in sharp focus.
The implementation steps for extracting the checkerboard corner points refracted by each lens in the image in the step a are as follows:
(21) obtaining an initial value of an angular point through a multi-scale angular point extraction algorithm;
(22) obtaining the coordinates of the image feature points free of camera lens distortion through an image distortion correction method.
The step b of calibrating the structural parameters by using an analytical method comprises the following steps:
(31) the calibrated structural parameters comprise: the axis directions A_i (i = 1, 2) formed by the center of each spherical lens and the perspective projection center of the camera; the distances d_i (i = 1, 2) between the center of each spherical lens and the perspective projection center of the camera; and the rotation matrix R and translation vector t from the world coordinate system to the camera coordinate system;
(32) respectively solving A_i, [R t] and d_i according to the common meridian plane constraint, the lens constraint and the analytic equation.
In step c, the optimal solution of the structural parameters is obtained through nonlinear optimization: the trust-region reflective method is adopted, nonlinear optimization is carried out with the minimum reprojection error and the minimum reconstructed point-distance error as the objective function, and the optimal solution of A_i, [R t] and d_i is obtained.
Compared with the prior art, the invention has the advantages that:
(1) The calibration method obtains accurate target feature-point coordinates through an uncertainty-based method, uses the reprojection error and the reconstructed point-distance error jointly as the objective function, and performs nonlinear optimization over multiple target poses. The resulting calibration of the spherical-lens-based stereoscopic vision sensor is highly accurate, with a reprojection error at the 0.1-pixel level. The calibration precision of the sensor model is sufficient for reconstruction and measurement, giving it high practical value.
(2) The invention provides a reconstruction model for the spherical-lens-based stereoscopic vision sensor, which is applied in the nonlinear optimization to ensure that the calibration result is accurate in actual measurement. The method is suitable for high-precision calibration and measurement of spherical-lens-based stereoscopic vision sensors.
Drawings
FIG. 1 is a flow chart of a calibration method of a stereoscopic vision sensor based on a spherical lens according to the present invention;
FIG. 2 is the calibration model of the stereoscopic vision sensor based on spherical lenses;
FIG. 3 illustrates the coplanarity constraint in the meridian plane during refraction;
fig. 4 is a measurement model of a stereo vision sensor based on a spherical lens.
Detailed Description
The basic idea of the invention is: acquire the coordinates of the image feature points with an uncertainty-based method; obtain initial values of the structural parameters with an analytical method; take the reprojection error and the reconstruction error as the objective function and use multiple pictures to calibrate the spherical-lens-based stereoscopic vision sensor; and realize reconstruction and measurement with the sensor based on the fact that the exit rays from all lenses for the same object point converge at one point.
The present invention will be described in further detail below with reference to a stereoscopic vision sensor based on a spherical lens array, which is composed of one camera and two spherical lenses.
As shown in FIG. 1, the calibration method of the stereoscopic vision sensor based on the spherical lens of the invention mainly comprises the following steps:
step 11: and (3) measuring the stereoscopic vision sensor based on the spherical lens.
Here, the measurement process of the stereoscopic vision sensor is described. An object point P in space is refracted by the two spherical lenses and forms two image points p_1 and p_2 on the image plane. After the image feature points are extracted, distortion is removed with the distortion formula, giving the undistorted image feature point coordinates p_1 = [u_1, v_1]^T and p_2 = [u_2, v_2]^T. According to the perspective relation of the camera,
z_c · [u, v, 1]^T = K · [x_c, y_c, z_c]^T,
where K is the camera intrinsic matrix, their coordinates in the camera coordinate system, p_{c,1} = [x_{c,1}, y_{c,1}, z_{c,1}]^T and p_{c,2} = [x_{c,2}, y_{c,2}, z_{c,2}]^T, can be calculated separately. According to the refraction light path analytical model in "Single Image Calibration of Multi-Axial Imaging Systems" by Agrawal et al., the exit point on each lens and the direction of the exit ray can then be calculated separately.
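The following is a minimal sketch, not taken from the patent, of how the undistorted pixel coordinates p_1 and p_2 can be turned into ray directions in the camera frame using the pinhole relation above; the intrinsic matrix K and the pixel values are placeholder assumptions.

```python
import numpy as np

# Assumed intrinsic matrix (focal lengths and principal point are placeholders)
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])

def pixel_to_camera_ray(p_undistorted, K):
    """Back-project an undistorted pixel [u, v] to a unit ray in the camera frame."""
    u, v = p_undistorted
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

# Rays toward the two image points p1, p2 of the same object point P
r1 = pixel_to_camera_ray([700.0, 530.0], K)  # hypothetical p1 = [u1, v1]
r2 = pixel_to_camera_ray([580.0, 525.0], K)  # hypothetical p2 = [u2, v2]
```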
Step 12: building the stereoscopic vision sensor system based on the spherical lens.
Here, the two spherical lenses and the camera are arranged to form the spherical-lens-based stereoscopic vision sensor, such that the camera captures both spherical lenses completely and in sharp focus. The camera lens is focused on the spherical lenses, so that a clear image of an object can be observed through each spherical lens.
Step 13: calibrating the camera in the stereoscopic vision sensor based on the spherical lens.
Here, the camera of the vision sensor is calibrated, i.e., the internal parameters of the camera are solved; the specific method is described in detail in "A Flexible New Technique for Camera Calibration", Microsoft Research, MSR-TR-98-71, 1998, by Zhang Zhengyou.
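As an illustration only, this intrinsic calibration could be run with OpenCV's implementation of Zhang's planar-target method; the board geometry, image folder, and termination criteria below are assumptions, not values from the patent.

```python
import cv2
import numpy as np
import glob

board_cols, board_rows, square_mm = 11, 8, 10.0            # assumed checkerboard geometry
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):               # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix, dist the lens-distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```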
Step 14: a checkerboard target is placed in front of the stereoscopic vision sensor at least once, such that both lenses can image it. The spherical-lens-based stereoscopic vision sensor then shoots the target images seen through the two lenses.
Here, fig. 2 is a schematic view of the spherical-lens-based stereoscopic vision sensor. P_ij denotes the jth point of the ith pose of the target, p_{m,1ij} denotes the image of that target feature point formed by the 1st spherical lens, p_{m,2ij} denotes the image formed by the 2nd spherical lens, and p_{1ij} and p_{2ij} respectively denote the imaging points on the image plane refracted by the 1st and 2nd spherical lenses. A world coordinate system O-XY is established on the target surface; the rotation matrix and translation vector between the world coordinate system and the camera coordinate system O_c-x_c y_c z_c are R and t, respectively. Starting from the perspective projection center of the camera, the unit vector of the axis direction toward the center of the 1st spherical lens is A_1, and the unit vector of the axis direction toward the center of the 2nd spherical lens is A_2. The distances from the centers of the 1st and 2nd spherical lenses to the perspective projection center of the camera are called the axis distances d_1 and d_2 of the two axes, respectively.
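For orientation, a small container for the structural parameters named in FIG. 2 might look as follows; this is a sketch for illustration only, and the field names are assumptions rather than identifiers from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SphereStereoParams:
    A1: np.ndarray   # unit vector, camera COP -> centre of lens 1
    A2: np.ndarray   # unit vector, camera COP -> centre of lens 2
    d1: float        # axis distance from COP to centre of lens 1
    d2: float        # axis distance from COP to centre of lens 2
    R: np.ndarray    # 3x3 rotation, world (target) frame -> camera frame
    t: np.ndarray    # translation, world (target) frame -> camera frame

    def lens_centres(self):
        """Lens centres in the camera frame: c_i = d_i * A_i."""
        return self.d1 * self.A1, self.d2 * self.A2
```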
Step 15: extracting the coordinates of the checkerboard corner points in the two lenses.
In order to overcome the low imaging quality of lens edge points, a multi-scale corner extraction method is adopted: Harris corner extraction templates with m different scale parameters are used, so that extracting one feature point P yields m coordinates, forming a coordinate set Q = {p_1, p_2, ..., p_m}. The mean of this set,
p̄ = (1/m) Σ_{k=1}^{m} p_k,
is taken as the precise coordinate value for calibration.
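A hedged sketch of this multi-scale refinement is given below: one coarse corner is refined at m different window sizes and the m results are averaged. It uses cv2.cornerSubPix as a stand-in for the Harris templates mentioned above, and the window sizes are assumptions.

```python
import cv2
import numpy as np

def multiscale_corner(gray, corner_init, win_sizes=(5, 7, 9, 11, 15)):
    """Refine one corner at several scales and return the averaged coordinate."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 1e-3)
    refined = []
    for w in win_sizes:
        c = np.array([[corner_init]], dtype=np.float32)   # shape (1, 1, 2)
        c = cv2.cornerSubPix(gray, c, (w, w), (-1, -1), criteria)
        refined.append(c.reshape(2))
    return np.mean(refined, axis=0)                        # averaged coordinate
```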
Step 16: calibrating with the corrected coordinates using an analytical method to obtain initial values of the parameters.
Here, the basic method is the one proposed by Agrawal in "Single Image Calibration of Multi-Axial Imaging Systems", with certain improvements for calibration using multiple poses; it specifically includes the following steps:
step 161: if one pose is used for calibration, as shown in figure 3, for each lens there is a coplanar constraint on its meridian plane: (RP (j) + t)T(AiX v (i, j)) -0, where for any pose position, let p (j) be the jth feature point of the target, R and t be the rotation matrix and translation vector, respectively, from the world coordinate system attached to the target to the camera coordinate system, aiThe feature point is the ith spherical lens, and v (i, j) refers to the light ray of the jth feature point after being refracted by the ith lens. For convenience of calculation, a coordinate system is established on a meridian plane by taking a camera perspective projection center COP as an origin and taking an optical axis as a coordinate axis, the COP is regarded as an emitting point of all light rays according to the reversibility of an optical path, and v (i, j) is incident light rays refracted by an ith lens for a jth refraction point. Meridian planes corresponding to different feature points intersect on the optical axis, and the direction of the axis can be calibrated by using at least 8 feature points; when a plurality of postures are used for calibration, any posture is adopted in the axial direction for calibration to obtain an initial value.
Step 162: combining the rays from different lenses with the coplanarity constraint, the calibration of the external parameters can be completed with only one picture; when multiple poses are used for calibration, the external parameters of each pose relative to the camera coordinate system are calibrated separately.
Step 163: after the above parameters are obtained, the feature points are transformed from the world coordinate system to the camera coordinate system and then projected onto the meridian plane, which yields a 12th-degree polynomial equation in the axis distance d. The equation is solved, the complex roots are discarded, and the final solution is selected using geometric constraints. When multiple poses are used, the axis distance is computed for each pose and the median is taken as its initial value.
Step 17: obtaining the optimal solution of the structural parameters of the stereoscopic vision sensor through nonlinear optimization to complete the calibration.
The checkerboard target is placed multiple times, and the structural parameters of the stereoscopic vision sensor are optimized under the maximum likelihood criterion to obtain their optimal solution.
Nonlinear optimization is performed with the trust-region reflective method, taking the minimum reprojection error and the minimum reconstructed point-distance error as the objective function.
Let the imaging points on the image plane of the jth feature point of the ith pose, after refraction by the first and second spherical lenses, be p_1ij and p_2ij. The reprojection formula is given in "Analytical Forward Projection for Axial Non-Central Dioptric and Catadioptric Cameras" by Agrawal et al.; let the two reprojected points of the object point P_ij in the camera coordinate system be p̂_1ij and p̂_2ij. Using M pictures in total, with N feature points in each picture, the reprojection error can be expressed as
f_rep(a) = Σ_{i=1}^{M} Σ_{j=1}^{N} [ Dist(p_1ij, p̂_1ij) + Dist(p_2ij, p̂_2ij) ],
where Dist(A, B) denotes the distance between the two points A and B.
For any spatial object point P, as shown in FIG. 4, the refraction points on the side close to the object point are m_2l and m_2r, the corresponding exit rays are v_2l and v_2r, and the intersection point of the two exit rays is the coordinate of the object point in space. In the camera coordinate system let m_2l = [m_lx, m_ly, m_lz]^T, m_2r = [m_rx, m_ry, m_rz]^T, v_2l = [v_lx, v_ly, v_lz]^T and v_2r = [v_rx, v_ry, v_rz]^T; then
m_2l + k_1·v_2l = m_2r + k_2·v_2r = P̂_c,    (3)
where k_1 and k_2 are scale factors and P̂_c is the reconstructed object-point coordinate in the camera frame. Formula (3) can be rewritten as
k_1·v_2l − k_2·v_2r = m_2r − m_2l.    (4)
Expanded into matrix form:
[ v_lx  −v_rx ]   [ k_1 ]   [ m_rx − m_lx ]
[ v_ly  −v_ry ] · [ k_2 ] = [ m_ry − m_ly ]    (5)
[ v_lz  −v_rz ]             [ m_rz − m_lz ]
Let A = [v_2l  −v_2r], k = [k_1, k_2]^T and b = m_2r − m_2l; formula (5) then becomes A·k = b. The least-squares solution k̂ = (A^T·A)^{-1}·A^T·b is taken as the optimal solution for k. The reconstructed point coordinates computed from the two lenses are then
P̂_l = m_2l + k̂_1·v_2l and P̂_r = m_2r + k̂_2·v_2r,
and their average
P̂ = (P̂_l + P̂_r)/2
is taken as the reconstructed coordinate.
The jth feature point P_ij of the ith target pose in the calibration process thus has reconstructed coordinates P̂_ij.
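The two-ray reconstruction of formulas (3)-(5) can be written compactly as a least-squares problem; the sketch below follows that derivation, with the refraction points and exit rays assumed to be given in the camera frame.

```python
import numpy as np

def reconstruct_point(m2l, v2l, m2r, v2r):
    """Least-squares intersection of the two exit rays (camera frame)."""
    A = np.column_stack((v2l, -v2r))           # 3x2 coefficient matrix from (5)
    b = m2r - m2l
    k, *_ = np.linalg.lstsq(A, b, rcond=None)  # k = (A^T A)^-1 A^T b
    P_left = m2l + k[0] * v2l                  # point along the left exit ray
    P_right = m2r + k[1] * v2r                 # point along the right exit ray
    return 0.5 * (P_left + P_right)            # averaged reconstructed coordinate
```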
During the nonlinear optimization, the difference between the distance of adjacent target points in the transverse and longitudinal directions and the actual point spacing D is minimized and used as the reconstruction-error objective function. Let the N feature points of each pose be arranged in m rows and n columns, and let the feature point of the ith target pose at row r and column c be P_{i,rc}, with reconstructed coordinate P̂_{i,rc}. This part of the objective function can be expressed as
f_rec1(a) = Σ_{i=1}^{M} Σ_{r=1}^{m} Σ_{c=1}^{n−1} | Dist(P̂_{i,rc}, P̂_{i,r(c+1)}) − D |,
f_rec2(a) = Σ_{i=1}^{M} Σ_{c=1}^{n} Σ_{r=1}^{m−1} | Dist(P̂_{i,rc}, P̂_{i,(r+1)c}) − D |,
f_rec(a) = f_rec1(a) + f_rec2(a).    (8)
The overall objective function is
f(a) = min( f_rep(a) + f_rec(a) ).    (9)
Here a is the parameter vector to be optimized, a = (c_1, c_2, r_1, r_2, ..., r_M, t_1, t_2, ..., t_M). A nonlinear optimization method is adopted to solve for the optimal a under the maximum likelihood criterion. The vectors c_1 = d_1·A_1 and c_2 = d_2·A_2 describe the vectors from the perspective projection center of the camera to the two spherical lens centers, i.e., the structural parameters that the stereoscopic vision sensor system needs to calibrate.
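Finally, a hedged sketch of the refinement step: the parameter vector a is refined with SciPy's trust-region reflective least-squares solver (method='trf'), stacking the reprojection residuals and the point-spacing residuals. The helpers reprojection_residuals and spacing_residuals are hypothetical stand-ins for f_rep and f_rec; neither they nor the observation container are defined in the patent.

```python
import numpy as np
from scipy.optimize import least_squares

# a = (c1, c2, r1..rM, t1..tM): c_i = d_i * A_i, r_i / t_i the rotation (e.g. as
# Rodrigues vectors) and translation of each of the M target poses.
def pack(c1, c2, rvecs, tvecs):
    return np.concatenate([c1, c2] + list(rvecs) + list(tvecs))

def residuals(a, observations, D):
    # Hypothetical helpers evaluating the reprojection and point-spacing errors
    rep = reprojection_residuals(a, observations)
    rec = spacing_residuals(a, observations, D)
    return np.concatenate([rep, rec])

# a0 packs the analytic initial values from step 16; 'trf' is SciPy's
# trust-region reflective least-squares solver.
result = least_squares(residuals, a0, method='trf', args=(observations, D))
a_optimal = result.x
```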

Claims (1)

1. A stereoscopic vision sensor calibration method based on a spherical lens, characterized by comprising the following steps:
a. calibrating the camera in the spherical-lens-based stereoscopic vision sensor; placing a checkerboard target at suitable positions multiple times, at least once such that both lenses can image the checkerboard target, and shooting the target images refracted by the multiple lenses with the spherical-lens-based stereoscopic vision sensor; extracting the checkerboard corner points refracted by each lens in the images;
b. calibrating the initial values of the structural parameters by using an analytical method;
c. obtaining an optimal solution of the structural parameters through nonlinear optimization;
the step a of shooting the target image refracted by the plurality of lenses comprises the following steps:
(11) the spherical lens array consists of two spherical lenses;
(12) adjusting the distance between the spherical lenses and the distance between the spherical lens array and the camera, so that both lenses are imaged within the field of view and are in sharp focus;
the implementation steps for extracting the checkerboard corner points refracted by each lens in the image in the step a are as follows:
(21) obtaining an initial value of an angular point through a multi-scale angular point extraction algorithm;
(22) obtaining the coordinates of the image feature points free of camera lens distortion through an image distortion correction method;
the step b of calibrating the structural parameters by using an analytical method comprises the following steps:
(31) the calibrated structural parameters comprise: the axis directions A_i (i = 1, 2) formed by the center of each spherical lens and the perspective projection center of the camera; the distances d_i (i = 1, 2) between the center of each spherical lens and the perspective projection center of the camera; and the rotation matrix R and translation vector t from the world coordinate system to the camera coordinate system;
(32) respectively solving A_i, [R t] and d_i according to the common meridian plane constraint, the lens constraint and the analytic equation;
In step c, the optimal solution of the structural parameters is obtained through nonlinear optimization: the trust-region reflective method is adopted, nonlinear optimization is carried out with the minimum reprojection error and the minimum reconstructed point-distance error as the objective function, and the optimal solution of A_i, [R t] and d_i is obtained.
CN201811071566.8A 2018-09-14 2018-09-14 Stereoscopic vision sensor calibration method based on spherical lens Active CN109345595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811071566.8A CN109345595B (en) 2018-09-14 2018-09-14 Stereoscopic vision sensor calibration method based on spherical lens

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811071566.8A CN109345595B (en) 2018-09-14 2018-09-14 Stereoscopic vision sensor calibration method based on spherical lens

Publications (2)

Publication Number Publication Date
CN109345595A CN109345595A (en) 2019-02-15
CN109345595B (en) 2022-02-11

Family

ID=65305108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811071566.8A Active CN109345595B (en) 2018-09-14 2018-09-14 Stereoscopic vision sensor calibration method based on spherical lens

Country Status (1)

Country Link
CN (1) CN109345595B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111482963B (en) * 2020-04-08 2022-11-25 江西理工大学 Calibration method of robot
CN117351091A (en) * 2023-09-14 2024-01-05 成都飞机工业(集团)有限责任公司 Camera array calibration device and use method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101118648A (en) * 2007-05-22 2008-02-06 南京大学 Road conditions video camera marking method under traffic monitoring surroundings
CN101526338A (en) * 2009-04-15 2009-09-09 北京信息科技大学 Field calibration method of structural parameter of line structure light vision sensing system
CN106127745A (en) * 2016-06-17 2016-11-16 凌云光技术集团有限责任公司 The combined calibrating method of structure light 3 D visual system and line-scan digital camera and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101118648A (en) * 2007-05-22 2008-02-06 南京大学 Road conditions video camera marking method under traffic monitoring surroundings
CN101526338A (en) * 2009-04-15 2009-09-09 北京信息科技大学 Field calibration method of structural parameter of line structure light vision sensing system
CN106127745A (en) * 2016-06-17 2016-11-16 凌云光技术集团有限责任公司 The combined calibrating method of structure light 3 D visual system and line-scan digital camera and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A two-step calibration method of a large FOV binocular stereovision sensor for onsite measurement; Zhenxing Wang et al.; Measurement; 2015-02-28; vol. 62; pp. 15-24 *
Single image calibration of multi-axial imaging systems; Amit Agrawal et al.; 26th IEEE Conference on Computer Vision and Pattern Recognition; 2013-06-28; pp. 1399-1406 *
Global calibration of a multi-sensor line-structured-light vision measurement system (多传感器线结构光视觉测量系统全局校准); Huang Bangkui et al.; Journal of Optoelectronics·Laser (光电子·激光); 2011-12-15; vol. 22, no. 12; pp. 1816-1820 *
Alternating iterative metric reconstruction method from multi-view calibration images (多视点标定图像的交替迭代度量重建方法); Chen Jing et al.; Journal of Beijing Jiaotong University (北京交通大学学报); 2012-04-15; vol. 36, no. 2; pp. 16-23 *

Also Published As

Publication number Publication date
CN109345595A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109272570B (en) Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
CN101577002B (en) Calibration method of fish-eye lens imaging system applied to target detection
US8934721B2 (en) Microscopic vision measurement method based on adaptive positioning of camera coordinate frame
CN109003311B (en) Calibration method of fisheye lens
CN103971353B (en) Splicing method for measuring image data with large forgings assisted by lasers
CN107248178B (en) Fisheye camera calibration method based on distortion parameters
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN102567989A (en) Space positioning method based on binocular stereo vision
CN107025670A (en) A kind of telecentricity camera calibration method
CN109615663A Panoramic video correction method and terminal
CN109325981B (en) Geometric parameter calibration method for micro-lens array type optical field camera based on focusing image points
CN109903227A (en) Full-view image joining method based on camera geometry site
CN112985293B (en) Binocular vision measurement system and measurement method for single-camera double-spherical mirror image
CN105488766B Fisheye image correction method and device
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN109345595B (en) Stereoscopic vision sensor calibration method based on spherical lens
CN113947638B (en) Method for correcting orthographic image of fish-eye camera
CN108269234A (en) A kind of lens of panoramic camera Attitude estimation method and panorama camera
CN108981608B (en) Novel line structured light vision system and calibration method
CN113298886A (en) Calibration method of projector
CN108109111A (en) Pass through the method for the more fish eye lens panorama cameras of software and hardware combining assembly and adjustment
CN112258581A (en) On-site calibration method for multi-fish glasses head panoramic camera
CN110766752A (en) Virtual reality interactive glasses with reflective mark points and space positioning method
CN113989105B (en) Single-camera spherical mirror reflection imaging projection device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant