CN104374338A - Single-axis rotation angle vision measurement method based on fixed camera and single target - Google Patents


Info

Publication number: CN104374338A (application CN201410506279.0A)
Authority: CN (China)
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN104374338B
Inventors: 全权, 董洪信, 张瑞峰, 付强, 蔡开元
Assignee: Beihang University (original assignee: Beihang University)
Application CN201410506279.0A filed 2014-09-28 by Beihang University; priority date 2014-09-28
Publication of CN104374338A: 2015-02-25; grant of CN104374338B: 2017-04-26
Legal status: Active


Abstract

The invention provides a single-axis rotation angle vision measurement method based on a fixed camera and a single target. The method includes four steps: first, calibrating the internal and external parameters of the camera; second, obtaining initial values of the external parameters related to the rotation about the axis; third, performing nonlinear optimization on the internal parameters $p_{in}$ and external parameters $p_{out}$; fourth, measuring the rotation angle with the calibrated internal and external parameters. The method measures the rotation angle of a motion about a fixed axis with a monocular camera and solves the problem that the target is hard to install in existing angle vision measurement methods; it also works under occlusion and is highly practical.

Description

A Visual Measurement Method for a Single-Axis Rotation Angle Based on a Fixed Camera and a Single Target

Technical Field

The invention relates to a visual measurement method for a single-axis rotation angle based on a fixed camera and a single target, and belongs to the technical field of visual measurement.

Background Art

Angular displacement measurement is widely used in industry, aerospace, and other fields. Because the electrical and mechanical parameters of an angular displacement sensor drift in harsh environments, the sensor must be calibrated and corrected periodically. Vision measurement is attractive for its simple structure, non-contact real-time measurement, and convenient on-site installation and debugging. Existing visual angle measurement methods fall into binocular and monocular approaches. Binocular methods are severely limited by their small common field of view and complicated installation. Most existing monocular methods impose requirements on how the target is mounted, which makes installation difficult. This patent application therefore proposes a visual measurement method for a single-axis rotation angle based on a fixed camera and a single target that places no requirement on target installation. In addition, the method remains valid under occlusion, which makes it more practical.

Summary of the Invention

The invention proposes a visual measurement method for a single-axis rotation angle based on a fixed camera and a single target; it measures the rotation angle of a motion about a fixed axis with a monocular camera. It solves the problem of difficult target installation in existing visual angle measurement methods. The method is also applicable when occlusion is present, which makes it highly practical.

With a two-dimensional target, the method calibrates the camera's internal parameters and its external parameters relative to the rotation axis, and then measures the rotation angle. It places no requirement on target installation; the target can be fixed at an arbitrary position.

The camera model adopted in the invention is a nonlinear perspective projection model, described as follows:

1. Linear camera model

As shown in Figure 1, the imaging position of any spatial point P can be approximated by the pinhole model: the projection p of P in the image is the intersection of the line through the optical center O and P with the image plane. The relation between the world coordinates $(X_w, Y_w, Z_w)^T$ of P and the pixel coordinates $(u, v)^T$ of the projection p is

$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \alpha_x & 0 & u_0 & 0 \\ 0 & \alpha_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{cw} & T_{cw} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\qquad (1.1)
$$

where $\alpha_x = f/dX$ is the scale factor along the $u$ axis and $\alpha_y = f/dY$ is the scale factor along the $v$ axis. $\alpha_x, \alpha_y, u_0, v_0$ depend only on the camera itself and are called the internal parameters of the camera. $R_{cw}, T_{cw}$ are the rotation matrix and translation vector between the camera coordinate system and the world coordinate system and are called the external parameters of the camera.
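As a concrete illustration of (1.1), the following NumPy sketch projects one world point through an assumed intrinsic matrix and camera pose; every numerical value here is an illustrative placeholder, not a calibration result from this patent.

```python
import numpy as np

# Minimal sketch of the linear model (1.1); all values are placeholders.
alpha_x, alpha_y, u0, v0 = 800.0, 800.0, 320.0, 240.0
K = np.array([[alpha_x, 0.0,     u0,  0.0],
              [0.0,     alpha_y, v0,  0.0],
              [0.0,     0.0,     1.0, 0.0]])      # 3x4 intrinsic projection matrix

R_cw = np.eye(3)                                  # camera-from-world rotation (placeholder)
T_cw = np.array([0.0, 0.0, 1000.0])               # camera-from-world translation (placeholder)
Rt = np.eye(4)
Rt[:3, :3], Rt[:3, 3] = R_cw, T_cw                # 4x4 extrinsic matrix [R_cw T_cw; 0 1]

P_w = np.array([100.0, 50.0, 0.0, 1.0])           # homogeneous world point (X_w, Y_w, Z_w, 1)
s_uv1 = K @ Rt @ P_w                              # s * (u, v, 1)^T
u, v = s_uv1[:2] / s_uv1[2]                       # divide out the projective depth s
print(u, v)
```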

2. Nonlinear camera model

In practice a lens is not an ideal perspective imager; it exhibits some distortion. The invention uses a nonlinear distortion model. Pixel coordinates $(u, v)^T$ are obtained from world coordinates $(X_w, Y_w, Z_w)^T$ as follows:

$$
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R_{cw}\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T_{cw},
\qquad (1.2)
$$

$$
x = X_c/Z_c, \quad y = Y_c/Z_c, \quad r^2 = x^2 + y^2,
\qquad (1.3)
$$

$$
x_d = \begin{bmatrix}
(1 + k_1 r^2 + k_2 r^4 + k_5 r^6)\,x + 2k_3 xy + k_4(r^2 + 2x^2) \\
(1 + k_1 r^2 + k_2 r^4 + k_5 r^6)\,y + 2k_4 xy + k_3(r^2 + 2y^2)
\end{bmatrix},
\qquad (1.4)
$$

$$
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_d \\ 1 \end{bmatrix},
\qquad (1.5)
$$

where $k_1, k_2, k_3, k_4, k_5$ are the distortion coefficients. $\alpha_x, \alpha_y, u_0, v_0, k_1, k_2, k_3, k_4, k_5$ constitute the internal parameters of the camera, and $R_{cw}, T_{cw}$ are the external parameters.

For convenience of description, we write $u$ and $v$ as functions of $p_{in} = (\alpha_x, \alpha_y, u_0, v_0, k_1, k_2, k_3, k_4, k_5)^T$, $R_{cw}$, $T_{cw}$, $X_w$, $Y_w$, $Z_w$, namely

$$
u = f_u(p_{in}, R_{cw}, T_{cw}, X_w, Y_w, Z_w), \quad
v = f_v(p_{in}, R_{cw}, T_{cw}, X_w, Y_w, Z_w).
\qquad (1.6)
$$
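The projection functions $f_u, f_v$ of (1.6) chain together (1.2)–(1.5). A minimal Python sketch is given below; the parameter ordering of $p_{in}$ follows the text, under the assumption that $k_1, k_2, k_5$ act as radial and $k_3, k_4$ as tangential coefficients, as implied by (1.4).

```python
import numpy as np

def project(p_in, R_cw, T_cw, P_w):
    """Nonlinear projection (1.2)-(1.5): world point P_w -> pixel (u, v).

    p_in = (alpha_x, alpha_y, u0, v0, k1, k2, k3, k4, k5), as in the text.
    """
    ax, ay, u0, v0, k1, k2, k3, k4, k5 = p_in
    Xc, Yc, Zc = R_cw @ np.asarray(P_w, dtype=float) + T_cw        # (1.2)
    x, y = Xc / Zc, Yc / Zc                                        # (1.3)
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k5 * r2**3
    xd = radial * x + 2.0 * k3 * x * y + k4 * (r2 + 2.0 * x * x)   # (1.4), first row
    yd = radial * y + 2.0 * k4 * x * y + k3 * (r2 + 2.0 * y * y)   # (1.4), second row
    return np.array([ax * xd + u0, ay * yd + v0])                  # (1.5)
```

With all distortion coefficients set to zero this reduces to the linear model (1.1).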

The world coordinate system established by the invention is shown in Figure 2. The rotation axis is taken as the $Z_w$ axis, and the origin of the target at the reference position lies on the $X_w$ axis. Let the rotation matrix and translation vector between the target coordinate system at the reference position and the world coordinate system be $R_{wb_0}$ and $T_{wb_0}$, and let the rotation matrix and translation vector between the camera coordinate system and the world coordinate system be $R_{cw}$ and $T_{cw}$.

The proposed method consists of two stages: calibration of the internal and external parameter information, and solution of an arbitrary angle. Calibration of the parameter information builds on ordinary camera calibration and exploits the special geometric relation between coordinate systems rotating about a common axis to obtain the required external parameters; $t$, $R_{cw}$, $T_{cw}$ and the angle between each position and the reference position are the external parameters to be estimated. The arbitrary-angle solution then computes the rotation angle from the calibrated parameter information. For this stage we propose a point-matching algorithm, which allows the angle measurement to be solved even under occlusion.

In summary, the invention is a visual measurement method for a single-axis rotation angle based on a fixed camera and a single target, with the following specific steps:

Step 1: Calibrate the internal and external parameters of the camera.

In the concrete implementation we use a two-dimensional checkerboard (Figure 5) as the target. The picture sequence contains one picture at the reference position and m pictures with the rotating part turned to different angles. Suppose these m angles are $\theta_1, \theta_2, \ldots, \theta_m$. The checkerboard corner detection function detectCheckerboardPoints provided with MATLAB 2014 is used to extract the corners, and the MATLAB calibration toolbox is used for intrinsic calibration, yielding the internal parameters $p_{in}$ and the external parameters of the m+1 poses. The calibration toolbox can be downloaded from http://www.vision.caltech.edu/bouguetj/calib_doc/. These external parameters are written as

$$
[R_{cb_i}, T_{cb_i}], \quad i = 0, 1, 2, \ldots, m.
$$

$[R_{cb_i}, T_{cb_i}]$ denotes the rotation matrix and translation vector between the camera and the target when the rotation angle is $\theta_i$; $i = 0$ corresponds to the reference position.
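The text performs this step with MATLAB's detectCheckerboardPoints and the Caltech calibration toolbox. As an alternative for illustration, the sketch below performs the same Step 1 with OpenCV; the board layout, file names, and the mapping of OpenCV's distortion coefficients (k1, k2, p1, p2, k3) onto the text's $(k_1,\ldots,k_5)$ are assumptions, not part of the patent.

```python
import glob
import cv2
import numpy as np

# Assumed board layout: inner-corner grid and square size are placeholders.
pattern = (8, 6)            # inner corners per row / per column
square = 39.5               # square side length in mm
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in sorted(glob.glob("pose_*.png")):     # reference image + m rotated images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics and one [R_cb_i, T_cb_i] per pose, analogous to the m+1 poses of Step 1.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
R_cb = [cv2.Rodrigues(r)[0] for r in rvecs]       # rotation matrices R_cb_i
T_cb = [t.reshape(3) for t in tvecs]              # translation vectors T_cb_i
```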

Step 2: Compute initial values of the external parameters related to the rotation about the axis.

1. From the external parameters obtained in Step 1, compute via (1.7) the rotation matrix and translation vector $[R_{b_i b_0}, T_{b_i b_0}]$, $i = 1, 2, \ldots, m$, of the target at rotation angle $\theta_i$ relative to the target at the reference position:

$$
R_{b_i b_0} = R_{cb_i}^T R_{cb_0}, \quad
T_{b_i b_0} = R_{cb_i}^T (T_{cb_0} - T_{cb_i}).
\qquad (1.7)
$$

2. Obtain initial values of $R_{wb_0}$, $t$ and $\theta_i$:

Define $r_1, r_2, r_3$ as the first, second and third columns of $R_{b_0 w}$, and let $[t]_\times$ denote the matrix

$$
\begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}_\times
= \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}.
\qquad (1.8)
$$

Then we can obtain

$$
r_2 = \frac{(S_{3,2}, S_{1,3}, S_{2,1})^T}{\|(S_{3,2}, S_{1,3}, S_{2,1})\|}, \quad
\mu = t\sin\theta = \frac{1}{2}\|(S_{3,2}, S_{1,3}, S_{2,1})\|,
\qquad (1.9)
$$

where $S_{i,j}$ denotes the element in row $i$ and column $j$ of the matrix $S$. Then $r_1$ can be obtained from

$$
\left(\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times\right)x = 0, \quad \|x\| = 1.
\qquad (1.10)
$$

This can be solved by an SVD of $\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times$: if $\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times = USV^T$, then $x$ is the third column of $V$. We can then obtain

$$
R_{b_0 w} = \begin{bmatrix} r_1 & r_2 & r_1 \times r_2 \end{bmatrix},
\qquad (1.11)
$$

$$
\theta_i = 2\arctan\!\left((A^T A)^{-1}(A^T B)\right),
\qquad (1.12)
$$

where $A = \mathrm{vec}(r_2 r_3^T + r_3 r_2^T)$ and $B = \mathrm{vec}\!\left(\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times\right)$; here $\mathrm{vec}(\cdot)$ denotes the vectorization (straightening) of a matrix. Therefore,

$$
R_{wb_0} = R_{b_0 w}^T,
\qquad (1.13)
$$

$$
t = \frac{\mu}{\sin\theta_i}.
\qquad (1.14)
$$

3. Using the obtained initial values of $R_{wb_0}$, $t$ and $\theta_i$, compute $R_{cw}$ and $T_{cw}$ from (1.15):

$$
R_{cw} = R_{cb_0} R_{wb_0}^T, \quad
T_{cw} = (t, 0, 0)^T - R_{cw} T_{wb_0}.
\qquad (1.15)
$$
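A Python sketch of this Step 2 initialization from a single rotated view is given below. The rendered source does not define the matrix $S$ explicitly, so the sketch takes $S$ as the antisymmetrized form of $[T_{b_i b_0}]_\times R_{b_i b_0}$, which reproduces (1.9); treat that choice, and the way the sign ambiguity of the axis is resolved, as assumptions of this sketch. The translation part of (1.15) is not reproduced here, and in practice the per-view estimates for $i = 1, \ldots, m$ would be averaged.

```python
import numpy as np

def skew(v):
    """[v]_x as defined in (1.8)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def init_axis_params(R_cb0, T_cb0, R_cbi, T_cbi):
    """Initial R_wb0, t, theta_i (and R_cw) from one rotated view, per (1.7)-(1.14).

    The rotated view should be well away from the reference so that
    mu = t*sin(theta_i) is not close to zero.
    """
    # (1.7): pose of the rotated target relative to the reference target
    R_rel = R_cbi.T @ R_cb0
    T_rel = R_cbi.T @ (T_cb0 - T_cbi)

    M = skew(T_rel) @ R_rel
    S = M.T - M                                    # assumed definition of S
    s_vec = np.array([S[2, 1], S[0, 2], S[1, 0]])  # (S_32, S_13, S_21)
    mu = 0.5 * np.linalg.norm(s_vec)               # (1.9): mu = t*sin(theta)
    r2 = s_vec / np.linalg.norm(s_vec)

    # (1.10): r1 spans the null space of (1/mu)[T]_x R + [r2]_x
    _, _, Vt = np.linalg.svd(M / mu + skew(r2))
    r1 = Vt[2]

    R_b0w = np.column_stack([r1, r2, np.cross(r1, r2)])     # (1.11)
    r3 = R_b0w[:, 2]

    # (1.12): scalar least squares for tan(theta/2)
    A = (np.outer(r2, r3) + np.outer(r3, r2)).ravel()
    B = (M / mu + skew(r2)).ravel()
    theta_i = 2.0 * np.arctan(A @ B / (A @ A))
    t = mu / np.sin(theta_i)                       # (1.14)

    if t < 0:                                      # SVD fixes r1 only up to sign
        r1, theta_i, t = -r1, -theta_i, -t
        R_b0w = np.column_stack([r1, r2, np.cross(r1, r2)])

    R_wb0 = R_b0w.T                                # (1.13)
    R_cw = R_cb0 @ R_wb0.T                         # rotation part of (1.15)
    return R_wb0, t, theta_i, R_cw
```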

Step 3: Perform nonlinear optimization on the internal parameters $p_{in}$ and the external parameters $p_{out}$. Here $p_{in} \in \mathbb{R}^9$ and $p_{out} \in \mathbb{R}^{10}$ are

$$
p_{in} = (\alpha_x, \alpha_y, u_0, v_0, k_1, k_2, k_3, k_4, k_5)^T, \quad
p_{out} = (v_{wb_0}^T, t, v_{cw}^T, T_{cw}^T)^T,
$$

where $v_{wb_0}$ and $v_{cw}$ are the Rodrigues vectors corresponding to $R_{wb_0}$ and $R_{cw}$, respectively. The relation between a Rodrigues vector $v$ and its rotation matrix $R$ is

$$
\sin(\|v\|)\,[v]_\times = \frac{R - R^T}{2},
$$

$$
R = \cos(\|v\|)\,I_3 + \frac{1 - \cos(\|v\|)}{\|v\|^2}\,vv^T + \sin(\|v\|)\,[v]_\times,
$$

where $I_3$ is the $3 \times 3$ identity matrix.
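A small NumPy sketch of the two conversion formulas above (Rodrigues vector to rotation matrix and back) follows; the near-zero and near-180-degree cases are handled only crudely here.

```python
import numpy as np

def rodrigues_to_matrix(v):
    """Rotation matrix from a Rodrigues vector v, using the second formula above."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) / theta**2 * np.outer(v, v)
            + np.sin(theta) * vx)

def matrix_to_rodrigues(R):
    """Rodrigues vector from R, using sin(||v||)[v]_x = (R - R^T)/2.

    Degenerate for rotations of exactly 180 degrees, where R - R^T vanishes.
    """
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    W = (R - R.T) / (2.0 * np.sin(theta))          # equals [v/||v||]_x
    return theta * np.array([W[2, 1], W[0, 2], W[1, 0]])
```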

The image-point coordinates are labelled as follows: on the target, at rotation angle $\theta_i$, the corner coordinates corresponding to the $j$-th target feature point are written $(u_j^i, v_j^i)$, $i = 0, 1, \ldots, m$, $j = 1, 2, \ldots, n$, and the points on the target are written

$$
X_b = \{(x_j, y_j, 0)^T,\ j = 1, 2, \ldots, n\}.
$$

Since errors are present in the image extraction, we take the sum of the reprojection errors of the images as the optimization objective. The reprojection points are obtained from the target points and $p_{out}$ as follows:

1. At rotation angle $\theta_i$, the target point $(x_j, y_j, 0)^T$ is converted to a point $(x_{i,j}^w, y_{i,j}^w, z_{i,j}^w)^T$ in the world coordinate system:

$$
\begin{bmatrix} x_{i,j}^w \\ y_{i,j}^w \\ z_{i,j}^w \end{bmatrix}
= \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 \\ \sin\theta_i & \cos\theta_i & 0 \\ 0 & 0 & 1 \end{bmatrix}
\left(R_{wb_0}\begin{bmatrix} x_j \\ y_j \\ 0 \end{bmatrix} + \begin{bmatrix} t \\ 0 \\ 0 \end{bmatrix}\right).
\qquad (1.16)
$$

2. Compute the corresponding reprojection point.

The reprojection error is defined as

$$
E_r = \sum_{i=1}^{m}\sum_{j=1}^{n}\left[\left(u_i^j - g_u(p_{in}, p_{out}, x_j, y_j, \theta_i)\right)^2
+ \left(v_i^j - g_v(p_{in}, p_{out}, x_j, y_j, \theta_i)\right)^2\right],
\qquad (1.19)
$$

and the optimization problem can be written as

$$
(p_{in}^*, p_{out}^*, \theta_1^*, \theta_2^*, \ldots, \theta_m^*)
= \arg\min_{p_{in},\, p_{out},\, \theta_1, \ldots, \theta_m} E_r.
\qquad (1.20)
$$

This optimization problem can be solved with a sparse Levenberg-Marquardt algorithm; an optimization toolbox implementing sparse LM can be downloaded from http://users.ics.forth.gr/~lourakis/sparseLM/. The optimization yields $t$, $R_{cw}$ and $T_{cw}$, which are used in Step 4 to compute the rotation angle.
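A dense Python stand-in for this optimization, using scipy.optimize.least_squares instead of the sparse LM toolbox cited above, is sketched next. It reuses the project() and rodrigues_to_matrix() helpers from the earlier sketches and packs the unknowns as $(p_{in}, p_{out}, \theta_1, \ldots, \theta_m)$; the packing order is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(theta):
    """Rotation by theta about the Z_w axis, as used in (1.16)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def calib_residuals(params, board_pts, image_pts, m):
    """Stacked reprojection residuals of (1.19) over the m rotated poses.

    params = [p_in (9) | v_wb0 (3) | t (1) | v_cw (3) | T_cw (3) | theta_1..theta_m];
    board_pts is (n, 2) holding the (x_j, y_j) of X_b; image_pts is (m, n, 2).
    Reuses project() and rodrigues_to_matrix() from the sketches above.
    """
    p_in = params[:9]
    R_wb0 = rodrigues_to_matrix(params[9:12])
    t = params[12]
    R_cw = rodrigues_to_matrix(params[13:16])
    T_cw = params[16:19]
    thetas = params[19:19 + m]

    res = []
    for i in range(m):
        for j, (xj, yj) in enumerate(board_pts):
            P_w = rot_z(thetas[i]) @ (R_wb0 @ np.array([xj, yj, 0.0])
                                      + np.array([t, 0.0, 0.0]))   # (1.16)
            res.extend(image_pts[i, j] - project(p_in, R_cw, T_cw, P_w))
    return np.asarray(res)

# x0 stacks the Step 1 / Step 2 initial values; a dense LM-style solve:
# sol = least_squares(calib_residuals, x0, args=(board_pts, image_pts, m))
```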

Step 4: Measure the rotation angle using the calibrated internal and external parameters.

First, the corner points are matched using the calibrated information. Suppose the corner coordinates of the picture taken at angle $\theta$ are $X_c = \{(u_i, v_i)^T,\ i = 1, 2, \ldots, N_c\}$. Occlusion may be present in the picture, so the matching cannot be completed trivially; it proceeds by the following algorithm:

1. Find the set of candidate matches.

For each pair of a corner point $(u_i, v_i)^T$ and a target point $(x_j, y_j, 0)^T$, compute a score $S_{i,j}$ as follows:

(1) Compute the world coordinates $(x_j^w, y_j^w, z_j^w)^T$ corresponding to $(x_j, y_j, 0)^T$:

$$
\begin{bmatrix} x_j^w \\ y_j^w \\ z_j^w \end{bmatrix}
= R_{wb_0}\begin{bmatrix} x_j \\ y_j \\ 0 \end{bmatrix} + \begin{bmatrix} t \\ 0 \\ 0 \end{bmatrix}.
$$

(2) Compute the pre-image $(X_j^w, Y_j^w, Z_j^w)^T$ of the image point $(u_i, v_i)^T$ on the plane $z_w = z_j^w$, as follows:

$$
X_j^w = \rho\,(x_i^n, y_i^n, 1)\,r_1 + t,
$$

$$
Y_j^w = \rho\,(x_i^n, y_i^n, 1)\,r_2,
$$

$$
Z_j^w = z_j^w,
$$

where $\rho = \dfrac{z_j^w}{(x_i^n, y_i^n, 1)\,r_3}$, $[r_1, r_2, r_3] = R_{wb_0}$, and $(x_i^n, y_i^n)^T$ is the normalized coordinate corresponding to $(u_i, v_i)^T$, which can be obtained by iteration.

(3) Define $S_{i,j}$ as

$$
S_{i,j} = \sqrt{(X_j^w)^2 + (Y_j^w)^2} - \sqrt{(x_j^w)^2 + (y_j^w)^2}.
\qquad (1.21)
$$

(4) Choose the threshold $\tau_m = 0.1d$, where $d$ is the side length of a checkerboard square, and define the set of candidate matches

$$
\Omega = \left\{(u_i, v_i)^T \leftrightarrow (x_j, y_j, 0)^T,\ j = 1, 2, \ldots, n,\ i = 1, 2, \ldots, N_c \;\middle|\; S_{i,j} < \tau_m\right\}.
\qquad (1.22)
$$

2. Compute the angle using a histogram.

For each element of $\Omega$, compute $\theta_{i,j}$ from

$$
\begin{bmatrix} x_j^w \\ y_j^w \\ z_j^w \end{bmatrix}
= \begin{bmatrix} \cos\theta_{i,j} & -\sin\theta_{i,j} & 0 \\ \sin\theta_{i,j} & \cos\theta_{i,j} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_j^w \\ Y_j^w \\ Z_j^w \end{bmatrix}.
\qquad (1.23)
$$

Then plot the histogram of the $\theta_{i,j}$ distribution. The histogram has 360 bins, and the count of the $k$-th bin is the number of elements of $\Omega$ whose $\theta_{i,j}$ lies in $((k-181)°, (k-179)°)$. The estimate of $\theta$ is taken from the fullest bin of this histogram.

3. Use the estimated angle to select the correct matches.

The target point $M(i)$ matched to corner $i$ is the one with the smallest score within the correct angle range, that is,

$$
M(i) = \arg\min_j S_{i,j}, \quad \text{subject to } \theta_{i,j} \in [\theta - 1°, \theta + 1°].
$$

If no $\theta_{i,j} \in [\theta - 1°, \theta + 1°]$ exists, set $M(i) = 0$.
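The histogram vote (sub-step 2) and the match selection (sub-step 3) can be sketched as below, assuming the candidate set Ω with scores $S_{i,j}$, the reference-position world coordinates $(x_j^w, y_j^w)$ and the pre-images $(X_j^w, Y_j^w)$ have already been computed as described above. Solving (1.23) through atan2 and taking the centre of the fullest bin as the estimate are assumptions of this sketch, and indices are 0-based here, with -1 instead of the text's $M(i) = 0$ marking an unmatched corner.

```python
import numpy as np

def pair_angle_deg(xw, yw, Xw, Yw):
    """One way to solve (1.23) in the XY-plane: the rotation taking (X, Y) onto (x, y)."""
    ang = np.arctan2(yw, xw) - np.arctan2(Yw, Xw)
    return np.degrees((ang + np.pi) % (2.0 * np.pi) - np.pi)    # wrapped to [-180, 180)

def estimate_angle(candidates):
    """Histogram vote over Omega; candidates are tuples (i, j, S_ij, theta_ij_deg).

    360 overlapping 2-degree bins: the k-th bin covers ((k-181), (k-179)) degrees.
    """
    centres = np.arange(-179, 181)                  # bin centres k - 180, k = 1..360
    th = np.array([c[3] for c in candidates])
    counts = np.array([np.sum(np.abs(th - c) < 1.0) for c in centres])
    return float(centres[np.argmax(counts)])        # centre of the fullest bin

def select_matches(candidates, theta_hat, n_corners):
    """For each corner, keep the candidate with the smallest score S_ij whose
    angle lies in [theta_hat - 1, theta_hat + 1] degrees; -1 means no match."""
    M = np.full(n_corners, -1, dtype=int)
    best = np.full(n_corners, np.inf)
    for i, j, s, th in candidates:
        if abs(th - theta_hat) <= 1.0 and s < best[i]:
            best[i], M[i] = s, j
    return M
```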

4. Nonlinear optimization.

The optimization problem is

$$
\theta^* = \arg\min_\theta \sum_{i=1}^{N_c}\delta(i)\left[\left(u_i - g_u(p_{in}^*, p_{out}^*, x_{M(i)}, y_{M(i)}, \theta)\right)^2
+ \left(v_i - g_v(p_{in}^*, p_{out}^*, x_{M(i)}, y_{M(i)}, \theta)\right)^2\right],
\qquad (1.24)
$$

where

$$
\delta(i) = \begin{cases} 1, & M(i) > 0 \\ 0, & M(i) = 0. \end{cases}
$$

This optimization problem can be solved with MATLAB's built-in lsqnonlin function.
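As a stand-in for lsqnonlin, the one-dimensional refinement (1.24) can be sketched with scipy.optimize.least_squares, reusing project() and rot_z() from the earlier sketches; the calibrated quantities passed in ($p_{in}^*$, $R_{wb_0}$, $t$, $R_{cw}$, $T_{cw}$) come from Step 3, and the 0-based match array M follows the previous sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def angle_residuals(theta, p_in, R_wb0, t, R_cw, T_cw, board_pts, corners, M):
    """Residuals of (1.24); only corners with a valid match contribute (delta(i) = 1)."""
    res = []
    for i, (u, v) in enumerate(corners):
        if M[i] < 0:                                # delta(i) = 0: skip unmatched corner
            continue
        xj, yj = board_pts[M[i]]
        P_w = rot_z(theta[0]) @ (R_wb0 @ np.array([xj, yj, 0.0])
                                 + np.array([t, 0.0, 0.0]))
        res.extend(np.array([u, v]) - project(p_in, R_cw, T_cw, P_w))
    return np.asarray(res)

# Start from the histogram estimate (converted to radians) and refine:
# sol = least_squares(angle_residuals, np.radians([theta_hat]),
#                     args=(p_in_star, R_wb0, t, R_cw, T_cw, board_pts, corners, M))
# theta_star = sol.x[0]
```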

Advantages and Effects

The invention is a visual measurement method for a single-axis rotation angle based on a fixed camera and a single target; it measures the single-axis rotation angle with a monocular camera and a two-dimensional checkerboard. Its advantage is that it removes the many inconvenient installation requirements of current visual angle measurement approaches. The invention uses a two-dimensional target as the calibration feature and, through the optimization algorithm, can measure the single-axis rotation angle. The method offers a simple installation procedure, high calibration accuracy and low cost, and once calibrated it can handle occlusion.

Brief Description of the Drawings

Figure 1: Schematic diagram of the pinhole imaging model

Figure 2: Schematic diagram of the experimental setup

Figure 3a: Mean of the measurement error versus noise

Figure 3b: Variance of the measurement error versus noise

Figure 4a: Number of detected corner points as the angle varies

Figure 4b: Angle error as the angle varies

Figure 5: The two-dimensional checkerboard target used in the real experiment

Figure 6: Flow chart of the invention

The symbols in the figures are explained as follows:

Symbols in Figure 1: O is the camera optical center and O1 is the origin of the image coordinate system. X, Y are the axes of the image coordinate system; x, y, z are the axes of the camera coordinate system. $P(X_w, Y_w, Z_w)$ denotes the coordinates of a three-dimensional point in the world coordinate system, and p is the projection of P on the image.

Symbols in Figure 2: $O_w\text{-}X_wY_wZ_w$ is the world coordinate system, the $Z_w$ axis is the rotation axis, and the origin of the target at the reference position lies on the $X_w$ axis. The target coordinate system at the reference position is the $b_0$ frame; after this frame rotates by $\theta_i$ about $Z_w$, the target coordinate system becomes the $b_i$ frame. The camera coordinate system is $O_c\text{-}X_cY_cZ_c$.

Detailed Description

See Figures 1 to 6. The invention provides a visual measurement method for a single-axis rotation angle based on a fixed camera and a single target, and it has been validated by a simulation experiment and a real experiment.

Simulation Experiment

The simulation was run in MATLAB 2014a under Windows XP on a computer with a 3.07 GHz CPU and 4.00 GB of memory. The camera model used in the simulation is a second-order distortion model with the following parameters:

$\alpha_x = 782.5109$, $\alpha_y = 782.9155$, $u_0 = 357.3909$, $v_0 = 264.5240$

$k_1 = -0.4232$, $k_2 = 0.2664$, $k_3 = -0.000555$, $k_4 = -0.0031$, $k_5 = 0$

The target is a 6×8 checkerboard, and each square measures 39.5 mm × 39.5 mm. The simulation mainly verifies the stability and accuracy of the calibration algorithm. The simulation steps are as follows:

Simulation environment setup: image point coordinates are generated from the specified internal parameters and poses. The angles selected for the rotation are (in degrees)

$$
\theta_i = 2.25 \times i.
$$

Step 1: Obtain initial values of the internal parameters with the MATLAB calibration toolbox. The results are as follows (with 2 pixels of image noise during intrinsic calibration):

$\alpha_x = 774.34147$, $\alpha_y = 773.91064$, $u_0 = 349.58372$, $v_0 = 269.85450$

$k_1 = -0.43943$, $k_2 = 0.33274$, $k_3 = -0.00222$, $k_4 = -0.00206$, $k_5 = 0$

Step 2: Compute initial values of the external parameters from the ordinary camera calibration results.

Step 3: Perform bundle adjustment starting from the initial internal and external parameters.

Step 4: Select angles and measure them.

To verify the robustness of the algorithm, in Step 4 we selected 100 angles, measured them, and took the maximum angle error as the measurement error. Gaussian white noise with mean $\mu = 0$ and standard deviation $\sigma$ from 0 to 4 pixels was added to the image points, and the mean and variance of the measurement error over 100 runs were plotted; the results are shown in Figures 3(a) and 3(b). Even with Gaussian white noise of standard deviation 4 pixels, the mean angle error remains below 0.5 degrees, which shows that the angle measurement method is robust to image noise. To verify that the method handles occlusion, we simulated the target rotating through (-180°, 180°). Figures 4(a) and 4(b) show the results: (a) is the number of detected corner points and (b) is the measured angle error. The results show that the method effectively solves the occlusion problem.

Real Experiment

To further verify the feasibility of the invention, a real experiment was carried out. The target is a printed 6×9 checkerboard (Figure 5) with 29.5 mm squares. The camera is a Basler camera with a resolution of 658×492 pixels and a focal length of about 800 pixels. The angles were verified with a photoelectric tracking system whose angle measurement accuracy is 0.01 degrees. The results are listed in Table 1; under these experimental conditions the maximum angle error is below 0.15 degrees.

Table 1. Rotation angle estimation results of the real experiment

(All values in degrees.)

No.               1        2        3        4        5       6       7       8       9       10
Turntable value   -54.00   -36.00   -27.00   -10.80   -1.80   5.40    16.20   28.80   36.00   54.00
True value        -53.985  -35.978  -26.966  -10.749  -1.764  5.477   16.270  28.894  36.135  54.147
Error             -0.015   -0.022   -0.034   -0.051   -0.036  0.077   0.070   0.094   0.135   0.147

Claims (1)

1. A vision measurement method for a single-axis rotation angle based on a fixed camera and a single target, characterized in that the method comprises the following specific steps:
Step 1: calibrate the internal and external parameters of the camera;
in the concrete implementation, a two-dimensional checkerboard is used as the target; the picture sequence comprises one picture at the reference position and m pictures with the rotating part turned to different angles; suppose these m angles are $\theta_1, \theta_2, \ldots, \theta_m$; the checkerboard corner detection function detectCheckerboardPoints of MATLAB 2014 is used for corner extraction, and the MATLAB calibration toolbox is used for intrinsic calibration, yielding the internal parameters $p_{in}$ and the external parameters of the m+1 poses; the calibration toolbox can be downloaded from http://www.vision.caltech.edu/bouguetj/calib_doc/; these external parameters are written as
$$
[R_{cb_i}, T_{cb_i}], \quad i = 0, 1, 2, \ldots, m,
$$
where $[R_{cb_i}, T_{cb_i}]$ denotes the rotation matrix and translation vector between the camera and the target at rotation angle $\theta_i$, and $i = 0$ corresponds to the reference position;
Step 2: compute initial values of the external parameters related to the rotation about the axis;
(1) from the external parameters obtained in Step 1, determine via (0.1) the rotation matrix and translation vector $[R_{b_i b_0}, T_{b_i b_0}]$, $i = 1, 2, \ldots, m$, of the target at rotation angle $\theta_i$ relative to the reference position:
$$
R_{b_i b_0} = R_{cb_i}^T R_{cb_0}, \quad
T_{b_i b_0} = R_{cb_i}^T (T_{cb_0} - T_{cb_i});
\qquad (0.1)
$$
(2) obtain initial values of $R_{wb_0}$, $t$ and $\theta_i$:
define $r_1, r_2, r_3$ as the first, second and third columns of $R_{b_0 w}$, and let $[t]_\times$ denote the matrix
$$
\begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}_\times
= \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix};
\qquad (0.2)
$$
Then, obtain
$$
r_2 = \frac{(S_{3,2}, S_{1,3}, S_{2,1})^T}{\|(S_{3,2}, S_{1,3}, S_{2,1})\|}, \quad
\mu = t\sin\theta = \frac{1}{2}\|(S_{3,2}, S_{1,3}, S_{2,1})\|,
\qquad (0.3)
$$
where $S_{i,j}$ denotes the element in row $i$ and column $j$ of the matrix $S$; $r_1$ is obtained from
$$
\left(\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times\right)x = 0, \quad \|x\| = 1;
\qquad (0.4)
$$
the above equation is solved by an SVD decomposition: if $\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times = USV^T$, then $x$ is the third column of $V$; then
$$
R_{b_0 w} = \begin{bmatrix} r_1 & r_2 & r_1 \times r_2 \end{bmatrix},
\qquad (0.5)
$$
$$
\theta_i = 2\arctan\!\left((A^T A)^{-1}(A^T B)\right),
\qquad (0.6)
$$
where $A = \mathrm{vec}(r_2 r_3^T + r_3 r_2^T)$ and $B = \mathrm{vec}\!\left(\frac{1}{\mu}[T_{b_i b_0}]_\times R_{b_i b_0} + [r_2]_\times\right)$, and $\mathrm{vec}(\cdot)$ denotes the vectorization (straightening) of a matrix; so
$$
R_{wb_0} = R_{b_0 w}^T,
\qquad (0.7)
$$
$$
t = \frac{\mu}{\sin\theta_i};
\qquad (0.8)
$$
(3) using the obtained initial values of $R_{wb_0}$, $t$ and $\theta_i$, compute $R_{cw}$ and $T_{cw}$ from (0.9):
$$
R_{cw} = R_{cb_0} R_{wb_0}^T, \quad
T_{cw} = (t, 0, 0)^T - R_{cw} T_{wb_0};
\qquad (0.9)
$$
Step 3: perform nonlinear optimization on the internal parameters $p_{in}$ and the external parameters $p_{out}$, where $p_{in} \in \mathbb{R}^9$ and $p_{out} \in \mathbb{R}^{10}$ are
$$
p_{in} = (\alpha_x, \alpha_y, u_0, v_0, k_1, k_2, k_3, k_4, k_5)^T, \quad
p_{out} = (v_{wb_0}^T, t, v_{cw}^T, T_{cw}^T)^T,
$$
where $v_{wb_0}$ and $v_{cw}$ are the Rodrigues vectors corresponding to $R_{wb_0}$ and $R_{cw}$, respectively; the relation between a Rodrigues vector $v$ and its rotation matrix $R$ is
$$
\sin(\|v\|)\,[v]_\times = \frac{R - R^T}{2},
$$
$$
R = \cos(\|v\|)\,I_3 + \frac{1 - \cos(\|v\|)}{\|v\|^2}\,vv^T + \sin(\|v\|)\,[v]_\times,
$$
where $I_3$ is the $3 \times 3$ identity matrix;
the image-point coordinates are labelled as follows: on the target, at rotation angle $\theta_i$, the corner coordinates corresponding to the $j$-th target feature point are written $(u_j^i, v_j^i)$, $i = 0, 1, \ldots, m$, $j = 1, 2, \ldots, n$, and the points on the target are written
$$
X_b = \{(x_j, y_j, 0)^T,\ j = 1, 2, \ldots, n\};
$$
because errors exist in the image extraction, the sum of the reprojection errors of the images is taken as the optimization objective; the reprojection points are obtained from the target points and $p_{out}$ as follows:
<1> at rotation angle $\theta_i$, the target point $(x_j, y_j, 0)^T$ is converted to a point $(x_{i,j}^w, y_{i,j}^w, z_{i,j}^w)^T$ in the world coordinate system:
$$
\begin{bmatrix} x_{i,j}^w \\ y_{i,j}^w \\ z_{i,j}^w \end{bmatrix}
= \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 \\ \sin\theta_i & \cos\theta_i & 0 \\ 0 & 0 & 1 \end{bmatrix}
\left(R_{wb_0}\begin{bmatrix} x_j \\ y_j \\ 0 \end{bmatrix} + \begin{bmatrix} t \\ 0 \\ 0 \end{bmatrix}\right);
\qquad (0.10)
$$
<2> compute the corresponding reprojection point;
the reprojection error is defined as
$$
E_r = \sum_{i=1}^{m}\sum_{j=1}^{n}\left[\left(u_i^j - g_u(p_{in}, p_{out}, x_j, y_j, \theta_i)\right)^2
+ \left(v_i^j - g_v(p_{in}, p_{out}, x_j, y_j, \theta_i)\right)^2\right],
\qquad (0.13)
$$
and the optimization problem is described as
$$
(p_{in}^*, p_{out}^*, \theta_1^*, \theta_2^*, \ldots, \theta_m^*)
= \arg\min_{p_{in},\, p_{out},\, \theta_1, \ldots, \theta_m} E_r;
\qquad (0.14)
$$
the optimization problem is solved with a sparse Levenberg-Marquardt algorithm, whose optimization toolbox can be downloaded from http://users.ics.forth.gr/~lourakis/sparseLM/; the above optimization yields $t$, $R_{cw}$ and $T_{cw}$, which are used for the rotation angle calculation in Step 4;
Step 4: measure the rotation angle using the calibrated internal and external parameters;
first, the corner points are matched according to the calibrated information; let the corner coordinates of the picture taken at $\theta$ be $X_c = \{(u_i, v_i)^T,\ i = 1, 2, \ldots, N_c\}$; occlusion may be present in the picture, so the matching cannot be completed trivially and proceeds by the following algorithm:
1) find the set of candidate matching points;
for each pair of a corner point $(u_i, v_i)^T$ and a target point $(x_j, y_j, 0)^T$, a score $S_{i,j}$ is computed as follows:
(1) obtain the world coordinates $(x_j^w, y_j^w, z_j^w)^T$ corresponding to $(x_j, y_j, 0)^T$:
$$
\begin{bmatrix} x_j^w \\ y_j^w \\ z_j^w \end{bmatrix}
= R_{wb_0}\begin{bmatrix} x_j \\ y_j \\ 0 \end{bmatrix} + \begin{bmatrix} t \\ 0 \\ 0 \end{bmatrix};
$$
(2) find the pre-image $(X_j^w, Y_j^w, Z_j^w)^T$ of the image point $(u_i, v_i)^T$ on the plane $z_w = z_j^w$ as follows:
$$
X_j^w = \rho\,(x_i^n, y_i^n, 1)\,r_1 + t,
$$
$$
Y_j^w = \rho\,(x_i^n, y_i^n, 1)\,r_2,
$$
$$
Z_j^w = z_j^w,
$$
where $\rho = \dfrac{z_j^w}{(x_i^n, y_i^n, 1)\,r_3}$, $[r_1, r_2, r_3] = R_{wb_0}$, and $(x_i^n, y_i^n)^T$ is the normalized coordinate corresponding to $(u_i, v_i)^T$, obtained by iteration;
(3) define $S_{i,j}$ as
$$
S_{i,j} = \sqrt{(X_j^w)^2 + (Y_j^w)^2} - \sqrt{(x_j^w)^2 + (y_j^w)^2};
\qquad (0.15)
$$
(4) select the threshold $\tau_m = 0.1d$, where $d$ is the side length of a checkerboard square, and define the set of candidate matching points
$$
\Omega = \left\{(u_i, v_i)^T \leftrightarrow (x_j, y_j, 0)^T,\ j = 1, 2, \ldots, n,\ i = 1, 2, \ldots, N_c \;\middle|\; S_{i,j} < \tau_m\right\};
\qquad (0.16)
$$
2) compute the angle using a histogram;
for each element of $\Omega$, $\theta_{i,j}$ is computed from
$$
\begin{bmatrix} x_j^w \\ y_j^w \\ z_j^w \end{bmatrix}
= \begin{bmatrix} \cos\theta_{i,j} & -\sin\theta_{i,j} & 0 \\ \sin\theta_{i,j} & \cos\theta_{i,j} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_j^w \\ Y_j^w \\ Z_j^w \end{bmatrix};
\qquad (0.17)
$$
then the histogram of the $\theta_{i,j}$ distribution is drawn; the histogram has 360 bins, and the count of the $k$-th bin is the number of elements of $\Omega$ whose $\theta_{i,j}$ lies in $((k-181)°, (k-179)°)$; the estimate of $\theta$ is taken from the fullest bin;
3) select the correct matching pairs using the obtained angle;
the target point $M(i)$ matched to corner $i$ is the one with the smallest score within the correct angle range, that is,
$$
M(i) = \arg\min_j S_{i,j}, \quad \text{subject to } \theta_{i,j} \in [\theta - 1°, \theta + 1°];
$$
if no $\theta_{i,j} \in [\theta - 1°, \theta + 1°]$ exists, set $M(i) = 0$;
4) nonlinear optimization;
the optimization problem is described as
$$
\theta^* = \arg\min_\theta \sum_{i=1}^{N_c}\delta(i)\left[\left(u_i - g_u(p_{in}^*, p_{out}^*, x_{M(i)}, y_{M(i)}, \theta)\right)^2
+ \left(v_i - g_v(p_{in}^*, p_{out}^*, x_{M(i)}, y_{M(i)}, \theta)\right)^2\right],
\qquad (0.18)
$$
where
$$
\delta(i) = \begin{cases} 1, & M(i) > 0 \\ 0, & M(i) = 0; \end{cases}
$$
the optimization problem is solved with MATLAB's built-in lsqnonlin function.