CN1851618A - Monocular vision hardware-in-the-loop simulation system and method - Google Patents
Monocular vision hardware-in-the-loop simulation system and method
- Publication number
- CN1851618A (application number CN200610083639A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- camera
- image
- computer
- projection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
Abstract
The present invention relates to a monocular vision hardware-in-the-loop simulation system and method. The system comprises a virtual imaging unit, an image projection unit, and a camera realization unit. The simulation proceeds as follows: in the virtual imaging unit, a virtual scene is constructed with software such as OpenGL, its position and attitude are set, and it is projected onto the computer screen through a virtual camera; the image projection unit then projects this image onto a projection plane, where the camera realization unit captures the projected image; finally, computer software computes the internal parameters of the virtual camera and the position and attitude of the virtual scene, and the accuracy error of the simulation result is obtained by comparing these with the set values. The advantages of the invention are simple equipment, easy implementation, and a wide range of application; it is especially suitable for the simulation of calibration and of vision-based navigation and positioning.
Description
Technical Field
The invention relates to a monocular vision hardware-in-the-loop simulation system and method.
Background Art
With the development of computer vision theory and computer technology, computer vision has been widely applied to target recognition, visual guidance, industrial measurement, industrial control, and similar fields. Semi-physical simulation is also called hardware-in-the-loop simulation: where conditions permit, real hardware is connected into the simulation system to replace the corresponding part of the mathematical model, which brings the simulation closer to the actual situation and yields more accurate information. Computer vision research often runs into complex situations. For example, algorithm research frequently requires large numbers of scene images, but field conditions are usually complicated, and it is difficult to carry out physical experiments in the early stage of research to obtain them; doing so wastes money and slows research progress. Simulation experiments in the initial stage of research are therefore very necessary. A fully digital simulation saves money and shortens research time, but because complex systems are difficult to model, they must be simplified and many states set to ideal values; even with added noise, it is hard to obtain simulation results very close to reality. Hardware-in-the-loop simulation thus shows its advantages in the early stages of computer vision research. However, the existing literature does not describe a hardware-in-the-loop simulation system that adequately solves the difficulty of simulating the complex situations encountered in computer vision research, so designing a monocular vision hardware-in-the-loop simulation system to address these difficulties is both necessary and urgent.
Summary of the Invention
The purpose of the present invention is to overcome the defects of the prior art by providing a monocular vision hardware-in-the-loop simulation system and method for carrying out the simulation work of computer vision research in complex environments. It is especially suitable for the simulation of calibration and of vision-based navigation and positioning, and has the advantages of simple equipment, easy implementation, and a wide range of application.
The monocular hardware-in-the-loop simulation system of the present invention includes a virtual imaging unit, an image projection unit, and a camera realization unit. These three units are connected in sequence, and their relative positions must not change. The virtual imaging unit projects the virtual scene onto the computer screen; the image projection unit projects the virtual scene on the computer screen onto a projection plane; the camera realization unit acquires the image on the projection plane through a monocular camera.
The monocular hardware-in-the-loop simulation method of the present invention comprises the following steps. First, in the virtual imaging unit, the virtual scene is constructed with software such as OpenGL, its position and attitude are set, and it is projected onto the computer screen through a virtual camera. Then the image projection unit projects the image onto the projection plane, and the camera realization unit captures the projected image on the projection plane. Finally, computer software computes the internal parameters of the virtual camera and the position and attitude of the virtual scene, and the accuracy error of the simulation result is obtained by comparison with the set values.
The main advantages of the present invention are: (1) it creatively combines computer vision with virtual reality technology, solving the difficulty of simulating complex situations; (2) the equipment is simple, easy to implement, and widely applicable; (3) this simulation environment yields results closer to the real situation than a fully digital simulation, while avoiding the high cost and long preparation time of physical experiments.
Brief Description of the Drawings
The monocular vision hardware-in-the-loop simulation system and method of the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic diagram of the system structure of the present invention;
Fig. 2 is a structural diagram of the system of the present invention;
Fig. 3 is a flow chart of the method of the present invention;
Fig. 4 is a schematic diagram of the relationship between the field-of-view angle and the scale factor of the virtual camera of the present invention;
Fig. 5 is a schematic diagram of the planar target of an embodiment of the present invention;
Fig. 6 is a schematic diagram of the projected image of the virtual scene in the embodiment of Fig. 5;
Fig. 7 is a schematic image of the feature points extracted in the embodiment of Fig. 6.
Detailed Description of the Embodiments
The monocular vision hardware-in-the-loop simulation system shown in Fig. 1 includes a virtual imaging unit, an image projection unit, and a camera realization unit. These three units are connected in sequence, and their relative positions must not change. The virtual imaging unit projects the virtual scene onto the computer screen; the image projection unit projects the virtual scene on the computer screen onto a projection plane; the camera realization unit acquires the image on the projection plane through a monocular camera.
As shown in Fig. 2, the virtual imaging unit of the present invention includes a computer 1 equipped with virtual scene software; the image projection unit includes a projector 2 and a projection screen 3; and the camera realization unit includes a camera 4, a camera pan-tilt 5, and a computer 6 equipped with simulation software. Computer 6 controls the pan-tilt 5 so that camera 4 obtains the best field-of-view position. Projector 2, projection screen 3, camera 4, and pan-tilt 5 are all fixed to an indoor wall or to supports, so that no relative change in position can occur between them. The parameters of projector 2 are fixed; it is connected to computer 1 and projects onto screen 3 the virtual three-dimensional image generated by the virtual scene software in computer 1. Camera 4 is connected to computer 6: the image data collected by camera 4 are first transmitted to computer 6 through an image acquisition card, and the simulation software in computer 6 processes the collected image and computes the internal parameters of the virtual camera and the position and attitude of the virtual scene. The position and attitude of the virtual scene are then changed, and the acquisition and computation are repeated.
As shown in Fig. 3, the monocular hardware-in-the-loop simulation method of the present invention comprises the following steps. First, in the virtual imaging unit, the virtual scene is constructed with software such as OpenGL, its position and attitude are set, and it is projected onto the computer screen through a virtual camera. Then the image projection unit projects the image onto the projection plane, and the camera realization unit captures the projected image on the projection plane. Finally, computer software computes the internal parameters of the virtual camera and the position and attitude of the virtual scene, and the accuracy error of the simulation result is obtained by comparison with the set values.
The individual steps of the method are described in detail below with reference to the accompanying drawings.
Step One: Establish the imaging model from the virtual scene to the computer screen
The linear model of the camera is as follows:

s1·[a, b, 1]^T = Mx1·Mx2·[xw, yw, zw, 1]^T    (1)
Here the homogeneous coordinates of a point in the virtual scene are [xw, yw, zw, 1]^T, and the homogeneous coordinates of the corresponding image point are [a, b, 1]^T; αx is the scale factor along the u axis of the virtual camera image plane, and αy is the scale factor along the v axis; [u0, v0] are the coordinates of the image center on the virtual camera image plane; s1 is a scale factor; R and T are the rotation matrix and translation vector. Mx1 is determined by αx, αy, u0, and v0 and is called the internal parameter matrix of the virtual camera; Mx2 is determined by R and T and is called the external parameter matrix of the virtual camera.
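The linear model of Eq. (1) can be sketched numerically. The snippet below is a minimal illustration, not the patent's implementation; the function name and the sample pose are assumptions for demonstration only.

```python
import numpy as np

# Sketch of Eq. (1): s1 * [a, b, 1]^T = Mx1 * Mx2 * [xw, yw, zw, 1]^T,
# where Mx1 holds the intrinsics (ax, ay, u0, v0) and Mx2 = [R | T] the extrinsics.
def project(point_w, ax, ay, u0, v0, R, T):
    """Project a world point to image coordinates with the linear camera model."""
    Mx1 = np.array([[ax, 0.0, u0],
                    [0.0, ay, v0],
                    [0.0, 0.0, 1.0]])
    Mx2 = np.hstack([R, T.reshape(3, 1)])   # 3x4 external parameter matrix
    pw = np.append(point_w, 1.0)            # homogeneous world point
    s_ab = Mx1 @ Mx2 @ pw                   # s1 * [a, b, 1]^T
    return s_ab[:2] / s_ab[2]               # divide out the scale factor s1

# With an identity pose, a point on the optical axis maps to the image center.
R = np.eye(3)
T = np.zeros(3)
uv = project(np.array([0.0, 0.0, 10.0]), 1433.1, 1433.1, 512.0, 384.0, R, T)
```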
Step 1: Determine the internal parameters of the virtual camera
The image resolution of the virtual camera is set to 1024×768 and the vertical field-of-view angle to 30 degrees; the specific geometric relationship is shown in Fig. 4. Here αx = αy = 384.0/tan(15.0×π/180) = 1433.1075357064, u0 = 512, and v0 = 384.
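The arithmetic of this step can be reproduced directly. This is only a sketch of the geometry (half the 768-pixel image height subtends half the 30-degree vertical field of view); the variable names are chosen for illustration.

```python
import math

# Half the image height (384 px) corresponds to half the vertical FOV (15 deg),
# so the scale factors are ax = ay = 384 / tan(15 degrees).
alpha = 384.0 / math.tan(math.radians(15.0))
u0, v0 = 1024 / 2, 768 / 2   # principal point at the image center
```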
Step 2: Determine the external parameters of the virtual camera
To determine the external parameters of the virtual camera in the virtual scene software, the spatial position of the viewpoint must be specified, i.e. its three-dimensional coordinates in the virtual world coordinate system, Tw = [Xw, Yw, Zw], together with the viewing direction: the yaw angle θ (rotation about the X axis), the pitch angle (rotation about the Y axis), and the roll angle φ (rotation about the Z axis).
The initial position of the virtual camera coordinate system is set to coincide with the virtual world coordinate system. From this initial state the virtual camera is rotated in the order roll, then pitch, then yaw to determine its final orientation, giving the rotation matrix R; translating the viewpoint to the set position then completes the transformation from the virtual world coordinate system to the virtual camera coordinate system.
The coordinate rotation matrix R can be obtained with the Euler-angle representation. From the rotation order specified above, the Euler rotation sequence is: about the Z axis → about the Y axis → about the X axis, with the sign of each rotation angle determined by the right-hand rule. With rotation angles θ, the pitch angle, and φ about the X, Y, and Z axes respectively, the rotation matrix can be derived as:
Let the coordinates of the virtual camera viewpoint in the virtual world coordinate system be TW; then the translation vector is:

T = -(R·TW)
Finally, the external parameter matrix of the virtual camera is obtained as Mx2 = [R  T].
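The construction of R and T in this step can be sketched as follows. This is an illustrative implementation assuming the rotation order stated above (roll about Z, then pitch about Y, then yaw about X, right-handed); the angle and viewpoint values are sample inputs, not the patent's data.

```python
import numpy as np

# Elementary rotations about the coordinate axes (right-hand rule).
def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def extrinsics(yaw, pitch, roll, Tw):
    """Apply roll (Z), then pitch (Y), then yaw (X); T = -(R @ Tw)."""
    R = rot_x(yaw) @ rot_y(pitch) @ rot_z(roll)
    T = -(R @ Tw)
    return R, T

# Sample pose: 45-degree pitch, viewpoint 150 units along Z (illustrative).
R, T = extrinsics(0.0, np.radians(45.0), 0.0, np.array([0.0, 0.0, 150.0]))
```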
Step Two: Establish the imaging model from the computer screen to the camera image plane
Since it is difficult to calibrate the projector separately during the calibration process, the last two projection stages are merged and treated as a whole.
When establishing the imaging model from the computer screen to the camera image plane, both the construction of the model itself and the calibration of the system parameters must be considered. The procedure is divided into:
Step 1: Establish the imaging model
To describe the overall imaging model more accurately, models from the computer screen to the projection plane and from the projection plane to the camera image plane are first established separately, and the two are then combined into a single imaging model from the computer screen to the camera image plane. Specifically:
1) Establish the imaging model from the computer screen to the projection plane
Since the projection from the computer screen to the projection plane is a plane-to-plane projection, the following model is obtained:

s2·[xt, yt, 1]^T = Ht·[a, b, 1]^T    (2)

where s2 is a scale factor, Ht is a 3×3 non-singular matrix, and the homogeneous coordinates of a point on the projection plane are [xt, yt, 1]^T.
2) Establish the imaging model from the projection plane to the camera
Since the projection plane is a plane, the imaging model from the projection plane to the camera image plane is:

s3·[u, v, 1]^T = [αcx 0 uc0; 0 αcy vc0; 0 0 1]·[rc1 rc2 Tc]·[xt, yt, 1]^T    (3)

where the homogeneous coordinates of a point on the camera image plane are [u, v, 1]^T; s3 is a scale factor; αcx and αcy are the scale factors along the u and v axes of the camera image plane; [uc0, vc0] are the coordinates of the image center on the camera image plane; rc1 and rc2 are the first and second columns of the rotation matrix Rc, and Tc is the corresponding translation vector.
3) Establish the imaging model from the computer screen to the camera image plane
Combining Eq. (2) and Eq. (3) yields the imaging model from the computer screen to the camera image plane:

s4·[u, v, 1]^T = Hb·[a, b, 1]^T    (4)

where Hb is a 3×3 matrix and s4 is a scale factor.
Step 2: Calibration of the system
The system is calibrated using the model of Eq. (4). The specific procedure is as follows:
(1) The image center on the camera image plane is obtained with the varying-focal-length method of Lenz and Tsai (see R. K. Lenz and R. Y. Tsai, "Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 5, September 1988, pp. 713-720).
(2) The radial distortion coefficients of the camera are calibrated with the cross-ratio-invariance-based method proposed by Zhang Guangjun et al. (see Zhang Guangjun, Machine Vision, Beijing: Science Press, 2005).
(3) With the image center and radial distortion coefficients of the camera image plane known, the image on the camera image plane is first corrected for distortion. Linear equations are then established from Eq. (4) using multiple corner points on the target, and the matrix Hb is obtained by the least-squares method, completing the calibration of the parameters.
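The least-squares solution for Hb can be sketched with the standard direct linear transform, assuming the usual two linear equations per corner-point pair derived from Eq. (4); the SVD route and the synthetic point data below are illustrative choices, not taken from the patent.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H with dst ~ H @ src (homogeneous) from point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each pair gives two equations in the nine entries of H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)          # least-squares null vector of A
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the arbitrary scale factor

# Synthetic check: points mapped by a known homography are recovered.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 75)]
dst = []
for x, y in src:
    w = H_true @ np.array([x, y, 1.0])
    dst.append((w[0] / w[2], w[1] / w[2]))
H_est = estimate_homography(src, dst)
```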
Step Three: Establish the overall model of the computer vision hardware-in-the-loop simulation environment
Combining the mathematical models of the preceding stages gives the overall model of the hardware-in-the-loop simulation environment:

s5·[u, v, 1]^T = Hb·Mx1·Mx2·[xw, yw, zw, 1]^T    (5)

where s5 is a scale factor, Hb is obtained by calibration, and R and T are set manually.
From this model it can be seen that when the homogeneous coordinates [u, v, 1]^T of points on the camera image plane and the homogeneous coordinates [xw, yw, zw, 1]^T of points in the virtual scene are known, the internal and external parameters (R, T) of the virtual camera in Eq. (5) can be computed.
The steps of the monocular hardware-in-the-loop vision simulation of the present invention are illustrated below with a specific embodiment. Unmanned aerial vehicles have achieved remarkable results on the land battlefield, and their outstanding performance has demonstrated their broad prospects in naval warfare, attracting the attention of navies worldwide. One of the key technical problems of ship-borne UAVs is recovery on deck after use; here the present invention is used to carry out a UAV landing simulation experiment.
Step 1:
First, a planar target is designed with the OpenGL software of the virtual imaging unit, as shown in Fig. 5. The four corner points of each black square in the target region are taken as feature points, and the coordinates of each feature point are set in advance; the arrangement of large and small black squares makes it easy to identify the origin and the X axis of the target coordinate system.
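A planar target of this kind can be sketched as a synthetic image. The square size, spacing, and grid dimensions below are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

def make_target(rows=4, cols=5, square=40, gap=40):
    """Render a grid of black squares on a white background as a uint8 image."""
    h = rows * (square + gap) + gap
    w = cols * (square + gap) + gap
    img = np.full((h, w), 255, dtype=np.uint8)     # white background
    for r in range(rows):
        for c in range(cols):
            y = gap + r * (square + gap)
            x = gap + c * (square + gap)
            img[y:y + square, x:x + square] = 0    # black square
    return img

target = make_target()
```

The corners of each black square can then serve as feature points with known coordinates, as in the embodiment.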
Step 2:
The camera realization unit captures, on the projection plane, the image projected by the image projection unit. For simplicity of description, the position and attitude are changed only once here, taking a pitch angle of 45 degrees and a viewpoint at 150 cm as an example, as shown in Fig. 6.
Step 3:
The image coordinates of the feature points in the image are extracted by the image processing function of the computer simulation software, as shown in Fig. 7.
Step 4:
With M1, M2, αx, αy, u0, and v0 in Eq. (5) known, the position and attitude (R, T) of the UAV can be solved according to Eq. (5). The specific data are as follows:
R =
 0.702725  -0.001678  -0.714938
-0.000565  -0.999956   0.001809
-0.711514  -0.000596  -0.699186

T =
-0.006923
-0.004578
150.141495
The values set in OpenGL and the actual measured values are given in the table below.
In the table, T = [Tx, Ty, Tz]^T are the coordinates of the optical center of the camera in the world coordinate system.
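As a sanity check on results of this form, the recovered R should be close to an orthonormal matrix with determinant +1, and the norm of T should be close to the 150 cm viewpoint distance set in the experiment. The check below uses the values reported in this embodiment; the tolerances are illustrative.

```python
import numpy as np

# Recovered pose from the embodiment (values as reported above).
R = np.array([[ 0.702725, -0.001678, -0.714938],
              [-0.000565, -0.999956,  0.001809],
              [-0.711514, -0.000596, -0.699186]])
T = np.array([-0.006923, -0.004578, 150.141495])

orthogonality_error = np.abs(R @ R.T - np.eye(3)).max()  # 0 for a perfect rotation
det = np.linalg.det(R)                                   # +1 for a proper rotation
distance = np.linalg.norm(T)                             # camera-to-target distance, ~150 cm
```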
The above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make several modifications and improvements without departing from the principle of the present invention, and these should also be regarded as falling within the protection scope of the present invention.
Claims (6)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100836396A CN100416466C (en) | 2006-05-31 | 2006-05-31 | Monocular vision hardware-in-the-loop simulation system and method |
US11/561,696 US7768527B2 (en) | 2006-05-31 | 2006-11-20 | Hardware-in-the-loop simulation system and method for computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100836396A CN100416466C (en) | 2006-05-31 | 2006-05-31 | Monocular vision hardware-in-the-loop simulation system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1851618A true CN1851618A (en) | 2006-10-25 |
CN100416466C CN100416466C (en) | 2008-09-03 |
Family
ID=37133097
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100836396A Expired - Fee Related CN100416466C (en) | 2006-05-31 | 2006-05-31 | Monocular vision hardware-in-the-loop simulation system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100416466C (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6618076B1 (en) * | 1999-12-23 | 2003-09-09 | Justsystem Corporation | Method and apparatus for calibrating projector-camera system |
CN1149388C (en) * | 2001-02-23 | 2004-05-12 | 清华大学 | A Digital Projection 3D Contour Reconstruction Method Based on Phase Shifting Method |
US6634552B2 (en) * | 2001-09-26 | 2003-10-21 | Nec Laboratories America, Inc. | Three dimensional vision device and method, and structured light bar-code patterns for use in the same |
- 2006-05-31: CN application CNB2006100836396A granted as patent CN100416466C (status: not active, Expired - Fee Related)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104715486A (en) * | 2015-03-25 | 2015-06-17 | 北京经纬恒润科技有限公司 | Simulated rack camera calibration method and real-time machine |
CN104715486B (en) * | 2015-03-25 | 2017-12-19 | 北京经纬恒润科技有限公司 | One kind emulation stand camera marking method and real-time machine |
CN106558080A (en) * | 2016-11-14 | 2017-04-05 | 天津津航技术物理研究所 | Join on-line proving system and method outside a kind of monocular camera |
CN106558080B (en) * | 2016-11-14 | 2020-04-24 | 天津津航技术物理研究所 | Monocular camera external parameter online calibration method |
CN107255458A (en) * | 2017-06-19 | 2017-10-17 | 昆明理工大学 | A kind of upright projection grating measuring analogue system and its implementation |
CN107255458B (en) * | 2017-06-19 | 2020-02-07 | 昆明理工大学 | Resolving method of vertical projection grating measurement simulation system |
CN107918293A (en) * | 2017-12-15 | 2018-04-17 | 四川汉科计算机信息技术有限公司 | Universal type simulation system |
CN109242752A (en) * | 2018-08-21 | 2019-01-18 | 吉林大学 | A kind of analog acquisition obtains the method and application of mobile image |
CN112987593A (en) * | 2021-02-19 | 2021-06-18 | 中国第一汽车股份有限公司 | Visual positioning hardware-in-the-loop simulation platform and simulation method |
CN112987593B (en) * | 2021-02-19 | 2022-10-28 | 中国第一汽车股份有限公司 | Visual positioning hardware-in-the-loop simulation platform and simulation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107314771B (en) | UAV positioning and attitude angle measurement method based on coded landmarks | |
CN104851104B (en) | Using the flexible big view calibration method of target high speed camera close shot | |
CN1897715A (en) | Three-dimensional vision semi-matter simulating system and method | |
WO2021004416A1 (en) | Method and apparatus for establishing beacon map on basis of visual beacons | |
CN107367229B (en) | Free binocular stereo vision rotating shaft parameter calibration method | |
CN109272574B (en) | Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation | |
CN101876532B (en) | Camera on-field calibration method in measuring system | |
CN111080709A (en) | Multispectral stereo camera self-calibration algorithm based on trajectory feature registration | |
CN108288292A (en) | A kind of three-dimensional rebuilding method, device and equipment | |
CN111801198A (en) | Hand-eye calibration method, system and computer storage medium | |
CN109272555B (en) | A method of obtaining and calibrating external parameters of RGB-D camera | |
CN114549871B (en) | A method for matching drone aerial images with satellite images | |
CN1566906A (en) | Construction optical visual sense transducer calibration method based on plane targets | |
CN106408556A (en) | Minimal object measurement system calibration method based on general imaging model | |
CN102496160A (en) | Calibrating method for centralized vision system of soccer robot | |
CN1975324A (en) | Double-sensor laser visual measuring system calibrating method | |
CN104463969B (en) | A kind of method for building up of the model of geographical photo to aviation tilt | |
CN113034347B (en) | Oblique photography image processing method, device, processing equipment and storage medium | |
CN113870366B (en) | Calibration method and calibration system of three-dimensional scanning system based on pose sensor | |
CN1851618A (en) | Single-eye vision semi-matter simulating system and method | |
CN1220866C (en) | Method for calibarting lens anamorphic parameter | |
CN1948085A (en) | Star sensor calibrating method based on star field | |
CN105374067A (en) | Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof | |
CN105931261A (en) | Method and device for modifying extrinsic parameters of binocular stereo camera | |
CN102855620A (en) | Pure rotation camera self-calibration method based on spherical projection model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20080903; Termination date: 20200531 |