WO2020062434A1 - Static calibration method for external parameters of camera - Google Patents

Static calibration method for external parameters of camera

Info

Publication number
WO2020062434A1
WO2020062434A1 · PCT/CN2018/113667 (CN2018113667W)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
point
coordinates
imu
Prior art date
Application number
PCT/CN2018/113667
Other languages
French (fr)
Chinese (zh)
Inventor
陈亮
李晓东
芦超
Original Assignee
初速度(苏州)科技有限公司
北京初速度科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 初速度(苏州)科技有限公司, 北京初速度科技有限公司 filed Critical 初速度(苏州)科技有限公司
Publication of WO2020062434A1 publication Critical patent/WO2020062434A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the invention relates to the field of intelligent driving, in particular to a static calibration method for external parameters of a camera.
  • the accuracy of autonomous driving maps is measured in the GPS coordinate system, which means that each point on the map needs to be represented by GPS coordinates.
  • the camera and GPS may not be installed at the same location of the vehicle at the same time, but may be separated by a distance of 2 to 3 meters. Therefore, the external parameters of the camera need to be calibrated to establish the spatial position relationship between the camera and the GPS module. If the external parameters of the camera are not calibrated, and the image is directly constructed based on the camera image and the position of the car body, an error of two or three meters may eventually occur.
  • in the traditional calibration method, the camera and the IMU (which carries the GPS module) need to be tied closely together and then shaken violently, so that the axes of the IMU and the camera are excited together and the trajectory of the IMU fits the trajectory of the camera, completing the calibration of the camera.
  • however, the camera and IMU installed on the car cannot be shaken, so the traditional calibration method is not applicable.
  • a method for calibrating external parameters of a camera includes the following steps:
  • Step S1 Set one or more marked points, acquire an image including the marked points through a camera device, and identify the position (u0, v0) of the marked points in the image;
  • Step S2 The actual GPS position of the marked point is converted into the IMU coordinate system with the IMU as the origin of coordinates, to obtain point P1;
  • Step S3 P1 is transformed into the coordinate system with the camera device as the coordinate origin by the rotation and translation transformation matrix M[R|t], to obtain point P2;
  • Step S4 P2 is transformed into the coordinate system of the image by the internal parameter matrix K of the camera device itself to obtain the projection coordinates (u1, v1) of the point on the image;
  • Step S5 Construct the re-projection error function of (u0, v0) and (u1, v1) and calculate the re-projection error, which is related to the rotation and translation transformation matrix M[R|t];
  • K the internal parameter matrix of the camera
  • R b , t b respectively the attitude and position of the IMU
  • Step S6 Optimize the value of M[R|t] and repeat steps S3-S5 until the re-projection error is below a specified threshold; the value of M[R|t] at that point is taken as the calibration result.
  • the camera device is disposed on the roof of the vehicle or around the vehicle.
  • the camera device includes a plurality of cameras, and the images acquired by the cameras have overlapping areas.
  • step S1 the position (u0, v0) of the marked point in the image is obtained by searching row by row and column by column, either manually or by computer.
  • step S1 the position (u0, v0) of the marked point in the image is obtained through a trained neural network.
  • step S5 the re-projection error function is as follows:
  • K the internal parameter matrix of the camera
  • R b , t b respectively the attitude and position of the IMU
  • step S6 the value of M[R|t] is optimized by the least-squares method.
  • the initial values of the rotation R and the translation t both take 0.
  • the number of the marked points is 10-20.
  • step S2 the actual GPS position of the marked point is measured in advance, or directly measured by using an RTK device during calibration.
  • the inventive points of the present invention lie in the following aspects, but are not limited to the following aspects:
  • the inventive points include: (1) a static calibration method for the external parameters of the camera is provided; the calibration of the camera's external parameters can be completed in a stationary state without modifying the acquisition equipment or the vehicle;
  • the traditional calibration method requires the camera and the IMU (which carries the GPS module) to be tied together very closely and then shaken violently;
  • when the vehicle is stationary, the traditional method cannot be used;
  • the present invention provides a method in which the GPS coordinates of the marker points and the pixel coordinates of the marker points in the camera device are used to construct a loss function, so as to finally obtain the Euclidean transformation relationship between the two, that is, the rotation and translation transformation matrix; this method requires no modification of the acquisition equipment or the vehicle, and the calibration of the camera's external parameters can be completed in a static state. This is one of the inventive points of the present invention.
  • the loss function takes into account the attitude and position of the IMU and the GPS coordinates of the marked points; in the mathematical model, the loss function is associated with the rotation and translation transformation matrix, and the value of M[R|t] is adjusted by setting a specified threshold on the re-projection error, the resulting value being taken as the calibration result.
  • FIG. 1 is a step diagram of a static calibration method for external parameters of a camera according to the present invention.
  • Three-dimensional maps have good performance effects, and have the characteristics of small data volume, fast transmission speed, and low cost. After years of development, three-dimensional maps have been widely used in social management, public services, publicity and display fields.
  • 3D map modelers use high-resolution satellite maps or aerial photographs as the reference map, based on the data collected by the collectors, and complete the 3D modeling work by stitching the images; however, aerial or drone photography has many disadvantages, such as excessive cost and insufficient resolution.
  • this application provides a new data acquisition method for three-dimensional maps, specifically by installing a 360-degree all-around high-resolution camera device on the vehicle, for example four high-resolution cameras (front, rear, left and right) on the roof, or camera devices installed on the front, rear, left and right sides of the vehicle; the vehicle traverses the streets and intersections while the camera devices on the vehicle capture high-resolution images as the reference map, and the stitching of the images then completes the 3D modeling work.
  • Camera external parameters are parameters in the world coordinate system, such as camera position, rotation direction, etc.
  • calibrating the camera's external parameters means determining the mapping from world coordinates to pixel coordinates.
  • the world coordinates are defined by convention; during calibration, the world coordinates and pixel coordinates of known calibration control points are used.
  • this mapping relationship is solved from the world coordinates and pixel coordinates of those points; once the relationship is solved, the world coordinates of a point can be inferred from its pixel coordinates.
  • the purpose of camera calibration is to determine the values of some camera parameters.
  • the staff can use these parameters to map points in a three-dimensional space to the image space, or vice versa.
  • the invention is a static calibration method.
  • the marking point can be a special marking plate or a marking rod.
  • the GPS position of the marker point is known. This position can be measured in advance and obtained directly during calibration, or it can be measured directly using RTK equipment during calibration.
  • generally the camera has six external parameters: the rotation parameters of the three axes are (ω, δ, θ), and the 3 * 3 rotation matrix of each axis is combined (the matrices are multiplied together) to obtain the combined three-axis rotation R, whose size is still 3 * 3;
  • the translation parameters of the three axes, (Tx, Ty, Tz), make up t.
  • a plurality of high-resolution cameras or cameras on the top of the vehicle can realize 360-degree high-definition video recording or taking pictures.
  • the method of calibrating the external parameters of the camera includes the following steps:
  • Step S1 Set one or more marked points, and obtain the image including the marked points by the camera on the top of the vehicle, and identify the position (u0, v0) of the marked points in the image.
  • the position of the marker point in the image is the observation value.
  • the marker point here can be a special marker plate or a marker rod; because it occupies a small area and its GPS position is known, the accuracy of the acquired signal is higher; the GPS position of the marker point is known.
  • the position can be measured in advance and obtained directly during calibration, or it can be measured directly using RTK equipment during calibration.
  • Step S2 The actual GPS position of the marked point is converted to the IMU coordinate system with the IMU as the origin of the coordinates to obtain the point P1.
  • the longitude-latitude-height system of GPS is an ellipsoidal system and is not orthogonal, whereas the rotation and translation transformation matrix M[R|t] is based on an orthogonal coordinate system.
  • after the conversion, the coordinates of this point are a three-dimensional coordinate (X, Y, Z), which indicates the position of this marked point relative to the IMU.
  • in M[R|t], R represents rotation and t represents translation.
  • Step S3 P1 is transformed into the camera coordinate system with the camera as the coordinate origin by the rotation and translation transformation matrix M[R|t], to obtain point P2.
  • Step S4 P2 is transformed into the image coordinate system by the projection matrix of the camera itself, and the projection coordinates (u1, v1) of the point on the image are obtained.
  • the projection coordinates on the image are estimated values.
  • the projection matrix here is the internal parameter matrix of the camera; this step actually converts the three-dimensional point coordinates P2, through the camera's internal parameter matrix, into the theoretical two-dimensional coordinates (u1, v1) of the point on the camera image.
  • Step S5 Construct the re-projection error function of (u0, v0) and (u1, v1) and calculate the re-projection error, which is related to M[R|t];
  • K the internal parameter matrix of the camera
  • R b , t b respectively the attitude and position of the IMU
  • Step S6 Optimize the value of M[R|t] by the least-squares method and repeat steps S3-S5, iterating the value of M[R|t] until the re-projection error is below the specified threshold; the value of M[R|t] at that point is taken as the calibration result.
  • optimizing the value of M[R|t] by the least-squares method is prior art.
  • multiple high-resolution cameras can be arranged around the vehicle, as long as 360-degree high-definition video recording or photography is achieved.
  • the calibration method of the camera's external parameters includes the following steps:
  • Step S1 Set one or more marked points, and the cameras or cameras around the vehicle obtain an image including the marked points, and identify the position (u0, v0) of the marked points in the image.
  • the position coordinates of the marked points in the image can be calculated by a trained neural network.
  • the neural network was previously trained on a large amount of existing data (that is, images in which the positions of the marked points and their corresponding coordinates are known).
  • Step S2 The actual GPS position of the marked point is converted to the IMU coordinate system with the IMU as the origin of the coordinates to obtain the point P1.
  • the GPS position and IMU coordinates are both three-dimensional coordinates, and the resulting point P1 is also a three-dimensional coordinate.
  • Step S3 P1 is transformed into the camera coordinate system with the camera as the coordinate origin by the rotation and translation transformation matrix M[R|t], to obtain point P2.
  • the point P2 here is also a three-dimensional coordinate.
  • Step S4 P2 is transformed into the image coordinate system by the projection matrix of the camera itself, and the projection coordinates (u1, v1) of the point on the image are obtained.
  • the projection matrix here is the internal parameter matrix of the camera. This step is actually converting the three-dimensional point coordinate P2 into the theoretical two-dimensional coordinate (u1, v1) of the point on the camera image through the internal parameter matrix of the camera.
  • Step S5 Construct the re-projection error function of (u0, v0) and (u1, v1) and calculate the re-projection error, which is related to M[R|t];
  • K the internal parameter matrix of the camera
  • R b , t b respectively the attitude and position of the IMU
  • Step S6 Optimize the value of M[R|t] by the least-squares method and repeat steps S3-S5 until the re-projection error is below the specified threshold; the value of M[R|t] at that point is taken as the calibration result.
  • the present invention illustrates its detailed structural features through the foregoing embodiments, but the present invention is not limited to those detailed structural features; that is, it does not mean that the present invention must rely on the above detailed structural features in order to be implemented.
  • Those skilled in the art should understand that any improvement to the present invention, equivalent replacement of the components used in the present invention, addition of auxiliary components, selection of specific methods, etc., all fall within the scope of protection and disclosure of the present invention.

Abstract

A static calibration method for the external parameters of a camera, comprising: step S1: setting multiple marking points, acquiring an image of the marking points, and identifying the positions of the marking points in the image; step S2: converting the actual GPS positions of the marking points into an IMU coordinate system using an IMU as the origin of coordinates to obtain a point P1; step S3: by means of a rotation translation transformation matrix, converting P1 into a camera coordinate system using a camera as the origin of coordinates to obtain a point P2; step S4: by means of an internal reference matrix of an image capture apparatus, converting P2 into an image coordinate system to obtain projection coordinates of the point on an image; step S5: constructing a re-projection error function, and calculating a re-projection error; step S6: optimizing the rotation translation transformation matrix, and repeating steps S3-S5 until the re-projection error is lower than a specified threshold. In the calibration method, by means of the GPS coordinates of the marking points and pixel coordinates of marking points in the image capture apparatus, a loss function is constructed so as to ultimately obtain a Euclidean transformation relationship, i.e., the rotation translation transformation matrix, between the two. In the calibration method, the calibration of the external parameters of a camera may be completed in a static state without needing to modify an acquisition device and without needing to modify a vehicle.

Description

一种相机外部参数的静态标定方法Static calibration method for camera external parameters 技术领域Technical field
本发明涉及智能驾驶领域,具体涉及一种相机外部参数的静态标定方法。The invention relates to the field of intelligent driving, in particular to a static calibration method for external parameters of a camera.
背景技术Background technique
目前,自动驾驶地图的精度是在GPS坐标系下进行衡量的,也就是说地图上的每个点需要用GPS坐标进行表示。在多传感器的方案中,相机和GPS可能不是同时安装在车辆的相同位置,而是可能相隔了2~3米的距离。因此需要对相机的外参进行标定,建立相机和GPS模块的空间位置关系。如果不进行相机外参的标定,直接根据相机的图像和车体的位置进行建图,最终可能就会产生两三米的误差。At present, the accuracy of autonomous driving maps is measured in the GPS coordinate system, which means that each point on the map needs to be represented by GPS coordinates. In the multi-sensor solution, the camera and GPS may not be installed at the same location of the vehicle at the same time, but may be separated by a distance of 2 to 3 meters. Therefore, the external parameters of the camera need to be calibrated to establish the spatial position relationship between the camera and the GPS module. If the external parameters of the camera are not calibrated, and the image is directly constructed based on the camera image and the position of the car body, an error of two or three meters may eventually occur.
而传统的标定方法，需要将相机和IMU（IMU上有GPS模块）很近地绑在一起，然后剧烈地晃动，激发IMU的轴和相机的轴贴在一起，从而使得IMU的轨迹和相机的轨迹贴合，完成相机的标定。但是安装在车上的相机和IMU不能晃动，因此传统的标定方法不适用。In the traditional calibration method, the camera and the IMU (which carries the GPS module) need to be tied closely together and then shaken violently, so that the axes of the IMU and the camera are excited together and the trajectory of the IMU fits the trajectory of the camera, completing the calibration of the camera. However, the camera and IMU installed on the vehicle cannot be shaken, so the traditional calibration method is not applicable.
并且现有的技术方案中对于坐标变换过程中如何对重投影与变换矩阵间建立联系来调整变换矩阵还缺少相关的研究。And in the existing technical solutions, there is still a lack of related research on how to establish a connection between the reprojection and the transformation matrix to adjust the transformation matrix during the coordinate transformation process.
发明内容Summary of the Invention
鉴于现有技术中存在的问题,本发明采用以下技术方案:In view of the problems existing in the prior art, the present invention adopts the following technical solutions:
一种相机外部参数的标定方法,其特征在于,所述方法包括以下步骤:A method for calibrating external parameters of a camera, wherein the method includes the following steps:
步骤S1:设定一个或多个标记点,通过摄像装置获取包括所述标记点的图像,识别标记点在图像中的位置(u0,v0);Step S1: Set one or more marked points, acquire an image including the marked points through a camera device, and identify the position (u0, v0) of the marked points in the image;
步骤S2:将标记点的实际GPS位置转换到以IMU为坐标原点的IMU坐标系下,得到点P1;Step S2: The actual GPS position of the marked point is converted to the IMU coordinate system with the IMU as the origin of the coordinate to obtain the point P1;
步骤S3:通过旋转平移变换矩阵M[R|t]将P1转换到以所述摄像装置为坐标原点的坐标系下,得到点P2;Step S3: P1 is transformed into a coordinate system with the camera device as the coordinate origin by rotating the translation transformation matrix M [R | t] to obtain point P2;
步骤S4:通过所述摄像装置自身的内参矩阵K将P2转换到所述图像的坐标系下,得到该点在图像上的投影坐标(u1,v1);Step S4: P2 is transformed into the coordinate system of the image by the internal parameter matrix K of the camera device itself to obtain the projection coordinates (u1, v1) of the point on the image;
步骤S5:构建(u0,v0)和(u1,v1)的重投影误差函数,计算重投影误差,该误差与所述旋转平移变换矩阵M[R|t]相关;其中所述重投影误差函数如下所示:Step S5: construct re-projection error functions of (u0, v0) and (u1, v1), and calculate re-projection errors, which are related to the rotation translation transformation matrix M [R | t]; wherein the re-projection error functions As follows:
[根据细则26改正16.01.2019][Corrected 16.01.2019 in accordance with Rule 26] 重投影误差函数见公式图像WO-DOC-FIGURE-1。The re-projection error function is given by formula image WO-DOC-FIGURE-1 (not reproduced in this text version).
其中，K:相机的内参矩阵;Among them, K: the internal parameter matrix of the camera;
R b,t b:分别为IMU的姿态和位置; R b, t b: respectively the attitude and position of the IMU;
公式图像WO-DOC-FIGURE-2:第i个标记点的GPS坐标;Formula image WO-DOC-FIGURE-2: GPS coordinates of the i-th marker point;
公式图像WO-DOC-FIGURE-3:第i个标记点在图像中的位置。Formula image WO-DOC-FIGURE-3: the position of the i-th marker point in the image.
步骤S6:优化M[R|t]的值,重复步骤S3-步骤S5,直至重投影误差低于指定阈值,此时M[R|t]的取值作为标定的结果;Step S6: Optimize the value of M [R | t], and repeat steps S3-Step S5 until the re-projection error is lower than the specified threshold, at which time the value of M [R | t] is used as the calibration result;
所述旋转平移变换矩阵M[R|t]中R代表旋转，t代表平移。In the rotation and translation transformation matrix M[R|t], R represents rotation, and t represents translation.
优选地,步骤S1中,所述摄像装置设置在车顶或车辆的四周。Preferably, in step S1, the camera device is disposed on the roof of the vehicle or around the vehicle.
优选地,所述摄像装置包括多台摄像机,各摄像机获取的图像之间有重叠区域。Preferably, the camera includes a plurality of cameras, and there is an overlapping area between the images acquired by each camera.
优选地,步骤S1中,所述标记点在图像中的位置(u0,v0)通过人为或计算机逐行逐列查找得到。Preferably, in step S1, the position (u0, v0) of the marked point in the image is obtained by searching by rows or columns manually or by a computer.
优选地,步骤S1中,所述标记点在图像中的位置(u0,v0)通过训练好的神经网络得到。Preferably, in step S1, the position (u0, v0) of the marked point in the image is obtained through a trained neural network.
优选地,在步骤S5中,所述重投影误差函数如下:Preferably, in step S5, the re-projection error function is as follows:
[根据细则26改正16.01.2019][Corrected 16.01.2019 in accordance with Rule 26] 重投影误差函数见公式图像WO-DOC-FIGURE-1。The re-projection error function is given by formula image WO-DOC-FIGURE-1 (not reproduced in this text version).
其中K:相机的内参矩阵;Where K: the internal parameter matrix of the camera;
R b,t b:分别为IMU的姿态和位置; R b, t b: respectively the attitude and position of the IMU;
公式图像WO-DOC-FIGURE-2:第i个标记点的GPS坐标;Formula image WO-DOC-FIGURE-2: GPS coordinates of the i-th marker point;
公式图像WO-DOC-FIGURE-3:第i个标记点在图像中的位置。Formula image WO-DOC-FIGURE-3: the position of the i-th marker point in the image.
优选地,在步骤S6中,通过最小二乘法来优化M[R|t]的值。Preferably, in step S6, the value of M [R | t] is optimized by a least square method.
优选地,所述旋转R、所述平移t的初始值都取0。Preferably, the initial values of the rotation R and the translation t both take 0.
优选地,步骤S1中,所述标记点的数为10-20个。Preferably, in step S1, the number of the marked points is 10-20.
优选地,步骤S2中,所述标记点的实际GPS位置是预先测量出,或者是在标定时直接使用RTK设备测量出的。Preferably, in step S2, the actual GPS position of the marked point is measured in advance, or directly measured by using an RTK device during calibration.
本发明的发明点在于下述的几个方面,但不仅限于下述的几个方面:The invention of the present invention lies in the following aspects, but is not limited to the following aspects:
本发明的发明点在于：(1)提供了一种相机外部参数的静态标定方法；无需对采集设备进行改造，也无需对车辆进行改造，在静止状态下即可完成相机外参的标定；传统的标定方法需要将相机和IMU（IMU上有GPS模块）很近地绑在一起，然后剧烈地晃动，而在车辆静止时，无法使用传统方法，本发明提供了一种方法，通过标记点的GPS坐标和摄像装置中标记点的像素坐标，构建损失函数，来最终获得两者之间的欧式变换关系，即旋转平移变换矩阵，该方法无需对采集设备进行改造，也无需对车辆进行改造，在静止状态下即可完成相机外参的标定。这是本发明的发明点之一。The inventive points of the present invention include: (1) a static calibration method for the external parameters of a camera is provided; the calibration of the camera's external parameters can be completed in a stationary state without modifying the acquisition equipment or the vehicle. The traditional calibration method requires the camera and the IMU (which carries the GPS module) to be tied together very closely and then shaken violently, which cannot be done when the vehicle is stationary. The present invention therefore provides a method in which a loss function is constructed from the GPS coordinates of the marker points and the pixel coordinates of the marker points in the camera device, so as to finally obtain the Euclidean transformation relationship between the two, that is, the rotation and translation transformation matrix. This method requires no modification of the acquisition equipment or the vehicle, and the calibration of the camera's external parameters can be completed in a static state. This is one of the inventive points of the present invention.
(2)设计了一种全新的损失函数；通过该函数可以精准、有效的求出旋转平移变换矩阵。这主要体现在以下几个方面：该损失函数考虑到了IMU的姿态和位置以及标记点的GPS坐标；在数学模型中将损失函数与旋转平移变换矩阵相联系，并且通过重投影误差设定指定阈值的方式来调整M[R|t]的取值作为标定的结果。这是本发明的发明点之一。(2) A brand-new loss function is designed, through which the rotation and translation transformation matrix can be obtained accurately and effectively. This is mainly reflected in the following aspects: the loss function takes into account the attitude and position of the IMU as well as the GPS coordinates of the marked points; in the mathematical model, the loss function is associated with the rotation and translation transformation matrix, and the value of M[R|t] is adjusted by setting a specified threshold on the re-projection error, the resulting value being taken as the calibration result. This is one of the inventive points of the present invention.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1是本发明相机外部参数静态标定方法步骤图。FIG. 1 is a step diagram of a static calibration method for external parameters of a camera according to the present invention.
下面对本发明进一步详细说明。但下述的实例仅仅是本发明的简易例子,并不代表或限制本发明的权利保护范围,本发明的保护范围以权利要求书为准。The present invention is described in further detail below. However, the following examples are merely simple examples of the present invention, and do not represent or limit the scope of protection of the rights of the present invention. The scope of protection of the present invention is subject to the claims.
具体实施方式detailed description
下面结合附图并通过具体实施方式来进一步说明本发明的技术方案。The technical solution of the present invention will be further described below with reference to the accompanying drawings and specific embodiments.
为更好地说明本发明,便于理解本发明的技术方案,本发明的典型但非限制性的实施例如下:In order to better illustrate the present invention and facilitate understanding of the technical solutions of the present invention, typical but non-limiting examples of the present invention are as follows:
三维地图有着良好的表现效果,并且有数据量小、传输速度快,成本较低等特点,经过多年的发展,三维地图在社会管理、公共服务、宣传展示等领域被广泛应用。Three-dimensional maps have good performance effects, and have the characteristics of small data volume, fast transmission speed, and low cost. After years of development, three-dimensional maps have been widely used in social management, public services, publicity and display fields.
通常，三维地图建模人员根据采集人员采集的数据，以高分辨率的卫星地图或航拍图为基准地图，通过图像的拼接完成三维建模工作；但是航拍或无人机拍摄有众多弊端，如成本过高、分辨率不高等。本申请提供了一种全新的三维地图的数据采集方式，具体为通过在车辆上安装360度全方位的高分辨率的摄像装置，如在车顶安装前后左右4个高分辨率摄像机，又如在车辆的前后左右分别安装摄像装置，通过车辆遍历各个街道路口，同时车辆上的摄像装置拍摄高分辨率的图像作为基准地图，然后图像的拼接完成三维建模工作。Generally, 3D map modelers use high-resolution satellite maps or aerial photographs as the reference map, based on the data collected by the collectors, and complete the 3D modeling work by stitching the images; however, aerial or drone photography has many disadvantages, such as excessive cost and insufficient resolution. This application provides a new data acquisition method for three-dimensional maps, specifically by installing a 360-degree all-around high-resolution camera device on the vehicle, for example four high-resolution cameras (front, rear, left and right) on the roof, or camera devices installed on the front, rear, left and right sides of the vehicle; the vehicle traverses the streets and intersections while the camera devices on the vehicle capture high-resolution images as the reference map, and the stitching of the images then completes the 3D modeling work.
相机的外部参数是在世界坐标系中的参数，比如相机位置、旋转方向等；相机外部参数标定方法就是世界坐标到像素坐标的映射，这里的世界坐标是人为定义的，标定就是已知标定控制点的世界坐标和像素坐标，我们去解算这个映射关系，一旦这个关系解算出来，我们就可以由点的像素坐标去反推它的世界坐标。The external parameters of the camera are parameters in the world coordinate system, such as the camera position and rotation direction; calibrating the camera's external parameters means determining the mapping from world coordinates to pixel coordinates. The world coordinates here are defined by convention; in calibration, the world coordinates and pixel coordinates of calibration control points are known and this mapping relationship is solved. Once the relationship has been solved, the world coordinates of a point can be inferred back from its pixel coordinates.
相机标定的目的是确定相机的一些参数的值,工作人员可以通过这些参数把一个三维空间中的点映射到图像空间,或者反过来。The purpose of camera calibration is to determine the values of some camera parameters. The staff can use these parameters to map points in a three-dimensional space to the image space, or vice versa.
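For reference, the mapping described above is commonly written with the standard pinhole projection model; the following LaTeX formulation is a generic textbook form added here for clarity, not the patent's own formula:

```latex
% Generic pinhole model: a world point (X, Y, Z) maps to pixel (u, v)
% through the intrinsic matrix K and the extrinsic parameters [R | t];
% s is the projective scale (the depth of the point in the camera frame).
\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \,[R \mid t]
    \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\]
```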
本发明为静态标定方法，标定时车辆静止，标记点的位置也不变。标记点可以是特殊的标记牌，或者标记杆。标记点的GPS位置已知，该位置可以是预先测量出，在标定时直接获取的，也可以是在标定时直接使用RTK设备测量出的。The present invention is a static calibration method: during calibration the vehicle is stationary and the positions of the marker points do not change. A marker point can be a special marker plate or a marker rod. The GPS position of the marker point is known; this position can be measured in advance and simply retrieved at calibration time, or it can be measured directly with an RTK device during calibration.
通常，相机的外部参数有6个，三个轴的旋转参数分别为(ω、δ、θ)，然后把每个轴的3*3旋转矩阵进行组合(即先矩阵之间相乘)，得到集合三个轴旋转信息R，其大小还是3*3；t的三个轴的平移参数(Tx,Ty,Tz)。Generally, the camera has six external parameters: the rotation parameters of the three axes are (ω, δ, θ), and the 3*3 rotation matrix of each axis is combined (the matrices are multiplied together) to obtain the combined three-axis rotation R, whose size is still 3*3; the translation parameters of the three axes, (Tx, Ty, Tz), make up t.
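As an illustration of how the six external parameters combine, the sketch below (an assumption for this write-up, since the patent does not fix an axis order or angle convention) builds R from the three rotation angles and stacks it with t into M = [R|t]:

```python
# Sketch (not from the patent): composing the 3x3 rotation R from the three
# axis angles and assembling M = [R | t].  The rotation order (here Rz @ Ry @ Rx)
# is an assumed convention.
import numpy as np

def rotation_from_angles(omega, delta, theta):
    """Combined rotation from per-axis angles in radians (assumed order Rz*Ry*Rx)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    ry = np.array([[ np.cos(delta), 0, np.sin(delta)],
                   [0, 1, 0],
                   [-np.sin(delta), 0, np.cos(delta)]])
    rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx            # still a 3x3 matrix

def extrinsic_matrix(angles, translation):
    """Stack the rotation and translation into the 3x4 matrix M = [R | t]."""
    r = rotation_from_angles(*angles)
    t = np.asarray(translation, dtype=float).reshape(3, 1)
    return np.hstack([r, t])

# example: all six parameters start at 0, as suggested for the initial guess
M0 = extrinsic_matrix((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```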
实施例1Example 1
本实施例中,通过车辆顶部多个高分辨率摄像机或相机,可实现360度高清晰摄像或拍照,在车辆静止的状态下,相机外部参数的标定方法包括以下步骤:In this embodiment, a plurality of high-resolution cameras or cameras on the top of the vehicle can realize 360-degree high-definition video recording or taking pictures. When the vehicle is stationary, the method of calibrating the external parameters of the camera includes the following steps:
步骤S1：设定一个或多个标记点，由车辆顶部的摄像机，获取包括标记点的图像，识别标记点在图像中的位置(u0,v0)。这里标记点在图像中的位置是观测值，这里的标记点可以是特殊的标记牌，或者标记杆，由于其占地不大并且GPS位置已知，使得获取的信号精准度较高；标记点的GPS位置已知，该位置可以是预先测量出，在标定时直接获取的，也可以是在标定时直接使用RTK设备测量出的。Step S1: Set one or more marker points, acquire an image including the marker points with the camera on top of the vehicle, and identify the position (u0, v0) of each marker point in the image. The position of a marker point in the image is an observed value. The marker point here can be a special marker plate or a marker rod; because it occupies a small area and its GPS position is known, the accuracy of the acquired signal is higher. The GPS position of the marker point is known; it can be measured in advance and simply retrieved at calibration time, or measured directly with an RTK device during calibration.
由于相机的外部参数有6个，为了通过相应的数学方程解算出相应的参数，至少需要6个标记点，通常在实际操作中，为了减小数据噪声对求解的影响，选取10~20个左右的标记点，虽然理论上越多的标记点越有利于求解，但是会增加求解的成本；这里标记点的坐标可以在相机拍摄的图像中，通过人为的逐行逐列的查找相应的坐标，也可通过计算机进行查找。Since the camera has six external parameters, at least six marker points are needed to solve for them through the corresponding mathematical equations. In practice, about 10 to 20 marker points are usually selected to reduce the influence of data noise on the solution; although in theory more marker points are better for the solution, they increase its cost. The coordinates of the marker points can be found in the image captured by the camera by searching row by row and column by column, either manually or by computer.
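A minimal sketch of the row-by-row, column-by-column search mentioned above; the assumption that the marker is the most "red" region of the image is purely illustrative and not from the patent:

```python
# Sketch of one simple way to locate a marker's pixel position (u0, v0) by
# scanning the image; image_rgb is an H x W x 3 numpy array.
import numpy as np

def find_marker_pixel(image_rgb):
    """Return (u0, v0) = (column, row) of the strongest 'red' response."""
    img = image_rgb.astype(np.float32)
    redness = img[:, :, 0] - 0.5 * (img[:, :, 1] + img[:, :, 2])
    row, col = np.unravel_index(np.argmax(redness), redness.shape)
    return col, row                # pixel coordinates (u0, v0)
```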
步骤S2：将标记点的实际GPS位置转换到以IMU为坐标原点的IMU坐标系下，得到点P1。GPS的经纬高系统是个椭球系统，不是正交的，但是旋转平移变换矩阵M[R|t]是基于在正交坐标系的。因此，将GPS点转换到IMU坐标系下表示可以将经纬高坐标转换成笛卡尔坐标(就是正交的坐标系)。这个转换的关系是GIS系统里面的标准转换方法。转换之后，这个点的坐标是一个三维的坐标(X,Y,Z)，表示这个标记点相对于IMU的位置。上述旋转平移变换矩阵M[R|t]，R代表旋转，t代表平移。Step S2: The actual GPS position of the marker point is converted into the IMU coordinate system with the IMU as the coordinate origin, yielding point P1. The GPS longitude-latitude-height system is an ellipsoidal system and is not orthogonal, whereas the rotation and translation transformation matrix M[R|t] is based on an orthogonal coordinate system. Therefore, expressing the GPS point in the IMU coordinate system converts the longitude-latitude-height coordinates into Cartesian coordinates (an orthogonal coordinate system). This conversion is a standard conversion method in GIS systems. After the conversion, the coordinates of this point are a three-dimensional coordinate (X, Y, Z) indicating the position of this marker point relative to the IMU. In the above rotation and translation transformation matrix M[R|t], R represents rotation and t represents translation.
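The standard GIS conversion referred to here is, in one common form, geodetic (longitude/latitude/height) → ECEF → a local Cartesian (ENU) frame centred at the IMU. The sketch below shows that conversion under the assumption of the WGS-84 datum; note that it ignores the IMU attitude R_b, which the patent's loss function additionally accounts for:

```python
# Sketch of a standard GIS conversion for step S2: WGS-84 geodetic
# coordinates -> ECEF -> local ENU Cartesian frame centred at the IMU.
# Datum constants and the ENU convention are assumptions; the patent only
# states that a standard GIS conversion is used.
import numpy as np

WGS84_A = 6378137.0                 # semi-major axis (m)
WGS84_F = 1.0 / 298.257223563       # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def gps_to_imu_frame(marker_llh, imu_llh):
    """Marker GPS position expressed as (X, Y, Z) relative to the IMU (ENU frame)."""
    d = geodetic_to_ecef(*marker_llh) - geodetic_to_ecef(*imu_llh)
    lat, lon = np.radians(imu_llh[0]), np.radians(imu_llh[1])
    enu = np.array([
        [-np.sin(lon),               np.cos(lon),                0.0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])
    return enu @ d                   # point P1 in the IMU-centred frame
```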
步骤S3：通过旋转平移变换矩阵M[R|t]将P1转换到以相机为坐标原点的摄像机坐标系下，得到点P2。Step S3: P1 is transformed into the camera coordinate system with the camera as the coordinate origin by the rotation and translation transformation matrix M[R|t], to obtain point P2.
步骤S4：通过相机自身的投影矩阵将P2转换到图像坐标系下，得到该点在图像上的投影坐标(u1,v1)。这里图像上的投影坐标是估计值，这里的投影矩阵就是摄像机的内参矩阵，该步骤实际上是通过相机的内参矩阵，将三维点坐标P2转换为理论上该点在相机图像上的二维坐标(u1,v1)。Step S4: P2 is transformed into the image coordinate system by the camera's own projection matrix, yielding the projected coordinates (u1, v1) of the point on the image. The projected coordinates on the image are estimated values; the projection matrix here is the camera's internal parameter matrix. This step actually converts the three-dimensional point coordinates P2, through the camera's internal parameter matrix, into the theoretical two-dimensional coordinates (u1, v1) of the point on the camera image.
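A small sketch of steps S3 and S4 together: the point P1 from the IMU frame is moved into the camera frame by M[R|t] and then projected through the intrinsic matrix K; the intrinsic values below are placeholders, not values from the patent:

```python
# Sketch of steps S3 and S4: P1 (IMU frame) -> P2 (camera frame) via M = [R | t],
# then projection to pixel coordinates (u1, v1) through the intrinsic matrix K.
import numpy as np

def imu_to_camera(p1, r, t):
    """Step S3: apply the rotation/translation of M[R|t] to get P2 in the camera frame."""
    return r @ p1 + t

def project_to_image(p2, k):
    """Step S4: perspective projection of the 3-D point P2 with intrinsics K."""
    uvw = k @ p2
    return uvw[0] / uvw[2], uvw[1] / uvw[2]   # (u1, v1), the estimated pixel position

K = np.array([[1000.0,    0.0, 640.0],        # placeholder intrinsics
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
p1 = np.array([2.0, 0.5, 10.0])               # example marker point in the IMU frame
u1, v1 = project_to_image(imu_to_camera(p1, np.eye(3), np.zeros(3)), K)
```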
步骤S5:构建(u0,v0)和(u1,v1)的重投影误差函数,计算重投影误差,该误差与M[R|t]相关;重投影误差函数如下:Step S5: Construct the reprojection error functions of (u0, v0) and (u1, v1), and calculate the reprojection error, which is related to M [R | t]; the reprojection error function is as follows:
[根据细则26改正16.01.2019][Corrected 16.01.2019 in accordance with Rule 26] 重投影误差函数见公式图像WO-DOC-FIGURE-1。The re-projection error function is given by formula image WO-DOC-FIGURE-1 (not reproduced in this text version).
其中K:相机的内参矩阵;Where K: the internal parameter matrix of the camera;
R b,t b:分别为IMU的姿态和位置; R b, t b: respectively the attitude and position of the IMU;
公式图像WO-DOC-FIGURE-2:第i个标记点的GPS坐标;Formula image WO-DOC-FIGURE-2: GPS coordinates of the i-th marker point;
公式图像WO-DOC-FIGURE-3:第i个标记点在图像中的位置，该位置也是观测值，可以是人工标识出的。Formula image WO-DOC-FIGURE-3: the position of the i-th marker point in the image; this position is also an observed value and can be identified manually.
公式图像WO-DOC-FIGURE-4:第i个标记点的坐标(u1,v1)，是估计值。Formula image WO-DOC-FIGURE-4: the coordinate (u1, v1) of the i-th marker point, which is an estimated value.
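The formula images above (WO-DOC-FIGURE-1 to 4) are not reproduced in this text version. A plausible form of the re-projection error, consistent with the variable definitions given but offered only as an assumption, is:

```latex
% Hedged reconstruction of the re-projection error (assumed form; the original
% formula images are not reproduced here).
\[
E(M) \;=\; \sum_{i} \Big\| \, p_i \;-\; \pi\!\Big( K \, M[R \mid t] \,
        T_{b}\big(P_i^{gps};\, R_b, t_b\big) \Big) \Big\|^2
\]
% p_i              : observed position of the i-th marker point in the image
% P_i^{gps}        : GPS coordinates of the i-th marker point
% T_b(.; R_b, t_b) : conversion of the GPS point into the IMU frame using the
%                    IMU attitude R_b and position t_b (step S2)
% K, M[R|t]        : camera intrinsics and the extrinsic matrix being calibrated
% \pi(.)           : perspective division producing the projected pixel (u1, v1)
```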
步骤S6：通过最小二乘法优化M[R|t]的值，重复步骤S3-步骤S5，不断迭代M[R|t]的值直至重投影误差低于指定阈值，此时M[R|t]的取值作为标定的结果。这里通过最小二乘法优化M[R|t]的值，是现有技术，具有多种方式解算出M[R|t]的值，例如可以通过重投影误差函数求导，通过导数的大小判断出M[R|t]优化的值；M[R|t]的初始值为简便起见可以取0，其真实大小大约在10的负六次方附近。Step S6: Optimize the value of M[R|t] by the least-squares method, repeating steps S3 to S5 and iterating the value of M[R|t] until the re-projection error falls below the specified threshold; the value of M[R|t] at that point is taken as the calibration result. Optimizing the value of M[R|t] by least squares is prior art, and there are several ways to solve for M[R|t]; for example, the re-projection error function can be differentiated and the optimized value of M[R|t] judged from the magnitude of the derivative. For simplicity, the initial value of M[R|t] can be taken as 0; its true magnitude is roughly on the order of 10 to the negative sixth power.
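A sketch of this optimisation loop, reusing the rotation_from_angles and project_to_image helpers from the earlier sketches; SciPy's least-squares solver is one possible implementation choice and is not prescribed by the patent:

```python
# Sketch of step S6: least-squares refinement of the six extrinsic parameters
# (three rotation angles, three translations), starting from zero, until the
# re-projection error drops below a threshold.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, k, points_imu, pixels_observed):
    """Per-marker re-projection residuals for the current guess of M[R|t]."""
    r = rotation_from_angles(*params[:3])     # helper from the earlier sketch
    t = params[3:]
    res = []
    for p1, (u0, v0) in zip(points_imu, pixels_observed):
        u1, v1 = project_to_image(r @ p1 + t, k)
        res.extend([u1 - u0, v1 - v0])
    return np.asarray(res)

def calibrate_extrinsics(k, points_imu, pixels_observed, threshold=1.0):
    x0 = np.zeros(6)                          # rotation and translation start at 0
    sol = least_squares(residuals, x0, args=(k, points_imu, pixels_observed))
    err = np.sqrt(np.mean(sol.fun ** 2))      # RMS re-projection error in pixels
    if err > threshold:
        print(f"warning: error {err:.2f} px still above threshold {threshold} px")
    return rotation_from_angles(*sol.x[:3]), sol.x[3:]
```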
实施例2Example 2
在本实施例中，可以在车辆的四周设置有多台高分辨率摄像机或相机，只要实现360度高清晰摄像或拍照即可，相机外部参数的标定方法包括以下步骤：In this embodiment, multiple high-resolution video cameras or still cameras may be arranged around the vehicle, as long as 360-degree high-definition video recording or photography is achieved. The method of calibrating the camera's external parameters includes the following steps:
步骤S1：设定一个或多个标记点，由车辆四周的摄像机或相机，获取包括标记点的图像，识别标记点在图像中的位置(u0,v0)。这里标记点在图像中的位置坐标可以通过已经训练好的神经网络计算得到，神经网络之前通过大量的已有数据(即已知标记点在图像中的位置以及相应的坐标)训练得到。Step S1: Set one or more marker points, acquire images including the marker points with the cameras around the vehicle, and identify the position (u0, v0) of each marker point in the image. Here the position coordinates of the marker points in the image can be computed by a neural network that has already been trained; the neural network was previously trained on a large amount of existing data (that is, images in which the positions of the marker points and their corresponding coordinates are known).
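A minimal sketch of such a marker-locating network (the architecture is an assumption for illustration; the patent does not specify one), trained on images whose marker coordinates are already known:

```python
# Sketch (assumption, not the patent's network): a small convolutional
# regressor that maps an image to the (u0, v0) position of a marker point.
import torch
import torch.nn as nn

class MarkerLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)          # predicts (u0, v0), normalised to [0, 1]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# one training step on labelled data: images with known marker pixel coordinates
model = MarkerLocator()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 3, 128, 128)           # dummy batch standing in for real data
targets = torch.rand(8, 2)                    # normalised ground-truth (u0, v0)
optimiser.zero_grad()
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
optimiser.step()
```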
步骤S2:将标记点的实际GPS位置转换到以IMU为坐标原点的IMU坐标系下,得到点P1。这里GPS位置和IMU坐标都是一个三维坐标,得到的点P1也是一个三维坐标。Step S2: The actual GPS position of the marked point is converted to the IMU coordinate system with the IMU as the origin of the coordinates to obtain the point P1. Here the GPS position and IMU coordinates are both three-dimensional coordinates, and the resulting point P1 is also a three-dimensional coordinate.
步骤S3：通过旋转平移变换矩阵M[R|t]将P1转换到以相机为坐标原点的相机坐标系下，得到点P2。这里的点P2也是一个三维坐标。Step S3: P1 is transformed into the camera coordinate system with the camera as the coordinate origin by the rotation and translation transformation matrix M[R|t], to obtain point P2. The point P2 here is also a three-dimensional coordinate.
步骤S4:通过相机自身的投影矩阵将P2转换到图像坐标系下,得到该点在图像上的投影坐标(u1,v1)。这里的投影矩阵就是相机的内参矩阵,该步骤实际上是通过相机的内参矩阵,将三维点坐标P2转换为理论上该点在相机图像上的二维坐标(u1,v1)。Step S4: P2 is transformed into the image coordinate system by the projection matrix of the camera itself, and the projection coordinates (u1, v1) of the point on the image are obtained. The projection matrix here is the internal parameter matrix of the camera. This step is actually converting the three-dimensional point coordinate P2 into the theoretical two-dimensional coordinate (u1, v1) of the point on the camera image through the internal parameter matrix of the camera.
步骤S5:构建(u0,v0)和(u1,v1)的重投影误差函数,计算重投影误差,该误差与M[R|t]相关;重投影误差函数如下:Step S5: Construct the reprojection error functions of (u0, v0) and (u1, v1), and calculate the reprojection error, which is related to M [R | t]; the reprojection error function is as follows:
[根据细则26改正16.01.2019][Corrected 16.01.2019 in accordance with Rule 26] 重投影误差函数见公式图像WO-DOC-FIGURE-1。The re-projection error function is given by formula image WO-DOC-FIGURE-1 (not reproduced in this text version).
其中K:相机的内参矩阵;Where K: the internal parameter matrix of the camera;
R b,t b:分别为IMU的姿态和位置; R b, t b: respectively the attitude and position of the IMU;
公式图像WO-DOC-FIGURE-2:第i个标记点的GPS坐标;Formula image WO-DOC-FIGURE-2: GPS coordinates of the i-th marker point;
公式图像WO-DOC-FIGURE-3:第i个标记点在图像中的位置，该位置也是观测值，可以是人工标识出的。Formula image WO-DOC-FIGURE-3: the position of the i-th marker point in the image; this position is also an observed value and can be identified manually.
公式图像WO-DOC-FIGURE-4:第i个标记点的坐标(u1,v1)，是估计值。Formula image WO-DOC-FIGURE-4: the coordinate (u1, v1) of the i-th marker point, which is an estimated value.
步骤S6:通过最小二乘法优化M[R|t]的值,重复步骤S3-步骤S5,直至重投影误差低于指定阈值,此时M[R|t]的取值作为标定的结果。Step S6: Optimize the value of M [R | t] by the method of least squares, and repeat steps S3-S5 until the re-projection error is lower than the specified threshold. At this time, the value of M [R | t] is used as the calibration result.
申请人声明，本发明通过上述实施例来说明本发明的详细结构特征，但本发明并不局限于上述详细结构特征，即不意味着本发明必须依赖上述详细结构特征才能实施。所属技术领域的技术人员应该明了，对本发明的任何改进，对本发明所选用部件的等效替换以及辅助部件的增加、具体方式的选择等，均落在本发明的保护范围和公开范围之内。The applicant states that the present invention illustrates its detailed structural features through the foregoing embodiments, but the present invention is not limited to those detailed structural features; that is, it does not mean that the present invention must rely on the above detailed structural features in order to be implemented. Those skilled in the art should understand that any improvement to the present invention, any equivalent replacement of the components used in the present invention, the addition of auxiliary components, the selection of specific methods, and so on, all fall within the scope of protection and disclosure of the present invention.
以上详细描述了本发明的优选实施方式,但是,本发明并不限于上述实施方式中的具体细节,在本发明的技术构思范围内,可以对本发明的技术方案进行多种简单变型,这些简单变型均属于本发明的保护范围。The preferred embodiments of the present invention have been described in detail above. However, the present invention is not limited to the specific details in the above embodiments. Within the scope of the technical concept of the present invention, various simple modifications can be made to the technical solution of the present invention. These simple modifications All belong to the protection scope of the present invention.
另外需要说明的是，在上述具体实施方式中所描述的各个具体技术特征，在不矛盾的情况下，可以通过任何合适的方式进行组合，为了避免不必要的重复，本发明对各种可能的组合方式不再另行说明。In addition, it should be noted that the specific technical features described in the above embodiments can be combined in any suitable manner as long as there is no contradiction. In order to avoid unnecessary repetition, the various possible combinations are not described separately in the present invention.
此外,本发明的各种不同的实施方式之间也可以进行任意组合,只要其不违背本发明的思想,其同样应当视为本发明所公开的内容。In addition, various combinations of the embodiments of the present invention can also be arbitrarily combined, as long as it does not violate the idea of the present invention, it should also be regarded as the content disclosed by the present invention.

Claims (9)

  1. 一种相机外部参数的标定方法,其特征在于,所述方法包括以下步骤:A method for calibrating external parameters of a camera, wherein the method includes the following steps:
    步骤S1:设定一个或多个标记点,通过摄像装置获取包括所述标记点的图像,识别标记点在图像中的位置(u0,v0);Step S1: Set one or more marked points, acquire an image including the marked points through a camera device, and identify the position (u0, v0) of the marked points in the image;
    步骤S2:将标记点的实际GPS位置转换到以IMU为坐标原点的IMU坐标系下,得到点P1;Step S2: The actual GPS position of the marked point is converted to the IMU coordinate system with the IMU as the origin of the coordinate to obtain the point P1;
    步骤S3:通过旋转平移变换矩阵M[R|t]将P1转换到以所述摄像装置为坐标原点的坐标系下,得到点P2;Step S3: P1 is transformed into a coordinate system with the camera device as the coordinate origin by rotating the translation transformation matrix M [R | t] to obtain point P2;
    步骤S4:通过所述摄像装置自身的内参矩阵K将P2转换到所述图像的坐标系下,得到该点在图像上的投影坐标(u1,v1);Step S4: P2 is transformed into the coordinate system of the image by the internal parameter matrix K of the camera device itself to obtain the projection coordinates (u1, v1) of the point on the image;
    步骤S5:构建(u0,v0)和(u1,v1)的重投影误差函数,计算重投影误差,该误差与所述旋转平移变换矩阵M[R|t]相关;其中所述重投影误差函数如下所示:Step S5: construct re-projection error functions of (u0, v0) and (u1, v1), and calculate re-projection errors, which are related to the rotation translation transformation matrix M [R | t]; wherein the re-projection error functions As follows:
    （公式图像PCTCN2018113667-appb-100001）(Formula image PCTCN2018113667-appb-100001)
    其中,K:相机的内参矩阵;Among them, K: the internal parameter matrix of the camera;
    R b,t b:分别为IMU的姿态和位置; R b , t b : respectively the attitude and position of the IMU;
    （公式图像PCTCN2018113667-appb-100002）：第i个标记点的GPS坐标;(Formula image PCTCN2018113667-appb-100002): GPS coordinates of the i-th marker point;
    （公式图像PCTCN2018113667-appb-100003）：第i个标记点在图像中的位置;(Formula image PCTCN2018113667-appb-100003): The position of the i-th marker point in the image;
    步骤S6:优化M[R|t]的值,重复步骤S3-S5,直至重投影误差低于指定阈值,此时M[R|t]的取值作为标定的结果;Step S6: Optimize the value of M [R | t], and repeat steps S3-S5 until the reprojection error is lower than the specified threshold, at which time the value of M [R | t] is used as the calibration result;
    所述旋转平移变换矩阵M[R|t]中R代表旋转,t代表平移。In the rotation and translation transformation matrix M [R | t], R represents rotation, and t represents translation.
  2. 根据权利要求1所述的方法,其特征在于:步骤S1中,所述摄像装置设置在车顶或车辆的四周。The method according to claim 1, wherein in step S1, the camera device is disposed on a roof of a vehicle or around a vehicle.
  3. 根据权利要求1-2任一项所述的方法,其特征在于:所述摄像装置包括多台摄像机,各摄像机获取的图像之间有重叠区域。The method according to any one of claims 1-2, wherein the camera device comprises a plurality of cameras, and there is an overlapping area between the images acquired by each camera.
  4. 根据权利要求1-3任一项所述的方法,其特征在于:步骤S1中,所述标记点在图像中的位置(u0,v0)通过人为或计算机逐行逐列查找得到。The method according to any one of claims 1-3, characterized in that, in step S1, the position (u0, v0) of the marked point in the image is obtained by human or computer by row and column.
  5. 根据权利要求1-4任一项所述的方法，其特征在于：步骤S1中，所述标记点在图像中的位置(u0,v0)通过训练好的神经网络得到。The method according to any one of claims 1-4, wherein in step S1, the position (u0, v0) of the marked point in the image is obtained through a trained neural network.
  6. 根据权利要求1-6中任一项所述的方法,其特征在于:在步骤S6中,通过最小二乘法来优化M[R|t]的值。The method according to any one of claims 1-6, wherein in step S6, the value of M [R | t] is optimized by a least square method.
  7. 根据权利要求1-7中任一项所述的方法,其特征在于:所述旋转R、所述平移t的初始值都取0。The method according to any one of claims 1 to 7, wherein the initial values of the rotation R and the translation t are both taken as 0.
  8. 根据权利要求1-8中任一项所述的方法,其特征在于:步骤S1中,所述标记点的数为10-20个。The method according to any one of claims 1-8, wherein in step S1, the number of the marked points is 10-20.
  9. 根据权利要求1-9中任一项所述的方法,其特征在于:步骤S2中,所述标记点的实际GPS位置是预先测量出,或者是在标定时直接使用RTK设备测量出的。The method according to any one of claims 1-9, wherein in step S2, the actual GPS position of the marked point is measured in advance, or directly measured using an RTK device during calibration.
PCT/CN2018/113667 2018-09-30 2018-11-02 Static calibration method for external parameters of camera WO2020062434A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811154021.3 2018-09-30
CN201811154021.3A CN110969663B (en) 2018-09-30 2018-09-30 Static calibration method for external parameters of camera

Publications (1)

Publication Number Publication Date
WO2020062434A1 true WO2020062434A1 (en) 2020-04-02

Family

ID=69952814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113667 WO2020062434A1 (en) 2018-09-30 2018-11-02 Static calibration method for external parameters of camera

Country Status (2)

Country Link
CN (1) CN110969663B (en)
WO (1) WO2020062434A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116529A (en) * 2020-09-23 2020-12-22 浙江浩腾电子科技股份有限公司 PTZ camera-based conversion method for GPS coordinates and pixel coordinates
CN112465920A (en) * 2020-12-08 2021-03-09 广州小鹏自动驾驶科技有限公司 Vision sensor calibration method and device
CN112489111A (en) * 2020-11-25 2021-03-12 深圳地平线机器人科技有限公司 Camera external parameter calibration method and device and camera external parameter calibration system
CN112767498A (en) * 2021-02-03 2021-05-07 苏州挚途科技有限公司 Camera calibration method and device and electronic equipment
CN113284193A (en) * 2021-06-22 2021-08-20 智道网联科技(北京)有限公司 Calibration method, device and equipment of RS equipment
CN112037285B (en) * 2020-08-24 2024-03-29 上海电力大学 Camera calibration method based on Levy flight and variation mechanism gray wolf optimization

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419420B (en) * 2020-09-17 2022-01-28 腾讯科技(深圳)有限公司 Camera calibration method and device, electronic equipment and storage medium
CN112611361A (en) * 2020-12-08 2021-04-06 华南理工大学 Method for measuring installation error of camera of airborne surveying and mapping pod of unmanned aerial vehicle
CN112802123B (en) * 2021-01-21 2023-10-27 北京科技大学设计研究院有限公司 Binocular linear array camera static calibration method based on stripe virtual target
CN112837381A (en) * 2021-02-09 2021-05-25 上海振华重工(集团)股份有限公司 Camera calibration method, system and equipment suitable for driving equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010029110A1 (en) * 2009-05-29 2010-12-02 Mori Seiki Co., Ltd., Yamatokoriyama-shi Calibration method and calibration device
CN103049912A (en) * 2012-12-21 2013-04-17 浙江大学 Random trihedron-based radar-camera system external parameter calibration method
CN103593836A (en) * 2012-08-14 2014-02-19 无锡维森智能传感技术有限公司 A Camera parameter calculating method and a method for determining vehicle body posture with cameras
CN103745452A (en) * 2013-11-26 2014-04-23 理光软件研究所(北京)有限公司 Camera external parameter assessment method and device, and camera external parameter calibration method and device
CN103985118A (en) * 2014-04-28 2014-08-13 无锡观智视觉科技有限公司 Parameter calibration method for cameras of vehicle-mounted all-round view system
CN105547228A (en) * 2015-12-09 2016-05-04 中国航空工业集团公司西安飞机设计研究所 Angle displacement measurement device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010045766A (en) * 2008-07-16 2010-02-25 Kodaira Associates Kk System for calibrating urban landscape image information
CN104766058B (en) * 2015-03-31 2018-04-27 百度在线网络技术(北京)有限公司 A kind of method and apparatus for obtaining lane line
US10338209B2 (en) * 2015-04-28 2019-07-02 Edh Us Llc Systems to track a moving sports object
US9892558B2 (en) * 2016-02-19 2018-02-13 The Boeing Company Methods for localization using geotagged photographs and three-dimensional visualization
CN105930819B (en) * 2016-05-06 2019-04-12 西安交通大学 Real-time city traffic lamp identifying system based on monocular vision and GPS integrated navigation system
CN107067437B (en) * 2016-12-28 2020-02-21 中国航天电子技术研究院 Unmanned aerial vehicle positioning system and method based on multi-view geometry and bundle adjustment
CN108279428B (en) * 2017-01-05 2020-10-16 武汉四维图新科技有限公司 Map data evaluating device and system, data acquisition system, acquisition vehicle and acquisition base station
CN106803273B (en) * 2017-01-17 2019-11-22 湖南优象科技有限公司 A kind of panoramic camera scaling method
CN108364252A (en) * 2018-01-12 2018-08-03 深圳市粒视界科技有限公司 A kind of correction of more fish eye lens panorama cameras and scaling method
CN108288294A (en) * 2018-01-17 2018-07-17 视缘(上海)智能科技有限公司 A kind of outer ginseng scaling method of a 3D phases group of planes
CN108230381B (en) * 2018-01-17 2020-05-19 华中科技大学 Multi-view stereoscopic vision method combining space propagation and pixel level optimization

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010029110A1 (en) * 2009-05-29 2010-12-02 Mori Seiki Co., Ltd., Yamatokoriyama-shi Calibration method and calibration device
CN103593836A (en) * 2012-08-14 2014-02-19 无锡维森智能传感技术有限公司 A Camera parameter calculating method and a method for determining vehicle body posture with cameras
CN103049912A (en) * 2012-12-21 2013-04-17 浙江大学 Random trihedron-based radar-camera system external parameter calibration method
CN103745452A (en) * 2013-11-26 2014-04-23 理光软件研究所(北京)有限公司 Camera external parameter assessment method and device, and camera external parameter calibration method and device
CN103985118A (en) * 2014-04-28 2014-08-13 无锡观智视觉科技有限公司 Parameter calibration method for cameras of vehicle-mounted all-round view system
CN105547228A (en) * 2015-12-09 2016-05-04 中国航空工业集团公司西安飞机设计研究所 Angle displacement measurement device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037285B (en) * 2020-08-24 2024-03-29 上海电力大学 Camera calibration method based on Levy flight and variation mechanism gray wolf optimization
CN112116529A (en) * 2020-09-23 2020-12-22 浙江浩腾电子科技股份有限公司 PTZ camera-based conversion method for GPS coordinates and pixel coordinates
CN112489111A (en) * 2020-11-25 2021-03-12 深圳地平线机器人科技有限公司 Camera external parameter calibration method and device and camera external parameter calibration system
CN112489111B (en) * 2020-11-25 2024-01-30 深圳地平线机器人科技有限公司 Camera external parameter calibration method and device and camera external parameter calibration system
CN112465920A (en) * 2020-12-08 2021-03-09 广州小鹏自动驾驶科技有限公司 Vision sensor calibration method and device
CN112767498A (en) * 2021-02-03 2021-05-07 苏州挚途科技有限公司 Camera calibration method and device and electronic equipment
CN113284193A (en) * 2021-06-22 2021-08-20 智道网联科技(北京)有限公司 Calibration method, device and equipment of RS equipment
CN113284193B (en) * 2021-06-22 2024-02-02 智道网联科技(北京)有限公司 Calibration method, device and equipment of RS equipment

Also Published As

Publication number Publication date
CN110969663B (en) 2023-10-03
CN110969663A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
WO2020062434A1 (en) Static calibration method for external parameters of camera
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
US10127721B2 (en) Method and system for displaying and navigating an optimal multi-dimensional building model
CN108168521A (en) One kind realizes landscape three-dimensional visualization method based on unmanned plane
CN107179086A (en) A kind of drafting method based on laser radar, apparatus and system
CN107504957A (en) The method that three-dimensional terrain model structure is quickly carried out using unmanned plane multi-visual angle filming
CN107194989A (en) The scene of a traffic accident three-dimensional reconstruction system and method taken photo by plane based on unmanned plane aircraft
CN103426165A (en) Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds
CA2705809A1 (en) Method and apparatus of taking aerial surveys
CN100417231C (en) Three-dimensional vision semi-matter simulating system and method
CN103226838A (en) Real-time spatial positioning method for mobile monitoring target in geographical scene
CN101545776B (en) Method for obtaining digital photo orientation elements based on digital map
CN103310487B (en) A kind of universal imaging geometric model based on time variable generates method
KR20200110120A (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN110706273B (en) Real-time collapse area measurement method based on unmanned aerial vehicle
CN111612901A (en) Extraction feature and generation method of geographic information image
CN115962757A (en) Unmanned aerial vehicle surveying and mapping method, system and readable storage medium
Koeva 3D modelling and interactive web-based visualization of cultural heritage objects
CN113808269A (en) Map generation method, positioning method, system and computer readable storage medium
CN104133874B (en) Streetscape image generating method based on true color point cloud
CN111964665B (en) Intelligent vehicle positioning method and system based on vehicle-mounted all-around image and storage medium
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN112348941A (en) Real-time fusion method and device based on point cloud and image data
CN116823966A (en) Internal reference calibration method and device for camera, computer equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18935019

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18935019

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18935019

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18935019

Country of ref document: EP

Kind code of ref document: A1