CN108447097A - Depth camera calibration method, device, electronic equipment and storage medium - Google Patents
Depth camera calibration method, device, electronic equipment and storage medium
- Publication number
- CN108447097A CN108447097A CN201810179738.7A CN201810179738A CN108447097A CN 108447097 A CN108447097 A CN 108447097A CN 201810179738 A CN201810179738 A CN 201810179738A CN 108447097 A CN108447097 A CN 108447097A
- Authority
- CN
- China
- Prior art keywords
- camera
- pose
- tag
- label
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10004—Still image; Photographic image
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10024—Color image
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
- Stereoscopic And Panoramic Photography (AREA)
Abstract
Description
Technical Field
Embodiments of the present invention relate to machine vision technology, and in particular to a depth camera calibration method, device, electronic device, and storage medium.
Background
With the development of robot navigation, virtual reality, and augmented reality, RGB-D cameras (i.e., depth cameras) are widely used in robot navigation, static scene reconstruction, dynamic human body reconstruction, and similar tasks. An RGB-D camera combines a traditional RGB camera with a depth sensor, offering high precision, small size, passive operation, and a large amount of rich information.
At present, because the field of view (FoV) of a single RGB-D camera is limited, navigation with a single RGB-D camera cannot capture all surrounding information at once. A small field of view also limits 3D reconstruction: for example, full-body or multi-object models often cannot be obtained when reconstructing dynamic objects, and dynamic and static objects within the field of view cannot be reconstructed simultaneously.
A multi-camera system, in turn, requires accurate extrinsic calibration of the cameras. Current methods are generally based on specific calibration targets; they are relatively cumbersome, require a large field-of-view overlap between adjacent cameras, and cannot handle cases where the overlap is very small. Calibration methods based on simultaneous localization and mapping (SLAM) move the camera along a preset trajectory to collect images and then process the images offline, so they cannot quickly calibrate a 360° panoramic RGB-D rig online and all at once.
Summary of the Invention
Embodiments of the present invention provide a depth camera calibration method, device, electronic device, and storage medium, to achieve real-time online self-calibration of a panoramic multi-camera system without human intervention.
In a first aspect, an embodiment of the present invention provides a depth camera calibration method, including:
controlling at least two depth cameras in a panoramic depth camera system to capture images synchronously while in motion, where each depth camera is assigned a corresponding tag;
obtaining the pose of at least one first-tag camera at the time it captures each frame of image;
if a second-tag camera and the first-tag camera have a historical view overlap, computing the relative pose of the second-tag camera and the first-tag camera at the same moment, according to the images corresponding to the historical view overlap and the poses of the first-tag camera when capturing each frame.
In a second aspect, an embodiment of the present invention further provides a depth camera calibration device, including:
a camera control module, configured to control at least two depth cameras in a panoramic depth camera system to capture images synchronously while in motion, where each depth camera is assigned a corresponding tag;
a pose acquisition module, configured to obtain the pose of at least one first-tag camera at the time it captures each frame of image;
a relative pose calculation module, configured to, when a second-tag camera and the first-tag camera have a historical view overlap, compute the relative pose of the second-tag camera and the first-tag camera at the same moment according to the images corresponding to the historical view overlap and the poses of the first-tag camera when capturing each frame.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
a panoramic depth camera system including at least two depth cameras, the at least two depth cameras covering a panoramic field of view and being used to capture images;
when the one or more programs are executed by the one or more processors, the one or more processors implement the depth camera calibration method described in any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the depth camera calibration method described in any embodiment of the present invention.
Based on a first-tag camera preset in the panoramic depth camera system, embodiments of the present invention use the historical view overlap between each second-tag camera and the first-tag camera, while the system captures images in motion, to determine their relative poses, thereby achieving fast online self-calibration of multiple cameras. The method needs no calibration target and no fixed, large field-of-view overlap between adjacent cameras: when the rig is assembled, adjacent cameras may have only a small overlapping view or none at all, and the motion of the panoramic depth camera system lets different cameras capture images with overlapping historical views, from which calibration can proceed. The computation is light and can run online on a CPU.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the depth camera calibration method provided in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the panoramic depth camera system provided in Embodiment 1 of the present invention;
Fig. 3 is a flowchart of the depth camera calibration method provided in Embodiment 2 of the present invention;
Fig. 4 is a flowchart of the depth camera calibration method provided in Embodiment 3 of the present invention;
Fig. 5 is a flowchart of obtaining the pose of the first-tag camera provided in Embodiment 4 of the present invention;
Fig. 6 is a structural block diagram of the depth camera calibration device provided in Embodiment 5 of the present invention;
Fig. 7 is a schematic structural diagram of the electronic device provided in Embodiment 6 of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the complete structure.
Embodiment 1
Fig. 1 is a flowchart of the depth camera calibration method provided in Embodiment 1 of the present invention. This embodiment is applicable to multi-camera self-calibration; calibration here means computing the relative poses between cameras. The method may be executed by a depth camera calibration device or by an electronic device. The depth camera calibration device may be implemented in software and/or hardware, for example by a central processing unit (CPU), with the CPU handling both camera control and calibration; further, the device may be integrated into a portable mobile electronic device. As shown in Fig. 1, the method specifically includes:
S101: control at least two depth cameras in the panoramic depth camera system to capture images synchronously while in motion, where each depth camera is assigned a corresponding tag.
The panoramic depth camera system (camera system for short) includes at least two depth (RGB-D) cameras that together cover a 360-degree panoramic field of view. In practice, the camera size and number of cameras are chosen according to the specific requirements, and the cameras are fixed on a platform (for example, a rigid structural part) so that the field-of-view coverage requirement is met, completing the initial assembly of the panoramic depth camera system. Specifically, the number of cameras is determined from the single-camera field of view and the required field of view: the sum of the fields of view of all cameras must exceed the required field of view. Taking identical cameras as an example, the number of cameras n must satisfy n × Fov > α, where Fov is the single-camera field of view and α is the required field of view of the rig. For example, with α = 360°, a horizontal single-camera field of view of 65°, and a vertical field of view of 80°, n = 5 or n = 6 may be chosen; considering the vertical field-of-view requirement, six cameras may be used. The cameras are then laid out according to their size and number. For example, for RGB-D cameras 10–15 cm long, 3–5 cm wide, 3–5 cm high, with a resolution of 640×480, a regular hexagonal prism with a base edge of 5 cm may be chosen as the axis, and each camera is fixed to the axis with its lens facing outward, as shown in Fig. 2. It should be noted that, depending on the specific usage requirements of the system, adjacent cameras may have no view overlap or only a small overlap, for example one or two degrees.
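The camera-count condition n × Fov > α above can be sketched as a small helper; the function name is illustrative, and the strict inequality follows the text:

```python
import math

def min_cameras(fov_deg: float, alpha_deg: float = 360.0) -> int:
    """Smallest number of cameras n with n * fov_deg strictly greater than alpha_deg."""
    return math.floor(alpha_deg / fov_deg) + 1
```

With the numbers from the text, `min_cameras(65.0)` gives 6 (horizontal FoV) and `min_cameras(80.0)` gives 5 (vertical FoV), matching the stated choice of n = 5 or n = 6.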
The depth cameras in the panoramic depth camera system capture images synchronously; synchronization may be achieved in hardware or in software. In hardware synchronization, a single signal (such as a rising-edge signal) triggers all cameras simultaneously to capture an image at the same instant. In software synchronization, each image is time-stamped as the images captured by the cameras are buffered, and the images with the closest time stamps are regarded as captured at the same moment; that is, a buffer region is maintained and, each time, the frames with the closest time stamps across the cameras are output.
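The software-synchronization scheme described above (a buffer per camera, outputting the frames whose time stamps are closest together) can be sketched as follows; the function name and the skew tolerance are illustrative assumptions, not the patent's exact procedure:

```python
from collections import deque

def sync_frames(buffers, max_skew=0.010):
    """Pick one frame per camera so that their timestamps are closest together.

    buffers: list of deques of (timestamp_seconds, frame), one deque per camera.
    Returns the matched frames, or None if a buffer is empty or the best
    group is more spread out than max_skew.
    """
    if any(not b for b in buffers):
        return None
    # Use the newest frame of the first camera as the reference instant.
    ref_ts, _ = buffers[0][-1]
    group = []
    for buf in buffers:
        # Frame in this buffer whose timestamp is closest to the reference.
        ts, frame = min(buf, key=lambda tf: abs(tf[0] - ref_ts))
        group.append((ts, frame))
    skew = max(ts for ts, _ in group) - min(ts for ts, _ in group)
    return [f for _, f in group] if skew <= max_skew else None
```

In a real system the chosen frames would also be popped from their buffers; that bookkeeping is omitted here.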
For online camera self-calibration, the panoramic depth camera system must be kept in motion so that the captured images serve as observations. For example, a robot carrying the panoramic depth camera system, or a user holding it, moves freely in an ordinary room, increasing rotational motion as much as possible during the movement so as to increase the historical view overlap between cameras and obtain more observations for calibration.
Each depth camera is assigned a corresponding attribute tag used to distinguish the camera's role: for example, a first tag indicates that the camera is a reference camera and a second tag that it is a non-reference camera. For simplicity, the tag value may be 0 or 1, e.g., 0 for a non-reference camera and 1 for the reference camera.
S102: obtain the pose of at least one first-tag camera at the time it captures each frame.
The camera role can be determined from the camera tag. Among the at least two depth cameras of the panoramic depth camera system, at least one is a reference camera; any camera in the system may be preset as the reference camera, and the first-tag camera is the reference camera. Note that if multiple reference cameras are preset, the relative poses between them must be known accurately; for calibration accuracy, a single reference camera is generally preset. For the reference camera in the camera system, its pose at the time each frame is captured must be obtained in real time throughout the calibration process. Here the pose refers to the relative pose change of the reference camera between capturing the current frame and capturing the previous frame. A pose consists of a position (translation matrix T) and an orientation (rotation matrix R), involving six degrees of freedom X, Y, Z, α, β, and γ, where X, Y, and Z are the parameters of the camera position and α, β, and γ those of the camera orientation.
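The six-degree-of-freedom pose (R, T) described above is conveniently handled as a 4×4 homogeneous transform, which also makes the per-frame pose change a single matrix product. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def pose_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def pose_change(T_prev: np.ndarray, T_curr: np.ndarray) -> np.ndarray:
    """Relative pose change between two frames: inv(T_prev) @ T_curr."""
    return np.linalg.inv(T_prev) @ T_curr
```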
S103: if a second-tag camera and the first-tag camera have a historical view overlap, compute the relative pose of the second-tag camera and the first-tag camera at the same moment, according to the images corresponding to the historical view overlap and the poses of the first-tag camera when capturing each frame.
As the panoramic depth camera system moves, each second-tag camera will eventually share a historical view overlap with the first-tag camera, from which the relative pose of each second-tag camera and the first-tag camera at the same moment can be obtained; that is, the relative poses between all cameras are obtained and the real-time online camera self-calibration is complete.
It should be noted that throughout the calibration process the depth cameras to be calibrated are all in motion. The moving cameras capture images synchronously in real time and maintain the keyframes of each camera. For the reference camera, single-camera pose estimation is performed on each captured frame to obtain the camera pose at capture time. Meanwhile, for the non-reference cameras, no single-camera pose estimation is performed; their frames serve as observations used to judge whether a historical view overlap with any reference camera has occurred, and if so, the relative pose of that non-reference camera and the corresponding reference camera at the same moment can be computed.
The depth camera calibration method of this embodiment, based on the first-tag camera preset in the panoramic depth camera system, uses the historical view overlap between the second-tag cameras and the first-tag camera, while the system captures images in motion, to determine their relative poses, achieving fast online self-calibration of multiple cameras. The method needs no calibration target and no fixed, large field-of-view overlap between adjacent cameras: when the rig is assembled, adjacent cameras may have only a small overlapping view or none at all, and the motion of the system lets different cameras capture images with overlapping historical views, from which calibration proceeds. The computation is light and can run online on a CPU. The depth camera calibration method of this embodiment is suitable for applications such as indoor robot navigation and 3D scene reconstruction.
Embodiment 2
On the basis of the above embodiment, this embodiment further refines the determination of historical view overlap and the computation of relative poses between cameras.
For each depth camera to be calibrated in the camera system, whenever a frame is captured it is determined whether that frame is a keyframe, which facilitates detecting historical view overlap; keyframes are also needed if loop closure optimization is performed later. Specifically, while obtaining the poses of the at least one first-tag camera for each frame, the method may further include: matching feature points between the current frame captured by each depth camera and that camera's previous keyframe to obtain the transformation matrix between the two frames; and, if the transformation matrix is greater than or equal to a preset transformation threshold, determining the current frame to be a keyframe of the corresponding depth camera and storing it.
The first frame captured by each depth camera is a keyframe by default; each subsequently captured frame is compared with that camera's most recent keyframe to decide whether it is a keyframe. The preset transformation threshold is set in advance according to the motion of the depth camera during capture; for example, if the pose changes substantially between two adjacent frames, the threshold is set larger. Feature point matching may use an existing algorithm, for example feature matching on the color images captured by the RGB-D camera with the sparse Oriented FAST and Rotated BRIEF (ORB) algorithm, or dense registration with a direct method.
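The keyframe rule above compares the inter-frame transformation against a threshold. A 4×4 transform must first be reduced to a scalar; one common choice is rotation angle plus translation norm. The sketch below makes that assumption (it is not necessarily the patent's exact measure), and the names and default threshold are illustrative:

```python
import numpy as np

def transform_magnitude(T: np.ndarray) -> float:
    """Scalar size of a rigid transform: rotation angle (rad) plus translation norm."""
    R, t = T[:3, :3], T[:3, 3]
    # Rotation angle recovered from the trace of R.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return float(angle + np.linalg.norm(t))

def is_keyframe(T_to_last_keyframe: np.ndarray, threshold: float = 0.3) -> bool:
    """Declare a new keyframe once the camera has moved far enough from the last one."""
    return transform_magnitude(T_to_last_keyframe) >= threshold
```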
Fig. 3 is a flowchart of the depth camera calibration method provided in Embodiment 2 of the present invention. As shown in Fig. 3, the method includes:
S301: control at least two depth cameras in the panoramic depth camera system to capture images synchronously while in motion, where each depth camera is assigned a corresponding tag.
S302: obtain the pose of at least one first-tag camera at the time it captures each frame.
S303: match feature points between the current frame captured by a second-tag camera and the historical keyframes of the at least one first-tag camera; if some historical keyframe and the current frame reach a matching threshold, determine that the second-tag camera and the corresponding first-tag camera have a historical view overlap.
Whenever the second-tag camera captures a new frame, its feature points are matched against the keyframes stored under the first-tag camera (the historical keyframes); if the matching threshold is reached, a historical view overlap is considered to have occurred. Feature point matching may use an existing algorithm, for example the sparse ORB algorithm on the color images captured by the RGB-D camera, or dense registration with a direct method. The matching threshold may be a preset number of matched feature points.
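The overlap test above (enough feature matches against some stored keyframe) can be sketched with ORB-style binary descriptors and mutual nearest-neighbour matching under Hamming distance. This is an illustrative pure-numpy sketch, not the patent's implementation; the thresholds and names are assumptions:

```python
import numpy as np

def match_count(desc_a: np.ndarray, desc_b: np.ndarray, max_dist: int = 40) -> int:
    """Count mutual nearest-neighbour matches between two sets of binary
    descriptors (uint8 rows, ORB-style), using Hamming distance."""
    # Pairwise Hamming distances via XOR + bit counting.
    x = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])
    d = np.unpackbits(x, axis=2).sum(axis=2)
    ab = d.argmin(1)  # best match in b for each descriptor in a
    ba = d.argmin(0)  # best match in a for each descriptor in b
    return sum(1 for i, j in enumerate(ab) if ba[j] == i and d[i, j] <= max_dist)

def overlaps(desc_new, keyframe_descs, min_matches: int = 30) -> bool:
    """True if the new frame matches any stored historical keyframe well enough."""
    return any(match_count(desc_new, kd) >= min_matches for kd in keyframe_descs)
```

In practice the matches would additionally be verified geometrically (the outlier removal of step S304) before declaring an overlap.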
S304: remove outliers from the feature point correspondences between the current frame captured by the second-tag camera and the corresponding historical keyframe, and compute the relative positional relationship between the current frame and that keyframe from the remaining correspondences.
S305: from the relative positional relationship, compute the transformation between the pose of the second-tag camera when capturing the current frame and the pose of the first-tag camera when capturing the corresponding historical keyframe.
S306: from this transformation and the poses of the first-tag camera between capturing the corresponding historical keyframe and capturing the current frame, compute the relative pose of the second-tag camera and the first-tag camera at the current frame.
The random sample consensus (RANSAC) algorithm may be used to remove the outliers, and the relative positional relationship between the two view-overlapping frames is computed from the remaining feature point correspondences; from this relationship, the corresponding camera pose transformation can be obtained. Because the pose of the first-tag camera is estimated for every captured frame, its poses from the historical keyframe involved in the overlap up to the current frame are all known, from which the relative pose of the second-tag camera and the first-tag camera at the current frame (i.e., the moment the historical view overlap occurs) can be derived. The relative pose likewise has six degrees of freedom, involving a rotation matrix R and a translation matrix T.
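Since depth is available on both sides, the RANSAC step above can be sketched for 3D-3D correspondences: sample minimal 3-point sets, fit a rigid transform with the Kabsch algorithm, keep the largest inlier set, and refit on it. This is a sketch under those assumptions (the patent does not fix the minimal solver); names and thresholds are illustrative:

```python
import numpy as np

def fit_rigid(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (Kabsch) mapping points P onto Q (both Nx3)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def ransac_rigid(P, Q, iters=200, tol=0.05, seed=0):
    """RANSAC over minimal 3-point samples; refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid(P[best_inliers], Q[best_inliers])
```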
As an example, suppose the panoramic depth camera system includes three depth cameras A, B, and C, where camera A is the first-tag camera and cameras B and C are second-tag cameras. The three cameras capture images synchronously. At frame 10, camera B's frame 10 and camera A's frame 5 (a historical keyframe) have a historical view overlap; from the feature point matches between these two overlapping images, the transformation between camera B's pose at frame 10 and camera A's pose at frame 5 can be computed. Since camera A has been performing pose estimation all along, its poses from frame 5 to frame 10 are recorded, and by chaining them frame by frame the relative pose of camera B with respect to camera A at frame 10 is obtained.
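The frame-by-frame derivation in the example above amounts to composing homogeneous transforms: if T_match expresses camera B's current pose in the coordinates of camera A's keyframe, and A's incremental poses from that keyframe to the current frame are chained into ΔA, then the sought extrinsic is inv(ΔA) · T_match. A minimal sketch (function names are illustrative):

```python
import numpy as np

def chain(deltas):
    """Compose per-frame incremental poses into one accumulated transform."""
    T = np.eye(4)
    for d in deltas:
        T = T @ d
    return T

def extrinsic_b_to_a(T_match: np.ndarray, deltas_a) -> np.ndarray:
    """Relative pose of camera B w.r.t. camera A at the current frame.

    T_match:  B's current pose expressed in A's keyframe coordinates
              (from matching B's current image against A's stored keyframe).
    deltas_a: A's frame-to-frame pose changes from that keyframe to now.
    """
    return np.linalg.inv(chain(deltas_a)) @ T_match
```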
The depth camera calibration method of this embodiment uses historical keyframes to detect that the views of a second-label camera and a first-label camera have historically overlapped, computes the pose transformation between the two cameras from the two overlapping frames, and then uses the first-label camera's poses between the matched historical keyframe and the current frame to deduce the relative pose of the two cameras at the same instant. The computation is simple and supports fast online calibration.
Optionally, to obtain a more accurate inter-camera relative pose and improve calibration accuracy, the computed relative poses can be globally optimized after the relative pose between a second-label camera and a first-label camera at the same instant has been computed. Specifically: if the current frame captured synchronously by a depth camera whose relative pose has already been computed is a keyframe, loop-closure detection is performed against that keyframe and the historical keyframes of the depth cameras whose relative poses have been computed; if the loop closure succeeds (i.e., a matching historical keyframe is found), the relative pose between the corresponding depth cameras is optimized and updated from the current keyframe and the matched historical keyframe.
Loop-closure detection determines whether a depth camera has moved back to a previously visited place, or to a place whose view overlaps substantially with a historical view. It is performed on keyframes: for each frame a depth camera captures, the method first decides whether that frame is a keyframe; if it is, loop-closure detection is performed, otherwise the method waits for the next keyframe. A frame can be judged a keyframe by matching its feature points against the camera's previous keyframe to obtain the transformation matrix between the two frames; if the transformation is greater than or equal to a preset threshold, the current frame is taken as a new keyframe for that camera.
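A keyframe test of the kind just described can be sketched as a threshold on the relative transform to the previous keyframe; the threshold values below are assumptions for illustration, not values taken from the patent:

```python
import numpy as np

def is_keyframe(T_rel, trans_thresh=0.3, rot_thresh_deg=15.0):
    """Decide whether the current frame is a keyframe from the relative
    transform T_rel (4x4) to the previous keyframe. Thresholds are placeholders."""
    t_norm = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle from the trace of R: cos(theta) = (trace(R) - 1) / 2.
    cos_theta = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_theta))
    return t_norm >= trans_thresh or angle_deg >= rot_thresh_deg

# A frame that barely moved is not a keyframe; a large motion is.
T_small = np.eye(4); T_small[:3, 3] = [0.05, 0.0, 0.0]
T_large = np.eye(4); T_large[:3, 3] = [0.5, 0.0, 0.0]
```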
Because inter-camera relative poses are involved, loop-closure detection in this embodiment compares the current keyframe against the historical keyframes of every depth camera in the system whose relative pose has been computed, not only against the camera's own historical keyframes. If the loop closure succeeds, the relative pose between the corresponding depth cameras is optimized and updated from the current keyframe and the matched historical keyframe, reducing accumulated error and improving the accuracy of the inter-camera relative poses.
Specifically, loop-closure detection between the current keyframe and a historical keyframe can be performed by matching their ORB feature points; a high matching score indicates a successful loop closure. Preferably, one or more historical keyframes with high matching scores are selected, according to their degree of match with the current keyframe, for optimizing and updating the relative pose between the corresponding cameras. Note that if a matched historical keyframe belongs to the same depth camera, that camera's own pose is optimized from the current and historical keyframes instead. This optimization process can start as soon as the relative pose of one camera pair has been obtained, and the computed inter-camera relative poses are updated as the cameras move and capture images. Once the relative poses between all cameras in the system have been computed and a preset condition is met (for example, the number of optimization rounds for the inter-camera relative poses reaches a preset count, or a preset error requirement is satisfied), the optimization stops and the final, more accurate calibration parameters are obtained.
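Selecting the best-matching historical keyframes could look like the following sketch; the score dictionary, score scale, and thresholds are hypothetical:

```python
def select_loop_candidates(scores, min_score=0.6, top_k=3):
    """scores: keyframe id -> ORB match score against the current keyframe.
    Keep ids whose score clears min_score, best first, at most top_k."""
    ranked = sorted((kf for kf, s in scores.items() if s >= min_score),
                    key=lambda kf: scores[kf], reverse=True)
    return ranked[:top_k]

candidates = select_loop_candidates(
    {"kf3": 0.91, "kf7": 0.72, "kf12": 0.40, "kf15": 0.66, "kf20": 0.81})
```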
As an example, again with the panoramic depth camera system containing three depth cameras A, B and C: after the relative pose between cameras B and A at the same instant has been computed, the method continues to check whether cameras C and A exhibit a historical view overlap (to determine their relative pose), while simultaneously optimizing and updating the B–A relative pose from keyframes captured by camera A and/or camera B. Once the relative pose between cameras C and A at the same instant is obtained, the relative pose between cameras B and C can be further deduced, giving the relative poses among all cameras, which are then optimized and updated. At the current frame instant the three cameras capture images synchronously; if image a from camera A and image b from camera B are judged to be keyframes while image c from camera C is not, loop-closure detection is run on keyframes a and b. If both loop closures succeed, optimization updates are performed from keyframes a and b respectively; the relative-pose computation during the optimization update is the same as the initial relative-pose computation and is not repeated here.
This embodiment can optimize and update the computed inter-camera relative poses in real time from the images the cameras capture, improving calibration accuracy.
Embodiment 3
The depth camera calibration methods of the embodiments above use the preset first-label camera of the panoramic depth camera system as the reference for obtaining the inter-camera relative poses. Building on those embodiments, this embodiment provides another depth camera calibration method that further speeds up online calibration. Fig. 4 is a flowchart of the depth camera calibration method provided by Embodiment 3 of the present invention. As shown in Fig. 4, the method includes:
S401: Control at least two depth cameras in the panoramic depth camera system to capture images synchronously during motion, each depth camera being assigned a corresponding label.
S402: Acquire the pose of at least one first-label camera at the capture of each frame.
S403: If a second-label camera's view historically overlaps with a first-label camera's, compute the relative pose between the two cameras at the same instant from the images corresponding to the historical view overlap and the first-label camera's per-frame poses.
S404: Change the second-label camera's label to the first label.
That is, after the relative pose is computed, the second-label camera whose view historically overlapped with the first-label camera is added to the reference set, enlarging the reference range.
S405: Repeat the operations of acquiring the per-frame poses of at least one first-label camera, computing the relative pose at the same instant between a second-label camera with a historical view overlap and the corresponding first-label camera, and changing labels (i.e., repeat S402 through S404) until the at least two depth cameras no longer include any second-label camera. When every depth camera in the system carries the first label, each former second-label camera has had its relative pose to the other cameras computed, and the calibration result is obtained.
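The S402–S404 loop amounts to growing the reference set until no second-label camera remains; a schematic sketch, where the overlap test is a stand-in callback and the camera names and overlap structure are invented:

```python
def calibrate_labels(cameras, find_overlap):
    """Iteratively promote second-label cameras to the first label.
    cameras: dict name -> 'first' or 'second'.
    find_overlap(cam, firsts): returns a first-label camera whose view
    historically overlaps cam's, or None (stand-in for the overlap test)."""
    while "second" in cameras.values():
        progressed = False
        firsts = [c for c, tag in cameras.items() if tag == "first"]
        for cam, tag in list(cameras.items()):
            if tag == "second" and find_overlap(cam, firsts) is not None:
                # The relative pose to the matched first-label camera would be
                # computed here; the camera then joins the reference set.
                cameras[cam] = "first"
                progressed = True
        if not progressed:
            break              # a real system would keep collecting frames
    return cameras

overlap_with = {"B": "A", "C": "B"}       # hypothetical overlap structure
def find_overlap(cam, firsts):
    target = overlap_with[cam]
    return target if target in firsts else None

result = calibrate_labels({"A": "first", "B": "second", "C": "second"}, find_overlap)
```

In this toy run, B is promoted first (it overlaps A), which then lets C be promoted through its overlap with B, mirroring how enlarging the reference set speeds up calibration.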
In this embodiment, after the relative pose between a second-label camera and a first-label camera is obtained through a historical view overlap, the second-label camera is itself used as a reference. This enlarges the reference set and increases the probability that the remaining second-label cameras will exhibit historical view overlaps, further speeding up calibration.
For the determination of keyframes, the determination of historical view overlaps, the computation of relative poses, and the optimization and updating of relative poses, refer to the descriptions of the foregoing embodiments, which are not repeated here.
Embodiment 4
Building on the embodiments above, this embodiment further optimizes the step of acquiring the pose of at least one first-label camera at the capture of each frame, in order to increase computation speed. Fig. 5 is a flowchart of acquiring the first-label camera pose provided by Embodiment 4 of the present invention. As shown in Fig. 5, the method includes:
S501: For each first-label camera, perform feature extraction on every frame it captures to obtain at least one feature point per frame.
Feature extraction finds pixels in a frame with distinctive characteristics (i.e., feature points) — for example, pixels at corners, textures, or edges. The ORB algorithm may be used to extract at least one feature point from each frame.
S502: Match feature points between two adjacent frames to obtain the feature-point correspondences between them.
Because the camera captures images at a high rate while moving, two adjacent frames from the same camera share part of their content, so a correspondence exists between their feature points. Sparse ORB feature registration or dense direct registration can be used to obtain the feature-point correspondences between the two adjacent frames.
Specifically, taking one feature point shared by two adjacent frames as an example: suppose the feature points X1 and X2 representing the same texture feature lie at different positions in the two frames, and let H(X1, X2) denote the Hamming distance between them. The two feature descriptors are XOR'ed and the number of 1 bits in the result is counted, giving the Hamming distance of this feature point between the two adjacent frames (i.e., the measure used to establish the correspondence).
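The XOR-and-popcount computation of H(X1, X2) can be sketched directly. Descriptors are modeled here as packed integers — a simplification of real 256-bit ORB descriptors — and the matching threshold is an assumed value:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed into ints:
    XOR them, then count the 1 bits in the result."""
    return bin(d1 ^ d2).count("1")

def match(desc_i, desc_j, max_dist=64):
    """Brute-force nearest-neighbour matching between two descriptor lists.
    Returns (index_i, index_j) pairs whose distance is below max_dist."""
    pairs = []
    for a, da in enumerate(desc_i):
        b, d = min(((b, hamming(da, db)) for b, db in enumerate(desc_j)),
                   key=lambda x: x[1])
        if d < max_dist:
            pairs.append((a, b))
    return pairs
```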
S503: Remove the outliers from the feature-point correspondences, compute the nonlinear terms of J(ξ)^T J(ξ) from a linear component containing the second-order statistics of the remaining feature points and a nonlinear component containing the camera pose, and iterate δ = -(J(ξ)^T J(ξ))^(-1) J(ξ)^T r(ξ) multiple times to solve for the pose at which the reprojection error r(ξ) falls below a preset threshold. The Gauss-Newton method can be used for the iterative computation; preferably, the pose that minimizes the reprojection error is computed.
Here r(ξ) is the vector containing all reprojection errors; J(ξ) is the Jacobian matrix of r(ξ); ξ is the Lie-algebra representation of the camera pose; δ is the increment computed at each iteration; R_i and R_j are the camera rotation matrices at the capture of the i-th and j-th frames; P_i^k and P_j^k are the k-th feature points of the i-th and j-th frames; C_{i,j} is the set of feature-point correspondences between the i-th and j-th frames; ||C_{i,j}|| denotes the cardinality of C_{i,j}, i.e., the number of feature-point correspondences between the two frames, and ||C_{i,j}||^(-1) its reciprocal; and [·]_× denotes the vector (cross) product.
Further, the expression of the nonlinear term, Eq. (1), combines a linear component W with the nonlinear components r_il^T and r_jl, where r_il^T is the l-th row of the rotation matrix R_i, r_jl is the transpose of the l-th row of the rotation matrix R_j, and l = 0, 1, 2 (this embodiment counts from 0 in the programming convention, so l = 0 refers to what is usually called the first row of the matrix, and so on).
Specifically, the feature-point correspondences between two adjacent frames obtained in S502 contain unqualified outliers: each of the two adjacent frames necessarily contains feature points absent from the other, and running the matching of S502 on such points produces abnormal correspondences. Preferably, the RANSAC algorithm is used to remove these outliers. The remaining correspondences are denoted C_{i,j} = {(P_i^k, P_j^k)}, where (P_i^k, P_j^k) is the k-th feature-point correspondence between the i-th and j-th frames, and j = i - 1.
Computing the camera pose amounts to solving the nonlinear least-squares problem between two frames whose cost function is:

E(T_i, T_j) = Σ_k || T_i · P̄_i^k − T_j · P̄_j^k ||²   (2)

where E is the reprojection error in Euclidean space of the i-th frame relative to the j-th frame (in this embodiment, the previous frame); T_i is the camera pose at the capture of the i-th frame (per the earlier explanation of camera pose, this is actually the pose change of the i-th frame relative to the previous frame), and T_j is the pose at the capture of the j-th frame; N is the total number of frames captured; and P̄_i^k and P̄_j^k are the homogeneous coordinates of the k-th feature points of the i-th and j-th frames. Note that for the same i and k, P_i^k and P̄_i^k denote the same point; the difference is that P_i^k is in local coordinates while P̄_i^k is in homogeneous coordinates.
To speed up computation, this embodiment does not evaluate the cost function of Eq. (2) directly. Instead, the nonlinear terms of J(ξ)^T J(ξ) are computed from the linear component containing the second-order statistics of the remaining feature-point correspondences and the nonlinear component containing the camera pose, and δ = -(J(ξ)^T J(ξ))^(-1) J(ξ)^T r(ξ) is iterated multiple times to solve for the pose at which the reprojection error falls below the preset threshold. As the expression of the nonlinear term shows, the linear part that is fixed between the two frames is treated as a single whole W during the nonlinear-term computation, so the cost no longer scales with the number of feature-point correspondences. This reduces the complexity of the camera pose computation and improves its real-time performance.
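A generic Gauss-Newton loop of the form δ = -(J^T J)^(-1) J^T r is easy to sketch. The 2-D alignment problem below is a toy stand-in for the pose estimation: its data, parameterization (rotation angle plus translation), and tolerances are invented for illustration:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-10):
    """Generic Gauss-Newton: x <- x + delta with
    delta = -(J^T J)^{-1} J^T r, iterated until ||r||^2 is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        if r @ r < tol:
            break
        J = jacobian(x)
        delta = -np.linalg.solve(J.T @ J, J.T @ r)
        x = x + delta
    return x

# Toy problem: recover a 2-D rotation angle and translation (theta, tx, ty)
# aligning two point sets (stand-in for the frame-to-frame pose).
src = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.5]])
theta_true, t_true = 0.3, np.array([0.5, -0.2])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
dst = src @ R_true.T + t_true

def residual(p):
    th, tx, ty = p
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return (src @ R.T + [tx, ty] - dst).ravel()

def jacobian(p):
    th = p[0]
    dR = np.array([[-np.sin(th), -np.cos(th)], [np.cos(th), -np.sin(th)]])
    J = np.zeros((2 * len(src), 3))
    J[:, 0] = (src @ dR.T).ravel()   # derivative w.r.t. the rotation angle
    J[0::2, 1] = 1.0                 # derivative of x-residuals w.r.t. tx
    J[1::2, 2] = 1.0                 # derivative of y-residuals w.r.t. ty
    return J

p = gauss_newton(residual, jacobian, [0.0, 0.0, 0.0])
```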
The derivation of Eq. (1) is explained below, and the principle by which it reduces algorithmic complexity is analyzed alongside the derivation.
In Euclidean space, the camera pose at the capture of the i-th frame is T_i = [R_i | t_i]. In fact, T_i is the pose transformation matrix of the camera at the capture of the i-th frame relative to the capture of the j-th frame (in this embodiment, the previous frame), comprising the rotation matrix R_i and the translation t_i. The rigid transformation T_i in Euclidean space is represented by the Lie algebra ξ_i on SE(3); that is, ξ_i also represents the camera pose at the capture of the i-th frame, and T(ξ_i) maps the Lie algebra ξ_i to T_i in Euclidean space.
For each feature-point correspondence (P_i^k, P_j^k), the reprojection error is:

r^k(ξ) = T(ξ_i) · P̄_i^k − T(ξ_j) · P̄_j^k   (3)
The reprojection error in Euclidean space can be expressed as E(ξ) = ||r(ξ)||, where r(ξ) is the vector formed by stacking all of the per-correspondence reprojection errors.
T(ξ_i) · P̄_i^k can be expressed component-wise (omitting ξ_i below for brevity) as:

[T(ξ_i) · P̄_i^k]_l = r_il^T · P_i^k + t_il,   l = 0, 1, 2   (5)

where r_il^T denotes the l-th row of the rotation matrix R_i, and t_il denotes the l-th element of the translation vector t_i.
Here J_{i,j}^m denotes the Jacobian matrix corresponding to the m-th feature-point correspondence between the i-th and j-th frames.
This product, in which (·)^T denotes the matrix transpose, forms a 6×6 square matrix; its expression is given by Eq. (7).
Here I_{3×3} denotes the 3×3 identity matrix. According to Eqs. (6) and (7), the product contains four non-zero 6×6 sub-matrices; one of them is derived below as an example, and the other three are computed analogously and are not detailed here.
Combining this with Eq. (5) then yields Eq. (8).
Denoting the fixed structure-dependent factor by W and combining Eq. (5), the nonlinear term in Eq. (10) can be simplified to Eq. (1): the structure terms inside the nonlinear term are linearized into W. Although the block is nonlinear in the structure terms, the analysis above shows that all of its non-zero elements are linear in the second-order statistics of the structure terms in C_{i,j}; that is, the sparse matrix is element-wise linear in the second-order statistics of the structure terms.
Note that the Jacobian matrix of each correspondence is determined by the geometric terms ξ_i, ξ_j and the structure terms P_i^k, P_j^k. For all correspondences within the same frame pair C_{i,j}, the corresponding Jacobians share the same geometric terms but have different structure terms. For a frame pair C_{i,j}, existing algorithms make this computation depend on the number of feature-point correspondences in C_{i,j}, whereas this embodiment computes it efficiently at fixed cost: only the second-order statistic W of the structure terms needs to be computed, rather than having every correspondence contribute its own structure terms to the computation. In other words, the four non-zero sub-matrices can be computed with complexity O(1) instead of O(||C_{i,j}||).
Therefore, the sparse matrices J^T J and J^T r needed in the iterative step of the nonlinear Gauss-Newton optimization δ = -(J(ξ)^T J(ξ))^(-1) J(ξ)^T r(ξ) can be computed efficiently with complexity O(M) instead of the original O(N_coor), where N_coor is the total number of feature-point correspondences over all frame pairs and M is the number of frame pairs. Typically, N_coor is about 300 per pair in sparse matching and about 10000 in dense matching, far larger than the number of frame pairs M.
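The complexity argument can be illustrated numerically: accumulating a second-order statistic of the structure terms once per frame pair gives the same quantity as touching every correspondence individually. The 3-D points below are random placeholders, and the outer-product statistic is a simplified stand-in for the W of the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correspondences of one frame pair: structure points P (frame i) and Q (frame j).
P = rng.normal(size=(300, 3))
Q = rng.normal(size=(300, 3))

# Naive route: touch every correspondence when assembling the normal equations
# (cost grows with the number of correspondences, O(||C_ij||)).
S_naive = np.zeros((3, 3))
for p, q in zip(P, Q):
    S_naive += np.outer(p, q)

# Accelerated route: precompute the second-order statistic W = sum_k p_k q_k^T
# once per frame pair; later evaluations reuse W at O(1) cost per pair.
W = P.T @ Q
```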
With the above derivation, the camera pose computation proceeds, for each frame pair, by computing W and then evaluating Eqs. (1), (10), (9), (8) and (6); iterative computation then yields the ξ that minimizes r(ξ).
Optionally, to obtain a more accurate first-label camera pose, the acquired poses can be optimized and updated in a globally consistent way after the per-frame poses of at least one first-label camera are acquired. Specifically: if the current frame captured by a first-label camera is a keyframe, loop-closure detection is performed from the current keyframe and that camera's historical keyframes; if the loop closure succeeds, the acquired first-label camera poses are optimized and updated in a globally consistent way from the current keyframe.
That is, for each frame captured by the first-label camera, the method decides whether the frame is a keyframe; if it is, loop-closure detection is performed, otherwise the method waits for the next keyframe. A frame can be judged a keyframe by matching its feature points against the camera's previous keyframe to obtain the transformation matrix between the two frames; if the transformation is greater than or equal to a preset threshold, the current frame is taken as a keyframe for that camera.
Globally consistent optimization means that during calibration, as the camera moves, when a depth camera returns to a previously visited place or to a view that overlaps substantially with a historical view, the current frame is consistent with the previously captured images, rather than exhibiting misalignment or aliasing. Loop-closure detection judges, from the depth camera's current observation, whether the camera has moved back to a previously visited place or to a place with substantial overlap with a historical view; if the loop closure succeeds, the first-label camera poses are optimized and updated in a globally consistent way from the current keyframe, reducing accumulated error.
Specifically, loop-closure detection between the current keyframe and the historical keyframes can be performed by matching their ORB feature points; a high matching score indicates a successful loop closure. Preferably, one or more historical keyframes with high matching scores are selected, according to their degree of match with the current keyframe, for the globally consistent optimization update of the camera pose.
Preferably, the globally consistent optimization update of the camera pose solves, from the correspondences between the current keyframe and the one or more well-matched historical keyframes, the problem of minimizing the transformation error between the current keyframe and all well-matched historical keyframes, with cost function E(T_1, T_2, …, T_{N-1} | T_i ∈ SE(3), i ∈ [1, N-1]). Here E(T_1, …, T_{N-1}) is the transformation error over all frame pairs (each pair consisting of one matched historical keyframe and the current keyframe); N is the number of historical keyframes with a high degree of match to the current keyframe; and E_{i,j} is the transformation error — i.e., the reprojection error — between the i-th and j-th frames.
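A 1-D toy pose graph shows the kind of minimization involved: odometry edges plus one loop-closure edge, solved as a linear least-squares problem. The edge values are invented, and real SE(3) pose graphs require nonlinear optimization rather than this linear stand-in:

```python
import numpy as np

# 1-D pose-graph toy: poses x_0..x_3 with odometry edges (i, j, measured offset)
# and one loop-closure edge (3, 0, -3.0). Minimizing the summed edge errors is
# the linear analogue of minimizing E over all frame pairs.
edges = [(0, 1, 1.1), (1, 2, 0.9), (2, 3, 1.05), (3, 0, -3.0)]
A = np.zeros((len(edges) + 1, 4))
b = np.zeros(len(edges) + 1)
for row, (i, j, z) in enumerate(edges):
    A[row, j], A[row, i], b[row] = 1.0, -1.0, z   # residual: (x_j - x_i) - z
A[-1, 0] = 1.0                                    # gauge constraint: fix x_0 = 0
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The loop-closure edge contradicts the accumulated odometry by 0.05, and the least-squares solution spreads that error evenly over the cycle, which is exactly the accumulated-error reduction the loop closure provides.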
Specifically, during the optimization update of camera poses, the relative poses between non-keyframes and their corresponding keyframes must be kept fixed. The optimization update itself can use the existing BA algorithm, or the method of S503 to increase optimization speed; the details are not repeated here. Likewise, the algorithm of this embodiment (i.e., the method of S503) can also be used for computing and optimizing the inter-camera relative poses.
This embodiment iteratively computes the camera pose from a linear component containing the second-order statistics of the feature points and a nonlinear component containing the camera pose. When evaluating the nonlinear term, the fixed linear part is treated as a single whole W, which reduces the complexity of the camera pose computation, improves its real-time performance, and keeps hardware requirements low. Applying this algorithm to pose solving and back-end optimization yields fast and globally consistent calibration parameters.
It should be noted that embodiments of the present invention can implement pose estimation and optimization following the pipeline and principles of SLAM: pose estimation is performed by a front-end visual odometry thread, and pose optimization by back-end loop-closure detection and optimization algorithms — for example, the existing bundle adjustment (BA) algorithm or the algorithm of this embodiment.
During SLAM, the following operations are performed on the captured images: pose estimation and optimization for the first-label cameras, computation of inter-camera relative poses from view overlaps, and optimization of the computed relative poses. These operations can run concurrently. Through globally consistent SLAM, each camera's pose is optimized and the computed inter-camera relative poses are continually updated, while a local map and a globally consistent global map can be maintained, suiting the application background of ordinary RGB-D cameras in indoor robot navigation and 3-D scene reconstruction. The map in SLAM refers to the camera's trajectory in the world coordinate system and the positions, in that coordinate system, of the keyframes observed along the trajectory. If the camera system undergoes rigid deformation due to a physical impact, this embodiment only needs to restart the calibration procedure for a fast recalibration, with no need to deploy calibration objects again.
Embodiment 5
This embodiment provides a depth camera calibration apparatus that can execute the depth camera calibration method of any embodiment of the present invention, with functional modules and benefits corresponding to the method. The apparatus can be implemented in hardware and/or software, for example on a CPU. For technical details not exhaustively described in this embodiment, refer to the depth camera calibration method provided by any embodiment of the present invention. Control signals and images must be transmitted between the calibration apparatus and the panoramic depth camera system; many communication schemes between the two are possible — wired, such as a serial port or network cable, or wireless, such as Bluetooth or wireless broadband. As shown in Fig. 6, the apparatus includes a camera control module 61, a pose acquisition module 62 and a relative pose calculation module 63.
The camera control module 61 is configured to control at least two depth cameras in the panoramic depth camera system to capture images synchronously during motion, where each depth camera is assigned a corresponding tag.
The pose acquisition module 62 is configured to acquire the pose of at least one first-tag camera at the capture time of each frame.
The relative pose calculation module 63 is configured to, when a second-tag camera's view overlaps a first-tag camera's historical view, compute the relative pose of the second-tag camera and the first-tag camera at the same moment, from the images corresponding to the historical view overlap and the first-tag camera's per-frame poses.
Optionally, the device may further include:
a tag modification module, configured to change the second-tag camera's tag to the first tag after the relative pose calculation module 63 computes the relative pose of the second-tag camera and the first-tag camera at the same moment;
an operation execution module, configured to repeat the operations of acquiring the per-frame poses of at least one first-tag camera, computing the relative pose at the same moment between a second-tag camera with historical view overlap and the corresponding first-tag camera, and modifying tags (i.e., re-running the operations of the pose acquisition module 62, the relative pose calculation module 63, and the tag modification module) until the at least two depth cameras of the panoramic depth camera system contain no second-tag camera.
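The tag-propagation loop described by these modules can be sketched as follows. The callables, names, and tag values are assumptions standing in for the overlap test and relative-pose computation of modules 62/63; the loop terminates exactly when no second-tag camera remains:

```python
FIRST_TAG = 1  # hypothetical tag value for already-calibrated cameras

def calibrate(cameras, find_overlap, compute_relative_pose):
    """cameras: dict name -> tag. find_overlap(name) returns an overlapping
    first-tag camera (or None); compute_relative_pose stands in for the
    relative pose calculation module 63."""
    extrinsics = {}
    while any(tag != FIRST_TAG for tag in cameras.values()):
        progressed = False
        for name, tag in list(cameras.items()):
            if tag == FIRST_TAG:
                continue
            anchor = find_overlap(name)
            if anchor is not None:
                extrinsics[name] = compute_relative_pose(name, anchor)
                cameras[name] = FIRST_TAG   # tag modification module
                progressed = True
        if not progressed:
            raise RuntimeError("no view overlap found; keep moving the rig")
    return extrinsics
```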
Optionally, the device may further include a keyframe determination module configured to, while the pose acquisition module 62 acquires the per-frame poses of at least one first-tag camera, match feature points between each depth camera's current frame and that camera's previous keyframe to obtain the transformation matrix between the two frames; if the transformation matrix is greater than or equal to a preset conversion threshold, the current frame is determined to be a keyframe of the corresponding depth camera and is stored, specifically together with the depth camera it belongs to.
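The comparison of a transformation matrix against a threshold can be made concrete by measuring the motion it encodes. The specific criterion below (translation distance or rotation angle extracted from a 4x4 relative transform) is an assumption; the patent states only that the transform is compared against a preset conversion threshold:

```python
import numpy as np

def is_new_keyframe(T_rel, trans_thresh=0.2, rot_thresh=np.deg2rad(10)):
    """Decide keyframe status from the 4x4 relative transform between the
    current frame and the camera's last keyframe."""
    t = T_rel[:3, 3]
    # rotation angle recovered from the trace of the rotation block
    cos_a = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_a)
    return np.linalg.norm(t) >= trans_thresh or angle >= rot_thresh
```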
Optionally, the device may further include a view overlap determination module configured to, before the relative pose of the second-tag camera and the first-tag camera at the same moment is computed, match feature points between the current frame captured by the second-tag camera and the historical keyframes of the at least one first-tag camera; if some historical keyframe and the current frame reach the matching threshold, it is determined that the second-tag camera's view overlaps the corresponding first-tag camera's historical view.
Optionally, the relative pose calculation module 63 includes:
a relative position relationship calculation unit, configured to remove outliers from the feature point correspondences between the current frame captured by the second-tag camera and the corresponding historical keyframe, and to compute the relative position relationship between the current frame and that keyframe from the remaining correspondences;
a transformation relationship calculation unit, configured to compute, from the above relative position relationship, the transformation between the second-tag camera's pose when capturing the current frame and the first-tag camera's pose when capturing the corresponding historical keyframe;
a relative pose calculation unit, configured to compute the relative pose of the second-tag camera and the first-tag camera at the current frame time, from the above transformation and the first-tag camera's poses from the capture of the corresponding historical keyframe up to the capture of the current frame.
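The chaining performed by these three units can be sketched as a composition of homogeneous transforms. The symbol conventions are assumptions for illustration: camera 1 is a first-tag camera with known world poses at the keyframe time and now, and the match-derived transform maps the second-tag camera's current frame into the keyframe's coordinates:

```python
import numpy as np

def relative_pose_now(T_w_c1_key, T_w_c1_now, T_c1key_c2now):
    """T_w_c1_key / T_w_c1_now: first-tag camera's 4x4 world poses at the
    historical keyframe and at the current frame. T_c1key_c2now: transform
    of the second-tag camera's current frame expressed in the keyframe's
    coordinates (from feature matching). Returns the extrinsic of camera 2
    relative to camera 1 at the current frame time."""
    T_w_c2_now = T_w_c1_key @ T_c1key_c2now            # world pose of camera 2 now
    return np.linalg.inv(T_w_c1_now) @ T_w_c2_now      # pose of camera 2 in camera 1's frame
```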
Optionally, the pose acquisition module 62 includes:
a feature extraction unit, configured to, for each first-tag camera, extract features from each frame it captures, obtaining at least one feature point per frame;
a feature matching unit, configured to match feature points between two adjacent frames, obtaining the feature point correspondences between them;
an iterative calculation unit, configured to remove outliers from the feature point correspondences and to perform multiple iterations of the update $\delta = -\big(J(\xi)^T J(\xi)\big)^{-1} J(\xi)^T r(\xi)$, evaluating the nonlinear term of $J(\xi)^T J(\xi)$ from a linear component containing the second-order statistics of the remaining feature points and a nonlinear component containing the camera pose, so as to solve for the pose at which the reprojection error falls below a preset threshold;

where $r(\xi)$ is the vector containing all reprojection errors, $J(\xi)$ is the Jacobian matrix of $r(\xi)$, $\xi$ is the Lie-algebra representation of the camera pose, and $\delta$ is the increment applied at each iteration; $R_i$ and $R_j$ are the camera's rotation matrices when the $i$-th and $j$-th frames are captured; the formulas also reference the $k$-th feature point on the $i$-th frame and the $k$-th feature point on the $j$-th frame; $C_{i,j}$ is the set of feature point correspondences between the $i$-th and $j$-th frames, $\|C_{i,j}\|$ is the norm of $C_{i,j}$, i.e., the number of such correspondences, and $\|C_{i,j}\|^{-1}$ its reciprocal; $[\,\cdot\,]_\times$ denotes the cross product (skew-symmetric matrix).
Further, the nonlinear term of $J(\xi)^T J(\xi)$ is expressed as a combination of a linear component and of nonlinear components $r_{il}^T$ and $r_{jl}$, where $r_{il}^T$ is the $l$-th row of the rotation matrix $R_i$, $r_{jl}$ is the transpose of the $l$-th row of the rotation matrix $R_j$, and $l = 0, 1, 2$.
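The iterative unit's update rule is ordinary Gauss–Newton. As a sketch, the update $\delta = -(J^T J)^{-1} J^T r$ is applied below to a toy 2D rigid-alignment problem rather than the patent's reprojection-error objective with second-order feature statistics; the residual definition, parameterization, and tolerances are assumptions:

```python
import numpy as np

def gauss_newton_align(src, dst, iters=20):
    """Estimate a 2D rigid transform (theta, tx, ty) mapping src -> dst by
    Gauss-Newton on the stacked residual vector r."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        pred = src @ R.T + np.array([tx, ty])
        r = (pred - dst).ravel()                    # residual vector r(xi)
        J = np.zeros((2 * len(src), 3))             # Jacobian of r w.r.t. (theta, tx, ty)
        dR = np.array([[-s, -c], [c, -s]])          # dR/dtheta
        J[:, 0] = (src @ dR.T).ravel()
        J[0::2, 1] = 1.0                            # d(pred_x)/d(tx)
        J[1::2, 2] = 1.0                            # d(pred_y)/d(ty)
        delta = -np.linalg.solve(J.T @ J, J.T @ r)  # delta = -(J^T J)^-1 J^T r
        theta, tx, ty = theta + delta[0], tx + delta[1], ty + delta[2]
        if np.linalg.norm(delta) < 1e-10:           # converged: error below tolerance
            break
    return theta, tx, ty
```

The patent's contribution lies in how the $J^T J$ product is assembled (reusing second-order feature statistics across iterations), not in the update rule itself, which is the standard normal-equation step shown here.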
Optionally, the device may further include:
a loop closure detection module, configured to, after the per-frame poses of at least one first-tag camera have been acquired and if the current frame captured by the first-tag camera is a keyframe, perform loop closure detection on the current keyframe against that first-tag camera's historical keyframes;
an optimization update module, configured to, when the loop closes successfully, perform a globally consistent optimization update of the first-tag camera poses already acquired, based on the current keyframe.
Optionally, the loop closure detection module is further configured to, after the relative pose calculation module 63 computes the relative pose of the second-tag camera and the first-tag camera at the same moment, perform loop closure detection when the frames synchronously captured by the depth cameras whose relative poses have been computed include a keyframe, against the historical keyframes of those depth cameras; the optimization update module is further configured to, when the loop closes successfully, optimize and update the relative poses between the corresponding depth cameras based on that keyframe and the corresponding historical keyframes.
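A loop closure candidate check can be sketched as descriptor matching against historical keyframes with a ratio test. This is a hypothetical simplification: the descriptor type, the ratio threshold, and the match count are assumptions, and real systems typically add geometric verification before accepting a loop:

```python
import numpy as np

def detect_loop(curr_desc, history, ratio=0.8, min_matches=30):
    """Return the index of the first historical keyframe whose float
    descriptors match the current keyframe's descriptors well enough,
    or None if no loop candidate is found."""
    for idx, past_desc in enumerate(history):
        # pairwise distances between the two descriptor sets
        d = np.linalg.norm(curr_desc[:, None, :] - past_desc[None, :, :], axis=2)
        nearest = np.sort(d, axis=1)
        # Lowe-style ratio test: best match clearly better than second best
        good = np.sum(nearest[:, 0] < ratio * nearest[:, 1])
        if good >= min_matches:
            return idx
    return None
```

On success, the current keyframe and the matched historical keyframe would feed the optimization update module's globally consistent correction.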
It should be noted that, in the above embodiment of the depth camera calibration device, the units and modules included are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
Embodiment 6
This embodiment provides an electronic device, including: one or more processors, a memory, and a panoramic depth camera system. The memory is used to store one or more programs. The panoramic depth camera system includes at least two depth cameras that together cover a panoramic field of view and are used to capture images. When the one or more programs are executed by the one or more processors, the one or more processors implement the depth camera calibration method of any embodiment of the present invention.
FIG. 7 is a schematic structural diagram of the electronic device provided by Embodiment 6 of the present invention, showing a block diagram of an exemplary electronic device suitable for implementing embodiments of the present invention. The electronic device 712 shown in FIG. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 7, the electronic device 712 takes the form of a general-purpose computing device. Its components may include, but are not limited to: one or more processors (processing unit 716), a system memory 728, and a bus 718 connecting the different system components (including the system memory 728 and the processing unit 716).
The bus 718 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 712 typically includes a variety of computer-system-readable media. These may be any available media accessible by the electronic device 712, including volatile and non-volatile media and removable and non-removable media.
The system memory 728 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 730 and/or a cache memory 732. The electronic device 712 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 734 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 7, commonly called a "hard disk drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g., a CD-ROM, DVD-ROM, or other optical media) may also be provided. In these cases, each drive may be connected to the bus 718 through one or more data media interfaces. The system memory 728 may include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 740 having a set (at least one) of program modules 742 may be stored, for example, in the system memory 728. Such program modules 742 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 742 generally perform the functions and/or methods of the described embodiments of the present invention.
The electronic device 712 may also communicate with one or more external devices 714 (e.g., a keyboard, a pointing device, a display 724), with one or more devices that enable a user to interact with the electronic device 712, and/or with any device (e.g., a network card or modem) that enables the electronic device 712 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 722. The electronic device 712 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 720. As shown, the network adapter 720 communicates with the other modules of the electronic device 712 via the bus 718. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 712, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 716 executes various functional applications and data processing by running programs stored in the system memory 728, for example implementing the depth camera calibration method provided by the embodiments of the present invention.
The electronic device 712 may further include a panoramic depth camera system 750 containing at least two depth cameras that together cover a panoramic field of view and are used to capture images. The panoramic depth camera system 750 is connected to the processing unit 716 and the system memory 728, and the depth cameras it contains can capture images under the control of the processing unit 716. In particular, the panoramic depth camera system 750 may be embedded in the electronic device.
Optionally, the one or more processors are central processing units, and the electronic device is a portable mobile electronic device, such as a mobile robot, a drone, a three-dimensional visual interaction device (e.g., VR glasses or a wearable helmet), or a smart terminal (e.g., a mobile phone or tablet).
Embodiment 7
This embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the depth camera calibration method of any embodiment of the present invention.
The computer storage medium of the embodiments of the present invention may use any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The serial numbers of the above embodiments are for description only and do not indicate the relative merits of the embodiments.
Those of ordinary skill in the art should understand that the modules or operations of the above embodiments of the present invention may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; or they may be made into individual integrated circuit modules, or multiple of the modules or operations may be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar between embodiments, reference may be made from one to another.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include many other equivalent embodiments without departing from its concept; its scope is determined by the appended claims.
Claims (13)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810179738.7A CN108447097B (en) | 2018-03-05 | 2018-03-05 | Depth camera calibration method and device, electronic equipment and storage medium |
PCT/CN2019/085515 WO2019170166A1 (en) | 2018-03-05 | 2019-05-05 | Depth camera calibration method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810179738.7A CN108447097B (en) | 2018-03-05 | 2018-03-05 | Depth camera calibration method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108447097A true CN108447097A (en) | 2018-08-24 |
CN108447097B CN108447097B (en) | 2021-04-27 |
Family
ID=63193477
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810179738.7A Active CN108447097B (en) | 2018-03-05 | 2018-03-05 | Depth camera calibration method and device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108447097B (en) |
WO (1) | WO2019170166A1 (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109218562A (en) * | 2018-09-07 | 2019-01-15 | 百度在线网络技术(北京)有限公司 | Clock synchronizing method, device, equipment, storage medium and vehicle |
CN109242913A (en) * | 2018-09-07 | 2019-01-18 | 百度在线网络技术(北京)有限公司 | Scaling method, device, equipment and the medium of collector relative parameter |
CN109360243A (en) * | 2018-09-28 | 2019-02-19 | 上海爱观视觉科技有限公司 | A kind of scaling method of the movable vision system of multiple degrees of freedom |
CN109584302A (en) * | 2018-11-27 | 2019-04-05 | 北京旷视科技有限公司 | Camera pose optimization method, device, electronic equipment and computer-readable medium |
CN109946680A (en) * | 2019-02-28 | 2019-06-28 | 北京旷视科技有限公司 | External parameter calibration method, device, storage medium and calibration system of detection system |
CN110132306A (en) * | 2019-05-20 | 2019-08-16 | 广州小鹏汽车科技有限公司 | The correcting method and system of vehicle location error |
CN110166714A (en) * | 2019-04-11 | 2019-08-23 | 深圳市朗驰欣创科技股份有限公司 | Dual-light fusion adjustment method, dual-light fusion adjustment device, and dual-light fusion equipment |
WO2019170166A1 (en) * | 2018-03-05 | 2019-09-12 | 清华-伯克利深圳学院筹备办公室 | Depth camera calibration method and apparatus, electronic device, and storage medium |
CN110232715A (en) * | 2019-05-08 | 2019-09-13 | 深圳奥比中光科技有限公司 | A kind of self-alignment method, apparatus of more depth cameras and system |
CN110349249A (en) * | 2019-06-26 | 2019-10-18 | 华中科技大学 | Real-time dense method for reconstructing and system based on RGB-D data |
CN110363821A (en) * | 2019-07-12 | 2019-10-22 | 顺丰科技有限公司 | Acquisition methods, device, camera and the storage medium at monocular camera installation deviation angle |
CN110415286A (en) * | 2019-09-24 | 2019-11-05 | 杭州蓝芯科技有限公司 | A kind of outer ginseng scaling method of more flight time depth camera systems |
CN110794828A (en) * | 2019-10-08 | 2020-02-14 | 福瑞泰克智能系统有限公司 | A landmark localization method fused with semantic information |
CN110866953A (en) * | 2019-10-31 | 2020-03-06 | Oppo广东移动通信有限公司 | Map construction method and device, and positioning method and device |
CN111563840A (en) * | 2019-01-28 | 2020-08-21 | 北京初速度科技有限公司 | Segmentation model training method and device, pose detection method and vehicle-mounted terminal |
CN112115980A (en) * | 2020-08-25 | 2020-12-22 | 西北工业大学 | Design method of binocular visual odometry based on optical flow tracking and point-line feature matching |
CN112802112A (en) * | 2021-04-12 | 2021-05-14 | 北京三快在线科技有限公司 | Visual positioning method, device, server and storage medium |
CN113256731A (en) * | 2021-04-01 | 2021-08-13 | 深圳市宁创云软件技术有限公司 | Target detection method and device based on monocular vision |
CN113269876A (en) * | 2021-05-10 | 2021-08-17 | Oppo广东移动通信有限公司 | Map point coordinate optimization method and device, electronic equipment and storage medium |
CN113330487A (en) * | 2019-12-30 | 2021-08-31 | 华为技术有限公司 | Parameter calibration method and device |
CN113781548A (en) * | 2020-06-10 | 2021-12-10 | 华为技术有限公司 | Multi-device pose measurement method, electronic device and system |
CN113870358A (en) * | 2021-09-17 | 2021-12-31 | 聚好看科技股份有限公司 | Method and equipment for joint calibration of multiple 3D cameras |
US12073071B2 (en) | 2020-07-29 | 2024-08-27 | Huawei Technologies Co., Ltd. | Cross-device object drag method and device |
US12197693B2 (en) | 2020-08-26 | 2025-01-14 | Huawei Technologies Co., Ltd. | Method and device for displaying a projection interface |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114663528A (en) * | 2019-10-09 | 2022-06-24 | 阿波罗智能技术(北京)有限公司 | Multi-phase external parameter combined calibration method, device, equipment and medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140368620A1 (en) * | 2013-06-17 | 2014-12-18 | Zhiwei Li | User interface for three-dimensional modeling |
US20160163054A1 (en) * | 2011-01-31 | 2016-06-09 | Microsoft Technology Licensing, Llc | Reducing interference between multiple infra-red depth cameras |
CN105844624A (en) * | 2016-03-18 | 2016-08-10 | 上海欧菲智能车联科技有限公司 | Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system |
CN106105192A (en) * | 2014-01-03 | 2016-11-09 | 英特尔公司 | Rebuild by the real-time 3D of depth camera |
CN106157304A (en) * | 2016-07-01 | 2016-11-23 | 成都通甲优博科技有限责任公司 | A kind of Panoramagram montage method based on multiple cameras and system |
CN106204443A (en) * | 2016-07-01 | 2016-12-07 | 成都通甲优博科技有限责任公司 | A kind of panorama UAS based on the multiplexing of many mesh |
WO2017117517A1 (en) * | 2015-12-30 | 2017-07-06 | The Johns Hopkins University | System and method for medical imaging |
CN107025668A (en) * | 2017-03-30 | 2017-08-08 | 华南理工大学 | A kind of design method of the visual odometry based on depth camera |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11019330B2 (en) * | 2015-01-19 | 2021-05-25 | Aquifi, Inc. | Multiple camera system with auto recalibration |
CN106097300B (en) * | 2016-05-27 | 2017-10-20 | 西安交通大学 | A kind of polyphaser scaling method based on high-precision motion platform |
CN108447097B (en) * | 2018-03-05 | 2021-04-27 | 清华-伯克利深圳学院筹备办公室 | Depth camera calibration method and device, electronic equipment and storage medium |
- 2018-03-05: CN application CN201810179738.7A filed; patent CN108447097B (active)
- 2019-05-05: WO application PCT/CN2019/085515 filed; publication WO2019170166A1 (application filing)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160163054A1 (en) * | 2011-01-31 | 2016-06-09 | Microsoft Technology Licensing, Llc | Reducing interference between multiple infra-red depth cameras |
US20140368620A1 (en) * | 2013-06-17 | 2014-12-18 | Zhiwei Li | User interface for three-dimensional modeling |
CN106105192A (en) * | 2014-01-03 | 2016-11-09 | 英特尔公司 | Rebuild by the real-time 3D of depth camera |
WO2017117517A1 (en) * | 2015-12-30 | 2017-07-06 | The Johns Hopkins University | System and method for medical imaging |
CN105844624A (en) * | 2016-03-18 | 2016-08-10 | 上海欧菲智能车联科技有限公司 | Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system |
CN106157304A (en) * | 2016-07-01 | 2016-11-23 | 成都通甲优博科技有限责任公司 | A kind of Panoramagram montage method based on multiple cameras and system |
CN106204443A (en) * | 2016-07-01 | 2016-12-07 | 成都通甲优博科技有限责任公司 | A kind of panorama UAS based on the multiplexing of many mesh |
CN107025668A (en) * | 2017-03-30 | 2017-08-08 | 华南理工大学 | A kind of design method of the visual odometry based on depth camera |
Non-Patent Citations (2)
Title |
---|
JIANFENG WANG et al.: "3D scene reconstruction by multiple structured-light based commodity depth cameras", 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
CHEN Yingbo: "3D reconstruction technology combining Kinect point cloud data and sequential images", China Master's Theses Full-text Database (Information Science and Technology) |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019170166A1 (en) * | 2018-03-05 | 2019-09-12 | Tsinghua-Berkeley Shenzhen Institute Preparatory Office | Depth camera calibration method and apparatus, electronic device, and storage medium |
CN109218562A (en) * | 2018-09-07 | 2019-01-15 | 百度在线网络技术(北京)有限公司 | Clock synchronizing method, device, equipment, storage medium and vehicle |
CN109242913A (en) * | 2018-09-07 | 2019-01-18 | 百度在线网络技术(北京)有限公司 | Scaling method, device, equipment and the medium of collector relative parameter |
US10984556B2 (en) | 2018-09-07 | 2021-04-20 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for calibrating relative parameters of collector, device and storage medium |
CN109218562B (en) * | 2018-09-07 | 2021-04-27 | 百度在线网络技术(北京)有限公司 | Clock synchronization method, device, equipment, storage medium and vehicle |
CN109242913B (en) * | 2018-09-07 | 2020-11-10 | 百度在线网络技术(北京)有限公司 | Method, device, equipment and medium for calibrating relative parameters of collector |
US11363192B2 (en) | 2018-09-07 | 2022-06-14 | Apollo Intelligent Driving Technology (Beijing) Co., Ltd. | Method, and apparatus for clock synchronization, device, storage medium and vehicle |
CN109360243A (en) * | 2018-09-28 | 2019-02-19 | 上海爱观视觉科技有限公司 | Calibration method for a multi-degree-of-freedom movable vision system |
US11847797B2 (en) | 2018-09-28 | 2023-12-19 | Anhui Eyevolution Technology Co., Ltd. | Calibration method for multi-degree-of-freedom movable vision system |
CN109584302A (en) * | 2018-11-27 | 2019-04-05 | 北京旷视科技有限公司 | Camera pose optimization method, device, electronic equipment and computer-readable medium |
CN109584302B (en) * | 2018-11-27 | 2023-12-01 | 北京旷视科技有限公司 | Camera pose optimization method, device, electronic equipment and computer-readable medium |
CN111563840B (en) * | 2019-01-28 | 2023-09-05 | 北京魔门塔科技有限公司 | Training method and device of segmentation model, pose detection method and vehicle-mounted terminal |
CN111563840A (en) * | 2019-01-28 | 2020-08-21 | 北京初速度科技有限公司 | Segmentation model training method and device, pose detection method and vehicle-mounted terminal |
CN109946680A (en) * | 2019-02-28 | 2019-06-28 | 北京旷视科技有限公司 | External parameter calibration method, device, storage medium and calibration system of detection system |
CN110166714A (en) * | 2019-04-11 | 2019-08-23 | 深圳市朗驰欣创科技股份有限公司 | Dual-light fusion adjustment method, device and equipment |
CN110232715B (en) * | 2019-05-08 | 2021-11-19 | 奥比中光科技集团股份有限公司 | Method, device and system for self calibration of multi-depth camera |
CN110232715A (en) * | 2019-05-08 | 2019-09-13 | 深圳奥比中光科技有限公司 | Self-calibration method, apparatus and system for multiple depth cameras |
CN110132306B (en) * | 2019-05-20 | 2021-02-19 | 广州小鹏汽车科技有限公司 | Method and system for correcting vehicle positioning error |
CN110132306A (en) * | 2019-05-20 | 2019-08-16 | 广州小鹏汽车科技有限公司 | Method and system for correcting vehicle positioning error |
CN110349249A (en) * | 2019-06-26 | 2019-10-18 | 华中科技大学 | Real-time dense reconstruction method and system based on RGB-D data |
CN110363821A (en) * | 2019-07-12 | 2019-10-22 | 顺丰科技有限公司 | Acquisition method and device for the installation deviation angle of a monocular camera, camera and storage medium |
CN110415286A (en) * | 2019-09-24 | 2019-11-05 | 杭州蓝芯科技有限公司 | Extrinsic parameter calibration method for a multi-time-of-flight depth camera system |
CN110794828A (en) * | 2019-10-08 | 2020-02-14 | 福瑞泰克智能系统有限公司 | Landmark localization method fusing semantic information |
CN110866953B (en) * | 2019-10-31 | 2023-12-29 | Oppo广东移动通信有限公司 | Map construction method and device, and positioning method and device |
CN110866953A (en) * | 2019-10-31 | 2020-03-06 | Oppo广东移动通信有限公司 | Map construction method and device, and positioning method and device |
CN113330487A (en) * | 2019-12-30 | 2021-08-31 | 华为技术有限公司 | Parameter calibration method and device |
CN113781548A (en) * | 2020-06-10 | 2021-12-10 | 华为技术有限公司 | Multi-device pose measurement method, electronic device and system |
US12073071B2 (en) | 2020-07-29 | 2024-08-27 | Huawei Technologies Co., Ltd. | Cross-device object drag method and device |
CN112115980A (en) * | 2020-08-25 | 2020-12-22 | 西北工业大学 | Design method of binocular visual odometry based on optical flow tracking and point-line feature matching |
US12197693B2 (en) | 2020-08-26 | 2025-01-14 | Huawei Technologies Co., Ltd. | Method and device for displaying a projection interface |
CN113256731A (en) * | 2021-04-01 | 2021-08-13 | 深圳市宁创云软件技术有限公司 | Target detection method and device based on monocular vision |
CN112802112A (en) * | 2021-04-12 | 2021-05-14 | 北京三快在线科技有限公司 | Visual positioning method, device, server and storage medium |
CN113269876A (en) * | 2021-05-10 | 2021-08-17 | Oppo广东移动通信有限公司 | Map point coordinate optimization method and device, electronic equipment and storage medium |
CN113870358A (en) * | 2021-09-17 | 2021-12-31 | 聚好看科技股份有限公司 | Method and equipment for joint calibration of multiple 3D cameras |
CN113870358B (en) * | 2021-09-17 | 2024-05-24 | 聚好看科技股份有限公司 | Method and equipment for jointly calibrating multiple 3D cameras |
Also Published As
Publication number | Publication date |
---|---|
WO2019170166A1 (en) | 2019-09-12 |
CN108447097B (en) | 2021-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108447097B (en) | Depth camera calibration method and device, electronic equipment and storage medium | |
CN109242913B (en) | Method, device, equipment and medium for calibrating relative parameters of collector | |
CN110866496B (en) | Robot positioning and mapping method and device based on depth image | |
CN110322500B (en) | Optimization method and device, medium and electronic equipment for real-time positioning and map construction | |
CN107888828B (en) | Space positioning method and device, electronic device, and storage medium | |
CN108898630B (en) | A three-dimensional reconstruction method, apparatus, device and storage medium | |
WO2019170164A1 (en) | Depth camera-based three-dimensional reconstruction method and apparatus, device, and storage medium | |
KR102502651B1 (en) | Method and device for generating maps | |
CN108805917B (en) | Method, medium, apparatus and computing device for spatial localization | |
US11557083B2 (en) | Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method | |
WO2019205852A1 (en) | Method and apparatus for determining pose of image capture device, and storage medium therefor | |
CN110111388B (en) | Three-dimensional object pose parameter estimation method and visual equipment | |
CN109461208B (en) | Three-dimensional map processing method, device, medium and computing equipment | |
CN110349212B (en) | Optimization method and device, medium and electronic equipment for real-time positioning and map construction | |
CN108805979A (en) | Dynamic model three-dimensional reconstruction method, device, equipment and storage medium | |
CN109035303A (en) | Camera tracking method and device for a SLAM system, computer-readable storage medium | |
Zhang et al. | Synthetic aperture photography using a moving camera-IMU system | |
CN110096134B (en) | VR handle ray jitter correction method, device, terminal and medium | |
CN112085842B (en) | Depth value determining method and device, electronic equipment and storage medium | |
US11418716B2 (en) | Spherical image based registration and self-localization for onsite and offsite viewing | |
CN113075647A (en) | Robot positioning method, device, equipment and medium | |
CN113393529B (en) | Camera calibration method, device, equipment and medium | |
CN117788602A (en) | Camera calibration method and device, electronic equipment and storage medium | |
CN117274543A (en) | A NeRF-based AR real-time visualization method and system for the ORB-SLAM3 system | |
CN117115333A (en) | Three-dimensional reconstruction method combined with IMU data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2022-12-13
Address after: 518000 2nd Floor, Building A, Tsinghua Campus, Shenzhen University Town, Xili Street, Nanshan District, Shenzhen, Guangdong Province
Patentee after: Tsinghua Shenzhen International Graduate School
Address before: 518055 Nanshan Zhiyuan 1001, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province
Patentee before: TSINGHUA-BERKELEY SHENZHEN INSTITUTE