WO2018076154A1 - Panoramic video generation method based on spatial pose calibration of fisheye cameras - Google Patents

Panoramic video generation method based on spatial pose calibration of fisheye cameras

Info

Publication number
WO2018076154A1
WO2018076154A1 PCT/CN2016/103157
Authority
WO
WIPO (PCT)
Prior art keywords
camera
calibration
image
relationship
calibration plate
Prior art date
Application number
PCT/CN2016/103157
Other languages
English (en)
French (fr)
Inventor
晁志超
周剑
龙学军
余兴
谢荣路
徐一丹
张明磊
Original Assignee
成都通甲优博科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都通甲优博科技有限责任公司 filed Critical 成都通甲优博科技有限责任公司
Priority to PCT/CN2016/103157 priority Critical patent/WO2018076154A1/zh
Publication of WO2018076154A1 publication Critical patent/WO2018076154A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • The present invention relates to the field of video image processing technologies, and in particular to a method for generating a spherical re-projection panoramic video based on a fisheye lens camera group.
  • Panoramic video generation is a technique in which multiple cameras capture images at different positions and image stitching is then used to synthesize those images into a panoramic video.
  • At present, for panoramic video generated with fisheye lens cameras, the reliability of the panoramic video image depends mainly on two steps: 1. calibration of the relative spatial positions of the camera group; 2. the method used to stitch the images acquired by the fisheye lens cameras into a panoramic image. Both points strongly affect the reliability of the final panoramic video image.
  • Before panoramic stitching, the relative poses of the camera group must be calibrated. The spatial arrangement of the cameras is fixed, so different cameras differ in their internal and external parameters and in their mounting angles. To obtain panoramic video with a minimum number of cameras, the cameras point in different directions; to project the images obtained by the different cameras accurately into a common coordinate system, the intrinsic matrix of each camera and the pose relationships between the cameras must be calibrated.
  • Traditional dual-camera or multi-camera calibration generally requires all cameras to be calibrated to image the same target
  • (such as a calibration plate)
  • simultaneously, so that the cameras to be calibrated can be unified with the target coordinate system as reference and the relative poses between them recovered.
  • This approach is unsuitable when a large field of view is required: if a larger field of view must be covered with fewer cameras, the overlapping image region is necessarily small, which strongly degrades the calibration result, and calibrating the pose relationships between cameras with a calibration target inside such a limited common field of view is very difficult.
  • With a calibration block of high-precision three-dimensional structure, each camera needs to capture only one picture,
  • but the calibration block is very inconvenient to use, and when a calibration plate is used, at least three images of the plate in different poses must be acquired; moving the plate through different poses within a limited field of view is also very difficult. In addition, because of image distortion, wide-angle lenses generally exhibit large distortion at the image edges, and the common field of view of adjacent cameras lies exactly at the edge of each wide-angle camera's field of view, so the accuracy achieved by traditional calibration methods cannot be guaranteed either.
  • Current panoramic stitching methods, such as those of smart phones, mostly use a single camera to acquire the images, which limits the captured area and easily produces large parallax during actual shooting.
  • As the resolution of capture devices increases and application scenarios such as real-time monitoring become more complex, people wish to use more cameras to capture high-resolution panoramas of real scenes; larger and more complete high-definition panoramas therefore place higher demands on panoramic stitching technology.
  • An object of the present invention is to provide a panoramic video generation method based on spatial pose calibration of fisheye cameras, which solves the above technical problems.
  • A panoramic video generation method based on spatial pose calibration of fisheye cameras is provided, together with a plurality of rigidly connected fisheye lens cameras, each camera being used to obtain an original planar circular image, the method including:
  • Step A: performing spatial pose calibration on all adjacent cameras to determine their relative pose relationships;
  • Step B: listing independent constraint relationships according to the relative pose relationships;
  • Step C: listing the correction equations according to the independent constraint relationships, and adjusting the independent constraint relationships of step B by the adjustment method to obtain the correction vector;
  • Step D: judging whether the correction vector is smaller than a preset threshold; if so, performing step F; if not, obtaining a correction result from the correction vector;
  • Step E: substituting the correction result into the independent constraint relationships and returning to step C;
  • Step F: obtaining the spatial pose relationships from the independent constraint relationships;
  • Step G: obtaining the parameters of each of the cameras and the spatial pose relationships;
  • Step H: constructing the imaging model planes and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on the corresponding imaging model plane; projecting the planar circular images from the imaging model planes onto the standard field-of-view sphere to form first spherical images; and obtaining, from the image-point coordinates of the first spherical images, the mapping from the standard field-of-view sphere to the imaging model planes;
  • Step I: according to the mapping, projecting the planar circular images acquired in real time by each camera onto the same standard field-of-view sphere to form second spherical images;
  • Step J: fusing the overlapping parts existing between the second spherical images corresponding to adjacent cameras to obtain fused images;
  • Step K: stitching the fused images and the second spherical images to obtain a spherical panorama.
  • Spatial pose calibration of adjacent camera pairs determines the relative pose relationship between two adjacent cameras, but a pose relationship determined in this way is bound to contain a certain error, which in practice has a considerable impact. The present invention therefore uses multiple groups of photographs.
  • By constructing independent constraint relationships between pairs of cameras, the correction equations are listed and the correction result is obtained by computation; through repeated iteration the correction result is substituted back into the constraint relationships until the result satisfies all of them. In this way a high-precision spatial pose relationship is obtained that meets the needs of image stitching.
  • Further, step A includes
  • step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
  • step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
  • step A-3: arranging a reference camera between the first camera and the second camera such that the field of view of the reference camera contains the first calibration plate and the second calibration plate; the reference camera images the first and second calibration plates simultaneously, yielding the pose relationship H_{A→C0} between the reference camera and the first calibration plate and the pose relationship H_{C0→B} between the reference camera and the second calibration plate;
  • step A-4: obtaining the pose relationship of the two calibration plates, H_{A→B} = H_{C0→B} H_{A→C0};
  • step A-5: obtaining the pose relationship H_{A→C1} between the first camera and the first calibration plate and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
  • step A-6: obtaining the pose relationship between the first camera and the second camera, H_{C1→C2} = H_{B→C2} H_{A→B} H_{C1→A};
  • step A-7: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
  • To eliminate the influence that the edge imaging error of the wide-angle lenses has on the determination of the relative pose relationship, the present invention determines the relative pose relationship by means of a reference camera; this enlarges the overlapping field of view and guarantees the accuracy of the relative pose relationship.
  • Further, step A includes
  • step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
  • step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
  • step A-3: arranging a reference camera between the first camera and the second camera such that the field of view of the reference camera contains the first calibration plate and the second calibration plate;
  • step A-4: using a synchronization trigger signal to make the reference camera, the first camera and the second camera image the first and second calibration plates respectively, yielding the pose relationship H_{A→C0} between the reference camera and the first calibration plate, the pose relationship H_{C0→B} between the reference camera and the second calibration plate, the pose relationship H_{A→C1} between the first camera and the first calibration plate, and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
  • step A-5: obtaining the pose relationship between the first camera and the second camera, H_{C1→C2} = H_{B→C2} H_{C0→B} H_{A→C0} H_{C1→A};
  • step A-6: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
  • The difference from the previous variant is that the pose relationships are here acquired by synchronous triggering, which eliminates the error caused by changes in ambient light.
  • Further, step A includes
  • step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
  • step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
  • step A-3: obtaining the pose relationship H_{A→C1} between the first camera and the first calibration plate and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
  • step A-4: changing the poses of the first camera and the second camera, and obtaining anew the pose relationship H′_{A→C1} between the first camera and the first calibration plate and the pose relationship H′_{B→C2} between the second camera and the second calibration plate;
  • step A-5: obtaining the pose relationship H_{C1→C2} between the first camera and the second camera according to the formula (H′_{B→C2} H_{B→C2}^{-1}) H_{C1→C2} = H_{C1→C2} (H′_{A→C1} H_{A→C1}^{-1});
  • step A-6: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
  • This variant requires no additional reference camera; the relative pose relationship can be solved merely by rotating the two cameras.
  • Further, step H includes
  • step H-1: constructing the imaging model planes, the imaging model surfaces and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on the corresponding imaging model plane;
  • step H-2: projecting the original planar circular image from the imaging model plane onto the corresponding imaging model surface to form a first curved-surface image;
  • step H-3: re-projecting the first curved-surface image on the imaging model surface onto the standard field-of-view sphere to form a first spherical image;
  • step H-4: determining the mapping between the standard field-of-view sphere and the imaging model plane from the image-point coordinates of the corresponding planar circular image and of the first spherical image.
  • Further, step J includes
  • step J-1: triangulating the overlapping part of each of the second spherical images of step I, projecting the triangulated overlapping parts of the second spherical images onto the tangent plane to form a number of triangular images, and computing the feature points within each of the triangular images;
  • step J-2: translating toward each other, on the tangent plane, two triangular images that belong to different second spherical images and have the same feature points, and stretching the translated triangular images to form two stretched images of equal size that coincide with each other;
  • step J-3: fusing the two stretched images of step J-2 to form a fused image, and re-projecting the fused image from the tangent plane onto the standard field-of-view sphere.
  • the triangular image before stretching and the stretched image after stretching are similar triangles.
  • the method further includes performing smoothing processing on the fused image.
  • The parameter types calibrated in the internal parameter calibration include the equivalent focal length and the aberration coefficients of the camera.
  • The parameter types calibrated in the internal parameter calibration further include the imaging model and the principal point coordinates.
  • Thanks to the above technical solution, the pose relationships are determined more accurately and fewer cameras are required.
  • Multiple fisheye cameras with determined spatial positions guarantee blind-spot-free capture over 360 degrees in the horizontal direction and 180 degrees in the vertical direction, yielding large-scale clear images from multiple angles.
  • With the image stitching technique of this patent, the multi-angle images are stitched into a spherical panorama with a 360-degree horizontal viewing angle and a 180-degree vertical viewing angle.
  • The spherical panorama acquired by multiple cameras contains richer information.
  • Figure 1 is a schematic view of the rigidly connected camera structure;
  • Figure 2 shows the method of calibrating the pose relationship between the cameras of the fisheye lens camera group in Embodiment 1-1;
  • Figure 3 shows the method of calibrating the pose relationship between the cameras of the fisheye lens camera group in Embodiment 1-2;
  • Figure 4 shows the method of calibrating the pose relationship between the cameras of the fisheye lens camera group in Embodiment 1-3;
  • Figure 5 shows the correction of the pose calibration data of the fisheye lens camera group.
  • Figure 6 is a schematic diagram of fisheye lens camera re-projection;
  • Figure 7 is a schematic diagram of the triangulation of a fisheye lens camera image;
  • Figure 8 is a schematic view of the overlapping field of view of adjacent fisheye lens cameras;
  • Figure 9 is a schematic view of the tangent-plane projection of the overlapping field of view of adjacent fisheye lens cameras;
  • Figure 10 is a flow chart of panoramic image generation by the fisheye lens camera group.
  • First, the method of determining the spatial pose relationship in step A is described. For any three-dimensional point P_W in the world coordinate system W, let P_C denote its coordinates in the camera coordinate system C; then P_C = R_{W→C} P_W + t_{W→C} (1),
  • where R_{W→C} is the rotation matrix from the world coordinate system W to the camera coordinate system C
  • and t_{W→C} is the translation vector from the world coordinate system W to the camera coordinate system C.
  • Likewise, with R_{C→W} the rotation matrix
  • and t_{C→W} the translation vector from the camera coordinate system C to the world coordinate system W, P_W = R_{C→W} P_C + t_{C→W} (2).
  • Equation (2) can be rewritten as P_C = R_{C→W}^{-1} P_W − R_{C→W}^{-1} t_{C→W} (3), whence R_{W→C} = R_{C→W}^{-1} and t_{W→C} = −R_{C→W}^{-1} t_{C→W} (4).
  • For a more compact representation of the coordinate transformation, the coordinates can be extended to homogeneous coordinates P̃ = [P; 1] (5), giving P̃_C = H_{W→C} P̃_W with the pose relationship matrix H_{W→C} = [ R_{W→C} t_{W→C} ; 0ᵀ 1 ] (6); similarly, P̃_W = H_{C→W} P̃_C with H_{C→W} = H_{W→C}^{-1} (7).
  • For a fisheye lens camera group, the first task is the calibration of the spatial pose relationship between adjacent cameras. Without loss of generality, the following explanation takes the calibration of the pose relationship between camera C1 and camera C2 as an example; the calibration steps for the spatial pose relationship between other adjacent cameras are exactly the same as for C1-C2.
  • Step A: performing spatial pose calibration on all adjacent cameras;
  • Step B: listing independent constraint relationships according to the relative pose relationships of adjacent cameras;
  • Step C: listing the correction equations according to the independent constraint relationships, and adjusting the constraints produced by the redundant calibration by the adjustment method to obtain the correction vector;
  • Step D: correcting the measured values according to the correction vector;
  • Step E: substituting the correction result into the independent constraint relationships of step B, and repeating from step B until the correction vector is smaller than the preset threshold.
  • Embodiment 1-1 determines, on the above basis, the pose relationships of adjacent cameras in the present invention. As shown in Fig. 2, C1 and C2 are adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated, and A and B are calibration plates in the fields of view of C1 and C2 respectively.
  • To calibrate the pose relationship of C1 and C2, a high-precision camera C0 is also introduced, and C0 is used to calibrate the pose relationship between A and B.
  • Similarly to the equations above, for any spatial point P_A in the coordinate system of calibration plate A, its spatial position in the camera C1 coordinate system is P_{C1} = H_{A→C1} P_A (8);
  • the position in the camera C2 coordinate system of the spatial point P_{C1} of the camera C1 coordinate system is P_{C2} = H_{C1→C2} P_{C1} (9);
  • the position in the calibration plate B coordinate system of the spatial point P_{C2} of the camera C2 coordinate system is P_B = H_{C2→B} P_{C2} (10); together with P_A = H_{B→A} P_B (11) this gives H_{B→A} H_{C2→B} H_{C1→C2} H_{A→C1} = I (12), and hence:
  • H_{C1→C2} = H_{B→C2} H_{A→B} H_{C1→A} (13)
  • Here H_{B→C2} and H_{C1→A}, the pose relationship between camera C2 and calibration plate B and the pose relationship between camera C1 and calibration plate A, can be obtained with Zhang Zhengyou's method, while the quantity H_{A→B},
  • the pose relationship between calibration plate A and calibration plate B, is unknown; analogously to (12), H_{A→B} H_{C0→A} H_{B→C0} = I (14), so H_{A→B} = H_{C0→B} H_{A→C0} (15).
  • Embodiment 1-2 is similar to Embodiment 1-1: C1 and C2 are adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated; calibration method 2 likewise places a calibration plate A and a calibration plate B in the fields of view of the fisheye lens cameras C1 and C2 and calibrates them separately with Zhang Zhengyou's method, as shown in Fig. 3.
  • The difference from method 1 is that there the intermediate quantity H_{A→B} is calibrated additionally with camera C0; this method no longer obtains that indirect quantity, but instead calibrates H_{C1→A} and H_{B→C2}
  • while obtaining H_{C0→B} and H_{A→C0} directly and synchronously, which requires camera C0 to be synchronized with the fisheye lens cameras C1 and C2;
  • the fisheye lens camera C1, the fisheye lens camera C2 and the high-precision camera C0 are therefore connected so as to guarantee simultaneous acquisition by the three cameras.
  • In Embodiment 1-3, C1 and C2 are adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated. Unlike the first two calibration methods, which require an additional high-precision camera C0 to obtain the pose relationship between calibration plates A and B directly or indirectly, this calibration method only requires that
  • each camera of the group to be calibrated has one calibration plate in its field of view, as shown in Fig. 4. It is assumed that there is a calibration plate A in the field of view of camera C1 and a calibration plate B in the field of view of camera C2, and that the relative pose of A and B is unchanged during the calibration.
  • Similarly to the above, for any spatial point P_A in the coordinate system of calibration plate A, its spatial position in the camera C1 coordinate system is P_{C1} = H_{A→C1} P_A (16);
  • likewise P_{C2} = H_{C1→C2} P_{C1} (17),
  • P_B = H_{C2→B} P_{C2} (18) and P_A = H_{B→A} P_B (19), giving H_{B→A} H_{C2→B} H_{C1→C2} H_{A→C1} = I (20) and H_{C2→B} H_{C1→C2} H_{A→C1} = H_{A→B} (21).
  • Here H_{A→B} and H_{C1→C2}, the pose relationships between the calibration plates A and B and between the cameras C1 and C2 respectively, are invariants, while H_{C2→B} and H_{A→C1},
  • the pose relationships between camera C2 and calibration plate B and between calibration plate A and camera C1, are changing quantities that vary when the pose of the camera group changes, that is, the same relation holds in another group state.
  • The specific calibration implementation steps are given in the detailed description.
  • Embodiment 2 is an algorithm that finds the final pose relationships iteratively by constructing independent constraint relationships, and is the core of the present invention.
  • Embodiment 2 can be combined with any one of Embodiments 1-1, 1-2 and 1-3,
  • whereby the spatial pose relationships of the cameras are determined.
  • A camera group consisting of four cameras has the spatial pose constraints given in the detailed description.
  • What most affects the stitching result is the relative attitude between the cameras.
  • In general, the relative pose relationship between any two cameras can be obtained by pairwise camera pose calibration; through different combinations, these pose relationships satisfy a large number of inherent constraint relationships. To make full use of all the constraints while minimizing the amount of computation, the combined constraint relationships between the cameras of the multi-camera rig must first be examined.
  • The following takes the four-camera structure shown in Fig. 5 as an example, which has the constraint relationships given in equation (28).
  • The relative poses between the cameras of the four-camera rig can be completely determined by R_{C1→C2}, R_{C2→C3} and R_{C3→C4}.
  • The steps are as follows:
  • the measured values are corrected by the correction vector, iterating repeatedly until all the constraint relationships are satisfied.
  • The spherical panorama is the panoramic description closest to the human eye model.
  • In the panoramic stitching of this patent, the images obtained by the different fisheye cameras are first projected onto the model surfaces according to their preset imaging models, and the image on each fisheye lens camera imaging model is then re-projected onto the standard field-of-view sphere. After all cameras in the camera group have completed the re-projection process, panoramic stitching is performed to form a panoramic image with little distortion and deformation.
  • Step H is explained as follows.
  • C is the center of the fisheye lens camera image,
  • and the corresponding projection sphere radius is r;
  • O is the center of the standard field-of-view sphere, with corresponding radius
  • R. The imaging projection sphere of the fisheye lens camera and the standard field-of-view sphere
  • are tangent at the image point corresponding to the center point C, the tangent point being T.
  • Let P be a point on the image of the fisheye lens camera
  • and Q the image point corresponding to P on the projection surface of the fisheye lens camera;
  • the image point Q on the sphere of the imaging model is re-projected onto the standard field-of-view sphere and recorded as the image point M.
  • Several common projection models for fisheye lens cameras are the stereographic projection, the equidistant projection, the equisolid-angle projection and the orthographic projection.
  • The angle θ is determined by the distance of the point P from the optical center C and by the imaging model of the fisheye camera.
  • The first curved-surface image is projected from the imaging model surface to the standard field-of-view sphere to obtain the first spherical image, and the mapping P→Q→M is established from the first spherical image and the original planar circular image.
  • The mapping M→Q→P can then be obtained in inverse-mapping order, which solves the problem of undefined points on the sphere.
  • The spherical re-projection is implemented as follows:
  • the spherical re-projection part of this patent takes a four-camera structure as an example, with the pose relationships between the fisheye cameras of the panoramic camera group determined beforehand; the determination of the pose relationships is not itself an inventive point of this patent and may be carried out with existing pose determination methods, the parameters of the fisheye cameras being read into the system when it is constructed.
  • In step H-1, the imaging parameters of the fisheye lens camera are calibrated to obtain the fisheye image center point coordinates C(Cx, Cy) and the fisheye image imaging sphere radius r; the standard field-of-view sphere radius R is related to the specific fisheye
  • camera model and can be obtained through experimental verification,
  • the specific data depending on the actual fisheye lens camera parameters.
  • From these parameters the standard field-of-view sphere can be generated, after which the imaging model surface corresponding to each fisheye lens camera imaging model and the corresponding standard field-of-view sphere are constructed.
  • In step H-2, the original planar circular image acquired by the fisheye lens camera is projected onto the imaging model surface according to the projection model corresponding to the specific fisheye lens camera, such as an orthographic projection model;
  • the mapping M→Q→P is then obtained in inverse-mapping order, and the images of the four fisheye lens cameras are projected onto the standard field-of-view sphere.
  • The next step is the panoramic stitching, which stitches the images that have already been projected onto the same sphere into a spherical panorama with a large field of view, comprising the two steps of image registration and image fusion.
  • Image stitching refers to the process of aligning two or more images with spatially overlapping information and combining the aligned images into a seamless, high-definition image.
  • The function of step J is to handle the severe distortion and parallax that exist between the overlapping parts of the different fisheye images projected onto the same great sphere.
  • This patent adopts triangulation correction: the different fisheye images of the overlapping part are first triangulated by the method shown in Fig. 7, and the feature points within each triangle are then matched. From the spatial distances between the matched feature points, the displacements required for the triangle vertices after triangulation can be obtained. At the same time, a cost function is defined that constrains the movement of the triangles to similarity-preserving changes, which makes the finally stretched triangular regions visually smooth. The sphere is divided into a triangular mesh as shown in Fig. 7; such division methods are in wide use at present and are not described further here, the division shown in Fig. 7 being preferred.
  • The basic strategy of the movement cost function is, for example: considering that the triangle ΔG1G2G3 contains a feature point P whose corresponding matching point Q is known, the Euclidean distance between P and Q is first shortened by translating ΔG1G2G3.
  • To avoid the unevenness caused by over-correction, when any vertex is displaced the vertices directly adjacent to it should also move.
  • The similarity of the triangle before and after the movement is measured to describe the moving distance and direction of the adjacent vertices.
  • Since regions poor in feature points should remain in place as far as possible, each vertex is given a movement weight, defined as a function of the distance to the feature points.
  • In step J-3, after the images have been aligned and registered, traces are inevitably produced at the seams, affecting the final panoramic visual effect.
  • This patent preferably processes the gray values of the images at the seam by the weighted average method, weighting first and then superposing and averaging: assuming I1 and I2 are the images to be fused and I the fused image, I = ω1·I1 + ω2·I2 with ω1 + ω2 = 1 (30).
  • The weights are configured according to the actual situation; they are not limited here and may depend on the specific position.
  • The stitching steps are as follows:
  • in step J-1, the overlapping parts of the different fisheye images projected onto the same great sphere are triangulated as shown in Fig. 8; the triangulated images are then projected onto the tangent plane by the method shown in Fig. 9, and the feature points within each triangle are matched;
  • in step J-2, the displacements required for the triangle vertices after triangulation are calculated from the spatial distances between the feature points of step J-1, the movement of the triangles is constrained to similarity-preserving changes, and the triangles are then stretched;
  • in step J-3, the stretched triangular images are fused according to formula (30);
  • in step K, the fused overlapping parts are re-projected onto the sphere to obtain the final spherical panorama.

Abstract

The present invention relates to a panoramic video generation method based on spatial pose calibration of multiple fisheye cameras. Stitching images into a larger and more complete field of view requires high-precision calibration of the pose relationships between the rigidly connected fisheye cameras. Using multiple fisheye cameras with determined spatial positions guarantees blind-spot-free capture over 360 degrees in the horizontal direction and 180 degrees in the vertical direction with fewer cameras. With the image stitching technique of this patent, the multi-angle images are stitched into a spherical panorama with a 360-degree horizontal viewing angle and a 180-degree vertical viewing angle. The spherical panorama acquired by multiple cameras contains richer information.

Description

Panoramic video generation method based on spatial pose calibration of fisheye cameras

Technical Field
The present invention relates to the field of video image processing technologies, and in particular to a spherical re-projection panoramic video generation method based on a fisheye lens camera group.
Background Art
Panoramic video generation is a technique in which multiple cameras capture images at different positions and image stitching is then used to synthesize them into a panoramic video. At present, for panoramic video generated with fisheye lens cameras, the reliability of the panoramic video image depends mainly on two steps: 1. calibration of the relative spatial positions of the camera group; 2. the method used to stitch the images acquired by the fisheye lens cameras into a panoramic image. Both points strongly affect the reliability of the final panoramic video image.
First, before the panoramic stitching is processed, the relative poses of the camera group must be calibrated. The spatial arrangement of the cameras is fixed, so different cameras differ in their internal and external parameters and in their mounting angles. To obtain panoramic video with the fewest cameras, the cameras point in different directions; to project the images obtained by the different cameras accurately into a common coordinate system, the intrinsic matrix of each camera and the pose relationships between the cameras must be calibrated.
There are many methods for calibrating the internal camera parameters, but methods that calibrate two cameras jointly are hard to apply in environments with a large field of view. Traditional dual-camera or multi-camera calibration generally requires all cameras to be calibrated to image the same target (for example a calibration plate) simultaneously, so that the cameras can be unified with the target coordinate system as reference and the relative poses between them recovered. Situations requiring a large field of view are unsuitable for this calibration approach: if a large field of view is to be obtained with few cameras, the overlapping image region is necessarily small, which strongly affects the calibration result, and calibrating the pose relationships between cameras with a calibration target inside such a limited common field of view is very difficult. With a calibration block of high-precision three-dimensional structure, each camera needs to capture only one image, but the calibration block is very inconvenient to use; with a calibration plate, at least three images of the plate in different poses must be acquired, and moving the plate through different poses inside a limited field of view is also very difficult. In addition, because of image distortion, wide-angle lenses generally produce large distortion at the image edges, and the common field of view of adjacent cameras lies exactly at the edge of each wide-angle camera's field of view, so the accuracy achieved by traditional calibration methods cannot be guaranteed either.
Today's panoramic stitching methods, such as those of smart phones, mostly use a single camera to acquire the images, which limits the captured area and easily produces large parallax during actual shooting. As the resolution of capture devices increases and application scenarios such as real-time monitoring become more complex, people wish to use more cameras to capture high-resolution panoramas of real scenes; larger and more complete high-definition panoramas therefore place higher demands on panoramic stitching technology.
Summary of the Invention
An object of the present invention is to provide a panoramic video generation method based on spatial pose calibration of fisheye cameras that solves the above technical problems.
The technical problem solved by the present invention can be achieved with the following technical solution: a panoramic video generation method based on spatial pose calibration of fisheye cameras, in which a number of rigidly connected fisheye lens cameras are provided, each camera being used to obtain an original planar circular image, including
Step A: performing spatial pose calibration on all adjacent cameras to determine their relative pose relationships;
Step B: listing independent constraint relationships according to the relative pose relationships;
Step C: listing the correction equations according to the independent constraint relationships, and adjusting the independent constraint relationships of step B by the adjustment method to obtain the correction vector;
Step D: judging whether the correction vector is smaller than a preset threshold; if so, performing step F; if not, obtaining a correction result from the correction vector;
Step E: substituting the correction result into the independent constraint relationships and returning to step C;
Step F: obtaining the spatial pose relationships from the independent constraint relationships;
Step G: obtaining the parameters of each camera and the spatial pose relationships;
Step H: constructing the imaging model planes and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on the corresponding imaging model plane; projecting the planar circular images from the imaging model planes onto the standard field-of-view sphere to form first spherical images; and obtaining, from the image-point coordinates of the first spherical images, the mapping from the standard field-of-view sphere to the imaging model planes;
Step I: according to the mapping, projecting the planar circular images acquired in real time by each camera onto the same standard field-of-view sphere to form second spherical images;
Step J: fusing the overlapping parts between the second spherical images corresponding to adjacent cameras to obtain fused images;
Step K: stitching the fused images and the second spherical images to obtain a spherical panorama.
First, performing spatial pose calibration on adjacent camera pairs determines the relative pose relationship between two adjacent cameras; however, a pose relationship determined in this way is bound to contain a certain error, and in practice this error has a considerable impact. The present invention therefore constructs independent constraint relationships between pairs of cameras over multiple groups of images, lists the correction equations, computes the correction result, and through repeated iteration substitutes the correction result back into the constraint relationships until the result satisfies all of them. In this way a high-precision spatial pose relationship is obtained that meets the needs of image stitching.
Further, step A includes
Step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
Step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
Step A-3: arranging a reference camera between the first camera and the second camera such that the field of view of the reference camera contains the first calibration plate and the second calibration plate; the reference camera images the first and second calibration plates simultaneously, yielding the pose relationship H_{A→C0} between the reference camera and the first calibration plate and the pose relationship H_{C0→B} between the reference camera and the second calibration plate;
Step A-4: obtaining the pose relationship of the first and second calibration plates, H_{A→B} = H_{C0→B} H_{A→C0};
Step A-5: obtaining the pose relationship H_{A→C1} between the first camera and the first calibration plate and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
Step A-6: obtaining the pose relationship between the first camera and the second camera, H_{C1→C2} = H_{B→C2} H_{A→B} H_{C1→A};
Step A-7: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
To eliminate the influence that the edge imaging error of the wide-angle lenses has on the determination of the relative pose relationship, the present invention determines the relative pose relationship by means of a reference camera; this enlarges the overlapping field of view and guarantees the accuracy of the determined relative pose relationship.
Further, step A includes
Step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
Step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
Step A-3: arranging a reference camera between the first camera and the second camera such that the field of view of the reference camera contains the first calibration plate and the second calibration plate;
Step A-4: using a synchronization trigger signal to make the reference camera, the first camera and the second camera image the first and second calibration plates respectively, yielding the pose relationship H_{A→C0} between the reference camera and the first calibration plate, the pose relationship H_{C0→B} between the reference camera and the second calibration plate, the pose relationship H_{A→C1} between the first camera and the first calibration plate, and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
Step A-5: obtaining the pose relationship between the first camera and the second camera, H_{C1→C2} = H_{B→C2} H_{C0→B} H_{A→C0} H_{C1→A};
Step A-6: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
The difference from the previous variant is that the pose relationships are here acquired by synchronous triggering, which eliminates the error caused by changes in ambient light.
Further, step A includes
Step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
Step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
Step A-3: obtaining the pose relationship H_{A→C1} between the first camera and the first calibration plate and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
Step A-4: changing the poses of the first camera and the second camera, and obtaining anew the pose relationship H′_{A→C1} between the first camera and the first calibration plate and the pose relationship H′_{B→C2} between the second camera and the second calibration plate;
Step A-5: obtaining the pose relationship H_{C1→C2} between the first camera and the second camera according to the formula (H′_{B→C2} H_{B→C2}^{-1}) H_{C1→C2} = H_{C1→C2} (H′_{A→C1} H_{A→C1}^{-1});
Step A-6: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
This variant requires no additional reference camera; the relative pose relationship can be solved merely by rotating the two cameras.
Further, step H includes
Step H-1: constructing the imaging model planes, the imaging model surfaces and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on the corresponding imaging model plane;
Step H-2: projecting the original planar circular image from the imaging model plane onto the corresponding imaging model surface to form a first curved-surface image;
Step H-3: re-projecting the first curved-surface image on the imaging model surface onto the standard field-of-view sphere to form a first spherical image;
Step H-4: obtaining the mapping from the standard field-of-view sphere to the imaging model plane from the image-point coordinates of the corresponding planar circular image and of the first spherical image.
Further, step J includes
Step J-1: triangulating the overlapping part of each of the second spherical images of step I, projecting the triangulated overlapping parts of the second spherical images onto the tangent plane to form a number of triangular images, and computing the feature points within each of the triangular images;
Step J-2: translating toward each other, on the tangent plane, two triangular images that belong to different second spherical images and have the same feature points, and stretching the translated triangular images to form two stretched images of equal size that coincide with each other;
Step J-3: fusing the two stretched images of step J-2 to form a fused image, and re-projecting the fused image from the tangent plane onto the standard field-of-view sphere.
Further, in step J-2, the triangular image before stretching and the stretched image after stretching are similar triangles.
Further, step J-3 also includes smoothing the fused image.
Further, the parameter types calibrated in the internal parameter calibration include the equivalent focal length and the aberration coefficients of the camera.
Further, the parameter types calibrated in the internal parameter calibration also include the imaging model and the principal point coordinates.
Beneficial effects: owing to the above technical solution, the requirements of stitching images into a larger and more complete field of view are met, the pose relationships are determined with higher accuracy, and fewer cameras are required. Multiple fisheye cameras with determined spatial positions guarantee blind-spot-free capture over 360 degrees in the horizontal direction and 180 degrees in the vertical direction, and large-scale clear images are obtained from multiple angles. With the image stitching technique of this patent, the multi-angle images are stitched into a spherical panorama with a 360-degree horizontal viewing angle and a 180-degree vertical viewing angle. The spherical panorama acquired by multiple cameras contains richer information.
Brief Description of the Drawings
Figure 1 is a schematic view of the rigidly connected camera structure;
Figure 2 shows the method of calibrating the pose relationship between the cameras of the fisheye lens camera group in Embodiment 1-1;
Figure 3 shows the method of calibrating the pose relationship between the cameras of the fisheye lens camera group in Embodiment 1-2;
Figure 4 shows the method of calibrating the pose relationship between the cameras of the fisheye lens camera group in Embodiment 1-3;
Figure 5 shows the correction of the pose calibration data of the fisheye lens camera group.
Figure 6 is a schematic diagram of fisheye lens camera re-projection;
Figure 7 is a schematic diagram of the triangulation of a fisheye lens camera image;
Figure 8 is a schematic view of the overlapping field of view of adjacent fisheye lens cameras;
Figure 9 is a schematic view of the tangent-plane projection of the overlapping field of view of adjacent fisheye lens cameras;
Figure 10 is a flow chart of panoramic image generation by the fisheye lens camera group.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should be noted that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another.
The present invention is further described below with reference to the drawings and specific embodiments, which are not to be taken as limiting it.
First, the method of determining the spatial pose relationship in step A is described.
For any three-dimensional point P_W in the world coordinate system W, let its coordinates in the camera coordinate system C be P_C; then:
P_C = R_{W→C} P_W + t_{W→C}   (1)
where R_{W→C} is the rotation matrix from the world coordinate system W to the camera coordinate system C and t_{W→C} is the translation vector from the world coordinate system W to the camera coordinate system C. Likewise, if R_{C→W} is the rotation matrix from the camera coordinate system C to the world coordinate system W and t_{C→W} the translation vector from the camera coordinate system C to the world coordinate system W, then:
P_W = R_{C→W} P_C + t_{C→W}   (2)
Equation (2) can be rewritten as:
P_C = R_{C→W}^{-1} P_W − R_{C→W}^{-1} t_{C→W}   (3)
Obviously, from the above equation:
R_{W→C} = R_{C→W}^{-1},   t_{W→C} = −R_{C→W}^{-1} t_{C→W}   (4)
For a more compact representation of the coordinate transformation, the coordinates can be extended to homogeneous coordinates, as shown below:
P̃ = [P; 1]   (5)
Writing the pose relationship matrix as
H_{W→C} = [ R_{W→C}  t_{W→C} ; 0ᵀ  1 ]
we have:
P̃_C = H_{W→C} P̃_W   (6)
Similarly to the derivation above:
P̃_W = H_{C→W} P̃_C,   H_{C→W} = H_{W→C}^{-1}   (7)
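As a concrete illustration of formulas (5) to (7) (illustrative only, not part of the patent text; the helper names are our own), a minimal NumPy sketch that packs R and t into a 4×4 pose matrix H, applies it to a homogeneous point, and inverts it in closed form:

```python
import numpy as np

def make_H(R, t):
    """Pack a 3x3 rotation R and a 3-vector t into a 4x4 pose matrix (formula (6))."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(t, dtype=float).ravel()
    return H

def invert_H(H):
    """Closed-form inverse per formula (7): rotation R^T, translation -R^T t."""
    R, t = H[:3, :3], H[:3, 3]
    Hi = np.eye(4)
    Hi[:3, :3] = R.T
    Hi[:3, 3] = -R.T @ t
    return Hi

# P_C = H_{W->C} P_W in homogeneous coordinates (formulas (5) and (6))
H_W_C = make_H(np.eye(3), [0.1, 0.0, 0.5])
P_W = np.array([1.0, 2.0, 3.0, 1.0])          # homogeneous world point
P_C = H_W_C @ P_W
assert np.allclose(invert_H(H_W_C) @ P_C, P_W)  # H_{C->W} = H_{W->C}^{-1}
```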
For a fisheye lens camera group, the first task is the calibration of the spatial pose relationship between adjacent cameras. Without loss of generality, the following explanation takes the calibration of the pose relationship between camera C1 and camera C2 as an example; the calibration steps for the spatial pose relationship between other adjacent cameras are exactly the same as for C1-C2.
A spatial pose calibration method for a panoramic video camera group:
Step A: performing spatial pose calibration on all adjacent cameras;
Step B: listing independent constraint relationships according to the relative pose relationships of adjacent cameras;
Step C: listing the correction equations according to the independent constraint relationships, and adjusting the constraints produced by the redundant calibration by the adjustment method to obtain the correction vector;
Step D: correcting the measured values according to the correction vector;
Step E: substituting the correction result into the independent constraint relationships of step B and repeating from step B until the correction vector is smaller than the preset threshold.
Embodiment 1-1 determines, on the above basis, the pose relationships of adjacent cameras in the present invention. As shown in Fig. 2, C1 and C2 are adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated, and A and B are calibration plates in the fields of view of C1 and C2 respectively. To calibrate the pose relationship of C1 and C2, a high-precision camera C0 is also introduced, and C0 is used to calibrate the pose relationship between A and B.
The theoretical derivation is as follows:
As shown in the left part of Fig. 2, and similarly to the equations above, for any spatial point P_A in the coordinate system of calibration plate A, its spatial position in the camera C1 coordinate system is:
P_{C1} = H_{A→C1} P_A   (8)
Likewise, the position in the camera C2 coordinate system of the spatial point P_{C1} of the camera C1 coordinate system is:
P_{C2} = H_{C1→C2} P_{C1}   (9)
The position in the calibration plate B coordinate system of the spatial point P_{C2} of the camera C2 coordinate system is:
P_B = H_{C2→B} P_{C2}   (10)
Finally, the position in the calibration plate A coordinate system of the spatial point P_B of the calibration plate B coordinate system is:
P_A = H_{B→A} P_B   (11)
From the above equations:
H_{B→A} H_{C2→B} H_{C1→C2} H_{A→C1} = I   (12)
and hence:
H_{C1→C2} = H_{B→C2} H_{A→B} H_{C1→A}   (13)
In this equation, H_{B→C2} and H_{C1→A}, the pose relationship between camera C2 and calibration plate B and the pose relationship between camera C1 and calibration plate A, can be obtained with Zhang Zhengyou's method, while the quantity H_{A→B}, the pose relationship between calibration plate A and calibration plate B, is unknown.
As shown in the right part of Fig. 2, a relation similar to equation (12) also exists between camera C0 and the calibration plates A and B:
H_{A→B} H_{C0→A} H_{B→C0} = I   (14)
and hence:
H_{A→B} = H_{C0→B} H_{A→C0}   (15)
According to the above theory, the specific calibration steps are as follows:
a). performing internal parameter calibration of C1 and C2 respectively, including the equivalent focal length, fisheye imaging model, principal point coordinates, aberration coefficients, etc.;
b). placing a calibration plate A in the field of view of C1 and a calibration plate B in the field of view of C2, keeping the relative position of A and B unchanged;
c). keeping the relative position of A and B unchanged, imaging calibration plates A and B simultaneously with the high-precision camera C0, obtaining the pose relationship H_{A→C0} between camera C0 and plate A and the pose relationship H_{C0→B} between camera C0 and plate B with Zhang Zhengyou's method, and obtaining the pose relationship of plates A and B, H_{A→B} = H_{C0→B} H_{A→C0}, according to formula (15);
d). continuing to keep the relative position of A and B unchanged, obtaining with Zhang Zhengyou's method the pose relationship H_{C1→A} between the fisheye lens camera C1 and plate A and the pose relationship H_{B→C2} between the fisheye lens camera C2 and plate B;
e). the pose relationship between the adjacent cameras C1 and C2 is H_{C1→C2} = H_{B→C2} H_{A→B} H_{C1→A}; here steps c) and d) are carried out separately and need not be synchronized;
f). repeating steps d) and e) to obtain the pose relationships between the other adjacent cameras.
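Steps c) to e) reduce to two matrix products once the individual plate-camera poses are available. A minimal sketch, assuming each pose has already been estimated as a 4×4 matrix by Zhang Zhengyou's method (the random poses below merely stand in for real calibration output):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pose():
    """Generate a random but valid 4x4 pose matrix for the demonstration."""
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R *= np.sign(np.linalg.det(R))          # ensure det(R) = +1
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = rng.normal(size=3)
    return H

# Stand-ins for poses estimated with Zhang Zhengyou's method, as 4x4 matrices:
# H_A_C0: plate A -> camera C0,  H_C0_B: camera C0 -> plate B,
# H_C1_A: camera C1 -> plate A,  H_B_C2: plate B -> camera C2
H_A_C0, H_C0_B, H_C1_A, H_B_C2 = (random_pose() for _ in range(4))

H_A_B = H_C0_B @ H_A_C0                     # formula (15): plate A -> plate B
H_C1_C2 = H_B_C2 @ H_A_B @ H_C1_A           # formula (13): camera C1 -> camera C2
```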
Embodiment 1-2 is similar to Embodiment 1-1. Let C1 and C2 be adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated; calibration method 2 likewise places a calibration plate A and a calibration plate B in the fields of view of the fisheye lens cameras C1 and C2 and calibrates them separately with Zhang Zhengyou's method, as shown in Fig. 3. The difference between this method and method 1 is that there the intermediate quantity H_{A→B} was calibrated additionally with camera C0, whereas this method no longer obtains that indirect quantity: H_{C0→B} and H_{A→C0} are obtained directly and synchronously while H_{C1→A} and H_{B→C2} are being calibrated, which requires camera C0 to be synchronized with the fisheye lens cameras C1 and C2.
The specific implementation steps are as follows:
a). performing internal parameter calibration of C1 and C2 respectively, including the equivalent focal length, fisheye imaging model, principal point coordinates, aberration coefficients, etc.;
b). placing a calibration plate A in the field of view of C1 and a calibration plate B in the field of view of C2, keeping the relative position of A and B unchanged;
c). connecting the fisheye lens camera C1, the fisheye lens camera C2 and the high-precision camera C0 by means of a synchronization trigger signal or similar device, ensuring that the three cameras capture images simultaneously;
d). obtaining with Zhang Zhengyou's method the pose relationship H_{C1→A} between the fisheye lens camera C1 and plate A and the pose relationship H_{B→C2} between the fisheye lens camera C2 and plate B, while at the same time as the images of C1 on A and of C2 on B are captured, imaging plates A and B simultaneously with the high-precision camera C0 and calibrating with Zhang Zhengyou's method the pose relationship H_{A→C0} between camera C0 and plate A and the pose relationship H_{C0→B} between camera C0 and plate B;
e). the pose relationship between the fisheye lens cameras C1 and C2 is H_{C1→C2} = H_{B→C2} H_{C0→B} H_{A→C0} H_{C1→A};
f). repeating steps d) and e) to obtain the pose relationships between the other adjacent cameras.
In Embodiment 1-3, let C1 and C2 be adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated. Unlike the two previous calibration methods, which require an additional high-precision camera C0 to obtain the pose relationship between calibration plates A and B directly or indirectly, this calibration method only requires that the field of view of each camera of the group to be calibrated contains one calibration plate, as shown in Fig. 4. Suppose there is a calibration plate A in the field of view of camera C1 and a calibration plate B in the field of view of camera C2, and the relative pose of A and B is kept unchanged during the calibration.
The theoretical derivation is as follows:
As shown in Fig. 4, and similarly to the equations above, for any spatial point P_A in the coordinate system of calibration plate A, its spatial position in the camera C1 coordinate system is:
P_{C1} = H_{A→C1} P_A   (16)
Likewise, the position in the camera C2 coordinate system of the spatial point P_{C1} of the camera C1 coordinate system is:
P_{C2} = H_{C1→C2} P_{C1}   (17)
The position in the calibration plate B coordinate system of the spatial point P_{C2} of the camera C2 coordinate system is:
P_B = H_{C2→B} P_{C2}   (18)
Finally, the position in the calibration plate A coordinate system of the spatial point P_B of the calibration plate B coordinate system is:
P_A = H_{B→A} P_B   (19)
From the above equations:
H_{B→A} H_{C2→B} H_{C1→C2} H_{A→C1} = I   (20)
Moving H_{B→A} to the right-hand side of the equation gives:
H_{C2→B} H_{C1→C2} H_{A→C1} = H_{A→B}   (21)
In this equation, H_{A→B} and H_{C1→C2}, the pose relationships between calibration plates A and B and between cameras C1 and C2 respectively, are both invariants, while H_{C2→B} and H_{A→C1}, the pose relationships between camera C2 and calibration plate B and between calibration plate A and camera C1 respectively, are changing quantities that vary when the pose of the camera group changes; that is, in another group state:
H′_{C2→B} H_{C1→C2} H′_{A→C1} = H_{A→B}   (22)
From equations (21) and (22):
H_{C2→B} H_{C1→C2} H_{A→C1} = H′_{C2→B} H_{C1→C2} H′_{A→C1}   (23)
Left-multiplying both sides by H′_{C2→B}^{-1} and right-multiplying by H_{A→C1}^{-1} gives:
(H′_{C2→B}^{-1} H_{C2→B}) H_{C1→C2} = H_{C1→C2} (H′_{A→C1} H_{A→C1}^{-1})   (24)
Writing A = H′_{C2→B}^{-1} H_{C2→B}, B = H′_{A→C1} H_{A→C1}^{-1} and X = H_{C1→C2}, we have:
A X = X B   (25)
This is the classical robot hand-eye calibration equation; X, that is H_{C1→C2}, can be solved with a hand-eye calibration method.
According to the above theory, the specific calibration steps are as follows:
a). performing internal parameter calibration of C1 and C2 respectively, including the equivalent focal length, fisheye imaging model, principal point coordinates, aberration coefficients, etc.;
b). placing a calibration plate A in the field of view of C1 and a calibration plate B in the field of view of C2, keeping the relative position of A and B unchanged;
c). obtaining with Zhang Zhengyou's method the pose relationship H_{A→C1} between the fisheye lens camera C1 and plate A and the pose relationship H_{C2→B} between the fisheye lens camera C2 and plate B;
d). changing the pose of the camera group and obtaining anew a set of pose relationships between camera C1 and plate A and between camera C2 and plate B, denoted H′_{A→C1} and H′_{C2→B} respectively;
e). letting the pose relationship between the fisheye lens cameras C1 and C2 be H_{C1→C2}; then, by formula (24), the pose relationship matrix H_{C1→C2} can be solved with the hand-eye calibration method;
f). repeating steps a) to e) to obtain the pose relationships between the other adjacent cameras.
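For equation (25), AX = XB, a minimal solver sketch under the assumptions that at least two relative motions with non-parallel, non-zero rotation axes are available; the rotation is recovered by Kabsch alignment of the rotation axes and the translation by linear least squares. (OpenCV's cv2.calibrateHandEye offers several classical solvers for the same equation.)

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def rot_axis(R):
    """Unit rotation axis of R (assumes the rotation angle is not zero)."""
    v = Rot.from_matrix(R).as_rotvec()
    return v / np.linalg.norm(v)

def solve_AX_XB(As, Bs):
    """Solve A_i X = X B_i for the 4x4 pose X, given lists of 4x4 motions A_i, B_i.
    Rotation: the axes satisfy axis(A_i) = R_X axis(B_i) -> Kabsch alignment.
    Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked, least squares."""
    a = np.stack([rot_axis(A[:3, :3]) for A in As])
    b = np.stack([rot_axis(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(a.T @ b)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep det = +1
    R_X = U @ D @ Vt
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    v = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(M, v, rcond=None)
    X = np.eye(4)
    X[:3, :3] = R_X
    X[:3, 3] = t_X
    return X
```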
Embodiment 2 is an algorithm that obtains correction vectors by constructing independent constraint relationships and finds the final pose relationships by iteration; it is the core of the present invention. Embodiment 2 can be combined with any one of Embodiments 1-1, 1-2 and 1-3 to determine the spatial pose relationships of the cameras. The steps above solve the problem of calibrating the spatial pose relationship between any two fisheye lens cameras. Obviously, for a camera group composed of N (N >= 2) fisheye lens cameras, N−1 calibrations of the relative pose between adjacent cameras suffice to unify the pose relationships of all cameras. However, errors necessarily occur during calibration, so to improve the final calibration accuracy redundant calibration is generally adopted; that is, N(N−1) pairwise calibrations are performed and finally all pairwise calibration results are adjusted, so that all calibrated values become self-consistent.
In general, for an arbitrary camera group composed of n cameras there are n−1 independent relative pose parameters and n(n−1)/2 measurable relative pose parameters; among these measurable pose parameters there exist constraint relationships, of which only n(n−1)/2 − (n−1) are mutually independent. Let R_{Ci→Cj}, t_{Ci→Cj} (i≠j) denote the attitude rotation matrix and translation vector transforming camera coordinate system Ci into camera coordinate system Cj; constraint relationships of the following form obviously hold:
R_{Ci→Cj} = R_{Ck→Cj} R_{Ci→Ck},   t_{Ci→Cj} = R_{Ck→Cj} t_{Ci→Ck} + t_{Ck→Cj}   (26)
Figure 5 shows a camera group composed of four cameras; applying (26) to its camera triples yields its spatial pose constraints (27).
In a panoramic camera group, what most affects the stitching result is the relative attitude between the individual cameras, so the adjustment correction is derived below for the attitudes. In general, the relative pose relationship between any two cameras can be obtained by pairwise camera pose calibration, and through their different combinations these pose relationships satisfy a large number of inherent constraint relationships. To make full use of all the constraints while reducing the amount of computation as far as possible, the combined constraint relationships existing between the cameras of the multi-camera rig must first be examined. For simplicity, the four-camera structure shown in Fig. 5 is taken as an example; it has constraint relationships of the following form:
R_{C1→C3} = R_{C2→C3} R_{C1→C2},   R_{C2→C4} = R_{C3→C4} R_{C2→C3},   R_{C1→C4} = R_{C3→C4} R_{C2→C3} R_{C1→C2}, …   (28)
Since the number of independent pose relationships in the four-camera rig is n−1 = 3, the relative pose relationships between all cameras of the rig are completely determined by R_{C1→C2}, R_{C2→C3} and R_{C3→C4}. The number of pose relationships that can be calibrated pairwise in the four-camera rig is n(n−1)/2 = 6, namely R_{C1→C2}, R_{C1→C3}, R_{C1→C4}, R_{C2→C3}, R_{C2→C4} and R_{C3→C4}; since R_{Ci→Cj}·R_{Cj→Ci} = I (i≠j), R_{Cj→Ci} follows from R_{Ci→Cj}. The number of independent pose constraint relationships in the four-camera rig is 6 − 3 = 3, so any 3 of the relations in (28) form a set of independent constraints; take the first 3, and denote the Euler angles of the rotation matrix R_{Ci→Cj} by Aq_{Ci→Cj} (q = x, y, z); independent constraint relationships of the corresponding form then hold among the Euler angles (29).
Since the pairwise calibration necessarily carries errors, the calibrated Euler angles will not satisfy the above constraint relationships; the constraints produced by the redundant calibration can then be adjusted by the adjustment method to reduce the error. The specific steps are as follows:
a). first listing the independent pose constraint relationships according to the specific camera structure;
b). from the independent constraint relationships obtained, listing the correction equations of the quantities to be adjusted;
c). from the correction equations, forming the adjustment equation system and solving for the correction vector;
d). correcting the measured values with the correction vector, and iterating repeatedly until all the constraint relationships are satisfied.
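The iteration of steps a) to d) is a Gauss-Newton style adjustment. A schematic sketch, assuming only that the independent constraints have been expressed as a residual vector function g(x) of the stacked attitude parameters x (all names are illustrative):

```python
import numpy as np

def numerical_jacobian(g, x, eps=1e-6):
    """Finite-difference Jacobian of the residual vector g at x."""
    g0 = g(x)
    J = np.zeros((g0.size, x.size))
    for k in range(x.size):
        xp = x.copy()
        xp[k] += eps
        J[:, k] = (g(xp) - g0) / eps
    return J

def adjust(g, x0, tol=1e-8, max_iter=50):
    """Solve the correction equations, apply the correction vector, and
    substitute back until the correction falls below the threshold."""
    x = x0.copy()
    for _ in range(max_iter):
        J = numerical_jacobian(g, x)
        dx, *_ = np.linalg.lstsq(J, -g(x), rcond=None)  # correction vector
        x += dx
        if np.linalg.norm(dx) < tol:                    # below threshold?
            break
    return x
```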
Next, the image stitching method is described. Referring to Fig. 7, the spherical panorama is the panoramic description closest to the human eye model. In the panoramic stitching of this patent, the images obtained by the different fisheye cameras are first projected onto the model surfaces according to their preset imaging models, and the image on each fisheye lens camera imaging model is then re-projected onto the standard field-of-view sphere; after all cameras of the camera group have completed the re-projection process, panoramic stitching is performed, forming a panoramic image with little distortion and deformation.
The spherical re-projection model of this patent is shown in Fig. 6. Step H is explained as follows: C is the center of the fisheye lens camera image and the corresponding projection sphere radius is r; O is the center of the standard field-of-view sphere, with corresponding radius R. The imaging projection sphere of the fisheye lens camera and the standard field-of-view sphere are tangent at the image point corresponding to the center point C, the tangent point being T. Let P be a point on the fisheye lens camera image and Q the image point corresponding to P on the imaging projection surface of the fisheye lens camera; the image point Q on the imaging model sphere is re-projected onto the standard field-of-view sphere and recorded as the image point M.
To project the fisheye lens camera image accurately onto the standard field-of-view sphere, the imaging parameters of the fisheye lens camera must be calibrated in advance, for example the fisheye image center point coordinates C(Cx, Cy), the fisheye image imaging sphere radius r and the standard field-of-view sphere radius R.
Several common projection models of fisheye lens cameras are the stereographic projection, the equidistant projection, the equisolid-angle projection and the orthographic projection. According to the imaging model of the fisheye lens camera, the original planar circular image acquired by it can be projected from the imaging model plane onto its imaging model surface: an original image point P of the fisheye image is first projected, according to the imaging model, from the imaging model plane onto the imaging projection surface of the fisheye lens camera to form the first curved-surface image, the corresponding projected image point being Q; the angle θ between the projection ray of Q and the optical axis is determined by the distance |CP| of the point P from the optical center C and by the imaging model of the fisheye camera.
Projecting the first curved-surface image from the imaging model surface onto the standard field-of-view sphere yields the first spherical image, and from the first spherical image and the original planar circular image the mapping P→Q→M is established. However, since the area of the sphere is larger than the area of its equatorial plane, undefined "hole points" would appear on the standard field-of-view sphere; the mapping M→Q→P can instead be obtained in inverse-mapping order, which solves this problem.
The spherical re-projection is implemented as follows:
In step G, the spherical re-projection part of this patent takes the four-camera structure as an example, and the pose relationships between the fisheye cameras of the panoramic camera group are determined; the determination of the pose relationships is not itself an inventive point of this patent and may be carried out with existing pose determination methods, the parameters of the fisheye cameras being read into the system when it is constructed.
In step H-1, the imaging parameters of the fisheye lens cameras are calibrated, yielding the fisheye image center point coordinates C(Cx, Cy) and the fisheye image imaging sphere radius r. The standard field-of-view sphere radius R is related to the specific fisheye camera model and can be obtained through experimental verification; the specific data depend on the actual fisheye lens camera parameters. From these parameters the standard field-of-view sphere can be generated, after which the imaging model surface corresponding to each fisheye lens camera imaging model and the corresponding standard field-of-view sphere are constructed.
In step H-2, the original planar circular image acquired by each fisheye lens camera is projected onto its imaging model surface according to the projection model corresponding to the specific fisheye lens camera, for example an orthographic projection model.
In steps H-3 and H-4, using the pose relationships of the panoramic camera group calibrated above, the mapping M→Q→P is obtained in inverse-mapping order and the images of the four fisheye lens cameras are projected onto the standard field-of-view sphere.
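A minimal sketch of the inverse mapping M→Q→P for one camera, assuming an equidistant imaging model (|CP| = r·θ) and a latitude/longitude parameterization of the output; the real model and the parameters C(Cx, Cy) and r come from the calibration in step H-1:

```python
import numpy as np
import cv2

def sphere_to_fisheye_maps(out_w, out_h, cx, cy, r, fov=np.pi):
    """Inverse mapping M -> P: for each output pixel on a lat/long grid of the
    viewing sphere, find the source pixel in the fisheye image (equidistant model)."""
    lon, lat = np.meshgrid(np.linspace(-fov / 2, fov / 2, out_w),
                           np.linspace(-np.pi / 2, np.pi / 2, out_h))
    # Unit ray for each output pixel, optical axis along +z
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # angle from the optical axis
    phi = np.arctan2(y, x)                     # azimuth around the axis
    rho = r * theta                            # equidistant model: |CP| = r * theta
    map_x = (cx + rho * np.cos(phi)).astype(np.float32)
    map_y = (cy + rho * np.sin(phi)).astype(np.float32)
    return map_x, map_y

# Usage (hypothetical parameters): remap one fisheye frame onto the lat/long grid.
# fisheye = cv2.imread("fisheye.png")
# mx, my = sphere_to_fisheye_maps(1024, 512, cx=640, cy=640, r=600 / (np.pi / 2))
# sphere_img = cv2.remap(fisheye, mx, my, cv2.INTER_LINEAR)
```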
Next comes the panoramic stitching step, which stitches the images that have already been projected onto the same sphere into a spherical panorama with a large field of view; it comprises two steps, image registration and image fusion. Image stitching refers to the technique of aligning two or more images containing spatially overlapping information and combining the aligned images into a seamless, high-definition image.
Step J addresses the severe distortion and parallax that exist between the overlapping parts of the different fisheye images projected onto the same great sphere. This patent adopts triangulation correction: the different fisheye images of the overlapping part are first triangulated by the method shown in Fig. 7, and the feature points within each triangle are then matched. From the spatial distances between the feature points obtained by the matching, the displacements required for the triangle vertices after triangulation can be derived. At the same time, a cost function is defined that constrains the movement of the triangles to similarity-preserving changes; this strategy makes the finally stretched triangular regions visually smooth. For the triangulation, the sphere is divided into a triangular mesh as shown in Fig. 7; such division methods are widely applied at present and are not described further here, the division shown in Fig. 7 being preferred.
The basic strategy of the movement cost function is as follows. Consider, for example, a triangle ΔG1G2G3 containing a feature point P whose corresponding matching point Q is known; the Euclidean distance between P and Q is first shortened by translating ΔG1G2G3. At the same time, to avoid the image unevenness caused by over-correction, it is stipulated that when any vertex in space is displaced, the vertices directly adjacent to it should also move; the similarity of the triangle before and after the movement is measured to describe the moving distance and direction of the adjacent vertices. Finally, regions poor in feature points should remain in place as far as possible, so each vertex is given a movement weight, defined as a function of the distance to the feature points. When the final cost function is optimized, the result achieves feature-point matching while keeping the whole triangular mesh smooth.
Because of the distortion produced by fisheye lens cameras, the information near the image edge is less reliable than that at the image center; the layout of the triangular mesh is therefore further optimized into a structure that is dense at the periphery and sparse in the interior, as shown in Fig. 7. In this way, more vertex displacements are allowed to correct the image near the image edge, whereas in the central region the displacement is limited by the number of triangles and the image retains as much of the original information as possible. The per-triangle stretch itself can be carried out as in the sketch below.
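A minimal OpenCV sketch of warping one triangle to its displaced counterpart (the vertex displacements are assumed to come from the cost-function optimization described above):

```python
import numpy as np
import cv2

def warp_triangle(src_img, dst_img, tri_src, tri_dst):
    """Affine-warp the contents of tri_src in src_img onto tri_dst in dst_img."""
    tri_src = np.float32(tri_src)
    tri_dst = np.float32(tri_dst)
    r1 = cv2.boundingRect(tri_src)
    r2 = cv2.boundingRect(tri_dst)
    # Triangle coordinates relative to their bounding boxes
    t1 = tri_src - np.float32(r1[:2])
    t2 = tri_dst - np.float32(r2[:2])
    M = cv2.getAffineTransform(t1, t2)
    patch = src_img[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    warped = cv2.warpAffine(patch, M, (r2[2], r2[3]), flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2]), np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t2), 1)
    roi = dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    roi[mask > 0] = warped[mask > 0]
```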
In step J-3, after the images have been aligned and registered, traces are inevitably produced at the seams, affecting the final panoramic visual effect. This patent preferably processes the gray values of the images at the seam by the weighted average method, weighting first and then superposing and averaging. Assuming I1 and I2 are the images to be fused and I is the fused image, then:
I = ω1 I1 + ω2 I2   (30)
where ω1 + ω2 = 1, 0 < ω1, ω2 < 1 are the weights of the pixels in the overlapping region. After smoothing, the result is re-projected onto the sphere to obtain the final spherical panorama. The weights are configured in proportion to the actual situation; they are not limited here and may depend on the specific position.
In combination with the above, the stitching steps are implemented as follows:
In step J-1, the overlapping parts of the different fisheye images projected onto the same great sphere are triangulated as shown in Fig. 8; the triangulated images are then projected onto the tangent plane by the method shown in Fig. 9, and the feature points within each triangle are matched;
In step J-2, from the spatial distances between the feature points obtained in step J-1, the displacements required for the triangle vertices after triangulation are calculated; the movement of the triangles is constrained to similarity-preserving changes, and the triangles are then stretched;
In step J-3, the stretched triangular images are fused according to formula (30);
In step K, the fused overlapping parts are re-projected onto the sphere to obtain the final spherical panorama.
The above are only preferred embodiments of the present invention and do not thereby limit its implementations or scope of protection. Those skilled in the art should appreciate that all solutions obtained by equivalent substitutions and obvious variations made using the description and drawings of the present invention fall within the scope of protection of the present invention.

Claims (10)

  1. A panoramic video generation method based on spatial pose calibration of fisheye cameras, in which a number of rigidly connected fisheye lens cameras are provided, each camera being used to obtain an original planar circular image, characterized by comprising:
    Step A: performing spatial pose calibration on all adjacent cameras to determine their relative pose relationships;
    Step B: listing independent constraint relationships according to the relative pose relationships;
    Step C: listing the correction equations according to the independent constraint relationships, and adjusting the independent constraint relationships described in step B by the adjustment method to obtain the correction vector;
    Step D: judging whether the correction vector is smaller than a preset threshold; if so, performing step F; if not, obtaining a correction result from the correction vector;
    Step E: substituting the correction result into the independent constraint relationships and returning to step C;
    Step F: obtaining the spatial pose relationships from the independent constraint relationships;
    Step G: obtaining the parameters of each of the cameras and the spatial pose relationships;
    Step H: constructing the imaging model planes and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each of the cameras lying on the corresponding imaging model plane; projecting the planar circular images from the imaging model planes onto the standard field-of-view sphere to form first spherical images; and obtaining, from the image-point coordinates of the first spherical images, the mapping from the standard field-of-view sphere to the imaging model planes;
    Step I: according to the mapping, projecting the planar circular images acquired in real time by each of the cameras onto the same standard field-of-view sphere to form second spherical images;
    Step J: fusing the overlapping parts existing between the second spherical images corresponding to adjacent cameras to obtain fused images;
    Step K: stitching the fused images and the second spherical images to obtain a spherical panorama.
  2. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step A comprises:
    Step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
    Step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
    Step A-3: arranging a reference camera between the first camera and the second camera such that the field of view of the reference camera contains the first calibration plate and the second calibration plate, the reference camera imaging the first calibration plate and the second calibration plate simultaneously to obtain the pose relationship H_{A→C0} between the reference camera and the first calibration plate and the pose relationship H_{C0→B} between the reference camera and the second calibration plate;
    Step A-4: obtaining the pose relationship of the first calibration plate and the second calibration plate, H_{A→B} = H_{C0→B} H_{A→C0};
    Step A-5: obtaining the pose relationship H_{A→C1} between the first camera and the first calibration plate and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
    Step A-6: obtaining the pose relationship between the first camera and the second camera, H_{C1→C2} = H_{B→C2} H_{A→B} H_{C1→A};
    Step A-7: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
  3. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step A comprises:
    Step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
    Step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
    Step A-3: arranging a reference camera between the first camera and the second camera such that the field of view of the reference camera contains the first calibration plate and the second calibration plate;
    Step A-4: making the reference camera, the first camera and the second camera image the first calibration plate and the second calibration plate respectively by means of a synchronization trigger signal, obtaining the pose relationship H_{A→C0} between the reference camera and the first calibration plate, the pose relationship H_{C0→B} between the reference camera and the second calibration plate, the pose relationship H_{A→C1} between the first camera and the first calibration plate, and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
    Step A-5: obtaining the pose relationship between the first camera and the second camera, H_{C1→C2} = H_{B→C2} H_{C0→B} H_{A→C0} H_{C1→A};
    Step A-6: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
  4. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step A comprises:
    Step A-1: selecting any two adjacent cameras as the first camera and the second camera, and performing internal parameter calibration on the first camera and the second camera respectively;
    Step A-2: placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera;
    Step A-3: obtaining the pose relationship H_{A→C1} between the first camera and the first calibration plate and the pose relationship H_{B→C2} between the second camera and the second calibration plate;
    Step A-4: changing the poses of the first camera and the second camera, and obtaining anew the pose relationship H′_{A→C1} between the first camera and the first calibration plate and the pose relationship H′_{B→C2} between the second camera and the second calibration plate;
    Step A-5: obtaining the pose relationship H_{C1→C2} between the first camera and the second camera according to the formula (H′_{B→C2} H_{B→C2}^{-1}) H_{C1→C2} = H_{C1→C2} (H′_{A→C1} H_{A→C1}^{-1});
    Step A-6: returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras have been obtained.
  5. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step H comprises:
    Step H-1: constructing the imaging model planes, the imaging model surfaces and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on the corresponding imaging model plane;
    Step H-2: projecting the planar circular image from the imaging model plane onto the corresponding imaging model surface to form a first curved-surface image;
    Step H-3: re-projecting the first curved-surface image on the imaging model surface onto the standard field-of-view sphere to form a first spherical image;
    Step H-4: obtaining the mapping from the standard field-of-view sphere to the imaging model plane from the image-point coordinates of the corresponding planar circular image and of the first spherical image.
  6. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step J comprises:
    Step J-1: triangulating the overlapping part of each of the second spherical images of step I, projecting the triangulated overlapping parts of the second spherical images onto the tangent plane to form a number of triangular images, and computing the feature points within each of the triangular images;
    Step J-2: translating toward each other, on the tangent plane, two triangular images that belong to different second spherical images and have the same feature points, and stretching the translated triangular images to form two stretched images of equal size that coincide with each other;
    Step J-3: fusing the two stretched images of step J-2 to form a fused image, and re-projecting the fused image from the tangent plane onto the standard field-of-view sphere.
  7. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 6, characterized in that, in step J-2, the triangular image before stretching and the stretched image after stretching are similar triangles.
  8. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 6, characterized in that step J-3 further comprises smoothing the fused image.
  9. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 2, 3 or 4, characterized in that the parameter types calibrated in the internal parameter calibration include the equivalent focal length and the aberration coefficients of the camera.
  10. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 9, characterized in that the parameter types calibrated in the internal parameter calibration further include the imaging model and the principal point coordinates.
PCT/CN2016/103157 2016-10-25 2016-10-25 Panoramic video generation method based on spatial pose calibration of fisheye cameras WO2018076154A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/103157 WO2018076154A1 (zh) 2016-10-25 2016-10-25 Panoramic video generation method based on spatial pose calibration of fisheye cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/103157 WO2018076154A1 (zh) 2016-10-25 2016-10-25 Panoramic video generation method based on spatial pose calibration of fisheye cameras

Publications (1)

Publication Number Publication Date
WO2018076154A1 (zh)

Family

ID=62023000

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/103157 WO2018076154A1 (zh) 2016-10-25 2016-10-25 Panoramic video generation method based on spatial pose calibration of fisheye cameras

Country Status (1)

Country Link
WO (1) WO2018076154A1 (zh)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060047471A1 (en) * 2004-08-25 2006-03-02 Microsoft Corporation Relative range camera calibration
US20070106482A1 (en) * 2005-10-28 2007-05-10 Ali Zandifar Fast imaging system calibration
CN101577002A (zh) * 2009-06-16 2009-11-11 天津理工大学 Calibration method for a fisheye lens imaging system applied to target detection
CN102175221A (zh) * 2011-01-20 2011-09-07 上海杰图软件技术有限公司 Vehicle-mounted mobile photogrammetric system based on a fisheye lens
CN102693539A (zh) * 2012-03-13 2012-09-26 夏东 Wide-baseline rapid three-dimensional calibration method for intelligent monitoring systems
CN103077524A (zh) * 2013-01-25 2013-05-01 福州大学 Calibration method for a hybrid vision system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16919964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16919964

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.12.2019)