WO2018076154A1 - A panoramic video generation method based on spatial pose calibration of fisheye cameras - Google Patents
- Publication number: WO2018076154A1 (PCT/CN2016/103157)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- calibration
- image
- relationship
- calibration plate
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Definitions
- the present invention relates to the field of video image processing technologies, and in particular, to a method for generating a spherical re-projection panoramic video based on a fisheye lens camera group.
- Panoramic video generation is a technique in which multiple cameras capture images from different positions and the images are then combined through image stitching to produce a panoramic video. The present invention is concerned with panoramic video generated by fisheye lens cameras.
- The reliability of the panoramic video image mainly depends on two steps: 1. calibration of the relative spatial poses of the camera group; 2. the method of stitching the images acquired by the fisheye lens cameras into a panoramic image. Both points have a large impact on the reliability of the final panoramic video image.
- The relative poses of the camera group need to be calibrated because the spatial relationship between the cameras is fixed, yet the internal and external parameters and orientations of the different cameras all differ. To obtain panoramic video with a minimum number of cameras, the cameras must face different directions; and to accurately project the images obtained by the different cameras into a common coordinate system, it is necessary to calibrate the internal parameter matrix of each camera and the pose relationship between the cameras.
- Traditional dual-camera or multi-camera calibration generally requires that the cameras to be calibrated simultaneously observe a common target, such as a calibration plate; the pose of each camera can then be expressed in the target coordinate system, from which the relative poses between the cameras to be calibrated are obtained.
- The above calibration method is not suitable here: if a larger field of view is to be covered with fewer cameras, the overlapping image portion is bound to be small, so the calibration error will be large. It is very difficult to calibrate the pose relationship between the cameras within such a limited common field of view using a calibration target, in which each camera may be able to capture only a single picture.
- Using a calibration block is very inconvenient, and when using a calibration plate it is necessary to acquire images of the plate in at least three different poses, which is very difficult within a limited field of view. There is also image distortion: wide-angle lenses generally distort strongly at the image edges, and the common field of view of adjacent cameras lies exactly at the edge of each wide-angle camera's field of view. The accuracy of traditional calibration methods therefore cannot be guaranteed.
- Existing panoramic stitching techniques, such as the panorama mode of smart phones, mostly use a single camera to acquire the images, which limits the shooting area and easily produces large parallax during actual shooting.
- An object of the present invention is to provide a method for generating a panoramic video based on spatial orientation calibration of a fisheye camera, which solves the above technical problems;
- A panoramic video generation method based on spatial pose calibration of fisheye cameras is provided, together with a plurality of rigidly fixed fisheye lens cameras, each camera being used to obtain an original planar circular image, the method including:
- Step A performing spatial pose calibration on all adjacent cameras to determine a relative pose relationship
- Step B listing independent constraint relationships according to the relative pose relationship
- Step C listing the correction equations according to the independent constraint relationships, and adjusting the independent constraint relationships described in step B by the adjustment method to obtain a correction vector;
- Step D determining whether the correction vector is smaller than a preset threshold; if so, performing step F; if not, obtaining a correction result from the correction vector;
- Step E substituting the correction result into the independent constraint relationship, and returning to the step C,
- Step F obtaining a spatial pose relationship according to the independent constraint relationship
- Step G obtaining parameters of each of the cameras and the spatial pose relationship
- Step H constructing an imaging model plane and a standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on its corresponding imaging model plane; projecting the planar circular image from the imaging model plane to the standard field-of-view sphere to form a first spherical image; and obtaining, from the image point coordinates of the first spherical image, the mapping relationship from the standard field-of-view sphere to the imaging model plane;
- Step I according to the mapping relationship, projecting the planar circular images collected by each camera in real time onto the same standard field-of-view sphere to form second spherical images;
- Step J merging overlapping portions existing between the second spherical images corresponding to the adjacent cameras to obtain a fused image
- Step K splicing the fused image and the second spherical image to obtain a spherical panoramic view.
- The relative pose relationship between two adjacent cameras can be determined pairwise, but pose relationships determined in this way are bound to contain errors, which in practice have a considerable impact. The present invention therefore uses multiple groups of photos to construct independent constraint relationships between the cameras, lists the correction equations, and obtains a correction result by adjustment. Through repeated iterations, the correction result is substituted back into the constraint relationships until all of them are satisfied, yielding a high-precision spatial pose relationship that meets the needs of image stitching.
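The iterate-until-satisfied adjustment of steps C–F can be sketched as follows. This is an illustrative least-squares adjustment on scalar yaw angles standing in for the full pose matrices; the measured values and the single loop-closure constraint are hypothetical, not data from the patent.

```python
import numpy as np

# Hedged sketch of steps C-F: adjust redundant measurements so that the
# loop-closure constraint is satisfied. Scalar yaw angles stand in for the
# patent's pose matrices; the constraint is t12 + t23 + t34 - t14 = 0.
def adjust(measured, threshold=1e-10, max_iter=100):
    x = np.asarray(measured, dtype=float)
    A = np.array([[1.0, 1.0, 1.0, -1.0]])      # constraint Jacobian
    for _ in range(max_iter):
        w = A @ x                               # misclosure (residual)
        # correction vector of the least-squares adjustment
        v = -A.T @ np.linalg.solve(A @ A.T, w)
        if np.linalg.norm(v) < threshold:       # step D: corrections small enough
            break
        x = x + v.ravel()                       # step E: substitute the corrections
    return x

# hypothetical pairwise calibrations with a closure error of 0.05 rad
angles = adjust([0.90, 0.95, 1.00, 2.80])
```

After the adjustment the chained angles satisfy the constraint exactly, which is the stopping condition of step D.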
- step A includes
- step A-1 the two adjacent cameras are respectively used as the first camera and the second camera, and the first camera and the second camera are respectively subjected to internal parameter calibration;
- Step A-2 placing a first calibration plate in the field of view of the first camera, and placing a second calibration plate in the field of view of the second camera;
- Step A-3 setting a reference camera between the first camera and the second camera so that the field of view of the reference camera includes the first calibration plate and the second calibration plate; the reference camera images the first calibration plate and the second calibration plate simultaneously, yielding the pose relationship H A→C0 between the reference camera and the first calibration plate and the pose relationship H C0→B between the reference camera and the second calibration plate;
- Step A-4 obtaining the pose relationship between the two calibration plates, H A→B = H C0→B H A→C0 ;
- Step A-5 obtaining the pose relationship H A→C1 between the first camera and the first calibration plate and the pose relationship H B→C2 between the second camera and the second calibration plate;
- Step A-6 obtaining the pose relationship between the first camera and the second camera, H C1→C2 = H B→C2 H A→B H C1→A ;
- Step A-7 returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras are obtained.
- The present invention determines the relative pose relationship by introducing a reference camera, which enlarges the usable overlapping field of view and thereby ensures the accuracy of the relative pose relationship.
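A minimal sketch of the transform chain used by this reference-camera method, with hypothetical 4×4 homogeneous pose matrices in place of real Zhang-method calibration output:

```python
import numpy as np

# Hedged sketch of steps A-3..A-6: the reference camera C0 supplies
# H_A->B = H_C0->B @ H_A->C0, and the chain gives
# H_C1->C2 = H_B->C2 @ H_A->B @ H_C1->A. All numeric poses are hypothetical.
def make_H(axis_angle, t):
    # Rodrigues' formula for the rotation, assembled into a 4x4 homogeneous matrix
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = np.asarray(axis_angle) / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# hypothetical calibration results (stand-ins for Zhang-method outputs)
H_A_C0 = make_H([0.1, 0, 0], [0.2, 0, 0])        # plate A   -> reference camera C0
H_C0_B = make_H([0, 0.2, 0], [0, 0.1, 0])        # C0        -> plate B
H_C1_A = make_H([0, 0, 0.3], [0, 0, 0.5])        # camera C1 -> plate A
H_B_C2 = make_H([0.05, 0.05, 0], [0.1, 0.1, 0])  # plate B   -> camera C2

H_A_B = H_C0_B @ H_A_C0                          # step A-4
H_C1_C2 = H_B_C2 @ H_A_B @ H_C1_A                # step A-6
```

The composed result is itself a valid rigid transform (orthonormal rotation block, homogeneous bottom row), which is what the later adjustment steps operate on.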
- step A includes
- step A-1 the two adjacent cameras are respectively used as the first camera and the second camera, and the first camera and the second camera are respectively subjected to internal parameter calibration;
- Step A-2 placing a first calibration plate in the field of view of the first camera, and placing a second calibration plate in the field of view of the second camera;
- Step A-3 a reference camera is disposed between the first camera and the second camera, so that the first calibration plate and the second calibration plate are included in the field of view of the reference camera;
- Step A-4 the reference camera, the first camera, and the second camera image the first calibration plate and the second calibration plate simultaneously via a synchronization trigger signal, yielding the pose relationship H A→C0 between the reference camera and the first calibration plate, the pose relationship H C0→B between the reference camera and the second calibration plate, the pose relationship H A→C1 between the first camera and the first calibration plate, and the pose relationship H B→C2 between the second camera and the second calibration plate;
- Step A-5 obtaining the pose relationship between the first camera and the second camera, H C1→C2 = H B→C2 H C0→B H A→C0 H C1→A ;
- Step A-6 returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras are obtained.
- The difference from the previous improvement is that the pose relationships are acquired by synchronous triggering, which eliminates errors caused by changes in ambient light.
- step A includes
- Step A-1 optionally, two adjacent cameras are respectively used as the first camera and the second camera, and respectively perform internal parameter calibration on the first camera and the second camera;
- Step A-2 placing a first calibration plate in the field of view of the first camera, and placing a second calibration plate in the field of view of the second camera;
- Step A-3 obtaining a pose relationship H A ⁇ C1 between the first camera and the first calibration plate and a pose relationship H B ⁇ C2 between the second camera and the second calibration plate;
- Step A-4 changing the poses of the first camera and the second camera, and obtaining anew the pose relationship H' A→C1 between the first camera and the first calibration plate and the pose relationship H' B→C2 between the second camera and the second calibration plate;
- Step A-5 using the invariance of the pose relationship between the two calibration plates across the two configurations to solve for the pose relationship H C1→C2 between the first camera and the second camera;
- step A-6 the process returns to step A-1 until the relative pose relationship of all the two adjacent cameras is obtained.
- This improvement requires no additional reference camera; the relative pose relationship can be solved simply by rotating the two cameras.
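The invariance that this third method exploits can be illustrated with synthetic data: the plate-to-plate transform reconstructed from either rig pose is identical, and that equality is the constraint from which H C1→C2 is solved. The patent's closed-form solution is not reproduced in the source, so this sketch only verifies the constraint; all transforms are hypothetical and restricted to in-plane rotations for brevity.

```python
import numpy as np

def H(rz_deg, t):
    # homogeneous transform with a rotation about z (enough for a sketch)
    a = np.deg2rad(rz_deg)
    M = np.eye(4)
    M[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    M[:3, 3] = t
    return M

H_C1_C2 = H(25, [0.3, 0, 0])     # fixed rig: the unknown that step A-5 solves for
H_A_B   = H(40, [1.0, 0.2, 0])   # fixed plates: invariant during calibration

def simulate_pose(H_A_C1):
    # plate-B measurement follows from the invariant H_A_B = H_C2_B H_C1_C2 H_A_C1
    H_C2_B = H_A_B @ np.linalg.inv(H_C1_C2 @ H_A_C1)
    return H_A_C1, H_C2_B

# two different rig poses (step A-4 re-measures these quantities)
H_A_C1_1, H_C2_B_1 = simulate_pose(H(10, [0.1, 0.2, 0]))
H_A_C1_2, H_C2_B_2 = simulate_pose(H(-35, [0.4, -0.1, 0]))

lhs = H_C2_B_1 @ H_C1_C2 @ H_A_C1_1
rhs = H_C2_B_2 @ H_C1_C2 @ H_A_C1_2   # both chains equal the invariant H_A_B
```

Equating the two chains eliminates the unknown plate-to-plate transform, leaving an equation in H C1→C2 alone.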
- step H includes
- Step H-1 constructing the imaging model plane, the imaging model surface, and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on its corresponding imaging model plane;
- Step H-2 projecting the original planar circular image from the imaging model plane to the corresponding imaging model curved surface to form a first curved surface image
- Step H-3 reprojecting the first curved surface image on the curved surface of the imaging model onto the spherical surface of the standard field of view to form a first spherical image
- Step H-4 determining the mapping relationship from the standard field-of-view sphere to the imaging model plane according to the image point coordinates of the corresponding planar circular image and of the first spherical image.
- step J includes
- Step J-1 triangulating the overlapping portion of each of the second spherical images in the step I, and projecting the overlapping portions of the second spherical images after the triangulation on the tangent plane to form a plurality of triangular images, Calculating feature points within each of the triangular images;
- Step J-2 translating toward each other, on the tangent plane, two triangular images that share the same feature points but belong to different second spherical images, and stretching the translated triangular images to form two stretched images of equal size that coincide with each other;
- step J-3 the two stretched images in step J-2 are fused to form a fused image, and the fused image is re-projected from the tangent plane to the standard field of view sphere.
- the triangular image before stretching and the stretched image after stretching are similar triangles.
- the method further includes performing smoothing processing on the fused image.
- The parameter types of the internal parameter calibration include the equivalent focal length and the aberration coefficients of the camera.
- The parameter types of the internal parameter calibration further include the imaging model and the principal point coordinates.
- The pose relationships determined in this way are more accurate, and fewer cameras are required.
- The fisheye cameras, with their spatial poses determined, cover 360 degrees horizontally and 180 degrees vertically without blind spots, obtaining large-scale clear images from multiple angles.
- The multi-angle images are stitched into a spherical panorama with a 360-degree horizontal viewing angle and a 180-degree vertical viewing angle.
- The spherical panorama information acquired by multiple cameras is richer.
- Figure 1 is a schematic view showing the structure of the camera
- Figure 2 is the calibration method for the pose relationship between the fisheye lens cameras in Embodiment 1-1;
- Figure 3 is the calibration method for the pose relationship between the fisheye lens cameras in Embodiment 1-2;
- Figure 4 is the calibration method for the pose relationship between the fisheye lens cameras in Embodiment 1-3;
- Figure 5 is the adjustment of the pose calibration data of the fisheye lens camera group;
- Figure 6 is a schematic diagram of the re-projection of a fisheye lens camera;
- Figure 7 is a schematic diagram of the triangulation of a fisheye lens camera image;
- Figure 8 is a schematic view of the overlapping fields of view of adjacent fisheye lens cameras;
- Figure 9 is a schematic plan view of the overlapping fields of view of adjacent fisheye lens cameras;
- Figure 10 is a flow chart of generating a panoramic image from the fisheye lens camera group.
- The method for determining the spatial pose relationship in step A is described below.
- For a spatial point, let P W be its coordinates in the world coordinate system W and P C its coordinates in the camera coordinate system C; then P C = R W→C P W + t W→C , where R W→C is the rotation matrix from the world coordinate system W to the camera coordinate system C and t W→C is the translation vector from W to C.
- Conversely, R C→W = (R W→C)^T is the rotation matrix from the camera coordinate system C to the world coordinate system W and t C→W = -(R W→C)^T t W→C is the translation vector from C to W, so the relation can be rewritten as P W = R C→W P C + t C→W .
- The coordinates can be extended to homogeneous coordinates, so that the transformation becomes a single matrix product: [P C ; 1] = H W→C [P W ; 1], where H W→C is composed of R W→C and t W→C .
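A short numeric sketch of these relations (rotation, translation, and their homogeneous 4×4 form), using an arbitrary example rotation:

```python
import numpy as np

# Sketch of the world->camera transform P_C = R P_W + t, its homogeneous 4x4
# form, and the inverse relations R_C->W = R^T, t_C->W = -R^T t.
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about z
t = np.array([1.0, 2.0, 3.0])

H_W_C = np.eye(4)
H_W_C[:3, :3] = R
H_W_C[:3, 3] = t

P_W = np.array([0.5, -0.5, 1.0])
P_C = R @ P_W + t                        # the non-homogeneous form
P_C_h = H_W_C @ np.append(P_W, 1.0)      # the homogeneous form: one matrix product

R_C_W = R.T                              # inverse rotation
t_C_W = -R.T @ t                         # inverse translation
P_W_back = R_C_W @ P_C + t_C_W           # recovers the world coordinates
```

Both forms give the same camera coordinates, and the inverse relations map them back to the original world point.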
- The first task is the calibration of the spatial pose of adjacent cameras. Without loss of generality, the following explanation takes the calibration of the pose relationship between camera C1 and camera C2 as an example; the calibration of the spatial pose relationship between any other pair of adjacent cameras proceeds in exactly the same way.
- Step A performing spatial pose calibration on all adjacent cameras
- Step B listing independent constraint relationships according to relative pose relationships of adjacent cameras
- Step C listing the correction equations according to the independent constraint relationships, and obtaining the correction vector by adjusting, with the adjustment method, the constraints generated by the redundant calibration;
- Step D correcting the measured values according to the correction vector;
- Step E substituting the correction result into the independent constraint relationships of step B, and repeating steps C through E until the correction vector is smaller than the preset threshold.
- Embodiment 1-1 determines the pose relationship of adjacent cameras on the basis of the above. As shown in FIG. 2, C1 and C2 are adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated, and A and B are calibration plates placed in their respective fields of view.
- A high-precision camera C0 is also introduced; C0 is used to calibrate the pose relationship between A and B.
- For any spatial point P A in the coordinate system of calibration plate A, its position in the coordinate system of camera C1 is P C1 = H A→C1 P A .
- The position of the point P C1 , expressed in the camera C1 coordinate system, in the camera C2 coordinate system is P C2 = H C1→C2 P C1 .
- The position of the point P C2 , expressed in the camera C2 coordinate system, in the calibration plate B coordinate system is P B = H C2→B P C2 .
- Combining these gives H C1→C2 = H B→C2 H A→B H C1→A (13)
- H B→C2 and H C1→A are the pose relationship between camera C2 and calibration plate B and between camera C1 and calibration plate A, which can be obtained by Zhang Zhengyou's method; the quantity H A→B , the pose relationship between calibration plate A and calibration plate B, is an unknown quantity, which is calibrated with the camera C0.
- Embodiment 1-2 is similar to Embodiment 1-1. Again let C1 and C2 be adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated; calibration method 2 likewise places a calibration plate A and a calibration plate B in the fields of view of C1 and C2 and calibrates each separately with Zhang Zhengyou's method, as shown in Figure 3.
- The difference from method 1 is that the intermediate quantity H A→B is no longer calibrated separately by the camera C0; instead, H C1→A , H B→C2 , H C0→B , and H A→C0 are acquired directly and simultaneously, which requires the camera C0 to be synchronized with the fisheye lens cameras C1 and C2.
- The fisheye lens camera C1, the fisheye lens camera C2, and the high-precision camera C0 are connected by a synchronization trigger to ensure that the three cameras acquire images simultaneously.
- Embodiment 1-3: again let C1 and C2 be adjacent fisheye lens cameras whose spatial pose relationship is to be calibrated. Unlike the first two calibration methods, which require an additional high-precision camera C0 to obtain, directly or indirectly, the pose relationship between calibration plates A and B, this calibration method only requires each camera in the group to have a calibration plate in its field of view, as shown in Figure 4. Assume that calibration plate A lies in the field of view of camera C1 and calibration plate B in the field of view of camera C2, and that the relative pose between A and B is unchanged throughout the calibration process.
- As before, for any spatial point P A in the coordinate system of calibration plate A, its position in the coordinate system of camera C1 is P C1 = H A→C1 P A ; the position of P C1 in the camera C2 coordinate system is P C2 = H C1→C2 P C1 ; and the position of P C2 in the calibration plate B coordinate system is P B = H C2→B P C2 .
- H A→B and H C1→C2 , the pose relationships between the calibration plates A and B and between the cameras C1 and C2 respectively, are invariants. H C2→B and H A→C1 , the pose relationships between camera C2 and calibration plate B and between calibration plate A and camera C1, are variable quantities that change when the pose of the camera unit changes; that is, in another configuration the same invariants are expressed through a different pair of measured quantities.
- The specific implementation of this calibration is as follows:
- Embodiment 2 is an algorithm that finds the final pose relationships iteratively by constructing independent constraint relationships, and is the core of the present invention.
- Embodiment 2 can be combined with any of Embodiments 1-1, 1-2, and 1-3.
- The spatial pose relationships of the cameras are thereby determined.
- A camera unit consisting of four cameras is subject to spatial pose constraints:
- What most influences the stitching quality is the relative attitude between the cameras.
- The relative pose relationship between any two cameras can be obtained by pairwise camera pose calibration. Through different combinations, these pose relationships carry a large number of inherent constraint relationships; to make full use of all of them while minimizing the amount of computation, the combined constraint relationships between the cameras of the multi-camera group must first be worked out.
- The following takes the four-camera structure shown in FIG. 5 as an example, which has the constraint relationships listed below:
- The relative poses between the cameras of the four-camera group can be completely determined by R C1→C2 , R C2→C3 , and R C3→C4 .
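The redundancy in such a constraint set can be illustrated as follows. This is a hedged sketch in which a directly measured R C1→C4 (hypothetical values, rotations about a single axis for brevity) is compared with the chain built from the independent set, giving the loop-closure residual that the adjustment then distributes over the measurements:

```python
import numpy as np

# Sketch of the four-camera constraint set: all relative rotations are
# determined by R_C1->C2, R_C2->C3, R_C3->C4; a directly measured R_C1->C4
# is redundant and supplies a loop-closure constraint for the adjustment.
def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

R12, R23, R34 = rot_z(90), rot_z(88), rot_z(91)   # hypothetical pairwise results
R14_chained = R34 @ R23 @ R12                     # derived from the independent set
R14_measured = rot_z(270)                         # hypothetical direct calibration

# closure residual: the identity rotation when the measurements are consistent
E = R14_measured @ R14_chained.T
residual_deg = np.rad2deg(np.arccos((np.trace(E) - 1) / 2))
```

Here the chain sums to 269° against a measured 270°, so the residual rotation angle is 1°; the adjustment of steps C–F spreads such misclosures over all measurements.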
- The adjustment proceeds as follows: the measured values are corrected by the correction vector, iterating until all the constraint relationships are satisfied.
- the spherical panorama is the closest panoramic description to the human eye model.
- The panoramic stitching of this patent first projects the image obtained by each fisheye camera onto its model surface according to the camera's preset imaging model, then re-projects the image from the fisheye imaging model onto the standard field-of-view sphere. Once all cameras in the group have completed this re-projection, panoramic stitching is performed to form a panoramic image with little distortion.
- Step H is explained as follows.
- C is the center of the fisheye lens camera image.
- The radius of the corresponding projection sphere is r.
- O is the center of the standard field-of-view sphere, whose radius is R.
- The fisheye camera's projection sphere and the standard field-of-view sphere are tangent at the point corresponding to the image center C; the tangent point is T.
- P be a point on the image of the fisheye lens camera
- Q is the image point corresponding to P on the projection surface of the fisheye lens camera
- The image point Q on the imaging model sphere is re-projected onto the standard field-of-view sphere, where it is recorded as the image point M.
- Common projection models for fisheye lens cameras include stereographic projection, equidistant projection, equisolid-angle projection, and orthogonal projection.
- the angle ⁇ is determined by the distance of the point P from the point C of the optical center
- Projecting from the imaging model surface to the standard field-of-view sphere turns the first curved-surface image into the first spherical image, and the mapping P → Q → M is established between the original planar circular image and the first spherical image.
- The inverse mapping M → Q → P is then obtained by applying the mapping in reverse order.
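The P → Q → M mapping and its inverse can be sketched for one concrete model. This assumes an equidistant projection (one of the models listed above) and hypothetical calibration values Cx, Cy, r, R, not the parameters of any particular camera in the patent:

```python
import numpy as np

# Hedged sketch of P -> Q -> M for an equidistant fisheye model. P is an image
# point, Q the point on the camera's projection sphere of radius r, M the point
# on the standard field-of-view sphere of radius R. All constants hypothetical.
Cx, Cy, r, R = 320.0, 240.0, 200.0, 1.0

def image_to_sphere(px, py):
    dx, dy = px - Cx, py - Cy
    d = np.hypot(dx, dy)
    theta = d / r                        # equidistant: angle grows linearly with d
    phi = np.arctan2(dy, dx)
    # unit viewing direction, z along the optical axis
    v = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return r * v, R * v                  # Q on the model sphere, M on the standard sphere

def sphere_to_image(M):
    v = M / np.linalg.norm(M)            # inverse mapping M -> Q -> P
    theta = np.arccos(np.clip(v[2], -1, 1))
    phi = np.arctan2(v[1], v[0])
    d = theta * r
    return Cx + d * np.cos(phi), Cy + d * np.sin(phi)

Q, M = image_to_sphere(420.0, 240.0)     # a point 100 px right of the center
px, py = sphere_to_image(M)              # round-trips back to the same pixel
```

The inverse function is exactly the lookup needed in step I: for each point on the standard sphere, it returns the source pixel in the planar circular image.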
- the spherical re-projection is embodied as follows:
- The spherical re-projection part of this patent takes a four-camera structure as an example. The pose relationships between the fisheye cameras of the panoramic camera group are taken as already determined; their determination is not itself part of this step and may use any existing pose determination method. The parameters of each fisheye camera are then read into the system.
- Step H-1 the imaging parameters of each fisheye lens camera are calibrated to obtain the fisheye image center point coordinates C(Cx, Cy), the fisheye imaging sphere radius r, and the standard field-of-view sphere radius R. These depend on the specific fisheye camera model and can be obtained through experimental verification; the specific values depend on the actual fisheye lens camera parameters.
- A standard field-of-view sphere can then be generated, together with the imaging model surface corresponding to each fisheye lens camera's imaging model and the corresponding standard field-of-view sphere.
- Step H-2 the original planar circular image acquired by each fisheye lens camera is projected onto its imaging model surface according to the projection model of that specific camera, such as an orthogonal projection model;
- the mapping relationship M → Q → P is obtained in the order of the inverse mapping, and the images of the four fisheye lens cameras are projected onto the standard field-of-view sphere.
- The next step is panoramic stitching: the images that have already been projected onto the same sphere are stitched into a spherical panorama with a large field of view, comprising the two stages of image registration and image fusion.
- Image stitching refers to the process of aligning two or more images that contain spatially overlapping information and combining the aligned images into a seamless, high-definition image.
- The function of step J is to handle the severe distortion and parallax between the overlapping parts of the different fisheye images projected onto the same sphere.
- This patent adopts triangulation-based correction: the overlapping parts of the different fisheye images are first triangulated by the method shown in Fig. 7, and the feature points within each triangle are then matched. From the spatial distances between the matched feature points, the displacement required for each triangle vertex after triangulation can be obtained. At the same time, a cost function is defined that constrains the movement of each triangle to follow a similarity transformation, which makes the finally stretched triangular regions visually smooth. The triangulation, that is, the meshing of the region into triangles, is a widely applied technique and is not described in detail here; the division method shown in FIG. 7 is preferred.
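A much-simplified sketch of the triangulate-and-translate idea: a fixed two-triangle mesh stands in for the full triangulation of Fig. 7, and each triangle containing a matched feature is rigidly translated by half the feature displacement, so both images move toward each other. The patent's similarity-preserving cost function and vertex weights are omitted here, and all coordinates are hypothetical.

```python
import numpy as np

# Hedged sketch of steps J-1/J-2 on the tangent plane: triangulate the overlap
# region (here, a quad split into two triangles) and translate the triangle
# containing a matched feature point by half the feature displacement.
verts = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 20.0], [0.0, 20.0]])
triangles = np.array([[0, 1, 2], [0, 2, 3]])   # a minimal fixed triangulation

def barycentric_inside(tri_pts, p):
    # solve p = A + b*(B - A) + c*(C - A); inside iff b, c >= 0 and b + c <= 1
    T = np.column_stack([tri_pts[1] - tri_pts[0], tri_pts[2] - tri_pts[0]])
    b, c = np.linalg.solve(T, p - tri_pts[0])
    return b >= 0 and c >= 0 and (b + c) <= 1

feat1 = np.array([14.0, 6.0])                  # feature in image 1 (hypothetical)
feat2 = feat1 + np.array([1.5, -0.5])          # its match in image 2

disp = (feat2 - feat1) / 2.0                   # each image moves half-way
moved = verts.copy()
for tri in triangles:
    if barycentric_inside(verts[tri], feat1):
        moved[tri] += disp                     # translate that triangle's vertices
```

Only the triangle that actually contains the feature is displaced; in the full method the stretch is additionally constrained to keep the moved triangle similar to the original.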
- The basic strategy of the movement cost function is as follows: suppose the triangle ΔG 1 G 2 G 3 contains a feature point P with known matching point Q; ΔG 1 G 2 G 3 is first translated so as to shorten the Euclidean distance between P and Q.
- The vertices directly adjacent to a moved vertex should also move.
- We measure the similarity of each triangle before and after the movement to determine the moving distance and direction of each vertex.
- Since we want regions poor in feature points to move as little as possible, each vertex is given a movement weight, defined as a function of its distance to the feature points.
- Step J-3: after the images are aligned and registered, traces are inevitably produced at the seams, which affects the final panoramic visual effect.
- The weighted-average method of this patent weights the image gray values at the seam and then superimposes and averages them. Assuming I 1 and I 2 are the images to be fused and I is the fused image, then I = w 1 I 1 + w 2 I 2 , with w 1 + w 2 = 1 (30).
- The weights are configured according to the actual proportions; they are not limited here and may depend on the specific location.
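A sketch of this weighted-average fusion. The linear ramp across the seam is one possible location-dependent weight choice (an assumption, since the patent leaves the weights open):

```python
import numpy as np

# Hedged sketch of the weighted-average fusion of formula (30): gray values of
# the two overlap strips are blended with per-column weights w1 + w2 = 1, the
# weight of I2 growing linearly across the seam.
def fuse(I1, I2):
    h, w = I1.shape
    w2 = np.linspace(0.0, 1.0, w)      # weight of I2 grows across the overlap
    w1 = 1.0 - w2                      # weights sum to 1 at every pixel
    return w1 * I1 + w2 * I2

I1 = np.full((4, 5), 100.0)            # overlap strip from the first image
I2 = np.full((4, 5), 200.0)            # same strip from the second image
fused = fuse(I1, I2)                   # left edge equals I1, right edge equals I2
```

Each row transitions smoothly from the first image to the second, which is what removes the visible trace at the seam.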
- The specific implementation of step J is as follows:
- Step J-1 the overlapping portions of the different fisheye images projected onto the same sphere are triangulated as shown in FIG. 8, the triangulated images are projected onto the tangent plane by the method shown in FIG. 9, and the feature points within each triangle are matched;
- Step J-2 from the spatial distances between the matched feature points of step J-1, the displacements required for the triangle vertices are calculated, the movement of each triangle is constrained to follow a similarity transformation, and the triangles are then stretched;
- step J-3 the triangular image after stretching is fused according to formula (30);
- Step K the fused overlapping portions are re-projected onto the sphere to obtain the final spherical panorama.
Abstract
Description
Claims (10)
- 1. A panoramic video generation method based on spatial pose calibration of fisheye cameras, providing a plurality of rigidly fixed fisheye lens cameras, each camera being used to obtain an original planar circular image, characterized by including: step A, performing spatial pose calibration on all adjacent cameras to determine relative pose relationships; step B, listing independent constraint relationships according to the relative pose relationships; step C, listing correction equations according to the independent constraint relationships, and adjusting the independent constraint relationships described in step B by the adjustment method to obtain a correction vector; step D, determining whether the correction vector is smaller than a preset threshold; if so, performing step F; if not, obtaining a correction result from the correction vector; step E, substituting the correction result into the independent constraint relationships and returning to step C; step F, obtaining the spatial pose relationships according to the independent constraint relationships; step G, obtaining the parameters of each camera and the spatial pose relationships; step H, constructing an imaging model plane and a standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on its corresponding imaging model plane, projecting the planar circular image from the imaging model plane to the standard field-of-view sphere to form a first spherical image, and obtaining, from the image point coordinates of the first spherical image, the mapping relationship from the standard field-of-view sphere to the imaging model plane; step I, according to the mapping relationship, projecting the planar circular images collected by each camera in real time onto the same standard field-of-view sphere to form second spherical images; step J, fusing the overlapping portions existing between the second spherical images corresponding to adjacent cameras to obtain fused images; step K, stitching the fused images and the second spherical images to obtain a spherical panorama.
- 2. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step A includes: step A-1, optionally taking two adjacent cameras as a first camera and a second camera, and performing internal parameter calibration on the first camera and the second camera respectively; step A-2, placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera; step A-3, setting a reference camera between the first camera and the second camera so that the field of view of the reference camera includes the first calibration plate and the second calibration plate, the reference camera imaging the first calibration plate and the second calibration plate simultaneously to obtain the pose relationship HA→C0 between the reference camera and the first calibration plate and the pose relationship HC0→B between the reference camera and the second calibration plate; step A-4, obtaining the pose relationship between the first calibration plate and the second calibration plate, HA→B = HC0→B HA→C0; step A-5, obtaining the pose relationship HA→C1 between the first camera and the first calibration plate and the pose relationship HB→C2 between the second camera and the second calibration plate; step A-6, obtaining the pose relationship between the first camera and the second camera, HC1→C2 = HB→C2 HA→B HC1→A; step A-7, returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras are obtained.
- 3. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step A includes: step A-1, optionally taking two adjacent cameras as a first camera and a second camera, and performing internal parameter calibration on the first camera and the second camera respectively; step A-2, placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera; step A-3, setting a reference camera between the first camera and the second camera so that the field of view of the reference camera includes the first calibration plate and the second calibration plate; step A-4, imaging the first calibration plate and the second calibration plate with the reference camera, the first camera, and the second camera simultaneously via a synchronization trigger signal, to obtain the pose relationship HA→C0 between the reference camera and the first calibration plate, the pose relationship HC0→B between the reference camera and the second calibration plate, the pose relationship HA→C1 between the first camera and the first calibration plate, and the pose relationship HB→C2 between the second camera and the second calibration plate; step A-5, obtaining the pose relationship between the first camera and the second camera, HC1→C2 = HB→C2 HC0→B HA→C0 HC1→A; step A-6, returning to step A-1 until the relative pose relationships of all pairs of adjacent cameras are obtained.
- 4. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step A includes: step A-1, optionally taking two adjacent cameras as a first camera and a second camera, and performing internal parameter calibration on the first camera and the second camera respectively; step A-2, placing a first calibration plate in the field of view of the first camera and a second calibration plate in the field of view of the second camera; step A-3, obtaining the pose relationship HA→C1 between the first camera and the first calibration plate and the pose relationship HB→C2 between the second camera and the second calibration plate; step A-6, returning to step A-1 until the relative pose relationships of all adjacent camera pairs are obtained.
- 5. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step H includes: step H-1, constructing the imaging model plane, the imaging model surface, and the standard field-of-view sphere according to the spatial pose relationships and parameters obtained in step G, the planar circular image acquired by each camera lying on its corresponding imaging model plane; step H-2, projecting the planar circular image from the imaging model plane onto the corresponding imaging model surface to form a first curved-surface image; step H-3, re-projecting the first curved-surface image on the imaging model surface onto the standard field-of-view sphere to form a first spherical image; step H-4, determining the mapping relationship from the standard field-of-view sphere to the imaging model plane according to the image point coordinates of the corresponding planar circular image and of the first spherical image.
- 6. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 1, characterized in that step J includes: step J-1, triangulating the overlapping portion of each second spherical image of step I, projecting the triangulated overlapping portions of the second spherical images onto a tangent plane to form a number of triangular images, and computing the feature points within each triangular image; step J-2, translating toward each other on the tangent plane two triangular images that have the same feature points but belong to different second spherical images, and stretching the translated triangular images to form two stretched images of equal size that coincide with each other; step J-3, fusing the two stretched images of step J-2 to form a fused image, and re-projecting the fused image from the tangent plane onto the standard field-of-view sphere.
- 7. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 6, characterized in that in step J-2 the triangular image before stretching and the stretched image after stretching are similar triangles.
- 8. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 6, characterized in that step J-3 further includes performing smoothing processing on the fused image.
- 9. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 2, 3, or 4, characterized in that the parameter types of the internal parameter calibration include the equivalent focal length and the aberration coefficients of the camera.
- 10. The panoramic video generation method based on spatial pose calibration of fisheye cameras according to claim 9, characterized in that the parameter types of the internal parameter calibration further include the imaging model and the principal point coordinates.
Priority Applications (1)
- PCT/CN2016/103157 (WO2018076154A1), priority date 2016-10-25, filing date 2016-10-25
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018076154A1 true WO2018076154A1 (zh) | 2018-05-03 |
Family
ID=62023000
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/103157 WO2018076154A1 (zh) | 2016-10-25 | 2016-10-25 | 一种基于鱼眼摄像机空间位姿标定的全景视频生成方法 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018076154A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060047471A1 (en) * | 2004-08-25 | 2006-03-02 | Microsoft Corporation | Relative range camera calibration |
US20070106482A1 (en) * | 2005-10-28 | 2007-05-10 | Ali Zandifar | Fast imaging system calibration |
CN101577002A (zh) * | 2009-06-16 | 2009-11-11 | 天津理工大学 | 应用于目标检测的鱼眼镜头成像系统标定方法 |
CN102175221A (zh) * | 2011-01-20 | 2011-09-07 | 上海杰图软件技术有限公司 | 基于鱼眼镜头的车载移动摄影测量系统 |
CN102693539A (zh) * | 2012-03-13 | 2012-09-26 | 夏东 | 一种用于智能监控系统的宽基线快速三维标定方法 |
CN103077524A (zh) * | 2013-01-25 | 2013-05-01 | 福州大学 | 混合视觉系统标定方法 |
- 2016-10-25 WO PCT/CN2016/103157 patent/WO2018076154A1/zh active Application Filing
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765292A (zh) * | 2018-05-30 | 2018-11-06 | 中国人民解放军军事科学院国防科技创新研究院 | 基于空间三角面片拟合的图像拼接方法 |
CN108765292B (zh) * | 2018-05-30 | 2022-04-29 | 中国人民解放军军事科学院国防科技创新研究院 | 基于空间三角面片拟合的图像拼接方法 |
CN108846796A (zh) * | 2018-06-22 | 2018-11-20 | 北京航空航天大学青岛研究院 | 图像拼接方法及电子设备 |
CN108846796B (zh) * | 2018-06-22 | 2022-08-16 | 北京航空航天大学青岛研究院 | 图像拼接方法及电子设备 |
CN110728619A (zh) * | 2018-07-17 | 2020-01-24 | 中科创达软件股份有限公司 | 一种全景图像拼接渲染方法及装置 |
CN110728619B (zh) * | 2018-07-17 | 2024-03-22 | 中科创达软件股份有限公司 | 一种全景图像拼接渲染方法及装置 |
CN109194954A (zh) * | 2018-09-21 | 2019-01-11 | 上海小萌科技有限公司 | 鱼眼摄像头性能参数测试方法、装置、设备及可存储介质 |
CN109272445A (zh) * | 2018-10-29 | 2019-01-25 | 中国航空无线电电子研究所 | 基于球面模型的全景视频拼接方法 |
CN109272445B (zh) * | 2018-10-29 | 2022-11-04 | 中国航空无线电电子研究所 | 基于球面模型的全景视频拼接方法 |
CN110136049B (zh) * | 2018-10-30 | 2023-07-11 | 北京魔门塔科技有限公司 | 一种基于环视图像与轮速计融合的定位方法及车载终端 |
CN109636858B (zh) * | 2018-10-30 | 2024-01-12 | 超音速人工智能科技股份有限公司 | 锂电池涂布图像采集标定方法、系统、设备及存储介质 |
CN109636858A (zh) * | 2018-10-30 | 2019-04-16 | 广州超音速自动化科技股份有限公司 | 锂电池涂布图像采集标定方法、系统、设备及存储介质 |
CN110136049A (zh) * | 2018-10-30 | 2019-08-16 | 北京初速度科技有限公司 | 一种基于环视图像与轮速计融合的定位方法及车载终端 |
CN111563840A (zh) * | 2019-01-28 | 2020-08-21 | 北京初速度科技有限公司 | 分割模型的训练方法、装置、位姿检测方法及车载终端 |
CN111563840B (zh) * | 2019-01-28 | 2023-09-05 | 北京魔门塔科技有限公司 | 分割模型的训练方法、装置、位姿检测方法及车载终端 |
CN109993799A (zh) * | 2019-03-08 | 2019-07-09 | 贵州电网有限责任公司 | 一种紫外像机标定方法及标定装置 |
CN111726566A (zh) * | 2019-03-21 | 2020-09-29 | 上海飞猿信息科技有限公司 | 一种实时校正拼接防抖的实现方法 |
CN110148182A (zh) * | 2019-05-08 | 2019-08-20 | 云南大学 | 一种标定摄像机的方法、存储介质、运算器和系统 |
CN110264524A (zh) * | 2019-05-24 | 2019-09-20 | 联想(上海)信息技术有限公司 | 一种标定方法、装置、系统及存储介质 |
CN110202573A (zh) * | 2019-06-04 | 2019-09-06 | 上海知津信息科技有限公司 | 全自动手眼标定、工作平面标定方法及装置 |
CN110202573B (zh) * | 2019-06-04 | 2023-04-07 | 上海知津信息科技有限公司 | 全自动手眼标定、工作平面标定方法及装置 |
CN110827361A (zh) * | 2019-11-01 | 2020-02-21 | 清华大学 | 基于全局标定架的相机组标定方法及装置 |
CN110956667A (zh) * | 2019-11-28 | 2020-04-03 | 李安澜 | 基于近似平面靶的摄像机自标定方法及系统 |
CN110956667B (zh) * | 2019-11-28 | 2023-02-17 | 李安澜 | 基于近似平面靶的摄像机自标定方法及系统 |
CN113393529A (zh) * | 2020-03-12 | 2021-09-14 | 浙江宇视科技有限公司 | 摄像机的标定方法、装置、设备和介质 |
CN113496520A (zh) * | 2020-04-02 | 2021-10-12 | 北京四维图新科技股份有限公司 | 摄像机转俯视图的方法、装置及存储介质 |
CN111899307A (zh) * | 2020-07-30 | 2020-11-06 | 浙江大学 | 一种空间标定方法、电子设备及存储介质 |
CN111899307B (zh) * | 2020-07-30 | 2023-12-29 | 浙江大学 | 一种空间标定方法、电子设备及存储介质 |
CN112215901A (zh) * | 2020-10-09 | 2021-01-12 | 哈尔滨工程大学 | 一种用于水下标定的多功能标定板装置 |
CN112215901B (zh) * | 2020-10-09 | 2023-08-01 | 哈尔滨工程大学 | 一种用于水下标定的多功能标定板装置 |
WO2022153207A1 (en) * | 2021-01-18 | 2022-07-21 | Politecnico Di Milano | Multi-camera three-dimensional capturing and reconstruction system |
CN113129383A (zh) * | 2021-03-15 | 2021-07-16 | 中建科技集团有限公司 | 手眼标定方法、装置、通信设备及存储介质 |
CN113111548A (zh) * | 2021-03-27 | 2021-07-13 | 西北工业大学 | 一种基于周角差值的产品三维特征点提取方法 |
CN112950727A (zh) * | 2021-03-30 | 2021-06-11 | 中国科学院西安光学精密机械研究所 | 基于仿生曲面复眼的大视场多目标同时测距方法 |
CN112950727B (zh) * | 2021-03-30 | 2023-01-06 | 中国科学院西安光学精密机械研究所 | 基于仿生曲面复眼的大视场多目标同时测距方法 |
CN113763480A (zh) * | 2021-08-03 | 2021-12-07 | 桂林电子科技大学 | 一种多镜头全景摄像机组合标定方法 |
CN113706627A (zh) * | 2021-08-06 | 2021-11-26 | 武汉极目智能技术有限公司 | 车载环视中基于单张图的鱼眼相机内参标定方法 |
CN113689339A (zh) * | 2021-09-08 | 2021-11-23 | 北京经纬恒润科技股份有限公司 | 图像拼接方法及装置 |
CN113689339B (zh) * | 2021-09-08 | 2023-06-20 | 北京经纬恒润科技股份有限公司 | 图像拼接方法及装置 |
CN114777668B (zh) * | 2022-04-12 | 2024-01-16 | 新拓三维技术(深圳)有限公司 | 一种桌面式弯管测量方法及装置 |
CN114777668A (zh) * | 2022-04-12 | 2022-07-22 | 新拓三维技术(深圳)有限公司 | 一种桌面式弯管测量方法及装置 |
CN116385564B (zh) * | 2023-02-03 | 2023-09-19 | 厦门农芯数字科技有限公司 | 一种基于鱼眼图像实现栏位尺寸的自动标定方法、装置 |
CN116385564A (zh) * | 2023-02-03 | 2023-07-04 | 厦门农芯数字科技有限公司 | 一种基于鱼眼图像实现栏位尺寸的自动标定方法、装置 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018076154A1 (zh) | 一种基于鱼眼摄像机空间位姿标定的全景视频生成方法 | |
TWI555379B (zh) | 一種全景魚眼相機影像校正、合成與景深重建方法與其系統 | |
TWI555378B (zh) | 一種全景魚眼相機影像校正、合成與景深重建方法與其系統 | |
WO2021120407A1 (zh) | 一种基于多对双目相机的视差图像拼接与可视化方法 | |
CN109272478B (zh) | 一种荧幕投影方法和装置及相关设备 | |
Micusik et al. | Autocalibration & 3D reconstruction with non-central catadioptric cameras | |
US20170127045A1 (en) | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
WO2019049331A1 (ja) | キャリブレーション装置、キャリブレーションシステム、およびキャリブレーション方法 | |
CN107705252B (zh) | 适用于双目鱼眼图像拼接展开校正的方法及系统 | |
CN108629829B (zh) | 一种球幕相机与深度相机结合的三维建模方法和系统 | |
CN108257183A (zh) | 一种相机镜头光轴校准方法和装置 | |
WO2023045147A1 (zh) | 双目摄像机的标定方法、系统、电子设备和存储介质 | |
CN106534670B (zh) | 一种基于固联鱼眼镜头摄像机组的全景视频生成方法 | |
US20200294269A1 (en) | Calibrating cameras and computing point projections using non-central camera model involving axial viewpoint shift | |
US11812009B2 (en) | Generating virtual reality content via light fields | |
JP2002516443A (ja) | 3次元表示のための方法および装置 | |
CN111854636A (zh) | 一种多相机阵列三维检测系统和方法 | |
JP2010130628A (ja) | 撮像装置、映像合成装置および映像合成方法 | |
CN108898550B (zh) | 基于空间三角面片拟合的图像拼接方法 | |
KR20190019059A (ko) | 수평 시차 스테레오 파노라마를 캡쳐하는 시스템 및 방법 | |
CN113763480A (zh) | 一种多镜头全景摄像机组合标定方法 | |
CN112258581B (zh) | 一种多鱼眼镜头全景相机的现场标定方法 | |
JP4851240B2 (ja) | 画像処理装置及びその処理方法 | |
CN108205799B (zh) | 一种图像拼接方法及装置 | |
TWM594322U (zh) | 全向立體視覺的相機配置系統 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16919964 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16919964 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.12.2019) |