CN110363838B - Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model

Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model

Info

Publication number
CN110363838B
CN110363838B (application CN201910492689.7A)
Authority
CN
China
Prior art keywords
point
spherical
camera
space
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910492689.7A
Other languages
Chinese (zh)
Other versions
CN110363838A (en)
Inventor
陈舒雅
项志宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910492689.7A priority Critical patent/CN110363838B/en
Publication of CN110363838A publication Critical patent/CN110363838A/en
Application granted granted Critical
Publication of CN110363838B publication Critical patent/CN110363838B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a large-visual-field image three-dimensional reconstruction optimization method based on a multi-spherical-surface camera model. Three-dimensional space points with larger errors are filtered out based on parallax and color constraints between different stereo pairs; the matching point pairs present in the different point clouds are obtained and the coordinate mean of each matching point pair is calculated, giving a smooth reference point cloud; affine transformation parameters are obtained for each point cloud, which is then approximately transformed to the reference point cloud region; for the multiple groups of transformed point clouds, the positions of the fused point clouds are fine-tuned according to normal vector and distance information. The invention effectively fuses multiple groups of point clouds and improves the completeness and accuracy of the final point cloud.

Description

Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model
Technical Field
The invention relates to a three-dimensional reconstruction algorithm in stereoscopic vision, in particular to a large-visual-field image three-dimensional reconstruction optimization method for point cloud fusion based on a multi-spherical-surface stereoscopic camera model.
Background
Camera acquisition equipment with a large field of view is used more and more in fields such as robot navigation and video surveillance, and the spherical camera model is well suited to large-visual-field image processing. Multi-pair stereoscopic three-dimensional reconstruction of a large-view scene based on the spherical model is therefore of important theoretical and practical significance: it can expand the field of view and improve the reconstruction precision.
There are generally two ways to acquire multiple pairs of large-field stereoscopic images: the first is to use several large-view cameras that form stereo matching pairs with each other; the second is to use a single camera that captures, in a single shot, an image of an array formed by several reflecting mirror surfaces. The former offers higher resolution and precision but larger system volume and power consumption; the latter offers small volume and power consumption but larger system error. In either case, the multi-spherical stereo vision point cloud fusion method provided by the invention can improve the reconstruction precision.
Because there are multiple sets of stereo matching pairs, how to fuse the reconstruction results between the matching pairs has to be considered. The mainstream fusion algorithms can generally be divided into three categories: voxel methods, feature-point expansion methods and depth-map-based algorithms. Voxel methods divide the whole three-dimensional point cloud into voxels and remove inconsistent voxels from the original point cloud according to the projection constraints of multiple views; feature-point expansion algorithms first take a group of three-dimensional points as seed points and use an expansion algorithm to achieve dense reconstruction by detecting and matching features across the views; depth-map-based algorithms use consistency constraints among the depth maps to fuse the reconstruction results. As depth-acquisition equipment matures, the cost of acquiring depth keeps falling, and fusion based on depth maps offers stronger operability and extensibility.
However, because an actual image acquisition system may exhibit partially non-central projection imaging, these algorithms generally only remove redundant points, and the multiple groups of point clouds they generate may still be offset from one another rather than coinciding completely.
Disclosure of Invention
In order to solve the problems in the background art, the invention aims to provide a large-visual-field image three-dimensional reconstruction optimization method based on a multi-spherical-surface camera model, which is suitable for the stereoscopic vision requirements in various environments.
The technical scheme adopted by the invention comprises the following steps:
step 1: setting a plurality of spherical camera models towards an object to be shot, taking one spherical camera model as a main camera and the other spherical camera models as auxiliary cameras, wherein the main camera and any one auxiliary camera have overlapped areas in the visual field, calibrating the main camera and each auxiliary camera respectively, and obtaining the pose change relation of each auxiliary camera relative to the main camera; the pose change relationship comprises a rotation and translation matrix.
Step 2: the main camera and all the auxiliary cameras shoot an object to be shot simultaneously to obtain respective images, the main camera and each auxiliary camera respectively form a spherical stereo pair, and a corresponding group of three-dimensional point clouds are obtained according to images obtained by each spherical stereo pair by adopting a stereo matching algorithm.
Step 3: obtaining all matching point pairs existing in the different groups of three-dimensional point clouds, calculating the parallax and color constraints of the matching point pairs, setting a parallax threshold and a color constraint threshold, and filtering out the matching point pairs whose parallax error is larger than the parallax threshold and whose color constraint is larger than the color constraint threshold.
Step 4: among the filtered matching point pairs, calculating the coordinate mean of each matching point pair to obtain a reference point, and traversing all the matching point pairs to obtain a smooth reference point cloud consisting of the reference points.
Step 5: according to the reference point cloud obtained in step 4, transforming each group of three-dimensional point clouds processed in step 3 with an affine transformation method, i.e. computing affine transformation parameters for each group and transforming it accordingly, so that each group of three-dimensional point clouds is approximately transformed to the reference point cloud region, yielding the transformed point clouds.
Step 6: based on the normal vector and distance relation among the multiple groups of transformed point clouds, the positions of the multiple groups of transformed point clouds are optimized, point cloud fusion is completed, three-dimensional reconstruction is achieved, and finally processed single point clouds are obtained.
In step 1, each camera is calibrated independently, and all reconstruction results are finally transformed into the camera coordinate system of the main camera for fusion.
In step 3, the following two-step outlier filtering algorithm is specifically adopted (an illustrative sketch follows sub-step 3.3)):
3.1) For any main pixel point p in the image acquired by the main camera C1, calculate in each spherical stereo pair the spherical parallax between the main pixel point and its matched pixel point, together with the space point of the main pixel point p; convert the spherical parallaxes of the spherical stereo pairs into the same spherical stereo pair according to the respective pose change relations, obtaining a plurality of parallax values. If the maximum pairwise difference of these parallax values is greater than the parallax threshold, filter out the space points of the main pixel point p in the spherical stereo pairs; if it is not greater than the parallax threshold, retain the space points of the main pixel point p in the spherical stereo pairs.
3.2) In each spherical stereo pair, calculate the squared error of the pixel color values between the main pixel point p and the pixel point matched with p, taken over all pixels in their respective 3 × 3 neighborhood windows; this squared error serves as the color constraint and is compared with the color constraint threshold. If the color constraint is greater than the color constraint threshold, filter out the space point of the main pixel point p in that spherical stereo pair; if it is not greater than the color constraint threshold, keep the space point of the main pixel point p in that spherical stereo pair.
3.3) repeating steps 3.1) -3.2) to traverse all matching point pairs.
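For illustration, a minimal Python/NumPy sketch of this two-step check for a single main pixel point p is given below; the array layout, helper naming and threshold values are placeholders for the example and not part of the invention, and the parallax values are assumed to have already been converted into a common stereo pair using the calibrated poses.

```python
import numpy as np

def keep_space_points(disparities, main_patch, matched_patches,
                      disp_thresh=0.05, color_thresh=300.0):
    """Two-step outlier check for the space points of one main pixel p.

    disparities     : (K,) spherical parallaxes of p in the K stereo pairs,
                      already converted into a common stereo pair.
    main_patch      : (3, 3, C) color neighborhood of p in the main image.
    matched_patches : (K, 3, 3, C) color neighborhoods of the pixels matched
                      to p in the K auxiliary images.
    Returns a boolean mask of length K: True = keep the space point.
    """
    disparities = np.asarray(disparities, dtype=float)
    keep = np.ones(len(disparities), dtype=bool)

    # Step 3.1: parallax consistency -- if the largest pairwise difference
    # of the converted parallaxes exceeds the threshold, drop the points.
    pairwise = np.abs(disparities[:, None] - disparities[None, :])
    if pairwise.max() > disp_thresh:
        keep[:] = False
        return keep

    # Step 3.2: color consistency -- squared error over the 3x3 windows,
    # checked per stereo pair against the color constraint threshold.
    for k, patch in enumerate(matched_patches):
        sq_err = np.sum((np.asarray(patch, float) -
                         np.asarray(main_patch, float)) ** 2)
        if sq_err > color_thresh:
            keep[k] = False
    return keep
```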
Step 4 specifically comprises: for any main pixel point p in the image acquired by the main camera C1, calculate the space point of p in each spherical stereo pair; if the main pixel point p has at least two space points, calculate the average coordinates of all these space points as a reference point, and the reference points obtained by traversing all main pixel points of the main camera C1 form the reference point cloud; if the main pixel point p has only one or no corresponding space point, skip step 4 for this point and proceed to the next step.
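A short sketch of this averaging step follows, under the assumption that the space points surviving step 3 are collected per main pixel as small coordinate arrays; as stated above, pixels with fewer than two space points contribute no reference point.

```python
import numpy as np

def build_reference_cloud(space_points_per_pixel):
    """space_points_per_pixel: one entry per main pixel of camera C1, each an
    (m, 3) array of the space points that survived filtering for that pixel.
    A reference point is the coordinate mean of the space points of every
    pixel that still has at least two of them (step 4)."""
    reference = [pts.mean(axis=0)
                 for pts in space_points_per_pixel if len(pts) >= 2]
    return np.asarray(reference)
```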
In step 5, the affine transformation processing of each group of three-dimensional point clouds is the same, specifically: establish a loss function for the three-dimensional point cloud; compute an initial transformation matrix and an initial translation vector by singular value decomposition (SVD) from the matching relation between the three-dimensional point cloud and the reference point cloud; minimize the following loss function with the Levenberg-Marquardt optimization algorithm to obtain the final transformation matrix R and translation vector T; and transform each space point in the three-dimensional point cloud close to the reference point cloud according to the solved R and T (an illustrative sketch follows the symbol definitions below).
The loss function of the three-dimensional point cloud is specifically expressed as follows:
E = Σ_{j=1}^{N} || R·Sij + T − Mj ||²    (1)
wherein E denotes the loss to be minimized, N denotes the total number of matching point pairs, i.e. the number of points of the reference point cloud, Mj is the j-th reference point, and Sij (i = 1, 2, ……) is the space point in the i-th group of three-dimensional point clouds Si corresponding to Mj.
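A sketch of this estimation for one group of matched points is given below. It initialises R and T with the closed-form SVD (Kabsch) solution and refines them with Levenberg-Marquardt through scipy; the rigid (R, T) parameterisation follows the loss function above, and the function and variable names are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def estimate_transform(S, M):
    """Fit R, T minimising E = sum_j || R @ S[j] + T - M[j] ||^2 for one group
    of matched space points S (N, 3) against the reference points M (N, 3)."""
    # Closed-form SVD (Kabsch) initialisation of rotation and translation.
    cS, cM = S.mean(axis=0), M.mean(axis=0)
    H = (S - cS).T @ (M - cM)
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    R0 = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    T0 = cM - R0 @ cS

    # Levenberg-Marquardt refinement; the rotation is parameterised by a
    # rotation vector so the optimiser stays on SO(3).
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        return (S @ R.T + x[3:] - M).ravel()

    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), T0])
    sol = least_squares(residuals, x0, method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

Each space point of the i-th group is then transformed as R·Sij + T, which moves the whole group close to the reference point cloud.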
The processing method for each group of three-dimensional point clouds in step 6 is the same, specifically as follows (an illustrative sketch follows sub-step 6.4)):
6.1) For a first transformed space point G1 in the transformed point cloud S1', compute its normal vector. If there exists a second transformed space point in another transformed point cloud S2' such that the distance between the two transformed space points is less than the distance threshold and the included angle between their normal vectors is less than the angle threshold, go to 6.2); otherwise, consider that no point corresponding to the first transformed space point G1 exists in the other transformed point cloud S2', and go to 6.3).
6.2) Take the second transformed space point nearest to the first transformed space point G1 as the corresponding point G2 of G1, project the vector from G1 to G2 onto the direction of the normal vector n1 of G1, take half of the projected vector as the motion vector m1 of G1, and move the first transformed space point G1 according to the motion vector m1.
6.3) Take the set of all first transformed space points without corresponding points as the non-corresponding region Q, take the mean of the motion vectors of the first transformed space points that have corresponding points within the edge neighborhood of Q as the motion vector of Q, and adjust the positions of all first transformed space points in Q according to this motion vector.
6.4) Traverse all space points of the transformed point clouds in this manner to complete the fusion optimization of the groups of transformed point clouds.
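The following sketch illustrates sub-steps 6.1)–6.3) for one transformed point cloud against another, using a nearest-neighbour query for the correspondence test. The normals are assumed to be unit vectors computed beforehand, the thresholds are placeholders, and the edge-neighbourhood search of the region Q is approximated here by a simple radius query around the points without correspondences.

```python
import numpy as np
from scipy.spatial import cKDTree

def fine_tune(P1, N1, P2, N2, d_thresh=0.05,
              ang_thresh=np.deg2rad(20.0), q_radius=0.1):
    """Fine-tune the positions of transformed cloud P1 (n, 3) with unit
    normals N1 (n, 3) against the other transformed cloud P2 (m, 3), N2 (m, 3).
    Returns a moved copy of P1."""
    dist, idx = cKDTree(P2).query(P1, k=1)          # nearest candidate in P2

    # 6.1)/6.2): accept the nearest point as the corresponding point when it
    # is close enough and the normals agree; the motion vector is half of
    # (G2 - G1) projected onto the normal n1 of G1.
    cos_ang = np.einsum("ij,ij->i", N1, N2[idx])
    has_corr = (dist < d_thresh) & (cos_ang > np.cos(ang_thresh))
    proj = np.einsum("ij,ij->i", P2[idx] - P1, N1)  # signed length along n1
    motion = np.zeros_like(P1)
    motion[has_corr] = 0.5 * proj[has_corr, None] * N1[has_corr]

    # 6.3): points without a corresponding point (region Q) borrow the mean
    # motion vector of nearby points that do have one.
    if has_corr.any() and (~has_corr).any():
        corr_tree = cKDTree(P1[has_corr])
        corr_motion = motion[has_corr]
        neighbours = corr_tree.query_ball_point(P1[~has_corr], r=q_radius)
        motion[~has_corr] = [corr_motion[n].mean(axis=0) if n else np.zeros(3)
                             for n in neighbours]
    return P1 + motion
```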
According to the invention, based on a spherical camera model, after a plurality of pairs of spherical stereo pairs are respectively reconstructed, redundant points are removed based on depth and color consistency check, and then point cloud fusion is completed by optimizing the positions of the point clouds according to information such as matching between the point clouds and normal vectors, so that the point clouds in the generated large-view image are more complete and accurate, and the method is suitable for reconstruction of the large-view image so as to improve the application effect in the fields of robot navigation, video monitoring and the like.
The invention has the beneficial effects that:
(1) Based on the spherical model, the method is applicable to various large-field-of-view image acquisition devices.
(2) Outlier filtering based on the consistency constraints of the disparity map and the color effectively removes redundant points and makes the subsequent point cloud fusion more accurate.
(3) The point cloud fusion operation effectively compensates the reconstruction errors caused by the non-central projection that may exist in the system, so that the single point cloud produced is more complete and accurate.
Drawings
FIG. 1 illustrates an outlier filtering method.
Fig. 2 is a reference point cloud obtaining method.
FIG. 3 is a schematic diagram of point cloud fusion.
Fig. 4 is a point cloud fusion effect diagram of an actual system.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The examples of the complete process according to the invention are as follows:
Firstly, outlier filtering
As shown in FIG. 1, the system contains multiple spherical camera models based on the spherical stereo model, including a main camera C1 and several auxiliary cameras Ck (k = 2, 3, ……). The fields of view of the main camera and any auxiliary camera overlap to form stereo matching pairs (C1-C2 and C1-C3), and all reconstruction results are finally fused in the coordinate system of the main camera.
For any point p in the image of the main camera C1, the corresponding spherical parallaxes γ1, γ2 and the corresponding space points P1, P2 are calculated in the stereo pairs C1-C2 and C1-C3. According to P1 and the pose relation of C3 relative to C1, the virtual parallax γ1' of P1 under C1-C3 is calculated. γ1' and γ2 are compared and points with larger errors are filtered out.
For any point p in the image of the main camera C1, the matched pixel points p1, p2 are obtained in the stereo pairs C1-C2 and C1-C3. The squared errors of the pixel-by-pixel color values between p and p1, p2 in their 3 × 3 image-neighborhood windows are calculated separately, and points with larger errors are filtered out.
As shown in fig. 1, the description below takes a main camera C1 and two auxiliary cameras C2, C3 as an example:
3.1) For any pixel point p in the image acquired by the main camera C1, calculate in the first spherical stereo pair C1-C2 and the second spherical stereo pair C1-C3 the first and second spherical parallaxes γ1, γ2 and the corresponding first space point P1 and second space point P2; according to the first space point P1 and the pose change relation of the second spherical stereo pair, calculate the virtual parallax γ1' of the first space point P1 under the second spherical stereo pair; compare γ1' and γ2 and filter out space points with larger errors (an illustrative sketch of this check follows 3.2));
3.2) For any pixel point p in the image of the main camera C1, obtain the matched first pixel point p1 and second pixel point p2 in the first spherical stereo pair C1-C2 and the second spherical stereo pair C1-C3; separately calculate the squared errors of the color values between p and the two pixel points p1, p2 over all pixels in their 3 × 3 neighborhood windows, and filter out the space points corresponding to the pixel points whose squared errors exceed the color constraint threshold.
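A sketch of the virtual-parallax computation used in 3.1) is given below. The patent does not spell out the exact parameterisation of the spherical parallax, so this example assumes it is the angle subtended at the space point by the two camera centres of a stereo pair; the names and shapes are illustrative.

```python
import numpy as np

def virtual_disparity(P1, c3_center):
    """P1        : (3,) space point of pixel p reconstructed from pair C1-C2,
                   expressed in the main-camera frame (C1 sits at the origin).
       c3_center : (3,) centre of auxiliary camera C3 in the C1 frame, taken
                   from the calibrated pose change relation of C3 w.r.t. C1.
       Returns the assumed virtual parallax gamma1' of P1 under pair C1-C3."""
    v1 = -P1                   # vector from P1 to the centre of C1 (origin)
    v3 = c3_center - P1        # vector from P1 to the centre of C3
    cos_g = v1 @ v3 / (np.linalg.norm(v1) * np.linalg.norm(v3))
    return float(np.arccos(np.clip(cos_g, -1.0, 1.0)))

# If |gamma1' - gamma2| exceeds the parallax threshold, the space points of
# pixel p are filtered out, as described in 3.1).
```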
Secondly, acquiring reference point cloud
As shown in fig. 2, for any point p in the image of the main camera C1, the corresponding space points P1, P2 are calculated in the stereo pairs C1-C2 and C1-C3. If P1 and P2 both exist, the average coordinate of the two space points is calculated and denoted as P. Traversing all points gives the averaged reference point cloud. The point clouds S1, S2 generated by the stereo pairs C1-C2 and C1-C3 are shown in fig. 3(a), and the profile of the reference point cloud is shown in fig. 3(b).
Thirdly, transforming the original point clouds to the reference point cloud region
Minimizing equation (1) with the Levenberg-Marquardt optimization algorithm gives the affine transformation parameters of the point clouds S1 and S2. Applying the respective affine transformation to each single point cloud approximately transforms the original point clouds to the reference point cloud region, yielding the transformed point clouds S1' and S2', as shown in FIG. 3(c).
Fourthly, fine tuning the point cloud position
As shown in fig. 3(d), for the point cloud S1', the normal vector n1 of any point G1 in it is calculated. If there exists a point G2 in the point cloud S2' such that the difference between the normal vector directions of G1 and G2 and the distance between the two points are both less than the corresponding thresholds, G2 is the corresponding point of G1. The vector from G1 to G2 is projected onto the direction of n1 and half of it is taken as the motion vector m1 of G1. The point clouds S1' and S2' are traversed in this manner.
As shown in fig. 3(e), for a region Q in which no corresponding points can be obtained, its neighborhood is determined. The edge of the neighborhood is searched for space points that have corresponding points in the other point cloud. The mean of the motion vectors of these points (the bold line segment in fig. 3(e)) is computed and assigned to all points in the region Q.
A large-field image is an image with a large field angle, close to 360 degrees in the horizontal direction, and can be obtained by image stitching, a fisheye camera, a catadioptric system, or the like. The effect of the method is evaluated on a single-camera multi-mirror large-visual-field catadioptric system built for verification. The catadioptric system comprises one perspective camera and 5 spherical mirrors; its basic structure is similar to that adopted in the patent 'a compact large-field light field acquisition system and the analysis and optimization method thereof', but the method is not limited to the central-projection combination of a parabolic mirror and a telecentric camera, and is also applicable to spherical cameras formed by non-central combinations such as an ordinary perspective camera with a parabolic or spherical mirror.
The spherical mirrors in the verification system have a radius of curvature of 120 mm and a base diameter of 51 mm; the horizontal baseline BX is 50 mm and the vertical baseline BZ is 80 mm. A Hikvision MV-CA030-10GC perspective camera with a resolution of 1920 × 1440 is used. The point cloud fusion effect is analyzed qualitatively and quantitatively on reconstructions of a calibration plate and of three perpendicular planes.
Fig. 4(a) shows the original single-group point cloud reconstruction results, and fig. 4(b) shows the result of directly superimposing the two groups of point clouds. It can be seen that, owing to the non-central nature of the system, the two groups of point clouds have a large spatial offset. After fusion with the algorithm of the invention, as shown in fig. 4(c), the offset between the two groups of point clouds is greatly reduced and they merge well into a single point cloud.
The quantitative accuracy results are shown in Table 1.
TABLE 1 quantitative analysis of fusion accuracy of actual systems
For the calibration plate, the average value μ of the distances of the points to the fitted plane is calculated, together with the ratio e of this average to the average distance of all points from the virtual camera. For the three perpendicular planes, the included angle θ between the normal vectors of two adjacent fitted planes is calculated, and the average angle error θe is computed as shown in the following formula (an illustrative sketch of these metrics follows the formula):
θe = (1/K) Σ_{k=1}^{K} |θk − 90°|, where K is the number of adjacent plane pairs and θk is the included angle of the k-th pair.
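An illustrative sketch of these evaluation metrics follows, assuming the calibration-plate points and the three plane point sets are available as coordinate arrays. The plane fit uses least squares; all pairs of the three planes are treated as adjacent and a 90° included angle is taken as ground truth, matching the formula above.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points (n, 3): (unit normal, centroid)."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return Vt[-1], c              # normal = direction of smallest variance

def board_error(points, cam_center=np.zeros(3)):
    """mu = mean distance of the points to their fitted plane,
       e  = mu / (mean distance of the points from the virtual camera)."""
    n, c = fit_plane(points)
    mu = np.abs((points - c) @ n).mean()
    return mu, mu / np.linalg.norm(points - cam_center, axis=1).mean()

def mean_angle_error(planes):
    """planes: point sets (n, 3) of the three perpendicular planes; returns
    the mean |theta_k - 90 deg| over the adjacent plane pairs."""
    normals = [fit_plane(p)[0] for p in planes]
    errs = []
    for a, b in zip(normals, normals[1:] + normals[:1]):
        theta = np.degrees(np.arccos(np.clip(abs(a @ b), 0.0, 1.0)))
        errs.append(abs(theta - 90.0))
    return float(np.mean(errs))
```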
the surface of the fused point cloud is smoother and the angle is more optimized than that of a single group of point clouds. For the calibration plate, the error is reduced by about 30 percent; for three perpendicular planes, the angle error is reduced by about 15%.
The embodiment shows that the method can effectively improve the quality of the three-dimensional point cloud and can reliably fuse several groups of point clouds to finally obtain a more complete and smoother single point cloud. As shown in fig. 4(c), the point cloud obtained by the method of the present invention is significantly better than those of fig. 4(a) and 4(b); by improving the fusion precision and accuracy of the point cloud, the three-dimensional point cloud in the finally generated large-view image is closer to the real object, so that the method can be better applied to fields such as robot navigation, monitoring, video conferencing and scene reconstruction.
Any modification and variation of the present invention within the spirit of the present invention and the scope of the claims will fall within the scope of the present invention.

Claims (2)

1. A three-dimensional reconstruction optimization method for a large-visual-field image based on a multi-spherical-surface camera model is characterized by comprising the following steps: the method comprises the following steps:
step 1: setting a plurality of spherical camera models towards an object to be shot, taking one spherical camera model as a main camera and the other spherical camera models as auxiliary cameras, wherein the main camera and any one auxiliary camera have overlapped areas in the visual field, calibrating the main camera and each auxiliary camera respectively, and obtaining the pose change relation of each auxiliary camera relative to the main camera;
step 2: the main camera and all the auxiliary cameras shoot an object to be shot simultaneously to obtain respective images, the main camera and each auxiliary camera respectively form a spherical stereo pair, and a corresponding group of three-dimensional point clouds are obtained by adopting a stereo matching algorithm according to images obtained by each spherical stereo pair;
step 3: acquiring all matching point pairs existing in different groups of three-dimensional point clouds, calculating parallax and color constraints of the matching point pairs, setting a parallax threshold and a color constraint threshold, and filtering out the matching point pairs whose parallax error is larger than the parallax threshold and whose color constraint is larger than the color constraint threshold;
step 4: among the filtered matching point pairs, calculating the coordinate mean of each matching point pair to obtain a reference point, and traversing all the matching point pairs to obtain a reference point cloud consisting of the reference points;
step 5: transforming each group of three-dimensional point clouds processed in step 3 by an affine transformation method according to the reference point cloud obtained in step 4 to obtain transformed point clouds;
step 6: optimizing the positions of the multiple groups of transformed point clouds based on the normal vector and distance relation among the multiple groups of transformed point clouds, completing point cloud fusion, realizing three-dimensional reconstruction and obtaining single point cloud after final processing;
in the step 3, the following two-step outlier filtering algorithm is specifically adopted:
3.1) for any main pixel point p in the image acquired by the main camera C1, calculating in each spherical stereo pair the spherical parallax between the main pixel point and its matched pixel point, together with the space point of the main pixel point p; converting the spherical parallaxes of the spherical stereo pairs into the same spherical stereo pair according to the respective pose change relations to obtain a plurality of parallax values; if the maximum pairwise difference of these parallax values is greater than the parallax threshold, filtering out the space points of the main pixel point p in the spherical stereo pairs; if it is not greater than the parallax threshold, retaining the space points of the main pixel point p in the spherical stereo pairs;
3.2) in each spherical stereo pair, calculating the squared error of the pixel color values between the main pixel point p and the pixel point matched with p, taken over all pixels in their respective 3 × 3 neighborhood windows, the squared error serving as the color constraint and being compared with the color constraint threshold; if the color constraint is greater than the color constraint threshold, filtering out the space point of the main pixel point p in that spherical stereo pair, and if it is not greater than the color constraint threshold, keeping the space point of the main pixel point p in that spherical stereo pair;
3.3) repeating the steps 3.1) -3.2) to traverse all the matching point pairs;
the step 4 specifically comprises the following steps: main camera C1Calculating a space point of any main pixel point p in each spherical stereo pair in the acquired image; if the main pixel point p has at least two space points, calculating the average coordinates of all the space points as reference points, and traversing the main camera C1The reference points obtained by all the main pixel points form a reference point cloud; if the main pixel point p has only one or no corresponding space point, skipping the step 4 and entering the next step;
in the step 5, the affine transformation processing methods of each group of three-dimensional point clouds are the same, specifically:
establishing a loss function of the three-dimensional point cloud, calculating an initial transformation matrix and an initial translation vector by singular value decomposition according to the matching relation between the three-dimensional point cloud and the reference point cloud, minimizing the following loss function with the Levenberg-Marquardt optimization algorithm to obtain a final transformation matrix R and a final translation vector T, and transforming each space point in the three-dimensional point cloud close to the reference point cloud according to the solved transformation matrix R and translation vector T;
the loss function of the three-dimensional point cloud is specifically expressed as follows:
E = Σ_{j=1}^{N} || R·Sij + T − Mj ||²
wherein E denotes the loss to be minimized, N denotes the total number of matching point pairs, i.e. the number of points of the reference point cloud, Mj is the j-th reference point, and Sij is the space point in the i-th group of three-dimensional point clouds Si corresponding to Mj, where i = 1, 2, ……;
the processing method for each group of three-dimensional point clouds in the step 6 is the same, and specifically comprises the following steps:
6.1) for a first transformed space point G1 in the transformed point cloud S1', computing its normal vector; if there exists a second transformed space point in another transformed point cloud S2' such that the distance between the two transformed space points is less than the distance threshold and the included angle between their normal vectors is less than the angle threshold, entering 6.2); otherwise, considering that no point corresponding to the first transformed space point G1 exists in the other transformed point cloud S2', and entering 6.3);
6.2) taking the second transformed space point nearest to the first transformed space point G1 as the corresponding point G2 of G1, projecting the vector from G1 to G2 onto the direction of the normal vector n1 of the first transformed space point G1, taking half of the projected vector as the motion vector m1 of the first transformed space point G1, and moving the first transformed space point G1 according to the motion vector m1;
6.3) taking the set of all first transformed space points without corresponding points as the non-corresponding region Q, taking the mean of the motion vectors of the first transformed space points that have corresponding points within the edge neighborhood of the non-corresponding region Q as the motion vector of the non-corresponding region Q, and adjusting the positions of all first transformed space points in the non-corresponding region Q according to the motion vector of the non-corresponding region Q;
6.4) traversing all the space points of the transformed point cloud according to the modes of 6.1) to 6.3) to complete the fusion optimization of the group of transformed point clouds.
2. The multi-spherical-camera-model-based large-field-of-view image three-dimensional reconstruction optimization method according to claim 1, characterized in that: in step 1, each camera is calibrated independently, and all reconstruction results are finally transformed into the camera coordinate system of the main camera for fusion.
CN201910492689.7A 2019-06-06 2019-06-06 Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model Active CN110363838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910492689.7A CN110363838B (en) 2019-06-06 2019-06-06 Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910492689.7A CN110363838B (en) 2019-06-06 2019-06-06 Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model

Publications (2)

Publication Number Publication Date
CN110363838A CN110363838A (en) 2019-10-22
CN110363838B true CN110363838B (en) 2020-12-15

Family

ID=68216769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910492689.7A Active CN110363838B (en) 2019-06-06 2019-06-06 Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model

Country Status (1)

Country Link
CN (1) CN110363838B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111536871B (en) * 2020-05-07 2022-05-31 武汉大势智慧科技有限公司 Accurate calculation method for volume variation of multi-temporal photogrammetric data
CN112446952B (en) * 2020-11-06 2024-01-26 杭州易现先进科技有限公司 Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium
CN112837419B (en) * 2021-03-04 2022-06-24 浙江商汤科技开发有限公司 Point cloud model construction method, device, equipment and storage medium
CN113012238B (en) * 2021-04-09 2024-04-16 南京星顿医疗科技有限公司 Method for quick calibration and data fusion of multi-depth camera
CN113674333B (en) * 2021-09-02 2023-11-07 上海交通大学 Precision verification method and medium for calibration parameters and electronic equipment
CN113989116B (en) * 2021-10-25 2024-08-02 西安知微传感技术有限公司 Point cloud fusion method and system based on symmetry plane
CN114173106B (en) * 2021-12-01 2022-08-05 北京拙河科技有限公司 Real-time video stream fusion processing method and system based on light field camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886595B (en) * 2014-03-19 2016-08-17 浙江大学 A kind of catadioptric Camera Self-Calibration method based on broad sense unified model
US20160232705A1 (en) * 2015-02-10 2016-08-11 Mitsubishi Electric Research Laboratories, Inc. Method for 3D Scene Reconstruction with Cross-Constrained Line Matching
CN108389157A (en) * 2018-01-11 2018-08-10 江苏四点灵机器人有限公司 A kind of quick joining method of three-dimensional panoramic image

Also Published As

Publication number Publication date
CN110363838A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110363838B (en) Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model
WO2019100933A1 (en) Method, device and system for three-dimensional measurement
WO2021120407A1 (en) Parallax image stitching and visualization method based on multiple pairs of binocular cameras
WO2018076154A1 (en) Spatial positioning calibration of fisheye camera-based panoramic video generating method
Svoboda et al. Epipolar geometry for panoramic cameras
JP4825980B2 (en) Calibration method for fisheye camera.
CN107843251B (en) Pose estimation method of mobile robot
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN109064404A (en) It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system
US20170019655A1 (en) Three-dimensional dense structure from motion with stereo vision
US20090153669A1 (en) Method and system for calibrating camera with rectification homography of imaged parallelogram
JP7502440B2 (en) Method for measuring the topography of an environment - Patents.com
CN108629829B (en) Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera
Gao et al. Dual-fisheye omnidirectional stereo
WO2013076605A1 (en) Method and system for alignment of a pattern on a spatial coded slide image
CN105208247A (en) Quaternion-based panoramic image stabilizing method
CN111028155A (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN105809706B (en) A kind of overall calibration method of the more camera systems of distribution
CN106534670B (en) It is a kind of based on the panoramic video generation method for connecting firmly fish eye lens video camera group
CN101354796B (en) Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model
CN106170086B (en) Method and device thereof, the system of drawing three-dimensional image
CN110782498B (en) Rapid universal calibration method for visual sensing network
JP7489253B2 (en) Depth map generating device and program thereof, and depth map generating system
CN114782636A (en) Three-dimensional reconstruction method, device and system
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant