CN110363838A - Large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models - Google Patents
Large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models
- Publication number
- CN110363838A CN110363838A CN201910492689.7A CN201910492689A CN110363838A CN 110363838 A CN110363838 A CN 110363838A CN 201910492689 A CN201910492689 A CN 201910492689A CN 110363838 A CN110363838 A CN 110363838A
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- spherical surface
- points
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models. Three-dimensional space points with large errors are first filtered out using disparity and color constraints between the different stereo pairs. Matching point pairs present across the different point clouds are then collected, and the coordinate mean of each pair is computed to obtain a smooth reference point cloud. Affine transformation parameters are estimated for each point cloud, which is approximately transformed to the reference point cloud region. Finally, the positions of the transformed point clouds are fine-tuned and merged according to normal vector and distance information. The invention effectively fuses multiple point clouds and improves the completeness and accuracy of the final point cloud.
Description
Technical field
The present invention relates to three-dimensional reconstruction algorithms in stereoscopic vision, and in particular to a large field-of-view image three-dimensional reconstruction optimization method that performs point cloud fusion based on multiple spherical stereo camera models.
Background technique
Wide-field-of-view camera acquisition equipment is finding more and more applications in fields such as robot navigation and video surveillance, and the spherical camera model is well suited to the demands of large field-of-view image processing. Multi-pair stereo-vision three-dimensional reconstruction of large field-of-view scenes based on the spherical model has important theoretical and practical significance: on the one hand it expands the field of view, and on the other hand it improves reconstruction precision.
Multi-pair large field-of-view stereo images are generally acquired in one of two ways: either multiple wide-field cameras form stereo matching pairs with each other, or a single camera captures, in a single shot, the image produced by an array of multiple mirrors. The former offers higher resolution and precision but larger system volume and power consumption; the latter is compact and low-power but has larger system error. In either case, reconstruction accuracy can be improved by the multi-spherical stereo point cloud fusion method proposed by the present invention.
Because multi-camera stereo reconstruction involves multiple stereo matching pairs, how to fuse the reconstruction results of the different pairs must be considered. Mainstream fusion algorithms currently fall into three classes: voxel methods, feature-point expansion methods, and depth-map-based algorithms. Voxel methods divide the entire three-dimensional point cloud into voxels and remove inconsistent voxels from the original cloud according to multi-view projection constraints. Feature-point expansion algorithms start from a set of three-dimensional seed points obtained by detecting and matching features across views, and use an expansion algorithm to achieve dense reconstruction. Depth-map-based algorithms fuse reconstruction results using consistency constraints among depth maps. As depth-sensing equipment matures, the cost of obtaining depth keeps falling; at the same time, depth-map-based fusion offers strong operability and scalability.
However, because a real image acquisition system may exhibit partially non-central projection imaging, the above algorithms typically only remove redundant points; offsets may still remain between the generated point clouds, which therefore do not coincide exactly.
Summary of the invention
To solve the problems of the background art, the object of the present invention is to propose a large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models, suitable for stereoscopic vision requirements in a variety of environments.
The steps of the technical solution adopted by the present invention are as follows:
Step 1: Multiple spherical camera models are arranged facing the object to be photographed. One spherical camera model serves as the main camera and the rest serve as auxiliary cameras; the field of view of the main camera overlaps that of every auxiliary camera. The main camera and each auxiliary camera are calibrated separately, and the pose variation relation of each auxiliary camera relative to the main camera is obtained; the pose variation relation comprises a rotation matrix and a translation matrix.
Step 2: The main camera and all auxiliary cameras photograph the object simultaneously, each obtaining its own image. The main camera forms a spherical stereo pair with each auxiliary camera, and a stereo matching algorithm applied to the images of each spherical stereo pair yields a corresponding group of three-dimensional points.
Step 3: All matching point pairs present across the different groups of three-dimensional point clouds are collected, and the disparity and color constraints of each pair are computed. A disparity threshold and a color constraint threshold are set, and matching pairs whose disparity error exceeds the disparity threshold or whose color constraint exceeds the color constraint threshold are filtered out.
Step 4: Among the matching point pairs that survive filtering, the coordinate mean of each pair is computed to obtain a reference point; traversing all matching pairs yields a smooth reference point cloud composed of these reference points.
Step 5: According to the reference point cloud obtained in step 4, each group of three-dimensional points processed in step 3 is transformed by an affine transformation method to obtain a transformed point cloud; each group is thereby approximately transformed to the region of the reference point cloud.
Step 6: Based on the normal vector and distance relations among the transformed point clouds, the positions of the transformed clouds are optimized and the clouds are fused, realizing three-dimensional reconstruction and yielding the final single fused point cloud.
In step 1, each camera is calibrated independently, and all final reconstruction results are transformed into the camera coordinate system of the main camera for fusion.
In step 3, the following two-step outlier filtering algorithm is used:
3.1) For any main pixel p in the image acquired by the main camera C1, compute, in each spherical stereo pair, the spherical disparity of the pixel matched to p and the corresponding space point of p. Transform the spherical disparities of the stereo pairs into a common stereo pair according to the respective pose variation relations, obtaining multiple disparity values. If the maximum pairwise difference of these disparity values exceeds the disparity threshold, filter out the space points of p in every spherical stereo pair; otherwise retain them.
3.2) For the main pixel p and each pixel matched to it in a spherical stereo pair, compute the squared error of the pixel color values of all pixels in their respective 3 × 3 neighborhood windows; this squared error serves as the color constraint and is compared with the color constraint threshold. If the color constraint exceeds the threshold, filter out the space point of p in that stereo pair; otherwise retain it.
3.3) Repeat steps 3.1)–3.2) to traverse all matching point pairs.
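As an illustrative sketch (not part of the claimed method), the two checks above can be written as follows, assuming the spherical disparities have already been transformed into a common stereo pair and that the 3 × 3 color windows are supplied as arrays; the function and parameter names are hypothetical:

```python
import numpy as np

def filter_outliers(disparities, colors, disp_thresh, color_thresh):
    """Two-step outlier filter (sketch of steps 3.1-3.2).

    disparities: (K,) disparities of one main pixel, already transformed
        into a common stereo pair's frame (assumed preprocessing).
    colors: (K+1, 9, 3) flattened 3x3 neighbourhood colors of the main
        pixel (index 0) and of its K matched pixels.
    Returns True if the point survives both checks.
    """
    d = np.asarray(disparities, dtype=float)
    # 3.1) pairwise disparity differences; reject if the largest exceeds the threshold
    if d.size >= 2 and np.max(np.abs(d[:, None] - d[None, :])) > disp_thresh:
        return False
    # 3.2) squared color error between the main window and each matched window
    c = np.asarray(colors, dtype=float)
    for k in range(1, c.shape[0]):
        if np.mean((c[0] - c[k]) ** 2) > color_thresh:
            return False
    return True
```

A surviving pixel keeps its space points in all stereo pairs; a rejected one is dropped before the reference cloud of step 4 is built.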
Step 4 is specifically: for any main pixel p in the image acquired by the main camera C1, compute the space point of p in each spherical stereo pair. If p has at least two space points, compute the average coordinate of all its space points as a reference point; the reference points obtained by traversing all main pixels of C1 form the reference point cloud. If p has no corresponding space point, or only one, it is skipped in step 4 and processing proceeds to the next step.
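The averaging rule above can be sketched as a few lines of numpy; the per-pixel data layout and the helper name are hypothetical:

```python
import numpy as np

def build_reference_cloud(points_per_pixel):
    """Average matched 3D points per main pixel (sketch of step 4).

    points_per_pixel: dict mapping a main-pixel id to the list of 3D points
    that the surviving stereo pairs reconstructed for it (assumed layout).
    Pixels with fewer than two space points are skipped, as in the patent.
    """
    ref = {}
    for pix, pts in points_per_pixel.items():
        if len(pts) >= 2:  # need at least two matched space points
            ref[pix] = np.mean(np.asarray(pts, dtype=float), axis=0)
    return ref
```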
In step 5, the affine transformation procedure is identical for every group of three-dimensional points. Specifically: a loss function is established for the point cloud; according to the matching relation between the point cloud and the reference point cloud, an initial transformation matrix and initial translation vector are computed by singular value decomposition (SVD); the Levenberg-Marquardt optimization algorithm is then used to minimize the loss function below and obtain the final transformation matrix R and translation vector T; using the solved R and T, every space point of the cloud is transformed into the vicinity of the reference point cloud.
The loss function of a three-dimensional point cloud is expressed as:

E(R, T) = Σ_{j=1}^{N} Σ_i ‖ R·S_ij + T − M_j ‖²

where E denotes the alignment error, N is the total number of matching point pairs (the number of points of the reference point cloud), M_j is the j-th reference point, and S_ij (i = 1, 2, …) is the space point in the i-th point cloud S_i corresponding to M_j.
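As an illustrative sketch, the SVD initialization described above has the closed form known as the Kabsch algorithm for the rigid part of the transform; the Levenberg-Marquardt refinement of the same loss (e.g. with scipy.optimize.least_squares) is assumed to follow and is not shown. The function name is hypothetical:

```python
import numpy as np

def initial_rigid_transform(S, M):
    """Closed-form R, T minimising sum ||R s_j + T - m_j||^2 via SVD (Kabsch).

    S, M: (N, 3) matched cloud points and reference points. This is only
    the SVD initialisation; LM refinement is assumed to follow.
    """
    S, M = np.asarray(S, dtype=float), np.asarray(M, dtype=float)
    cS, cM = S.mean(axis=0), M.mean(axis=0)
    H = (S - cS).T @ (M - cM)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    T = cM - R @ cS
    return R, T
```

For exact correspondences the closed form already recovers R and T; the LM step matters when the correspondences are noisy or the transform is not purely rigid.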
The processing in step 6 is identical for every group of point clouds. Specifically:
6.1) For a first transformed space point G1 in transformed cloud S1', if there exists a second transformed space point in another transformed cloud S2' such that the distance between the two points is less than the distance threshold and the angle between their normal vectors is less than the angle threshold, go to 6.2); otherwise, consider that S2' contains no point corresponding to G1 and go to 6.3).
6.2) Take the second transformed space point nearest to G1 as its corresponding point G2. Project the vector G1G2 onto the direction of the normal vector n1 of G1, and take half of the projected vector as the motion vector m1 of G1; move G1 according to m1.
6.3) Take the set of all first transformed space points without corresponding points as the non-corresponding region Q. The mean of the motion vectors of the first transformed space points in the edge neighborhood of Q that do have corresponding points serves as the motion vector of Q, and the positions of all first transformed space points in Q are adjusted according to this motion vector.
6.4) Traverse all space points of the transformed cloud in the manner described above to complete the fusion optimization of this group of transformed points.
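The per-point adjustment of 6.1)–6.2) can be sketched as below; the data layout (precomputed unit normals, array clouds) and the function name are hypothetical, and the non-corresponding-region handling of 6.3) is only indicated:

```python
import numpy as np

def adjust_point(g1, n1, cloud2, normals2, dist_thresh, angle_thresh):
    """Move one point of cloud S1' toward its correspondence in S2' (step 6 sketch).

    g1: the point; n1: its unit normal; cloud2/normals2: (N, 3) arrays for
    the other transformed cloud (assumed layout). Returns the adjusted point,
    or None when no candidate passes the distance/angle gates -- the point
    then belongs to the non-corresponding region Q and is later shifted by
    the mean motion vector of Q's edge neighbourhood (step 6.3, not shown).
    """
    d = np.linalg.norm(cloud2 - g1, axis=1)
    cosang = np.clip(normals2 @ n1, -1.0, 1.0)
    ok = (d < dist_thresh) & (np.arccos(cosang) < angle_thresh)
    if not np.any(ok):
        return None
    g2 = cloud2[ok][np.argmin(d[ok])]        # nearest gated point as correspondence G2
    m1 = 0.5 * np.dot(g2 - g1, n1) * n1      # half of G1G2 projected onto n1
    return g1 + m1
```

Moving each cloud half of the projected offset pulls the two clouds toward their common surface rather than snapping one onto the other.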
Based on the spherical camera model, the present invention first reconstructs each spherical stereo pair separately, removes redundant points via depth and color consistency checks, and then completes point cloud fusion by fine-tuning point positions according to matching and normal-vector information between the clouds. The point cloud generated for the large field-of-view image is thus more complete and accurate, making the method suitable for large field-of-view reconstruction and improving application effectiveness in fields such as robot navigation and video surveillance.
The invention has the following advantages:
(1) Being based on the spherical model, it is applicable to a variety of large field-of-view image acquisition devices.
(2) Outlier filtering based on disparity and color consistency constraints effectively removes redundant points, making subsequent point cloud fusion more accurate.
(3) The point cloud fusion operation effectively compensates for reconstruction errors caused by any non-central projection in the system, so that the generated single point cloud is more complete and accurate.
Description of the drawings
Fig. 1 illustrates the outlier filtering method.
Fig. 2 illustrates the reference point cloud acquisition method.
Fig. 3 is a schematic diagram of point cloud fusion.
Fig. 4 shows the point cloud fusion results of the real system.
Specific embodiment
The present invention is further described below with reference to the drawings and embodiments.
An embodiment implemented according to the complete method of the summary is as follows:
One. Outlier filtering
As shown in Fig. 1, the system is based on the spherical stereo model and contains multiple spherical camera models: one main camera C1 and multiple auxiliary cameras Ck (k = 2, 3, …). The field of view of the main camera overlaps that of every auxiliary camera, forming stereo matching pairs (C1-C2 and C1-C3); all final reconstruction results are fused in the main camera's coordinate system.
For a point p in the image of the main camera C1, compute its spherical disparities γ1, γ2 and corresponding space points P1, P2 in the stereo pairs C1-C2 and C1-C3. From the coordinates of P1 and the pose of C3 relative to C1, compute the virtual disparity γ1' of P1 under C1-C3. Compare the difference between γ1' and γ2 and filter out points with large error.
For the same point p, obtain the matched pixels p1, p2 in the stereo pairs C1-C2 and C1-C3. Compute the pixel-wise squared color error between p and each of p1, p2 over 3 × 3 image neighborhood windows, and filter out points with large error.
As shown in Fig. 1, taking the main camera C1 and two auxiliary cameras C2, C3 as an example:
3.1) For any pixel p in the image acquired by C1, compute the first and second spherical disparities γ1, γ2 and the corresponding first and second space points P1, P2 in the first spherical stereo pair C1-C2 and the second spherical stereo pair C1-C3. Using the pose variation relation of the second stereo pair, compute the virtual disparity γ1' of P1 under the second pair; compare the difference between γ1' and γ2 and filter out space points with large error.
3.2) For any pixel p, obtain the matched first pixel p1 and second pixel p2 in the pairs C1-C2 and C1-C3 respectively. Compute the squared error of all pixel color values over the 3 × 3 neighborhood windows of p and of each matched pixel p1, p2, and filter out the space points of pixels whose squared error exceeds the color constraint threshold.
Two. Obtaining the reference point cloud
As shown in Fig. 2, for any point p in the image of the main camera C1, compute the corresponding space points P1, P2 in the stereo pairs C1-C2 and C1-C3. If both P1 and P2 exist, compute the average coordinate of the two space points, denoted P. Traversing all points yields the averaged reference point cloud. The point clouds S1, S2 generated by the pairs C1-C2 and C1-C3 are shown in Fig. 3(a); a side view of the reference point cloud is shown in Fig. 3(b).
Three. Transforming the original point clouds to the reference point cloud region
Minimizing formula (1) with the Levenberg-Marquardt optimization algorithm yields the affine transformation parameters of point clouds S1 and S2. Applying each affine transformation to its point cloud approximately transforms the original clouds to the reference point cloud region, giving the transformed clouds S1' and S2', as shown in Fig. 3(c).
Four. Fine-tuning point cloud positions
As shown in Fig. 3(d), for any point G1 of cloud S1', compute its normal vector n1. If there exists a point G2 in cloud S2' such that the difference between the normal directions of G1 and G2 and the distance between the two points are both below their thresholds, then G2 is the corresponding point of G1. Project G1G2 onto the direction of n1 and take half of it as the motion vector m1 of G1. Traverse clouds S1' and S2' in this manner.
As shown in Fig. 3(e), for a region Q in which no corresponding points can be obtained, take its neighborhood and search the edge of the neighborhood for space points that do have corresponding points in the other cloud. Compute the mean of the motion vectors of those points (the bold line segment in Fig. 3(e)) and assign it to all points in region Q.
A large field-of-view image is one with a large field angle, close to 360° in the horizontal direction; it can be obtained by systems such as image stitching, fisheye cameras, or catadioptric cameras. The effectiveness of this method was evaluated on a single-camera multi-mirror large field-of-view catadioptric system built for this purpose. The system comprises one perspective camera and five spherical mirrors; its basic structure is similar to that used in the patent "A compact large field-of-view light field acquisition system and its analysis and optimization method", but it is not limited to the combination of parabolic mirrors and the central projection of a telecentric camera: it applies equally to the non-central spherical cameras formed by combining an ordinary perspective camera with parabolic or spherical mirrors.
In the verification system, the spherical mirror curvature radius is 120 mm, the base diameter is 51 mm, the horizontal baseline is BX = 50 mm, and the vertical baseline is BZ = 80 mm. A Hikvision MV-CA030-10GC perspective camera with a resolution of 1920 × 1440 was used. The reconstructed point cloud fusion results of a calibration board and of three perpendicular planes were analyzed qualitatively and quantitatively.
Fig. 4(a) shows the original single-group point cloud reconstruction; Fig. 4(b) shows the result of directly superimposing the two groups of point clouds. Owing to the non-central characteristic of the system, a large spatial offset exists between the two groups. After fusing the clouds with the algorithm of the invention, as shown in Fig. 4(c), the offset between the two groups is significantly reduced and they merge better into a single point cloud.
Quantitative precision results are shown in Table 1.
Table 1. Quantitative analysis of the fusion accuracy of the real system
For the calibration board, the mean distance μ of each point to the fitted plane is computed, together with the ratio e of this mean to the average distance of all points from the virtual camera. For the three perpendicular planes, the angle θ between the normal vectors of each two adjacent fitted planes is computed, and the average angular error θe, i.e. the mean deviation of θ from the ideal 90°, is calculated.
The fused point cloud surface is smoother than that of a single group, and the plane angles are closer to ideal: for the calibration board the error is reduced by about 30%, and for the three perpendicular planes the angular error is reduced by about 15%.
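As an illustrative sketch of the evaluation, the plane-fit distance μ and the inter-plane angle θ can be computed as below; `fit_plane`, `plane_metrics`, and `normal_angle_deg` are hypothetical helper names, and the virtual-camera distance ratio e is omitted:

```python
import numpy as np

def fit_plane(P):
    """Least-squares plane through points P (N, 3): unit normal and centroid."""
    P = np.asarray(P, dtype=float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return Vt[-1], c                     # smallest-singular-value direction = normal

def plane_metrics(P):
    """Mean point-to-plane distance mu for a fitted board."""
    n, c = fit_plane(P)
    return np.mean(np.abs((np.asarray(P, float) - c) @ n))

def normal_angle_deg(P1, P2):
    """Angle between the normals of two fitted planes, in degrees."""
    n1, _ = fit_plane(P1)
    n2, _ = fit_plane(P2)
    cosang = np.clip(abs(n1 @ n2), 0.0, 1.0)   # a normal's sign is arbitrary
    return np.degrees(np.arccos(cosang))
```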
As the embodiment shows, the present invention effectively improves three-dimensional point cloud quality and performs reliable fusion of multiple point clouds, finally yielding a more complete and smooth single point cloud. The point cloud obtained by the method, shown in Fig. 4(c), is clearly better than those in Fig. 4(a) and Fig. 4(b). By improving the fusion precision and accuracy of the point cloud, the three-dimensional point cloud in the final large field-of-view image is closer to the real object, and the method can therefore be better applied in fields such as robot navigation, surveillance, video conferencing, and scene reconstruction.
Any modifications and changes made to the present invention within its spirit and the protection scope of the claims all fall within the protection scope of the present invention.
Claims (6)
1. A large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models, characterized by comprising the following steps:
Step 1: arranging multiple spherical camera models facing an object to be photographed, one spherical camera model serving as a main camera and the rest as auxiliary cameras, the field of view of the main camera overlapping that of every auxiliary camera; calibrating the main camera and each auxiliary camera separately, and obtaining the pose variation relation of each auxiliary camera relative to the main camera;
Step 2: the main camera and all auxiliary cameras simultaneously photographing the object to obtain respective images, the main camera forming a spherical stereo pair with each auxiliary camera, and obtaining a corresponding group of three-dimensional points from the images of each spherical stereo pair using a stereo matching algorithm;
Step 3: collecting all matching point pairs present across the different groups of three-dimensional point clouds, computing the disparity and color constraints of the matching pairs, setting a disparity threshold and a color constraint threshold, and filtering out matching pairs whose disparity error exceeds the disparity threshold or whose color constraint exceeds the color constraint threshold;
Step 4: among the matching point pairs remaining after filtering, computing the coordinate mean of each pair to obtain a reference point, and traversing all matching pairs to obtain a reference point cloud composed of the reference points;
Step 5: according to the reference point cloud obtained in step 4, transforming each group of three-dimensional points processed in step 3 by an affine transformation method to obtain transformed point clouds;
Step 6: based on the normal vector and distance relations among the transformed point clouds, optimizing the positions of the transformed clouds and completing point cloud fusion, realizing three-dimensional reconstruction and obtaining the final single fused point cloud.
2. The large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models according to claim 1, characterized in that: in step 1, each camera is calibrated independently, and all final reconstruction results are transformed into the camera coordinate system of the main camera for fusion.
3. The large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models according to claim 1, characterized in that in step 3 the following two-step outlier filtering algorithm is used:
3.1) for any main pixel p in the image acquired by the main camera C1, computing, in each spherical stereo pair, the spherical disparity of the pixel matched to p and the corresponding space point of p; transforming the spherical disparities of the stereo pairs into a common stereo pair according to the respective pose variation relations to obtain multiple disparity values; if the maximum pairwise difference of the disparity values exceeds the disparity threshold, filtering out the space points of p in every spherical stereo pair, otherwise retaining them;
3.2) for the main pixel p and each pixel matched to it in a spherical stereo pair, computing the squared error of the pixel color values of all pixels in their respective 3 × 3 neighborhood windows, the squared error serving as the color constraint and being compared with the color constraint threshold; if the color constraint exceeds the threshold, filtering out the space point of p in that stereo pair, otherwise retaining it;
3.3) repeating steps 3.1)–3.2) to traverse all matching point pairs.
4. The large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models according to claim 1, characterized in that step 4 is specifically: for any main pixel p in the image acquired by the main camera C1, computing the space point of p in each spherical stereo pair; if p has at least two space points, computing the average coordinate of all its space points as a reference point, the reference points obtained by traversing all main pixels of C1 forming the reference point cloud; if p has no corresponding space point, or only one, skipping it in step 4 and proceeding to the next step.
5. The large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models according to claim 1, characterized in that in step 5 the affine transformation procedure is identical for every group of three-dimensional points, specifically: establishing a loss function for the point cloud; computing an initial transformation matrix and initial translation vector by singular value decomposition (SVD) according to the matching relation between the point cloud and the reference point cloud; minimizing the loss function below with the Levenberg-Marquardt optimization algorithm to obtain the final transformation matrix R and translation vector T; and transforming every space point of the cloud into the vicinity of the reference point cloud using the solved R and T;
the loss function of a three-dimensional point cloud being expressed as:

E(R, T) = Σ_{j=1}^{N} Σ_i ‖ R·S_ij + T − M_j ‖²

where E denotes the alignment error, N is the total number of matching point pairs (the number of points of the reference point cloud), M_j is the j-th reference point, and S_ij (i = 1, 2, …) is the space point in the i-th point cloud S_i corresponding to M_j.
6. The large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models according to claim 1, characterized in that the processing in step 6 is identical for every group of point clouds, specifically:
6.1) for a first transformed space point G1 in transformed cloud S1', if there exists a second transformed space point in another transformed cloud S2' such that the distance between the two points is less than the distance threshold and the angle between their normal vectors is less than the angle threshold, proceeding to 6.2); otherwise considering that S2' contains no point corresponding to G1 and proceeding to 6.3);
6.2) taking the second transformed space point nearest to G1 as its corresponding point G2, projecting the vector G1G2 onto the direction of the normal vector n1 of G1, taking half of the projected vector as the motion vector m1 of G1, and moving G1 according to m1;
6.3) taking the set of all first transformed space points without corresponding points as a non-corresponding region Q, taking the mean of the motion vectors of the first transformed space points in the edge neighborhood of Q that have corresponding points as the motion vector of Q, and adjusting the positions of all first transformed space points in Q according to this motion vector;
6.4) traversing all space points of the transformed cloud in the manner described above to complete the fusion optimization of this group of transformed points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910492689.7A CN110363838B (en) | 2019-06-06 | 2019-06-06 | Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910492689.7A CN110363838B (en) | 2019-06-06 | 2019-06-06 | Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363838A (en) | 2019-10-22 |
CN110363838B (en) | 2020-12-15 |
Family
ID=68216769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910492689.7A Active CN110363838B (en) | 2019-06-06 | 2019-06-06 | Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363838B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103886595A (en) * | 2014-03-19 | 2014-06-25 | 浙江大学 | Catadioptric camera self-calibration method based on generalized unified model |
US20160232705A1 (en) * | 2015-02-10 | 2016-08-11 | Mitsubishi Electric Research Laboratories, Inc. | Method for 3D Scene Reconstruction with Cross-Constrained Line Matching |
CN108389157A (en) * | 2018-01-11 | 2018-08-10 | 江苏四点灵机器人有限公司 | A kind of quick joining method of three-dimensional panoramic image |
Non-Patent Citations (2)
Title |
---|
XIANG ZHIYU, ET AL: "Compact omnidirectional multi-stereo vision system for 3D reconstruction", APPLIED OPTICS 57 * |
ZHOU YANBING: "Calibration and 3D reconstruction of a multi-mirror catadioptric system", China Master's Theses Full-text Database, Engineering Science & Technology II * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111536871A (en) * | 2020-05-07 | 2020-08-14 | 武汉大势智慧科技有限公司 | Accurate calculation method for volume variation of multi-temporal photogrammetric data |
CN112446952A (en) * | 2020-11-06 | 2021-03-05 | 杭州易现先进科技有限公司 | Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium |
CN112446952B (en) * | 2020-11-06 | 2024-01-26 | 杭州易现先进科技有限公司 | Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium |
CN112861674A (en) * | 2021-01-28 | 2021-05-28 | 中振同辂(江苏)机器人有限公司 | Point cloud optimization method based on ground features and computer readable storage medium |
JP2023519466A (en) * | 2021-03-04 | 2023-05-11 | チョーチアン センスタイム テクノロジー デベロップメント カンパニー,リミテッド | POINT CLOUD MODEL CONSTRUCTION METHOD, APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM |
CN112837419B (en) * | 2021-03-04 | 2022-06-24 | 浙江商汤科技开发有限公司 | Point cloud model construction method, device, equipment and storage medium |
WO2022183657A1 (en) * | 2021-03-04 | 2022-09-09 | 浙江商汤科技开发有限公司 | Point cloud model construction method and apparatus, electronic device, storage medium, and program |
CN112837419A (en) * | 2021-03-04 | 2021-05-25 | 浙江商汤科技开发有限公司 | Point cloud model construction method, device, equipment and storage medium |
CN113012238A (en) * | 2021-04-09 | 2021-06-22 | 南京星顿医疗科技有限公司 | Method for rapid calibration and data fusion of multi-depth camera |
CN113012238B (en) * | 2021-04-09 | 2024-04-16 | 南京星顿医疗科技有限公司 | Method for quick calibration and data fusion of multi-depth camera |
CN113674333A (en) * | 2021-09-02 | 2021-11-19 | 上海交通大学 | Calibration parameter precision verification method, medium and electronic equipment |
CN113674333B (en) * | 2021-09-02 | 2023-11-07 | 上海交通大学 | Precision verification method and medium for calibration parameters and electronic equipment |
CN114173106A (en) * | 2021-12-01 | 2022-03-11 | 北京拙河科技有限公司 | Real-time video stream fusion processing method and system based on light field camera |
Also Published As
Publication number | Publication date |
---|---|
CN110363838B (en) | 2020-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363838A (en) | Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model | |
WO2021120407A1 (en) | Parallax image stitching and visualization method based on multiple pairs of binocular cameras | |
CN112505065B (en) | Method for detecting surface defects of large part by indoor unmanned aerial vehicle | |
CN108416812B (en) | Calibration method of single-camera mirror image binocular vision system | |
WO2019100933A1 (en) | Method, device and system for three-dimensional measurement | |
CN109919911B (en) | Mobile three-dimensional reconstruction method based on multi-view photometric stereo | |
Furukawa et al. | Accurate camera calibration from multi-view stereo and bundle adjustment | |
CN105243637B (en) | A panoramic image stitching method based on three-dimensional laser point cloud | |
WO2018076154A1 (en) | Spatial positioning calibration of fisheye camera-based panoramic video generating method | |
CN111325794A (en) | Visual simultaneous localization and map construction method based on depth convolution self-encoder | |
CN108932725B (en) | Scene flow estimation method based on convolutional neural network | |
CN109115184B (en) | Collaborative measurement method and system based on non-cooperative target | |
CN106934809A (en) | Rapid docking navigation method for autonomous aerial refueling of unmanned aerial vehicles based on binocular vision | |
US8867826B2 (en) | Disparity estimation for misaligned stereo image pairs | |
CN110189400B (en) | Three-dimensional reconstruction method, three-dimensional reconstruction system, mobile terminal and storage device | |
CN106056622B (en) | A multi-view depth video restoration method based on Kinect cameras | |
CN104537707A (en) | Image space type stereo vision on-line movement real-time measurement system | |
CN110070598A (en) | Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof | |
CN104835158A (en) | 3D point cloud acquisition method based on Gray-code structured light and epipolar constraints | |
JP7502440B2 (en) | Method for measuring the topography of an environment | |
CN111009030A (en) | Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method | |
CN108981608A (en) | A novel line-structured-light vision system and calibration method | |
CN108269234A (en) | A panoramic camera lens attitude estimation method and panoramic camera | |
Liu et al. | Dense stereo matching strategy for oblique images that considers the plane directions in urban areas | |
CN116625258A (en) | Chain spacing measuring system and chain spacing measuring method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||