CN103489169A - Improved depth data splicing method based on least square method - Google Patents

Improved depth data splicing method based on least square method

Info

Publication number
CN103489169A
CN103489169A (application CN201310354009.8A)
Authority
CN
China
Prior art keywords
matrix
point
triangle
epsilon (ε)
representative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310354009.8A
Other languages
Chinese (zh)
Inventor
卢万崎
常智勇
黄亮
聂寇准
卢津
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201310354009.8A priority Critical patent/CN103489169A/en
Publication of CN103489169A publication Critical patent/CN103489169A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an improved depth data splicing method based on the least squares method. The method first obtains the three-dimensional coordinates of marker points in different coordinate systems, uses a "point-plane-side-point" hierarchical search to find the corresponding marker points across the coordinate systems, and then applies an improved least squares method to register the sets of corresponding marker points in the two coordinate systems, thereby splicing the depth data from the different coordinate systems. The method effectively eliminates mismatches between corresponding marker points and improves both the matching efficiency and the robustness of the depth data registration.

Description

An improved depth data splicing method based on the least squares method
Technical field
The invention belongs to the field of optical measurement technology and relates to the problem of splicing point clouds expressed in different coordinate systems; specifically, it is an improved depth data splicing method based on the least squares method.
Background art
To obtain complete three-dimensional data of the surface of a measured object, the object must be measured from several viewing angles, and the depth data from the repeated measurements must then be unified into a single coordinate system, i.e. the data must be spliced. Among depth data splicing approaches, methods based on marker points are not restricted by the shape of the measured object; the marker points are inexpensive to make, precisely locatable and easy to identify; and the splicing algorithm has low computational cost and requires no iteration, so it can meet the requirements of automatic, high-precision data splicing.
A depth data splicing method based on non-coded marker points must first determine the correspondence between the marker points under the different coordinate systems, and then splice the depth data under the different coordinate systems according to a splicing method. The correctness and efficiency of this correspondence search directly affect the splicing accuracy and the real-time performance of the measurement; and, provided the correspondence search is correct, the accuracy of the splicing algorithm likewise determines the accuracy of the depth data splicing and of the optical measurement.
For optical three-dimensional measurement based on non-coded marker points, accurately obtaining the correspondence of marker points between different coordinate systems in real time, together with a more accurate splicing method, are the key technologies of three-dimensional optical measurement. The present invention proposes a technique that accurately obtains this correspondence in real time and splices the data with higher robustness.
Summary of the invention
Technical problem solved
To overcome the problems of the prior art, the present invention proposes an improved depth data splicing method based on the least squares method.
Technical scheme
The present invention splices depth data acquired from different viewing angles (coordinate systems) using non-coded marker points. First, the three-dimensional coordinates of the marker points under the different coordinate systems are obtained; a "point-plane-side-point" hierarchical search is then used to find the corresponding marker points across the coordinate systems; finally, an improved least squares method registers the sets of corresponding marker points in the two coordinate systems, thereby splicing the depth data from the different coordinate systems.
Technical scheme of the present invention is:
The improved depth data splicing method based on the least squares method of the invention is characterized by the following steps:
Step 1: Attach marker points to the surface of the measured object and photograph the object from two different viewing angles such that the two shots overlap and the overlapping region contains at least 3 marker points. Obtain the three-dimensional coordinates of the marker points under each of the two viewing angles: the matrix of marker-point coordinates captured under view A is P_A, and that captured under view B is P_B;
Step 2: From the marker-point coordinate matrices P_A = [p_1, p_2, ..., p_i, ..., p_n]^T and P_B = [q_1, q_2, ..., q_j, ..., q_m]^T obtained in Step 1, find a pair of points that represent the same physical marker under the reference coordinate system, where the element p_i = {x_pi, y_pi, z_pi} of P_A is the coordinate of point p_i in the view A coordinate system and the element q_j = {x_qj, y_qj, z_qj} of P_B is the coordinate of point q_j in the view B coordinate system:

From P_A build the distance matrix dA = [da_ij] (i, j = 1, ..., n), with da_ij = ‖p_i - p_j‖, i.e. the element da_ij of dA is the distance between the elements p_i and p_j of P_A. Likewise, from P_B build the distance matrix dB = [db_ij] (i, j = 1, ..., m), with db_ij = ‖q_i - q_j‖, i.e. the element db_ij of dB is the distance between the elements q_i and q_j of P_B.
Compare each row of dA with each row of dB and count the number of identical elements in the two rows being compared; the pair of rows da_r and db_s with the greatest number of identical elements indicates that the r-th point of P_A and the s-th point of P_B are the same point under the reference coordinate system, i.e. the first group of corresponding points. Two elements are considered identical when the difference of their values is smaller than the three-dimensional coordinate extraction accuracy;
Step 3: Modify matrix P_A: move its r-th row to the 1st row and shift the original rows 1 to r-1 down to rows 2 to r, obtaining the new marker coordinate matrix P_A1. Form a triangle from the point represented by the first row of P_A1 and the points represented by any other two rows of P_A1, compute its area, and collect these areas into the area matrix S_A, where the element sa_ij is the area of the triangle formed by the point in the first row of P_A1 and the points in rows i+1 and j+1 of P_A1; convert S_A into the upper-triangular matrix T_A;
Modify matrix P_B: move its s-th row to the 1st row and shift the original rows 1 to s-1 down to rows 2 to s, obtaining the new marker coordinate matrix P_B1. Form a triangle from the point represented by the first row of P_B1 and the points represented by any other two rows of P_B1, compute its area, and collect these areas into the area matrix S_B, where the element sb_ij is the area of the triangle formed by the point in the first row of P_B1 and the points in rows i+1 and j+1 of P_B1; convert S_B into the upper-triangular matrix T_B;
Step 4: Compare every nonzero element of T_A with every nonzero element of T_B. If the triangle represented by element T_A(i, j) and the triangle represented by element T_B(r, t) are both area-compatible and side-length-compatible, put the points corresponding to rows i+1 and j+1 of P_A1 into the corresponding-marker set pA1 of P_A1, and put the points corresponding to rows r+1 and t+1 of P_B1 into the corresponding-marker set pB1 of P_B1, where area compatibility and side-length compatibility are judged as follows:
Set an area relative error limit ε_s; if

-ε_s < (T_A(i, j) - T_B(r, t)) / T_A(i, j) < ε_s

is satisfied, the triangle represented by T_A(i, j) and the triangle represented by T_B(r, t) are area-compatible;
The two sides of the triangle represented by T_A(i, j) that pass through the first group of corresponding points have lengths L_A(i, 1) and L_A(j, 1); the two sides of the triangle represented by T_B(r, t) that pass through the first group of corresponding points have lengths L_B(r, 1) and L_B(t, 1). If at least one of

-ε_l < (L_A(i, 1) - L_B(r, 1)) / L_A(i, 1) < ε_l  and  -ε_l < (L_A(i, 1) - L_B(t, 1)) / L_A(i, 1) < ε_l

holds, and at least one of

-ε_l < (L_A(j, 1) - L_B(r, 1)) / L_A(j, 1) < ε_l  and  -ε_l < (L_A(j, 1) - L_B(t, 1)) / L_A(j, 1) < ε_l

holds, the triangle represented by T_A(i, j) and the triangle represented by T_B(r, t) are side-length-compatible, where ε_l is the length relative error limit;
Step 5: Obtain, by the improved least squares method, the transformation matrices R and T between the corresponding marker points under the two viewing angles:

Compute the barycentre C_p of the corresponding-marker set pA1 = {p_i | p_i ∈ pA1, i = 1, 2, ..., N} of matrix P_A1 and the barycentre C_q of the corresponding-marker set pB1 = {q_i | q_i ∈ pB1, i = 1, 2, ..., N} of matrix P_B1. Translate the corresponding markers in pA1 and pB1 with respect to their barycentres:

α_i = p_i - C_p,  β_i = q_i - C_q.

Let C_1 = [α_1, α_2, ..., α_i, ..., α_N]^T and C_2 = [β_1, β_2, ..., β_i, ..., β_N]^T. Solve the least-squares problem C_1 R = C_2, obtaining the rotation matrix R that minimizes the objective function ‖C_1 R - C_2‖², and obtain the translation vector T = C_q - R·C_p.
Beneficial effect
With the method of the invention, mismatches between corresponding marker points are effectively eliminated, and both the matching efficiency and the robustness of the depth data registration are improved.
Description of the drawings
Fig. 1: the left and right images captured under view A;
Fig. 2: the left and right images captured under view B;
Fig. 3: illustration of the "point-plane-side-point" hierarchical search;
Fig. 4: mean error deviation of the three splicing methods under random noise with mean 0.05;
Fig. 5: mean error deviation of the three splicing methods under random noise with mean 0.1.
Embodiment
The present invention is described below with reference to a specific embodiment:
In this embodiment, the improved depth data splicing method based on the least squares method comprises the following steps:
Step 1: Build the experimental platform and paste non-coded marker points onto the surface of the measured tool box. Calibrate two cameras (model DMK-21BUO4) to obtain their intrinsic and extrinsic parameters, and photograph the tool box with both cameras simultaneously to obtain the left and right images under the first viewing angle A, as shown in Fig. 1. Extract the two-dimensional coordinates of the marker points in the left and right images, perform two-dimensional matching, and obtain the three-dimensional coordinates of the marker points in this coordinate system by triangulation; the matrix of marker-point coordinates captured under view A is P_A. Unwrap the phase to obtain the depth data of the tool box under this viewing angle. Translate the two cameras or the tool box and photograph the tool box from viewing angle B to obtain the left and right images shown in Fig. 2; solve for the three-dimensional coordinates of the marker points captured under view B, forming the matrix P_B, and unwrap the phase to obtain the depth data of the tool box under this viewing angle. The shots under view A and view B overlap, and the overlapping region contains at least 3 marker points.
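As an illustration of the triangulation in Step 1, the sketch below (assuming OpenCV and NumPy) shows how matched two-dimensional marker centres from the left and right cameras could be converted into the marker coordinate matrix P_A or P_B; the projection matrices and the 2D marker-centre arrays are assumed inputs from the calibration and extraction stages and are not taken from the patent text itself.

```python
import numpy as np
import cv2

def triangulate_markers(P_left, P_right, pts_left, pts_right):
    """Recover 3D marker coordinates from matched 2D marker centres.

    P_left, P_right : 3x4 camera projection matrices from calibration.
    pts_left, pts_right : 2xN arrays of matched marker centres (pixels).
    Returns an N x 3 array, one row per marker (the matrix P_A or P_B).
    """
    pts_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)  # 4xN homogeneous
    return (pts_h[:3] / pts_h[3]).T                                      # N x 3 Euclidean
```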
Step 2: From the marker-point coordinate matrices P_A = [p_1, p_2, ..., p_i, ..., p_n]^T and P_B = [q_1, q_2, ..., q_j, ..., q_m]^T obtained in Step 1, find a pair of points that represent the same physical marker under the reference coordinate system, where the element p_i = {x_pi, y_pi, z_pi} of P_A is the coordinate of point p_i in the view A coordinate system and the element q_j = {x_qj, y_qj, z_qj} of P_B is the coordinate of point q_j in the view B coordinate system:

From P_A build the distance matrix dA = [da_ij] (i, j = 1, ..., n), with da_ij = ‖p_i - p_j‖, i.e. the element da_ij of dA is the distance between the elements p_i and p_j of P_A. Likewise, from P_B build the distance matrix dB = [db_ij] (i, j = 1, ..., m), with db_ij = ‖q_i - q_j‖, i.e. the element db_ij of dB is the distance between the elements q_i and q_j of P_B.
Compare each row of dA with each row of dB and count the number of identical elements in the two rows being compared; the pair of rows da_r and db_s with the greatest number of identical elements indicates that the r-th point of P_A and the s-th point of P_B are the same point under the reference coordinate system, i.e. the first group of corresponding points. Two elements are considered identical when the difference of their values is smaller than the three-dimensional coordinate extraction accuracy.
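A minimal NumPy sketch of the "point" level of the search just described: it builds the distance matrices dA and dB and returns the row pair with the largest number of near-equal entries. The function name and the tolerance parameter eps (standing in for the three-dimensional coordinate extraction accuracy) are illustrative assumptions.

```python
import numpy as np

def first_corresponding_pair(PA, PB, eps=1e-3):
    """Find the first group of corresponding markers (r, s) as in Step 2.

    PA : n x 3 marker coordinates under view A; PB : m x 3 under view B.
    eps : tolerance on the order of the 3D coordinate extraction accuracy.
    """
    dA = np.linalg.norm(PA[:, None, :] - PA[None, :, :], axis=2)   # n x n distances
    dB = np.linalg.norm(PB[:, None, :] - PB[None, :, :], axis=2)   # m x m distances

    best, r, s = -1, 0, 0
    for i in range(dA.shape[0]):
        for j in range(dB.shape[0]):
            # count entries of row i of dA that have a near-equal entry in row j of dB
            has_match = np.any(np.abs(dA[i][:, None] - dB[j][None, :]) < eps, axis=1)
            count = int(np.count_nonzero(has_match))
            if count > best:
                best, r, s = count, i, j
    return r, s
```

The indices r and s returned here are the rows that Step 3 moves to the front of P_A and P_B.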
Step 3: Modify matrix P_A: move its r-th row to the 1st row and shift the original rows 1 to r-1 down to rows 2 to r, obtaining the new marker coordinate matrix P_A1. Form a triangle from the point represented by the first row of P_A1 and the points represented by any other two rows of P_A1, compute its area, and collect these areas into the area matrix S_A, where the element sa_ij is the area of the triangle formed by the point in the first row of P_A1 and the points in rows i+1 and j+1 of P_A1; convert S_A into the upper-triangular matrix T_A.

Modify matrix P_B: move its s-th row to the 1st row and shift the original rows 1 to s-1 down to rows 2 to s, obtaining the new marker coordinate matrix P_B1. Form a triangle from the point represented by the first row of P_B1 and the points represented by any other two rows of P_B1, compute its area, and collect these areas into the area matrix S_B, where the element sb_ij is the area of the triangle formed by the point in the first row of P_B1 and the points in rows i+1 and j+1 of P_B1; convert S_B into the upper-triangular matrix T_B.
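A sketch of the reordering and of the triangle-area matrix of Step 3, assuming NumPy; the same two helpers would be applied to P_B and the matched row s to obtain P_B1 and T_B. For brevity the area matrix is built directly in upper-triangular form, i.e. S_A and T_A are merged into one step; the helper names are illustrative.

```python
import numpy as np

def move_row_to_front(P, r):
    """Step 3 reordering: row r becomes row 0, former rows 0..r-1 shift down by one."""
    return np.vstack([P[r:r + 1], P[:r], P[r + 1:]])

def triangle_area_matrix(P1):
    """Upper-triangular matrix T whose entry (i, j), i < j, is the area of the
    triangle formed by the first marker P1[0] and the markers P1[i+1], P1[j+1]."""
    n = P1.shape[0]
    T = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        for j in range(i + 1, n - 1):
            a = P1[i + 1] - P1[0]
            b = P1[j + 1] - P1[0]
            T[i, j] = 0.5 * np.linalg.norm(np.cross(a, b))   # triangle area via the cross product
    return T
```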
Step 4: Compare every nonzero element of T_A with every nonzero element of T_B. If the triangle represented by element T_A(i, j) and the triangle represented by element T_B(r, t) are both area-compatible and side-length-compatible, put the points corresponding to rows i+1 and j+1 of P_A1 into the corresponding-marker set pA1 of P_A1, and put the points corresponding to rows r+1 and t+1 of P_B1 into the corresponding-marker set pB1 of P_B1, where area compatibility and side-length compatibility are judged as follows:

Set an area relative error limit ε_s; if

-ε_s < (T_A(i, j) - T_B(r, t)) / T_A(i, j) < ε_s

is satisfied, the triangle represented by T_A(i, j) and the triangle represented by T_B(r, t) are area-compatible.

The two sides of the triangle represented by T_A(i, j) that pass through the first group of corresponding points have lengths L_A(i, 1) and L_A(j, 1); the two sides of the triangle represented by T_B(r, t) that pass through the first group of corresponding points have lengths L_B(r, 1) and L_B(t, 1). If at least one of

-ε_l < (L_A(i, 1) - L_B(r, 1)) / L_A(i, 1) < ε_l  and  -ε_l < (L_A(i, 1) - L_B(t, 1)) / L_A(i, 1) < ε_l

holds, and at least one of

-ε_l < (L_A(j, 1) - L_B(r, 1)) / L_A(j, 1) < ε_l  and  -ε_l < (L_A(j, 1) - L_B(t, 1)) / L_A(j, 1) < ε_l

holds, the triangle represented by T_A(i, j) and the triangle represented by T_B(r, t) are side-length-compatible, where ε_l is the length relative error limit.
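The compatibility test of Step 4 might be expressed as follows; this is a sketch under the assumptions that T_A and T_B come from the previous helper, that LA[i] = ‖P_A1[i+1] - P_A1[0]‖ (and LB likewise) holds the side lengths through the first corresponding point, and that the default values of ε_s and ε_l are illustrative rather than taken from the patent.

```python
def triangles_compatible(TA, TB, LA, LB, i, j, r, t, eps_s=0.05, eps_l=0.02):
    """Area and side-length compatibility between triangle (i, j) of view A
    and triangle (r, t) of view B, following Step 4."""
    # area compatibility: relative area difference within eps_s
    area_ok = abs(TA[i, j] - TB[r, t]) / TA[i, j] < eps_s

    def close(a, b):                       # relative length difference within eps_l
        return abs(a - b) / a < eps_l

    # each of the two view-A sides through the first corresponding point must match
    # at least one of the two view-B sides through that point
    side_ok = (close(LA[i], LB[r]) or close(LA[i], LB[t])) and \
              (close(LA[j], LB[r]) or close(LA[j], LB[t]))
    return area_ok and side_ok
```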
Step 5: Obtain, by the improved least squares method, the transformation matrices R and T between the corresponding marker points under the two viewing angles, and thereby splice the depth data:

Compute the barycentre C_p of the corresponding-marker set pA1 = {p_i | p_i ∈ pA1, i = 1, 2, ..., N} of matrix P_A1 and the barycentre C_q of the corresponding-marker set pB1 of matrix P_B1; the barycentre here is the geometric centre of the respective corresponding-marker set, i.e. each of its three coordinates is the average of that coordinate over the set's elements. Translate the corresponding markers in pA1 and pB1 with respect to their barycentres:

α_i = p_i - C_p,  β_i = q_i - C_q.

Let C_1 = [α_1, α_2, ..., α_i, ..., α_N]^T and C_2 = [β_1, β_2, ..., β_i, ..., β_N]^T. Solve the least-squares problem C_1 R = C_2, obtaining the rotation matrix R that minimizes the objective function ‖C_1 R - C_2‖², and obtain the translation vector T = C_q - R·C_p.
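For orientation, the sketch below solves the centred least-squares problem C_1 R = C_2 of Step 5 with NumPy and forms the translation T = C_q - R·C_p in the row-vector convention. It shows only the baseline centroid-plus-least-squares solve; the specific refinement that makes the patent's method an "improved" least squares method is not spelled out in the extracted text, and the plain lstsq solution is not constrained to be an orthogonal rotation.

```python
import numpy as np

def register_markers(pA1, pB1):
    """Estimate (R, T) between corresponding marker sets (Step 5, baseline solve).

    pA1, pB1 : N x 3 arrays of corresponding markers under views A and B.
    Returns (R, T) such that pA1 @ R + T approximates pB1.
    """
    Cp = pA1.mean(axis=0)            # barycentre of the view-A correspondences
    Cq = pB1.mean(axis=0)            # barycentre of the view-B correspondences
    C1 = pA1 - Cp                    # alpha_i = p_i - C_p, stacked as rows
    C2 = pB1 - Cq                    # beta_i  = q_i - C_q, stacked as rows

    # least-squares solution of C1 R = C2 (not forced to be orthogonal here)
    R, *_ = np.linalg.lstsq(C1, C2, rcond=None)

    T = Cq - Cp @ R                  # translation, following T = C_q - R C_p in the text
    return R, T
```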
Random noise is added to the sets pA1 and pB1 obtained in Step 4, and the splicing-error drift of the improved least squares method of Step 5 is compared with that of the SVD method and of the ordinary least squares method under the same random-noise mean, as shown in Fig. 4 and Fig. 5.
In a real measurement environment, a grating pattern is projected onto the surface of the measured object and the cameras photograph the object; noise is inevitably present and degrades the extraction accuracy of the marker-point centres and the subsequent splicing accuracy. Comparing the error-deviation plots of the improved least squares method, the SVD method and the ordinary least squares method under the same noise mean shows clearly that, in the presence of random noise, the improved least squares method has a smaller range of splicing-error variation, keeps the error below the random-noise mean, achieves higher precision and is more robust to noise, making it suitable for practical engineering applications. The method set forth in the present invention improves the accuracy and real-time performance of corresponding-marker matching and overcomes the instability and low precision of the splicing error that occur under noisy conditions.

Claims (1)

1. An improved depth data splicing method based on the least squares method, characterized in that the method comprises the following steps:
Step 1: Attach marker points to the surface of the measured object and photograph the object from two different viewing angles such that the two shots overlap and the overlapping region contains at least 3 marker points. Obtain the three-dimensional coordinates of the marker points under each of the two viewing angles: the matrix of marker-point coordinates captured under view A is P_A, and that captured under view B is P_B;
Step 2: From the marker-point coordinate matrices P_A = [p_1, p_2, ..., p_i, ..., p_n]^T and P_B = [q_1, q_2, ..., q_j, ..., q_m]^T obtained in Step 1, find a pair of points that represent the same physical marker under the reference coordinate system, where the element p_i = {x_pi, y_pi, z_pi} of P_A is the coordinate of point p_i in the view A coordinate system and the element q_j = {x_qj, y_qj, z_qj} of P_B is the coordinate of point q_j in the view B coordinate system:

From P_A build the distance matrix dA = [da_ij] (i, j = 1, ..., n), with da_ij = ‖p_i - p_j‖, i.e. the element da_ij of dA is the distance between the elements p_i and p_j of P_A. Likewise, from P_B build the distance matrix dB = [db_ij] (i, j = 1, ..., m), with db_ij = ‖q_i - q_j‖, i.e. the element db_ij of dB is the distance between the elements q_i and q_j of P_B.
Compare each row of dA with each row of dB and count the number of identical elements in the two rows being compared; the pair of rows da_r and db_s with the greatest number of identical elements indicates that the r-th point of P_A and the s-th point of P_B are the same point under the reference coordinate system, i.e. the first group of corresponding points. Two elements are considered identical when the difference of their values is smaller than the three-dimensional coordinate extraction accuracy;
Step 3: Modify matrix P_A: move its r-th row to the 1st row and shift the original rows 1 to r-1 down to rows 2 to r, obtaining the new marker coordinate matrix P_A1. Form a triangle from the point represented by the first row of P_A1 and the points represented by any other two rows of P_A1, compute its area, and collect these areas into the area matrix S_A, where the element sa_ij is the area of the triangle formed by the point in the first row of P_A1 and the points in rows i+1 and j+1 of P_A1; convert S_A into the upper-triangular matrix T_A;
Modify matrix P_B: move its s-th row to the 1st row and shift the original rows 1 to s-1 down to rows 2 to s, obtaining the new marker coordinate matrix P_B1. Form a triangle from the point represented by the first row of P_B1 and the points represented by any other two rows of P_B1, compute its area, and collect these areas into the area matrix S_B, where the element sb_ij is the area of the triangle formed by the point in the first row of P_B1 and the points in rows i+1 and j+1 of P_B1; convert S_B into the upper-triangular matrix T_B;
Step 4: Compare every nonzero element of T_A with every nonzero element of T_B. If the triangle represented by element T_A(i, j) and the triangle represented by element T_B(r, t) are both area-compatible and side-length-compatible, put the points corresponding to rows i+1 and j+1 of P_A1 into the corresponding-marker set pA1 of P_A1, and put the points corresponding to rows r+1 and t+1 of P_B1 into the corresponding-marker set pB1 of P_B1, where area compatibility and side-length compatibility are judged as follows:
Set an area relative error limit ε_s; if

-ε_s < (T_A(i, j) - T_B(r, t)) / T_A(i, j) < ε_s

is satisfied, the triangle represented by T_A(i, j) and the triangle represented by T_B(r, t) are area-compatible;
The two sides of the triangle represented by T_A(i, j) that pass through the first group of corresponding points have lengths L_A(i, 1) and L_A(j, 1); the two sides of the triangle represented by T_B(r, t) that pass through the first group of corresponding points have lengths L_B(r, 1) and L_B(t, 1). If at least one of

-ε_l < (L_A(i, 1) - L_B(r, 1)) / L_A(i, 1) < ε_l  and  -ε_l < (L_A(i, 1) - L_B(t, 1)) / L_A(i, 1) < ε_l

holds, and at least one of

-ε_l < (L_A(j, 1) - L_B(r, 1)) / L_A(j, 1) < ε_l  and  -ε_l < (L_A(j, 1) - L_B(t, 1)) / L_A(j, 1) < ε_l

holds, the triangle represented by T_A(i, j) and the triangle represented by T_B(r, t) are side-length-compatible, where ε_l is the length relative error limit;
Step 5: Obtain, by the improved least squares method, the transformation matrices R and T between the corresponding marker points under the two viewing angles:

Compute the barycentre C_p of the corresponding-marker set pA1 = {p_i | p_i ∈ pA1, i = 1, 2, ..., N} of matrix P_A1 and the barycentre C_q of the corresponding-marker set pB1 = {q_i | q_i ∈ pB1, i = 1, 2, ..., N} of matrix P_B1. Translate the corresponding markers in pA1 and pB1 with respect to their barycentres:

α_i = p_i - C_p,  β_i = q_i - C_q.

Let C_1 = [α_1, α_2, ..., α_i, ..., α_N]^T and C_2 = [β_1, β_2, ..., β_i, ..., β_N]^T. Solve the least-squares problem C_1 R = C_2, obtaining the rotation matrix R that minimizes the objective function ‖C_1 R - C_2‖², and obtain the translation vector T = C_q - R·C_p.
CN201310354009.8A 2013-08-14 2013-08-14 Improved depth data splicing method based on least square method Pending CN103489169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310354009.8A CN103489169A (en) 2013-08-14 2013-08-14 Improved depth data splicing method based on least square method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310354009.8A CN103489169A (en) 2013-08-14 2013-08-14 Improved depth data splicing method based on least square method

Publications (1)

Publication Number Publication Date
CN103489169A true CN103489169A (en) 2014-01-01

Family

ID=49829368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310354009.8A Pending CN103489169A (en) 2013-08-14 2013-08-14 Improved depth data splicing method based on least square method

Country Status (1)

Country Link
CN (1) CN103489169A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833927A (en) * 2018-05-03 2018-11-16 北京大学深圳研究生院 A point cloud attribute compression method based on deleting zero elements in the quantization matrix
CN112966138A (en) * 2021-02-22 2021-06-15 济南大学 Two-dimensional shape retrieval method and system based on contour feature point matching

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206438A1 (en) * 2011-02-14 2012-08-16 Fatih Porikli Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images
CN102831101A (en) * 2012-07-30 2012-12-19 河南工业职业技术学院 Point cloud data splicing method based on automatic identification of plurality of mark points

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206438A1 (en) * 2011-02-14 2012-08-16 Fatih Porikli Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images
CN102831101A (en) * 2012-07-30 2012-12-19 河南工业职业技术学院 Point cloud data splicing method based on automatic identification of plurality of mark points

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
OUYANG Xiangbo et al., "Automatic splicing method for measurement data based on marker points", Journal of Image and Graphics (《中国图象图形学报》) *
SHEN Haiping et al., "Research on point cloud data splicing based on the least squares method", Journal of Image and Graphics (《中国图象图形学报》) *
WANG Gong et al., "Research on splicing technology for three-dimensional measurement data of large-area objects", Machinery Design & Manufacture (《机械设计与制造》) *


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140101