CN106204603A - Stereo matching method for three-dimensional cameras - Google Patents

Stereo matching method for three-dimensional cameras

Info

Publication number
CN106204603A
Authority
CN
China
Prior art keywords
point
cloud data
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610556616.6A
Other languages
Chinese (zh)
Other versions
CN106204603B (en)
Inventor
谭登峰
田启川
杜响红
凌晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zen-Ai Technology Co Ltd
Original Assignee
Beijing Zen-Ai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zen-Ai Technology Co Ltd filed Critical Beijing Zen-Ai Technology Co Ltd
Publication of CN106204603A publication Critical patent/CN106204603A/en
Application granted granted Critical
Publication of CN106204603B publication Critical patent/CN106204603B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Abstract

A stereo matching method for three-dimensional cameras: fix two 3D cameras; place a reference object in their common viewing area; have each camera capture a three-dimensional point cloud and filter it by the colors of the reference object so that only points on the reference object remain, giving point set X={xi} and point set Y={yi}. From each of the two point clouds, extract the mean position of each of the six differently colored regions on the reference object and three equally spaced points on each boundary between two adjacent colors, forming new point sets X′ and Y′. Compute the rotation matrix R0 and translation vector T0 from the new point sets; then, using R0, T0 and the color-region point sets X={xi} and Y={yi}, compute the accurate rotation matrix R and translation vector T. The invention greatly reduces the amount of point cloud data to be processed and improves both matching speed and matching precision.

Description

Stereo matching method for three-dimensional cameras
Technical field
The present invention relates to the field of matching and measurement technology, and in particular to a stereo matching method for three-dimensional cameras oriented to three-dimensional matching.
Background art
Three-dimensional stereo matching imitates human binocular vision: two 3D cameras photograph the measured object from different angles, the images are analyzed and matched, and the object's three-dimensional geometry is computed by triangulation. 3D stereo matching systems are now widely applied in reverse engineering, quality inspection, vehicle guidance, and other major fields.
A 3D stereo matching system obtains the object's three-dimensional geometric information from stereo images shot at different angles. Matching the two camera images, together with the closely related three-dimensional reconstruction problem, is the most important and also the most difficult step in 3D stereo matching.
Current matching methods mainly use the ICP (iterative closest point) algorithm, which matches the two point cloud sets directly through iteration and, once matched, yields the rotation and translation of one set relative to the other. Because the common region of the two point cloud sets is small, the iterative matching is sensitive to the initial values of the rotation and translation matrices, convergence is hard to guarantee, the number of iterations is very large, and points outside the common region further disturb the iteration, so matching precision is not high.
Some documents describe practical fast, high-precision matching algorithms that manually select subsets of the point cloud data before iteration and match only the manually selected points to obtain the rotation and translation matrices. But manual selection of point cloud sets is inconvenient, and the workload becomes large when multiple cameras are involved.
In fact, once colored three-dimensional point cloud data has been obtained, a regularly shaped colored object can supply the color information needed for the computer to select the point cloud sets automatically; matching the selected sets then yields the relative transformation matrices for building the three-dimensional scene.
Summary of the invention
To solve the problems described above, the invention provides a stereo matching method for three-dimensional cameras that greatly reduces the amount of point cloud data to be processed and improves matching speed and matching precision.
The stereo matching method for three-dimensional cameras provided by the invention comprises the following steps:
S1: Fix two three-dimensional cameras.
S2: Place a reference object in the common viewing area of the two cameras.
S3: Each camera captures a three-dimensional point cloud; the point clouds are filtered by the colors of the reference object so that only points on the reference object remain, giving point set X={xi} and point set Y={yi}.
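As a concrete illustration of the color filtering in S3 — not code from the patent; the array layout and the RGB thresholds below are assumptions made for the sketch — points can be kept or discarded with simple per-channel bounds:

```python
import numpy as np

def filter_by_reference_colors(cloud, color_ranges):
    """Keep only points whose RGB color falls inside one of the
    reference object's color ranges (a sketch of step S3).

    cloud        : (N, 6) array of [x, y, z, r, g, b] points
    color_ranges : list of (lo, hi) RGB bounds, one per colored region
    """
    rgb = cloud[:, 3:6]
    keep = np.zeros(len(cloud), dtype=bool)
    for lo, hi in color_ranges:
        keep |= np.all((rgb >= lo) & (rgb <= hi), axis=1)
    return cloud[keep]

# Hypothetical example: keep points that are nearly pure red or pure green.
cloud = np.array([
    [0.0, 0.0, 1.0, 250, 10, 10],    # red   -> on the reference object
    [0.1, 0.2, 1.1, 10, 240, 15],    # green -> on the reference object
    [0.5, 0.5, 2.0, 128, 128, 128],  # gray  -> background, filtered out
])
ranges = [((200, 0, 0), (255, 60, 60)), ((0, 200, 0), (60, 255, 60))]
filtered = filter_by_reference_colors(cloud, ranges)
```

In practice one range (or an HSV-based test) would be defined per colored region of the reference object, and the same ranges applied to both cameras' clouds.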
S4: From each of the two point clouds, obtain the mean position of each of the six differently colored regions of the reference object (6 points) and three equally spaced points on each boundary between two adjacent colors (18 points), forming new point sets X′ and Y′ of 24 points each, and match them; compute the rotation matrix R0 and translation vector T0 from the new point sets:
S41: Compute the centroids of point sets X′ and Y′:

$$\vec{\mu}_x = \frac{1}{24}\sum_{i=1}^{24}\vec{x}_i$$

$$\vec{\mu}_y = \frac{1}{24}\sum_{i=1}^{24}\vec{y}_i$$
S42: Using the centroids $\vec{\mu}_x$ and $\vec{\mu}_y$, compute the cross-covariance matrix $\Sigma_{xy}$ of the two data sets:

$$\Sigma_{xy} = \frac{1}{24}\sum_{i=1}^{24}\left[(\vec{x}_i - \vec{\mu}_x)(\vec{y}_i - \vec{\mu}_y)^T\right] = \frac{1}{24}\sum_{i=1}^{24}\left[\vec{x}_i \vec{y}_i^{\,T}\right] - \vec{\mu}_x\vec{\mu}_y^{\,T}$$
S43: From the antisymmetric part of $\Sigma_{xy}$, $A_{ij} = (\Sigma_{xy} - \Sigma_{xy}^T)_{ij}$, form the column vector $\Delta = [A_{23}\ A_{31}\ A_{12}]^T$ and use it to build the symmetric matrix $Q(\Sigma_{xy})$:

$$Q(\Sigma_{xy}) = \begin{bmatrix} \operatorname{tr}(\Sigma_{xy}) & \Delta^T \\ \Delta & \Sigma_{xy} + \Sigma_{xy}^T - \operatorname{tr}(\Sigma_{xy})\, I_3 \end{bmatrix}$$
S44: Solve for the unit eigenvector $\vec{q} = [q_0\ q_1\ q_2\ q_3]^T$ corresponding to the largest eigenvalue of $Q(\Sigma_{xy})$.
S45: Obtain the rotation matrix $R_0$ from the unit eigenvector $\vec{q}$:

$$R_0 = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$
S46: Obtain the translation vector $T_0$ from the rotation matrix:

$$T_0 = \vec{\mu}_x - R_0\,\vec{\mu}_y \triangleq [q_4\ q_5\ q_6]^T$$
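Steps S41–S46 are the closed-form unit-quaternion registration of Besl and McKay. A minimal NumPy sketch follows; note that, so the recovered transform maps Y onto X consistently with $T_0 = \vec{\mu}_x - R_0\vec{\mu}_y$, the cross-covariance is accumulated here as $\sum(\vec{y}_i-\vec{\mu}_y)(\vec{x}_i-\vec{\mu}_x)^T$ — the order of the two factors varies between write-ups, and this ordering is a choice made for the sketch, not taken verbatim from the patent text:

```python
import numpy as np

def coarse_registration(X, Y):
    """Closed-form rotation R0 and translation T0 with X ≈ Y @ R0.T + T0,
    following steps S41-S46 (unit-quaternion method).
    X, Y : (N, 3) arrays of corresponding points (N = 24 in the patent)."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)      # S41: centroids
    S = (Y - mu_y).T @ (X - mu_x) / len(X)           # S42: cross-covariance
    A = S - S.T                                      # S43: antisymmetric part
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])    #      Delta = [A23 A31 A12]
    Q = np.zeros((4, 4))                             #      symmetric 4x4 matrix
    Q[0, 0] = np.trace(S)
    Q[0, 1:] = Q[1:, 0] = delta
    Q[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, v = np.linalg.eigh(Q)                         # S44: eigen-decomposition
    q0, q1, q2, q3 = v[:, np.argmax(w)]              #      max-eigenvalue vector
    R0 = np.array([                                  # S45: quaternion -> rotation
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
    T0 = mu_x - R0 @ mu_y                            # S46: translation
    return R0, T0
```

For exact correspondences this recovers the rigid transform exactly; for noisy data it gives the least-squares optimum over rotations.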
S5: Using the rotation matrix $R_0$, the translation vector $T_0$, and the point sets X={xi} and Y={yi} of the differently colored regions of the reference object, compute the accurate rotation matrix R and translation vector T:
S51: Before the accurate matching, initialize the point set $X_0 = X'$, the registration vector $\vec{q}_0 = [1\ 0\ 0\ 0\ 0\ 0\ 0]^T$, and k = 0.

S52: Using the current $\vec{q}_k$ and Euclidean distance, find the set of points of Y nearest to $X_k$, $Y_k = C(X_k, Y)$; compute $\vec{q}_k$ from $X_k$ and $Y_k$ as in S4, and apply the new registration vector $\vec{q}_k$ to obtain the new matches $Y_{k+1}$ of the sample points $X_k$; use the mean squared distance $D_k$ of the paired points as the precision criterion:

$$D_k = \frac{1}{N}\sum_{i=1}^{N}\left\|\vec{x}_i - \vec{y}_i\right\|^2$$

S53: If $k > k_{max}$ or $|D_k - D_{k-1}| < \tau$, the iteration terminates; otherwise set k = k + 1 and return to S52.
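The refinement loop S51–S53 is standard ICP. A compact sketch is given below; it uses brute-force nearest neighbours, and the inner closed-form rigid update is done with an SVD solve rather than the quaternion eigenvector — an implementation choice for brevity, not the patent's wording:

```python
import numpy as np

def icp_refine(X, Y, R0, T0, k_max=50, tau=1e-8):
    """Refine a coarse (R0, T0) by iterative closest point (steps S51-S53).
    X : (N, 3) fixed points from one camera
    Y : (M, 3) points from the other camera, transformed onto X
    Returns refined R, T with X ≈ Y @ R.T + T."""
    R, T = R0, T0
    D_prev = None
    for k in range(k_max):
        Yt = Y @ R.T + T
        # S52: nearest transformed Y point for every point of X (brute force)
        d2 = ((X[:, None, :] - Yt[None, :, :]) ** 2).sum(axis=2)
        nn = d2.argmin(axis=1)
        src, dst = Y[nn], X                       # pair Y[nn_i] with x_i
        # closed-form rigid update on the current pairs (SVD solve)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = mu_d - R @ mu_s
        # S52/S53: mean squared pairing distance as the accuracy measure D_k
        D = d2[np.arange(len(X)), nn].mean()
        if D_prev is not None and abs(D - D_prev) < tau:
            break
        D_prev = D
    return R, T
```

The method's color constraint (matching only points within the same colored region, as in the detailed description) would be added by restricting the nearest-neighbour search to points carrying the same region label.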
Preferably, the reference object is a heptahedron whose base is a regular hexagon, with six differently colored regions.
Preferably, in step S4, the rotation matrix $R_0$ and translation vector $T_0$ are computed by the least-squares method.
Brief description of the drawings
Fig. 1 is a plan view of the reference object of the invention;
Fig. 2 is a perspective view of the reference object of the invention;
Fig. 3 shows the three-dimensional camera stereo system that implements the stereo matching method of the invention.
Detailed description of the invention
The technical scheme of the invention is further described below with reference to specific embodiments. It should be understood that the embodiments described here serve only to explain the invention and are not intended to limit it.
Fig. 1 is a plan view of the reference object of the invention. As shown in Fig. 1, the stereo matching method provided by the invention requires a reference object 1 of known shape and color. Fig. 2 is a perspective view of the reference object. As shown in Fig. 2, the reference object 1 is a regular heptahedral pyramid whose base is a regular hexagon and whose sides are six triangles (the differently colored regions are not drawn explicitly in Fig. 2). The pyramid has six differently colored regions 11, 12, 13, 14, 15 and 16.
The reference object 1 is sized so that it can be placed in a square area of about one square meter, and its height does not exceed half the distance from the reference object 1 to the three-dimensional cameras.
Depending on requirements and design, the size of the reference object may be varied arbitrarily.
Fig. 3 shows the three-dimensional camera stereo system that implements the stereo matching method. As shown in Fig. 3, the system comprises two three-dimensional cameras, a reference object 1 placed in their viewing area, an information processing unit connected to the two cameras, and a display that receives and shows the image information from the information processing unit.
The stereo matching method provided by the invention proceeds as follows. First, as shown in Fig. 2, fix two three-dimensional cameras on the left and right sides; then place the reference object 1 in their common viewing area. Next, each camera captures a three-dimensional point cloud, which is filtered by the colors of the reference object so that only points on the reference object remain, giving point set X={xi} and point set Y={yi}. Then, from each of the two point clouds, obtain the mean position of each of the six differently colored regions (6 points) and three equally spaced points on each boundary between two adjacent colors (18 points), forming new point sets X′ and Y′ of 24 points each, and match them; compute the rotation matrix R0 and translation vector T0 from the new point sets:
1. Compute the centroids of point sets X′ and Y′:

$$\vec{\mu}_x = \frac{1}{24}\sum_{i=1}^{24}\vec{x}_i$$

$$\vec{\mu}_y = \frac{1}{24}\sum_{i=1}^{24}\vec{y}_i$$
2. Using the centroids $\vec{\mu}_x$ and $\vec{\mu}_y$, compute the cross-covariance matrix $\Sigma_{xy}$ of the two data sets:

$$\Sigma_{xy} = \frac{1}{24}\sum_{i=1}^{24}\left[(\vec{x}_i - \vec{\mu}_x)(\vec{y}_i - \vec{\mu}_y)^T\right] = \frac{1}{24}\sum_{i=1}^{24}\left[\vec{x}_i \vec{y}_i^{\,T}\right] - \vec{\mu}_x\vec{\mu}_y^{\,T}$$
3. From the antisymmetric part of $\Sigma_{xy}$, $A_{ij} = (\Sigma_{xy} - \Sigma_{xy}^T)_{ij}$, form the column vector $\Delta = [A_{23}\ A_{31}\ A_{12}]^T$ and use it to build the symmetric matrix $Q(\Sigma_{xy})$:

$$Q(\Sigma_{xy}) = \begin{bmatrix} \operatorname{tr}(\Sigma_{xy}) & \Delta^T \\ \Delta & \Sigma_{xy} + \Sigma_{xy}^T - \operatorname{tr}(\Sigma_{xy})\, I_3 \end{bmatrix}$$
4. Solve for the unit eigenvector $\vec{q} = [q_0\ q_1\ q_2\ q_3]^T$ corresponding to the largest eigenvalue of $Q(\Sigma_{xy})$.
5. Obtain the rotation matrix $R_0$ from the unit eigenvector $\vec{q}$:

$$R_0 = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$
6. Obtain the translation vector $T_0$ from the rotation matrix:

$$T_0 = \vec{\mu}_x - R_0\,\vec{\mu}_y \triangleq [q_4\ q_5\ q_6]^T$$
Then, using the rotation matrix $R_0$, the translation vector $T_0$, and the point sets X={xi} and Y={yi} of the differently colored regions of the reference object, compute the accurate rotation matrix R and translation vector T:
1. Before the accurate matching, initialize the point set $X_0 = X'$, the registration vector $\vec{q}_0 = [1\ 0\ 0\ 0\ 0\ 0\ 0]^T$, and k = 0.

2. Using the current $\vec{q}_k$ and Euclidean distance restricted to the same color region, find the set of points of Y nearest to $X_k$, $Y_k = C(X_k, Y)$; compute $\vec{q}_k$ from $X_k$ and $Y_k$, and apply the new registration vector $\vec{q}_k$ to obtain the new matches $Y_{k+1}$ of the sample points $X_k$; use the mean squared distance $D_k$ of the paired points as the precision criterion:

$$D_k = \frac{1}{N}\sum_{i=1}^{N}\left\|\vec{x}_i - \vec{y}_i\right\|^2$$

3. If $k > k_{max}$ or $|D_k - D_{k-1}| < \tau$, the iteration terminates; otherwise set k = k + 1 and return to step 2.
In the step of computing the rotation matrix $R_0$ and translation vector $T_0$, the least-squares method may be used.
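The least-squares computation mentioned here is commonly realized with the SVD-based (Kabsch/Umeyama-style) closed form; the following sketch shows one standard version under the same convention X ≈ R·Y + T — an illustrative alternative, not text from the patent:

```python
import numpy as np

def least_squares_registration(X, Y):
    """SVD-based least-squares estimate of R0, T0 with X ≈ Y @ R0.T + T0 —
    a common alternative to the quaternion eigenvector solution of S41-S46.
    X, Y : (N, 3) arrays of corresponding points."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    H = (Y - mu_y).T @ (X - mu_x)             # cross-covariance (up to 1/N)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R0 = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T0 = mu_x - R0 @ mu_y
    return R0, T0
```

Both this and the quaternion method minimize the same sum of squared residuals, so on clean data they return the same rigid transform.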
In summary, the invention provides a stereo matching method for three-dimensional cameras that considerably reduces the amount of point cloud data to be processed and improves matching speed and matching precision.
The embodiments above are preferred embodiments of the invention and do not thereby limit its scope of patent protection. Any equivalent structural or procedural transformation made from the disclosure by those skilled in the art, without departing from the spirit and scope disclosed by the invention, falls within the scope of the claims.

Claims (3)

1. A stereo matching method for three-dimensional cameras, characterized by comprising:
S1: fix two three-dimensional cameras;
S2: place a reference object in the common viewing area of the two cameras;
S3: each camera captures a three-dimensional point cloud; the point clouds are filtered by the colors of the reference object so that only points on the reference object remain, giving point set X={xi} and point set Y={yi};
S4: from each of the two point clouds, obtain the mean position of each of the six differently colored regions of the reference object (6 points) and three equally spaced points on each boundary between two adjacent colors (18 points), forming new point sets X′ and Y′ of 24 points each, and match them; compute the rotation matrix R0 and translation vector T0 from the new point sets, specifically: S41: compute the centroids of point sets X′ and Y′:
$$\vec{\mu}_x = \frac{1}{24}\sum_{i=1}^{24}\vec{x}_i$$

$$\vec{\mu}_y = \frac{1}{24}\sum_{i=1}^{24}\vec{y}_i$$

S42: using the centroids $\vec{\mu}_x$ and $\vec{\mu}_y$, compute the cross-covariance matrix $\Sigma_{xy}$ of the two data sets:

$$\Sigma_{xy} = \frac{1}{24}\sum_{i=1}^{24}\left[(\vec{x}_i - \vec{\mu}_x)(\vec{y}_i - \vec{\mu}_y)^T\right] = \frac{1}{24}\sum_{i=1}^{24}\left[\vec{x}_i \vec{y}_i^{\,T}\right] - \vec{\mu}_x\vec{\mu}_y^{\,T}$$

S43: from the antisymmetric part of $\Sigma_{xy}$, $A_{ij} = (\Sigma_{xy} - \Sigma_{xy}^T)_{ij}$, form the column vector $\Delta = [A_{23}\ A_{31}\ A_{12}]^T$ and use it to build the symmetric matrix $Q(\Sigma_{xy})$:

$$Q(\Sigma_{xy}) = \begin{bmatrix} \operatorname{tr}(\Sigma_{xy}) & \Delta^T \\ \Delta & \Sigma_{xy} + \Sigma_{xy}^T - \operatorname{tr}(\Sigma_{xy})\, I_3 \end{bmatrix}$$

S44: solve for the unit eigenvector $\vec{q} = [q_0\ q_1\ q_2\ q_3]^T$ corresponding to the largest eigenvalue of $Q(\Sigma_{xy})$;

S45: obtain the rotation matrix $R_0$ from the unit eigenvector $\vec{q}$:

$$R_0 = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$

S46: obtain the translation vector $T_0$ from the rotation matrix:

$$T_0 = \vec{\mu}_x - R_0\,\vec{\mu}_y \triangleq [q_4\ q_5\ q_6]^T;$$
S5: using the rotation matrix $R_0$, the translation vector $T_0$, and the point sets X={xi} and Y={yi} of the differently colored regions of the reference object, compute the accurate rotation matrix R and translation vector T, specifically:
S51: before the accurate matching, initialize the point set $X_0 = X'$, the registration vector $\vec{q}_0 = [1\ 0\ 0\ 0\ 0\ 0\ 0]^T$, and k = 0;
S52: using the current $\vec{q}_k$ and Euclidean distance, find the set of points of Y nearest to $X_k$, $Y_k = C(X_k, Y)$; compute $\vec{q}_k$ from $X_k$ and $Y_k$, and apply the new registration vector $\vec{q}_k$ to obtain the new matches $Y_{k+1}$ of the sample points $X_k$; use the mean squared distance $D_k$ of the paired points as the precision criterion:

$$D_k = \frac{1}{N}\sum_{i=1}^{N}\left\|\vec{x}_i - \vec{y}_i\right\|^2$$

S53: if $k > k_{max}$ or $|D_k - D_{k-1}| < \tau$, the iteration terminates; otherwise set k = k + 1 and return to S52.
2. The stereo matching method for three-dimensional cameras according to claim 1, characterized in that the reference object is a heptahedron whose base is a regular hexagon, having six differently colored regions.
3. The stereo matching method for three-dimensional cameras according to claim 1, characterized in that, in step S4, the rotation matrix $R_0$ and translation vector $T_0$ are computed by the least-squares method.
CN201610556616.6A 2016-04-29 2016-07-14 Stereo matching method for three-dimensional cameras Active CN106204603B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610274084 2016-04-29
CN2016102740847 2016-04-29

Publications (2)

Publication Number Publication Date
CN106204603A true CN106204603A (en) 2016-12-07
CN106204603B CN106204603B (en) 2018-06-29

Family

ID=57474419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610556616.6A Active CN106204603B (en) Stereo matching method for three-dimensional cameras

Country Status (1)

Country Link
CN (1) CN106204603B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107346550A (en) * 2017-07-05 2017-11-14 滁州学院 A rapid registration method for three-dimensional point cloud data with color information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103591906A (en) * 2012-08-13 2014-02-19 上海威塔数字科技有限公司 A method for three-dimensional tracking measurement of a moving object using two-dimensional coding
CN103955939A (en) * 2014-05-16 2014-07-30 重庆理工大学 Boundary feature point registration method for point cloud stitching in a three-dimensional scanning system
CN104484648A (en) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewing-angle obstacle detection method for robots based on contour recognition


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107346550A (en) * 2017-07-05 2017-11-14 滁州学院 A rapid registration method for three-dimensional point cloud data with color information
CN107346550B (en) * 2017-07-05 2019-09-20 滁州学院 A rapid registration method for three-dimensional point cloud data with color information

Also Published As

Publication number Publication date
CN106204603B (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN104504671B Method for generating virtual-real fusion images for stereoscopic display
CN103106688B Indoor three-dimensional scene reconstruction method based on two-layer registration
CN101877143B Three-dimensional scene reconstruction method from a group of two-dimensional images
CN107578404B Full-reference objective stereo image quality evaluation method based on visual salient feature extraction
CN106023303B A method of improving the density of three-dimensional reconstruction point clouds based on contour validity
CN103426200B Tree three-dimensional reconstruction method based on unmanned aerial vehicle aerial image sequences
CN109886870B Remote sensing image fusion method based on a dual-channel neural network
CN102938142B Kinect-based indoor LiDAR missing data completion method
CN102800127B Optical-flow-optimization-based three-dimensional reconstruction method and device
CN105205858A Indoor scene three-dimensional reconstruction method based on a single depth vision sensor
CN106683173A Method of improving the density of three-dimensional reconstructed point clouds based on neighborhood block matching
CN103955954B Reconstruction method for high-resolution depth images combining spatial image pairs of the same scene
CN106920276B A three-dimensional reconstruction method and system
CN102902355A Spatial interaction method for mobile devices
CN106709950A Binocular-vision-based obstacle-crossing conductor positioning method for line patrol robots
CN104318569A Spatial salient region extraction method based on a depth variation model
CN106023230B A dense matching method suitable for deformed images
CN106600632A Improved matching-cost-aggregation stereo matching algorithm
CN110070567A A ground laser point cloud registration method
CN111260707B Depth estimation method based on light field EPI images
Kim et al. Semiautomatic reconstruction of building height and footprints from single satellite images
CN103971379B Foam stereoscopic feature extraction method based on a single-camera equivalent binocular stereo vision model
CN109345581B Augmented reality method, device and system based on multi-view cameras
CN106485737A Automatic registration and fusion method of point cloud data and optical images based on line features
CN108564620A Scene depth estimation method for light field array cameras

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant