CN106204603B - Stereo matching method for three-dimensional cameras - Google Patents

Stereo matching method for three-dimensional cameras

Info

Publication number
CN106204603B
CN106204603B (application CN201610556616.6A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
data collection
dimensional
matrix
Prior art date
Legal status
Active
Application number
CN201610556616.6A
Other languages
Chinese (zh)
Other versions
CN106204603A (en)
Inventor
谭登峰
田启川
杜响红
凌晨
Current Assignee
Beijing Zen-Ai Technology Co Ltd
Original Assignee
Beijing Zen-Ai Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zen-Ai Technology Co Ltd
Publication of CN106204603A
Application granted
Publication of CN106204603B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A stereo matching method for three-dimensional cameras: two three-dimensional cameras are fixed; a reference object is placed in the common visible area of the two cameras; the two cameras each acquire three-dimensional point cloud data, which are filtered according to the colors of the reference object so that only the point cloud data of the reference object remain, referred to as point cloud set X = {xi} and point cloud set Y = {yi}; on the two point cloud sets, the mean position of each of the six differently colored regions of the reference object and three equally spaced sample points on each boundary between two adjacent colors are obtained, forming new point cloud sets X' and Y'; a rotation matrix R0 and a translation matrix T0 are computed from the new point cloud sets; finally, an accurate rotation matrix R and translation matrix T are computed from R0, T0 and the point cloud sets X = {xi} and Y = {yi} of the differently colored regions of the reference object. The invention greatly reduces the amount of point cloud data to be processed and improves matching speed and matching accuracy.

Description

Stereo matching method for three-dimensional cameras
Technical field
The present invention relates to the field of matching and measurement technology, and more particularly to a stereo matching method for three-dimensional cameras oriented to three-dimensional matching.
Background technology
Three-dimensional stereo matching imitates the visual function of the human eyes: two three-dimensional cameras capture images of a measured object from different angles, the images are analyzed and matched, and the three-dimensional geometric information of the object is computed by the triangulation principle. Three-dimensional stereo matching systems are being applied ever more widely in major fields such as reverse engineering, quality inspection, and vehicle guidance.
A three-dimensional stereo matching system obtains the three-dimensional geometric information of an object from stereo images captured at different angles. Matching between the image pairs captured by the two cameras, together with the closely related three-dimensional reconstruction problem, is the most important and also the most difficult step in stereo matching.
Current matching methods mainly use the ICP (iterative closest point) algorithm, which matches the two point cloud data sets directly by iteration and obtains the rotation and translation transform of one point cloud set relative to the other. Because the common region of the two point cloud sets to be matched is small, the iterative matching process is sensitive to the initial values of the rotation matrix and translation matrix: convergence of the iteration is hard to guarantee, the number of iterations is large, and the influence of point cloud data outside the common region further limits matching accuracy.
Some publications describe practical fast, high-accuracy matching algorithms in which the point cloud data sets are selected manually before iteration, and only the manually selected point cloud data are matched to obtain the rotation and translation matrices. However, manual selection of point cloud data is inconvenient, and the workload grows further when multiple cameras have to be matched.
In fact, once colored three-dimensional point cloud data have been obtained, a regular colored object can supply the color information needed for a computer to select the point cloud subsets automatically. Matching with the selected subsets yields the relative transformation matrices and allows a three-dimensional stereo scene to be built.
Summary of the invention
To solve the existing technical problems, the present invention provides a stereo matching method for three-dimensional cameras that greatly reduces the amount of point cloud data to be processed and improves matching speed and matching accuracy.
The stereo matching method for three-dimensional cameras provided by the present invention specifically includes:
S1: fix two three-dimensional cameras;
S2: place a three-dimensional reference object in the common visible area of the two three-dimensional cameras;
S3: the two three-dimensional cameras each acquire three-dimensional point cloud data; the point cloud data are filtered according to the colors of the reference object to obtain three-dimensional point cloud data containing only the reference object, referred to as point cloud set X = {xi} and point cloud set Y = {yi};
S4: on the two point cloud sets, obtain for each of the six differently colored point cloud regions of the reference object the mean position of that color region in each set, and on each boundary between two adjacent colors sample three equally spaced points, forming new point cloud sets X' and Y' of 24 points each, which are matched; the mean positions of the six color regions account for 6 points, and the boundaries between adjacent colors, sampled at three equally spaced points each, account for the remaining 18 points. From the new point cloud sets compute the rotation matrix R0 and the translation matrix T0, specifically: S41: compute the centroids of the point cloud sets X' and Y': μx = (1/n) Σi x'i, μy = (1/n) Σi y'i (with n = 24);
S42: using the centroids μx and μy, compute the cross-covariance matrix of the two sets: Σxy = (1/n) Σi (x'i − μx)(y'i − μy)^T;
S43: from the antisymmetric part of Σxy, construct Aij = (Σxy − Σxy^T)ij and the column vector Δ = [A23 A31 A12]^T, and from this column vector form the symmetric 4×4 matrix Q(Σxy), whose top-left entry is tr(Σxy), whose first row and column off-diagonal parts are Δ, and whose lower-right 3×3 block is Σxy + Σxy^T − tr(Σxy)·I3;
S44: solve for the unit eigenvector q = [q0 q1 q2 q3]^T corresponding to the maximum eigenvalue of the symmetric matrix Q(Σxy);
S45: obtain the rotation matrix R0 from the unit eigenvector q, interpreted as a unit quaternion;
S46: obtain the translation matrix T0 from the rotation matrix: T0 = μy − R0 μx;
S5: from the rotation matrix R0, the translation matrix T0 and the point cloud sets X = {xi} and Y = {yi} of the differently colored regions of the reference object, compute the accurate rotation matrix R and translation matrix T, specifically:
S51: before the accurate matching, let the initial point cloud set be X0 = X' and initialize the registration vector (e.g. q0 = [1 0 0 0 0 0 0]^T); k is the iteration counter, with initial value 0;
S52: with the current registration, compute by Euclidean distance the closest-point set Yk = C(Xk, Y) of the point cloud set Xk in the point cloud set Y; from Xk and Yk compute the new registration vector, and apply it to Xk to obtain its newly matched point cloud set Yk+1; use the mean of the squared distances between the paired points, Dk = (1/N) Σi ||yi − xi||², as the accuracy criterion, where N is the number of points of the three-dimensional point cloud data;
S53: if k > kmax or |Dk − Dk−1| < τ, the iteration terminates; otherwise k is increased by 1, replacing the original k, and the method returns to S52.
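Steps S41–S46 follow the standard closed-form, quaternion-based registration of Besl and McKay. A minimal sketch in Python with NumPy (the function name, the (n, 3) array layout, and the normalization by n are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def initial_registration(Xp, Yp):
    """Closed-form rotation R0 and translation T0 (steps S41-S46).

    Xp, Yp: (n, 3) arrays of corresponding feature points
    (here, the 24 color-centroid and boundary points of X' and Y').
    """
    # S41: centroids of the two point sets
    mu_x = Xp.mean(axis=0)
    mu_y = Yp.mean(axis=0)
    # S42: cross-covariance matrix of the two sets
    S = (Xp - mu_x).T @ (Yp - mu_y) / len(Xp)
    # S43: antisymmetric part -> column vector Delta, then symmetric 4x4 matrix Q
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])   # [A23, A31, A12]
    Q = np.empty((4, 4))
    Q[0, 0] = np.trace(S)
    Q[0, 1:] = delta
    Q[1:, 0] = delta
    Q[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    # S44: unit eigenvector of the largest eigenvalue of Q (eigh sorts ascending)
    w, v = np.linalg.eigh(Q)
    q0, q1, q2, q3 = v[:, np.argmax(w)]
    # S45: rotation matrix from the unit quaternion [q0 q1 q2 q3]
    R0 = np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2],
    ])
    # S46: translation from the rotation and the centroids
    T0 = mu_y - R0 @ mu_x
    return R0, T0
```

With exact correspondences this recovers the rigid transform in one shot; in the method above it runs only on the 24 sampled feature points, which is what keeps the initial estimate cheap.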
Preferably, the reference object is a heptahedron whose base is a regular hexagon, with six differently colored regions.
Preferably, in step S4, the rotation matrix R0 and the translation matrix T0 are computed by the least squares method.
Description of the drawings
Fig. 1 is a plan schematic view of the reference object of the present invention;
Fig. 2 is a perspective schematic view of the reference object of the present invention;
Fig. 3 shows the three-dimensional camera stereo system for implementing the stereo matching method of the present invention.
Specific embodiments
The technical solution of the present invention is further described below with reference to specific embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
Fig. 1 is a plan schematic view of the reference object of the present invention. As shown in Fig. 1, the stereo matching method provided by the invention requires a reference object 1 of known shape and colors. Fig. 2 is a perspective schematic view of the reference object. As shown in Fig. 2, the reference object is a regular pyramid whose base is a regular hexagon and whose sides are six triangles (the differently colored regions are not drawn in detail in Fig. 2). The pyramid has six different color regions 11, 12, 13, 14, 15, 16.
The reference object 1 is sized so that it can be placed within a square area of about one square meter, with a height not exceeding half the distance from the reference object 1 to the three-dimensional cameras.
According to different demands and designs, the size of the reference object can be changed arbitrarily.
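Step S3 of the method keeps only the points whose color matches one of the six region colors of the reference object. A minimal sketch, assuming points are stored as rows of XYZ plus RGB and filtered by a per-point color distance threshold (the storage layout, the example colors, and the tolerance are all assumptions; the patent does not specify them):

```python
import numpy as np

# Six reference colors (RGB); example values only, the patent does not fix them.
REFERENCE_COLORS = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255],
    [255, 255, 0], [255, 0, 255], [0, 255, 255],
], dtype=float)

def filter_reference_points(cloud, colors=REFERENCE_COLORS, tol=40.0):
    """Keep points whose RGB lies within tol of any reference color (step S3).

    cloud: (n, 6) array with columns x, y, z, r, g, b.
    Returns the filtered (m, 6) sub-cloud containing only the reference object.
    """
    rgb = cloud[:, 3:6]
    # Euclidean distance of every point's color to every reference color
    d = np.linalg.norm(rgb[:, None, :] - colors[None, :, :], axis=2)
    mask = (d < tol).any(axis=1)
    return cloud[mask]
```

In practice the tolerance would be tuned to the cameras' color noise; any color-segmentation scheme that isolates the six regions serves the same purpose here.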
Fig. 3 shows the three-dimensional camera stereo system for implementing the stereo matching method of the present invention. As shown in Fig. 3, the system includes two three-dimensional cameras, the reference object 1 placed in the common visible area of the two cameras, an information processing unit connected to the two three-dimensional cameras, and a display that receives and shows the image information from the information processing unit.
The stereo matching method provided by the present invention proceeds as follows. First, as shown in Fig. 2, two three-dimensional cameras are fixed on the left and right sides. Then the reference object 1 is placed in the common visible area of the two cameras. Next, the two cameras each acquire three-dimensional point cloud data, which are filtered according to the colors of the reference object so that only the point cloud data of the reference object remain, referred to as point cloud set X = {xi} and point cloud set Y = {yi}. Then, on the two point cloud sets, the mean position of each of the six differently colored regions of the reference object (6 points in total) and three equally spaced sample points on each boundary between two adjacent colors (18 points in total) are obtained, forming new point cloud sets X' and Y' of 24 points each, which are matched. From the new point cloud sets, the rotation matrix R0 and the translation matrix T0 are computed:
1. compute the centroids of the point cloud sets X' and Y': μx = (1/n) Σi x'i, μy = (1/n) Σi y'i;
2. using the centroids μx and μy, compute the cross-covariance matrix of the two sets: Σxy = (1/n) Σi (x'i − μx)(y'i − μy)^T;
3. from the antisymmetric part of Σxy, construct Aij = (Σxy − Σxy^T)ij and the column vector Δ = [A23 A31 A12]^T, and from this column vector form the symmetric 4×4 matrix Q(Σxy);
4. solve for the unit eigenvector q = [q0 q1 q2 q3]^T corresponding to the maximum eigenvalue of Q(Σxy);
5. obtain the rotation matrix R0 from the unit eigenvector q;
6. obtain the translation matrix T0 from the rotation matrix: T0 = μy − R0 μx.
Then, from the rotation matrix R0, the translation matrix T0 and the point cloud sets X = {xi} and Y = {yi} of the differently colored regions of the reference object, the accurate rotation matrix R and translation matrix T are computed:
1. before the accurate matching, let the initial point cloud set be X0 = X' and initialize the registration vector (e.g. q0 = [1 0 0 0 0 0 0]^T); k is the iteration counter, with initial value 0;
2. with the current registration, compute by Euclidean distance the closest-point set Yk = C(Xk, Y) of the point cloud set Xk in the point cloud set Y; from Xk and Yk compute the new registration vector, and apply it to Xk to obtain its newly matched point cloud set Yk+1; use the mean of the squared distances between the paired points, Dk = (1/N) Σi ||yi − xi||², as the accuracy criterion, where N is the number of points of the three-dimensional point cloud data;
3. if k > kmax or |Dk − Dk−1| < τ, the iteration terminates; otherwise k is increased by 1, replacing the original k, and the procedure returns to step 2.
In the step of computing the rotation matrix R0 and the translation matrix T0, the least squares method can be used.
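The refinement loop of steps 1–3 above is essentially the Besl–McKay ICP iteration: pair each point with its Euclidean closest point, re-solve the registration, and stop when the pairing error no longer changes. A minimal sketch, solving the per-iteration registration with the SVD least-squares variant (the brute-force nearest-neighbor search, the function names, and the default kmax and τ are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def closed_form_registration(A, B):
    """Least-squares R, T given known correspondences A[i] <-> B[i],
    via the SVD (Kabsch) variant for brevity."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    H = (A - mu_a).T @ (B - mu_b)
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if one appears
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_b - R @ mu_a

def icp(X, Y, k_max=50, tau=1e-8):
    """Iterative refinement of R, T (the loop of steps 1-3 / S51-S53).

    X: (n, 3) moving point cloud; Y: (m, 3) target point cloud.
    """
    R, T = np.eye(3), np.zeros(3)
    Xk = X.copy()                       # step 1: initial point set
    D_prev = np.inf
    for k in range(k_max):
        # step 2: closest point in Y for every point of Xk (Euclidean distance)
        d2 = ((Xk[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
        Yk = Y[np.argmin(d2, axis=1)]
        # registration is always re-solved from the original X to the pairs Yk
        R, T = closed_form_registration(X, Yk)
        Xk = X @ R.T + T
        Dk = ((Xk - Yk) ** 2).sum() / len(X)   # mean squared pairing error
        # step 3: terminate when the error change falls below the threshold
        if abs(D_prev - Dk) < tau:
            break
        D_prev = Dk
    return R, T
```

A k-d tree would replace the O(n·m) distance matrix in practice; the sketch favors brevity, and the closed-form quaternion solver of S41–S46 could be substituted for the SVD step without changing the loop.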
In conclusion the present invention provides three-dimensional camera solid matching method, to be treated cloud is considerably reduced Data volume improves matched speed and matching precision.
The embodiments described above are preferred embodiments of the present invention and do not thereby limit its scope of patent protection. Any equivalent structural or equivalent procedural transformation made on the basis of this disclosure by those skilled in the art, without departing from the spirit and scope of the present invention, falls within the scope of the claims.

Claims (3)

1. A stereo matching method for three-dimensional cameras, characterized in that it comprises:
S1: fix two three-dimensional cameras;
S2: place a three-dimensional reference object in the common visible area of the two three-dimensional cameras;
S3: the two three-dimensional cameras each acquire three-dimensional point cloud data; the point cloud data are filtered according to the colors of the reference object to obtain three-dimensional point cloud data containing only the reference object, referred to as point cloud set X = {xi} and point cloud set Y = {yi};
S4: on the two point cloud sets, obtain for each of the six differently colored point cloud regions of the reference object the mean position of that color region in each set, and on each boundary between two adjacent colors sample three equally spaced points, forming new point cloud sets X' and Y' of 24 points each, which are matched; the mean positions of the six color regions account for 6 points, and the boundaries between adjacent colors, sampled at three equally spaced points each, account for the remaining 18 points; from the new point cloud sets compute the rotation matrix R0 and the translation matrix T0, specifically: S41: compute the centroids of the point cloud sets X' and Y': μx = (1/n) Σi x'i, μy = (1/n) Σi y'i;
S42: using the centroids μx and μy, compute the cross-covariance matrix of the two sets: Σxy = (1/n) Σi (x'i − μx)(y'i − μy)^T;
S43: from the antisymmetric part of Σxy, construct Aij = (Σxy − Σxy^T)ij and the column vector Δ = [A23 A31 A12]^T, and from this column vector form the symmetric 4×4 matrix Q(Σxy);
S44: solve for the unit eigenvector q = [q0 q1 q2 q3]^T corresponding to the maximum eigenvalue of Q(Σxy);
S45: obtain the rotation matrix R0 from the unit eigenvector q;
S46: obtain the translation matrix T0 from the rotation matrix: T0 = μy − R0 μx;
S5: from the rotation matrix R0, the translation matrix T0 and the point cloud sets X = {xi} and Y = {yi} of the differently colored regions of the reference object, compute the accurate rotation matrix R and translation matrix T, specifically:
S51: before the accurate matching, let the initial point cloud set be X0 = X' and initialize the registration vector (e.g. q0 = [1 0 0 0 0 0 0]^T); k is the iteration counter, with initial value 0;
S52: with the current registration, compute by Euclidean distance the closest-point set Yk = C(Xk, Y) of the point cloud set Xk in the point cloud set Y; from Xk and Yk compute the new registration vector, and apply it to Xk to obtain its newly matched point cloud set Yk+1; use the mean of the squared distances between the paired points, Dk = (1/N) Σi ||yi − xi||², as the accuracy criterion, where N is the number of points of the three-dimensional point cloud data;
S53: if k > kmax or |Dk − Dk−1| < τ, the iteration terminates; otherwise k is increased by 1, replacing the original k, and the method returns to S52.
2. The stereo matching method for three-dimensional cameras according to claim 1, characterized in that the reference object is a heptahedron whose base is a regular hexagon, with six differently colored regions.
3. The stereo matching method for three-dimensional cameras according to claim 1, characterized in that in step S4 the rotation matrix R0 and the translation matrix T0 are computed by the least squares method.
CN201610556616.6A 2016-04-29 2016-07-14 Stereo matching method for three-dimensional cameras Active CN106204603B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610274084 2016-04-29
CN2016102740847 2016-04-29

Publications (2)

Publication Number Publication Date
CN106204603A CN106204603A (en) 2016-12-07
CN106204603B true CN106204603B (en) 2018-06-29

Family

ID=57474419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610556616.6A Active CN106204603B (en) 2016-04-29 2016-07-14 Stereo matching method for three-dimensional cameras

Country Status (1)

Country Link
CN (1) CN106204603B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107346550B (en) * 2017-07-05 2019-09-20 滁州学院 It is a kind of for the three dimensional point cloud rapid registering method with colouring information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103591906A (en) * 2012-08-13 2014-02-19 上海威塔数字科技有限公司 A method for carrying out three dimensional tracking measurement on a moving object through utilizing two dimensional coding
CN103955939A (en) * 2014-05-16 2014-07-30 重庆理工大学 Boundary feature point registering method for point cloud splicing in three-dimensional scanning system
CN104484648A (en) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewing angle obstacle detection method for robot based on outline recognition


Also Published As

Publication number Publication date
CN106204603A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN110285793B (en) Intelligent vehicle track measuring method based on binocular stereo vision system
CN106780590B (en) Method and system for acquiring depth map
CN110322702B (en) Intelligent vehicle speed measuring method based on binocular stereo vision system
CN106097348B (en) A kind of fusion method of three-dimensional laser point cloud and two dimensional image
CN104504671B (en) Method for generating virtual-real fusion image for stereo display
CN102800127B (en) Light stream optimization based three-dimensional reconstruction method and device
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN106683173A (en) Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
CN104034269B (en) A kind of monocular vision measuring method and device
CN109708578A (en) A kind of plant phenotype parameter measuring apparatus, method and system
US20180108143A1 (en) Height measuring system and method
CN108038902A (en) A kind of high-precision three-dimensional method for reconstructing and system towards depth camera
CN106920276B (en) A kind of three-dimensional rebuilding method and system
CN106296825B (en) A kind of bionic three-dimensional information generating system and method
CN108717728A (en) A kind of three-dimensional reconstruction apparatus and method based on various visual angles depth camera
CN108564536B (en) Global optimization method of depth map
CN102750697A (en) Parameter calibration method and device
CN107578376A (en) The fork division of distinguished point based cluster four and the image split-joint method of local transformation matrix
CN104036488A (en) Binocular vision-based human body posture and action research method
CN103630091B (en) Leaf area measurement method based on laser and image processing techniques
CN108921864A (en) A kind of Light stripes center extraction method and device
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
CN111028221B (en) Airplane skin butt-joint measurement method based on linear feature detection
CN109816779A (en) A method of artificial forest forest model, which is rebuild, using smart phone obtains single wooden parameter

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant