CN105894574A - Binocular three-dimensional reconstruction method - Google Patents

Binocular three-dimensional reconstruction method

Info

Publication number
CN105894574A
Authority
CN
China
Prior art keywords
point
characteristic
feature point
image
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610195387.XA
Other languages
Chinese (zh)
Other versions
CN105894574B (en)
Inventor
马建设
魏云峰
刘彤
苏萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Frant Photoelectric Technology Co ltd
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201610195387.XA priority Critical patent/CN105894574B/en
Publication of CN105894574A publication Critical patent/CN105894574A/en
Application granted granted Critical
Publication of CN105894574B publication Critical patent/CN105894574B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Abstract

The invention discloses a binocular three-dimensional reconstruction method comprising the following steps: 1) acquiring images of an object to be reconstructed with two image acquisition devices of the same model to obtain a left image and a right image; 2) calibrating the image acquisition devices by the chessboard method, computing their internal and external parameters and lens distortion coefficients, and processing the left and right images to remove distortion; 3) extracting features from the two images processed in step 2) to obtain the feature points of the two images; 4) matching the feature points of the two images obtained in step 3) to obtain feature point pairs; 5) performing epipolar geometry constraint detection on the obtained feature point pairs to remove mismatched pairs; and 6) computing, from the retained feature point pairs, the three-dimensional coordinates of the corresponding points in the world coordinate system. The binocular three-dimensional reconstruction method reduces abnormal points and yields an accurate three-dimensional reconstruction model.

Description

Binocular three-dimensional reconstruction method
[Technical Field]
The present invention relates to computer vision, and in particular to a binocular three-dimensional reconstruction method.
[Background Art]
Three-dimensional model reconstruction is an important research field in computer vision and has been widely applied in three major areas: industry, medicine, and entertainment. By technical approach, three-dimensional model reconstruction currently falls into four broad classes: reconstruction methods using structured light, tomographic scanning, time of flight, and stereo imaging. Stereo-imaging reconstruction methods are further divided into two classes: methods using a single image and methods using multiple images. Single-image methods can be summarized as shape-from-X approaches, such as shape from contour (reconstruction based on contours), shape from shading (reconstruction based on shading), and shape from focus (reconstruction based on focus). Single-image methods, however, are limited by insufficient image information, so the accuracy of depth recovery is inadequate. Methods using multiple images avoid this problem, and binocular three-dimensional model reconstruction belongs to this multiple-image class. A typical binocular three-dimensional reconstruction pipeline comprises: (1) image acquisition, (2) feature extraction, (3) feature matching, and (4) three-dimensional coordinate computation. Its problem is that abnormal points readily appear in step (4), which degrades the quality of the reconstructed three-dimensional model.
[Summary of the Invention]
The technical problem to be solved by the present invention is to remedy the above deficiencies of the prior art by proposing a binocular three-dimensional reconstruction method that effectively reduces abnormal points and obtains an accurate three-dimensional reconstruction model.
The technical problem of the present invention is solved by the following technical solution:
A binocular three-dimensional reconstruction method comprises the following steps: 1) using two image acquisition devices of the same model to acquire images of the object to be reconstructed, obtaining a left image and a right image respectively; 2) calibrating the image acquisition devices by the chessboard method, computing their internal and external parameters and lens distortion coefficients, and processing the two images according to these parameters to remove the distortion in the images; 3) performing feature extraction on the two images processed in step 2) to obtain the feature points of the two images; 4) matching the feature points of the two images obtained in step 3) to obtain feature point pairs; 5) performing epipolar geometry constraint detection on the feature point pairs obtained in step 4) to remove mismatched pairs; 6) using the two-dimensional coordinates of the feature point pairs retained after step 5) to compute the three-dimensional coordinates of their corresponding points in the world coordinate system.
Compared with the prior art, the present invention provides the following benefit:
The binocular three-dimensional reconstruction method of the present invention performs epipolar geometry constraint detection on the feature point pairs, marks pairs that do not satisfy the epipolar constraint as mismatches, and carries out three-dimensional model reconstruction only after these mismatched pairs have been removed. This effectively reduces the mismatch rate and the number of abnormal points in the three-dimensional coordinate computation, and yields a more accurate three-dimensional reconstruction model.
[Brief Description of the Drawings]
Fig. 1 is a flow chart of the binocular three-dimensional reconstruction method of the specific embodiment of the invention;
Fig. 2 is a schematic diagram of the epipolar geometry involved in the binocular three-dimensional reconstruction method of the specific embodiment of the invention.
[Detailed Description of the Embodiments]
The present invention is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
The insight of the invention is that, in binocular three-dimensional model reconstruction, the main reconstruction errors come from points in the three-dimensional model that do not correspond to the actual scene. During reconstruction, feature extraction is easily affected by the various kinds of noise that exist widely in the environment and by residual errors of the distortion correction of the two lenses, so mismatches occur during feature matching because of feature similarity, which in turn produces the above points that do not correspond to the actual scene. The present invention adopts epipolar geometry constraint detection to effectively remove such mismatched feature point pairs.
As shown in Fig. 1, the three-dimensional model reconstruction method of this embodiment comprises the following steps:
1) Using two image acquisition devices of the same model, acquire images of the object to be reconstructed, obtaining a left image and a right image respectively.
Specifically, a binocular imaging system is set up; the image acquisition device may be a camera or a projector. When acquiring images, ensure that the two image acquisition devices have the same CCD size and the same lens parameters, that their lenses are placed horizontally and in parallel, that the optical axes of the lenses lie in the same plane with an in-plane angle between the two optical axes of no more than 30°, and that both acquired images contain the entire object to be reconstructed.
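Purely as an illustrative sketch (not part of the patented method), a pair of frames from two identically configured cameras could be grabbed with OpenCV roughly as follows; the device indices 0 and 1 and the use of cv2.VideoCapture are assumptions about the acquisition hardware.

```python
import cv2

# Open the two identical cameras; indices 0 and 1 are assumed device IDs.
cap_left = cv2.VideoCapture(0)
cap_right = cv2.VideoCapture(1)

ok_l, left_image = cap_left.read()    # left view of the object to be reconstructed
ok_r, right_image = cap_right.read()  # right view of the object to be reconstructed
if not (ok_l and ok_r):
    raise RuntimeError("failed to grab an image pair from both cameras")

cap_left.release()
cap_right.release()
```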
2) Calibration and correction: calibrate the image acquisition devices by the chessboard method, compute their internal and external parameters and lens distortion coefficients, and process the two images according to these parameters to remove the distortion in the images.
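A minimal calibration-and-undistortion sketch using OpenCV's chessboard routines is shown below. It assumes a 9x6 inner-corner board and a list of grayscale calibration views per camera, and it follows the standard OpenCV workflow rather than reproducing the exact computation of this embodiment.

```python
import cv2
import numpy as np

def calibrate_and_undistort(calib_images, image, board_size=(9, 6), square=1.0):
    """Chessboard calibration of one camera, then undistortion of `image`.
    `calib_images` is an assumed list of grayscale chessboard views."""
    # 3D chessboard corner positions in the board's own coordinate frame.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for gray in calib_images:
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix, lens distortion coefficients, and per-view extrinsics.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, calib_images[0].shape[::-1], None, None)

    # Remove lens distortion from the working image.
    undistorted = cv2.undistort(image, K, dist)
    return K, dist, undistorted
```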
3) Extract feature points: perform feature extraction on the two images processed in step 2) to obtain the feature points of the two images.
In this step, feature points are extracted from each image; for example, points of interest detected with the autocorrelation matrix of the intensity function can be used as feature points. Specifically:
In each image, for a given pixel (x, y) with gray value I(x, y), set a local translation vector (Δx, Δy); the auto-correlation function is then:

G(x, y) = \sum [I(x, y) - I(x + \Delta x, y + \Delta y)]^2

where [I(x, y) - I(x + Δx, y + Δy)]^2 is the intensity gradient term of the image. Given a Gaussian window function g(x, y), the feature point detection function is:

R(\Delta x, \Delta y) = \sum g(x, y) [I(x, y) - I(x + \Delta x, y + \Delta y)]^2

Within the window function g(x, y), if the value R(Δx, Δy) exceeds a set threshold, a feature point has been found. It should be noted that many feature extraction methods exist; the method is not limited to the above autocorrelation-based detection of the intensity function, and other methods are not enumerated here.
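The autocorrelation-based detection described above is closely related to the Harris corner detector, which is built on the same gradient autocorrelation matrix. The sketch below uses OpenCV's implementation as one possible realization; the block size, aperture, Harris parameter k and the relative threshold factor 0.01 are assumptions, not values from the patent.

```python
import cv2
import numpy as np

def detect_feature_points(gray, block_size=2, ksize=3, k=0.04, thresh_factor=0.01):
    """Detect corner-like feature points via the gradient autocorrelation matrix
    (Harris response), keeping points whose response exceeds a set threshold."""
    gray_f = np.float32(gray)
    response = cv2.cornerHarris(gray_f, block_size, ksize, k)
    # Threshold relative to the strongest response; the factor is illustrative.
    ys, xs = np.where(response > thresh_factor * response.max())
    return np.stack([xs, ys], axis=1)  # (x, y) pixel coordinates of feature points
```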
4) Match feature points: using the feature points of the two images obtained in step 3), perform feature point matching to obtain feature point pairs. For example, a nearest-neighbour matching method can be used to match the feature points and obtain the pairs.
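One common realization of nearest-neighbour matching (a sketch, not the specific choice of this embodiment) pairs descriptors computed around detected keypoints; ORB keypoints and a Hamming brute-force matcher are used here as an assumed stand-in for the feature points of step 3).

```python
import cv2

def match_feature_points(left_gray, right_gray):
    """Nearest-neighbour matching of feature descriptors between the two views.
    ORB + brute-force Hamming matching is one assumed, illustrative choice."""
    orb = cv2.ORB_create()
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

    # Return matched (left, right) pixel coordinates as feature point pairs.
    return [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches]
```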
5) Perform epipolar constraint detection on the feature point pairs obtained in step 4) and remove the mismatched pairs.
In this step, the feature point pairs obtained by the preceding matching are subjected to epipolar geometry constraint detection, and pairs that violate the constraint because of noise or distortion are removed, which improves the accuracy of the subsequent three-dimensional reconstruction.
In this embodiment, the epipolar geometry constraint detection comprises an epipolar-line constraint procedure. Specifically, judge whether the epipolar constraint values of the two feature points of a pair are both smaller than a set threshold; if so, the pair is retained; if not, it is a mismatched feature point pair.
Fig. 2 shows the basic principle of the epipolar geometry involved in the three-dimensional reconstruction method. Here m1, m2 are a matched feature point pair, and Xw is the corresponding point of m1, m2 in the world coordinate system, so the projections of Xw in the left and right view images are m1 and m2 respectively. x1-y1 is the epipolar coordinate system of the left view and x2-y2 that of the right view. L1, L2 are the optical axes of the lenses of the two image acquisition devices, O1, O2 are the optical centres of the lenses, and e1, e2 are the epipoles, i.e. the projections of the other optical centre in each view: e1 is the projection of O2 in the left view image and e2 is the projection of O1 in the right view image. The lines through the epipoles and the image points are the epipolar lines, and the line joining O1 and O2 is the baseline. The epipolar constraint condition is: the projection onto the right view image of the point m1 (the projection of Xw on the left view image), denoted m1′, must lie on the right-view epipolar line or within a small distance of it; likewise, the projection onto the left view image of the point m2 (the projection of Xw on the right view image), denoted m2′, must lie on the left-view epipolar line or within a small distance of it. If the distance exceeds a certain limit, the pair is regarded as a mismatched feature point pair.
According to the epipolar geometry principle, the coordinates of the feature point pair m1, m2 are normalized: m1 = [x_l, y_l]^T is normalized to \tilde{m}_1 and m2 = [x_r, y_r]^T is normalized to \tilde{m}_2. Likewise, the projections of each feature point in the other view are normalized, m1′ and m2′ becoming \tilde{m}_1' and \tilde{m}_2'. The corresponding point X_W = [X, Y, Z]^T of the pair in the world coordinate system is normalized to \tilde{X}_W, and the coordinates of the epipoles e1, e2 are normalized to \tilde{e}_1, \tilde{e}_2. With the internal and external parameters obtained during the calibration of step 2), where F_1, F_2 are the basis matrices in the internal parameters and Q_1, Q_2 the external parameter matrices, the following relations hold:

\tilde{m}_2' = F_1 Q_1 \tilde{X}_W,    \tilde{m}_1' = F_2 Q_2 \tilde{X}_W

According to the positional relationship shown in Fig. 2, there exist planar line-equation matrices P_1, P_2 such that:

P_1(\tilde{m}_1 - \tilde{e}_1) = 0,    P_2(\tilde{m}_2 - \tilde{e}_2) = 0

For ease of calculation, define the vectors p_1 = P_1 (1\ 1\ 0)^T and p_2 = P_2 (1\ 1\ 0)^T. The epipolar constraint value C_1 on the left view image (where feature point m1 lies) and the epipolar constraint value C_2 on the right view image (where feature point m2 lies) are then:

C_1 = [P_1(\tilde{m}_2' - \tilde{e}_1)]^2 / \mathrm{tr}(p_1 p_1^T),    C_2 = [P_2(\tilde{m}_1' - \tilde{e}_2)]^2 / \mathrm{tr}(p_2 p_2^T)

Set C_{thres} as the epipolar constraint threshold. If C_1 ≤ C_{thres} and C_2 ≤ C_{thres}, the feature point pair is judged to satisfy the epipolar constraint and is retained; otherwise the pair is regarded as a mismatched feature point pair and is removed.
In summary, the epipolar-line constraint detection within the epipolar geometry constraint detection relies on the principle that the projection of feature point m1 onto the right view image (denoted m1′) must lie on, or within a small distance of, the right-view epipolar line, and the projection of feature point m2 onto the left view image (denoted m2′) must lie on, or within a small distance of, the left-view epipolar line. Carrying out the detection and judgement described above, feature point pairs that do not satisfy this principle are removed, thereby eliminating mismatched pairs.
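This embodiment expresses the epipolar check through the matrices P_1, P_2 above. A widely used equivalent formulation measures the distance of each point to the epipolar line induced by the fundamental matrix; the sketch below uses that standard point-to-line distance with an assumed pixel threshold, rather than the exact C_1/C_2 quantities defined here.

```python
import numpy as np

def filter_by_epipolar_constraint(pts_left, pts_right, F, thresh=1.5):
    """Keep only pairs whose points lie close to each other's epipolar lines.
    `F` is the fundamental matrix (e.g. from cv2.findFundamentalMat);
    `thresh` (pixels) is an assumed tolerance."""
    pts_left = np.asarray(pts_left, np.float64)
    pts_right = np.asarray(pts_right, np.float64)
    hl = np.hstack([pts_left, np.ones((len(pts_left), 1))])   # homogeneous left points
    hr = np.hstack([pts_right, np.ones((len(pts_right), 1))]) # homogeneous right points

    lines_r = hl @ F.T   # epipolar lines in the right image induced by left points
    lines_l = hr @ F     # epipolar lines in the left image induced by right points

    def point_line_dist(pts_h, lines):
        # |ax + by + c| / sqrt(a^2 + b^2) for each point/line pair.
        return np.abs(np.sum(pts_h * lines, axis=1)) / np.hypot(lines[:, 0], lines[:, 1])

    d_r = point_line_dist(hr, lines_r)  # right point to its epipolar line
    d_l = point_line_dist(hl, lines_l)  # left point to its epipolar line
    keep = (d_r <= thresh) & (d_l <= thresh)
    return pts_left[keep], pts_right[keep]
```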
Preferably, the epipolar geometry constraint detection also includes a depth-continuity detection procedure for the feature points. Specifically, judge whether both feature points of a pair satisfy the depth continuity condition; if so, the pair is retained; if not, i.e. the condition is violated beyond the allowed error range, the pair is regarded as a mismatched feature point pair.
In this embodiment, the base view is defined as the view containing feature point m1 and the reference view as the view containing feature point m2. For a given feature point m1 in the base view with coordinates (x_l, y_l), the depth information is computed and the depth value of this point is obtained as d(x_l, y_l). Similarly, for feature point m2 in the reference view with coordinates (x_r, y_r), the depth information is computed and the depth value of this point is obtained as d(x_r, y_r).
Set k feature directions. On the s-th feature direction, suppose the points with depth information existing around the given feature point m1 lie within the range (x_l + Δx_s, y_l + Δy_s); compute ΔE_s = (Δx_s + Δy_s)^2 and designate the point that minimizes ΔE_s as the candidate point on this s-th feature direction. With k feature directions there are thus k candidate points, corresponding to the 1st feature direction, the 2nd feature direction, ..., the s-th feature direction, ..., the k-th feature direction, where s = 1, 2, 3, ..., k.
Define T_s as the continuity-constraint direction characteristic on the s-th feature direction. Set T_{thres} as the direction threshold and compute the maximum n that satisfies T_s ≤ T_{thres}; the n groups of Δx_i, Δy_i values corresponding to this maximum n are taken as the Δx_i, Δy_i values to be solved for. Here n is the maximum number of offset points that can be chosen on the s-th feature direction, around the corresponding point of the pair m1, m2 in the world coordinate system, while the condition T_s ≤ T_{thres} is satisfied; Δx_i is the horizontal offset of the i-th of the n points and Δy_i is the vertical offset of the i-th of the n points.
On the s-th feature direction, the depth-continuity detection value of feature point m1 on the base image, with coordinates (x_l, y_l), is defined as:

D_s(x_l, y_l) = \sum_{i=1}^{n} \left[ \frac{d(x_l, y_l) - d(x_l + \Delta x_i, y_l + \Delta y_i)}{\Delta x_i + \Delta y_i} \right]^2

and the depth-continuity detection value of feature point m2 on the reference image, with coordinates (x_r, y_r), is defined as:

D_s(x_r, y_r) = \sum_{i=1}^{n} \left[ \frac{d(x_r, y_r) - d(x_r + \Delta x_i, y_r + \Delta y_i)}{\Delta x_i + \Delta y_i} \right]^2

where d(x_l + Δx_i, y_l + Δy_i) is the depth value, in the base view, of the i-th of the n points obtained above, and d(x_r + Δx_i, y_r + Δy_i) is the depth value, in the reference view, of the i-th of the n points.
Set D_{thres} as the depth-continuity detection threshold. On the s-th feature direction, if D_s(x_l, y_l) ≤ D_{thres}, feature point m1 on the base image has depth continuity in that direction; if D_s(x_r, y_r) ≤ D_{thres}, feature point m2 on the reference image has depth continuity in that direction.
If, for all s = 1, 2, 3, ..., k, the feature point pair m1, m2 exhibits the same depth continuity, the pair is judged to satisfy the continuity constraint. Otherwise, the pair is regarded as a mismatched feature point pair and is removed.
In general, if feature point m1 has depth continuity along a certain feature direction on the base image, then feature point m2 has depth continuity along that direction on the reference image; and if feature point m2 has depth continuity along a certain direction on the reference image, then feature point m1 also has depth continuity along that direction on the base image. The continuity detection described above, which follows this principle, removes the feature point pairs that violate it and thus eliminates mismatched pairs.
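The sketch below is a simplified illustration of the idea behind the depth-continuity test, not the exact D_s/T_s computation of this embodiment: a point's depth is compared with neighbouring depths along chosen offset directions, and the point passes only if the accumulated normalized variation stays below a threshold. The `offsets` list and the use of the offset magnitude as denominator are assumptions.

```python
import numpy as np

def depth_continuity_ok(depth, x, y, offsets, d_thresh):
    """Simplified depth-continuity check for one feature point at (x, y) in a
    depth map. `offsets` is an assumed list of (dx, dy) neighbour displacements."""
    d0 = depth[y, x]
    score = 0.0
    for dx, dy in offsets:
        dn = depth[y + dy, x + dx]
        # Normalize the depth difference by the offset size (a simplification
        # of the Δx + Δy denominator used above).
        score += ((d0 - dn) / float(abs(dx) + abs(dy))) ** 2
    return score <= d_thresh
```

A pair (m1 in the base view, m2 in the reference view) would then be kept only if both points pass the test with the same set of directions.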
Further preferably, the epipolar geometry constraint detection also includes an ordering-constraint detection procedure: for two feature point pairs m1, m2 and w1, w2, judge whether, when the abscissa of m1 is greater than the abscissa of w1, the abscissa of m2 is also greater than the abscissa of w2, and whether, when the ordinate of m1 is greater than the ordinate of w1, the ordinate of m2 is also greater than the ordinate of w2. If both hold, the pair m1, m2 is retained; if not, the pair m1, m2 is a mismatched feature point pair. Because, in the unoccluded portions of the base image and the reference image, any two pairs of feature points keep the same relative positional relationship, the ordering-constraint detection carried out according to this principle removes the pairs that violate it and thus eliminates mismatched pairs.
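A direct sketch of the ordering check just described, with points given as (x, y) tuples:

```python
def ordering_consistent(m1, m2, w1, w2):
    """Ordering-constraint check for two feature point pairs (m1, m2) and (w1, w2):
    when m1 is to the right of (or above) w1 in the base view, m2 must likewise be
    to the right of (or above) w2 in the reference view; otherwise (m1, m2) is
    treated as a mismatch."""
    x_ok = (m2[0] > w2[0]) if (m1[0] > w1[0]) else True
    y_ok = (m2[1] > w2[1]) if (m1[1] > w1[1]) else True
    return x_ok and y_ok
```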
In summary, the epipolar geometry constraint detection procedure arranged in step 5) effectively removes mismatched feature point pairs and thereby improves the accuracy of the subsequent reconstruction result.
6) Using the two-dimensional coordinates of the feature point pairs retained after step 5), compute the three-dimensional coordinates of their corresponding points in the world coordinate system. After the three-dimensional coordinates are computed, the three-dimensional model can be reconstructed.
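One standard way to recover the world coordinates from the retained pairs is linear triangulation; the sketch below assumes the 3x4 projection matrices of the two calibrated cameras are available from step 2) and uses cv2.triangulatePoints rather than the specific computation of this embodiment.

```python
import cv2
import numpy as np

def reconstruct_points(P_left, P_right, pts_left, pts_right):
    """Triangulate retained feature point pairs into 3D world coordinates.
    `P_left`, `P_right` are the 3x4 camera projection matrices (assumed known)."""
    pts_l = np.asarray(pts_left, np.float64).T   # 2xN array of left image points
    pts_r = np.asarray(pts_right, np.float64).T  # 2xN array of right image points

    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4xN homogeneous
    X = (X_h[:3] / X_h[3]).T                     # Nx3 Euclidean world coordinates
    return X
```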
In summary, the binocular three-dimensional reconstruction method of this embodiment performs epipolar geometry constraint detection on the feature point pairs, marks the pairs that do not satisfy the epipolar geometry constraint as mismatches, and carries out three-dimensional model reconstruction after these mismatched pairs have been removed. This effectively reduces the mismatch rate and the number of abnormal points in the three-dimensional coordinate computation, and yields an accurate three-dimensional reconstruction model.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention should not be regarded as being limited to these descriptions. For persons of ordinary skill in the technical field of the invention, several substitutions or obvious modifications made without departing from the inventive concept, with identical performance or use, shall all be deemed to fall within the protection scope of the present invention.

Claims (9)

1. A binocular three-dimensional reconstruction method, characterized by comprising the following steps: 1) using two image acquisition devices of the same model to acquire images of an object to be reconstructed, obtaining a left image and a right image respectively; 2) calibrating the image acquisition devices by the chessboard method, computing the internal and external parameters and lens distortion coefficients of the image acquisition devices, and processing the two images according to the internal and external parameters and lens distortion coefficients to remove the distortion in the images; 3) performing feature extraction on the two images processed in step 2) to obtain the feature points of the two images; 4) matching the feature points of the two images obtained in step 3) to obtain feature point pairs; 5) performing epipolar geometry constraint detection on the feature point pairs obtained in step 4) and removing mismatched feature point pairs; 6) using the two-dimensional coordinates of the feature point pairs retained after step 5) to compute the three-dimensional coordinates of the corresponding points of the feature point pairs in the world coordinate system.
2. The binocular three-dimensional reconstruction method according to claim 1, characterized in that in step 5), the epipolar geometry constraint detection includes epipolar-line constraint detection: judging whether the epipolar constraint values of the two feature points of a feature point pair are both smaller than a set threshold; if so, the pair is retained; if not, it is a mismatched feature point pair.
3. The binocular three-dimensional reconstruction method according to claim 2, characterized in that the epipolar constraint values C_1, C_2 of a feature point pair m1, m2 are calculated according to the following formulas:
C_1 = [P_1(\tilde{m}_2' - \tilde{e}_1)]^2 / \mathrm{tr}(p_1 p_1^T),    C_2 = [P_2(\tilde{m}_1' - \tilde{e}_2)]^2 / \mathrm{tr}(p_2 p_2^T)
wherein \tilde{e}_1 denotes the normalized coordinates of the epipole in the left view of the epipolar geometry, and \tilde{e}_2 denotes the normalized coordinates of the epipole in the right view;
\tilde{m}_1', \tilde{m}_2' are calculated according to the following formulas:
\tilde{m}_2' = F_1 Q_1 \tilde{X}_W,    \tilde{m}_1' = F_2 Q_2 \tilde{X}_W
F_1, F_2 are the basis matrices in the internal parameters of the two image acquisition devices calculated in step 2), and Q_1, Q_2 are the external parameter matrices of the two image acquisition devices calculated in step 2); X_W is the corresponding point, in the world coordinate system, of the feature point pair m1, m2; \tilde{X}_W is the matrix obtained after normalizing the three-dimensional coordinates of the point X_W;
P_1, P_2 are obtained by solving the following equations:
P_1(\tilde{m}_1 - \tilde{e}_1) = 0,    P_2(\tilde{m}_2 - \tilde{e}_2) = 0
\tilde{m}_1 is the matrix obtained after normalizing the two-dimensional coordinates of feature point m1; \tilde{m}_2 is the matrix obtained after normalizing the two-dimensional coordinates of feature point m2;
and the vectors p_1 = P_1 (1\ 1\ 0)^T, p_2 = P_2 (1\ 1\ 0)^T.
4. The binocular three-dimensional reconstruction method according to claim 1, characterized in that in step 5), the epipolar geometry constraint detection also includes depth-continuity detection of the feature points: judging whether both feature points of a feature point pair satisfy the depth continuity condition; if so, the pair is retained; if not, it is a mismatched feature point pair.
5. The binocular three-dimensional reconstruction method according to claim 4, characterized in that the depth-continuity detection values of a feature point pair m1, m2 on the s-th chosen feature direction are calculated according to the following formulas:
D_s(x_l, y_l) = \sum_{i=1}^{n} \left[ \frac{d(x_l, y_l) - d(x_l + \Delta x_i, y_l + \Delta y_i)}{\Delta x_i + \Delta y_i} \right]^2

D_s(x_r, y_r) = \sum_{i=1}^{n} \left[ \frac{d(x_r, y_r) - d(x_r + \Delta x_i, y_r + \Delta y_i)}{\Delta x_i + \Delta y_i} \right]^2
wherein s takes a positive integer between 1 and k, and k denotes the number of chosen feature directions; (x_l, y_l) denotes the coordinates of feature point m1 and (x_r, y_r) the coordinates of feature point m2; d(x_l, y_l) denotes the depth value of feature point m1 and d(x_r, y_r) the depth value of feature point m2;
n, Δx_i, Δy_i are obtained by solving the corresponding equation, the solution group for which n is maximal being taken as the final values of n, Δx_i, Δy_i; (Δx_s, Δy_s) is the coordinate offset of the point, among the points with depth information on the s-th feature direction, for which (Δx_s + Δy_s)^2 is minimal; T_{thres} is the set threshold on the s-th feature direction;
d(x_l + Δx_i, y_l + Δy_i) denotes the depth value, in the base view, of the i-th of the n points obtained by the solution, and d(x_r + Δx_i, y_r + Δy_i) denotes the depth value, in the reference view, of the i-th of the n points; wherein the base view is the view containing feature point m1 and the reference view is the view containing feature point m2.
After the calculation, judge whether the depth-continuity detection values of the feature point pair m1, m2 on all k feature directions are smaller than the set threshold; if so, the pair is retained; if not, it is a mismatched feature point pair.
6. The binocular three-dimensional reconstruction method according to claim 1, characterized in that in step 5), the epipolar geometry constraint detection also includes an ordering-constraint detection procedure: for two feature point pairs m1, m2 and w1, w2, judging whether, when the abscissa of m1 is greater than the abscissa of w1, the abscissa of m2 is greater than the abscissa of w2, and whether, when the ordinate of m1 is greater than the ordinate of w1, the ordinate of m2 is greater than the ordinate of w2; if both hold, the feature point pair m1, m2 is retained; if not, the feature point pair m1, m2 is a mismatched feature point pair.
7. The binocular three-dimensional reconstruction method according to claim 1, characterized in that in step 3), points of interest detected in the image with the autocorrelation matrix of the intensity function are used as feature points.
8. The binocular three-dimensional reconstruction method according to claim 7, characterized in that the eigenvalue R(Δx, Δy) of a pixel (x, y) in the image is calculated as:
R(\Delta x, \Delta y) = \sum g(x, y) [I(x, y) - I(x + \Delta x, y + \Delta y)]^2
wherein g(x, y) denotes the Gaussian window function; I(x, y) denotes the gray value of pixel (x, y); (Δx, Δy) denotes the set local translation vector; and I(x + Δx, y + Δy) denotes the gray value of pixel (x + Δx, y + Δy);
after the calculation, judge whether the eigenvalue R(Δx, Δy) of pixel (x, y) exceeds the set threshold; if so, the pixel is a feature point; if not, it is not a feature point.
9. The binocular three-dimensional reconstruction method according to claim 1, characterized in that in step 4), a nearest-neighbour matching method is used to match the feature points and obtain the feature point pairs.
CN201610195387.XA 2016-03-30 2016-03-30 A kind of binocular three-dimensional reconstruction method Expired - Fee Related CN105894574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610195387.XA CN105894574B (en) 2016-03-30 2016-03-30 A kind of binocular three-dimensional reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610195387.XA CN105894574B (en) 2016-03-30 2016-03-30 A kind of binocular three-dimensional reconstruction method

Publications (2)

Publication Number Publication Date
CN105894574A true CN105894574A (en) 2016-08-24
CN105894574B CN105894574B (en) 2018-09-25

Family

ID=57014112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610195387.XA Expired - Fee Related CN105894574B (en) 2016-03-30 2016-03-30 A kind of binocular three-dimensional reconstruction method

Country Status (1)

Country Link
CN (1) CN105894574B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106926241A (en) * 2017-03-20 2017-07-07 深圳市智能机器人研究院 A kind of the tow-armed robot assembly method and system of view-based access control model guiding
CN107170010A (en) * 2017-05-11 2017-09-15 四川大学 System calibration method, device and three-dimensional reconstruction system
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 A kind of intelligent target detection and dimension measurement method based on human vision model
CN107424196A (en) * 2017-08-03 2017-12-01 江苏钜芯集成电路技术股份有限公司 A kind of solid matching method, apparatus and system based on the weak more mesh cameras of demarcation
CN108510530A (en) * 2017-02-28 2018-09-07 深圳市朗驰欣创科技股份有限公司 A kind of three-dimensional point cloud matching process and its system
CN108537831A (en) * 2018-03-09 2018-09-14 中北大学 The method and device of CT imagings is carried out to increasing material manufacturing workpiece
CN109087382A (en) * 2018-08-01 2018-12-25 宁波发睿泰科智能科技有限公司 A kind of three-dimensional reconstruction method and 3-D imaging system
CN109155822A (en) * 2017-11-28 2019-01-04 深圳市大疆创新科技有限公司 Image processing method and device
CN109636903A (en) * 2018-12-24 2019-04-16 华南理工大学 A kind of binocular three-dimensional reconstruction method based on shake
CN109859314A (en) * 2019-03-12 2019-06-07 上海曼恒数字技术股份有限公司 Three-dimensional rebuilding method, device, electronic equipment and storage medium
CN110223355A (en) * 2019-05-15 2019-09-10 大连理工大学 A kind of feature mark point matching process based on dual epipolar-line constraint
CN111008602A (en) * 2019-12-06 2020-04-14 青岛海之晨工业装备有限公司 Two-dimensional and three-dimensional visual combined lineation feature extraction method for small-curvature thin-wall part
CN111133474A (en) * 2017-09-29 2020-05-08 日本电气方案创新株式会社 Image processing apparatus, image processing method, and computer-readable recording medium
CN111160232A (en) * 2019-12-25 2020-05-15 上海骏聿数码科技有限公司 Front face reconstruction method, device and system
WO2020173052A1 (en) * 2019-02-28 2020-09-03 未艾医疗技术(深圳)有限公司 Three-dimensional image measurement method, electronic device, storage medium, and program product
CN111754449A (en) * 2019-03-27 2020-10-09 北京外号信息技术有限公司 Scene reconstruction method based on optical communication device and corresponding electronic equipment
CN111882618A (en) * 2020-06-28 2020-11-03 北京石油化工学院 Left and right view feature point matching processing method, terminal and system in binocular ranging
CN112215871A (en) * 2020-09-29 2021-01-12 武汉联影智融医疗科技有限公司 Moving target tracking method and device based on robot vision
CN113689555A (en) * 2021-09-09 2021-11-23 武汉惟景三维科技有限公司 Binocular image feature matching method and system
CN117523431A (en) * 2023-11-17 2024-02-06 中国科学技术大学 Firework detection method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065351A (en) * 2012-12-16 2013-04-24 华南理工大学 Binocular three-dimensional reconstruction method
CN105069839A (en) * 2015-08-03 2015-11-18 清华大学深圳研究生院 Computed hologram generation method for three-dimensional point cloud model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065351A (en) * 2012-12-16 2013-04-24 华南理工大学 Binocular three-dimensional reconstruction method
CN105069839A (en) * 2015-08-03 2015-11-18 清华大学深圳研究生院 Computed hologram generation method for three-dimensional point cloud model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TRUCCO E ET AL.: "《Prentice-Hall》", 31 December 1998 *
宁柯琳: "基于双目立体视觉的三维重建平台研究与实现", 《中国优秀硕士学位论文全文数据库-信息科技辑》 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510530A (en) * 2017-02-28 2018-09-07 深圳市朗驰欣创科技股份有限公司 A kind of three-dimensional point cloud matching process and its system
CN106926241A (en) * 2017-03-20 2017-07-07 深圳市智能机器人研究院 A kind of the tow-armed robot assembly method and system of view-based access control model guiding
CN107170010A (en) * 2017-05-11 2017-09-15 四川大学 System calibration method, device and three-dimensional reconstruction system
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 A kind of intelligent target detection and dimension measurement method based on human vision model
CN107392929B (en) * 2017-07-17 2020-07-10 河海大学常州校区 Intelligent target detection and size measurement method based on human eye vision model
CN107424196A (en) * 2017-08-03 2017-12-01 江苏钜芯集成电路技术股份有限公司 A kind of solid matching method, apparatus and system based on the weak more mesh cameras of demarcation
CN107424196B (en) * 2017-08-03 2021-02-26 江苏钜芯集成电路技术股份有限公司 Stereo matching method, device and system based on weak calibration multi-view camera
CN111133474A (en) * 2017-09-29 2020-05-08 日本电气方案创新株式会社 Image processing apparatus, image processing method, and computer-readable recording medium
CN111133474B (en) * 2017-09-29 2023-09-19 日本电气方案创新株式会社 Image processing apparatus, image processing method, and computer-readable recording medium
CN109155822A (en) * 2017-11-28 2019-01-04 深圳市大疆创新科技有限公司 Image processing method and device
WO2019104453A1 (en) * 2017-11-28 2019-06-06 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN108537831A (en) * 2018-03-09 2018-09-14 中北大学 The method and device of CT imagings is carried out to increasing material manufacturing workpiece
CN108537831B (en) * 2018-03-09 2021-06-15 中北大学 Method and device for performing CT imaging on additive manufacturing workpiece
CN109087382A (en) * 2018-08-01 2018-12-25 宁波发睿泰科智能科技有限公司 A kind of three-dimensional reconstruction method and 3-D imaging system
CN109636903A (en) * 2018-12-24 2019-04-16 华南理工大学 A kind of binocular three-dimensional reconstruction method based on shake
CN109636903B (en) * 2018-12-24 2020-09-15 华南理工大学 Binocular three-dimensional reconstruction method based on jitter
WO2020173052A1 (en) * 2019-02-28 2020-09-03 未艾医疗技术(深圳)有限公司 Three-dimensional image measurement method, electronic device, storage medium, and program product
CN109859314A (en) * 2019-03-12 2019-06-07 上海曼恒数字技术股份有限公司 Three-dimensional rebuilding method, device, electronic equipment and storage medium
CN111754449A (en) * 2019-03-27 2020-10-09 北京外号信息技术有限公司 Scene reconstruction method based on optical communication device and corresponding electronic equipment
CN110223355A (en) * 2019-05-15 2019-09-10 大连理工大学 A kind of feature mark point matching process based on dual epipolar-line constraint
CN110223355B (en) * 2019-05-15 2021-01-05 大连理工大学 Feature mark point matching method based on dual epipolar constraint
CN111008602A (en) * 2019-12-06 2020-04-14 青岛海之晨工业装备有限公司 Two-dimensional and three-dimensional visual combined lineation feature extraction method for small-curvature thin-wall part
CN111160232A (en) * 2019-12-25 2020-05-15 上海骏聿数码科技有限公司 Front face reconstruction method, device and system
CN111882618A (en) * 2020-06-28 2020-11-03 北京石油化工学院 Left and right view feature point matching processing method, terminal and system in binocular ranging
CN111882618B (en) * 2020-06-28 2024-01-26 北京石油化工学院 Left-right view characteristic point matching processing method, terminal and system in binocular ranging
CN112215871A (en) * 2020-09-29 2021-01-12 武汉联影智融医疗科技有限公司 Moving target tracking method and device based on robot vision
CN112215871B (en) * 2020-09-29 2023-04-21 武汉联影智融医疗科技有限公司 Moving target tracking method and device based on robot vision
CN113689555A (en) * 2021-09-09 2021-11-23 武汉惟景三维科技有限公司 Binocular image feature matching method and system
CN113689555B (en) * 2021-09-09 2023-08-22 武汉惟景三维科技有限公司 Binocular image feature matching method and system
CN117523431A (en) * 2023-11-17 2024-02-06 中国科学技术大学 Firework detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN105894574B (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN105894574A (en) Binocular three-dimensional reconstruction method
CN108416791B (en) Binocular vision-based parallel mechanism moving platform pose monitoring and tracking method
CN106204544B (en) It is a kind of to automatically extract the method and system of mark point position and profile in image
CN105528785B (en) A kind of binocular vision image solid matching method
CN105225482B (en) Vehicle detecting system and method based on binocular stereo vision
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
Corona et al. Digital stereo image analyzer for generating automated 3-D measures of optic disc deformation in glaucoma
CN106447708A (en) OCT eye fundus image data registration method
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN111784778B (en) Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN102567989A (en) Space positioning method based on binocular stereo vision
CN103913131A (en) Free curve method vector measurement method based on binocular vision
CN106709950A (en) Binocular-vision-based cross-obstacle lead positioning method of line patrol robot
CN106548492A (en) Determine method and device, the image acquiring method of matching double points
CN104408772A (en) Grid projection-based three-dimensional reconstructing method for free-form surface
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
CN105913013A (en) Binocular vision face recognition algorithm
CN103308000B (en) Based on the curve object measuring method of binocular vision
CN104881866A (en) Fisheye camera rectification and calibration method for expanding pin-hole imaging model
CN110910456B (en) Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching
US20170358077A1 (en) Method and apparatus for aligning a two-dimensional image with a predefined axis
Kolar et al. Registration of 3D retinal optical coherence tomography data and 2D fundus images
CN106534833A (en) Space and time axis joint double-viewpoint three dimensional video stabilizing method
Cao et al. An efficient lens structures segmentation method on as-oct images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 518000 Guangdong city in Shenzhen Province, Nanshan District City Xili street Shenzhen University Tsinghua Campus A building two floor

Patentee after: Shenzhen International Graduate School of Tsinghua University

Address before: 518055 Guangdong city of Shenzhen province Nanshan District Xili of Tsinghua

Patentee before: GRADUATE SCHOOL AT SHENZHEN, TSINGHUA University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200702

Address after: Room 2101-2108, 21 / F, Kerong Chuangye building, No. 666, Zhongkai Avenue (Huihuan section), Zhongkai high tech Zone, Huizhou City, Guangdong Province

Patentee after: Huizhou Frant Photoelectric Technology Co.,Ltd.

Address before: 518000 Guangdong city in Shenzhen Province, Nanshan District City Xili street Shenzhen University Tsinghua Campus A building two floor

Patentee before: Shenzhen International Graduate School of Tsinghua University

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180925