CN105187812B - Binocular vision stereo matching method - Google Patents

Binocular vision stereo matching method

Info

Publication number
CN105187812B
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510556591.5A
Other languages
Chinese (zh)
Other versions
CN105187812A (en)
Inventor
刘培志
赵小川
陈晓鹏
孔晓梅
施建昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China North Computer Application Technology Research Institute
Original Assignee
China North Computer Application Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China North Computer Application Technology Research Institute
Priority to CN201510556591.5A
Publication of CN105187812A
Application granted
Publication of CN105187812B
Legal status: Active (granted)

Landscapes

  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a binocular vision stereo matching method, comprising: step 1, the left eye and the right eye fixate on the same target feature point of a spatial object, so that the images of the target feature point in the left eye and in the right eye coincide with the optical centre position of the left eye and the optical centre position of the right eye, respectively; step 2, establishing a left-eye coordinate system, a right-eye coordinate system and a world coordinate system; step 3, obtaining the rotation angles of the left eye and the right eye about three directions; step 4, computing, by triangle geometry, the distance between the target feature point and the optical centre position of the left eye and the distance between the target feature point and the optical centre position of the right eye; step 5, computing the coordinates of the target feature point in the left-eye coordinate system and in the right-eye coordinate system; step 6, obtaining, according to the binocular vision principle, the world coordinates of the target feature point in the world coordinate system. The benefits of the invention are that the method is simple and feasible, its complexity is greatly reduced, and it still meets the required three-dimensional positioning accuracy.

Description

Binocular vision stereo matching method
Technical field
The present invention relates to the field of binocular vision image processing, and in particular to a binocular vision stereo matching method.
Background art
Human vision can not only distinguish features such as colour and contour, but can also perceive the depth of an object from the difference between the images seen by the two eyes. Binocular vision is an important form of machine vision. Based on the parallax principle, it uses two cameras at different positions to capture the same scene and obtains the three-dimensional geometric information of an object by computing the parallax of spatial points between the two images. Fusing the images acquired by the two eyes and observing the differences between them gives a clear sense of depth and establishes the correspondence between features, relating the image points of the same physical point in space across the different images.
According to the binocular vision principle, once the parallax of a spatial point is obtained — that is, once two matching points are determined in the image coordinate systems and their respective image coordinates are known — the depth of the spatial point can be recovered. The key to depth acquisition is therefore to obtain, for each spatial point, a matching pair in the left and right image planes, so stereo matching is the key to depth recovery. Binocular stereo matching is an ill-posed problem: its implementation must take many factors into account, and the feasibility and effectiveness of a scheme are weighed against overall performance indicators such as computational complexity and stability. There are many stereo matching algorithms, mainly region-based matching, feature-based matching, global-constraint algorithms, graph-cut algorithms, and algorithms based on artificial intelligence. Region-based matching aggregates costs over a fixed-size window and is fast, but it performs poorly in low-texture regions and at depth discontinuities. Feature-based matching yields only a sparse disparity field, and obtaining a dense disparity field requires complicated interpolation, so it is usually applied in environments with significant feature information. Global-constraint algorithms can achieve high-accuracy matching by building complicated energy-function models, but they are slow and place heavy demands on computer memory. Graph-cut algorithms produce dense results but easily introduce large matching errors.
It is therefore an urgent problem to propose a binocular vision matching algorithm that overcomes the shortcomings of the existing matching algorithms.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a feasible and effective bionic stereo matching method whose complexity is greatly reduced while still meeting the three-dimensional positioning accuracy required by binocular vision.
The invention provides a binocular vision stereo matching method, characterised by comprising:
Step 1: the left eye and the right eye fixate on the same target feature point P of a spatial object, so that the image of the target feature point P in the left eye and its image in the right eye coincide with the optical centre position l of the left eye and the optical centre position r of the right eye, respectively;
Step 2: establish a left-eye coordinate system, a right-eye coordinate system and a world coordinate system;
Step 3: obtain the rotation angles of the left eye and the right eye about three directions, including the angle θ_1 between the optical axis of the left eye and the X axis, the angle α_1 between the optical axis of the left eye and the Y axis, the angle β_1 between the optical axis of the left eye and the Z axis, the angle θ_2' between the optical axis of the right eye and the X axis, the angle α_2 between the optical axis of the right eye and the Y axis, and the angle β_2 between the optical axis of the right eye and the Z axis;
Step 4: by triangle geometry, compute the distance l_1 between the target feature point P and the optical centre position l of the left eye and the distance l_2 between the target feature point P and the optical centre position r of the right eye:
l_1 = \frac{b\sin\theta_2}{\sin(\theta_1+\theta_2)};
l_2 = \frac{b\sin\theta_1}{\sin(\theta_1+\theta_2)};
where θ_2' + θ_2 = π and b is the distance between the optical centre positions l and r;
Step 5: compute the coordinates (x_l, y_l, z_l) of the target feature point P in the left-eye coordinate system and its coordinates (x_r, y_r, z_r) in the right-eye coordinate system:
(x_l, y_l, z_l) = \left( \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)} \right);
(x_r, y_r, z_r) = \left( \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)} \right);
Step 6: according to the binocular vision principle, obtain the world coordinates (X_W, Y_W, Z_W) of the target feature point P in the world coordinate system, i.e. the depth information of the target feature point P:
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
or, equivalently,
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
where f is the focal length of the left eye and of the right eye.
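For illustration only, the closed-form result of step 6 can be evaluated directly. The sketch below is not part of the patent; it assumes angles in radians, a baseline b and focal length f in consistent units, and θ_1 ≠ θ_2 so that sin(θ_1 − θ_2) ≠ 0. The function and variable names are the editor's own shorthand for the symbols above.

```python
import math

def p_world_closed_form(theta1, theta2, alpha2, b, f):
    """World coordinates (X_W, Y_W, Z_W) of the fixated point P,
    evaluated from the closed-form expression of step 6.
    Angles in radians; theta1 != theta2 is required."""
    s_sum = math.sin(theta1 + theta2)
    s_dif = math.sin(theta1 - theta2)
    x_w = -b * s_sum / (2.0 * s_dif)
    y_w = b * math.sin(theta1) * math.cos(alpha2) / s_dif
    z_w = -f * s_sum / s_dif
    return x_w, y_w, z_w

# Hypothetical example: baseline b = 0.12 m, focal length f = 0.008 m.
print(p_world_closed_form(math.radians(70), math.radians(80),
                          math.radians(62), b=0.12, f=0.008))
```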
As a further improvement of the present invention, in step 2 the left-eye coordinate system is a three-dimensional coordinate system whose origin is the optical centre position l of the left eye, the right-eye coordinate system is a three-dimensional coordinate system whose origin is the optical centre position r of the right eye, and the world coordinate system is a three-dimensional coordinate system whose origin is the midpoint between the optical centre position of the left eye and the optical centre position of the right eye.
As a further improvement of the present invention, step 4 specifically comprises:
Step 401: from the triangle geometry, obtain:
l_1\cos\theta_1 + l_2\cos\theta_2 = b, \qquad l_1\sin\theta_1 = l_2\sin\theta_2;
Step 402: solve to obtain:
l_1 = \frac{b\sin\theta_2}{\sin(\theta_1+\theta_2)};
l_2 = \frac{b\sin\theta_1}{\sin(\theta_1+\theta_2)}.
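The two relations of step 401 determine l_1 and l_2 uniquely; a short derivation (elimination of l_2, equivalent to the law of sines in triangle l–P–r) is sketched below for the reader's convenience.

```latex
\begin{aligned}
l_1\sin\theta_1 = l_2\sin\theta_2 \;&\Rightarrow\; l_2 = l_1\,\frac{\sin\theta_1}{\sin\theta_2},\\
l_1\cos\theta_1 + l_2\cos\theta_2 = b \;&\Rightarrow\;
l_1\,\frac{\sin\theta_2\cos\theta_1+\sin\theta_1\cos\theta_2}{\sin\theta_2}
  = l_1\,\frac{\sin(\theta_1+\theta_2)}{\sin\theta_2} = b,\\
&\Rightarrow\;
l_1 = \frac{b\sin\theta_2}{\sin(\theta_1+\theta_2)},\qquad
l_2 = \frac{b\sin\theta_1}{\sin(\theta_1+\theta_2)}.
\end{aligned}
```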
As a further improvement of the present invention, step 5 specifically comprises:
Step 501: from the triangle geometry, obtain:
x_l = l_1\cos\theta_1; \quad y_l = l_1\cos\alpha_1; \quad z_l = l_1\cos\beta_1;
x_r = l_2\cos\theta_2; \quad y_r = l_2\cos\alpha_2; \quad z_r = l_2\cos\beta_2;
Step 502: substituting l_1 and l_2 computed in step 4 gives:
x_l = \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)}; \quad y_l = \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)}; \quad z_l = \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)};
x_r = \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)}; \quad y_r = \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)}; \quad z_r = \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)};
that is, the coordinates (x_l, y_l, z_l) of the target feature point P in the left-eye coordinate system and its coordinates (x_r, y_r, z_r) in the right-eye coordinate system are:
(x_l, y_l, z_l) = \left( \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)} \right);
(x_r, y_r, z_r) = \left( \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)} \right).
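A minimal sketch of steps 4 and 5 taken together, assuming θ_1, α_1, β_1 (and θ_2, α_2, β_2) are the direction angles of each optical axis in the corresponding eye frame; function and variable names are illustrative only.

```python
import math

def eye_frame_coords(theta1, alpha1, beta1, theta2, alpha2, beta2, b):
    """Coordinates of P in the left-eye and right-eye frames (steps 4-5)."""
    s_sum = math.sin(theta1 + theta2)
    l1 = b * math.sin(theta2) / s_sum            # step 4
    l2 = b * math.sin(theta1) / s_sum
    left = (l1 * math.cos(theta1), l1 * math.cos(alpha1), l1 * math.cos(beta1))
    right = (l2 * math.cos(theta2), l2 * math.cos(alpha2), l2 * math.cos(beta2))
    return left, right
```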
As a further improvement of the present invention, step 6 specifically comprises:
Step 601: assume that the image of the target feature point P in the left eye and its image in the right eye lie in the same plane; then the y coordinate of the target feature point P in the left-eye coordinate system equals its y coordinate in the right-eye coordinate system, i.e.:
y_l = y_r;
Step 602: from the triangle geometry, obtain:
\frac{x_l}{f} = \frac{X_W + b/2}{Z_W}; \qquad \frac{x_r}{f} = \frac{X_W - b/2}{Z_W}; \qquad \frac{y_l}{f} = \frac{y_r}{f} = -\frac{Y_W}{Z_W};
Step 603: from the first two relations of step 602:
Z_W = \frac{bf}{x_l - x_r};
and further:
X_W = \frac{b(x_l + x_r)}{2(x_l - x_r)};
and then:
Y_W = -\frac{y_l}{f} Z_W = -\frac{y_r}{f} Z_W;
Step 604: denote the parallax D = x_l - x_r; from the coordinates (x_l, y_l, z_l) and (x_r, y_r, z_r) computed in step 5, obtain:
D = -\frac{b\sin(\theta_1-\theta_2)}{\sin(\theta_1+\theta_2)};
Step 605: further compute:
X_W = -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)};
Y_W = \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)} = \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)};
Z_W = -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)};
The depth information of the target feature point P is therefore:
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
or, equivalently,
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right).
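For completeness, the parallax of step 604 follows from the expressions of step 502 and the sine subtraction identity, and substituting it into the relation of step 603 reproduces Z_W; a brief check:

```latex
\begin{aligned}
D = x_l - x_r
  &= \frac{b\,(\sin\theta_2\cos\theta_1 - \sin\theta_1\cos\theta_2)}{\sin(\theta_1+\theta_2)}
   = -\,\frac{b\sin(\theta_1-\theta_2)}{\sin(\theta_1+\theta_2)},\\[2pt]
Z_W &= \frac{bf}{x_l - x_r}
     = -\,\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)}.
\end{aligned}
```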
The invention has the following beneficial effects:
1. By introducing the binocular rotation-angle information about three directions, the depth information of the target feature point can be obtained by triangle geometry;
2. The method is simple and feasible, its complexity is greatly reduced, and it still meets the required three-dimensional positioning accuracy.
Brief description of the drawings
Fig. 1 is a flow diagram of the binocular vision stereo matching method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the principle by which the method of Fig. 1 solves for the depth information of the target feature point.
Detailed description of the invention
The present invention is described in further detail below through specific embodiments and with reference to the accompanying drawings.
As shown in Fig. 1, a binocular vision stereo matching method according to an embodiment of the present invention comprises:
Step 1: the left eye and the right eye fixate on the same target feature point P of a spatial object, so that the image of the target feature point P in the left eye and its image in the right eye coincide with the optical centre position l of the left eye and the optical centre position r of the right eye, respectively.
Step 2: establish a three-dimensional coordinate system with the optical centre position l of the left eye as the origin (the left-eye coordinate system), a three-dimensional coordinate system with the optical centre position r of the right eye as the origin (the right-eye coordinate system), and a three-dimensional coordinate system with the midpoint between the optical centre position of the left eye and the optical centre position of the right eye as the origin (the world coordinate system).
Step 3: a first camera simulates the left eye and a second camera simulates the right eye. A first, a second and a third servomotor are mounted along the three axes (X, Y, Z) of the first camera, and a fourth, a fifth and a sixth servomotor are mounted along the three axes (X, Y, Z) of the second camera. The first, second and third servomotors drive the first camera to move up and down, to rotate about its axis, and to move left and right, respectively; the fourth, fifth and sixth servomotors drive the second camera to move up and down, to rotate about its axis, and to move left and right, respectively.
From the position information fed back by the first, second and third servomotors, the rotation angles of the left eye about the three directions are obtained, including the angle θ_1 between the optical axis of the left eye and the X axis, the angle α_1 between the optical axis of the left eye and the Y axis, and the angle β_1 between the optical axis of the left eye and the Z axis.
From the position information fed back by the fourth, fifth and sixth servomotors, the rotation angles of the right eye about the three directions are obtained, including the angle θ_2' between the optical axis of the right eye and the X axis, the angle α_2 between the optical axis of the right eye and the Y axis, and the angle β_2 between the optical axis of the right eye and the Z axis.
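Purely as an illustration, the six feedback readings could be collected into a single structure before triangulation. The servo read-out interface shown here (servos.angle(i)) is a hypothetical placeholder, not an API described in the patent.

```python
from dataclasses import dataclass

@dataclass
class GazeAngles:
    """Rotation angles of the two optical axes, as fed back by the six servomotors."""
    theta1: float   # left optical axis vs. X
    alpha1: float   # left optical axis vs. Y
    beta1: float    # left optical axis vs. Z
    theta2p: float  # right optical axis vs. X (theta2' in the text; theta2 = pi - theta2p)
    alpha2: float   # right optical axis vs. Y
    beta2: float    # right optical axis vs. Z

def read_gaze(servos) -> GazeAngles:
    """servos: any object exposing a hypothetical angle(i) readout for motors 1-6."""
    return GazeAngles(*(servos.angle(i) for i in range(1, 7)))
```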
Step 4, as shown in Fig. 2, specifically comprises:
Step 401: from the triangle geometry, obtain:
l_1\cos\theta_1 + l_2\cos\theta_2 = b, \qquad l_1\sin\theta_1 = l_2\sin\theta_2;
Step 402: compute the distance l_1 between the target feature point P and the optical centre position l of the left eye and the distance l_2 between the target feature point P and the optical centre position r of the right eye:
l_1 = \frac{b\sin\theta_2}{\sin(\theta_1+\theta_2)};
l_2 = \frac{b\sin\theta_1}{\sin(\theta_1+\theta_2)};
where θ_2' + θ_2 = π and b is the distance between the optical centre positions l and r.
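A quick numerical check of step 402 with hypothetical values (b = 0.1 m, θ_1 = 70°, θ_2 = 80°):

```python
import math

b, theta1, theta2 = 0.1, math.radians(70), math.radians(80)
s_sum = math.sin(theta1 + theta2)     # sin(150 deg) = 0.5
l1 = b * math.sin(theta2) / s_sum     # ~0.197 m
l2 = b * math.sin(theta1) / s_sum     # ~0.188 m
print(l1, l2)
```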
Step 5, as shown in Fig. 2, specifically comprises:
Step 501: from the triangle geometry, obtain:
x_l = l_1\cos\theta_1; \quad y_l = l_1\cos\alpha_1; \quad z_l = l_1\cos\beta_1;
x_r = l_2\cos\theta_2; \quad y_r = l_2\cos\alpha_2; \quad z_r = l_2\cos\beta_2;
Step 502: substituting l_1 and l_2 computed in step 4 gives:
x_l = \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)}; \quad y_l = \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)}; \quad z_l = \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)};
x_r = \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)}; \quad y_r = \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)}; \quad z_r = \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)};
that is, the coordinates (x_l, y_l, z_l) of the target feature point P in the left-eye coordinate system and its coordinates (x_r, y_r, z_r) in the right-eye coordinate system are:
(x_l, y_l, z_l) = \left( \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)} \right);
(x_r, y_r, z_r) = \left( \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)} \right).
Step 6, as shown in Fig. 2, specifically comprises:
Step 601: assume that the image of the target feature point P in the left eye and its image in the right eye lie in approximately the same plane; then the y coordinate of the target feature point P in the left-eye coordinate system equals its y coordinate in the right-eye coordinate system, i.e.:
y_l = y_r;
Step 602: from the triangle geometry, obtain:
\frac{x_l}{f} = \frac{X_W + b/2}{Z_W}; \qquad \frac{x_r}{f} = \frac{X_W - b/2}{Z_W}; \qquad \frac{y_l}{f} = \frac{y_r}{f} = -\frac{Y_W}{Z_W};
where f is the focal length of the first camera simulating the left eye and of the second camera simulating the right eye;
Step 603: from the first two relations of step 602:
Z_W = \frac{bf}{x_l - x_r};
and further:
X_W = \frac{b(x_l + x_r)}{2(x_l - x_r)};
and then:
Y_W = -\frac{y_l}{f} Z_W = -\frac{y_r}{f} Z_W;
Step 604: denote the parallax D = x_l - x_r; from the coordinates (x_l, y_l, z_l) and (x_r, y_r, z_r) computed in step 5, obtain:
D = -\frac{b\sin(\theta_1-\theta_2)}{\sin(\theta_1+\theta_2)};
Step 605: further compute:
X_W = -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)};
Y_W = \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)} = \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)};
Z_W = -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)};
The depth information of the target feature point P is therefore:
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
or, equivalently,
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right).
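The sketch below carries out steps 4–6 explicitly rather than using the closed form; it is illustrative only, the variable names follow the symbols above, and the sample angles are hypothetical. X_W and Z_W always agree with the closed form of step 605, and Y_W agrees whenever the gaze satisfies the step 601 assumption y_l = y_r.

```python
import math

def p_world_stepwise(theta1, theta2, alpha1, b, f):
    """Steps 4-6 carried out explicitly: eye-frame coordinates, parallax,
    then (X_W, Y_W, Z_W) from the relations of steps 602-603."""
    s_sum = math.sin(theta1 + theta2)
    l1 = b * math.sin(theta2) / s_sum                          # step 4
    l2 = b * math.sin(theta1) / s_sum
    x_l, y_l = l1 * math.cos(theta1), l1 * math.cos(alpha1)    # step 5 (x and y only)
    x_r = l2 * math.cos(theta2)
    d = x_l - x_r                                              # parallax, step 604
    z_w = b * f / d                                            # step 603
    x_w = b * (x_l + x_r) / (2.0 * d)
    y_w = -y_l * z_w / f                                       # equals -y_r*z_w/f when y_l == y_r
    return x_w, y_w, z_w

# Hypothetical gaze: theta1 = 70 deg, theta2 = 80 deg, alpha1 = 60 deg.
print(p_world_stepwise(math.radians(70), math.radians(80), math.radians(60),
                       b=0.12, f=0.008))
```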
In the binocular vision matching method of the present invention, visual servoing makes the two eyes gaze at the same point of the visual target, i.e. the feature point coincides with the optical centre position of the image in each eye; the position information fed back by the servomotors is then used to derive, by triangle geometry, the depth information of the target feature point relative to the two eyes.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (5)

1. A binocular vision stereo matching method, characterised by comprising:
Step 1: the left eye and the right eye fixate on the same target feature point P of a spatial object, so that the image of the target feature point P in the left eye and its image in the right eye coincide with the optical centre position l of the left eye and the optical centre position r of the right eye, respectively;
Step 2: establish a left-eye coordinate system, a right-eye coordinate system and a world coordinate system;
Step 3: obtain the rotation angles of the left eye and the right eye about three directions, including the angle θ_1 between the optical axis of the left eye and the X axis, the angle α_1 between the optical axis of the left eye and the Y axis, the angle β_1 between the optical axis of the left eye and the Z axis, the angle θ_2' between the optical axis of the right eye and the X axis, the angle α_2 between the optical axis of the right eye and the Y axis, and the angle β_2 between the optical axis of the right eye and the Z axis;
Step 4: by triangle geometry, compute the distance l_1 between the target feature point P and the optical centre position l of the left eye and the distance l_2 between the target feature point P and the optical centre position r of the right eye:
l_1 = \frac{b\sin\theta_2}{\sin(\theta_1+\theta_2)};
l_2 = \frac{b\sin\theta_1}{\sin(\theta_1+\theta_2)};
where θ_2' + θ_2 = π and b is the distance between the optical centre positions l and r;
Step 5: compute the coordinates (x_l, y_l, z_l) of the target feature point P in the left-eye coordinate system and its coordinates (x_r, y_r, z_r) in the right-eye coordinate system:
(x_l, y_l, z_l) = \left( \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)} \right);
(x_r, y_r, z_r) = \left( \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)} \right);
Step 6: according to the binocular vision principle, obtain the world coordinates (X_W, Y_W, Z_W) of the target feature point P in the world coordinate system, i.e. the depth information of the target feature point P:
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
or, equivalently,
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
where f is the focal length of the left eye and of the right eye.
2. The binocular vision stereo matching method according to claim 1, characterised in that in step 2 the left-eye coordinate system is a three-dimensional coordinate system whose origin is the optical centre position l of the left eye, the right-eye coordinate system is a three-dimensional coordinate system whose origin is the optical centre position r of the right eye, and the world coordinate system is a three-dimensional coordinate system whose origin is the midpoint between the optical centre position of the left eye and the optical centre position of the right eye.
3. The binocular vision stereo matching method according to claim 1, characterised in that step 4 specifically comprises:
Step 401: from the triangle geometry, obtain:
l_1\cos\theta_1 + l_2\cos\theta_2 = b, \qquad l_1\sin\theta_1 = l_2\sin\theta_2;
Step 402: solve to obtain:
l_1 = \frac{b\sin\theta_2}{\sin(\theta_1+\theta_2)};
l_2 = \frac{b\sin\theta_1}{\sin(\theta_1+\theta_2)}.
4. The binocular vision stereo matching method according to claim 1, characterised in that step 5 specifically comprises:
Step 501: from the triangle geometry, obtain:
x_l = l_1\cos\theta_1; \quad y_l = l_1\cos\alpha_1; \quad z_l = l_1\cos\beta_1;
x_r = l_2\cos\theta_2; \quad y_r = l_2\cos\alpha_2; \quad z_r = l_2\cos\beta_2;
Step 502: substituting l_1 and l_2 computed in step 4 gives:
x_l = \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)}; \quad y_l = \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)}; \quad z_l = \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)};
x_r = \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)}; \quad y_r = \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)}; \quad z_r = \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)};
that is, the coordinates (x_l, y_l, z_l) of the target feature point P in the left-eye coordinate system and its coordinates (x_r, y_r, z_r) in the right-eye coordinate system are:
(x_l, y_l, z_l) = \left( \frac{b\sin\theta_2\cos\theta_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_2\cos\beta_1}{\sin(\theta_1+\theta_2)} \right);
(x_r, y_r, z_r) = \left( \frac{b\sin\theta_1\cos\theta_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1+\theta_2)},\ \frac{b\sin\theta_1\cos\beta_2}{\sin(\theta_1+\theta_2)} \right).
5. The binocular vision stereo matching method according to claim 1, characterised in that step 6 specifically comprises:
Step 601: assume that the image of the target feature point P in the left eye and its image in the right eye lie in the same plane; then the y coordinate of the target feature point P in the left-eye coordinate system equals its y coordinate in the right-eye coordinate system, i.e.:
y_l = y_r;
Step 602: from the triangle geometry, obtain:
\frac{x_l}{f} = \frac{X_W + b/2}{Z_W}; \qquad \frac{x_r}{f} = \frac{X_W - b/2}{Z_W}; \qquad \frac{y_l}{f} = \frac{y_r}{f} = -\frac{Y_W}{Z_W};
Step 603: from the first two relations of step 602:
Z_W = \frac{bf}{x_l - x_r};
and further:
X_W = \frac{b(x_l + x_r)}{2(x_l - x_r)};
and then:
Y_W = -\frac{y_l}{f} Z_W = -\frac{y_r}{f} Z_W;
Step 604: denote the parallax D = x_l - x_r; from the coordinates (x_l, y_l, z_l) and (x_r, y_r, z_r) computed in step 5, obtain:
D = -\frac{b\sin(\theta_1-\theta_2)}{\sin(\theta_1+\theta_2)};
Step 605: further compute:
X_W = -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)};
Y_W = \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)} = \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)};
Z_W = -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)};
The depth information of the target feature point P is therefore:
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_2\cos\alpha_1}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right),
or, equivalently,
(X_W, Y_W, Z_W) = \left( -\frac{b\sin(\theta_1+\theta_2)}{2\sin(\theta_1-\theta_2)},\ \frac{b\sin\theta_1\cos\alpha_2}{\sin(\theta_1-\theta_2)},\ -\frac{f\sin(\theta_1+\theta_2)}{\sin(\theta_1-\theta_2)} \right).
CN201510556591.5A 2015-09-02 2015-09-02 Binocular vision stereo matching method Active CN105187812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510556591.5A CN105187812B (en) 2015-09-02 2015-09-02 Binocular vision stereo matching method

Publications (2)

Publication Number Publication Date
CN105187812A CN105187812A (en) 2015-12-23
CN105187812B 2016-11-30

Family

ID=54909634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510556591.5A Active CN105187812B (en) Binocular vision stereo matching method

Country Status (1)

Country Link
CN (1) CN105187812B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106990668B (en) * 2016-06-27 2019-10-11 深圳市圆周率软件科技有限责任公司 A kind of imaging method, the apparatus and system of full-view stereo image
CN106504188B (en) * 2016-11-23 2018-10-23 北京清影机器视觉技术有限公司 Generation method and device for the eye-observation image that stereoscopic vision is presented
CN109341530B (en) * 2018-10-25 2020-01-21 华中科技大学 Object point positioning method and system in binocular stereo vision
WO2020177060A1 (en) * 2019-03-04 2020-09-10 北京大学深圳研究生院 Binocular visual stereoscopic matching method based on extreme value checking and weighted voting

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101581569A (en) * 2009-06-17 2009-11-18 北京信息科技大学 Calibrating method of structural parameters of binocular visual sensing system
CN104428624A (en) * 2012-06-29 2015-03-18 富士胶片株式会社 Three-dimensional measurement method, apparatus, and system, and image processing device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Research on binocular vision three-dimensional measurement technology; 陈济棠; China Master's Theses Full-text Database, Information Science & Technology; 2011-10-15; I138-528 *
Research on three-dimensional information acquisition technology based on binocular stereo vision; 方恒; China Master's Theses Full-text Database, Information Science & Technology; 2008-12-15; I138-189 *
Research on an object depth information extraction system based on binocular stereo vision; 刘维; China Master's Theses Full-text Database, Information Science & Technology; 2010-04-15; I138-557 *
Extraction of object depth information based on binocular vision technology; 王平, 韩炎, et al.; Science Technology and Engineering; 2014-01-18; vol. 14, no. 2, pp. 56-61 *
A new binocular vision ranging method based on corner detection; 仁继昌, 杨晓东; Electronics Optics & Control; 2013-06-21; vol. 20, no. 7, pp. 93-95 *
Stereo vision detection using epipolar constraint and circular window matching; 李鹤喜 et al.; Computer Engineering and Applications; 2009-03-21; vol. 45, no. 9, pp. 216-219 *

Also Published As

Publication number Publication date
CN105187812A (en) 2015-12-23

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant