CN104408719A - Three-collinear-feature-point monocular vision space positioning method


Info

Publication number
CN104408719A
Authority
CN
China
Legal status
Granted
Application number
CN201410682742.7A
Other languages
Chinese (zh)
Other versions
CN104408719B (en
Inventor
王丽君
张锦赓
刘中海
Current Assignee
Luoyang Institute of Electro Optical Equipment AVIC
Original Assignee
Luoyang Institute of Electro Optical Equipment AVIC
Application filed by Luoyang Institute of Electro Optical Equipment AVIC
Priority to CN201410682742.7A
Publication of CN104408719A
Application granted
Publication of CN104408719B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a three-collinear-feature-point monocular vision spatial positioning method, and belongs to the field of machine vision. The camera system is assumed to operate under the central perspective model. Three collinear points are selected as feature points, one of which is the midpoint of the other two. A relational expression between the feature points' space coordinates and their projection coordinates is established from the perspective projection relation, and the camera images of the feature points supply the projection coordinates. Using the positional relationship among the feature points, the known segment length and the projection coordinates, the relational expression is solved to obtain the three-dimensional coordinates of the feature points, thereby positioning them. The positioning method is implemented with purely algebraic operations, avoids iterative computation, and improves both the precision and the efficiency of spatial positioning.

Description

Monocular vision spatial positioning method based on three collinear feature points
Technical field
The present invention relates to a monocular vision spatial positioning method based on three collinear feature points, and belongs to the technical field of machine vision.
Background technology
Monocular vision positioning completes the positioning task with only one camera. It is simple and widely applicable: it avoids the problems of optimal camera spacing and feature-point matching that arise in stereo vision, and it does not produce the large distortion of omnidirectional vision sensors.
In machine vision research, solving for position and attitude under monocular vision conditions has become an important research direction, and monocular vision positioning technology is applied in many areas, such as camera calibration, robot localization, visual servoing, target tracking and surveillance.
Researchers in many countries have proposed a variety of algorithms for the monocular vision positioning problem. The feature points involved may be coplanar or non-coplanar, and the number of required feature points also varies; most of these algorithms rely on iterative approximation, so their computational efficiency is low.
Summary of the invention
The object of the invention is a three-collinear-feature-point monocular vision spatial positioning method that solves the low computational efficiency and limited accuracy caused by the iterative approximation algorithms used in existing monocular vision spatial positioning methods.
The technical scheme of the present invention is a three-collinear-feature-point monocular vision spatial positioning method, characterized in that the positioning method comprises the following steps:
1) select three collinear points as feature points, one of which is the midpoint of the other two;
2) determine the relational expression between the space coordinates of the three feature points and their projection coordinates according to the perspective projection relation;
3) use the camera images of the feature points as the projection coordinates and, from the positional relationship among the three feature points, the known length and the projection coordinates, solve the relational expression determined in step 2) to obtain the three-dimensional space coordinates of the feature points, thereby positioning them.
The relational expression determined in step 2) is:

z_a = f·x_a / xu_a
z_b = f·x_b / xu_b
z = f·x / xu
z_a = f·y_a / yu_a
z_b = f·y_b / yu_b
z = f·y / yu

where (x_a, y_a, z_a), (x_b, y_b, z_b) and (x, y, z) are the space coordinates of the three feature points A, B and C respectively, C being the midpoint of AB; (xu_a, yu_a), (xu_b, yu_b) and (xu, yu) are the projection coordinates of A, B and C on the focal plane; and f is the focal length used when the projection coordinates are captured.
In step 3), the camera used for positioning works under the perspective projection model.
The solving process of step 3) is as follows:
A) Using the fact that one of the points is the midpoint, the relational expression determined in step 2) is simplified to:

z + Δz - a1·x - a1·Δx = 0
z - Δz - a2·x + a2·Δx = 0
z - a3·x = 0
z + Δz - b1·y - b1·Δy = 0
z - Δz - b2·y + b2·Δy = 0
z - b3·y = 0

where a1 = f/xu_a, a2 = f/xu_b, a3 = f/xu, b1 = f/yu_a, b2 = f/yu_b, b3 = f/yu, Δx = x_a - x = x - x_b, Δy = y_a - y = y - y_b, Δz = z_a - z = z - z_b; f is the focal length used by the camera; (xu_a, yu_a), (xu_b, yu_b) and (xu, yu) are the feature-point projections captured by the camera in real time, so a1, a2, a3, b1, b2, b3 can all be computed in real time;
B) Further derivation from the simplified result gives:

Δx = z/c1
Δy = z/c2
Δz = z/c3

where c1 = (a1 - a2)·a3 / (2·a3 - a1 - a2), c2 = (b1 - b2)·b3 / (2·b3 - b1 - b2), and c3 = (a1 - a2)·a3 / (a1·a3 + a2·a3 - 2·a1·a2);
C) Substituting the known length between two of the feature points into the relations derived in step B) and the expression determined in step 2), the space coordinate (x, y, z) of feature point C is obtained as:

x = xu·z / f
y = yu·z / f
z = L / sqrt(1/c1² + 1/c2² + 1/c3²)

where L is the length of AC;
D) From the collinearity of the three feature points, the midpoint property, and the coordinate of point C computed in step C), the space coordinates of the other two feature points are solved, completing the spatial positioning of the three collinear feature points.
The beneficial effects of the invention are as follows: the camera system is assumed to work under the perspective model; three collinear points, one of which is the midpoint, are chosen as feature points; the relational expression between the feature points' space coordinates and projection coordinates is determined from the perspective projection relation; the camera images of the feature points supply the projection coordinates; and the relational expression is solved using the positional relationship among the feature points, the known length and the projection coordinates, yielding the three-dimensional coordinates of the feature points and thus their positions. The positioning method is completed with algebraic operations alone, avoids iterative computation, and improves both the precision and the efficiency of spatial positioning.
Description of the drawings
Fig. 1 is a schematic diagram of the coordinate systems of the perspective projection model;
Fig. 2 is a schematic diagram of the spatial relationship of the feature points.
Embodiment
The specific embodiments of the present invention are further described below with reference to the accompanying drawings.
The object of the invention is to provide a monocular vision spatial positioning method based on three collinear feature points, so as to solve the spatial positions of the feature points quickly and accurately. The invention assumes that the camera system used for positioning works under the perspective projection model.
1. Choose three collinear points as feature points, denoted A(x_a, y_a, z_a), B(x_b, y_b, z_b) and C(x, y, z); A and B are the two endpoints, C is the midpoint of segment AB, and the length of AC is L.
2. Determine the relation between the feature points' space coordinates and their projection coordinates from the perspective projection relation.
As shown in Fig. 1, with camera coordinates [Xc, Yc, Zc] and image coordinates [xu, yu], the perspective projection equations give:

xu = f·x_c / z_c
yu = f·y_c / z_c

That is, the x coordinate of a projected point depends only on the imaging focal length and on the x and z coordinates of the feature point, and likewise the y coordinate of the projected point depends only on the focal length and on the y and z coordinates of the feature point.
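The projection relation above can be written as a small function. This is an illustrative sketch under the pinhole assumption; the function name and the numeric values are ours, not from the patent:

```python
def project(point, f):
    """Central-perspective projection: the image coordinates (xu, yu)
    depend only on the focal length f and the ratios x/z and y/z."""
    x, y, z = point
    return (f * x / z, f * y / z)

# example: a point 5 units in front of the camera, f = 1000 (pixel units)
xu, yu = project((0.2, 0.1, 5.0), 1000.0)   # xu ≈ 40.0, yu ≈ 20.0
```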
The three feature-point projections captured by the camera in real time are A'(xu_a, yu_a), B'(xu_b, yu_b) and C'(xu, yu), with focal length f used during capture. The relation between the feature points' space coordinates and their projection coordinates, determined from the perspective projection relation, is:
z_a = f·x_a / xu_a    (1-1)
z_b = f·x_b / xu_b    (1-2)
z = f·x / xu    (1-3)
z_a = f·y_a / yu_a    (1-4)
z_b = f·y_b / yu_b    (1-5)
z = f·y / yu    (1-6)
3. From the positional relationship among the three feature points (C is the midpoint of AB):

x_a = x + Δx    (2-1)
x_b = x - Δx    (2-2)
y_a = y + Δy    (2-3)
y_b = y - Δy    (2-4)
z_a = z + Δz    (2-5)
z_b = z - Δz    (2-6)
4. Using these spatial relations among the three feature points, the relational expression between the feature points' space coordinates and projection coordinates becomes:

z + Δz = f·(x + Δx) / xu_a    (3-1)
z - Δz = f·(x - Δx) / xu_b    (3-2)
z = f·x / xu    (3-3)
z + Δz = f·(y + Δy) / yu_a    (3-4)
z - Δz = f·(y - Δy) / yu_b    (3-5)
z = f·y / yu    (3-6)
5. These formulas simplify further to:

z + Δz - a1·x - a1·Δx = 0    (4-1)
z - Δz - a2·x + a2·Δx = 0    (4-2)
z - a3·x = 0    (4-3)
z + Δz - b1·y - b1·Δy = 0    (4-4)
z - Δz - b2·y + b2·Δy = 0    (4-5)
z - b3·y = 0    (4-6)

where:

a1 = f/xu_a
a2 = f/xu_b
a3 = f/xu
b1 = f/yu_a
b2 = f/yu_b
b3 = f/yu
f is the known focal length and (xu_a, yu_a), (xu_b, yu_b), (xu, yu) are the feature-point projections captured by the camera in real time; therefore a1, a2, a3, b1, b2, b3 can also be computed in real time (they are effectively known quantities).
6. Further derivation from the formulas of step 5 yields:

Δx = z/c1    (5-1)
Δy = z/c2    (5-2)
Δz = z/c3    (5-3)

where:

c1 = (a1 - a2)·a3 / (2·a3 - a1 - a2)
c2 = (b1 - b2)·b3 / (2·b3 - b1 - b2)
c3 = (a1 - a2)·a3 / (a1·a3 + a2·a3 - 2·a1·a2)
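This elimination can be checked numerically: for any collinear configuration with C as the midpoint, the coefficients c1, c2, c3 computed purely from the projections must reproduce the half-offsets via Δx = z/c1, Δy = z/c2 and Δz = z/c3. The sketch below is not part of the patent; the configuration values are arbitrary illustrative choices:

```python
f = 1000.0
x, y, z = 0.2, 0.1, 5.0        # midpoint C (arbitrary test values)
dx, dy, dz = 0.3, 0.2, 0.4     # half-offsets: A = C + d, B = C - d

# perspective projections xu = f*x/z of A, B and C
xua, yua = f * (x + dx) / (z + dz), f * (y + dy) / (z + dz)
xub, yub = f * (x - dx) / (z - dz), f * (y - dy) / (z - dz)
xu, yu = f * x / z, f * y / z

a1, a2, a3 = f / xua, f / xub, f / xu
b1, b2, b3 = f / yua, f / yub, f / yu
c1 = (a1 - a2) * a3 / (2 * a3 - a1 - a2)
c2 = (b1 - b2) * b3 / (2 * b3 - b1 - b2)
c3 = (a1 - a2) * a3 / (a1 * a3 + a2 * a3 - 2 * a1 * a2)

# the derived relations 5-1 .. 5-3 hold up to floating-point rounding
assert abs(z / c1 - dx) < 1e-9
assert abs(z / c2 - dy) < 1e-9
assert abs(z / c3 - dz) < 1e-9
```

Note that the divisions assume none of the projection coordinates is zero, i.e. no feature point lies on an image axis.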
7. Because depth information is lost in the projection process, the spatial coordinates of the feature points cannot be obtained from the formulas above alone, so auxiliary information is needed.
From prior calibration of the feature points, the length of AC is known to be L, so:

L = sqrt(Δx² + Δy² + Δz²)    (5-4)
Substituting formulas 5-1, 5-2 and 5-3 into 5-4 gives:

z = L / sqrt(1/c1² + 1/c2² + 1/c3²)    (5-5)

Substituting the z solved from 5-5 into formulas 1-3 and 1-6 gives x and y:

x = xu·z / f    (5-6)
y = yu·z / f    (5-7)
This yields the space coordinate (x, y, z) of the midpoint C.
Substituting the z solved from 5-5 into formulas 5-1, 5-2 and 5-3 gives Δx, Δy and Δz, and substituting these values into formulas 2-1 to 2-6 gives the space coordinates A(x_a, y_a, z_a) and B(x_b, y_b, z_b) of the two endpoints.
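The whole procedure of steps 1 to 7 condenses into a single closed-form routine. The following is an illustrative sketch, not the patent's reference implementation; the function name and test values are ours. It assumes no projection coordinate is zero (no feature point on an image axis) and that the endpoint projections are not symmetric enough to make the c denominators vanish:

```python
import math

def locate_three_collinear(pa, pb, pc, L, f):
    """Recover camera-frame coordinates of collinear feature points A, B, C
    (C the midpoint of AB) from their image projections pa = (xu_a, yu_a),
    pb = (xu_b, yu_b), pc = (xu, yu) and the known length L = |AC|.
    Closed-form algebra, no iteration."""
    (xua, yua), (xub, yub), (xu, yu) = pa, pb, pc
    a1, a2, a3 = f / xua, f / xub, f / xu
    b1, b2, b3 = f / yua, f / yub, f / yu
    # coefficients of formulas 5-1 .. 5-3
    c1 = (a1 - a2) * a3 / (2 * a3 - a1 - a2)
    c2 = (b1 - b2) * b3 / (2 * b3 - b1 - b2)
    c3 = (a1 - a2) * a3 / (a1 * a3 + a2 * a3 - 2 * a1 * a2)
    # depth of the midpoint from the known length (formula 5-5)
    z = L / math.sqrt(1 / c1**2 + 1 / c2**2 + 1 / c3**2)
    x, y = xu * z / f, yu * z / f              # formulas 5-6, 5-7
    dx, dy, dz = z / c1, z / c2, z / c3        # formulas 5-1 .. 5-3
    A = (x + dx, y + dy, z + dz)
    B = (x - dx, y - dy, z - dz)
    C = (x, y, z)
    return A, B, C

# self-consistency check: project a known configuration, then recover it
f = 1000.0
A0, C0 = (0.5, 0.3, 5.4), (0.2, 0.1, 5.0)
B0 = tuple(2 * c - a for c, a in zip(C0, A0))  # C0 is the midpoint of A0-B0
proj = lambda p: (f * p[0] / p[2], f * p[1] / p[2])
A, B, C = locate_three_collinear(proj(A0), proj(B0), proj(C0),
                                 math.dist(A0, C0), f)
```

Running the check, the recovered A, B and C agree with A0, B0 and C0 to floating-point precision, confirming that the method needs only algebra once the segment length L is calibrated.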

Claims (4)

1. A three-collinear-feature-point monocular vision spatial positioning method, characterized in that the positioning method comprises the following steps:
1) select three collinear points as feature points, one of which is the midpoint of the other two;
2) determine the relational expression between the space coordinates of the three feature points and their projection coordinates according to the perspective projection relation;
3) use the camera images of the feature points as the projection coordinates and, from the positional relationship among the three feature points, the known length and the projection coordinates, solve the relational expression determined in step 2) to obtain the three-dimensional space coordinates of the feature points, thereby positioning them.
2. The three-collinear-feature-point monocular vision spatial positioning method according to claim 1, characterized in that the relational expression determined in step 2) is:

z_a = f·x_a / xu_a
z_b = f·x_b / xu_b
z = f·x / xu
z_a = f·y_a / yu_a
z_b = f·y_b / yu_b
z = f·y / yu

where (x_a, y_a, z_a), (x_b, y_b, z_b) and (x, y, z) are the space coordinates of the three feature points A, B and C respectively, C being the midpoint of AB; (xu_a, yu_a), (xu_b, yu_b) and (xu, yu) are the projection coordinates of A, B and C on the focal plane; and f is the focal length used when the projection coordinates are captured.
3. The three-collinear-feature-point monocular vision spatial positioning method according to claim 1, characterized in that in step 3) the camera used for positioning works under the perspective projection model.
4. The three-collinear-feature-point monocular vision spatial positioning method according to claim 2, characterized in that the solving process of step 3) is as follows:
A) Using the fact that one of the points is the midpoint, the relational expression determined in step 2) is simplified to:

z + Δz - a1·x - a1·Δx = 0
z - Δz - a2·x + a2·Δx = 0
z - a3·x = 0
z + Δz - b1·y - b1·Δy = 0
z - Δz - b2·y + b2·Δy = 0
z - b3·y = 0

where a1 = f/xu_a, a2 = f/xu_b, a3 = f/xu, b1 = f/yu_a, b2 = f/yu_b, b3 = f/yu, Δx = x_a - x = x - x_b, Δy = y_a - y = y - y_b, Δz = z_a - z = z - z_b; f is the focal length used by the camera; (xu_a, yu_a), (xu_b, yu_b) and (xu, yu) are the feature-point projections captured by the camera in real time, so a1, a2, a3, b1, b2, b3 can all be computed in real time;
B) Further derivation from the simplified result gives:

Δx = z/c1
Δy = z/c2
Δz = z/c3

where c1 = (a1 - a2)·a3 / (2·a3 - a1 - a2), c2 = (b1 - b2)·b3 / (2·b3 - b1 - b2), and c3 = (a1 - a2)·a3 / (a1·a3 + a2·a3 - 2·a1·a2);
C) Substituting the known length between two of the feature points into the relations derived in step B) and the expression determined in step 2), the space coordinate (x, y, z) of feature point C is obtained as:

x = xu·z / f
y = yu·z / f
z = L / sqrt(1/c1² + 1/c2² + 1/c3²)

where L is the length of AC;
D) From the collinearity of the three feature points, the midpoint property, and the coordinate of point C computed in step C), the space coordinates of the other two feature points are solved, completing the spatial positioning of the three collinear feature points.
CN201410682742.7A 2014-11-24 2014-11-24 Three-collinear-feature-point monocular vision spatial positioning method Active CN104408719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410682742.7A CN104408719B (en) 2014-11-24 2014-11-24 Three-collinear-feature-point monocular vision spatial positioning method


Publications (2)

Publication Number Publication Date
CN104408719A (en) 2015-03-11
CN104408719B CN104408719B (en) 2017-07-28

Family

ID=52646348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410682742.7A Active CN104408719B (en) 2014-11-24 2014-11-24 Three-collinear-feature-point monocular vision spatial positioning method

Country Status (1)

Country Link
CN (1) CN104408719B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030222984A1 (en) * 2002-06-03 2003-12-04 Zhengyou Zhang System and method for calibrating a camera with one-dimensional objects
CN101702233A (en) * 2009-10-16 2010-05-05 电子科技大学 Three-dimension locating method based on three-point collineation marker in video frame


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Isao Miyagawa et al., "Simple Camera Calibration From a Single Image Using Five Points on Two Orthogonal 1-D Objects", IEEE Transactions on Image Processing *
李小峰 et al., "单目摄像机标定方法的研究" [Research on monocular camera calibration methods], 《计算机工程与应用》 [Computer Engineering and Applications] *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106441234A (en) * 2016-09-22 2017-02-22 上海极清慧视科技有限公司 3D machine vision space detection calibration method
CN106441234B (en) * 2016-09-22 2018-12-28 上海极清慧视科技有限公司 Detect scaling method in a kind of 3D machine vision space
CN106815872A (en) * 2017-01-06 2017-06-09 芜湖瑞思机器人有限公司 Monocular vision space-location method based on conical projection conversion
CN110955237A (en) * 2018-09-27 2020-04-03 台湾塔奇恩科技股份有限公司 Teaching path module of mobile carrier
CN112634360A (en) * 2019-10-08 2021-04-09 北京京东乾石科技有限公司 Visual information determination method, device, equipment and storage medium
CN112634360B (en) * 2019-10-08 2024-03-05 北京京东乾石科技有限公司 Visual information determining method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN104408719B (en) 2017-07-28

Similar Documents

Publication Publication Date Title
CN111775146B (en) Visual alignment method under industrial mechanical arm multi-station operation
CN103714571B (en) A kind of based on photogrammetric single camera three-dimensional rebuilding method
CN108088390B (en) Optical losses three-dimensional coordinate acquisition methods based on double eye line structure light in a kind of welding detection
CN104075688B (en) A kind of binocular solid stares the distance-finding method of monitoring system
CN104933718B (en) A kind of physical coordinates localization method based on binocular vision
CN104408719A (en) Three-collinear-feature-point monocular vision space positioning method
CN103278138B (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN102810205B (en) The scaling method of a kind of shooting or photographic means
CN108109174A (en) A kind of robot monocular bootstrap technique sorted at random for part at random and system
CN104923593B (en) Vision-based positioning method for top layer bending plate
CN105328304B (en) Based on statistical weld seam starting point automatic localization method
CN104596502A (en) Object posture measuring method based on CAD model and monocular vision
CN104400279A (en) CCD-based method and system for automatic identification and track planning of pipeline space weld seams
CN105196292B (en) Visual servo control method based on iterative duration variation
CN104268876A (en) Camera calibration method based on partitioning
CN103093479A (en) Target positioning method based on binocular vision
CN102436660A (en) Automatic correction method and device of 3D camera image
Robbe et al. Quantification of the uncertainties of high-speed camera measurements
CN103473758A (en) Secondary calibration method of binocular stereo vision system
CN102221331A (en) Measuring method based on asymmetric binocular stereovision technology
CN102981406A (en) Sole glue spraying thickness control method based on binocular vision
CN106203429A (en) Based on the shelter target detection method under binocular stereo vision complex background
CN106595601A (en) Camera six-degree-of-freedom pose accurate repositioning method without hand eye calibration
CN104822060B (en) Information processing method, information processor and electronic equipment
CN104091345B (en) Five-point relative orientation method based on forward intersection constraints

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant