JPH036781A - Processing system for extraction of three-dimensional position information - Google Patents

Processing system for extraction of three-dimensional position information

Info

Publication number
JPH036781A
JPH036781A
Authority
JP
Japan
Prior art keywords
dimensional position
points
position information
point
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP1142829A
Other languages
Japanese (ja)
Inventor
Satoshi Shimada
聡 嶌田
Shigeki Masaki
正木 茂樹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP1142829A priority Critical patent/JPH036781A/en
Publication of JPH036781A publication Critical patent/JPH036781A/en
Pending legal-status Critical Current

Links

Abstract

PURPOSE: To easily extract the three-dimensional position information of an object by providing a coordinate transformation part which, taking one of the four feature points (two feature points on each of two frame images) as the origin, obtains the coordinates of the remaining three points.

CONSTITUTION: An image input part 101, a feature point extracting part 102, a coordinate transformation part 103, a reference point distance supply part 104, and a position information extracting part 105 are provided. Taking one of the four feature points on the two frame images as the origin, the coordinates of the remaining three points are used to extract the three-dimensional position of each feature point relative to the three-dimensional position of the reference point. It is thus possible to extract the three-dimensional position information of the feature points of an object simply by observing the moving object.

Description

[Detailed Description of the Invention]

[Industrial Application Field]

The present invention relates to a processing method for extracting three-dimensional position information by observing an object with a single image input device.

[Prior Art]

In the conventional method of extracting three-dimensional position information about feature points on an object from a series of frame images obtained with a single image input device, an object-moving means applies a predetermined motion with a predetermined displacement to the object; a stereo image pair is formed from the two images observed before and after the movement, and the three-dimensional position information is extracted by stereo three-dimensional position measurement.

[Problem to Be Solved by the Invention]

This conventional approach therefore requires either applying a motion of predetermined displacement to the object or measuring the displacement by hand, so extracting the three-dimensional position information takes considerable time and effort.

To solve these conventional problems, the object of the present invention is to provide a three-dimensional position information extraction processing method that can easily extract the three-dimensional position information of an object by observing a predetermined motion, with no restriction placed on the amount by which the object moves.

[Means for Solving the Problems]

In the three-dimensional position information extraction processing method of the present invention, a moving object is observed with a single image input device such as a television camera. From two frame images selected from the resulting series, two feature points are taken on each image, giving four feature points in total. Taking one of these four points as the origin, the coordinates of the remaining three points are obtained, and the three-dimensional positions of the feature points are extracted using one of those three points as the reference for three-dimensional position.

In the conventional case, the three-dimensional position of each feature point was extracted from its coordinates on two frame images by the principle of triangulation, so the displacement of the object had to be fixed in advance or measured and supplied separately. The three-dimensional position information extraction processing method of the present invention differs from the conventional method in that it has means for obtaining, of the four feature points formed by two feature points on two frame images, the coordinates of the remaining three points when one point is taken as the origin, and in that it extracts the three-dimensional positions of the feature points from these three coordinates, using a reference point, which is one of the feature points, as the reference for three-dimensional position.

[Embodiment]

An embodiment of the present invention is described below in detail with reference to the drawings. Fig. 1 shows the configuration of the three-dimensional position information extraction processing method of this embodiment, for the case in which the distance from one reference feature point to the image input device is supplied. In the figure, 101 is an image input section, 102 is a feature point extraction section, 103 is a coordinate transformation section, 104 is a reference point distance supply section, and 105 is a position information extraction section.

The image input section 101 observes the moving object with an image input device such as a television camera and outputs two frame images from the observed series of frame images to the feature point extraction section 102. The feature point extraction section 102 extracts at least two feature points from the two frame images output by the image input section 101 and outputs the coordinates of the extracted feature points on the two frame images to the coordinate transformation section 103. To extract feature points from the two frame images, for example, points at which the shape or surface color of the object changes sharply in a local neighborhood are detected by image processing, points corresponding to the same point on the object are matched across the images, and the matched points are taken as feature points. The coordinate transformation section 103 takes two of the feature points output by the feature point extraction section 102, giving four feature points in total on the two frame images, obtains the coordinates of the remaining three points when one of the four is taken as the origin, and outputs the coordinates of the three points to the position information extraction section 105. The reference point distance supply section 104 gives the reference for the three-dimensional position of each feature point; for example, one of the feature points is taken as the reference point, and the distance from the reference point to the image input device is supplied as the reference.

When only the positional relationship among the feature points is required, any convenient value may be supplied as the distance from the reference point to the image input device. The position information extraction section 105 extracts and outputs the three-dimensional positions of the feature points at the input times of the two frame images, using the coordinates of the three points output by the coordinate transformation section 103 and the distance from the reference point to the image input device output by the reference point distance supply section 104.

In the following, an implementation of the coordinate transformation section 103 and the position information extraction section 105 is described for the case in which the movement of the object between the two frame images contains no rotational component.

When the camera is modeled by the central projection shown in Fig. 2, in an orthogonal coordinate system whose origin is the center of the camera lens, whose Z axis is the optical axis of the camera, and whose X axis is the vertical direction of the camera's image plane, the three-dimensional position (Xk, Yk, Zk) of a feature point Pk on the object, the position (Uk, Vk) of Pk on the frame image, and the distance f from the center of the camera lens to the image plane satisfy

Xk = (Uk/f)·Zk,  Yk = (Vk/f)·Zk   …(1)
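As a quick illustration of equation (1), the central projection and its inversion given a known depth can be sketched as follows (a minimal sketch; the focal distance and point coordinates are invented example values):

```python
# Central projection of equation (1): a point (Xk, Yk, Zk) in camera
# coordinates maps to image coordinates (Uk, Vk) = (f*Xk/Zk, f*Yk/Zk),
# where f is the distance from the lens center to the image plane.

def project(Xk, Yk, Zk, f):
    """Map a 3-D point to its image-plane coordinates (Uk, Vk)."""
    return f * Xk / Zk, f * Yk / Zk

def back_project(Uk, Vk, Zk, f):
    """Equation (1): recover (Xk, Yk) from (Uk, Vk) once the depth Zk is known."""
    return Uk * Zk / f, Vk * Zk / f

f = 2.0                          # example focal distance (arbitrary units)
Xk, Yk, Zk = 1.0, -0.5, 4.0      # example feature-point position
Uk, Vk = project(Xk, Yk, Zk, f)  # -> (0.5, -0.25)
assert back_project(Uk, Vk, Zk, f) == (Xk, Yk)
```

Note that equation (1) only recovers X and Y once Z is known; obtaining Z from the image observations is the role of the derivation leading to equation (6).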

As shown in Fig. 3, let PiO, PjO and Pik, Pjk (k = 1, …, N−1, where N is the number of feature points) denote the positions of the feature points P0 and Pk at the input times of the i-th and j-th frame images, and let QiO(UiO, ViO), QjO(UjO, VjO), Qik(Uik, Vik), Qjk(Ujk, Vjk) denote the corresponding feature points on the two frame images output by the feature point extraction section 102.

Of the four points QiO, QjO, Qik, Qjk on the two frame images of the two feature points, take, for example, the point Qjk as the origin, and let (W1x, W1y), (W2x, W2y), (W3x, W3y) denote the transformed coordinates of the remaining three points Qik, QjO, QiO, respectively. The transformed coordinates of the three points output by the coordinate transformation section 103 are then given by equation (2).

QiO: (W3x, W3y) = (UiO − Ujk, ViO − Vjk)
QjO: (W2x, W2y) = (UjO − Ujk, VjO − Vjk)
Qik: (W1x, W1y) = (Uik − Ujk, Vik − Vjk)   …(2)

Next, the position information extraction section 105 is explained. In Fig. 3, when the movement of the object between the two frame images contains no rotational component, the following equation holds:
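Equation (2) is simply a change of origin among the four image points; a minimal sketch (the function name is invented, and coordinate pairs are plain tuples):

```python
def transform_coords(QiO, QjO, Qik, Qjk):
    """Equation (2): express QiO, QjO, Qik relative to Qjk as the origin.

    Each argument is an image point (U, V); returns the transformed
    coordinates (W1, W2, W3) of Qik, QjO, QiO respectively.
    """
    ox, oy = Qjk
    W1 = (Qik[0] - ox, Qik[1] - oy)   # Qik -> (W1x, W1y)
    W2 = (QjO[0] - ox, QjO[1] - oy)   # QjO -> (W2x, W2y)
    W3 = (QiO[0] - ox, QiO[1] - oy)   # QiO -> (W3x, W3y)
    return W1, W2, W3

# Example with invented image coordinates:
W1, W2, W3 = transform_coords((3.0, 4.0), (1.0, 1.0), (5.0, 2.0), (2.0, 2.0))
# W1 == (3.0, 0.0), W2 == (-1.0, -1.0), W3 == (1.0, 2.0)
```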

OPjk = OPik + PiOPjO   …(3)

(a vector equation, in which O is the coordinate origin and PiOPjO is the displacement vector from PiO to PjO). From equations (1) and (3),

the relations

W1x·Zik + W2x·ZjO = W3x·ZiO   …(4)
W1y·Zik + W2y·ZjO = W3y·ZiO   …(5)

are obtained by eliminating Zjk and f. From equations (2), (4) and (5), the Z components of the three-dimensional positions of the feature points can be expressed as

Zik = ZiO·(W3x·W2y − W2x·W3y)/(W1x·W2y − W2x·W1y)
ZjO = ZiO·(W1x·W3y − W3x·W1y)/(W1x·W2y − W2x·W1y)
Zjk = Zik + ZjO − ZiO   …(6)

and the X and Y components of the three-dimensional positions of the feature points are obtained by substituting equation (6) into equation (1). By computing the Z coordinate ZiO of the reference point from the supplied distance and applying equation (6), the three-dimensional positions of the feature points at the input times of the i-th and j-th frame images are obtained, referenced to the distance from the reference point PiO to the image input device.

That is, from the coordinates (W1x, W1y), (W2x, W2y), (W3x, W3y) of the three points output by the coordinate transformation section 103 and the distance from the reference point PiO to the image input device output by the reference point distance supply section 104, the three-dimensional position information of the feature points is extracted.

[Effects of the Invention]

As explained above, according to the present invention, of the four feature points formed by two feature points on two frame images, the coordinates of the remaining three points when one point is taken as the origin are used to extract the three-dimensional position of each feature point relative to the three-dimensional position of the reference point. It is therefore unnecessary to move the object by a predetermined amount or to measure its displacement by hand; the three-dimensional position information of the feature points on the object can be extracted simply by observing the moving object.
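The procedure described above can be checked end to end on synthetic data: translate two object points by an arbitrary vector T, project both frames through equation (1), and recover all depths from the reference depth ZiO alone. All numeric values below are invented for the illustration; note that T itself is never used by the recovery step, which is the point of the invention.

```python
# End-to-end sketch: pure translation T, projection (1), change of origin (2),
# depth recovery (6). Only the reference depth ZiO is supplied, as by the
# reference point distance supply section 104.
f = 1.0
T = (0.4, -0.2, 1.0)                                   # unknown to the method
PiO, Pik = (0.5, 0.5, 2.0), (-0.5, 1.0, 4.0)           # points at frame i
PjO = tuple(a + b for a, b in zip(PiO, T))             # points at frame j
Pjk = tuple(a + b for a, b in zip(Pik, T))

proj = lambda P: (f * P[0] / P[2], f * P[1] / P[2])    # equation (1)
QiO, QjO, Qik, Qjk = (proj(P) for P in (PiO, PjO, Pik, Pjk))

ox, oy = Qjk                                           # equation (2): Qjk as origin
W1 = (Qik[0] - ox, Qik[1] - oy)
W2 = (QjO[0] - ox, QjO[1] - oy)
W3 = (QiO[0] - ox, QiO[1] - oy)

ZiO = PiO[2]                                           # supplied reference depth
D = W1[0] * W2[1] - W2[0] * W1[1]
Zik = ZiO * (W3[0] * W2[1] - W2[0] * W3[1]) / D        # equation (6)
ZjO = ZiO * (W1[0] * W3[1] - W3[0] * W1[1]) / D
Zjk = Zik + ZjO - ZiO

# (Zik, ZjO, Zjk) matches the true depths (4.0, 3.0, 5.0) up to rounding.
```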

[Brief Description of the Drawings]

Fig. 1 is a configuration diagram of the three-dimensional position information extraction processing method of one embodiment of the present invention; Fig. 2 shows the camera model; Fig. 3 shows the correspondence between the three-dimensional position changes of feature points caused by the movement of the object and the two-dimensional position changes of the feature points on the frame images.

101: image input section; 102: feature point extraction section; 103: coordinate transformation section; 104: reference point distance supply section; 105: position information extraction section.

Claims (1)

[Claims]

A three-dimensional position information extraction processing method in which a moving object is observed with an image input device and three-dimensional position information is extracted, the method comprising coordinate transformation means that extracts feature points from each of the i-th and j-th frame images in a series of frame images obtained from the image input device and that, of the four feature points formed by two of the feature points extracted from each of the i-th and j-th frame images, obtains the coordinates of the remaining three points when one point is taken as the origin; the three-dimensional positions of the feature points, referenced to one of the feature points, being extracted using the coordinates of the three points obtained by the coordinate transformation means.
JP1142829A 1989-06-05 1989-06-05 Processing system for extraction of three-dimensional position information Pending JPH036781A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1142829A JPH036781A (en) 1989-06-05 1989-06-05 Processing system for extraction of three-dimensional position information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1142829A JPH036781A (en) 1989-06-05 1989-06-05 Processing system for extraction of three-dimensional position information

Publications (1)

Publication Number Publication Date
JPH036781A true JPH036781A (en) 1991-01-14

Family

ID=15324586

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1142829A Pending JPH036781A (en) 1989-06-05 1989-06-05 Processing system for extraction of three-dimensional position information

Country Status (1)

Country Link
JP (1) JPH036781A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009020690A (en) * 2007-07-11 2009-01-29 Kyocera Mita Corp Image forming apparatus
JP2009020691A (en) * 2007-07-11 2009-01-29 Kyocera Mita Corp User authentication method, user authentication apparatus and image forming apparatus
CN107390205A (en) * 2017-07-20 2017-11-24 清华大学 A kind of monocular vision vehicle odometry method that front truck feature is obtained using car networking
CN107390205B (en) * 2017-07-20 2019-08-09 清华大学 A kind of monocular vision vehicle odometry method obtaining front truck feature using car networking

Similar Documents

Publication Publication Date Title
Abidi et al. A new efficient and direct solution for pose estimation using quadrangular targets: Algorithm and evaluation
Zhou et al. Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations
CN111089569B (en) Large box body measuring method based on monocular vision
Li Camera calibration of a head-eye system for active vision
CN102980526A (en) Three-dimensional scanister using black and white camera to obtain color image and scan method thereof
CN112288815B (en) Target die position measurement method, system, storage medium and device
JPH036781A (en) Processing system for extraction of three-dimensional position information
EP1143221A3 (en) Method for determining the position of a coordinate system of an object in a 3D space
CN111833392A (en) Multi-angle scanning method, system and device for mark points
JP3512894B2 (en) Relative moving amount calculating apparatus and relative moving amount calculating method
Wang et al. Facilitating PTZ camera auto-calibration to be noise resilient with two images
Panerai et al. A 6-dof device to measure head movements in active vision experiments: geometric modeling and metric accuracy
JP2798393B2 (en) Method and apparatus for estimating posture of object
Zhang et al. Digital photogrammetry applying to reverse engineering
Gong et al. Multi view 3D reconstruction method for weak texture objects based on" eye-in-hand" model
JPH0238804A (en) Apparatus for measuring object
CN113379663B (en) Space positioning method and device
Lu et al. Calibration of a 3D vision system using pattern projection
Hecht et al. Triangulation based digitizing of tooling and sheet metal part surfaces-Measuring technique, analysis of deviation to CAD and remarks on use of 3D-coordinate fields for the finite element analysis
Cappellini et al. From multiple views to object recognition
JPH036780A (en) Processing system for extraction of three-dimensional position information
Wenli et al. Pose estimation problem in computer vision
JPS62291513A (en) Distance measurement by light-intercepting method
JPS6247512A (en) Three dimensional position recognizing device
JP2661118B2 (en) Conversion method of object coordinates and visual coordinates using image processing device