JP2580516B2 - Real-time three-dimensional motion measuring apparatus and method - Google Patents

Real-time three-dimensional motion measuring apparatus and method

Info

Publication number
JP2580516B2
JP2580516B2 JP4345585A JP34558592A
Authority
JP
Japan
Prior art keywords
imaging
pixel
image pickup
time
devices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP4345585A
Other languages
Japanese (ja)
Other versions
JPH06174463A (en)
Inventor
Toshiaki Iwata (岩田 敏彰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Original Assignee
Agency of Industrial Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology filed Critical Agency of Industrial Science and Technology
Priority to JP4345585A priority Critical patent/JP2580516B2/en
Publication of JPH06174463A publication Critical patent/JPH06174463A/en
Application granted granted Critical
Publication of JP2580516B2 publication Critical patent/JP2580516B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Description

【発明の詳細な説明】DETAILED DESCRIPTION OF THE INVENTION

【0001】[0001]

【産業上の利用分野】本発明は、3次元空間で運動(移
動)する未知の物体やモデル化が困難な物体の運動状態
を測定する実時間3次元運動測定装置およびその方法に
関し、より詳しくは測定結果を用いて動作するロボット
に好適な実時間3次元運動測定装置およびその方法に関
する。
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a real-time three-dimensional motion measuring apparatus and method for measuring the motion state of an unknown object moving in three-dimensional space or of an object that is difficult to model, and more particularly to a real-time three-dimensional motion measuring apparatus and method suitable for a robot that operates using the measurement results.

【0002】[0002]

【従来の技術】3次元運動の実時間測定方法としては形
状がわかっている物体についてはモデルと得られた画像
を照合して運動状態を知る方法が知られている(例えば
Donald B. Gennery: International Journal of Computer Vision, 243, (1992))。この測定方法はモデ
ルを計算機の中で対象物の運動を推測しながら回転・移
動させ、その得られた形状と実際の画像とを重ね合わせ
て対象物の運動を推定するものである。正確なモデルを
もつことにより、対象物の運動を推定することを可能に
している。
2. Description of the Related Art As a method for measuring three-dimensional motion in real time, for an object whose shape is known, a method is known in which a model is matched against the acquired image to determine the motion state (for example, Donald B. Gennery: International Journal of Computer Vision, 243, (1992)). In this measurement method, the model is rotated and translated in a computer while the motion of the object is being estimated, and the resulting shape is superimposed on the actual image to estimate the motion of the object. Having an accurate model is what makes this estimation possible.

【0003】撮像結果が動画像でないものについては従
来からステレオ立体視による画像認識を用いた測定方法
がある。この場合、カメラは2台使用するものの、固定
されている。2台のカメラから得られた左右画像の対応
を取ることにより、三角測量の原理で対象物の位置を計
算するものである。
[0003] For cases where the imaging result is not a moving image, there is a conventional measurement method that uses image recognition by stereoscopic vision. In this case two cameras are used, but they are fixed in place. By establishing the correspondence between the left and right images obtained from the two cameras, the position of the object is calculated by the principle of triangulation.

【0004】[0004]

【発明が解決しようとする課題】上記のモデルを使う方
法では、どうしても正確なモデルが必要であるが、現実
の世界では対象物の形状が不明であったり、形状が複雑
でモデル化が困難である場合が多く、モデルがない場合
は測定が困難といった問題があった。
In the method using the above model, an accurate model is indispensable; in the real world, however, the shape of the object is often unknown or so complicated that modeling is difficult, and without a model, measurement is problematic.

【0005】固定カメラのステレオ視では対応点の検出
が困難で、時間がかかり、実時間処理は困難であった。
また、遠方の計測にはズームレンズを使っても対応点が
取れないので、測定結果は誤差を大きく含んだものとな
る。
[0005] In stereo vision with fixed cameras, detecting corresponding points is difficult and time-consuming, making real-time processing impractical. Moreover, for distant measurement, corresponding points cannot be obtained even with a zoom lens, so the measurement result contains large errors.

【0006】そこで、本発明は、モデルを使用すること
なく、運動状態の測定対象を追尾することの可能な実時
間3次元運動測定装置およびその測定方法を提供するこ
とを目的とする。
SUMMARY OF THE INVENTION It is therefore an object of the present invention to provide a real-time three-dimensional motion measuring apparatus and a measuring method capable of tracking an object whose motion state is to be measured without using a model.

【0007】[0007]

【課題を解決するための手段】このような目的を達成す
るために、請求項1の発明は、測定対象の移動の物体を
2台の撮像装置により撮像する撮像手段と、該撮像手段
の2台の撮像装置の視線方向をそれぞれ移動させる可動
手段と、前記撮像手段により得られる撮像結果の中か
ら、光学的特徴を有する物体の特徴点の画素位置を検出
する検出手段と、該検出手段の検出結果に基づき、前記
物体の特徴点を前記2台の撮像装置が注視するように前
記可動手段を追尾制御する制御手段とを具え、前記検出
手段は、前記撮像手段の撮像結果の中で周辺の画素との
光度レベルの差が最も大きい画素およびその位置を検出
して、当該位置を前記光学的特徴を有する物体の特徴点
の画素位置とすることを特徴とする。
In order to achieve the above object, the invention of claim 1 comprises: imaging means for imaging a moving object to be measured with two imaging devices; movable means for moving the line-of-sight directions of the two imaging devices; detection means for detecting, from the imaging results obtained by the imaging means, the pixel position of a feature point of an object having an optical feature; and control means for tracking-controlling the movable means, based on the detection result, so that the two imaging devices gaze at the feature point of the object. The detection means detects, within the imaging result, the pixel whose luminous-intensity level differs most from its surrounding pixels, together with its position, and takes that position as the pixel position of the feature point of the object having the optical feature.

【0008】請求項2の発明は、前記2台の撮像装置が
時系列的に注視した複数の注視位置および該注視位置間
の移動時間を用いて前記物体の移動速度を算出する演算
処理手段をさらに具えたことを特徴とする。
According to a second aspect of the present invention, the apparatus further comprises arithmetic processing means for calculating the moving speed of the object using a plurality of gazing positions gazed at in time series by the two imaging devices and the travel time between those gazing positions.

【0009】請求項3の発明は、測定対象の移動の物体
を2台の撮像装置により一定間隔で撮像し、当該2台の
撮像装置の撮像結果の中から、光学的特徴を有する前記
物体の特徴点として、周辺の画素との光度レベルの差が
最も大きい画素およびその位置を検出し、該特徴点を前
記2台の撮像装置の次回の撮像における注視点として前
記2台の撮像装置に前記物体を追尾させることを特徴と
する。
According to a third aspect of the present invention, a moving object to be measured is imaged at fixed intervals by two imaging devices; from the imaging results of the two imaging devices, the pixel whose luminous-intensity level differs most from its surrounding pixels, and its position, are detected as the feature point of the object having an optical feature; and the two imaging devices track the object by taking that feature point as the gazing point of their next imaging.

【0010】請求項4の発明は、測定対象の移動の物体
を2台の撮像装置により一定間隔で撮像し、当該2台の
撮像装置の撮像結果の中から光学的特徴を有する前記物
体の特徴点を抽出し、該特徴点を前記2台の撮像装置の
次回の撮像における注視点として前記2台の撮像装置に
前記物体を追尾させ、前記撮像結果は1画面分の画素毎
の画像信号で構成され、該1画面の中央部分の画素密度
を細く、1画面の中央部分以外の周辺部分を相対的に粗
くするようにしたことを特徴とする。
According to a fourth aspect of the present invention, a moving object to be measured is imaged at fixed intervals by two imaging devices; a feature point of the object having an optical feature is extracted from the imaging results of the two imaging devices; and the two imaging devices track the object by taking that feature point as the gazing point of their next imaging. The imaging result consists of an image signal for each pixel of one screen, with the pixel density made fine in the central portion of the screen and relatively coarse in the peripheral portion outside the center.

【0011】[0011]

【作用】請求項1,3の発明は、物体の光学的特徴点と
して撮像結果の中の、周辺画素の光度レベルの差が最も
大きい画素を2台の撮像装置の注視点として追尾するこ
とによりモデル化が困難な物体や未確認物体を追尾する
ことが可能となる。
According to the first and third aspects of the present invention, the pixel in the imaging result whose luminous-intensity level differs most from its surrounding pixels is tracked, as the optical feature point of the object, as the gazing point of the two imaging devices; this makes it possible to track objects that are difficult to model and unidentified objects.

【0012】請求項2の発明は、この注視位置を用いて
物体の移動速度を算出し、その算出結果が測定結果とな
る。
According to the second aspect of the present invention, the moving speed of the object is calculated using these gazing positions, and the calculation result becomes the measurement result.

【0013】請求項4の発明は、画素密度が全て細い画
面構成に比べて画素数が少なくなるので特徴点の抽出時
間が短くなり、2台の撮像装置が追尾を始めると、撮像
結果の中の特徴点位置は画面中央部に移動してくるの
で、追尾精度を劣化させることはない。
According to the fourth aspect of the present invention, since the number of pixels is smaller than in a screen configuration that is uniformly fine, the time needed to extract the feature point is shortened; and once the two imaging devices begin tracking, the feature-point position in the imaging result moves toward the center of the screen, so tracking accuracy does not deteriorate.

【0014】[0014]

【実施例】以下、図面を参照して本発明の実施例を詳細
に説明する。
Embodiments of the present invention will be described below in detail with reference to the drawings.

【0015】図1は、本発明における測定方法の原理を
示す説明図である。
FIG. 1 is an explanatory diagram showing the principle of the measuring method according to the present invention.

【0016】図1において左目および右目に相当するビデオカメラが物体の中の特定の注視点を注視する場合、注
視点の位置Pの座標(x,y,z)は左目および右目の
位置および左目および右目それぞれから注視点に向う視
線ベクトルにより三角測量の原理で定まる。
In FIG. 1, when the video cameras corresponding to the left and right eyes gaze at a specific gazing point on an object, the coordinates (x, y, z) of the gazing-point position P are determined, by the principle of triangulation, from the positions of the left and right eyes and the sight-line vectors running from each eye toward the gazing point.

【0017】また、一定単位時間後の注視点の位置P′
も同様の関係がある。
The same relationship holds for the position P′ of the gazing point after a fixed unit time.

【0018】そこで、本発明では、撮像結果の中から物
体の特徴のある点(以下、特徴点と称す)を自動的に検
出し、この特徴点を2台のビデオカメラの注視点として
追尾する。このために2つのビデオカメラの姿勢制御を
(実時間3次元)測定装置において実行すると共に、こ
のような制御の下で物体の位置や速度についての運動状
態を測定装置において測定することに特徴がある。
Therefore, in the present invention, a characteristic point of the object (hereinafter referred to as a feature point) is automatically detected from the imaging results, and this feature point is tracked as the gazing point of the two video cameras. To this end, the (real-time three-dimensional) measuring apparatus performs attitude control of the two video cameras and, under this control, measures the motion state of the object, that is, its position and velocity.

【0019】このような測定方法を採用したロボット搭
載の測定装置の回路構成例を図2に示す。
FIG. 2 shows an example of a circuit configuration of a measuring device mounted on a robot adopting such a measuring method.

【0020】図2において、右目用,左目用のビデオカ
メラ5B,5Aの撮像結果として画素毎に得られるアナ
ログの画像信号はアナログ/デジタル(A/D)変換器
3B,3Aによりデジタル形態に変換され、入出力イン
タフェース(I/O)2を介してマイクロプロセッサ
(MPU)1に転送される。MPU1は一定単位時間毎
に上記画素毎の画像信号1画面分を内部メモリに記憶す
る。MPU1は後述の画像処理を行って特徴点を検出す
ると、ビデオカメラ5A,5Bの新たな注視点を定めこ
の注視点にビデオカメラ5A,5Bが注視するように姿
勢制御を行う。また、前回の注視点の位置と、今回の注
視点の位置と、注視点(物体の特徴点)の移動時間とを
用いて測定対象の速度を算出する。
In FIG. 2, the analog image signals obtained pixel by pixel as the imaging results of the right-eye and left-eye video cameras 5B and 5A are converted into digital form by analog/digital (A/D) converters 3B and 3A and transferred to the microprocessor (MPU) 1 via the input/output interface (I/O) 2. At every fixed unit time, the MPU 1 stores one screen's worth of the per-pixel image signals in its internal memory. When the MPU 1 detects a feature point by the image processing described later, it determines a new gazing point for the video cameras 5A and 5B and performs attitude control so that the cameras gaze at that point. It also calculates the speed of the measurement target using the previous gazing-point position, the current gazing-point position, and the travel time of the gazing point (the feature point of the object).

【0021】ビデオカメラ5A,5Bの撮像結果は1画
面で構成され、画面を構成する画素は人間の網膜のよう
に画面中心部が間隔の細い画素を有し、すなわち画素密
度が小さく、周辺部が画素密度の粗い画素を有する。こ
のような画素配置のハードウエア化が困難な場合には高
画素密度の撮像結果の平均化を行い、上述のような画素
密度に設定すればよい。
The imaging result of each of the video cameras 5A and 5B forms one screen whose pixels, like those of the human retina, are closely spaced (finely sampled) at the center of the screen and coarsely spaced toward the periphery. If such a pixel arrangement is difficult to realize in hardware, imaging results of a uniformly high pixel density can be averaged to obtain the pixel densities described above.
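The averaging fallback described in the paragraph above can be sketched as a simple block-averaging pass over a uniformly fine image; the function name, window size, and tile size below are illustrative, not taken from the patent.

```python
def foveate(img, center, block):
    """Average each peripheral block x block tile to a single value while
    keeping a central window (half-width `center` around the middle) at
    full resolution, emulating a retina-like pixel-density layout.
    img: list of rows of scalar intensities."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    cy0, cy1 = h // 2 - center, h // 2 + center
    cx0, cx1 = w // 2 - center, w // 2 + center
    for y in range(0, h, block):
        for x in range(0, w, block):
            if cy0 <= y and y + block <= cy1 and cx0 <= x and x + block <= cx1:
                continue  # tile lies inside the fine-grained center: keep as-is
            vals = [img[yy][xx] for yy in range(y, y + block)
                                for xx in range(x, x + block)]
            mean = sum(vals) / len(vals)
            for yy in range(y, y + block):
                for xx in range(x, x + block):
                    out[yy][xx] = mean
    return out

# 64x64 ramp image; keep a 32x32 center fine, average 4x4 peripheral tiles
img = [[64 * y + x for x in range(64)] for y in range(64)]
fov = foveate(img, center=16, block=4)
```

Each peripheral tile collapses to its mean while the central window is untouched, which is one plausible reading of the averaging step described above.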

【0022】ビデオカメラ5A,5Bはロボットの目か
ら見て水平方向に沿って上下,左右に回動可能な台(不
図示、パン・チルト可動雲台と呼ばれる)に設置され、
モータ等を用いた周知の駆動機構4A,4Bにより上記
上下,左右方向に移動される。
The video cameras 5A and 5B are mounted on a platform (not shown; called a pan-tilt head) that can be turned up-down and left-right as seen from the robot's eyes, and they are moved in those directions by well-known drive mechanisms 4A and 4B using motors or the like.

【0023】駆動機構4A,4BはMPU1の指示する
移動方向および移動量だけビデオカメラ5A,5Bを移
動できるものであればどのような機構を用いてもよい。
Any mechanism may be used for the drive mechanisms 4A and 4B as long as it can move the video cameras 5A and 5B in the direction and by the amount instructed by the MPU 1.

【0024】以下、図2の回路の動作および測定処理内
容を図3のフローチャートを参照しながら説明する。
The operation of the circuit of FIG. 2 and the contents of the measurement processing will be described below with reference to the flowchart of FIG.

【0025】図3はMPU1の実行処理手順を機能的に
示す。この実行処理手順は実際にはプログラム言語形態
でMPU1に格納され、実行される。図3において、初
期処理としてビデオカメラ5A,5Bはロボットの正面
の特定位置を注視するように視線ベクトル(図1参照)
が定められ、ビデオカメラの姿勢位置が初期設定される
(S1)。このような状態において、測定対象物がビデ
オカメラ5A,5Bの視野に入ると、次に、MPU1は
ビデオカメラ5A,5Bからの撮像結果を取得し、内部
メモリに記憶する。また、MPU1は光度レベル、例え
ば輝度について周辺の画素との変化(差)が最も大きい
画素(以下、最大変化画素と称す)をそれぞれのビデオ
カメラ毎に検出する(S2,S3)。より具体的には、
1画面の各画素を注目画素として、次式により指標値f
を算出し、fの値の中で最大値を持つ画素を最大変化画
素と決定する。
FIG. 3 functionally shows the processing procedure executed by the MPU 1; in practice it is stored in the MPU 1 in the form of a program and executed. In FIG. 3, as initial processing, the sight-line vectors (see FIG. 1) are set so that the video cameras 5A and 5B gaze at a specific position in front of the robot, and the attitude of the video cameras is initialized (S1). In this state, when the object to be measured enters the field of view of the video cameras 5A and 5B, the MPU 1 acquires the imaging results from the video cameras 5A and 5B and stores them in internal memory. The MPU 1 then detects, for each video camera, the pixel whose luminous-intensity level (for example, luminance) changes most with respect to its surrounding pixels (hereinafter, the maximum-change pixel) (S2, S3). More specifically, taking each pixel of the screen in turn as the pixel of interest, the index value f is computed by the following equation, and the pixel with the largest value of f is determined to be the maximum-change pixel.

【0026】[0026]

【数1】 (Equation 1)

【0027】ここでPr,Pg,Pbはある画素の赤,
緑,青の光の強度値を表わす。
Here, Pr, Pg, and Pb denote the intensity values of the red, green, and blue light of a pixel.

【0028】oは注目している画素の位置を示す。Kは
注目画素を中心とした周辺の画素位置を表わす。また数
1は注目画素の周囲5×5画素を最大変化画素の検出に
用いる例である。
Here o denotes the position of the pixel of interest, and K denotes a peripheral pixel position around it. Equation 1 is an example in which the 5×5 pixels surrounding the pixel of interest are used to detect the maximum-change pixel.
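Equation 1 survives in this record only as an image placeholder, so the exact form of f is not reproduced; from the surrounding description (RGB intensity differences between the pixel of interest o and each pixel K of its 5×5 neighborhood) one plausible sketch of the maximum-change-pixel detector is the following, where a sum of absolute differences is an assumption.

```python
def max_change_pixel(img):
    """Return ((x, y), f) for the pixel whose summed RGB intensity
    difference against its 5x5 neighborhood is largest.
    The absolute-difference form of f is assumed, not quoted from the patent.
    img: list of rows of (r, g, b) tuples."""
    h, w = len(img), len(img[0])
    best_f, best_pos = -1.0, None
    for y in range(2, h - 2):          # skip a 2-pixel border for the 5x5 window
        for x in range(2, w - 2):
            r0, g0, b0 = img[y][x]
            f = 0.0
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    r, g, b = img[y + dy][x + dx]
                    f += abs(r0 - r) + abs(g0 - g) + abs(b0 - b)
            if f > best_f:
                best_f, best_pos = f, (x, y)
    return best_pos, best_f

# uniform gray frame with a single bright spot: the spot is the feature point
frame = [[(50, 50, 50) for _ in range(16)] for _ in range(16)]
frame[8][5] = (255, 255, 255)
pos, f = max_change_pixel(frame)  # → pos == (5, 8)
```

Running this over each camera's frame gives the per-camera maximum-change pixel used as the next gazing point in steps S2 and S3.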

【0029】このようにして、ビデオカメラ5A,5B
の撮像結果の中からそれぞれ最大変化画素を検出する
と、この最大変化画素を注視点とするように、すなわ
ち、最大変化画素が撮像結果の中心位置となるように注
視点位置を現在のロボット正面位置から移動させる。こ
のための移動方向,移動量は予め定めた演算式で算出さ
れ、MPU1から駆動機構4A,4Bに移動方向および
移動量が指示される(S4)。このとき、粗い画素で特
徴点を抽出している可能性があるので再びビデオ画像を
取り込み(第2回目)、中央部付近に移動した特徴点の
方向にカメラの向きを微調整する(S5)。次に、MP
U1はS6で再びビデオカメラ5A,5Bの撮像結果
(第3回目)を取り込み、5A,5Bそれぞれの中央部
付近についてS6と画像の相関が最大となる方向にカメ
ラを移動する(S6)。S6とS5のカメラの向きの違
いから物体の移動量を算出し、これをもとに速度を算出
する。ここでS6を複数回連続して行い、速度測定の精
度を上げてもよい。MPU1は再びビデオカメラ5A,
5Bの撮像(第4回目)結果から別の注視点を見つける
(S7) MPU1の実行処理はS7→S4に戻り、以下、S4〜
S7のループ処理で測定対象が撮像結果の中心に位置す
るようにビデオカメラ5A,5Bの姿勢制御すなわち、
物体の追尾のための姿勢制御が行われる。
When the maximum-change pixel has been detected in the imaging result of each of the video cameras 5A and 5B in this way, the gazing-point position is moved from the current robot-front position so that the maximum-change pixel becomes the gazing point, that is, so that the maximum-change pixel comes to the center of the imaging result. The moving direction and amount for this are calculated by a predetermined arithmetic expression and are commanded by the MPU 1 to the drive mechanisms 4A and 4B (S4). At this point the feature point may have been extracted from coarse pixels, so a video image is captured again (second capture) and the camera orientations are finely adjusted toward the feature point, which has now moved near the center (S5). Next, the MPU 1 again captures the imaging results of the video cameras 5A and 5B (third capture) and moves the cameras in the direction that maximizes the image correlation near the center of each of 5A and 5B (S6). The amount of object movement is calculated from the difference between the camera orientations in S6 and S5, and the speed is calculated from it; S6 may be repeated several times in succession to raise the accuracy of the speed measurement. The MPU 1 then finds another gazing point from the result of a further imaging by the video cameras 5A and 5B (fourth capture) (S7). Execution returns from S7 to S4, and thereafter, in the loop S4 to S7, the attitudes of the video cameras 5A and 5B are controlled so that the measurement target stays at the center of the imaging result, i.e., attitude control for tracking the object is performed.

【0030】第3回目の物体の撮像が行われ、その撮像
結果の最大画素位置が検出されるとS6で物体の速度が
算出される。この処理内容を説明しておく。ビデオカメ
ラ5A,5Bの時刻tの注視点位置を(x,y,z)、
時刻t+Δtの注視点位置を(x′,y′,z′)とし
たとき(図1の参照)、物体の3次元空間での速度は次
式により表わされる。
When the third imaging of the object has been performed and the maximum-change pixel position in its result has been detected, the speed of the object is calculated in S6. This processing is as follows. If the gazing-point position of the video cameras 5A and 5B at time t is (x, y, z) and the gazing-point position at time t + Δt is (x′, y′, z′) (see FIG. 1), the velocity of the object in three-dimensional space is given by the following equations.

【0031】[0031]

【数2】Vx = (x′ − x)/Δt

【0032】[0032]

【数3】Vy = (y′ − y)/Δt

【0033】[0033]

【数4】Vz = (z′ − z)/Δt として与えられる。(It is given as Vz = (z′ − z)/Δt.)
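Equations 2 to 4 amount to a component-wise finite difference between two triangulated gazing-point positions; a minimal sketch (function and variable names are illustrative):

```python
def velocity(p, p_next, dt):
    """Equations 2-4: component-wise velocity (Vx, Vy, Vz) of the gazing
    point between position p at time t and p_next at time t + dt."""
    (x, y, z), (x2, y2, z2) = p, p_next
    return ((x2 - x) / dt, (y2 - y) / dt, (z2 - z) / dt)

v = velocity((1.0, 2.0, 3.0), (2.0, 2.5, 3.0), 0.5)  # → (2.0, 1.0, 0.0)
```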

【0034】上記注視点の位置は、The position of the gazing point is

【0035】[0035]

【外1】 [Outside 1]

【0036】[0036]

【外2】 [Outside 2]

【0037】とすると(図1参照)、次式で表わされる
2直線
Then (see FIG. 1), two straight lines represented by the following equations are obtained.

【0038】[0038]

【数5】 (Equation 5)

【0039】[0039]

【数6】 (Equation 6)

【0040】の交点として得られる。(The gazing-point position is obtained as the intersection of these two lines.)
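Equations 5 and 6 appear in this record only as image placeholders, but they express the two sight lines whose intersection gives the gazing point. Assuming each line is described by a camera position p and a sight-line vector d (an assumption consistent with FIG. 1, not a quotation of the equations), a standard reconstruction takes the midpoint of the shortest segment between the two (possibly skew) lines:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gaze_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between sight lines p1 + t*d1 and
    p2 + s*d2 (camera positions p, sight-line vectors d). For lines that
    actually intersect, this midpoint is the intersection itself."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only when the sight lines are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]  # closest point on line 1
    q2 = [p + s * u for p, u in zip(p2, d2)]  # closest point on line 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# left/right cameras 2 units apart, sight lines crossing at (0, 0, 1)
P = gaze_point((-1, 0, 0), (1, 0, 1), (1, 0, 0), (-1, 0, 1))  # → [0.0, 0.0, 1.0]
```

Using the midpoint rather than a raw intersection makes the computation robust to the small calibration errors that keep real sight lines from meeting exactly.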

【0041】以上の算出結果が物体の測定位置および測
定速度として信号出力される。以上、説明したように、
本実施例では物体の最大光度変化を持つ点を光学的な特
徴点としているので、モデル認識のような複雑な画像処
理を行わなくても簡単に物体の存在を認識することが可
能となる。
The above calculation results are output as signals representing the measured position and measured speed of the object. As described above, in this embodiment the point with the maximum luminous-intensity change on the object is used as the optical feature point, so the presence of the object can be recognized easily without complicated image processing such as model recognition.

【0042】本実施例の他に次の例を実施できる。The following example can be carried out in addition to this embodiment.

【0043】1) ビデオカメラにズーム機能を持たせ
てもよいこと勿論である。
1) Of course, the video camera may have a zoom function.

【0044】2) 本実施例ではロボットに搭載する実
時間3次元運動装置を説明したが、その他の機器に搭載
することもできる。
2) Although this embodiment described a real-time three-dimensional motion measuring device mounted on a robot, it can also be mounted on other equipment.

【0045】3) ここで定義した最大光度変化(数1
のf)以外の特徴抽出法を用いても本発明を実施でき
る。
3) The present invention can also be implemented using a feature extraction method other than the maximum luminous-intensity change (the f of Equation 1) defined here.

【0046】[0046]

【発明の効果】以上、説明したように、本発明によれば
モデルを使用することなく測定対象の移動物体を追尾で
きるだけでなく、測定処理が迅速な実時間3次元運動測
定装置を提供できる。
As described above, according to the present invention, it is possible to provide a real-time three-dimensional motion measuring apparatus capable of not only tracking a moving object to be measured without using a model but also performing a measurement process quickly.

【図面の簡単な説明】[Brief description of the drawings]

【図1】本発明の測定方法を示す説明図である。FIG. 1 is an explanatory view showing a measuring method of the present invention.

【図2】本発明実施例の回路構成を示すブロック図であ
る。
FIG. 2 is a block diagram showing a circuit configuration of an embodiment of the present invention.

【図3】図2のMPU1の実行する処理手順を示すフロ
ーチャートである。
FIG. 3 is a flowchart showing a processing procedure executed by an MPU 1 of FIG. 2;

【符号の説明】[Explanation of symbols]

1 MPU 2 I/O 3A,3B A/D変換器 5A,5B ビデオカメラ 1 MPU 2 I / O 3A, 3B A / D converter 5A, 5B Video camera

Claims (4)

(57)【特許請求の範囲】(57) [Claims] 【請求項1】 測定対象の移動の物体を2台の撮像装置
により撮像する撮像手段と、 該撮像手段の2台の撮像装置の視線方向をそれぞれ移動
させる可動手段と、 前記撮像手段により得られる撮像結果の中から、光学的
特徴を有する物体の特徴点の画素位置を検出する検出手
段と、 該検出手段の検出結果に基づき、前記物体の特徴点を前
記2台の撮像装置が注視するように前記可動手段を追尾
制御する制御手段とを具え、 前記検出手段は、前記撮像手段の撮像結果の中で周辺の
画素との光度レベルの差が最も大きい画素およびその位
置を検出して、当該位置を前記光学的特徴を有する物体
の特徴点の画素位置とすることを特徴とする実時間3次
元運動測定装置。
1. A real-time three-dimensional motion measuring apparatus comprising: imaging means for imaging a moving object to be measured with two imaging devices; movable means for moving the line-of-sight directions of the two imaging devices of the imaging means; detection means for detecting, from the imaging results obtained by the imaging means, the pixel position of a feature point of an object having an optical feature; and control means for tracking-controlling the movable means, based on the detection result of the detection means, so that the two imaging devices gaze at the feature point of the object; wherein the detection means detects, within the imaging result of the imaging means, the pixel whose luminous-intensity level differs most from its surrounding pixels, together with its position, and takes that position as the pixel position of the feature point of the object having the optical feature.
【請求項2】 前記2台の撮像装置が時系列的に注視し
た複数の注視位置および該注視位置間の移動時間を用い
て前記物体の移動速度を算出する演算処理手段をさらに
具えたことを特徴とする請求項1に記載の実時間3次元
運動測定装置。
2. The real-time three-dimensional motion measuring apparatus according to claim 1, further comprising arithmetic processing means for calculating the moving speed of the object using a plurality of gazing positions gazed at in time series by the two imaging devices and the travel time between those gazing positions.
【請求項3】 測定対象の移動の物体を2台の撮像装置
により一定間隔で撮像し、 当該2台の撮像装置の撮像結果の中から、光学的特徴を
有する前記物体の特徴点として、周辺の画素との光度レ
ベルの差が最も大きい画素およびその位置を検出し、 該特徴点を前記2台の撮像装置の次回の撮像における注
視点として前記2台の撮像装置に前記物体を追尾させる
ことを特徴とする実時間3次元運動測定方法。
3. A real-time three-dimensional motion measuring method comprising: imaging a moving object to be measured at fixed intervals with two imaging devices; detecting, from the imaging results of the two imaging devices, the pixel whose luminous-intensity level differs most from its surrounding pixels, and its position, as a feature point of the object having an optical feature; and causing the two imaging devices to track the object by taking the feature point as the gazing point of the next imaging by the two imaging devices.
【請求項4】 測定対象の移動の物体を2台の撮像装置
により一定間隔で撮像し、 当該2台の撮像装置の撮像結果の中から光学的特徴を有
する前記物体の特徴点を抽出し、 該特徴点を前記2台の撮像装置の次回の撮像における注
視点として前記2台の撮像装置に前記物体を追尾させ、 前記撮像結果は1画面分の画素毎の画像信号で構成さ
れ、画像処理の効率化のため該1画面の中央部分の画素
密度を細く、1画面の中央部分以外の周辺部分を相対的
に粗くするようにしたことを特徴とする実時間3次元運
動測定方法。
4. A real-time three-dimensional motion measuring method comprising: imaging a moving object to be measured at fixed intervals with two imaging devices; extracting a feature point of the object having an optical feature from the imaging results of the two imaging devices; and causing the two imaging devices to track the object by taking the feature point as the gazing point of the next imaging by the two imaging devices; wherein the imaging result consists of an image signal for each pixel of one screen, and, to make image processing efficient, the pixel density of the central portion of the screen is made fine and the peripheral portion outside the center is made relatively coarse.
JP4345585A 1992-12-01 1992-12-01 Real-time three-dimensional motion measuring apparatus and method Expired - Lifetime JP2580516B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4345585A JP2580516B2 (en) 1992-12-01 1992-12-01 Real-time three-dimensional motion measuring apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4345585A JP2580516B2 (en) 1992-12-01 1992-12-01 Real-time three-dimensional motion measuring apparatus and method

Publications (2)

Publication Number Publication Date
JPH06174463A JPH06174463A (en) 1994-06-24
JP2580516B2 true JP2580516B2 (en) 1997-02-12

Family

ID=18377596

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4345585A Expired - Lifetime JP2580516B2 (en) 1992-12-01 1992-12-01 Real-time three-dimensional motion measuring apparatus and method

Country Status (1)

Country Link
JP (1) JP2580516B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020134399A (en) * 2019-02-22 2020-08-31 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, program and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100434876B1 (en) * 2002-03-23 2004-06-07 (주)맥스소프트 Method and apparatus for a tracking stereo target
JP2006329747A (en) * 2005-05-25 2006-12-07 Tokyo Institute Of Technology Imaging device
JP2007120993A (en) * 2005-10-25 2007-05-17 Tokyo Institute Of Technology Object shape measuring device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5933858A (en) * 1982-08-19 1984-02-23 Nec Corp Manufacture of hybrid integrated circuit
JPH071170B2 (en) * 1985-10-30 1995-01-11 株式会社日立製作所 Light source position and moving speed measuring device
JPH04291111A (en) * 1991-03-20 1992-10-15 Hitachi Zosen Corp Method for detecting position of traveling object

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020134399A (en) * 2019-02-22 2020-08-31 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, program and storage medium
JP7200002B2 (en) 2019-02-22 2023-01-06 キヤノン株式会社 Image processing device, imaging device, image processing method, program, and storage medium

Also Published As

Publication number Publication date
JPH06174463A (en) 1994-06-24


Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term