JP3055721B2 - Method for searching corresponding points of images captured by left and right cameras - Google Patents

Method for searching corresponding points of images captured by left and right cameras

Info

Publication number
JP3055721B2
Authority
JP
Japan
Prior art keywords
time
image
point
poa1
pob1
Prior art date
Legal status
Expired - Fee Related
Application number
JP3287538A
Other languages
Japanese (ja)
Other versions
JPH05141919A (en)
Inventor
佐藤 淳
富田 文明
Current Assignee
Aisin Corp
Original Assignee
Aisin Seiki Co Ltd
Aisin Corp
Priority date
Filing date
Publication date
Application filed by Aisin Seiki Co Ltd, Aisin Corp
Priority to JP3287538A
Publication of JPH05141919A
Application granted
Publication of JP3055721B2
Anticipated expiration
Expired - Fee Related


Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to object-position monitoring in which a scene ahead is captured by two or more imaging cameras and processed by stereo image processing to extract or recognize objects in the scene and to measure the distance and velocity of what is recognized. In particular, it relates to associating the same point of an object between the images captured by the left and right cameras, that is, to corresponding-point search.

[0002]

2. Description of the Related Art

In vehicles and ships, for example, a technique for automatically detecting a preceding vehicle, ship or obstacle is desired. In response, Japanese Patent Application Laid-Open Nos. 61-44312 and 61-44313 present distance measuring devices that stereo-process one screen of image information from each of two television cameras mounted on a vehicle and calculate the distance to objects in the scene viewed by the cameras.

[0003] In the former, regions of equal brightness are grouped into blocks on the images taken by two cameras placed a predetermined distance apart. Each block of one image is then shifted in the direction that reduces the disparity and compared with the other image to find the position where the blocks match; the distance is calculated from the amount of shift needed to reach the match.

[0004] In the latter, regions of equal brightness are likewise grouped into blocks on the images taken by two cameras placed a predetermined distance apart. A feature value is then computed for each block of one image and each block of the other, the feature values are compared between blocks taken by the different cameras, and the best-matching block is detected (corresponding-block detection). The distance is calculated from the positional difference between corresponding blocks.

[0005] Japanese Patent Application Laid-Open Nos. 2-29878 and 2-29879, filed by the present applicant, present techniques that detect the region on the captured image, i.e. the texture region, of a specific object in the scene, particularly an object presenting a complex outer surface with many surface irregularities, and detect the distance to each part of that surface, i.e. its three-dimensional shape.

[0006] Further, Japanese Patent Application No. 2-262269, proposed by the present inventors, discloses an image-processing method in which the foreground is captured by left and right cameras on a vehicle and the road surface is segmented out of the foreground image.

[0007]

Problems to be Solved by the Invention

In distance measuring devices such as those disclosed in Japanese Patent Application Laid-Open Nos. 61-44312 and 61-44313, a very large area must be searched for correspondences when the parallax between the two cameras is large, so where many blocks with similar features exist the probability of detecting the wrong corresponding block is high.

[0008] The distance measuring devices disclosed in Japanese Patent Application Laid-Open Nos. 2-29878 and 2-29879 bring many advantages to the detection of objects with relatively complex three-dimensional shapes, but they have difficulty extracting objects that are three-dimensionally rather monotonous yet wide in area and that rarely appear on the screen as one independent body (a single object): for example the top of a table, a floor, a road surface or a water surface, that is, planes or quasi-planes (hereinafter, planar portions). This is only natural, considering that those devices were originally intended to extract three-dimensional bodies accurately and to detect their surface relief all the more precisely.

[0009] In any stereoscopic object detection or tracking, once a false match is made in the correspondence process that searches for the same point of an object in the image taken by the left camera and the image taken by the right camera, recognition of objects by left/right stereoscopic processing breaks down. For example, when a forward object is recognized from a vehicle, the position and speed of the object (its speed relative to the own vehicle) are calculated on the basis of the recognition, and these values become erroneous.

[0010]

SUMMARY OF THE INVENTION

It is an object of the present invention to make the search for corresponding points between the images captured by the left and right cameras more accurate.

[0011]

Means for Solving the Problems

Scenes ahead of left and right imaging cameras are captured to obtain video signals, and these video signals are processed digitally to search for the same point of the same object in the left and right images. Of the left and right images at time to, at time t1 (Δt later) and at time t2 (a further Δt later), at least the left and right images at times to and t1 are subjected to a thinning process that represents the edges of objects in the image as thin lines. Based on these images at times to, t1 and t2, a point on a thin line in the left (right) image at time to is defined as the point of interest Poa1. The points Pob1, Pob2, Pob3 of the thin lines of the right (left) image at time to that lie on the vertical coordinate i corresponding to the vertical coordinate i of Poa1 are extracted as the candidate corresponding points Pob1, Pob2, Pob3 of the right (left) image at time to. Each thin-line point within a predetermined region of the left (right) image at time t1 centered on the coordinates (j,i) of Poa1 is taken as a candidate corresponding point P1a-d through P1a-e at time t1. For each one of the candidates Pob1, Pob2, Pob3, two three-dimensional positions of Poa1 are computed: its position at time to, determined by that candidate and the coordinates of Poa1, and its position at time t1, determined by the coordinates of the intersection P1b between the thin lines of the right (left) image at time t1 and the chain of positions that Poa1 would occupy in the right (left) image at time t1 if it moved to each of P1a-d through P1a-e. The three-dimensional position of Poa1 at time t2 is computed by extrapolating these two positions and is converted into coordinates on the left and right images, and the correlation value C of the image densities of the left and right images at time t2 at those coordinate positions is calculated. Of the candidates Pob1, Pob2, Pob3, the candidate giving the maximum correlation value C is defined as the point in the right (left) image at time to corresponding to the point of interest Poa1 in the left (right) image at time to.

[0012]

Operation

Of the left and right images at time to, at time t1 (Δt later) and at time t2 (a further Δt later), at least the left and right images at times to and t1 are subjected to the thinning process that represents object edges as thin lines. The thinned left and right images at times to and t1 thus represent the edges of the objects in the captured scene, so corresponding points are found by associating thin lines between the left and right images.

[0013] With the left and right cameras arranged substantially horizontally, the point corresponding to a point Poa1 on some thin line and some scanning line (horizontal line) of the left-camera image is an intersection of a thin line of the right-camera image with the scanning line (horizontal line) corresponding to that scanning line. There are usually several (many) such intersections. Since a point Poa1 of the left-camera image appears in the right camera to the left of its lateral position j, extracting the points Pob1, Pob2, Pob3 of the thin lines of the right (left) image at time to that lie on the vertical coordinate i corresponding to that of Poa1 as the candidate corresponding points Pob1, Pob2, Pob3 narrows down the number of candidates.

[0014] Within the short time Δt, the range over which an object imaged by the cameras can move is narrow, so that range (the predetermined region) can be fixed in advance. Accordingly, by taking each thin-line point within the predetermined region of the left (right) image at time t1 centered on the coordinates (j,i) of Poa1 as a candidate corresponding point P1a-d through P1a-e at time t1, the point of interest Poa1 of the left (right) image at time to must be one of the candidates P1a-d through P1a-e of the left (right) image at time t1. Each of the candidates Pob1, Pob2, Pob3 of the right (left) image at time to is then assumed in turn to be the corresponding point of Poa1. For one candidate Pob1, for example, taking the point of interest in the left (right) image at time to as Poa1, the corresponding point in the right (left) image at time to as Pob1, and the corresponding points in the left (right) image at time t1 as P1a-d through P1a-e, the sequence of points in the right (left) image at time t1 that corresponds to P1a-d through P1a-e is computed, and the intersection P1b of this point sequence with the thin line of the right (left) image at time t1 is taken as the corresponding point P1b in that image at time t1. The three-dimensional position of Poa1 at time to is obtained from Poa1 and Pob1, and its three-dimensional position at time t1 from the intersection P1b and the point of the left (right) image at time t1 corresponding to P1b. Since the interval 2Δt over which Poa1 moves is extremely short, the motion can be treated as essentially uniform and linear, so extrapolation is applied to the three-dimensional positions at times to and t1 to compute the three-dimensional position of the corresponding point of Poa1 at time t2. The left/right image correlation value C of the image densities at the positions of the left and right images at time t2 corresponding to this computed three-dimensional position is then calculated.
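
Under the uniform-linear-motion assumption, the extrapolation step above reduces to the following relation, writing t0 for the document's "to". This is a reconstruction in the editor's notation; the patent's own formulas, equations (1) through (13), survive in this text only as image placeholders:

$$P(t_2) = P(t_1) + \bigl(P(t_1) - P(t_0)\bigr) = 2\,P(t_1) - P(t_0)$$

where $P(t)$ is the three-dimensional position of the tracked point at time $t$, and $t_1 - t_0 = t_2 - t_1 = \Delta t$.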

[0015] The same processing is carried out for the remaining candidate points. The candidate (one of Pob1, Pob2, Pob3) that yields the maximum of the correlation values obtained in this way is determined to be the corresponding point, in the right (left) image at time to, of the point of interest Poa1 of the left (right) image at time to.

[0016] Since within the short time Δt the movement range of an object imaged by the cameras is narrow, fixing the predetermined region so that it always contains that range guarantees that the point corresponding to Poa1 a time Δt later (at time t1) is extracted among the candidates P1a-d through P1a-e; that is, the extraction of the time-t1 candidates P1a-d through P1a-e is reliable. Because the point sequence at time t1 is computed from these (P1a-d through P1a-e) together with each of the candidates Pob1, Pob2, Pob3 of the right (left) image at time to (one sequence for each of Pob1, Pob2, Pob3), the corresponding point P1b of the right (left) image at time t1, formed by the intersection of that point sequence with the thin line of the right (left) image at time t1 (one P1b for each of Pob1, Pob2, Pob3), is also obtained accurately. The movement of the object can be regarded as uniform linear motion, so the three-dimensional position of the corresponding point at time t2 (one for each of Pob1, Pob2, Pob3) is obtained by extrapolation from the three-dimensional position of the point of interest Poa1 at time to and the three-dimensional position of the corresponding point P1b at time t1 (one for each of Pob1, Pob2, Pob3); the corresponding point at time t2 is therefore estimated with high accuracy. In addition, the densities of the same point of an object in the left and right images are substantially identical. Exploiting this, the correlation value C of the image densities of the left and right images at time t2 is computed for each corresponding point at time t2 (one for each of Pob1, Pob2, Pob3), and the candidate (one of Pob1, Pob2, Pob3) that yields the maximum correlation value is determined to be the corresponding point, in the right (left) image at time to, of the point of interest Poa1 of the left (right) image at time to; the estimation of the corresponding point is therefore accurate.

[0017] In this way, one point is selected from the multiple candidates Pob1, Pob2, Pob3 as the corresponding point by combining the estimation of the corresponding points (X2a,Y2a), (X2b,Y2b) by extrapolation, which exploits the positional change of corresponding points caused by the motion of the object, with the computation and comparison of the left/right image-density correlation values, which exploits the self-evident fact that the same part of the same object has the same brightness; recognition of objects that move relatively fast is therefore highly reliable.

[0018] Other objects and features of the present invention will become apparent from the following description of an embodiment with reference to the drawings.

[0019]

EMBODIMENT

Fig. 2 shows the configuration of an image processing apparatus embodying the present invention in one aspect. The apparatus consists of a computer 1, imaging cameras 2a (left) and 2b (right), an image memory 4, a display 5, a printer 6, a floppy disk 7, a keyboard terminal 8 and so on, and is mounted on a passenger car. The left and right cameras 2a, 2b are ITV cameras (CCD cameras) of identical specifications, each imaging the scene ahead and outputting an analog image signal in a 512 × 512 pixel grid; A/D converters 22a, 22b convert these image signals into digital data (image data) of 256 gray levels per pixel. The cameras 2a, 2b are mounted at a height h above the plane (road surface) and, as a stereo pair, image the scene ahead obliquely downward (at a downward angle). As shown in Fig. 3, the imaging plane FLa of the left camera 2a is called the left image, and the imaging plane FLb of the right camera 2b the right image.

[0020] The image memory 4 shown in Fig. 2 is freely readable and writable and stores various processing data, starting with the original image data of the left and right images. The display unit 5 and the printer 6 output the processing results of the computer 1 and so on, and the floppy disk 7 records those results. The keyboard terminal 8 is operated by an operator to enter various instructions. A host computer is also connected to the computer 1; following instructions given from it or from the keyboard terminal 8, the computer 1 controls each unit and performs the corresponding processing. Of that processing, 'forward-object speed monitoring', which recognizes an object in the scene ahead and detects its position (distance from the cameras) and speed (relative to the cameras), is described below.

[0021] In object recognition by this kind of image processing apparatus, suppose two points (objects or parts of objects) exist ahead and, as shown in Fig. 4, lie at A1 and B1 at time to, at A2 and B2 at time t1 (Δt later), and at A3 and B3 at time t2 (a further Δt later). If object recognition makes a false match and, as shown in Fig. 4, recognizes the two points as a single point (corresponding point) and tracks it as PC1, PC2, PC3, the recognized points PC1, PC2, PC3 do not follow uniform linear motion (with some exceptions). Consequently, if uniform linear motion is assumed and the position PC3' at time t2 is obtained by extrapolation from PC1 at time to and PC2 at time t1, it differs from the actually recognized PC3; moreover, the brightness of PC3 in the left image (PA3) differs from its brightness in the right image (PB3), so the correlation value of these brightnesses is low. The positional deviation ΔX can be expressed by the following equation (1).

[0022]

(Equation 1)

[0023] Each parameter in equation (1) is as follows.

[0024]

(Equation 2)

[0025]

(Equation 3)

[0026]

(Equation 4)

[0027]

(Equation 5)

[0028]

(Equation 6)

[0029]

(Equation 7)

[0030] The 'forward-object speed monitoring' of this embodiment actively exploits uniform linear motion and the brightness correlation value to find corresponding points quickly and accurately. Fig. 5 outlines the 'forward-object speed monitoring' routine that the computer 1 repeats at a substantially fixed period, starting in response to an instruction given from the host computer or the keyboard terminal 8 and continuing until that instruction is cancelled; Figs. 6 through 11 show the processing content of the main processing items within it.

[0031] Referring first to Fig. 5, at the head of one cycle of the 'forward-object speed monitoring' processing, the computer 1 writes the image data of the left and right images from the cameras 2a, 2b into the image memory 4 (subroutine 1). (Hereinafter, the words 'subroutine' and 'step' are omitted inside parentheses and only the numbers attached to them are given.) Next, differentiation of the image data is performed (2); its content is shown in Fig. 6.

[0032] In the 'differentiation' (2), a forward raster scan is first set for the left image. The direction of the raster scan is the horizontal direction of the captured scene (the left-right direction looking forward from the vehicle). The forward raster scan visits each pixel of the imaging area FLa shown in Fig. 3 (FLb for the right image) along a path from the upper-left pixel to the lower-right pixel, with the ua (ub) axis parallel to Xa (Xb) as the main scanning axis and the va (vb) axis parallel to Ya (Yb) as the sub-scanning axis. That is, scanning in the u direction (main scanning direction) and the v direction (sub-scanning direction) proceeds from the upper-left pixel (image origin 0,0) to the lower-right pixel (512,512). Differential data are generated while this forward raster scan is performed.

[0033] The differential datum of a pixel is computed from the original image data of the pixel of interest and its eight neighbors (the 3 × 3 image-data matrix centered on the pixel of interest). Let p0 be the image density of the pixel of interest, p1 that of its right neighbor, p2 that of the pixel above p1, p3 that of the pixel above the pixel of interest, p4 that of the pixel to the left of p3, p5 that of the left neighbor of the pixel of interest, p6 that of the pixel below p5, p7 that of the pixel below the pixel of interest, and p8 that of the pixel to the right of p7. The differential datum is then the sum of the main-scanning-direction differential given by (p1+p2+p8)−(p4+p5+p6) and the sub-scanning-direction differential given by (p2+p3+p4)−(p6+p7+p8), and it represents the spatial variation of the original image data. The computer 1 writes these data into the image memory 4 in association with each pixel.
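
The neighborhood arithmetic of [0033] is a Prewitt-style gradient. The sketch below shows one way to compute it over a whole image; it is an illustration of the rule quoted above, not code from the patent, and the function and variable names are the editor's own.

```python
import numpy as np

def differentiate(img):
    """Differential of [0033]: for each interior pixel, the sum of the
    main-scan (horizontal) difference (p1+p2+p8)-(p4+p5+p6) and the
    sub-scan (vertical) difference (p2+p3+p4)-(p6+p7+p8) over its 3x3
    neighborhood. img is a 2-D array of 8-bit gray levels."""
    p = img.astype(np.int32)
    g = np.zeros_like(p)
    c, up, dn = slice(1, -1), slice(0, -2), slice(2, None)
    lf, rt = slice(0, -2), slice(2, None)
    # right column (p2, p1, p8) minus left column (p4, p5, p6)
    gu = (p[up, rt] + p[c, rt] + p[dn, rt]) - (p[up, lf] + p[c, lf] + p[dn, lf])
    # top row (p4, p3, p2) minus bottom row (p6, p7, p8)
    gv = (p[up, lf] + p[up, c] + p[up, rt]) - (p[dn, lf] + p[dn, c] + p[dn, rt])
    g[c, c] = gu + gv
    return g
```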

[0034] Next, by the 'threshold processing' (3) shown in Fig. 7, the computer 1 compares the differential data with a threshold TH1; differential data larger than TH1 are left unchanged, pixel by pixel, while differential data smaller than TH1 are replaced with 0, pixel by pixel. As a result of this binarization, 'edge regions' where the value of the original image data changes strongly are detected: only there do the differential data survive as large values, while in the other regions the differential data become 0 (no change in brightness: continuous regions).
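
A minimal sketch of this step, again with editor-chosen names:

```python
def threshold(diff, th1):
    """Threshold processing (3) of [0034]: keep differential data larger
    than TH1, replace the rest with 0."""
    out = diff.copy()
    out[out < th1] = 0
    return out
```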

[0035] Next, by the 'thinning' (4) shown in Fig. 8, the computer 1 converts the differential data into a binary image in which the points where the differential data switch from large to small in the vertical direction are set to '1' and the other points to '0', '1' representing a boundary (a pixel of a boundary line). This yields a thin-line image in which the boundaries of the objects in the image are represented by thin lines.

[0036] Figs. 12 through 14 show an example of the images captured by the left and right cameras 2a, 2b (original images, obtained by image input 1) at times to, t1 and t2. In these figures, image (a) was captured by the left camera 2a and image (b) by the right camera 2b. Through the differentiation (2) to thinning (4) described above, the positions in the original images (Figs. 12-14) where the brightness changes sharply, for example the boundary between the road surface and a white line on it, between the road surface and the foreground, between the road surface and a vehicle, between the foreground and a vehicle, between a vehicle's shadow and the exposed road surface, and, within a single vehicle, between body and window, and so on, yield a thin-line image in which essentially the edges of the objects have become thin lines, and this image is stored in memory.

[0037] The computer 1 executes the image input (1), differentiation (2), threshold processing (3) and thinning (4) described above for each of the images captured by the left and right cameras 2a, 2b, with a period of Δt. When executing this image-reading processing, it moves the data of the memory region that stores the thinned images of time t1 to the memory region for the thinned images of time to, moves the data of the memory region that stores the thinned images of time t2 to the memory region for the thinned images of time t1, and then executes the image input (1) through thinning (4) above, writing the resulting thin-line images, separately for left and right, into the memory region for the thin-line images of time t2. Thus, from the moment 2Δt has elapsed since the first execution of image input (1), these memory regions always hold the most recently read thin-line images (the t2 images), the thin-line images read Δt before (the t1 images) and those read 2Δt before (the to images). The latest original image data (of time t2) are kept in the original-image memory region until the next execution of image input (1).
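
This rolling three-slot history can be sketched with a bounded deque; the names below are the editor's, not the patent's:

```python
from collections import deque

# history holds (left_thin, right_thin) pairs for times (to, t1, t2);
# appending a new pair silently drops the oldest, which reproduces the
# shift "to <- t1, t1 <- t2" described in [0037].
history = deque(maxlen=3)

def store_frame(left_thin, right_thin):
    history.append((left_thin, right_thin))

# after three calls: history[0] is the to pair, history[1] the t1 pair,
# history[2] the newest t2 pair.
```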

[0038] From the moment the thin-line images of the three times to, t1 and t2 are all available, the computer 1 executes 'time-series binocular stereovision' (6) every time thinning (4) finishes. Its content is shown in Fig. 9; the content of 'processing 1' (68) within it is shown in Fig. 10, and that of 'processing 2' within processing (1) of Fig. 10 is shown in Fig. 11. Fig. 1 shows the thin-line images processed by 'time-series binocular stereovision' (6) in simplified form for ease of understanding. In 'time-series binocular stereovision' (6) the computer 1 first scans the left image of time to (a thin-line image; in the remainder of the description of time-series binocular stereovision 6, thin-line images are simply called images), searching for the black ('1') information of the thin-line parts (61-64). When one piece of black information is found at coordinates (j,i) it is taken as the point of interest Poa1. The black information Pob1, Pob2, Pob3 of the right image of time to lying on the same scanning line as the horizontal scanning line Yi, to the left of the horizontal position j of the black information Poa1 of the left image (the region of smaller horizontal coordinate), constitutes the candidate corresponding points Pob1, Pob2, Pob3 of the point of interest Poa1 of the left image.
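
As a sketch (editor's names), the candidate extraction of [0038] amounts to collecting the thin-line pixels of the right image that lie on the same scan line and to the left of the point of interest:

```python
def epipolar_candidates(right_thin_t0, i, j):
    """Candidates Pob1, Pob2, ... of [0038]: thin-line ('black') pixels of
    the to-time right image on scan line i, to the left of the horizontal
    position j of the left-image point of interest Poa1."""
    return [(i, x) for x in range(j) if right_thin_t0[i, x] == 1]
```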

[0039] -(1) When the first candidate corresponding point Pob1 has been found on the right image of time to (67), 'processing 1' (68) is executed. First, a window region Win of (j±20, i±20) centered on the coordinates (j,i) of the point of interest Poa1 is set in the left image of time t1, and the black information P1a-e through P1a-d inside it is provisionally taken as the candidate corresponding point group of time t1 (681-685).
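
A sketch of this window search; the ±20-pixel half-width is the figure given in [0039], while the names are the editor's:

```python
def window_candidates(left_thin_t1, i, j, half=20):
    """Step -(1) of [0039]: collect the thin-line points inside the
    (j +/- 20, i +/- 20) window Win of the t1 left image as the provisional
    candidate group P1a-d ... P1a-e. Points are (row, column) pairs."""
    h, w = left_thin_t1.shape
    return [(y, x)
            for y in range(max(i - half, 0), min(i + half + 1, h))
            for x in range(max(j - half, 0), min(j + half + 1, w))
            if left_thin_t1[y, x] == 1]
```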

[0040] -(2) For the first point P1a-e of the candidate corresponding point group of time t1, the points Poa1 (left image of time to), Pob1 (right image of time to) and P1a-e (left image of time t1) are assumed to correspond, and their corresponding point in the right image of time t1 is computed (686). Here the horizontal position (X coordinate) X1b* of the sought corresponding point is calculated by the following equation (8); its vertical position (Y coordinate) is taken to be the same as that of P1a-e.

[0041]

(Equation 8)

[0042] The meanings of the various symbols in the formulas and drawings are as follows.
X1b*: X coordinate of the predicted corresponding point (the point corresponding to P1a-d through P1a-e) in the right image at time t1
(Xca, Yca): center of the left image
(Xcb, Ycb): center of the right image
Sxa, Sya: scale factors of the left image
Sxb, Syb: scale factors of the right image
θ: downward angle of the cameras
Y1: Y coordinate of the corresponding point P1b in the right image at time t1
(X2a, Y2a): corresponding point on the left image at time t2
(X2b, Y2b): corresponding point on the right image at time t2
F(Y,X): original image data
G(Y,X): differential data
H(Y,X): differential data after threshold processing
P(Y,X): thin-line data ('1': thin-line part / '0': not a thin-line part)

[0043] Next it is checked whether the right image of time t1 has black information at the obtained corresponding point (X1b*) (689). If the right image of time t1 has no black information at the position corresponding to the first candidate P1a-e, the corresponding point on the right image of time t1 is computed in the same way (686) for the second point of the candidate group P1a-e through P1a-d of the window region Win, and it is checked whether black information exists there. In this way, the corresponding points on the right image of time t1 of the candidate group P1a-e through P1a-d are computed one after another and checked for black information, until black information is found. That is, the intersection P1b between the line formed by the row of corresponding points of the candidate group P1a-e through P1a-d on the right image of time t1 (shown dotted in Fig. 1) and a thin line actually present on the right image of time t1 (shown solid in Fig. 1) is searched for and taken as the corresponding point (685-689). The point of the candidate group P1a-e through P1a-d on the left image of time t1 from which this intersection P1b was computed (call it P1a-p) is recorded.
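
A sketch of this intersection search; equation (8) itself survives only as an image placeholder, so the predictor is kept abstract as a function argument (an editorial device, not the patent's notation):

```python
def find_t1_intersection(right_thin_t1, t1_candidates, predict_x1b):
    """Steps -(2) of [0040] and [0043]: for each t1 left-image candidate,
    predict its position X1b* in the t1 right image (equation (8), passed
    in as predict_x1b) and return the first prediction landing on a thin
    line, i.e. the intersection P1b, together with the left-image candidate
    P1a-p that produced it."""
    for (y, x) in t1_candidates:
        xb = int(round(predict_x1b(y, x)))  # X1b*; the Y coordinate is kept
        if 0 <= xb < right_thin_t1.shape[1] and right_thin_t1[y, xb] == 1:
            return (y, xb), (y, x)          # P1b, P1a-p
    return None, None
```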

[0044] -(3) The above determines, for time to, the point of interest Poa1 on the left image and its corresponding point Pob1 on the right image, and, for time t1, the point of interest P1a-p on the left image and its corresponding point P1b on the right image. From the three-dimensional position of this point at time to (Poa1 = Pob1) and its three-dimensional position at time t1 (P1a-p = P1b), the three-dimensional position at time t2 is obtained by extrapolation, on the assumption that it lies on the straight line connecting the two positions, beyond the time-t1 position by the distance between the to-time and t1-time three-dimensional positions; the position (X2a,Y2a) of this t2-time three-dimensional position on the left image of time t2 and its position (X2b,Y2b) on the right image are then calculated (6901, 6902). The main calculation formulas are given below. In this way, from the corresponding points Poa1, Pob1 of the left and right images at time to, the corresponding points P1a-p, P1b of the left and right images at time t1 and the corresponding points (X2a,Y2a), (X2b,Y2b) of the left and right images at time t2 have been estimated.
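
In code form, the extrapolation of [0044] is the same relation as sketched after [0014]; the subsequent projection onto the t2 image planes uses equations (9) and (10), which are not reproduced in this text:

```python
def extrapolate_t2(pos_t0, pos_t1):
    """[0044]: assuming uniform linear motion, the t2 position lies on the
    line through the to and t1 positions, beyond t1 by the to-t1 distance:
    P(t2) = 2*P(t1) - P(to)."""
    return tuple(2 * a1 - a0 for a0, a1 in zip(pos_t0, pos_t1))
```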

[0045]

(Equation 9)

[0046]

(Equation 10)

[0047] -(4) To secure the accuracy of this estimate further, the computer 1 reads, from the left and right original image data of time t2, the 3 × 3 pixels of image data centered on the corresponding points (X2a,Y2a) and (X2b,Y2b) respectively, computes the sum of the 3 × 3 image densities for each by the following equation (11), computes the difference between the left-image sum and the right-image sum, computes the reciprocal of the difference obtained, and takes this as the correlation value C (6903). That is, it computes the correlation value C of the brightness of the corresponding points (X2a,Y2a) and (X2b,Y2b).

[0048]

(Equation 11)

[0049] The above yields, for the first candidate corresponding point Pob1 on the right image of time to, the correlation value C indicating the degree of likelihood that it is the corresponding point.
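
Equation (11) is described in [0047] as "3 × 3 density sums, difference, reciprocal"; the sketch below follows that description (the guard epsilon for a zero difference is the editor's addition, not stated in the patent):

```python
import numpy as np

def correlation_c(left_t2, right_t2, pa, pb, eps=1e-6):
    """Correlation value C of [0047]/(11): reciprocal of the difference
    between the 3x3 image-density sums around the predicted t2 points
    (X2a,Y2a) on the left image and (X2b,Y2b) on the right image; pa and
    pb are (row, column) positions. Larger C means better agreement."""
    ya, xa = pa
    yb, xb = pb
    sa = int(np.sum(left_t2[ya - 1:ya + 2, xa - 1:xa + 2]))
    sb = int(np.sum(right_t2[yb - 1:yb + 2, xb - 1:xb + 2]))
    return 1.0 / (abs(sa - sb) + eps)
```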

[0050] For the black information Pob2 that follows the black information Pob1 (for which the computation of the correlation value C has been completed) on the right image of time to, on the same scanning line as the horizontal scanning line Yi of the point of interest Poa1 and to the left of the horizontal position j of Poa1 (the region of smaller horizontal coordinate), the computer 1 likewise executes the correlation-value computation of -(1) through -(4) above, compares the correlation value C of Pob2 with the previously computed correlation value of Pob1, stores and holds the larger correlation value, stores the candidate (Pob1 or Pob2) that gave the larger correlation value as the corresponding point of time to, and stores the corresponding points (X2a,Y2a), (X2b,Y2b) that gave the larger correlation value as the corresponding points of time t2 (6903-6906).
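
The running comparison of [0050] through [0052] is an argmax over the candidates; sketched below, where score is assumed to run steps -(1) through -(4) for one candidate and return C with the t2 points, or None when no intersection P1b is found (an editorial abstraction):

```python
def best_candidate(candidates, score):
    """[0050]-[0052]: evaluate the correlation value C for every to-time
    right-image candidate and keep the one with the maximum C together
    with its t2 corresponding points."""
    best = None
    for cand in candidates:
        result = score(cand)        # (C, (X2a, Y2a), (X2b, Y2b)) or None
        if result and (best is None or result[0] > best[1][0]):
            best = (cand, result)
    return best                     # (winning Pob, (C, t2 left, t2 right))
```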

[0051] The computer 1 thereafter executes the above processing in the same way for the remaining black information Pob3 on the right image of time to, on the same scanning line as the horizontal scanning line Yi of the point of interest Poa1 and to the left of the horizontal position j of Poa1 (6903-6906).

[0052] When the processing above, including the computation of the correlation value C, has been completed for all candidate corresponding points, namely the black information Pob1, Pob2, Pob3, on the right image of time to, on the same scanning line as the horizontal scanning line Yi of Poa1 and to the left of the horizontal position j of Poa1, the computer 1 holds, at that moment, the maximum of the computed correlation values C, the candidate of the to-time right image that produced it, and the corresponding candidate of time t2. The corresponding point on the right image of time to of the single point Poa1 on a thin line of the left image of time to, and the corresponding points on the left and right images of time t2, have thus been determined.

[0053] The computer 1 executes this corresponding-point search on the to-time left and right thin-line images in the same way for each point constituting the thin lines in the images (61-71). When the corresponding-point search of the to-time left and right thin-line images is finished, the three-dimensional position of each corresponding point can be calculated, and line drawings viewed from various directions can be drawn on the basis of the three-dimensional position data. For example, plotting each point on the X,Z plane on the basis of the three-dimensional position data gives the drawing shown in Fig. 15(a), a plan view of the road ahead of the vehicle carrying the cameras 2a, 2b, showing the presence of an object (vehicle) on the road. Plotting on the X,Y plane gives the drawing shown in Fig. 15(b), showing the presence of objects to the left, right, above and below ahead of the vehicle carrying the cameras 2a, 2b. Plotting on a plane oblique to the X, Y and Z axes gives, for example, the drawing shown in Fig. 15(c), a perspective view of the road. Further, plotting on the Y,Z plane gives, for example, the drawing shown in Fig. 15(d), a side view of the road. The road and the objects on it can thus be recognized three-dimensionally and represented three-dimensionally.

[0054] When the corresponding-point search of one screen is finished, the computer 1 in this embodiment executes the 'distance calculation' (72). Here, for each point for which corresponding points were searched, the three-dimensional position Xo, Yo, Zo is calculated from the to-time left and right corresponding points by the following equation (12).

[0055]

(Equation 12)

[0056] Xo is the left-right position seen from the cameras, Yo the vertical position, and Zo the forward distance. After finishing the 'distance calculation' (72), the computer 1 executes the 'velocity vector calculation' (73). Here, as shown in equation (13) below, for each point whose corresponding points were searched, the difference between the position at time to and the position at time t2 is calculated and divided by 2 (corresponding to 2Δt) to obtain the lateral velocities (relative to the cameras) Vx and Vy.
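
Equation (12) itself is reproduced only as an image. As a stand-in, plain parallel-stereo triangulation conveys the role of the quantities Xo, Yo, Zo; the patent's actual formula also involves the downward camera angle θ and the per-image centers and scale factors listed in [0042], which are omitted here, so every detail of this simplified sketch is an assumption:

```python
def triangulate(xa, ya, xb, f, b):
    """Hedged stand-in for equation (12) of [0054]/[0055]: parallel-stereo
    triangulation. xa, ya: left-image coordinates of a matched point;
    xb: its right-image X coordinate (all relative to the image centers,
    in metric units); f: focal length; b: camera baseline."""
    d = xa - xb          # disparity (the left camera sees the point further
    zo = f * b / d       # to the right, so d > 0 for points ahead)
    xo = xa * zo / f     # left-right position seen from the cameras
    yo = ya * zo / f     # up-down position
    return xo, yo, zo
```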

[0057]

(Equation 13)

[0058] By also computing the Z component of the difference between the to-time and t2-time three-dimensional positions, the relative velocity in Z (the forward direction) can be calculated. Taking the Z-direction velocity on the horizontal axis and the number of points having the same velocity on the vertical axis gives a velocity distribution such as that shown in Fig. 16. Fig. 16 indicates a vehicle moving at about the same speed as the vehicle carrying the cameras 2a, 2b (the own vehicle; the central position of the horizontal axis of Fig. 16: velocity 0) and a vehicle receding at a higher speed than the own vehicle (the negative side of the horizontal axis is the receding direction). Drawing, on one X,Y plane, the straight lines connecting the corresponding points of time to and time t2 gives the drawing shown in Fig. 17. These lines are velocity vectors: visually, the direction of a line shows the direction of movement relative to the own vehicle, and its length the speed of movement relative to the own vehicle.
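
Equation (13), as described in [0056] and [0058], divides the per-axis to-to-t2 displacement by 2Δt; as a sketch with editor-chosen names:

```python
def velocity(pos_t0, pos_t2, dt):
    """Equation (13) of [0056]/[0058]: per-axis difference between the
    three-dimensional positions at times to and t2, divided by 2*dt,
    giving Vx, Vy and (from the Z component) the forward relative speed."""
    return tuple((p2 - p0) / (2.0 * dt) for p0, p2 in zip(pos_t0, pos_t2))
```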

[0059] The computer 1 executes the one-screen processing described above (time-series binocular stereovision 6) every time it executes 'image input' (1), that is, with a period of Δt.

[0060]

Effects of the Invention

In a short time interval Δt, the movement of an object can be regarded as uniform linear motion. In the present invention, the three-dimensional position of the corresponding point at time t2 (one for each of Pob1, Pob2, Pob3) is obtained by extrapolation from the three-dimensional position of the point of interest Poa1 at time to and the three-dimensional position of the corresponding point P1b at time t1 (one for each of Pob1, Pob2, Pob3), so the corresponding point at time t2 is estimated with high accuracy. In addition, the densities of the same point of an object in the left and right images are substantially identical; exploiting this, the correlation value C of the image densities of the left and right images at time t2 is computed for each corresponding point at time t2 (one for each of Pob1, Pob2, Pob3), and the candidate (one of Pob1, Pob2, Pob3) that yields the maximum correlation value is determined to be the corresponding point, on the right (left) image of time to, of the point of interest Poa1 of the left (right) image of time to; the estimation of the corresponding point is therefore accurate. In this way, the estimation of the corresponding points (X2a,Y2a), (X2b,Y2b) by extrapolation, which exploits the positional change of corresponding points caused by the motion of the object, combined with the computation and comparison of the left/right image-density correlation values, which exploits the self-evident fact that the same part of the same object has the same brightness, selects one point from the multiple candidates Pob1, Pob2, Pob3 as the corresponding point; recognition of objects that move relatively fast is therefore highly reliable.

[Brief Description of the Drawings]

Fig. 1 is a plan view showing, in simplified form, the thin-line images obtained by thinning the captured screens, for explaining the corresponding-point search of the present invention.

Fig. 2 is a block diagram outlining the configuration of an image processing apparatus embodying the present invention in one aspect.

Fig. 3 is a perspective view showing the optical distance relationship between the images FLa, FLb captured by the left and right cameras 2a, 2b of Fig. 2 and a point P on a plane in front of the cameras.

Fig. 4 is a plan view of the optical distance relationship of Fig. 3 projected onto the X,Z plane shown in Fig. 3.

Fig. 5 is a flowchart showing the processing performed by the computer 1 of Fig. 2 for 'forward-object speed monitoring'.

Fig. 6 is a flowchart showing the processing content of 'image input' (1) in Fig. 5.

Fig. 7 is a flowchart showing the processing content of 'threshold processing' (3) in Fig. 5.

Fig. 8 is a flowchart showing the processing content of 'thinning' (4) in Fig. 5.

Fig. 9 is a flowchart showing the processing content of 'time-series binocular stereovision' (6) in Fig. 5.

Fig. 10 is a flowchart showing the processing content of 'processing 1' (68) in Fig. 9.

Fig. 11 is a flowchart showing the processing content of 'processing 2' (690) in Fig. 10.

Fig. 12 is a plan view showing an example of the images captured by the left and right cameras 2a, 2b of Fig. 2; (a) is the image from the left camera 2a and (b) the image from the right camera 2b.

Fig. 13 is a plan view showing an example of the images captured by the left and right cameras 2a, 2b of Fig. 2, Δt after the images of Fig. 12; (a) is the image from the left camera 2a and (b) the image from the right camera 2b.

Fig. 14 is a plan view showing an example of the images captured by the left and right cameras 2a, 2b of Fig. 2, Δt after the images of Fig. 13; (a) is the image from the left camera 2a and (b) the image from the right camera 2b.

Fig. 15 shows line drawings of the objects in the images of Figs. 12-14, obtained by thinning and corresponding-point search: (a) is a plan-projection line drawing, (b) a vertical-plane-projection line drawing, (c) a perspective line drawing, and (d) a side-view line drawing.

Fig. 16 is a graph showing the velocity distribution of forward objects obtained from the images of Figs. 12-14 by thinning and corresponding-point search; the horizontal axis is velocity and the vertical axis the accumulated number of points of the same velocity.

Fig. 17 is a diagram showing the direction and amount of movement of forward objects over the time 2Δt, obtained from the images of Figs. 12-14 by thinning and corresponding-point search.

[Explanation of Symbols]

Poa1: point of interest of the corresponding-point search / a point on a thin line of the left image at time to
Pob1, Pob2, Pob3: candidate corresponding points on the right image at time to
P1a-d to P1a-e: candidate corresponding point group on the left image at time t1
P1b: computed corresponding point at time t1
(X2a, Y2a): computed corresponding point on the left image at time t2
(X2b, Y2b): computed corresponding point on the right image at time t2

(Continuation of front page) Examiner: Kazuo Shibata. (56) References: JP-A-3-122510. (58) Fields searched (Int.Cl.7, DB name): G01B 11/00 - 11/30

Claims (2)

(57) [Claims]

1. A method for searching for corresponding points in images captured by left and right cameras, in which left and right imaging cameras capture the scene ahead of them to obtain respective video signals, and these video signals are digitally processed to find the same point of the same object present in the left and right images, wherein:

among the left and right images at time to, at time t1 which is Δt later, and at time t2 which is a further Δt later, at least the left and right images at times to and t1 are subjected to a thinning process that represents the edges of objects in the image as thin lines, and, based on these left and right images at times to, t1 and t2,

a point on a thin line in the left (right) image at time to is designated as the point of interest Poa1, and the points Pob1, Pob2, Pob3 of the thin lines in the right (left) image at time to that lie on the vertical coordinate i corresponding to the vertical coordinate i of the point of interest Poa1 are extracted as candidate corresponding points Pob1, Pob2, Pob3 in the right (left) image at time to;

(1) each point on the thin lines within a predetermined region centered on the coordinates (j, i) of the point of interest Poa1 in the left (right) image at time t1 is taken as one of the candidate corresponding points P1a-d to P1a-e in the left (right) image at time t1;

(2) on the assumption that the point of interest Poa1 has moved to each of the candidate corresponding points P1a-d to P1a-e in the left (right) image at time t1, the corresponding positions, in the right (left) image at time t1, of one point Pob1 of the candidate corresponding points Pob1, Pob2, Pob3 in the right (left) image at time to are computed, and the coordinates (L, n) of the point P1b on a thin line in the right (left) image at time t1 that overlaps the locus of these computed positions are extracted;

(3) from the three-dimensional position of the point of interest Poa1 at time to obtained by assuming that the point of interest Poa1 at time to and said one candidate corresponding point Pob1 are the same point, and from the three-dimensional position of the point of interest Poa1 at time t1 obtained by taking the overlapping point P1b in the right (left) image at time t1 as the corresponding point of the point of interest Poa1 at time t1, the three-dimensional position of the point of interest Poa1 at time t2 is calculated by extrapolation;

(4) a correlation value C is calculated between the image density of a predetermined region in the left image at time t2 centered on the position (X2a, Y2a) in that image corresponding to the three-dimensional position of the point of interest Poa1 at time t2, and the image density of a predetermined region in the right image at time t2 centered on the position (X2b, Y2b) in that image corresponding to the same three-dimensional position; and

for each of the remaining candidate corresponding points Pob2, Pob3 in the right (left) image at time to, the correlation value C is calculated in the same manner as in (1) to (4) above, and the one of the candidate corresponding points Pob1, Pob2, Pob3 in the right (left) image at time to for which the correlation value C is largest is designated as the corresponding point, in the right (left) image at time to, of the point of interest Poa1.
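To make the geometry of steps (3) and (4) concrete, here is a minimal Python sketch (not part of the patent), assuming a rectified, parallel-axis stereo rig in which corresponding points share a scanline, with focal length f in pixels and baseline B; the helper names and the window half-width w are our own choices, and the candidate extraction on vertical coordinate i and the locus intersection of step (2) are not reproduced.

```python
import numpy as np

def triangulate(pa, pb, f, B):
    # 3-D position of a point seen at pa = (xa, y) in the left image and
    # pb = (xb, y) in the right image of a rectified rig: depth from
    # disparity, Z = f*B/(xa - xb); X and Y from the pinhole model.
    (xa, y), xb = pa, pb[0]
    Z = f * B / float(xa - xb)
    return np.array([xa * Z / f, y * Z / f, Z])

def project(P, f, B):
    # Reproject a 3-D point (left-camera frame) into both images.
    X, Y, Z = P
    return (f * X / Z, f * Y / Z), (f * (X - B) / Z, f * Y / Z)

def correlation_C(img_l, img_r, pa, pb, w=4):
    # Correlation value C of step (4): zero-mean normalized
    # cross-correlation of the (2w+1) x (2w+1) density windows centred
    # on pa in the left image and pb in the right image.
    xa, ya = (int(round(v)) for v in pa)
    xb, yb = (int(round(v)) for v in pb)
    A = img_l[ya - w:ya + w + 1, xa - w:xa + w + 1].astype(float)
    Bm = img_r[yb - w:yb + w + 1, xb - w:xb + w + 1].astype(float)
    A -= A.mean(); Bm -= Bm.mean()
    norm = np.sqrt((A * A).sum() * (Bm * Bm).sum())
    return float((A * Bm).sum() / norm) if norm else 0.0
```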
2. A method for searching for corresponding points in images captured by left and right cameras, in which left and right imaging cameras capture the scene ahead of them to obtain respective video signals, and these video signals are digitally processed to find the same point of the same object present in the left and right images, wherein:

among the left and right images at time to, at time t1 which is Δt later, and at time t2 which is a further Δt later, at least the left and right images at times to and t1 are subjected to a thinning process that represents the edges of objects in the image as thin lines, and, based on these left and right images at times to, t1 and t2, a point on a thin line in the left (right) image at time to is designated as the point of interest Poa1; the points Pob1, Pob2, Pob3 of the thin lines in the right (left) image at time to that lie on the vertical coordinate i corresponding to the vertical coordinate i of the point of interest Poa1 are extracted as candidate corresponding points Pob1, Pob2, Pob3 in the right (left) image at time to; and each point on the thin lines within a predetermined region centered on the coordinates (j, i) of the point of interest Poa1 in the left (right) image at time t1 is taken as one of the candidate corresponding points P1a-d to P1a-e in the left (right) image at time t1;

for each one of the candidate corresponding points Pob1, Pob2, Pob3, the three-dimensional position of the point of interest Poa1 at time to, determined by that candidate point and the coordinates of the point of interest Poa1, and the three-dimensional position of the point of interest Poa1 at time t1, determined by the coordinates of the intersection point P1b between a thin line in the right (left) image at time t1 and the locus of positions in the right (left) image at time t1 obtained by assuming that the point of interest Poa1 has moved to each of the candidate corresponding points P1a-d to P1a-e, are calculated; the three-dimensional position of the point of interest Poa1 at time t2 is calculated by extrapolation from these two three-dimensional positions and is converted into coordinates in the left and right images; and a correlation value C between the image densities of the left and right images at time t2 at these coordinate positions is calculated; and

among the candidate corresponding points Pob1, Pob2, Pob3, the candidate point with the largest correlation value C is designated as the corresponding point, in the right (left) image at time to, of the point of interest Poa1 in the left (right) image at time to.
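Building on the helpers in the previous sketch and under the same rectified-rig assumptions, the candidate selection of claim 2 can be outlined as below; the t1 left/right pair (P1a, P1b) for each candidate is taken as already found by intersecting the predicted locus with the thinned t1 image, a step not reproduced here.

```python
def best_candidate(poa1, pob_cands, p1_pairs, img_l2, img_r2, f, B):
    # For each candidate Pob: triangulate the 3-D position at times to
    # and t1, extrapolate linearly to t2 (P2 = 2*P1 - P0), reproject
    # into the t2 images, and keep the candidate with the largest C.
    best, best_c = None, -np.inf
    for pob, (p1a, p1b) in zip(pob_cands, p1_pairs):
        P0 = triangulate(poa1, pob, f, B)  # to: Poa1 <-> Pob same point
        P1 = triangulate(p1a, p1b, f, B)   # t1: P1a <-> P1b
        P2 = 2.0 * P1 - P0                 # extrapolated t2 position
        p2a, p2b = project(P2, f, B)       # (X2a, Y2a), (X2b, Y2b)
        c = correlation_C(img_l2, img_r2, p2a, p2b)
        if c > best_c:
            best, best_c = pob, c
    return best, best_c
```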
JP3287538A 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras Expired - Fee Related JP3055721B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3287538A JP3055721B2 (en) 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3287538A JP3055721B2 (en) 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras

Publications (2)

Publication Number Publication Date
JPH05141919A JPH05141919A (en) 1993-06-08
JP3055721B2 true JP3055721B2 (en) 2000-06-26

Family

ID=17718638

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3287538A Expired - Fee Related JP3055721B2 (en) 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras

Country Status (1)

Country Link
JP (1) JP3055721B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5404263B2 (en) 2009-09-07 2014-01-29 パナソニック株式会社 Parallax calculation method and parallax calculation device
JP6458577B2 (en) * 2015-03-19 2019-01-30 トヨタ自動車株式会社 Image ranging device

Also Published As

Publication number Publication date
JPH05141919A (en) 1993-06-08

Similar Documents

Publication Publication Date Title
EP1394761B1 (en) Obstacle detection device and method therefor
JP4002919B2 (en) Moving body height discrimination device
JP3054681B2 (en) Image processing method
KR920001616B1 (en) Method and apparatus for detecting objects
JP4963964B2 (en) Object detection device
JPH10143659A (en) Object detector
KR20150074544A (en) Method of tracking vehicle
US11727637B2 (en) Method for generating 3D skeleton using joint-based calibration acquired from multi-view camera
KR100574227B1 (en) Apparatus and method for separating object motion from camera motion
JPH1144533A (en) Preceding vehicle detector
JP3055721B2 (en) Method for searching corresponding points of images captured by left and right cameras
JPH11248431A (en) Three-dimensional model forming apparatus and computer readable medium recorded with three-dimensional model generating program
US5144373A (en) Detection of range discontinuities in stereoscopic imagery
JPH0875454A (en) Range finding device
JP3253328B2 (en) Distance video input processing method
JP4584405B2 (en) 3D object detection apparatus, 3D object detection method, and recording medium
JPH05141930A (en) Three-dimensional shape measuring device
JP2993610B2 (en) Image processing method
JP3275252B2 (en) Three-dimensional information input method and three-dimensional information input device using the same
JP2993611B2 (en) Image processing method
JP3230292B2 (en) Three-dimensional object shape measuring device, three-dimensional object shape restoring device, three-dimensional object shape measuring method, and three-dimensional object shape restoring method
JP2938520B2 (en) 3D shape measuring device
JPH09229648A (en) Input/output method and device for image information
JP2504641B2 (en) Three-dimensional shape measurement processing method
JP3040575B2 (en) Time series distance image input processing method

Legal Events

Date Code Title Description
R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090414

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100414

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110414

Year of fee payment: 11

LAPS Cancellation because of no payment of annual fees