JP2004056763A - Monitoring apparatus, monitoring method, and program for monitor - Google Patents

Monitoring apparatus, monitoring method, and program for monitor

Info

Publication number
JP2004056763A
Authority
JP
Japan
Prior art keywords
flow
monitoring
vehicle
background
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2003130008A
Other languages
Japanese (ja)
Other versions
JP3776094B2 (en)
Inventor
Masamichi Nakagawa
Kazuo Nobori
Satoshi Sato
Original Assignee
Matsushita Electric Ind Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2002134583
Application filed by Matsushita Electric Ind Co Ltd
Priority to JP2003130008A
Publication of JP2004056763A
Application granted
Publication of JP3776094B2
Legal status: Active
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To accurately detect an approaching object, even while the vehicle is turning, in a vehicle monitoring apparatus that detects approaching objects using optical flow.
SOLUTION: An optical flow detecting part 12 obtains an optical flow Vi from an image captured by a camera 11. Based on the movement of the vehicle estimated by a vehicle movement estimating part 13 and on a space model, estimated by a space model estimating part 14, of the space being captured by the camera 11, a background flow estimating part 15 obtains a background flow Vdi, which is the flow the camera image would have if it showed only the background. An approaching object detecting part 16 compares the optical flow Vi with the background flow Vdi to detect the movement of objects around the vehicle.
COPYRIGHT: (C)2004,JPO

Description

【0001】
【Technical Field of the Invention】
The present invention relates to vehicle monitoring technology that uses a camera to monitor the surroundings of a vehicle and detect approaching objects.
【0002】
【Prior Art】
Various approaches have conventionally been taken to vehicle monitoring devices that monitor the surroundings of a vehicle and detect approaching objects.
【0003】
One approach uses obstacle sensors such as radar. Although this approach can detect obstacles reliably, it is ill suited to complex judgments such as whether an obstacle is approaching or receding. It is also strongly affected by rain and similar conditions, and its detection range is comparatively narrow, so it has been difficult to detect approaching objects with an obstacle sensor alone.
【0004】
Approaches using camera images have also been pursued. At present, such methods are not as reliable as radar, but because digitized image information is easy to process, they allow complex judgments, such as whether an obstacle is approaching or moving away. Moreover, since the detection range is determined by the camera's angle of view and resolution, a very wide area can be monitored.
【0005】
Among camera-based methods, the stereo method, which uses images from multiple cameras, and methods based on optical flow are widely known. The stereo method exploits the parallax between cameras, but it suffers from complicated inter-camera calibration and from high cost, since multiple cameras are required.
【0006】
A vehicle monitoring device using optical flow is disclosed, for example, in Patent Document 1 (first conventional example). A camera is mounted facing the rear of the vehicle, and the screen is divided into several horizontally partitioned regions. Within each region, optical flows whose magnitude exceeds a predetermined threshold and whose direction matches the on-screen motion expected of an approaching object are extracted. Approaching objects are then identified on the basis of these optical flows.
【0007】
Several methods have also been proposed for handling travel on curves.
【0008】
For example, in Patent Document 2 (second conventional example), a turning vector is computed from the vehicle's steering angle and speed, and the optical flow is corrected by subtracting this turning vector from the optical flow actually measured. Moving objects are extracted after the influence of the curve has been removed by this correction.
【0009】
In Patent Document 3 (third conventional example), the optical flow is corrected using the outputs of a vehicle speed sensor and a yaw rate sensor together with a pre-measured correspondence between image position and distance, and moving objects are extracted after the influence of the curve has been removed.
【0010】
Methods that use the focus of expansion (FOE), the point at infinity in the image (see, for example, Patent Document 4), are widely used for approaching-object detection based on optical flow, and several extensions of this approach to curve driving have also been proposed.
【0011】
For example, in Patent Document 5 (fourth conventional example), the optical flow is corrected by an amount corresponding to the displacement of the FOE.
【0012】
In Patent Document 6 (fifth conventional example), the screen is divided into several regions, and a virtual FOE is computed on the basis of white-line information obtained by white-line determining means.
【0013】
Patent Document 7 (sixth conventional example) discloses an approaching-object detection method using optical flow that is unaffected by curves. In this method, the discrepancy εi² between the motion vector Vdi theoretically computed from the camera's motion parameters and the motion vector Vi detected from the image is computed, using vectors r1i and r2i that represent the reliability of the detected motion vector, according to the following equation, and moving objects are detected from the value of this discrepancy.
εi² = ((Vdi − Vi)·r1i)² + ((Vdi − Vi)·r2i)²
【0014】
【Patent Document 1】 Japanese Patent No. 3011566
【Patent Document 2】 Japanese Unexamined Patent Publication No. 2000-168442
【Patent Document 3】 Japanese Unexamined Patent Publication No. H6-282655
【Patent Document 4】 Japanese Unexamined Patent Publication No. H7-50769
【Patent Document 5】 Japanese Unexamined Patent Publication No. 2000-251199
【Patent Document 6】 Japanese Unexamined Patent Publication No. 2000-90243
【Patent Document 7】 Japanese Patent No. 2882136
【0015】
【Problems to Be Solved by the Invention】
However, the conventional techniques described above have the following problems.
【0016】
First, the first conventional example assumes that the host vehicle is traveling straight, so it is difficult to use on curves. It detects approaching objects using the "direction of on-screen motion expected of an approaching object," but this "direction of motion" cannot be determined uniformly while the vehicle is traveling on a curve. This point is explained with reference to FIG. 34.
【0017】
Assuming the camera faces rearward, an image like that in FIG. 34(a) is captured while the vehicle travels on a curve. In the first conventional example, as shown in FIG. 34(a), the screen is divided horizontally into regions L and R. Now suppose, as in FIG. 34(b), that an approaching vehicle is present in sub-region AR1 within region L; the "assumed direction of motion of the approaching vehicle" then points to the lower right, as indicated by the arrow in the figure. In sub-region AR2, however, which lies in the same region L but at a different vertical position from AR1, the "assumed direction of motion of the approaching vehicle" points to the lower left, as indicated by the arrow, and is completely different from the direction in AR1. Thus, while the vehicle is traveling on a curve, the "assumed direction of motion of an approaching vehicle" is not uniform even within the same region L but varies with position, which makes approaching vehicles difficult to detect.
【0018】
Furthermore, even when the vehicle is traveling on a straight road, the magnitude of the optical flow differs markedly between the upper and lower parts of the screen. The upper part of the screen shows regions far from the host vehicle, so the detected optical flow there is very small, whereas the lower part shows regions very close to the host vehicle, so the detected optical flow there is comparatively very large.
【0019】
Consequently, if the upper and lower parts of the screen are processed with the same threshold, the detection accuracy for approaching vehicles is likely to degrade. For example, if the threshold is set with reference to the small flows in the upper part of the screen, it becomes a very small value, and applying it to the lower part of the screen tends to produce noise. Conversely, if the threshold is set with reference to the large flows in the lower part of the screen, it becomes a very large value, and applying it to the upper part of the screen leaves most optical flows below the threshold, so approaching objects can no longer be detected.
【0020】
Moreover, the first conventional example detects approaching vehicles using only optical flows whose magnitude exceeds a predetermined value. A vehicle running alongside at nearly the same speed as the host vehicle therefore cannot be detected, because the magnitude of its optical flow is nearly zero.
【0021】
As for the second conventional example, the magnitude and direction of the turning vector produced by the host vehicle's turn depend on the three-dimensional position of the target point relative to the camera. The turning vector therefore cannot be estimated unless the correspondence between points on the camera image and points in the three-dimensional real world is known.
【0022】
This point is explained with reference to FIG. 35, which shows the relationship between the camera image captured by camera 2 and three-dimensional coordinates in the real world. The Xi axis is taken horizontally and the Yi axis vertically in the camera image, and the Xw, Yw, and Zw axes of the real-world coordinate system are taken as shown in FIG. 35: the plane Xw-Zw is parallel to the road surface, the Xw direction is the lateral direction of the host vehicle, the Yw direction is perpendicular to the road surface, and the Zw direction is the longitudinal direction of the host vehicle. A camera coordinate system (Xc, Yc, Zc) is also defined, with its origin at the camera's focal position and its Zc axis along the camera's optical axis. These axis orientations are, of course, not the only possible choice. The coordinate systems are related by the perspective projection transformation (Equation 1) and the coordinate transformation (Equation 2).
【Equation 1】
【Equation 2】
【0023】
Here, f is the focal length of the camera, and r denotes constants determined by the camera's intrinsic parameters and by its installation position, that is, by the relationship between the camera coordinate system and the real-world coordinate system; both are known. From these relations it can be seen that the real-world three-dimensional coordinates corresponding to an arbitrary point on the camera image lie on a certain straight line passing through the camera's focal position, but no further information can be obtained, so the position cannot be determined uniquely.
【0024】
In other words, as shown in FIG. 36, a point in the real-world coordinate system can be mapped to a point on the camera image by the perspective projection transformation (Equation 1) and the coordinate transformation (Equation 2), but the reverse mapping, from a point on the camera image to the real-world coordinate system, is impossible with these relations alone. The turning vector of the second conventional example must be computed for every point in the camera coordinate system, yet, as FIG. 36 shows, this conversion is impossible without additional constraints.
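Since (Equation 1) and (Equation 2) appear only as images in the original publication, the following minimal sketch assumes a standard pinhole projection and a rigid world-to-camera transform as stand-ins; the function names and the NumPy formulation are illustrative, not part of the patent. It shows why the forward mapping is well defined while the reverse mapping only fixes a viewing ray.

```python
import numpy as np

def world_to_image(Pw, R, T, f):
    """Project a real-world point Pw = (Xw, Yw, Zw) onto the image plane.
    R (3x3) and T (3,) play the role of the constants r: they encode the
    camera's installation pose in the real-world coordinate system."""
    Pc = R @ Pw + T                                      # coordinate transformation (cf. Equation 2)
    Xc, Yc, Zc = Pc
    return np.array([f * Xc / Zc, f * Yc / Zc])          # perspective projection (cf. Equation 1)

def image_to_ray(pi, R, T, f):
    """The inverse mapping is underdetermined: an image point (Xi, Yi) only
    fixes a ray through the camera's focal position.  Any depth along that
    ray maps back to the same pixel."""
    Xi, Yi = pi
    direction_c = np.array([Xi / f, Yi / f, 1.0])        # viewing ray in camera coordinates
    origin_w = -R.T @ T                                  # camera focal position in world coordinates
    direction_w = R.T @ direction_c                      # viewing ray in world coordinates
    return origin_w, direction_w                         # world point = origin_w + s * direction_w, s > 0
```

The ray returned by image_to_ray is exactly the ambiguity that the space model of the embodiment described later resolves by intersection.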
【0025】
Like the second conventional example, the third conventional example does not explain how the correspondence between image position and distance is obtained, and so cannot be realized as described.
【0026】
As for the fourth conventional example, the FOE exists only during straight-line travel and does not exist at all on a curve; if the optical flow is corrected using the FOE, the error becomes very large on sharp curves that cannot be approximated as straight-line travel, so the method is impractical.
【0027】
The fifth conventional example, naturally, cannot be used when driving on roads without white lane markings.
【0028】
The sixth conventional example assumes that non-moving objects dominate the image, and therefore fails to detect a large moving object, such as a truck, when it is close to the camera.
【0029】
In view of the above problems, an object of the present invention is to enable a vehicle monitoring device that detects approaching objects using optical flow to detect approaching objects accurately even while the vehicle is traveling on a curve.
【0030】
【Means for Solving the Problems】
To solve the above problems, the present invention uses a camera that captures the surroundings of a vehicle, obtains an optical flow from the image taken by the camera, obtains, on the basis of the vehicle's motion, a background flow, which is the optical flow the image would have if it showed only background, and compares the optical flow with the background flow to detect the motion of objects around the vehicle.
【0031】
According to this invention, the background flow is the optical flow that would arise if the camera image showed only background; by comparing this background flow with the optical flow actually obtained from the camera image, the motion of objects around the vehicle can be detected with high accuracy. Even while the vehicle is traveling on a curve, detection is performed by comparing the background flow and the optical flow at each point on the image, so approaching objects can be detected accurately. Parallel-running vehicles and distant approaching objects, whose optical flow obtained from the image is small, can also be detected easily, because their optical flow differs greatly from the background flow at the corresponding point on the image.
【0032】
【Embodiments of the Invention】
According to a first aspect of the present invention, there is provided a monitoring device using a camera that captures the surroundings of a vehicle, which obtains an optical flow from an image taken by the camera, obtains, on the basis of the vehicle's motion, a background flow, which is the optical flow the image would have if it showed only background, and compares the optical flow with the background flow to detect the motion of objects around the vehicle.
【0033】
According to a second aspect of the present invention, there is provided the monitoring device of the first aspect, in which the background flow is obtained using a space model that models the space being captured by the camera.
【0034】
According to a third aspect, there is provided the monitoring device of the second aspect, in which the space model is generated on the basis of distance data for each object captured by the camera.
【0035】
According to a fourth aspect, there is provided the monitoring device of the third aspect, in which the distance data are measured by an obstacle sensor provided on the vehicle.
【0036】
According to a fifth aspect, there is provided the monitoring device of the second aspect, in which the space model includes at least a road surface model that models the road surface being traveled.
【0037】
According to a sixth aspect, there is provided the monitoring device of the second aspect, in which the space model includes at least a wall surface model that assumes a wall surface perpendicular to the road surface.
【0038】
According to a seventh aspect, there is provided the monitoring device of the sixth aspect, in which the wall surface is assumed to lie to the rear and side of the vehicle.
【0039】
According to an eighth aspect, there is provided the monitoring device of the first aspect, in which, when the optical flow and the background flow are compared, it is determined whether the magnitude of the optical flow is larger than a predetermined value; if it is, the comparison uses the angle difference, and if it is not, the comparison is performed without using the angle difference.
【0040】
According to a ninth aspect, there is provided the monitoring device of the eighth aspect, in which the predetermined value is set according to the magnitude of the background flow at the corresponding position on the image.
【0041】
According to a tenth aspect, there is provided the monitoring device of the first aspect, in which approaching-object candidate flows are identified from among the optical flows by comparing the optical flow with the background flow, an approaching-object candidate region is generated by associating nearby approaching-object candidate flows, and, when the area of the approaching-object candidate region is smaller than a predetermined value, the approaching-object candidate flows belonging to that region are judged to be noise.
【0042】
According to an eleventh aspect of the present invention, there is provided a monitoring device using a camera that captures the surroundings of a vehicle, which obtains an optical flow from an image taken by the camera, obtains a space flow, which is the motion in real-world coordinates of a point on the image, on the basis of the optical flow, the motion of the vehicle, and a space model that models the space being captured by the camera, and detects the motion of objects around the vehicle on the basis of the space flow.
【0043】
According to a twelfth aspect of the present invention, there is provided a monitoring method that obtains an optical flow from an image taken by a camera that captures the surroundings of a vehicle, obtains, on the basis of the vehicle's motion, a background flow, which is the optical flow the image would have if it showed only background, and compares the optical flow with the background flow to detect the motion of objects around the vehicle.
【0044】
According to a thirteenth aspect, there is provided the monitoring method of the twelfth aspect, in which the motion of the vehicle is estimated using the outputs of a vehicle speed sensor and a steering angle sensor provided on the vehicle.
【0045】
According to a fourteenth aspect of the present invention, there is provided a monitoring program that causes a computer to execute a procedure for obtaining an optical flow from an image taken by a camera that captures the surroundings of a vehicle, a procedure for obtaining, on the basis of the vehicle's motion, a background flow, which is the optical flow the image would have if it showed only background, and a procedure for comparing the optical flow with the background flow to detect the motion of objects around the vehicle.
【0046】
Embodiments of the present invention are described below with reference to the drawings.
【0047】
(First Embodiment)
In the first embodiment of the present invention, the surroundings of the vehicle are monitored as follows. First, an optical flow is obtained from the image of a camera that captures the vehicle's surroundings. Next, the correspondence between points on the camera image and real-world three-dimensional coordinates is estimated as a "space model." As shown in FIG. 37, by applying the perspective projection transformation to this space model, points on the camera image can be associated accurately with real-world three-dimensional coordinates. Then, using the space model and the estimated motion of the host vehicle, the optical flow that each point on the image would have if it belonged to the background rather than to a moving object is computed. The optical flow obtained in this way is called the "background flow." Approaching objects are detected by comparing this background flow with the optical flow actually obtained from the image.
【0048】
That is, because the background flow is computed while accurately taking into account the correspondence between points on the camera image and real-world three-dimensional coordinates, this embodiment can detect the motion of objects around the vehicle more accurately than the conventional examples. Even during curve travel, detection is performed by comparing the background flow and the optical flow at each point on the image, so approaching objects can be detected accurately. Parallel-running vehicles and distant approaching objects, whose optical flow obtained from the image is small, also become easy to detect in this embodiment, because their optical flow differs greatly from the background flow at that point.
【0049】
FIG. 1 is a block diagram conceptually showing the basic configuration of the vehicle monitoring device according to this embodiment. The device detects the motion of objects around the vehicle using a camera 11 that captures the vehicle's surroundings. Specifically, as shown in FIG. 1, the basic configuration comprises an optical flow detection unit 12 that computes an optical flow Vi from the image captured by the camera 11, a host-vehicle motion estimation unit 13 that estimates the motion of the vehicle, a space model estimation unit 14 that estimates a model of the space being captured by the camera 11, a background flow estimation unit 15 that estimates a background flow Vdi on the basis of the host-vehicle motion and the space model, and an approaching object detection unit 16 that detects approaching objects by comparing the optical flow Vi with the background flow Vdi.
【0050】
The camera 11 is typically installed on the host vehicle and senses the conditions around the vehicle. Cameras mounted on infrastructure such as roads, traffic lights, and buildings, or cameras mounted on surrounding vehicles, may also be used together with, or instead of, the camera installed on the host vehicle. This is effective in situations where approaching vehicles are hard to see from the host vehicle, such as at intersections with poor visibility.
【0051】
<Optical Flow Detection>
The optical flow detection unit 12 detects, from two temporally distinct images captured by the camera 11, the vector representing apparent motion on the image, that is, the "optical flow." Two approaches to optical flow detection are widely known: the gradient method, which uses the constraint equation of the spatio-temporal derivatives of the image, and the block matching method, which uses template matching ("Understanding Dynamic Scenes," Minoru Asada, IEICE). The block matching method is used here.
【0052】
Because the block matching method generally performs an exhaustive search, it requires an enormous amount of processing time. For this reason, hierarchical image processing is widely used to reduce the processing time ("3D Vision," Gang Xu and Saburo Tsuji, Kyoritsu Shuppan). From the given image, a hierarchy of images compressed to 1/2, 1/4, 1/8, ... in both height and width is created. Through this layering, two points that are far apart in a high-resolution (large) image become close together in a low-resolution (small) image. Template matching is therefore performed first on the low-resolution image, and then, only in the neighborhood of the optical flow found there, on the image of the next higher resolution. By repeating this process, the optical flow in the original high-resolution image is finally obtained. Moreover, since only local searches are required, the processing time can be reduced substantially.
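As an illustration of this coarse-to-fine search, here is a minimal sketch of hierarchical block matching, assuming grayscale images supplied as NumPy arrays; the block size, search radius, number of pyramid levels, and SSD matching cost are illustrative choices and are not specified in the patent.

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, radius=4, levels=3):
    """Coarse-to-fine block matching: estimate flow on a downsampled image,
    then refine it on progressively higher resolutions in a small window."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    # Build pyramids by halving height and width at each level (1, 1/2, 1/4, ...).
    pyr_prev = [prev] + [prev[::2 ** k, ::2 ** k] for k in range(1, levels)]
    pyr_curr = [curr] + [curr[::2 ** k, ::2 ** k] for k in range(1, levels)]

    flow = None
    for lvl in reversed(range(levels)):                  # coarsest level first
        p, c = pyr_prev[lvl], pyr_curr[lvl]
        h, w = p.shape[0] // block, p.shape[1] // block
        new_flow = np.zeros((h, w, 2))
        for by in range(h):
            for bx in range(w):
                tmpl = p[by*block:(by+1)*block, bx*block:(bx+1)*block]
                if flow is None:                         # no coarser estimate yet
                    dy0, dx0 = 0, 0
                else:                                    # propagate and double the coarser flow
                    cy = min(by // 2, flow.shape[0] - 1)
                    cx = min(bx // 2, flow.shape[1] - 1)
                    dy0, dx0 = (2 * flow[cy, cx]).astype(int)
                best, best_d = np.inf, (dy0, dx0)
                for dy in range(dy0 - radius, dy0 + radius + 1):
                    for dx in range(dx0 - radius, dx0 + radius + 1):
                        y, x = by*block + dy, bx*block + dx
                        if 0 <= y and y+block <= c.shape[0] and 0 <= x and x+block <= c.shape[1]:
                            ssd = np.sum((c[y:y+block, x:x+block] - tmpl) ** 2)
                            if ssd < best:
                                best, best_d = ssd, (dy, dx)
                new_flow[by, bx] = best_d
        flow = new_flow
    return flow                                          # per-block (dy, dx) at full resolution
```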
【0053】
<Host-Vehicle Motion Estimation>
The host-vehicle motion estimation unit 13 obtains the rotation speeds of the left and right wheels and the steering angle of the steering wheel, and from these estimates the motion of the host vehicle. This estimation method is explained with reference to FIGS. 2 and 3. Here, the so-called Ackermann model (two-wheel model), which approximates the tires as not slipping sideways, is used.
【0054】
FIG. 2 shows the Ackermann steering model. Assuming no tire side slip, when the steering wheel is turned the vehicle turns about a point O on the extension of the axle of the rear wheels 3b. The turning radius Rs of the center of the rear wheels 3b is expressed, using the steering angle β of the front wheels 3a and the wheelbase l, as in (Equation 3).
【Equation 3】
【0055】
FIG. 3 shows the motion of the vehicle in two dimensions. As shown in FIG. 3, if the center of the rear wheels 3b moves from Ct to Ct+1, the displacement h is expressed, using the left and right wheel speeds Vl and Vr, or the turning radius Rs of the rear-wheel center and the rotation angle γ of the movement, as in (Equation 4).
【Equation 4】
From (Equation 3) and (Equation 4), the rotation angle γ of the movement is expressed as in (Equation 5).
【Equation 5】
Accordingly, the rotation α of the vehicle in the plane is expressed by (Equation 6).
【Equation 6】
【0056】
The movement vector T from Ct to Ct+1, taking the X axis along the vehicle's direction of travel and the Y axis perpendicular to it, is expressed by (Equation 7).
【Equation 7】
From (Equation 6) and (Equation 7), the motion of the vehicle can be estimated once the left and right wheel speeds Vl and Vr and the steering angle β are known.
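Equations 3 to 7 are reproduced only as images in the original, so the sketch below uses the textbook no-slip Ackermann relations as assumed stand-ins; the time step dt and the averaging of the two wheel speeds are likewise assumptions made for illustration.

```python
import math

def estimate_ego_motion(v_left, v_right, beta, wheelbase, dt):
    """Planar ego-motion under an assumed no-slip Ackermann (two-wheel) model.
    Returns (gamma, alpha, (tx, ty)): rotation angle of the movement about the
    turning center, rotation of the vehicle body in the plane, and the movement
    vector of the rear-axle center in vehicle coordinates (X forward, Y lateral)."""
    v = 0.5 * (v_left + v_right)          # speed of the rear-axle center (assumption)
    h = v * dt                            # arc length travelled (cf. Equation 4)
    if abs(beta) < 1e-6:                  # straight-line travel: no rotation
        return 0.0, 0.0, (h, 0.0)
    Rs = wheelbase / math.tan(beta)       # turning radius of the rear-axle center (cf. Equation 3)
    gamma = h / Rs                        # rotation angle of the movement (cf. Equation 5)
    alpha = gamma                         # body rotation, taken equal to gamma for a rigid body (cf. Equation 6)
    # Chord from Ct to Ct+1 expressed in the vehicle frame at time t (cf. Equation 7).
    tx = Rs * math.sin(gamma)
    ty = Rs * (1.0 - math.cos(gamma))
    return gamma, alpha, (tx, ty)
```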
【0057】
Of course, instead of using only the wheel speeds and steering angle, the motion of the vehicle may be obtained directly using a vehicle speed sensor and a yaw rate sensor. The motion may also be obtained using GPS and map information.
【0058】
<Space Model Estimation>
The space model estimation unit 14 estimates a space model that models the space being captured by the camera. As described above, the space model is used to establish the correspondence between points on the camera image and real-world three-dimensional coordinates. Using (Equation 1) and (Equation 2), an arbitrary point on the camera image can be associated with a straight line in real-world space passing through the camera's focal position. By finding the intersection of this line with the estimated space model, any point on the camera image can be projected onto real-world three-dimensional coordinates.
【0059】
Here, the space model is generated on the basis of distance data for each object captured by the camera 11. The distance data can be measured, for example, by binocular stereo vision or motion stereo, or, if the vehicle is equipped with an obstacle sensor using laser, ultrasonic, infrared, or millimeter waves, by using that sensor.
【0060】
When a camera fixed to infrastructure such as a building is used, the shapes of the buildings and other structures seen by the camera change only very rarely, so the three-dimensional information of the space being captured is known. In this case there is no need to estimate the space model; a known space model can be defined in advance for each camera.
【0061】
Even when the camera is mounted on a vehicle, its position can be known accurately using GPS or the like, so the space model can also be estimated by matching the current position against detailed map data. For example, if GPS and map data show that the vehicle is driving through a tunnel, a space model can be generated from shape information such as the height and length of that tunnel. Such shape information may be held in the map data in advance, or held on the infrastructure side, for example in the tunnel itself, and obtained through communication; for instance, the tunnel's shape information may be transmitted to vehicles about to enter by communication means such as DSRC installed at the tunnel entrance. This approach is of course not limited to tunnels and can also be used on ordinary roads, expressways, in residential areas, in parking lots, and so on.
【0062】
<Background Flow Estimation>
The background flow estimation unit 15 detects, as the background flow, the image motion (optical flow) that would arise if the point corresponding to the space model were not moving, that is, if it belonged to the background rather than to a moving object.
【0063】
FIG. 4 is a flowchart showing the operation of the background flow estimation unit 15, and FIG. 5 is a conceptual diagram assuming a situation in which the vehicle 1 is turning, for example on a curve. In FIG. 5 the camera 2 is mounted on the vehicle 1, so the motion of the camera 2 equals the motion of the vehicle 1. Reference numeral 5 denotes a background object seen by the camera 2.
【0064】
First, an arbitrary point (PreXi, PreYi) on the camera image captured at time t−1 is projected onto real-world three-dimensional coordinates (Xw, Yw, Zw) using the space model estimated by the space model estimation unit 14 (S11). The perspective projection transformation (Equation 1) and the coordinate transformation (Equation 2) are used for this. The camera focal position in the real-world coordinate system in (Equation 2) is the focal position of the camera 2 at time t−1.
【0065】
Next, on the basis of the displacement h of the vehicle 1 from time t−1 to time t estimated by the host-vehicle motion estimation unit 13, the vehicle's position in the real world at time t, that is, the focal position of the camera 2 in the real-world coordinate system, is obtained (step S12). The constants r in (Equation 2) are then updated according to this focal position (step S13). By repeating this process, the camera focal position in the real-world coordinate system in (Equation 2) is continually updated and always indicates the correct position.
【0066】
Then, using (Equation 1) and the updated (Equation 2), the real-world coordinates (Xw, Yw, Zw) obtained in step S11 are reprojected onto a point (NextXi, NextYi) on the camera image (step S14). The camera coordinates (NextXi, NextYi) obtained in this way indicate where the point (PreXi, PreYi) on the camera image at time t−1 would appear at time t, assuming it is a point on a background object 5 that has not moved between time t−1 and time t. The background flow for the point (PreXi, PreYi), under the assumption that it is part of the background, is therefore (NextXi−PreXi, NextYi−PreYi) (step S15).
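A minimal sketch of this project-move-reproject procedure follows, reusing the hypothetical world_to_image and image_to_ray helpers from the sketch given after (Equation 2) above; intersect_space_model is an assumed callback standing in for the space model lookup and is not defined by the patent.

```python
import numpy as np

def background_flow(pt_prev, f, R_prev, T_prev, R_next, T_next, intersect_space_model):
    """Background flow at image point pt_prev = (PreXi, PreYi).
    R_prev/T_prev and R_next/T_next are the camera poses (the constants r of
    Equation 2) at times t-1 and t; intersect_space_model(origin, direction)
    is an assumed callback returning the 3D intersection of a viewing ray
    with the estimated space model."""
    # S11: project the image point onto the space model via its viewing ray.
    origin, direction = image_to_ray(np.asarray(pt_prev, float), R_prev, T_prev, f)
    Pw = intersect_space_model(origin, direction)          # (Xw, Yw, Zw)

    # S12-S13: the camera pose at time t (obtained from the estimated ego-motion h)
    # is passed in as R_next, T_next, i.e. the updated constants of Equation 2.

    # S14: reproject the same world point with the updated pose.
    pt_next = world_to_image(Pw, R_next, T_next, f)        # (NextXi, NextYi)

    # S15: the background flow is the resulting displacement on the image.
    return pt_next - np.asarray(pt_prev, float)
```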
【0067】
For simplicity, the case in which the vehicle is turning has been described here as an example, but the background flow can be obtained in the same way when driving straight, when parking, and so on.
【0068】
<Approaching Object Detection>
FIG. 6 is a block diagram conceptually showing the configuration of the approaching object detection unit 16. In FIG. 6, a flow comparison unit 16a compares the optical flow Vi actually obtained from the camera image by the optical flow detection unit 12 with the background flow Vdi obtained by the background flow estimation unit 15, and detects approaching-object candidate flows. A noise removal unit 16b then removes noise from the approaching-object candidate flows obtained by the flow comparison unit 16a and detects only the flows of approaching objects.
【0069】
FIG. 7 is a flowchart showing the operation of the flow comparison unit 16a. In principle, the optical flow Vi and the background flow Vdi are compared using the angle difference between them. However, when the magnitude of the optical flow Vi is small, its direction information is unreliable and the discrimination accuracy cannot be maintained. For example, the optical flow Vi of another vehicle running alongside at almost the same speed as the host vehicle is very small in magnitude, and its direction changes from moment to moment, pointing toward or away from the host vehicle depending on the capture timing. In this embodiment, therefore, when the magnitude of the optical flow Vi is smaller than a predetermined value, the angle difference is not used; a different comparison criterion (S43) is used instead, which raises the reliability of the discrimination.
【0070】
Specifically, the magnitude of the optical flow Vi is first examined (S41). When the optical flow Vi is sufficiently large (larger than a predetermined value TH_Vi), its direction information is considered sufficiently reliable, so it is compared with the background flow Vdi using the angle difference (S42). That is, when the absolute value of the angle difference between the optical flow Vi and the background flow Vdi is at least a predetermined value TH_Arg, the optical flow Vi is considered to differ from the background flow Vdi and is judged to be the flow of an approaching object (S44). When the absolute value of the angle difference is sufficiently small (No in S42), the optical flow Vi is close to the background flow Vdi and is judged not to be the flow of an approaching object (S45). The threshold TH_Vi is preferably about 0.1 pixel, and the threshold TH_Arg about π/2. When step S41 yields YES, only the angle information is used and the magnitude information is not, so that moving objects receding from the host vehicle are not judged to be approaching objects. Of course, the absolute value of the vector difference between the flows may also be used as the criterion.
【0071】
When the optical flow Vi is not sufficiently large (No in S41), the comparison with the background flow cannot be made using the angle difference, so attention is paid instead to the magnitude of the flows. That is, it is determined whether the absolute value of the vector difference between the optical flow Vi and the background flow Vdi is at least a predetermined value TH_Vdi (S43); if it is, the flow is judged to be that of an approaching object (S44), and otherwise it is judged not to be (S45). There are two situations in which the optical flow Vi is small and yet belongs to an approaching object: (1) the approaching object is running alongside the host vehicle at almost the same speed, or (2) the approaching vehicle is far away. In either situation the background flow Vdi can be obtained accurately, so approaching objects can be discriminated with high precision. The threshold TH_Vdi is preferably about 0.1 pixel.
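A minimal sketch of this decision logic (steps S41 to S45 of FIG. 7) follows, assuming the flows are given as 2-D NumPy vectors in pixels; the default thresholds mirror the values suggested above (about 0.1 pixel and π/2) but are ordinary parameters, not values fixed by the patent.

```python
import numpy as np

def is_approaching_flow(Vi, Vdi, th_vi=0.1, th_arg=np.pi / 2, th_vdi=0.1):
    """Classify one optical flow vector Vi against its background flow Vdi,
    following the branch structure of FIG. 7 (S41-S45)."""
    Vi, Vdi = np.asarray(Vi, float), np.asarray(Vdi, float)
    if np.linalg.norm(Vi) > th_vi:                       # S41: direction information is reliable
        ang_vi = np.arctan2(Vi[1], Vi[0])
        ang_vd = np.arctan2(Vdi[1], Vdi[0])
        # Wrapped absolute angle difference in [0, pi].
        diff = np.abs(np.arctan2(np.sin(ang_vi - ang_vd), np.cos(ang_vi - ang_vd)))
        return diff >= th_arg                            # S42 -> S44 (True) / S45 (False)
    # S43: flow too small for an angle comparison; compare the vector difference instead.
    return np.linalg.norm(Vi - Vdi) >= th_vdi
```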
【0072】
Alternatively, in step S43, since the magnitude of the optical flow Vi is sufficiently small, only the magnitude of the background flow Vdi may be compared with a predetermined value instead of the absolute value of the vector difference between Vi and Vdi. In this case, if the background flow Vdi is sufficiently large, the flow is judged to be that of an approaching object, and if it is sufficiently small, it is judged not to be.
【0073】
The thresholds TH_Vi, TH_Arg, and TH_Vdi used in this flow comparison may also be set as functions of the position on the image. For example, TH_Vi and TH_Vdi are made small at positions where the background flow Vdi is small and large at positions where it is large. This allows accurate discrimination even at a distance while suppressing the influence of noise.
【0074】
FIG. 8 is a flowchart showing the operation of the noise removal unit 16b. What the flow comparison unit 16a detects as approaching-object flows includes noise, and treating all of it as approaching objects would degrade the detection accuracy. The noise removal unit 16b therefore models noise and approaching objects, and detects only approaching objects by comparing these models with the approaching-object flows (approaching-object candidate flows) detected by the flow comparison unit 16a.
【0075】
First, if an approaching-object candidate flow is noise, it should not be detected continuously in either time or space. If, on the other hand, it belongs to an actual approaching object rather than noise, the approaching object has a certain size, so similar candidate flows should spatially occupy a region of corresponding size.
【0076】
Therefore, nearby approaching-object candidate flows are associated with one another, the regions of the associated candidate flows are connected, and the connected region is taken as an approaching-object candidate region Ai (S51). The area Si of the candidate region Ai is then computed (S52) and compared with a predetermined value TH_Si (S53). TH_Si is set small when the candidate region Ai is far from the camera and large when it is close to the camera. When the area Si is smaller than TH_Si (No in S53), the region Ai is judged to be noise (S54); otherwise the process proceeds to step S55.
【0077】
In step S55, noise removal based on a model of approaching objects is performed. Given that the camera is mounted on a vehicle, approaching objects are cars, motorcycles, bicycles, and the like, all of which travel on the road surface. It is therefore determined whether the approaching-object candidate region Ai lies on the road surface of the space model (S55); if it does not, the region Ai is judged to be noise (S54) and processing moves on to the next frame. If at least part of the candidate region Ai lies on the road surface, the process proceeds to step S56.
【0078】
In step S56, filtering in the time direction is performed. Since an approaching object cannot suddenly appear on, or suddenly vanish from, the screen, its region should exist over several consecutive frames. FIG. 9 shows a situation in which another vehicle approaches from behind during straight-line travel. At the current time t, the approaching vehicle 6 is captured as in the left part of FIG. 9(a), so the approaching-object candidate region At is detected as in the right part of FIG. 9(a). At times t−1, t−2, ..., t−N slightly before the current time t, the candidate regions At−1, At−2, ..., At−N are detected as in FIGS. 9(b), (c), and (d), respectively. If the time interval is sufficiently small, the approaching vehicle 6 does not move much on the image, so the regions At−1, At−2, ..., At−N partially overlap the region At. By contrast, when a candidate region is caused by camera vibration or the like, it should be detected only for a short time.
【0079】
Accordingly, the proportion of the preceding several frames in which an approaching-object candidate region exists at the location corresponding to the candidate region Ai is compared with a predetermined value (S56). When the proportion is lower than the predetermined value, the candidate region Ai is unlikely to be an approaching object, so it is retained as a candidate (S57) and processing moves on to the next frame. When the proportion is higher than the predetermined value, the candidate region Ai is judged to be due to an approaching object, and processing moves on to the next frame (S58). For example, a region detected six or more times in the preceding ten frames is judged to be an approaching object.
【0080】
If the camera is mounted facing the front or rear of the vehicle and approaching objects are assumed to be passenger cars only, the images of approaching objects seen by the camera are limited to views of a passenger car from the front or rear. The size of an approaching-object region can then be restricted to a width of about 2 m and a height of about 1.5 m above the road surface. The approaching-object candidate region may therefore be fixed to this size, and its position chosen so that the number of approaching-object flows contained in it is maximized. In that case, approaching objects can be distinguished from noise according to whether the number of approaching-object flows in the candidate region exceeds a predetermined value. Such processing may be performed instead of steps S51 to S53.
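The sketch below illustrates the clustering and area test (S51 to S53) and the temporal persistence test (S56 to S58) under simplifying assumptions: candidate flows are given as pixel positions, regions are formed by greedy proximity clustering with a bounding-box area, the history is a list of boolean masks of past candidate regions, and the road surface test (S55) is omitted because it needs the space model; all names and parameter values are illustrative.

```python
import numpy as np

def cluster_candidates(points, link_dist=10.0):
    """Greedy proximity clustering of candidate-flow positions (S51)."""
    clusters = []
    for p in map(np.asarray, points):
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) <= link_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def filter_candidates(points, history, th_area=50.0, min_hits=6, window=10):
    """Area test (S52-S53) plus temporal persistence test (S56-S58).
    `history` is a list of boolean masks (indexed [y, x]) of past candidate
    regions; the road surface test (S55) is omitted in this sketch."""
    approaching = []
    for c in cluster_candidates(points):
        pts = np.array(c)
        x0, x1 = pts[:, 0].min(), pts[:, 0].max()
        y0, y1 = pts[:, 1].min(), pts[:, 1].max()
        if (x1 - x0 + 1) * (y1 - y0 + 1) < th_area:      # S53: too small -> noise (S54)
            continue
        hits = sum(mask[int(y0):int(y1) + 1, int(x0):int(x1) + 1].any()
                   for mask in history[-window:])
        if hits >= min_hits:                             # S56: persistent -> approaching object (S58)
            approaching.append((x0, y0, x1, y1))
    return approaching
```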
[0081]
As described above, according to the present embodiment, by comparing the background flow with the optical flow actually obtained from the camera image, the movement of objects around the vehicle can be detected with high accuracy.
[0082]
Moreover, even while the vehicle is traveling on a curve, detection is performed by comparing the background flow and the optical flow at each point on the image, so approaching objects can be detected accurately. Even for a vehicle running in parallel or a distant approaching object, whose optical flow obtained from the image becomes small, the optical flow differs greatly from the background flow at that point on the image and can therefore be detected easily.
[0083]
<Other examples of background flow estimation>
(Part 1)
As shown in FIG. 5, suppose that the camera 2 moves with a turning radius Rs and a rotation angle γ while the object 5 seen by the camera 2 does not move at all. The optical flow obtained in this case is equal to the optical flow obtained when the camera 2 does not move but every object 5 seen by the camera 2 moves by the rotation angle γ, as shown in FIG. 10. In other words, the optical flow obtained when the camera moves by a vector V in the real-world coordinate system is equal to the optical flow obtained when every object seen by the camera moves by the vector (−V).
[0084]
Therefore, instead of obtaining the background flow from the motion of the camera, the background flow may be obtained by assuming that the camera is fixed and does not move, and by moving the space model instead. FIG. 11 is a flowchart showing the background flow estimation process in this case.
[0085]
First, as in step S11 of FIG. 4, an arbitrary point (PreXi, PreYi) on the camera image captured at time t-1 is projected onto real-world three-dimensional coordinates (PreXw, PreYw, PreZw) using the space model estimated by the space model estimation unit 14 (S21). The perspective projection transformation of (Equation 1) and the coordinate transformation of (Equation 2) are used here.
[0086]
Next, based on the movement amount h of the vehicle 1 from time t-1 to time t estimated by the own-vehicle motion estimation unit 13, the real-world coordinates (PreXw, PreYw, PreZw) obtained in step S21 are moved relative to the vehicle 1. That is, the real-world coordinates (PreXw, PreYw, PreZw) are rotated using the rotation center O and the rotation angle γ associated with the movement amount h to obtain the real-world coordinates (NextXw, NextYw, NextZw) (S22). Then, using (Equation 1) and (Equation 2), the real-world coordinates (NextXw, NextYw, NextZw) obtained in step S22 are re-projected onto a point (NextXi, NextYi) on the camera image (S23). Finally, as in step S15 of FIG. 4, (NextXi−PreXi, NextYi−PreYi) is obtained as the background flow (S24).
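A compact sketch of steps S21 to S24 follows. It assumes that helper functions image_to_world() (the space model together with Equations 1 and 2) and world_to_image() (Equations 1 and 2) are available; the function names, the choice of in-plane components for the rotation, and the sign of γ are assumptions, not the patent's notation.

```python
import numpy as np

def rotate_about_turning_center(p, center, gamma):
    """Rotate a 3-D point about the turning center by angle gamma.
    The rotation is applied to the (Xw, Zw) components here, matching the
    turning-circle description used later for the wall model; swap the
    components if a different axis convention is used."""
    c, s = np.cos(gamma), np.sin(gamma)
    dx, dz = p[0] - center[0], p[2] - center[2]
    return np.array([center[0] + c * dx - s * dz,
                     p[1],
                     center[2] + s * dx + c * dz])

def background_flow(pre_xi, pre_yi, center_O, gamma, image_to_world, world_to_image):
    """Steps S21-S24: project, move the space model relative to the vehicle,
    re-project, and take the image-plane difference as the background flow."""
    pre_w = image_to_world(pre_xi, pre_yi)                        # S21 (Eq. 1, 2 + space model)
    next_w = rotate_about_turning_center(pre_w, center_O, gamma)  # S22
    next_xi, next_yi = world_to_image(next_w)                     # S23 (Eq. 1, 2)
    return next_xi - pre_xi, next_yi - pre_yi                     # S24
```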
[0087]
Note that (NextXw, NextYw, NextZw) obtained in step S22 can also be regarded as the space model at time t predicted from the space model at time t-1, so the space model may be updated by continuing this processing. Alternatively, this space-model update may be applied only to the portions judged to be background by the approaching object detection unit 16, while the space model for the remaining regions is recomputed by the method described above.
[0088]
(Part 2)
In both of the two methods described above, points on the camera image are converted into real-world three-dimensional coordinates and the processing is performed using a space model assumed in the real-world coordinate system. It is also possible to perform all of the processing in the camera coordinate system. FIG. 12 is a flowchart showing the background flow estimation process in this case. Here the space model must be described in the camera coordinate system. Since the camera coordinate system and the real-world coordinate system correspond one to one, a space model assumed in the real-world coordinate system can easily be converted into the camera coordinate system according to (Equation 2).
[0089]
First, an arbitrary point (PreXi, PreYi) on the camera image captured at time t-1 is projected onto a three-dimensional position (PreXc, PreYc, PreZc) in the camera coordinate system using the space model described in the camera coordinate system (S31). The perspective projection transformation of (Equation 1) is used here.
[0090]
Next, based on the movement amount h of the vehicle 1 from time t-1 to time t estimated by the own-vehicle motion estimation unit 13, the camera coordinates (PreXc, PreYc, PreZc) obtained in step S31 are moved relative to the vehicle 1. That is, the camera coordinates (PreXc, PreYc, PreZc) are rotated using the rotation center C and the rotation angle γc, expressed in the camera coordinate system and associated with the movement amount h, to obtain the camera coordinates (NextXc, NextYc, NextZc) (S32). Then, using (Equation 1), the camera coordinates (NextXc, NextYc, NextZc) obtained in step S32 are re-projected onto a point (NextXi, NextYi) on the camera image (S33). Finally, as in step S15 of FIG. 4, (NextXi−PreXi, NextYi−PreYi) is obtained as the background flow (S34).
[0091]
In this way, the space model is used for converting the camera image into the real-world three-dimensional coordinate system or into the camera coordinate system. The space model can therefore also be described as a transformation formula from the camera image to the real-world three-dimensional coordinate system, or from the camera image to the camera coordinate system.
[0092]
FIG. 13 is an example of a camera image in which the background flow according to the present invention is indicated by arrows. As already described, the background flow is used for comparison with the optical flow actually obtained from the camera image and serves as the reference for detecting approaching objects. As can be seen from FIG. 13, the background flow takes the curvature of the traveling road surface into account. That is, it can be understood intuitively from FIG. 13 that the present invention can detect approaching objects accurately even on a curve, unlike the conventional techniques.
[0093]
According to the present invention, only objects approaching the host vehicle can be detected accurately. Using this detection result, it is possible, for example, to display an image in which only approaching vehicles are highlighted, or to warn of an approaching vehicle by sound or images, by vibrating the steering wheel or the seat, or by lighting a hazard lamp such as an LED. Furthermore, in a dangerous situation, the steering and brakes may be controlled automatically to avoid contact or collision with the approaching vehicle.
[0094]
Here, highlighting that makes use of the background flow is described.
[0095]
FIG. 38 is an example in which only the optical flows corresponding to approaching vehicles are superimposed on a rear-view image taken while traveling on a curve. In FIG. 38, vehicles A and B, which are roughly equidistant from the host vehicle, are approaching at roughly equal speeds. As shown in FIG. 38, for vehicle A on the outside of the curve, the superimposed optical flow (arrow) allows the user to recognize its approach. For vehicle B on the inside of the curve, however, the optical flow is almost zero and the user cannot recognize its approach. Thus, while traveling on a curve, simply superimposing the optical flow may fail to make an approaching object noticeable.
[0096]
The cause is that the background flow produced by the host vehicle's travel around the curve and the flow produced by the motion of the approaching vehicle cancel each other out. FIG. 39 shows the background flow for the image of FIG. 38. In FIG. 39, arrows A2 and B2 are the background flows corresponding to the positions of vehicles A and B, respectively. As can be seen from FIG. 39, the background flow B2 at the position of vehicle B is directed opposite to the motion of vehicle B.
[0097]
To make only the motion of the approaching vehicles stand out, the background flow is therefore subtracted, as a vector, from the obtained optical flow, and the resulting flow is superimposed on the camera image. FIG. 40 shows an example of this display. Unlike FIG. 38, flows are displayed for both approaching vehicles A and B, so the user can reliably recognize their presence.
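A minimal sketch of this display is given below, assuming the optical flow and background flow are available as dense per-pixel displacement arrays. OpenCV's arrowedLine is used for drawing; the sampling stride and the magnitude threshold are illustrative values, not part of the original description.

```python
import cv2
import numpy as np

def draw_residual_flow(frame, optical_flow, background_flow, stride=16, min_len=2.0):
    """Overlay (optical flow - background flow) as arrows on the camera image.

    frame:            HxWx3 BGR image
    optical_flow:     HxWx2 array of (dx, dy) per pixel
    background_flow:  HxWx2 array of (dx, dy) per pixel
    """
    residual = optical_flow - background_flow            # vector subtraction
    out = frame.copy()
    h, w = frame.shape[:2]
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            dx, dy = residual[y, x]
            if dx * dx + dy * dy >= min_len * min_len:    # skip near-zero flows
                cv2.arrowedLine(out, (x, y), (int(x + dx), int(y + dy)),
                                color=(0, 0, 255), thickness=1, tipLength=0.3)
    return out
```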
[0098]
Of course, the approaching-object flows may be linked to obtain an approaching-object region, as described above, and the region may be highlighted by drawing a frame around it, as in FIG. 26(b) described later. Furthermore, since a larger approaching-object flow or region suggests that the approaching vehicle is closer and the danger is greater, the color, thickness, line style, and so on of the frame may be switched according to the size of the approaching-object flow or region.
[0099]
A warning may also be given by sound instead of, or together with, the image. In this case it is effective to have the sound come from the direction in which the approaching object is located. The loudness, melody, tempo, frequency, and so on may also be varied according to the degree of danger. For example, when a vehicle is approaching slowly from the right rear, a soft sound may be emitted from the driver's right rear, and when a vehicle is approaching rapidly from the left rear, a loud sound may be emitted from the left rear.
[0100]
In the embodiment described above, the motion of the host vehicle is estimated from the wheel rotation speeds and the steering angle. Alternatively, the motion of the vehicle (that is, of the camera itself) can be detected using the images of the camera installed on the vehicle. This technique is described below.
[0101]
Here it is assumed that the equation of the road surface, a stationary plane, is known and that the motion of the vehicle is small, and the motion parameters of the vehicle (camera) are estimated from the motion of points on the road surface in the image. Even if the vehicle is moving at high speed, raising the frame rate at image capture makes the camera motion between images small, so this assumption does not cause any loss of generality.
[0102]
FIG. 41 is a schematic diagram for explaining this technique. As shown in FIG. 41, suppose that the motion of the camera 2 changes the coordinates of a point P on the stationary plane in the camera coordinate system from PE = (x, y, z) to PE' = (x', y', z'). The motion of the camera 2 is represented by a rotation R(wx, wy, wz) and a translation T(Tx, Ty, Tz). That is, the motion of the camera 2 is expressed as follows.
[Equation 12]
[0103]
If the stationary plane is expressed as z = ax + by + c, the following relation holds for the movement (u, v) → (u', v') of the point P in image coordinates.
[Equation 13]
[0104]
Assuming that the corresponding points (u, v), (u', v') in image coordinates and the coefficients a, b, c of the stationary-plane equation are known, rearranging the above equation with respect to the unknown parameters (wx, wy, wz, tx, ty, tz) gives the following.
[Equation 14]
[Equation 15]
[0105]
Writing this as AC = R, the rotation and translation terms that represent the motion of the vehicle can be obtained by the least-squares method:
C = (AᵀA)⁻¹AᵀR
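The patent's Equations 12 to 15 are not reproduced in this text, so the sketch below builds the linear system from the standard instantaneous (small-motion) optical-flow equations for a known plane z = ax + by + c, in normalized image coordinates and with the convention that the relative point motion is −Ω×P − T. The exact coefficient layout and sign conventions may therefore differ from (Equation 14) and (Equation 15), but the least-squares step C = (AᵀA)⁻¹AᵀR is the same.

```python
import numpy as np

def estimate_camera_motion(points_prev, points_next, a, b, c):
    """Estimate (wx, wy, wz, tx, ty, tz) from >= 3 road-plane correspondences.

    points_prev, points_next: (K, 2) arrays of normalized image coordinates
    (u, v) at times t-1 and t. The plane is z = a*x + b*y + c in camera
    coordinates. The recovered parameters are the motion over one frame
    interval, since (du, dv) are per-frame displacements.
    """
    rows, rhs = [], []
    for (u, v), (u2, v2) in zip(points_prev, points_next):
        inv_z = (1.0 - a * u - b * v) / c       # 1/z of the plane point, from z = ax + by + c
        du, dv = u2 - u, v2 - v
        # standard small-motion flow equations, linear in the 6 unknowns
        rows.append([u * v,     -(1 + u * u),  v, -inv_z,    0.0,  u * inv_z])
        rhs.append(du)
        rows.append([1 + v * v, -u * v,       -u,    0.0, -inv_z,  v * inv_z])
        rhs.append(dv)
    A = np.asarray(rows)
    R = np.asarray(rhs)
    # least-squares solution C = (A^T A)^-1 A^T R
    C, *_ = np.linalg.lstsq(A, R, rcond=None)
    wx, wy, wz, tx, ty, tz = C
    return (wx, wy, wz), (tx, ty, tz)
```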
[0106]
(Second Embodiment)
FIG. 14 is a block diagram conceptually showing the basic configuration of a vehicle monitoring apparatus according to the second embodiment of the present invention. In FIG. 14, components common to FIG. 1 are given the same reference numerals as in FIG. 1, and their detailed description is omitted here. The difference from the first embodiment is that the space model estimation unit 14A estimates a comparatively simple space model using the own-vehicle motion information estimated by the own-vehicle motion estimation unit 13.
[0107]
In general, in the situations in which a vehicle travels, various objects such as buildings, utility poles, signboards, and trees stand along the road surface. The camera 11 therefore captures a variety of things: the road surface, buildings, utility poles, signboards, trees, and even the sky. Here we consider a method of approximating the various objects captured by the camera 11 with a simple space model.
[0108]
The camera captures a variety of things, but when the camera is installed at the rear of the vehicle facing downward, there is one thing that is almost certain to appear: the road surface over which the vehicle has traveled. Therefore, as a first space model, a "road surface model" that models the road surface over which the vehicle has traveled is used. Since the road surface model extends the traveled road surface indefinitely, an accurate road surface model can be estimated as long as the gradient does not change abruptly. Even when the gradient of the road surface changes, as on a slope or a road with sharp ups and downs, a road surface model that takes the gradient into account can be generated by using a sensor such as a gyro.
[0109]
However, the road surface model cannot necessarily be applied to the entire camera image. Since the camera is normally installed close to horizontal, the road surface does not appear in the part of the image above the horizon. It is of course possible to arrange the camera so that the road surface fills the entire image, but this narrows the range the camera can monitor and is therefore not practical.
[0110]
In this embodiment, therefore, in addition to the road surface model, a wall surface perpendicular to the traveling road surface is assumed as a space model. This space model is called the "wall surface model".
[0111]
FIG. 15 shows an example of the wall surface model. In FIG. 15, 1 is the vehicle, 2 is the camera installed on the vehicle 1, and the region VA enclosed by the two straight lines extending from the camera 2 is the imaging region of the camera 2. As shown in FIG. 15, the simplest wall surface model MW assumes a wall perpendicular to the traveling road surface at a distance L behind the vehicle 1. This wall surface model MW is assumed to be large enough to cover every part of the field of view VA of the camera 2 that cannot be covered by the road surface model MS. Within the field of view VA, the region lying between the camera 2 and the wall surface model MW becomes the region of the road surface model MS. Therefore, when the vehicle 1 is traveling straight, the camera image captured by the camera 2 is occupied by the wall surface model MW in its upper part and by the road surface model MS in its lower part, as illustrated.
[0112]
When a wall surface model such as that of FIG. 15 is used, the question is how to determine the distance L to the wall surface model MW. FIG. 16 shows examples of the background flow on a right curve; each arrow indicates a computed background flow, and the white lines are the white lines on the road surface. MWA is the region in which the background flow was computed with the wall surface model MW, and MSA is the region in which it was computed with the road surface model MS. In the figure, (a) shows the case where L is sufficiently long and (b) the case where L is sufficiently short; comparing (a) and (b) shows how the background flow changes with the distance L.
[0113]
Comparing FIGS. 16(a) and (b), it can first be seen that the magnitude of the background flow in the wall surface model region MWA differs greatly between the two. However, as described above, the approaching-object detection process adds the condition that "part of the approaching object is in contact with the road surface", so as long as the background flow in the road surface model region MSA is obtained accurately, the background flow in the wall surface model region MWA does not necessarily have to be accurate.
[0114]
It can also be seen that the background flow in the region BA near the boundary between the road surface model and the wall surface model differs considerably between FIGS. 16(a) and (b). This boundary region BA is also important when judging the road surface, so depending on how the distance L to the wall surface model MW is set, the detection accuracy may deteriorate.
[0115]
As another wall surface model, therefore, a space model with walls (MW1, MW2) on both sides of the vehicle 1 is assumed, as shown in FIG. 17. This corresponds to driving through a tunnel of infinite height. Within the field of view VA of the camera 2, the region lying between the left and right wall surface models MW1 and MW2 becomes the region of the road surface model MS. Therefore, when the vehicle 1 is traveling straight, the camera image captured by the camera 2 is occupied by the left and right wall surface models MW1 and MW2 in its upper part and by the road surface model MS in its lower part, as illustrated. In FIG. 17, the vehicle 1 is assumed to be traveling straight for simplicity of explanation; when the vehicle is traveling on a curve, the wall surface models may simply be bent to match the curvature of the curve.
[0116]
The case where the space model shown in FIG. 17 is described as a transformation formula from the camera image to the real-world three-dimensional coordinate system is now described in detail.
[0117]
First, the road surface model is described. Assuming that the traveling road surface is a flat surface without gradient, the road surface model is the Xw−Yw plane of the real-world three-dimensional coordinate system. Substituting the relation Zw = 0 of this plane into (Equation 1) and (Equation 2), a point P(Xi, Yi) on the camera image can be converted into real-world three-dimensional coordinates Pw(Xw, Yw, Zw) by the following equation.
[Equation 8]
(Equation 8) is the transformation formula from the image coordinate system to the real-world three-dimensional coordinate system for the road surface model.
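(Equation 8) itself is not reproduced in this text. The sketch below shows a generic ray-plane back-projection that plays the same role: it intersects the viewing ray of an image point with the road plane, taken here as Zw = 0 following the text. The intrinsic matrix K and the camera pose (R_wc, C_w) stand in for the parameters of (Equation 1) and (Equation 2) and are assumptions.

```python
import numpy as np

def image_point_to_road(u, v, K, R_wc, C_w):
    """Back-project image point (u, v) onto the road plane Zw = 0.

    K:    3x3 camera intrinsic matrix (role of Equation 1, assumed known)
    R_wc: 3x3 rotation, camera axes expressed in world axes (role of Equation 2)
    C_w:  3-vector, camera center in world coordinates (role of Equation 2)
    Returns the world point Pw = (Xw, Yw, Zw) with Zw = 0, or None if the
    viewing ray does not hit the plane in front of the camera.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera coordinates
    d_w = R_wc @ d_cam                                # same ray in world coordinates
    if abs(d_w[2]) < 1e-9:
        return None                                   # ray parallel to the road plane
    t = -C_w[2] / d_w[2]                              # solve C_w.z + t * d_w.z = 0
    if t <= 0:
        return None                                   # intersection behind the camera
    return C_w + t * d_w
```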
[0118]
Next, the wall surface model is described. Suppose the vehicle 1 has been turning with a turning radius R between time t-1 and time t. The origin of the real-world three-dimensional coordinate system is defined as the center position of the rear wheels of the vehicle 1 at time t, and the center of rotation is (R, 0, 0). The turning radius R can take either sign: positive when the vehicle 1 turns counterclockwise and negative when it turns clockwise.
[0119]
In this case, the wall surface model is, in the real-world three-dimensional coordinate system, part of a cylinder perpendicular to the Xw−Zw plane whose cross-section in the Xw−Zw plane is centered at the point (R, 0, 0) and whose radius is (R ± W/2), and is therefore expressed by the following equation.
[0120]
(Xw − R)² + Zw² = (R ± W/2)²
Using this equation together with (Equation 1) and (Equation 2), a point P(Xi, Yi) on the camera image can be converted into real-world three-dimensional coordinates Pw(Xw, Yw, Zw) by the following equation.
[Equation 9]
Furthermore, by adding the conditions
1) an object shown in the camera image does not exist behind the camera 2 (Ze > 0), and
2) the space model lies above the road surface (Yw ≥ 0),
Pw(Xw, Yw, Zw) can be determined uniquely. (Equation 9) is the wall surface model expressed as a transformation formula from the image coordinate system to the real-world three-dimensional coordinate system.
[0121]
When a wall surface model such as that of FIG. 17 is used, the question is how to determine the distance W between the wall surfaces MW1 and MW2 assumed on the left and right of the vehicle. FIG. 18 shows examples of the background flow on a right curve; each arrow indicates a computed background flow, and the white lines are the white lines on the road surface. MW1A is the region in which the background flow was computed with the wall surface model MW1 on the left as seen from the camera 2, MW2A is the region computed with the wall surface model MW2 on the right as seen from the camera 2, and MSA is the region computed with the road surface model MS. In the figure, (a) shows the case where W is sufficiently small and (b) the case where W is sufficiently large; comparing (a) and (b) shows how the background flow changes with the distance W.
[0122]
Comparing FIGS. 18(a) and (b), the background flow in the region BA1 near the boundary between the left and right wall surface models differs greatly. However, as described above, the approaching-object detection process adds the condition that "part of the approaching object is in contact with the road surface", so as long as the background flow in the road surface model region MSA is obtained accurately, errors in the background flow in the wall surface model regions MW1A and MW2A do not pose a serious problem for approaching-object detection.
[0123]
That is, with the wall surface model shown in FIG. 17, unlike the case of assuming a wall behind the camera as in FIG. 15, the background flow in the boundary region between the wall surface model and the road surface model does not differ much with the wall spacing W, so approaching objects can be detected with high accuracy. For example, the wall spacing W may be set to about 10 m.
[0124]
Of course, the distance W between the walls may be measured using various obstacle detection sensors such as laser, ultrasonic, infrared, or millimeter-wave sensors, or measured by binocular vision or the motion stereo method. It is also possible to use the texture information of the road surface to extract regions other than the road surface from the camera image and treat those regions as wall surfaces. Furthermore, only part of the space model may be measured with a range sensor, and the remaining regions may be modeled with planes or curved surfaces as described here. The number of lanes of the road on which the vehicle is currently traveling may also be obtained from camera images or via communication, and the result may be used to determine the distance W between the walls. One way of determining the number of lanes from a camera image is, for example, to detect white lines and determine the number of lanes from the number of detected white lines.
[0125]
The shape of the space model can also be switched on the basis of GPS information and map data from a car navigation system, time data from a clock or the like, or operation data of the vehicle's wipers, headlights, and so on. For example, when GPS information and map data show that the vehicle is currently traveling through a tunnel, a space model with a ceiling may be used. Alternatively, information that the vehicle is traveling through a tunnel can be obtained via communication means such as DSRC units installed at the tunnel entrance. Since the headlights are normally turned on while driving through a tunnel, the space model can also be switched by combining time data, wiper data, and headlight data: for example, when the headlights are turned on although it is not nighttime (estimated from the time data) and it is not raining (estimated from the wiper data), it can be judged that the vehicle is traveling through a tunnel.
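A small sketch of this model-switching logic follows; the signal names and the priority given to GPS/map and DSRC information over the headlight heuristic are assumptions for illustration only.

```python
def choose_space_model(gps_says_tunnel, dsrc_says_tunnel,
                       is_night, wipers_on, headlights_on):
    """Return which space model shape to use: 'tunnel' adds a ceiling surface,
    'road_and_walls' is the road surface model plus the side wall models."""
    if gps_says_tunnel or dsrc_says_tunnel:
        return "tunnel"
    # headlight heuristic: lights on although it is neither night nor raining
    if headlights_on and not is_night and not wipers_on:
        return "tunnel"
    return "road_and_walls"
```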
[0126]
FIG. 19 is a flowchart showing the operation of the own-vehicle motion estimation unit 13, the space model estimation unit 14A, and the background flow estimation unit 15 according to this embodiment. First, by measuring the rotation pulses of the left and right wheels and the steering angle of the steering wheel from time t-1 to time t, the left and right wheel speeds Vl and Vr of the host vehicle and the front-wheel steering angle β at time t are obtained (S61). Then, from the wheel speeds Vl and Vr, the steering angle β, and the known wheelbase l, the motion vector T of the vehicle is estimated according to (Equation 7) using the Ackermann model described above (S62).
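(Equation 7) is not reproduced here; the sketch below uses a standard Ackermann (bicycle-model) approximation to obtain the per-frame motion from Vl, Vr, β, and the wheelbase l. The choice of output, a planar displacement plus yaw change in the vehicle frame, and the small-angle handling are assumptions.

```python
import math

def estimate_motion_vector(Vl, Vr, beta, wheelbase, dt):
    """Ackermann (bicycle-model) approximation of the vehicle motion from
    time t-1 to t: returns (dx, dy, dgamma) in the vehicle frame, where dx is
    the forward displacement, dy the lateral displacement, dgamma the yaw change."""
    v = 0.5 * (Vl + Vr)                            # speed of the rear-axle center
    dgamma = v * math.tan(beta) / wheelbase * dt   # yaw change over the interval
    if abs(dgamma) < 1e-6:                         # practically straight-line motion
        return v * dt, 0.0, dgamma
    radius = v * dt / dgamma                       # turning radius of the rear axle
    dx = radius * math.sin(dgamma)
    dy = radius * (1.0 - math.cos(dgamma))
    return dx, dy, dgamma
```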
[0127]
All motion vectors up to time t-1 should already have been obtained. The trajectory of the host vehicle up to time t-1 is therefore obtained by connecting the motion vectors obtained so far (S63). FIG. 20 shows an example of the trajectory up to time t-1 obtained in this way. In FIG. 20, T is the current motion vector, and TR is the trajectory obtained by connecting the past motion vectors.
[0128]
Next, the space model is estimated (S64). Here, the road surface model and wall surface models shown in FIG. 17 are used. That is, as shown in FIG. 21, the plane containing the trajectory TR is obtained as the road surface model MS, and the walls perpendicular to the road surface model MS located a distance W/2 to the left and right of the trajectory TR are obtained as the wall surface models MW1 and MW2.
[0129]
Next, the background flow is estimated (S65 to S68). First, an arbitrary point PreCi(PreXi, PreYi) on the camera image at time t-1 is projected onto real-world three-dimensional coordinates PreRi(PreXw, PreYw, PreZw) (S65). FIG. 22 illustrates this processing. The angle θ is the angle of view of the camera 2. As described above, with the perspective projection transformations of (Equation 1) and (Equation 2) alone, it is only known that the point PreCi on the camera image corresponds to some point on the straight line LN passing through the focal point of the camera 2; it cannot be projected onto a single point of the real-world coordinate system. However, assuming that the vehicle 1 is traveling within the estimated space model, the point PreRi can be obtained as the intersection of this line LN with the space model (MW2 in FIG. 22).
[0130]
Next, by moving the vehicle 1 by the current motion vector T, the position of the host vehicle at time t is obtained, and the parameters of the coordinate transformation (Equation 2) are updated to match that position (S66). It is then calculated where the real-world coordinates PreRi appear on the camera image from that camera position. The conversion from real-world coordinates to the camera image is realized by using (Equation 1) and the updated (Equation 2). The point on the camera image obtained in this way is denoted NextCi(NextXi, NextYi) (S67). FIG. 23 schematically shows this processing.
[0131]
NextCi(NextXi, NextYi) obtained here indicates the position on the camera image at time t under the assumption that the point PreRi has not moved between time t-1 and time t, that is, that it is a point on a background object. Therefore, the background flow BFL obtained when the point PreCi on the camera image at time t-1 is assumed to be part of the background is (NextXi−PreXi, NextYi−PreYi) (S68). By performing this processing for every point on the camera image, a background flow such as that shown in FIG. 13 can be obtained.
[0132]
FIGS. 24 to 26 show an example of approaching-object detection according to this embodiment. FIG. 24 is an image captured by a camera installed at the rear of the vehicle. The vehicle is about to turn right at an intersection, and a white passenger car VC1 is approaching from behind. The only object moving in the image is the white passenger car VC1; passenger cars VC2 and VC3 are stationary.
[0133]
Consider the first conventional example described above. In the first conventional example, the image is first divided horizontally into a region L and a region R, as in FIG. 25(a). Then, among the detected optical flows, those pointing left or lower-left in region L and those pointing right or lower-right in region R are detected as approaching objects.
[0134]
Now consider regions AR1 and AR2 within region L. Region AR1 contains the approaching passenger car VC1, and region AR2 contains the stationary passenger car VC2. However, the passenger car VC1 in region AR1 has an optical flow pointing to the lower right, as indicated by the arrow, and is therefore not detected as an approaching object. On the other hand, the passenger car VC2 in region AR2 has an optical flow pointing left, as indicated by the arrow, because the host vehicle is traveling on a curve, and is therefore detected as an approaching object. FIG. 25(b) shows the result of such processing; the regions enclosed by rectangles are those detected as approaching objects. Thus, in the first conventional example, detection of approaching objects fails on a curve.
[0135]
With this embodiment, on the other hand, the background flows in regions AR1 and AR2 are obtained as indicated by the arrows in FIG. 26(a). In region AR1, which contains the approaching passenger car VC1, the optical flow points to the lower right as in FIG. 25(a), whereas the background flow points left as in FIG. 26(a); the two are completely different. Region AR1 is therefore judged to contain an approaching object. In region AR2, which contains the stationary passenger car VC2, the optical flow points left as in FIG. 25(a), and the background flow likewise points left as in FIG. 26(a); the two are very similar. Region AR2 is therefore judged to be background rather than an approaching object. FIG. 26(b) shows the result of this processing; the regions enclosed by rectangles are those detected as approaching objects. Thus, according to the present invention, and unlike the conventional example, approaching objects can be detected reliably even on a curve.
[0136]
Vehicle monitoring apparatuses that use a road surface model are described in Japanese Laid-Open Patent Publications No. 2000-74645 and No. 2001-266160. In these techniques, however, the road surface model is not used to obtain a background flow as in the present invention. Nor are they aimed at detecting approaching objects while traveling on a curve, so their problems differ from those addressed by the present invention.
[0137]
Specifically, the former detects optical flows generated by other vehicles within a monitoring region and uses the detected optical flows to monitor the relative relationship between the host vehicle and surrounding vehicles. The feature of that technique is to limit the region in which optical flows are detected in order to shorten the processing time, and the road surface model is used for that purpose; it does not use a space model for detecting approaching-object flows as the present invention does. In fact, since that technique uses the virtual-FOE method of the fifth conventional example described above to detect approaching-object flows on curves, it suffers from the same problems as the fifth conventional example: it cannot detect approaching objects accurately on curves without white lines.
[0138]
The latter uses the three-dimensional motion of each point on the screen. That is, for each point on the screen, the optical flow, a two-dimensional motion on the screen, is first obtained. Then, based on the obtained optical flow and the vehicle motion information, the three-dimensional motion of each point in the real world is calculated. By tracking this three-dimensional motion over time, a space model of the space in which the vehicle is actually traveling is estimated. Within the space model estimated in this way, anything whose motion differs from that of the road surface is detected as an obstacle. However, this technique computes the motion of every point fully in three dimensions, so it is extremely costly computationally and difficult to realize.
[0139]
<Hardware configuration example>
FIG. 27 shows an example of a hardware configuration for realizing the present invention. In FIG. 27, an image captured by the camera 2 installed, for example, at the rear of the vehicle is converted into a digital signal by the image input unit 21 in the image processing device 20 and stored in the frame memory 22. The DSP 23 then detects optical flows from the digitized image signal stored in the frame memory 22. The detected optical flows are supplied to the microcomputer 30 via the bus 43. Meanwhile, the vehicle speed sensor 41 measures the traveling speed of the vehicle, and the steering angle sensor 42 measures the steering angle of the vehicle. Signals representing the measurement results are supplied to the microcomputer 30 via the bus 43.
[0140]
The microcomputer 30 includes a CPU 31, a ROM 32 storing a predetermined control program, and a RAM 33 storing the results of computations by the CPU 31, and determines whether an approaching object is present in the image captured by the camera 2.
[0141]
Specifically, the CPU 31 first estimates the motion of the vehicle from the traveling-speed signal and the steering-angle signal supplied from the vehicle speed sensor 41 and the steering angle sensor 42. Next, based on the estimated vehicle motion, it estimates the trajectory along which the vehicle has traveled so far. Since past trajectory information is stored in the RAM 33, the CPU 31 obtains the trajectory up to the current time by connecting the estimated vehicle motion with the past trajectory information stored in the RAM 33. This new trajectory information is stored in the RAM 33.
[0142]
The CPU 31 then estimates the space model using the trajectory information stored in the RAM 33 and obtains the background flow. By comparing the obtained background flow with the optical flow supplied from the image processing device 20, it detects approaching-object flows and thereby detects approaching objects.
[0143]
(Third Embodiment)
FIG. 28 is a block diagram conceptually showing the basic configuration of a vehicle monitoring apparatus according to the third embodiment of the present invention. In FIG. 28, components common to FIG. 1 are given the same reference numerals as in FIG. 1, and their detailed description is omitted here. The difference from the first embodiment is that the optical flow detection unit 12A detects optical flows using the background flow estimated by the background flow estimation unit 15. This makes it possible to reduce the computation time of the optical flow and to improve its accuracy.
[0144]
The optical flow Vi, the motion of an object on the camera image, is expressed as the sum of the motion Vb due to the actual movement of the target object and the relative motion Vc due to the movement of the camera itself:
Vi = Vb + Vc
[0145]
If the target is not moving, that is, if it is background, Vb is zero and Vc equals the background flow. If the target is a moving object, Vb depends on the movement vector of the object, while Vc is still approximately equal to the background flow. This means that, as long as the movement of the object is not too large, its optical flow lies in the neighborhood of the background flow. Therefore, when the optical flow is computed, the search area can be narrowed by searching only the neighborhood of the background flow, reducing the computation time.
[0146]
The same idea can also be used to improve the detection accuracy of the optical flow. This is particularly effective when hierarchical (pyramid) images are used for optical flow detection. As described above, hierarchical images are used to reduce the computation time, but the higher the level, the lower the image resolution, and the more likely template matching is to produce errors. If an error occurs at a higher level and leads to a wrong match, that error is not absorbed at the lower levels, and an optical flow different from the actual one is detected.
[0147]
FIG. 29 schematically shows block matching results on hierarchical images. In FIG. 29, image 1 is the original image 0 reduced in size and low-pass filtered, and image 2 is image 1 further reduced and low-pass filtered. Each rectangle in the images represents a block on which matching was performed, and the number inside it is the value of the difference evaluation function between that block G and the template block F. That is, letting the block size be m × n, the luminance of each pixel of the template block F be f(i, j), and the luminance of each pixel of the block G be g(x, y, i, j), the difference evaluation function E(x, y) is given by (Equation 10) or (Equation 11).
[Equation 10]
[Equation 11]
The block for which this difference evaluation function E(x, y) takes its minimum value is the block corresponding to the template block F, and it gives the optical flow in that image.
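(Equation 10) and (Equation 11) are not reproduced in this text; given the definitions above, they are presumably the usual sum-of-absolute-differences and sum-of-squared-differences criteria, for example:

```latex
E(x,y)=\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl|f(i,j)-g(x,y,i,j)\bigr|
\qquad\text{or}\qquad
E(x,y)=\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(f(i,j)-g(x,y,i,j)\bigr)^{2}
```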
[0148]
Block matching is first performed on image 2, which has the lowest resolution. Suppose the template block F is moving toward the upper right of the image; the match with block G(1, −1) should then be the best and the difference evaluation function E(1, −1) should be the minimum. Suppose, however, that because of the loss of resolution due to the hierarchy, the texture, the aperture problem, and so on, E(−1, 1) becomes the minimum as in FIG. 29. Then, in image 1, the block corresponding to block G(−1, 1) of image 2 is set as the search area, and the block corresponding to the block that minimizes the difference evaluation function within that area is in turn set as the search area in the original image 0. The correct optical flow, however, does not correspond to any block within this search area, and an incorrect optical flow is therefore detected.
[0149]
In this embodiment, this problem is solved by also adding blocks near the background flow to the search area when block matching is performed on the original image 0, which has the highest resolution. As described above, the optical flow is very likely to lie near the background flow. Moreover, what matters for approaching-object flow detection is the difference between the optical flow and the background flow; in other words, when no optical flow is found near the background flow, the point belongs to a moving object. That is, by including the neighborhood of the background flow in the search area, it is possible to determine whether the optical flow corresponds to background. For example, when the block that minimizes the difference evaluation function lies near the background flow, the corresponding optical flow is background; when a block outside the neighborhood of the background flow has a smaller difference evaluation value than any block near the background flow, the optical flow can be judged to belong to a moving object.
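A simplified sketch of this finest-level matching step follows. The SAD criterion, the search-window radius, and the way the coarse-level prediction and the background-flow prediction are combined are illustrative assumptions; the key point is that candidate blocks around the background-flow end point are always evaluated, and the location of the winning block decides whether the point is treated as background.

```python
import numpy as np

def sad(template, block):
    """Sum of absolute differences between two equally sized image blocks."""
    return float(np.sum(np.abs(template.astype(np.int32) - block.astype(np.int32))))

def match_finest_level(prev_img, cur_img, x, y, bsize, coarse_flow, bg_flow, radius=2):
    """Match the block at (x, y) of the previous frame against candidate blocks
    around the coarse-level prediction and around the background-flow prediction,
    and report whether the best match lies near the background flow."""
    template = prev_img[y:y + bsize, x:x + bsize]
    h, w = cur_img.shape[:2]
    best = None                                   # (cost, flow, from_background)
    for base, from_bg in ((coarse_flow, False), (bg_flow, True)):
        bx, by = int(round(x + base[0])), int(round(y + base[1]))
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cx, cy = bx + dx, by + dy
                if 0 <= cx <= w - bsize and 0 <= cy <= h - bsize:
                    cost = sad(template, cur_img[cy:cy + bsize, cx:cx + bsize])
                    if best is None or cost < best[0]:
                        best = (cost, (cx - x, cy - y), from_bg)
    if best is None:
        return None, False
    return best[1], best[2]   # from_background=True -> point treated as background
```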
[0150]
(Fourth Embodiment)
FIG. 30 is a diagram conceptually showing the basic configuration of a vehicle monitoring device according to the fourth embodiment of the present invention. In FIG. 30, components common to FIG. 1 are given the same reference numerals as in FIG. 1, and their detailed description is omitted here. The difference from the first embodiment is that the background flow estimation unit 15 is omitted, and the approaching object detection unit 16A detects an approaching object by obtaining the spatial motion of the object. In the approaching object detection unit 16A, the three-dimensional motion estimation unit 16c uses the optical flow Vi actually obtained from the camera image by the optical flow detection unit 12, the motion vector T of the vehicle obtained by the own-vehicle motion estimation unit 13, and the space model estimated by the space model estimation unit 14 to obtain the spatial motion of the object, rather than a planar motion such as an optical flow.
[0151]
The processing in the three-dimensional motion estimation unit 16c will be described with reference to FIGS. 31 and 32. FIG. 31 schematically shows the relationship between the vehicle 1 and the space model at time t-1, and space models MS, MW1, and MW2 similar to those in FIG. 17 are assumed. Here, however, the space model is assumed to be expressed in the real-world three-dimensional coordinate system. A camera image taken by the camera 2 is also shown in the figure. As described above, by using the perspective projection transformation (Equations 1 and 2) and the space models MS, MW1, and MW2, the point Ri obtained by projecting an arbitrary point Ci on the camera image onto the real-world three-dimensional coordinates can be obtained. T is the motion vector of the vehicle 1 from time t-1 to time t obtained by the own-vehicle motion estimation unit 13.
[0152]
FIG. 32 schematically shows the relationship between the vehicle 1 and the space model at time t. In general, the space model changes as time passes. Suppose that the optical flow detection unit 12 has found that the point Ci at time t-1 corresponds to the point NextCi at time t. Then, in the same way as the point Ri was obtained for the point Ci, the point NextRi obtained by projecting NextCi at time t onto the real-world three-dimensional coordinates can be obtained. Accordingly, the vector VRi by which the point Ri has moved up to time t can be obtained by the following equation.
VRi = NextRi − Ri
[0153]
Since the motion of the vehicle 1 from time t-1 to time t has been obtained as the vector T, the motion of the point Ci on the camera image in the real-world three-dimensional coordinates, that is, the spatial flow, can be obtained by computing the vector (VRi − T). By performing this processing for every point on the camera image, the motion of every point on the camera image in the real-world three-dimensional coordinates can be obtained.
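The following is a minimal sketch, under assumed interfaces, of how the spatial flow described above could be computed: the matched image points are back-projected onto the space model at the two instants, and the vehicle motion T is subtracted. The helper project_to_space_model and the single-plane model are illustrative assumptions, not the patent's implementation.

    import numpy as np

    def project_to_space_model(point_img, camera_pose, plane_normal, plane_d, f=1.0):
        # Back-project an image point (xi, yi) onto a planar space model
        # (e.g., the road surface plane n.X + d = 0) in world coordinates.
        xi, yi = point_img
        R, t = camera_pose                       # camera-to-world rotation (3x3) and camera center (3,)
        ray_cam = np.array([xi / f, yi / f, 1.0])
        ray_world = R @ ray_cam                  # viewing ray direction in world coordinates
        s = -(plane_normal @ t + plane_d) / (plane_normal @ ray_world)
        return t + s * ray_world                 # intersection of the ray with the model plane

    def spatial_flow(ci, next_ci, pose_prev, pose_cur, plane_normal, plane_d, T):
        # Ri and NextRi are the world-coordinate projections of the matched image
        # points at times t-1 and t; VRi = NextRi - Ri; spatial flow = VRi - T.
        Ri = project_to_space_model(ci, pose_prev, plane_normal, plane_d)
        NextRi = project_to_space_model(next_ci, pose_cur, plane_normal, plane_d)
        VRi = NextRi - Ri
        return VRi - np.asarray(T)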
[0154]
Of course, the space model may be obtained by various sensors and communications as in the first embodiment, or other space models may be used.
[0155]
The approaching object flow detection unit 16d determines whether or not each point is approaching the own vehicle based on the motion of each point on the camera image in the real-world three-dimensional coordinate system obtained by the three-dimensional motion estimation unit 16c, that is, the spatial flow. That is, when the vector (VRi − T) points toward the vehicle, the point Ci is determined to be an approaching-object flow; otherwise, the point Ci is determined not to be an approaching-object flow.
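A minimal sketch of this decision rule might look as follows; the angular tolerance and the use of the vector from the point toward the vehicle position are assumptions introduced only to make the test concrete.

    import numpy as np

    def is_approaching_flow(VRi, T, point_world, vehicle_pos, angle_tol_deg=30.0):
        # The point is an approaching-object candidate if its motion relative to
        # the vehicle, (VRi - T), points toward the vehicle position.
        rel_motion = np.asarray(VRi) - np.asarray(T)
        to_vehicle = np.asarray(vehicle_pos) - np.asarray(point_world)
        denom = np.linalg.norm(rel_motion) * np.linalg.norm(to_vehicle)
        if denom == 0.0:
            return False
        cos_angle = float(rel_motion @ to_vehicle) / denom
        return cos_angle > np.cos(np.radians(angle_tol_deg))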
[0156]
Further, an approaching object can be detected by having the noise removal unit 16e perform the same processing as the noise removal unit 16b according to the first embodiment.
[0157]
In the description so far, the space model has been assumed to be described in the real-world three-dimensional coordinate system, but it may instead be expressed in the camera coordinate system. In this case, the points Ci and NextCi on the camera image correspond to the points Ri and NextRi in the camera coordinate system, respectively. Since the origin of the camera coordinate system moves by an amount corresponding to the motion vector T of the vehicle 1 between time t-1 and time t, the motion VRi of the point Ci on the camera image can be obtained as follows.
VRi = NextRi − Ri − T
[0158]
When this vector VRi points toward the origin, the point Ci is determined to be an approaching-object flow; otherwise, the point Ci is determined not to be an approaching-object flow. An approaching object can be detected by performing this processing for every point on the camera image and having the noise removal unit 16e perform the noise removal processing.
[0159]
<Combined Use with an Obstacle Sensor>
The present invention detects approaching objects using image information. Compared with the case of using an obstacle sensor based on laser, infrared, or millimeter waves, this makes more elaborate determinations possible, such as whether an object is approaching or moving away. However, when an obstacle exists in the immediate vicinity of the vehicle, it is more important to detect, quickly and accurately, the simple information of whether or not an obstacle is present than to obtain such elaborate information.
[0160]
Therefore, the area near the vehicle may be covered by detection with an obstacle sensor, while the remaining, wider area is covered by obstacle detection using the method according to the present invention. This makes it possible to monitor the surroundings of the vehicle quickly and accurately.
[0161]
FIG. 33 shows installation examples of obstacle sensors. In (a) of the figure, obstacle sensors 51 using laser, infrared, millimeter waves, or the like are attached to the bumper, the emblem, and the like of the vehicle 1. In (b), obstacle sensors 52 are installed at the four corners of the vehicle 1, where the possibility of a contact accident is highest. The obstacle sensors 52 may be installed below or above the bumper, or may be built into the bumper or the vehicle body itself.
[0162]
Since obstacle sensors are strongly affected by weather such as rain, the use of the obstacle sensor may be suspended and the method according to the present invention may be used instead when rainfall is recognized from operating information such as that of the windshield wipers. This makes it possible to improve the detection accuracy.
[0163]
Alternatively, an area in which the method according to the present invention has detected an approaching object may be checked again by the obstacle sensor. This improves the detection accuracy for approaching objects and prevents false alarms from being issued.
[0164]
Furthermore, only the areas in which an obstacle has been determined to exist as a result of detection by the obstacle sensor may be examined by the method according to the present invention to determine whether the obstacle is an approaching object. This makes it possible to improve the processing speed.
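The combinations described in the preceding paragraphs could be organized, for example, as in the following sketch; the sensor interfaces, the near-field range, and the rain flag are assumptions introduced only to make the control flow concrete.

    NEAR_FIELD_RANGE_M = 3.0   # assumed boundary between sensor-based and vision-based monitoring

    def monitor_surroundings(obstacle_sensor, vision_detector, raining):
        alerts = []
        # Near the vehicle, a simple presence check by the obstacle sensor is fastest,
        # but the sensor is skipped in rain, where it is unreliable.
        if not raining and obstacle_sensor.detect(max_range_m=NEAR_FIELD_RANGE_M):
            alerts.append("obstacle_near_vehicle")
        # Over the wider area, the image-based method classifies approaching objects.
        for region in vision_detector.approaching_object_regions():
            # Optionally cross-check a vision detection with the sensor to suppress false alarms.
            if raining or obstacle_sensor.confirm(region):
                alerts.append(("approaching_object", region))
        return alerts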
[0165]
Note that all or part of the functions of the respective means of the monitoring device of the present invention may be realized by dedicated hardware, or may be realized in software by a computer program.
[0166]
[Effects of the Invention]
As described above, according to the present invention, a vehicle monitoring device that detects approaching objects using optical flow does not lose detection accuracy even on a curve, and, even when traveling straight, can also detect objects running parallel to the own vehicle and distant approaching objects whose movement on the screen is small.
[Brief Description of the Drawings]
FIG. 1 is a block diagram showing the configuration of a monitoring device according to the first embodiment of the present invention.
FIG. 2 is a diagram for explaining a method of estimating the own-vehicle motion, and is a conceptual diagram for explaining the Ackermann model.
FIG. 3 is a diagram for explaining a method of estimating the own-vehicle motion, and is a conceptual diagram showing the motion of the vehicle in two dimensions.
FIG. 4 is a flowchart showing the flow of background flow estimation processing.
FIG. 5 is a diagram for explaining a method of estimating the background flow, and is a conceptual diagram showing the motion of the vehicle when it turns.
FIG. 6 is a block diagram conceptually showing the configuration of the approaching object detection unit.
FIG. 7 is a flowchart showing the operation of the flow comparison unit.
FIG. 8 is a flowchart showing the operation of the noise removal unit.
FIG. 9 is a conceptual diagram for explaining the processing in the noise removal unit.
FIG. 10 is a conceptual diagram for explaining a method of estimating the background flow.
FIG. 11 is a flowchart showing the processing flow of another example of background flow estimation.
FIG. 12 is a flowchart showing the processing flow of another example of background flow estimation.
FIG. 13 is an example of a camera image on which a background flow according to the present invention is displayed.
FIG. 14 is a block diagram showing the configuration of a monitoring device according to the second embodiment of the present invention.
FIG. 15 is a diagram showing an example of a space model according to the present invention.
FIG. 16 is a diagram showing the relationship between the distance L and the background flow in the space model of FIG. 15.
FIG. 17 is a diagram showing another example of a space model according to the present invention.
FIG. 18 is a diagram showing the relationship between the width W and the background flow in the space model of FIG. 17.
FIG. 19 is a flowchart showing the flow of background flow estimation processing according to the second embodiment of the present invention.
FIG. 20 is a diagram for explaining a method of estimating the background flow according to the second embodiment of the present invention.
FIG. 21 is a diagram for explaining a method of estimating the background flow according to the second embodiment of the present invention.
FIG. 22 is a diagram for explaining a method of estimating the background flow according to the second embodiment of the present invention.
FIG. 23 is a diagram for explaining a method of estimating the background flow according to the second embodiment of the present invention.
FIG. 24 is an example of an image from a camera installed on a vehicle.
FIG. 25 is a diagram showing the result of detecting an approaching object from the camera image of FIG. 24 by the first conventional example.
FIG. 26 is a diagram showing the result of detecting an approaching object from the camera image of FIG. 24 by the present invention.
FIG. 27 is a diagram showing an example of a hardware configuration according to the present invention.
FIG. 28 is a block diagram showing the configuration of a monitoring device according to the third embodiment of the present invention.
FIG. 29 is a conceptual diagram for explaining a problem in layered images.
FIG. 30 is a block diagram showing the configuration of a monitoring device according to the fourth embodiment of the present invention.
FIG. 31 is a diagram for explaining processing according to the fourth embodiment of the present invention.
FIG. 32 is a diagram for explaining processing according to the fourth embodiment of the present invention.
FIG. 33 is a diagram showing examples of installation positions of obstacle sensors in the present invention.
FIG. 34 is an example of a camera image showing the rear of the vehicle, for explaining the problem of the first conventional example during curve traveling.
FIG. 35 is a diagram showing the relationship between a camera image and real-world three-dimensional coordinates.
FIG. 36 is a diagram showing the relationship between a camera image and the real-world coordinate system in a conventional example.
FIG. 37 is a diagram showing the relationship between a camera image and the real-world coordinate system in the present invention.
FIG. 38 is an example in which the optical flow of an approaching vehicle is superimposed on a camera image taken while traveling on a curve.
FIG. 39 shows the background flow of the image of FIG. 38.
FIG. 40 is an example in which the flow obtained by subtracting the background flow is superimposed on a camera image.
FIG. 41 is a conceptual diagram for explaining a method of detecting the motion of the vehicle using camera images.
[Explanation of Symbols]
1 Vehicle
2 Camera
11 Camera
12, 12A Optical flow detection unit
13 Own-vehicle motion estimation unit
14, 14A Space model estimation unit
15 Background flow estimation unit
16, 16A Approaching object detection unit
51, 52 Obstacle sensor
Vi Optical flow
T Motion vector of the vehicle
MS Road surface model
MW, MW1, MW2 Wall surface model
[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a vehicle monitoring technique for monitoring the situation around a vehicle using a camera and detecting an approaching object.
[0002]
[Prior Art]
Conventionally, various approaches have been taken for a vehicle monitoring device that monitors the situation around a vehicle and detects an approaching object.
[0003]
One of them uses an obstacle sensor such as a radar. However, although this method can reliably detect an obstacle, it is not suitable for more elaborate judgments such as whether the obstacle is approaching or moving away. In addition, since the influence of rainfall is large and the detection range is relatively narrow, it is difficult to detect an approaching object with the obstacle sensor alone.
[0004]
On the other hand, approaches using camera images have also been pursued. At present these are not as reliable as radar, but because digitized image information is easy to process, they allow more elaborate judgments such as whether an obstacle is approaching or moving away. Further, since the detection range is determined by the angle of view and the resolution of the camera, a very wide area can be monitored.
[0005]
As methods using camera images, a stereo method using a plurality of camera images and a method using an optical flow are widely known. The stereo method uses the parallax between cameras, but has the problems that calibration between the cameras is complicated and that the need for a plurality of cameras increases cost.
[0006]
A vehicle monitoring apparatus using an optical flow is disclosed in, for example, Patent Document 1 (first conventional example). In this apparatus, the camera is installed facing the rear of the vehicle, a plurality of areas divided in the horizontal direction are set on the screen, and in each area an optical flow is extracted whose magnitude is equal to or larger than a predetermined threshold and whose direction matches the movement on the image that an approaching object would produce. An approaching object is then determined based on this optical flow.
[0007]
In addition, several methods have already been proposed for handling curve traveling.
[0008]
For example, in Patent Document 2 (second conventional example), a turning vector is obtained from the steering angle and the traveling speed of the vehicle, and the optical flow is corrected by subtracting the turning vector from the actually obtained optical flow. After the influence of the curve has been removed by this correction, a moving object is extracted.
[0009]
In Patent Document 3 (third conventional example), the optical flow is corrected based on the outputs of a vehicle speed sensor and a yaw rate sensor and on a correspondence, measured in advance, between image position and distance, so as to eliminate the influence of the curve and extract the moving object.
[0010]
Further, among approaching-object detection methods using an optical flow, methods using the focus of expansion (FOE), that is, the point at infinity in the image, are widely used (for example, see Patent Document 4), and several such methods have been proposed.
[0011]
For example, in Patent Document 5 (fourth conventional example), the optical flow is corrected by an amount corresponding to the movement of the FOE.
[0012]
In Patent Document 6 (fifth conventional example), the screen is divided into a plurality of parts, and a virtual FOE is obtained based on white line information obtained by white line determining means.
[0013]
Patent Document 7 (sixth conventional example) discloses an approaching object detection method using an optical flow that is not affected by curves. In this method, the squared difference εi² between the motion vector Vdi theoretically calculated from the motion parameters of the camera and the motion vector Vi detected from the image is calculated by the following equation, using the vectors r1i and r2i representing the reliability of the motion vector detected from the image, and a moving object is detected using the value of this difference.
εi² = ((Vdi − Vi) · r1i)² + ((Vdi − Vi) · r2i)²
[0014]
[Patent Document 1]
Japanese Patent No. 30111566
[Patent Document 2]
JP 2000-168442 A
[Patent Document 3]
JP-A-6-282655
[Patent Document 4]
JP-A-7-50769
[Patent Document 5]
JP 2000-251199 A
[Patent Document 6]
JP 2000-90243 A
[Patent Document 7]
Japanese Patent No. 2882136
[0015]
[Problems to be Solved by the Invention]
However, the above-described related art has the following problems.
[0016]
First, the first conventional example assumes that the vehicle is traveling straight, and it is difficult to use it on a curve. That is, an approaching object is detected using the "direction of movement on the image that an approaching object would produce", but this "direction of movement" cannot be determined uniquely when the vehicle is traveling on a curve. This will be described with reference to FIG. 34.
[0017]
Assuming that the camera faces rearward, an image such as that shown in FIG. 34A is captured when the vehicle is traveling on a curve. In the first conventional example, as shown in FIG. 34A, a region L and a region R are set by dividing the image in the horizontal direction. Here, if there is an approaching vehicle in the area AR1 within the region L as shown in FIG. 34B, the "assumed direction of movement of the approaching vehicle" points to the lower right, as indicated by the arrow in the figure. On the other hand, in an area AR2 that is in the same region L but differs from the area AR1 in its vertical position, the "assumed direction of movement of the approaching vehicle" points to the lower left, as indicated by the arrow in the figure, and is completely different from the direction of movement in the area AR1. Thus, when the vehicle is traveling on a curve, the "assumed direction of movement of the approaching vehicle" is not uniform even within the same region L but depends on the position, and it therefore becomes difficult to detect the approaching vehicle.
[0018]
Also, even when the vehicle is traveling on a straight road, the magnitude of the optical flow differs significantly between the upper part and the lower part of the screen. That is, the upper part of the screen shows an area far away from the own vehicle, so the detected optical flow is very small. On the other hand, the lower part of the screen shows a region very close to the own vehicle, so the detected optical flow is comparatively very large.
[0019]
For this reason, if processing is performed using the same threshold value for the upper and lower parts of the screen, the detection accuracy for approaching vehicles is likely to deteriorate. For example, if the threshold value is determined based on the small flows in the upper part of the screen, the threshold value becomes very small, and if the processing in the lower part of the screen is performed using this threshold value, noise is likely to appear. Conversely, if the threshold value is determined based on the large flows at the bottom of the screen, the threshold value becomes very large, and if processing at the top of the screen is performed using this threshold value, most optical flows fall below the threshold value and cannot be detected.
[0020]
Further, in the first conventional example, an approaching vehicle is detected using only optical flows whose magnitude is equal to or larger than a predetermined value. Therefore, a vehicle running in parallel at substantially the same speed as the own vehicle cannot be detected, since the magnitude of its optical flow becomes almost zero.
[0021]
Further, in the second conventional example, the magnitude and direction of the turning vector caused by the turning of the own vehicle differ depending on the three-dimensional position of the target point relative to the camera. For this reason, the turning vector cannot be estimated unless the correspondence between a point on the camera image and a point in the real-world three-dimensional space is determined.
[0022]
This will be described with reference to FIG. 35. FIG. 35 shows the relationship between a camera image taken by the camera 2 and the three-dimensional coordinates of the real world. The Xi axis is taken in the horizontal direction and the Yi axis in the vertical direction of the camera image, and the Xw, Yw, and Zw axes of the real-world coordinate system are taken as shown in FIG. 35. That is, the plane Xw-Zw is a plane parallel to the road surface, the Xw direction is the left-right direction of the vehicle, the Yw direction is the direction perpendicular to the road surface, and the Zw direction is the front-rear direction of the vehicle. In addition, a camera coordinate system (Xc, Yc, Zc) is defined in which the focal point of the camera is the origin and the optical axis direction of the camera is the Zc axis. Of course, these axis directions are not limited to this choice. These coordinate systems are related by the perspective projection transformation (Equation 1) and the coordinate transformation (Equation 2).
(Equation 1)
(Equation 2)
[0023]
Here, f is the focal length of the camera, and r is a constant determined by the internal parameters of the camera and the installation position of the camera, that is, by the positional relationship of the camera coordinate system in the real-world coordinate system, and is known. From these relational expressions, it can be seen that the real-world three-dimensional coordinates corresponding to an arbitrary point on the camera image lie on a certain straight line passing through the focal point of the camera; however, unless further information is available, the position cannot be determined uniquely.
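The bodies of Equation 1 and Equation 2 are not reproduced in this text (they appear as images in the original publication). For reference, a standard pinhole-camera formulation consistent with the surrounding description, with f the focal length and the constants r denoting the entries of the world-to-camera transformation, would be the following; the exact notation of the patent may differ.

    \[ X_i = f\,\frac{X_c}{Z_c}, \qquad Y_i = f\,\frac{Y_c}{Z_c} \quad \text{(assumed form of Equation 1)} \]
    \[ \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + \mathbf{t}, \qquad R = (r_{jk}) \in \mathbb{R}^{3\times 3} \quad \text{(assumed form of Equation 2)} \]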
[0024]
That is, as shown in FIG. 36, the transformation from the real-world coordinate system to a point on the camera image can be performed using the perspective projection transformation (Equation 1) and the coordinate transformation (Equation 2), but the transformation from a point on the image to the real-world coordinate system is impossible with these relational expressions alone. The turning vector in the second conventional example must be obtained for each point in the camera coordinate system; however, as shown in FIG. 36, this conversion is impossible unless other conditions are available.
[0025]
In the third conventional example, as in the second conventional example, the method of obtaining the correspondence between image position and distance is not mentioned, and the method cannot be realized as it is.
[0026]
Further, in the fourth conventional example, the FOE exists only when the vehicle is traveling straight ahead and does not exist at all on a curve. Therefore, when the optical flow is corrected using the FOE on a sharp curve that cannot be approximated by straight traveling, the error becomes very large, and the method is not practical.
[0027]
Naturally, the fifth conventional example cannot be used when traveling on a road without white lines.
[0028]
The sixth conventional example assumes that objects that are not moving objects are dominant in the image, so there is a problem that detection fails when a large moving object such as a truck exists near the camera.
[0029]
In view of the above problems, an object of the present invention is to enable a vehicle monitoring device that detects an approaching object using an optical flow to accurately detect the approaching object even when traveling on a curve.
[0030]
[Means for Solving the Problems]
In order to solve the above-described problems, the present invention uses a camera that captures the surroundings of a vehicle, obtains an optical flow from an image captured by the camera, obtains, based on the movement of the vehicle, a background flow, which is the optical flow of the image when it is assumed to be the background, and compares the optical flow with the background flow to detect the motion of an object around the vehicle.
[0031]
According to the present invention, since the background flow is the optical flow obtained when the camera image is assumed to be the background, the motion of an object around the vehicle can be detected with high accuracy by comparing this background flow with the optical flow actually obtained from the camera image. Further, even when the vehicle is traveling on a curve, detection is performed by comparing the background flow and the optical flow at each point on the image, so that an approaching object can be detected accurately. Also, for a vehicle running in parallel or a distant approaching object, for which the optical flow obtained from the image becomes small, the optical flow differs greatly from the background flow at that point on the image, so that such objects can be detected easily.
[0032]
BEST MODE FOR CARRYING OUT THE INVENTION
According to a first aspect of the present invention, there is provided a monitoring device using a camera that captures the surroundings of a vehicle, wherein an optical flow is obtained from an image captured by the camera, a background flow, which is the optical flow of the image when it is assumed to be the background, is obtained based on the movement of the vehicle, and the optical flow is compared with the background flow to detect the motion of an object around the vehicle.
[0033]
According to a second aspect of the present invention, there is provided the monitoring device of the first aspect, wherein the background flow is obtained using a space model obtained by modeling the space that the camera is shooting.
[0034]
According to a third aspect of the present invention, there is provided the monitoring device of the second aspect, wherein the space model is generated based on distance data of each object captured by the camera.
[0035]
According to a fourth aspect of the present invention, there is provided the monitoring device of the third aspect, wherein the distance data is measured by an obstacle sensor provided in the vehicle.
[0036]
According to a fifth aspect of the present invention, there is provided the monitoring device of the second aspect, wherein the space model includes at least a road surface model obtained by modeling the traveling road surface.
[0037]
According to a sixth aspect of the present invention, there is provided the monitoring device of the second aspect, wherein the space model includes at least a wall surface model assuming a wall surface perpendicular to the traveling road surface.
[0038]
According to a seventh aspect of the present invention, there is provided the monitoring device of the sixth aspect, wherein the wall surface is assumed to be on the rear side of the vehicle.
[0039]
According to an eighth aspect of the present invention, there is provided the monitoring device of the first aspect, wherein, when the optical flow is compared with the background flow, it is determined whether or not the magnitude of the optical flow is larger than a predetermined value; if so, the comparison is performed using the angle difference, and otherwise the comparison is performed without using the angle difference.
[0040]
According to a ninth aspect of the present invention, there is provided the monitoring device of the eighth aspect, wherein the predetermined value is set according to the magnitude of the background flow at that position on the image.
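A minimal sketch of the comparison rule described in the eighth and ninth aspects could look as follows; the scaling factor, the minimum magnitude, the angle threshold, and the fallback comparison based on the vector difference are illustrative assumptions.

    import numpy as np

    def flows_disagree(optical_flow, background_flow, scale=0.5, min_mag=1.0, angle_thresh_deg=45.0):
        # The magnitude threshold is set according to the background flow at this pixel
        # (ninth aspect); the angle difference is used only when the optical flow is
        # large enough for its direction to be reliable (eighth aspect).
        v = np.asarray(optical_flow, dtype=float)
        b = np.asarray(background_flow, dtype=float)
        threshold = max(scale * np.linalg.norm(b), min_mag)
        if np.linalg.norm(v) > threshold:
            cos_angle = (v @ b) / (np.linalg.norm(v) * np.linalg.norm(b) + 1e-9)
            angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
            return angle_deg > angle_thresh_deg
        # For small flows, compare without the angle (here: magnitude of the difference).
        return np.linalg.norm(v - b) > threshold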
[0041]
According to a tenth aspect of the present invention, there is provided the monitoring device of the first aspect, wherein approaching-object candidate flows are identified from the optical flows by comparing the optical flow with the background flow, an approaching-object candidate area is generated by associating neighboring approaching-object candidate flows with each other, and, when the area of the approaching-object candidate area is smaller than a predetermined value, the approaching-object candidate flows belonging to that candidate area are determined to be noise.
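The noise removal of the tenth aspect can be pictured, for example, as connected-component grouping followed by an area test, as in the sketch below; the 4-connectivity grouping and the area threshold are assumptions made only for illustration.

    def remove_small_candidate_regions(candidate_mask, min_area):
        # candidate_mask: 2D list of booleans marking approaching-object candidate flows.
        # Neighboring candidates are grouped (4-connectivity); groups whose area is
        # smaller than min_area are discarded as noise.
        h, w = len(candidate_mask), len(candidate_mask[0])
        visited = [[False] * w for _ in range(h)]
        kept = [[False] * w for _ in range(h)]
        for sy in range(h):
            for sx in range(w):
                if not candidate_mask[sy][sx] or visited[sy][sx]:
                    continue
                stack, region = [(sy, sx)], []
                visited[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and candidate_mask[ny][nx] and not visited[ny][nx]:
                            visited[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) >= min_area:
                    for y, x in region:
                        kept[y][x] = True
        return kept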
[0042]
According to an eleventh aspect of the present invention, there is provided a monitoring device using a camera that captures the surroundings of the vehicle, wherein an optical flow is obtained from an image captured by the camera, a space flow, which is the motion of a point on the image in real-world coordinates, is obtained based on the optical flow, the movement of the vehicle, and a space model obtained by modeling the space being photographed, and the motion of an object around the vehicle is detected based on the space flow.
[0043]
According to a twelfth aspect of the present invention, there is provided a monitoring method in which an optical flow is obtained from an image captured by a camera that captures the surroundings of the vehicle, a background flow, which is the optical flow of the image when it is assumed to be the background, is obtained based on the movement of the vehicle, and the optical flow is compared with the background flow to detect the motion of an object around the vehicle.
[0044]
According to a thirteenth aspect of the present invention, there is provided the monitoring method of the twelfth aspect, wherein the movement of the vehicle is estimated using the outputs of a vehicle speed sensor and a steering angle sensor provided in the vehicle.
[0045]
According to a fourteenth aspect of the present invention, there is provided a monitoring program that causes a computer to execute a procedure for obtaining an optical flow from an image captured by a camera that captures the surroundings of a vehicle, a procedure for obtaining, based on the movement of the vehicle, a background flow, which is the optical flow of the image when it is assumed to be the background, and a procedure for comparing the optical flow with the background flow and detecting the motion of an object around the vehicle.
[0046]
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[0047]
(First Embodiment)
In the first embodiment of the present invention, monitoring around the vehicle is performed as follows. First, an optical flow is obtained using images from a camera that captures the surroundings of the vehicle. Next, the correspondence between points on the camera image and real-world three-dimensional coordinates is estimated as a "space model". As shown in FIG. 37, by applying the perspective projection transformation to this space model, a point on the camera image can be accurately associated with real-world three-dimensional coordinates. Then, using this space model and the estimated own-vehicle motion information, the optical flow that each point on the image would produce if it were not a moving object but the background is calculated. The optical flow obtained in this way is called the "background flow". An approaching object is detected by comparing the background flow with the optical flow actually obtained from the image.
[0048]
In other words, the background flow is calculated while accurately taking into account the correspondence between points on the camera image and real-world three-dimensional coordinates. Therefore, according to the present embodiment, the motion of an object around the vehicle can be detected with higher accuracy than in the conventional examples. In addition, even when the vehicle is traveling on a curve, detection is performed by comparing the background flow and the optical flow at each point on the image, so that an approaching object can be detected accurately. Also, in the case of a vehicle running in parallel or a distant approaching object, for which the optical flow obtained from the image becomes small, the optical flow differs significantly from the background flow at that point, so that such objects can be detected.
[0049]
FIG. 1 is a block diagram conceptually showing the basic configuration of the vehicle monitoring device according to the present embodiment. The vehicle monitoring device according to the present embodiment detects the motion of objects around the vehicle using a camera 11 that captures the surroundings of the vehicle. Specifically, as shown in FIG. 1, the basic configuration comprises an optical flow detection unit 12 that calculates an optical flow Vi from an image captured by the camera 11, an own-vehicle motion estimation unit 13 that estimates the movement of the vehicle, a space model estimation unit 14 that estimates a model of the space being photographed by the camera 11, a background flow estimation unit 15 that estimates a background flow Vdi based on the own-vehicle motion and the space model, and an approaching object detection unit 16 that detects an approaching object by comparing the optical flow Vi with the background flow Vdi.
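The overall flow of FIG. 1 could be organized, per frame, roughly as in the following skeleton; the class and method names are assumptions used only to show how the units pass data to one another.

    def process_frame(prev_image, cur_image, wheel_speeds, steering_angle,
                      optical_flow_detector, motion_estimator, space_model_estimator,
                      background_flow_estimator, approaching_object_detector):
        # Optical flow detection unit 12: apparent motion between two frames.
        flows = optical_flow_detector.detect(prev_image, cur_image)
        # Own-vehicle motion estimation unit 13: vehicle motion from wheel speeds and steering.
        ego_motion = motion_estimator.estimate(wheel_speeds, steering_angle)
        # Space model estimation unit 14: model of the space seen by the camera.
        space_model = space_model_estimator.estimate()
        # Background flow estimation unit 15: flow each pixel would have if it were background.
        background_flows = background_flow_estimator.estimate(ego_motion, space_model)
        # Approaching object detection unit 16: compare the two flows at each point.
        return approaching_object_detector.detect(flows, background_flows)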
[0050]
The camera 11 is typically installed in the own vehicle and senses the situation around the vehicle. Alternatively, a camera attached to infrastructure such as a road, a traffic light, or a building, or a camera mounted on a surrounding vehicle, may be used together with the camera installed in the own vehicle or on its own. This is effective in situations where an approaching vehicle is difficult to see from the own vehicle, such as at an intersection with poor visibility.
[0051]
<Optical flow detection>
The optical flow detection unit 12 detects a vector indicating an apparent movement on the image, that is, an "optical flow", from two temporally different images captured by the camera 11. For the detection of the optical flow, a gradient method using a constraint equation on the spatiotemporal derivatives of the image and a block matching method using template matching are widely known ("Understanding Dynamic Scenes," Minoru Asada, Institute of Electronics, Information and Communication Engineers). Here, the block matching method is used.
[0052]
The block matching method generally requires an enormous processing time, since a full search is performed. For this reason, a method of layering images is widely used to reduce the processing time ("3D Vision," Gang Xu and Saburo Tsuji, Kyoritsu Shuppan). That is, from the given image, a layered set of images compressed vertically and horizontally to 1/2, 1/4, 1/8, ... is generated. By layering the images, two points that are far apart in a high-resolution (large) image become close together in a low-resolution (small) image. Therefore, template matching is first performed on the image with the lowest resolution, and template matching on the image with the next higher resolution is performed only in the vicinity of the optical flow obtained as a result. By repeating such processing, the optical flow in the original high-resolution image can finally be obtained, and since only local searches are required, the processing time can be reduced significantly.
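As an illustration of this coarse-to-fine strategy, the following Python/NumPy sketch performs SSD block matching on a small image pyramid. The block size, the search range, and the naive 1/2 subsampling are assumptions chosen for brevity, not values taken from the embodiment.

```python
import numpy as np

def block_matching_flow(prev_img, cur_img, block=8, search=2, init=None):
    """SSD block matching; searches +/-search pixels around an optional per-block
    initial guess. Returns a flow field of shape (H//block, W//block, 2) as (dx, dy)."""
    h, w = prev_img.shape
    nby, nbx = h // block, w // block
    flow = np.zeros((nby, nbx, 2))
    for by in range(nby):
        for bx in range(nbx):
            y0, x0 = by * block, bx * block
            tmpl = prev_img[y0:y0 + block, x0:x0 + block]
            gx, gy = (0, 0) if init is None else (int(init[by, bx, 0]), int(init[by, bx, 1]))
            best, best_v = np.inf, (gx, gy)
            for dy in range(gy - search, gy + search + 1):
                for dx in range(gx - search, gx + search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    ssd = float(np.sum((tmpl - cur_img[y1:y1 + block, x1:x1 + block]) ** 2))
                    if ssd < best:
                        best, best_v = ssd, (dx, dy)
            flow[by, bx] = best_v
    return flow

def hierarchical_flow(prev_img, cur_img, levels=3, block=8, search=2):
    """Coarse-to-fine matching on images halved 'levels' times: the result at a
    low-resolution level (doubled) seeds a small local search at the next level."""
    pyr = [(prev_img, cur_img)]
    for _ in range(levels):
        p, c = pyr[-1]
        pyr.append((p[::2, ::2], c[::2, ::2]))      # naive 1/2, 1/4, 1/8, ... images
    flow = None
    for p, c in reversed(pyr):                      # smallest image first
        if flow is not None:
            nby, nbx = p.shape[0] // block, p.shape[1] // block
            up = np.repeat(np.repeat(flow * 2.0, 2, axis=0), 2, axis=1)
            init = np.zeros((nby, nbx, 2))
            init[:min(nby, up.shape[0]), :min(nbx, up.shape[1])] = up[:nby, :nbx]
        else:
            init = None
        flow = block_matching_flow(p, c, block=block, search=search, init=init)
    return flow
```

Searching only a few pixels per level keeps the cost low, while the coarse levels allow large displacements to be recovered, which is the point of the layered approach described above.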
[0053]
<Vehicle motion estimation>
The own-vehicle motion estimating unit 13 obtains the rotational speeds of the left and right wheels of the own vehicle and the steering angle of the steering wheel, and thereby estimates the motion of the own vehicle. This estimation method will be described with reference to FIGS. 2 and 3. Here, a so-called Ackerman model (two-wheel model), which approximates the motion under the assumption that no tire slippage occurs, is used.
[0054]
FIG. 2 is a diagram showing the Ackerman steering model. Assuming that there is no side slip of the tires, when the steering wheel is turned, the vehicle turns around a point O on the extension of the axle of the rear wheels 3b. The turning radius Rs at the center of the rear wheels 3b is expressed as in (Equation 3) using the turning angle β of the front wheels 3a and the wheelbase l.
(Equation 3)
[0055]
FIG. 3 shows the movement of the vehicle in two dimensions. As shown in FIG. 3, assuming that the center of the rear wheels 3b has moved from Ct to Ct+1, the movement amount h is expressed as in (Equation 4) using the left and right wheel speeds Vl and Vr, or using the turning radius Rs and the rotation angle γ of the center of the rear wheels 3b.
(Equation 4)
From (Equation 3) and (Equation 4), the rotation angle γ of the movement is expressed as (Equation 5).
(Equation 5)
Therefore, the rotation amount α of the vehicle in the plane is represented by (Equation 6).
(Equation 6)
[0056]
Further, the movement vector T from Ct to Ct+1 is represented by (Equation 7), where the X axis is taken in the vehicle traveling direction and the Y axis is taken in the direction perpendicular to it.
(Equation 7)
From (Equation 6) and (Equation 7), if the left and right wheel speeds Vl and Vr and the steering angle β of the vehicle are known, the movement of the vehicle can be estimated.
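Since the bodies of (Equation 3) to (Equation 7) are not reproduced in this text, the following sketch uses the standard Ackerman (bicycle-model) relations as an assumption, with the rotation amount α taken to equal γ; the function name and parameters are likewise only illustrative.

```python
import math

def estimate_vehicle_motion(v_left, v_right, beta, wheelbase, dt):
    """Ackerman (two-wheel) approximation of the vehicle motion over one time step.

    Assumed standard relations (the patent's own Equations 3-7 are not shown here):
      Rs    = wheelbase / tan(beta)            turning radius at the rear-axle centre
      h     = (v_left + v_right) / 2 * dt      travelled arc length
      gamma = h / Rs                           rotation angle about the turning centre O
      T     = (Rs*sin(gamma), Rs*(1 - cos(gamma)))  translation in vehicle coordinates
    Returns (rotation alpha, translation T)."""
    h = 0.5 * (v_left + v_right) * dt
    if abs(beta) < 1e-6:                       # straight travel: no rotation
        return 0.0, (h, 0.0)
    rs = wheelbase / math.tan(beta)
    gamma = h / rs
    tx = rs * math.sin(gamma)                  # along the travelling direction
    ty = rs * (1.0 - math.cos(gamma))          # perpendicular to it
    return gamma, (tx, ty)

# e.g. wheel speeds 10 m/s, steering angle 0.05 rad, wheelbase 2.5 m, 33 ms frame interval
alpha, T = estimate_vehicle_motion(10.0, 10.0, 0.05, 2.5, 1.0 / 30.0)
```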
[0057]
Of course, instead of the wheel speeds and the steering angle of the steering wheel, the movement of the vehicle may be obtained directly using a vehicle speed sensor and a yaw rate sensor. The movement of the vehicle may also be obtained using GPS or map information.
[0058]
<Spatial model estimation>
The space model estimating unit 14 estimates a space model that models the space being photographed by the camera. As described above, the space model is used to determine the correspondence between points on the camera image and real-world three-dimensional coordinates. That is, an arbitrary point on the camera image can be associated, using (Equation 1) and (Equation 2), with a straight line in real-world space that passes through the focal position of the camera. By finding the intersection of this straight line in real-world space with the estimated space model, an arbitrary point on the camera image can be projected onto real-world three-dimensional coordinates.
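Since (Equation 1) and (Equation 2) themselves are not reproduced here, the following sketch assumes a simple pinhole camera and a space model consisting of a single road plane; intersecting the viewing ray with that plane is exactly the "straight line meets space model" operation described above. All symbols (f, R, t, plane_n, plane_d) are assumptions of the sketch.

```python
import numpy as np

def image_point_to_world(u, v, f, R, t, plane_n, plane_d):
    """Project the image point (u, v) onto a space model made of one plane
    n . Xw + d = 0 (e.g. the road surface).
    f: focal length in pixels; R: camera-to-world rotation (3x3); t: camera centre (3,).
    Returns the intersection point in world coordinates, or None if the viewing
    ray does not hit the plane in front of the camera."""
    ray_cam = np.array([u, v, f], dtype=float)   # viewing ray through (u, v) in camera coordinates
    ray_w = R @ ray_cam                          # the same ray expressed in world coordinates
    denom = plane_n @ ray_w
    if abs(denom) < 1e-9:                        # ray parallel to the plane
        return None
    s = -(plane_n @ t + plane_d) / denom         # ray parameter of the intersection
    if s <= 0:                                   # intersection behind the camera
        return None
    return t + s * ray_w

# Example: camera 1.2 m above the road plane Yw = 0, image v axis pointing down
R = np.diag([1.0, -1.0, 1.0])                    # camera x right, y down, z forward; world y up
t = np.array([0.0, 1.2, 0.0])
Xw = image_point_to_world(10.0, 80.0, 600.0, R, t, np.array([0.0, 1.0, 0.0]), 0.0)
```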
[0059]
Here, it is assumed that the spatial model is generated based on distance data for the objects photographed by the camera 11. The distance data can be measured using, for example, binocular stereo or a motion stereo method, or, when an obstacle sensor using a laser, ultrasonic waves, or infrared rays is mounted on the vehicle, by utilizing that sensor.
[0060]
In addition, when a camera fixed to infrastructure such as a building is used, the shape of the buildings and other structures reflected by the camera changes only very rarely, so the three-dimensional information of the space photographed by the camera is known. In this case, there is no need to estimate a spatial model, and a known spatial model may be determined in advance for each camera.
[0061]
Further, even when the camera is installed in the vehicle, the vehicle position can be known accurately using GPS or the like, so it is also possible to estimate the spatial model by comparing the current position with detailed map data. For example, if it is found from the GPS and the map data that the vehicle is traveling in a tunnel, a space model can be generated from shape information such as the height and length of the tunnel. Such shape information may be stored in the map data in advance, or may be held by infrastructure such as the tunnel itself and obtained by communication. For example, information on the shape of the tunnel may be transmitted to a vehicle about to enter the tunnel by communication means such as DSRC provided at the entrance of the tunnel. Of course, such a method is not limited to tunnels, and may be used on general roads, expressways, in residential areas, parking lots, and the like.
[0062]
<Background flow estimation>
The background flow estimating unit 15 obtains, as a background flow, the motion (optical flow) that a point corresponding to the spatial model would have on the image if that point were not moving, that is, if it were assumed to be not a moving object but part of the background.
[0063]
FIG. 4 is a flowchart showing the operation of the background flow estimating unit 15, and FIG. 5 is a conceptual diagram assuming a situation where the vehicle 1 is turning on a curve or the like. In FIG. 5, since the camera 2 is installed in the vehicle 1, the movement of the camera 2 and the movement of the vehicle 1 are equal. Reference numeral 5 denotes a background object reflected on the camera 2.
[0064]
First, an arbitrary point (PreXi, PreYi) on the camera image photographed at time t-1 is projected onto real-world three-dimensional coordinates (Xw, Yw, Zw) using the spatial model estimated by the spatial model estimating unit 14 (S11). At this time, the perspective projection conversion equation shown in (Equation 1) and the coordinate conversion equation shown in (Equation 2) are used. The focal position of the camera in the real-world coordinate system in (Equation 2) is the focal position of the camera 2 at time t-1.
[0065]
Next, based on the movement amount h of the vehicle 1 from time t-1 to time t estimated by the own-vehicle motion estimation unit 13, the own-vehicle position in the real world at time t, that is, the focal position of the camera 2 in the real-world coordinate system, is obtained (S12). Then, based on this focal position of the camera 2 in the real-world coordinate system, each constant r of (Equation 2) is updated (S13). By repeating this process, the focal position of the camera in the real-world coordinate system in (Equation 2) continues to be updated, and always indicates an accurate position.
[0066]
Further, using (Equation 1) and the updated (Equation 2), the real-world coordinates (Xw, Yw, Zw) obtained in step S11 are reprojected onto a point (NextXi, NextYi) on the camera image (S14). The camera coordinates (NextXi, NextYi) obtained in this manner indicate the position on the camera image at time t under the assumption that the point (PreXi, PreYi) on the camera image at time t-1 is a point on the background object 5, which has not moved between time t-1 and time t. Therefore, (NextXi-PreXi, NextYi-PreYi) is obtained as the background flow when it is assumed that the point (PreXi, PreYi) on the camera image at time t-1 is part of the background (S15).
[0067]
Here, for the sake of simplicity, the case where the vehicle is curving has been described as an example, but the background flow can be obtained by the same method when traveling straight or when parking.
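Steps S11 to S15 can be sketched as follows, reusing the image_point_to_world helper from the earlier space-model sketch. Representing the camera pose as a camera-to-world rotation R and a camera centre t, and expressing the own-vehicle motion as a world-frame rigid transform (R_ego, t_ego), are assumptions of the sketch; the embodiment instead updates the constants of (Equation 2) directly.

```python
import numpy as np

def project_to_image(Xw, f, R, t):
    """Inverse of image_point_to_world for a pinhole camera: world point -> (u, v),
    or None if the point lies behind the camera."""
    p_cam = R.T @ (np.asarray(Xw, dtype=float) - t)
    if p_cam[2] <= 0:
        return None
    return f * p_cam[0] / p_cam[2], f * p_cam[1] / p_cam[2]

def background_flow(u, v, f, R0, t0, R_ego, t_ego, plane_n, plane_d):
    """Background flow of one image point between time t-1 and time t.
    (R0, t0): camera pose at time t-1;  (R_ego, t_ego): rigid motion of the camera
    in world coordinates from t-1 to t, derived from the own-vehicle motion estimate."""
    Xw = image_point_to_world(u, v, f, R0, t0, plane_n, plane_d)  # S11: image point -> space model
    if Xw is None:
        return None
    t1 = R_ego @ t0 + t_ego                   # S12: camera position at time t
    R1 = R_ego @ R0                           # S13: camera orientation at time t
    uv1 = project_to_image(Xw, f, R1, t1)     # S14: reproject the same world point
    if uv1 is None:
        return None
    return uv1[0] - u, uv1[1] - v             # S15: (NextXi - PreXi, NextYi - PreYi)
```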
[0068]
<Detection of approaching object>
FIG. 6 is a block diagram conceptually showing the configuration of the approaching object detection unit 16. In FIG. 6, the flow comparing unit 16a compares the optical flow Vi actually obtained from the camera image by the optical flow detecting unit 12 with the background flow Vdi obtained by the background flow estimating unit 15, and thereby detects approaching object candidate flows. Then, the noise removing unit 16b removes noise from the approaching object candidate flows obtained by the flow comparing unit 16a and detects only the approaching object flows.
[0069]
FIG. 7 is a flowchart showing the operation of the flow comparison unit 16a. Here, the comparison between the optical flow Vi and the background flow Vdi is, in principle, performed using the angle difference. However, when the size of the optical flow Vi is small, the reliability of its direction information is low, so the discrimination accuracy cannot be maintained. For example, the optical flow Vi of another vehicle running parallel to the own vehicle at substantially the same speed has a very small magnitude, and its direction points toward the own vehicle or in the opposite direction depending on the shooting timing, changing from moment to moment. Therefore, in the present embodiment, when the magnitude of the optical flow Vi is smaller than a predetermined value, the reliability of the determination is improved by using another comparison criterion (S43) instead of the angle difference.
[0070]
Specifically, the size of the optical flow Vi is checked first (S41). When the optical flow Vi has a sufficient size (equal to or larger than a predetermined value TH_Vi), its direction information is considered sufficiently reliable, so the optical flow Vi is compared with the background flow Vdi using the angle difference (S42). That is, when the absolute value of the angle difference between the optical flow Vi and the background flow Vdi is equal to or larger than a predetermined value TH_Arg, the optical flow Vi is considered to be different from the background flow Vdi and is determined to be the flow of an approaching object (S44). When the absolute value of the angle difference is sufficiently small (No in S42), the optical flow Vi is close to the background flow Vdi, so it is determined that the flow is not that of an approaching object (S45). The threshold value TH_Vi is preferably about 0.1 [pixel], and the threshold value TH_Arg is preferably about π/2. The reason why, when YES is determined in step S41, only the angle information is used and the magnitude information is not used is to prevent a moving object that is moving away from the own vehicle from being determined as an approaching object. Of course, the absolute value of the flow vector difference may also be used as a criterion.
[0071]
When the optical flow Vi does not have a sufficient size (No in S41), the comparison with the background flow cannot be performed using the angle difference. In this case, attention is paid to the magnitude of the flow instead. That is, it is determined whether the absolute value of the vector difference between the optical flow Vi and the background flow Vdi is equal to or larger than a predetermined value TH_Vdi (S43); if so, it is determined that the flow is that of an approaching object (S44), and otherwise, it is determined that the flow is not that of an approaching object (S45). Situations in which the optical flow Vi is small yet belongs to an approaching object include the following two cases:
1. An approaching object is running parallel to the own vehicle at almost the same speed
2. An approaching vehicle is running far away
In either case, since the background flow Vdi can be obtained accurately, the approaching object can be determined with high accuracy. Note that the threshold value TH_Vdi is preferably about 0.1 [pixel].
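The comparison of steps S41 to S45 can be summarized in a few lines. The thresholds follow the values suggested above; TH_VI, TH_ARG, and TH_VDI are plain-text renderings of the thresholds in the text, and the angle-wrapping detail is an assumption of the sketch.

```python
import math

# Threshold values as suggested in the text; treat them as tunable parameters.
TH_VI = 0.1           # [pixel]  minimum optical-flow magnitude for the angle test
TH_ARG = math.pi / 2  # [rad]    angle-difference threshold
TH_VDI = 0.1          # [pixel]  vector-difference threshold for small flows

def is_approaching_flow(vi, vdi):
    """Compare an optical flow vi with the background flow vdi at the same point
    (steps S41-S45). Both are (dx, dy) tuples in pixels."""
    mag_vi = math.hypot(vi[0], vi[1])
    if mag_vi >= TH_VI:                                   # S41: direction is reliable
        ang_vi = math.atan2(vi[1], vi[0])
        ang_vdi = math.atan2(vdi[1], vdi[0])
        diff = abs(math.atan2(math.sin(ang_vi - ang_vdi), math.cos(ang_vi - ang_vdi)))
        return diff >= TH_ARG                             # S42 -> S44 / S45
    # S43: flow too small for the angle test -> use the vector difference instead
    return math.hypot(vi[0] - vdi[0], vi[1] - vdi[1]) >= TH_VDI
```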
[0072]
In step S43, noting that the size of the optical flow Vi is sufficiently small, only the magnitude of the background flow Vdi may be compared with a predetermined value, instead of the absolute value of the vector difference between the optical flow Vi and the background flow Vdi. In this case, when the background flow Vdi is sufficiently large, it is determined that the flow is that of an approaching object, and when it is sufficiently small, it is determined that it is not.
[0073]
Also, the threshold values TH_Vi, TH_Arg, and TH_Vdi used in the flow comparison may be set as functions of the position on the image. For example, the thresholds TH_Vi and TH_Vdi may be made small at positions where the background flow Vdi is small, and large at positions where the background flow Vdi is large. This makes it possible to perform an accurate determination even for distant regions while suppressing the influence of noise.
[0074]
FIG. 8 is a flowchart showing the operation of the noise removing unit 16b. The flows detected as approaching object flows by the flow comparison unit 16a include noise, so if all of them were treated as approaching objects, the detection accuracy would deteriorate. The noise removing unit 16b therefore models noise and approaching objects, and compares these models with the approaching object flows (approaching object candidate flows) detected by the flow comparison unit 16a, so as to detect only approaching objects.
[0075]
First, if an approaching object candidate flow is noise, it should not be detected continuously either temporally or spatially. However, when the approaching object candidate flow comes from an actual approaching object rather than from noise, the approaching object has a certain size, and similar approaching object candidate flows should therefore occupy a spatial region corresponding to the size of the approaching object.
[0076]
Therefore, nearby approaching object candidate flows are associated with each other, the areas related to the associated approaching object candidate flows are connected, and the connected region is set as an approaching object candidate area Ai (S51). Then, the area Si of the approaching object candidate area Ai is obtained (S52) and compared with a predetermined value TH_Si (S53). Here, when the approaching object candidate area Ai is far from the camera, the predetermined value TH_Si is set small, and when the approaching object candidate area Ai is close to the camera, TH_Si is set large. If the area Si is smaller than TH_Si (No in S53), it is determined that the area Ai is noise (S54). On the other hand, if the area Si is equal to or larger than TH_Si, the process proceeds to step S55.
[0077]
In step S55, noise removal processing is performed by modeling the approaching object. Considering that the camera is installed in a vehicle, an approaching object is an automobile, a motorcycle, a bicycle, or the like, and in any case it is an object running on the road surface. Therefore, it is determined whether or not the approaching object candidate area Ai exists on the road surface of the space model (S55). If it does not, the area Ai is determined to be noise (S54), and the processing of the next frame is performed. On the other hand, when at least a part of the approaching object candidate area Ai exists on the road surface, the process proceeds to step S56.
[0078]
In step S56, a filtering process in the time direction is performed. Since an approaching object cannot suddenly appear on or disappear from the screen, its area should exist over several consecutive frames. FIG. 9 shows a situation in which another vehicle approaches from behind while traveling straight. At the current time t, the approaching vehicle 6 is photographed as shown in the left diagram of FIG. 9A, so the approaching object candidate area At is detected as shown in the right diagram of FIG. 9A. At times t-1, t-2, ..., t-N slightly before the current time t, the approaching object candidate areas At-1, At-2, ..., At-N are detected as shown in FIGS. 9B, 9C, and 9D. Here, assuming that the time interval is sufficiently small, the approaching vehicle 6 does not move greatly on the image, and thus the areas At-1, At-2, ..., At-N partially overlap the area At. On the other hand, when an approaching object candidate area is generated by vibration of the camera or the like, the candidate area should be detected only for a short time.
[0079]
Therefore, the ratio at which approaching object area candidates occupy the regions of the previous several frames corresponding to the approaching object area candidate Ai is compared with a predetermined value (S56). When the ratio is lower than the predetermined value, the approaching object candidate area Ai is unlikely to be an approaching object, so the approaching object candidate area Ai is merely held (S57), and the processing of the next frame is performed. On the other hand, if the proportion of approaching object area candidates in the previous several frames is higher than the predetermined value, it is determined that the approaching object candidate area Ai is due to an approaching object, and the processing of the next frame is performed (S58). For example, when the area has been detected six times or more in the previous ten frames, it is determined that the object is an approaching object.
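A minimal sketch of the checks S53 to S58 for one candidate area follows. How the candidate flows are grouped into areas, how the distance to the area is obtained, and the exact form of the distance-dependent area threshold (here a simple base/distance rule) are assumptions of the sketch; only the "six detections in the previous ten frames" rule is taken from the text.

```python
from collections import deque

class ApproachingObjectFilter:
    """Noise removal for one candidate region. The region grouping itself
    (connecting nearby candidate flows, S51-S52) is assumed to be done elsewhere."""

    def __init__(self, history_len=10, min_hits=6):
        self.history = deque(maxlen=history_len)   # detection results of recent frames
        self.min_hits = min_hits                   # e.g. 6 detections in the last 10 frames

    @staticmethod
    def area_threshold(distance_m, base=400.0):
        # S53: a smaller area is accepted for regions far from the camera (assumed rule)
        return base / max(distance_m, 1.0)

    def update(self, region_area_px, distance_m, touches_road_surface):
        # S53/S54: too small a region is treated as noise
        candidate = region_area_px >= self.area_threshold(distance_m)
        # S55: an approaching vehicle must stand on the road surface of the space model
        candidate = candidate and touches_road_surface
        self.history.append(candidate)
        # S56-S58: accept only if the region was detected in enough recent frames
        return sum(self.history) >= self.min_hits
```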
[0080]
Further, if it is assumed that the camera is installed at the front or rear of the vehicle and that the approaching objects are only passenger cars, the image of an approaching object reflected on the camera is only an image of a passenger car viewed from the front or the rear. Therefore, the size of the approaching object area can be limited to a width of about 2 m and a height of about 1.5 m from the road surface. The approaching object candidate area may thus be set to this size, and its position may be set so that the number of approaching object flows existing inside the area is maximized. In this case, approaching objects and noise can be distinguished based on whether or not the number of approaching object flows included in the approaching object candidate area is larger than a predetermined value. Such processing may be performed instead of steps S51 to S53.
[0081]
As described above, according to the present embodiment, the motion of objects around the vehicle can be detected with high accuracy by comparing the background flow with the optical flow actually obtained from the camera image.
[0082]
Further, even when the vehicle is traveling on a curve, detection is performed by comparing the background flow and the optical flow at each point on the image, so that an approaching object can be detected accurately. Also, a parallel running vehicle or a distant approaching object, for which the optical flow obtained from the image becomes small, can be detected easily, because its optical flow differs greatly from the background flow at that point on the image.
[0083]
<Other examples of background flow estimation>
(Part 1)
Assuming that the camera 2 moves with the turning radius Rs and the rotation angle γ as shown in FIG. 5 and that the object 5 reflected on the camera 2 does not move at all, the optical flow obtained in this case is, as shown in FIG., equal to the optical flow obtained when the camera 2 does not move but all the objects 5 reflected by the camera 2 move by the rotation angle γ. That is, the optical flow obtained when the camera moves by a vector V in the real-world coordinate system is equal to the optical flow obtained when all the objects reflected by the camera move by the vector (-V).
[0084]
Therefore, instead of obtaining the background flow from the movement of the camera, the background flow may be obtained by moving the space model, assuming that the camera is fixed and does not move. FIG. 11 is a flowchart showing the background flow estimation processing in such a case.
[0085]
First, as in step S11 of FIG. 4, an arbitrary point (PreXi, PreYi) on the camera image captured at time t-1 is projected onto real-world three-dimensional coordinates (PreXw, PreYw, PreZw) using the spatial model estimated by the spatial model estimating unit 14 (S21). At this time, the perspective projection conversion equation shown in (Equation 1) and the coordinate conversion equation shown in (Equation 2) are used.
[0086]
Next, the real-world coordinates (PreXw, PreYw, PreZw) obtained in step S21 are moved relative to the vehicle 1, based on the movement amount h of the vehicle 1 from time t-1 to time t estimated by the own-vehicle motion estimating unit 13. That is, the real-world coordinates (PreXw, PreYw, PreZw) are rotated using the rotation center coordinate O and the rotation angle γ relating to the movement amount h, and the real-world coordinates (NextXw, NextYw, NextZw) are obtained (S22). Further, using (Equation 1) and (Equation 2), the real-world coordinates (NextXw, NextYw, NextZw) obtained in step S22 are reprojected onto a point (NextXi, NextYi) on the camera image (S23). Then, as in step S15 of FIG. 4, (NextXi-PreXi, NextYi-PreYi) is obtained as the background flow (S24).
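Step S22 can be sketched as a planar rotation of the space-model points about a vertical axis through the turning centre O. The assumption that the world Y axis is vertical and the sign convention of γ are choices of the sketch, not of the embodiment.

```python
import math
import numpy as np

def rotate_space_model(points_w, center_xz, gamma):
    """Rotate space-model points (N, 3) by the angle gamma about a vertical axis
    through the turning centre O = (Ox, Oz) in the ground plane (step S22)."""
    cx, cz = center_xz
    c, s = math.cos(gamma), math.sin(gamma)
    out = points_w.astype(float).copy()
    x = out[:, 0] - cx                  # coordinates relative to the turning centre
    z = out[:, 2] - cz
    out[:, 0] = cx + c * x - s * z      # standard 2-D rotation in the X-Z (ground) plane
    out[:, 2] = cz + s * x + c * z
    return out
```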
[0087]
Since the coordinates (NextXw, NextYw, NextZw) obtained in step S22 can be regarded as the spatial model at time t predicted from the spatial model at time t-1, the spatial model may be updated by continuing this processing. Alternatively, such a space model updating process may be performed only for the portion determined to be background by the approaching object detection unit 16, and the space model may be obtained again by the above-described method for the other regions.
[0088]
(Part 2)
In each of the above two methods, a point on the camera image is converted into real-world three-dimensional coordinates, and processing is performed using a space model assumed in the real-world coordinate system. However, all the processing can also be performed in the camera coordinate system. FIG. 12 is a flowchart showing the background flow estimation processing in such a case. In this case, the space model needs to be described in the camera coordinate system. Since the camera coordinate system and the real-world coordinate system correspond one-to-one, a space model assumed in the real-world coordinate system can easily be converted into the camera coordinate system according to (Equation 2).
[0089]
First, an arbitrary point (PreXi, PreYi) on the camera image captured at time t-1 is projected onto a three-dimensional position (PreXc, PreYc, PreZc) using the spatial model described in the camera coordinate system (S31). At this time, the perspective projection conversion equation shown in (Equation 1) is used.
[0090]
Next, based on the movement amount h of the vehicle 1 from time t-1 to time t estimated by the own-vehicle motion estimation unit 13, the camera coordinates (PreXc, PreYc, PreZc) obtained in step S31 are moved relative to the vehicle 1. That is, the camera coordinates (PreXc, PreYc, PreZc) are rotated using the rotation center coordinates C and the rotation angle γc in the camera coordinate system relating to the movement amount h, and the camera coordinates (NextXc, NextYc, NextZc) are obtained (S32). Further, the camera coordinates (NextXc, NextYc, NextZc) obtained in step S32 are reprojected onto a point (NextXi, NextYi) on the camera image using (Equation 1) (S33). Then, as in step S15 of FIG. 4, (NextXi-PreXi, NextYi-PreYi) is obtained as the background flow (S34).
[0091]
As described above, the space model is used to convert a camera image into the real-world three-dimensional coordinate system or into the camera coordinate system. Therefore, the spatial model can also be described as a conversion formula from the camera image to the real-world three-dimensional coordinate system, or as a conversion formula from the camera image to the camera coordinate system.
[0092]
FIG. 13 shows an example of a camera image in which the background flow according to the present invention is indicated by arrows. As described above, the background flow is used for comparison with the optical flow actually obtained from the camera image, and serves as a reference for detecting an approaching object. As can be seen from FIG. 13, the background flow takes the curve of the traveling road surface into account. That is, it can be understood intuitively from FIG. 13 that the present invention can detect an approaching object with high accuracy even on a curve, compared with the related art.
[0093]
According to the present invention, it is possible to accurately detect only objects approaching the host vehicle. Using this detection result, it is possible, for example, to display an image in which only the approaching vehicle is highlighted, or to warn of the presence of the approaching vehicle by sound or image, by vibration of the steering wheel or seat, by lighting a danger notification lamp such as an LED, or the like. Further, in a dangerous situation, the steering and the brakes may be controlled automatically to avoid contact or collision with the approaching vehicle.
[0094]
Here, highlighting using the background flow will be described.
[0095]
FIG. 38 shows an example in which only the optical flows corresponding to approaching vehicles are superimposed on a rear image while the vehicle is traveling on a curve. In FIG. 38, vehicles A and B, which are separated from the own vehicle by substantially the same distance, approach at substantially the same speed. As shown in FIG. 38, for the car A on the outside of the curve, the user can recognize the approach from the superimposed optical flow (arrow). However, for the car B on the inside of the curve, the optical flow is almost zero, and the user cannot recognize the approach. As described above, when the vehicle is traveling on a curve, the presence of an approaching object may not be recognized simply by superimposing the optical flows on the image.
[0096]
This is because the background flow caused by the curve traveling of the own vehicle and the flow caused by the movement of the approaching vehicle cancel each other. FIG. 39 shows the background flow of the image of FIG. 38. In FIG. 39, arrows A2 and B2 are the background flows corresponding to the positions of cars A and B, respectively. As can be seen from FIG. 39, the background flow B2 at the position of car B is opposite to the movement of car B.
[0097]
Therefore, in order to make only the movement of the approaching vehicles stand out, the background flow is subtracted vector-wise from the obtained optical flow, and the resulting flow is superimposed on the camera image. FIG. 40 shows an example of such a display. Unlike FIG. 38, the flow is displayed for both approaching vehicles A and B, so that the user can reliably recognize the presence of the approaching vehicles.
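The vector-wise subtraction used for this display can be sketched as follows; the dictionary representation of the flows and the small cut-off used to hide near-zero arrows are assumptions of the sketch.

```python
def relative_flows_for_display(flows, bg_flows):
    """Compute the flows to superimpose on the camera image: the background flow
    is subtracted vector-wise from each optical flow, so flows caused only by the
    vehicle's own curve motion collapse to (almost) zero.
    flows, bg_flows: dicts mapping an image point (x, y) to a (dx, dy) vector."""
    arrows = []
    for (x, y), (dx, dy) in flows.items():
        bgx, bgy = bg_flows.get((x, y), (0.0, 0.0))
        rx, ry = dx - bgx, dy - bgy            # relative (approaching-object) motion
        if rx * rx + ry * ry < 0.01:           # skip near-zero arrows (cut-off is an assumption)
            continue
        arrows.append(((x, y), (x + rx, y + ry)))
    return arrows
```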
[0098] [0098]
Needless to say, the approaching object area may be obtained by connecting the approaching object flows as described above, and a frame may be drawn and highlighted in the approaching object area as shown in the figure. Furthermore, it is considered that the larger the approaching object flow or the approaching object area, the closer the approaching vehicle is and the higher the risk is. Therefore, the frame color, thickness, line type, and the like may be switched depending on the approaching object flow and the size of the approaching object area.
[0099]
Alternatively, a warning may be given by sound instead of, or together with, the image. In this case, it is effective if the sound can be heard from the position where the approaching object exists. Further, the loudness, melody, tempo, frequency, and the like of the sound may be changed according to the degree of danger. For example, when a vehicle is slowly approaching from the right rear, a small sound may be emitted from the driver's right rear, and when a vehicle is rapidly approaching from the left rear, a loud sound may be emitted from the left rear.
[0100]
In the above-described embodiment, the movement of the own vehicle is estimated from the rotational speed of the wheels and the steering angle of the steering wheel. On the other hand, it is also possible to detect the movement of the vehicle (camera itself) using the image of the camera installed in the vehicle. This technique will be described.
[0101]
Here, it is assumed that the equation of the road surface, which is a stationary plane, is known and that the motion of the vehicle is minute, and the motion parameters of the vehicle (camera) are estimated from the motion of points on the road surface in the image. Even if the vehicle is moving at a high speed, this assumption does not lose generality, because the movement of the camera between images becomes minute by increasing the frame rate at the time of imaging.
[0102]
FIG. 41 is a schematic diagram for explaining the present method. As shown in FIG. 41, the coordinate value of the point P on the stationary plane in the camera coordinate system is changed from PE = (x, y, z) to PE′ = (x′, y′, z′) by the movement of the camera 2. The movement of the camera 2 is represented by a rotation R (wx, wy, wz) and a translation T (Tx, Ty, Tz). That is, the movement of the camera 2 is expressed as follows.
(Equation 12)
[0103]
Here, assuming that the stationary plane is represented by z = ax + by + c, the following equation holds for the movement (u, v) → (u′, v′) of the point P on the image coordinates.
(Equation 13)
[0104]
Here, it is assumed that the corresponding points (u, v) and (u′, v′) on the image coordinates and the coefficients a, b, and c of the stationary plane equation are known, and the above equation is rearranged with respect to the unknown parameters (wx, wy, wz, tx, ty, tz) as follows.
[Equation 14]
(Equation 15)
[0105]
If this is written as AC = R, the rotation and translation terms, which represent the motion of the vehicle, can be obtained by the least squares method.
C = (AᵀA)⁻¹AᵀR
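Because (Equation 14) and (Equation 15) are not reproduced above, the following sketch covers only this last least-squares step; it assumes that the coefficient matrix A and the right-hand-side vector R have already been assembled from the road-surface correspondences, and the function name and array shapes are illustrative.

    import numpy as np

    def estimate_camera_motion(A, R):
        # A: (2N, 6) coefficient matrix built from N road-surface point correspondences (assumed given).
        # R: (2N,) right-hand-side vector (assumed given).
        # Solves C = (A^t A)^-1 A^t R in a numerically stable way.
        C, *_ = np.linalg.lstsq(A, R, rcond=None)
        rotation = C[:3]       # small-angle rotation (wx, wy, wz)
        translation = C[3:]    # translation (tx, ty, tz)
        return rotation, translation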
[0106]
(Second embodiment)
FIG. 14 is a block diagram conceptually showing the basic configuration of the vehicle monitoring device according to the second embodiment of the present invention. In FIG. 14, the same components as those of FIG. 1 are denoted by the same reference numerals as those of FIG. 1, and detailed description thereof will be omitted. The difference from the first embodiment is that the spatial model estimating unit 14A estimates a relatively simple spatial model using the own vehicle motion information estimated by the own vehicle motion estimating unit 13.
[0107]
In general, when a vehicle travels, various things such as buildings, electric poles, signboards, and trees exist around the road surface. For this reason, various things such as the road surface, buildings, telephone poles, signboards, trees, and even the sky are projected on the camera 11. Here, a method of approximating the various things projected on the camera 11 by a simple space model will be considered.
[0108]
There are various things that are reflected on the camera, but when the camera is installed facing rearward and downward on the vehicle, there is one thing that is almost always reflected on the camera: the road surface on which the vehicle has traveled. Therefore, first, a "road surface model" that models the road surface on which the vehicle has traveled is used as the space model. Since the road surface model is obtained by extending the traveling road surface to infinity, an accurate road surface model can be estimated as long as the gradient does not suddenly change. In addition, even when the gradient of the road surface changes, such as on a hill or a road with many ups and downs, it is possible to generate a road surface model that takes the effect of the road gradient into account by using a sensor such as a gyro.
[0109]
However, the road surface model cannot always be used in all areas on the camera image. This is because the camera is generally considered to be installed almost horizontally, and in this case, the road surface is not reflected in an area higher than the horizon in the camera image. Of course, it is possible to arrange the camera so that the road surface is reflected in the entire area of the camera image. However, in this case, the range that can be monitored by the camera is narrowed, which is not practical.
[0110]
Therefore, in the present embodiment, in addition to the road surface model, a wall surface perpendicular to the traveling road surface is assumed as a space model. This space model is called a "wall model".
[0111]
FIG. 15 is a diagram illustrating an example of the wall surface model. In FIG. 15, reference numeral 1 denotes a vehicle, 2 denotes a camera installed in the vehicle 1, and the area VA surrounded by two straight lines extending from the camera 2 is the shooting area of the camera 2. As shown in FIG. 15, a wall model MW that assumes a wall perpendicular to the running road surface at a distance L behind the vehicle 1 is the simplest. It is assumed that the wall surface model MW is large enough to cover all areas of the visual field range VA of the camera 2 that cannot be covered by the road surface model MS. In the visual field range VA of the camera 2, the region existing between the camera 2 and the wall surface model MW is the region of the road surface model MS. Therefore, when the vehicle 1 is traveling straight, the camera image captured by the camera 2 is occupied by the wall model MW on the upper side and the road surface model MS on the lower side, as shown in the figure.
[0112]
When a wall model as shown in FIG. 15 is used, how to determine the distance L to the wall model MW becomes a problem. FIG. 16 is a diagram showing an example of the background flow in a right curve, where each arrow indicates the obtained background flow, and a white line indicates a white line on the road surface. MWA is a region where the background flow is obtained by the wall surface model MW, and MSA is a region where the background flow is obtained by the road surface model MS. In the figure, (a) shows a case where L is sufficiently long, and (b) shows a case where L is sufficiently short. By comparing (a) and (b), it can be seen how the background flow changes according to the magnitude of the distance L.
[0113]
That is, comparing FIGS. 16A and 16B, it can be seen that the magnitude of the background flow in the wall surface model area MWA is greatly different. However, as described above, in the approaching object detection processing, the condition that "a part of the approaching object is in contact with the road surface" is added, so that, as long as the background flow in the road surface model area MSA is accurately obtained, the background flow in the wall surface model area MWA does not necessarily need to be determined accurately.
[0114]
Further, it can be seen that the background flow in the area BA near the boundary between the road surface model and the wall surface model is considerably different in FIGS. 16 (a) and 16 (b). The boundary area BA is an area that is also important when determining the road surface. Therefore, depending on how the distance L to the wall surface model MW is set, there is a possibility that the detection accuracy will deteriorate.
[0115]
Therefore, as another wall surface model, a space model in which walls (MW1, MW2) are present on both sides of the vehicle 1 as shown in FIG. 17 is assumed. It can be said that the vehicle is assumed to be traveling in a tunnel of infinite height. The region existing between the left and right wall surface models MW1 and MW2 within the visual field range VA of the camera 2 is the region of the road surface model MS. Therefore, when the vehicle 1 is traveling straight, the camera image captured by the camera 2 is occupied on the left and right by the wall surface models MW1 and MW2 and in the lower part by the road surface model MS, as shown in the figure. In FIG. 17, for simplicity of explanation, the vehicle 1 is assumed to be traveling straight. However, when the vehicle is traveling on a curve, the wall surface model may be bent in accordance with the curvature of the curve.
[0116]
The case where the space model shown in FIG. 17 is described as a conversion formula from the camera image to the real world three-dimensional coordinate system will now be described in detail.
[0117]
First, the road surface model will be described. Assuming that the traveling road surface is a flat road surface with no gradient, the road surface model is the Xw-Yw plane of the real world three-dimensional coordinate system. Therefore, when the relational expression of this plane,
Zw = 0
is substituted into (Equation 1) and (Equation 2), the point P (Xi, Yi) on the camera image can be converted into real world three-dimensional coordinates Pw (Xw, Yw, Zw) by the following equation.
(Equation 8)
(Equation 8) is the conversion formula from the image coordinate system to the real world three-dimensional coordinate system in the road surface model.
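The idea behind such a road-surface conversion can be sketched as follows, under the assumption of a pinhole camera whose intrinsic matrix K and pose R_cam, t_cam in world coordinates are known; these names are not defined in the patent, and the exact formula is the unreproduced (Equation 8). The ray through the image point is simply intersected with the plane Zw = 0.

    import numpy as np

    def image_to_road(p_image, K, R_cam, t_cam):
        # p_image: (Xi, Yi); K, R_cam, t_cam: assumed camera parameters (numpy arrays).
        ray_cam = np.linalg.inv(K) @ np.array([p_image[0], p_image[1], 1.0])
        ray_world = R_cam @ ray_cam            # viewing-ray direction in world coordinates
        origin = t_cam                         # camera centre in world coordinates
        s = -origin[2] / ray_world[2]          # choose s so that the Zw component becomes 0
        return origin + s * ray_world          # Pw = (Xw, Yw, 0) on the road surface model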
[0118]
Next, the wall surface model will be described. It is assumed that the vehicle 1 is turning with a turning radius R between time t-1 and time t. The origin of the real world three-dimensional coordinate system is defined as the rear wheel center position of the vehicle 1 at time t, and the rotation center is (R, 0, 0). The turning radius R is signed: it is positive when the vehicle 1 turns counterclockwise and negative when it turns clockwise.
[0119]
At this time, in the real world three-dimensional coordinate system, the wall surface model is a surface perpendicular to the Xw-Zw plane whose cross section on the Xw-Zw plane is a circle centered at the point (R, 0, 0) with radius (R ± W/2), and it is expressed by the following equation.
[0120]
(Xw − R)² + Zw² = (R ± W/2)²
By using this equation together with (Equation 1) and (Equation 2), the point P (Xi, Yi) on the camera image can be converted into real world three-dimensional coordinates Pw (Xw, Yw, Zw) by the following equation.
(Equation 9)
Further, by adding the conditions that
1) the object shown in the camera image does not exist behind the camera 2 (Ze > 0), and 2) the space model is at a position higher than the road surface (Yw ≧ 0),
Pw (Xw, Yw, Zw) can be uniquely determined. (Equation 9) is the wall surface model expressed as a conversion formula from the image coordinate system to the real world three-dimensional coordinate system.
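One way such a wall-model conversion could be evaluated numerically is sketched below: a viewing ray is intersected with the cylindrical surface defined above, and the solution satisfying conditions 1) and 2) is kept. This is only an illustration under assumed conventions (the ray is given directly in world coordinates as numpy arrays, the height is taken along Yw, and s > 0 along the ray stands in for the Ze > 0 condition); it is not the patent's (Equation 9).

    import numpy as np

    def ray_wall_intersection(origin, direction, R_turn, radius):
        # Solve |(origin + s * direction) - c|^2 = radius^2 in the Xw-Zw plane, with c = (R_turn, 0).
        ox, oz = origin[0] - R_turn, origin[2]
        dx, dz = direction[0], direction[2]
        a = dx * dx + dz * dz
        b = 2.0 * (ox * dx + oz * dz)
        c = ox * ox + oz * oz - radius * radius
        disc = b * b - 4.0 * a * c
        if a == 0.0 or disc < 0.0:
            return None                              # the ray does not hit this wall
        roots = sorted([(-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a)])
        for s in roots:
            p = origin + s * direction               # candidate Pw = (Xw, Yw, Zw)
            if s > 0.0 and p[1] >= 0.0:              # in front of the camera and not below the road
                return p
        return None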
[0121]
Now, when a wall model as shown in FIG. 17 is used, how to determine the distance W between the wall surfaces MW1 and MW2 assumed on the left and right sides of the vehicle becomes a problem. FIG. 18 is a diagram showing an example of the background flow in a right curve, where each arrow indicates the obtained background flow, and a white line indicates a white line on the road surface. Further, MW1A is an area where the background flow is obtained by the wall model MW1 on the left side viewed from the camera 2, MW2A is an area where the background flow is obtained by the wall model MW2 on the right side viewed from the camera 2, and MSA is an area where the background flow is obtained by the road surface model MS. In the figure, (a) shows a case where W is sufficiently small, and (b) shows a case where W is sufficiently large. By comparing (a) and (b), it can be seen how the background flow changes according to the magnitude of the distance W.
[0122]
That is, comparing FIGS. 18A and 18B, it can be seen that the background flow in the area BA1 near the boundary between the left and right wall surface models is significantly different. However, as described above, in the approaching object detection processing, the condition that "a part of the approaching object is in contact with the road surface" is added, so that, as long as the background flow in the road surface model area MSA is accurately obtained, even if an error occurs in the background flow in the wall surface model areas MW1A and MW2A, this does not cause a significant problem in detecting an approaching object.
[0123]
That is, in the wall surface model as shown in FIG. 17, unlike the case where the wall surface is assumed behind the camera as shown in FIG. 15, the background flow in the boundary region between the wall surface model and the road surface model does not differ significantly with the size of the wall interval W, so an approaching object can be detected with high accuracy. For example, the interval W between the wall surfaces may be set to about 10 m.
[0124]
Of course, the distance W between the wall surfaces may be measured using various obstacle detection sensors such as a laser, ultrasonic, infrared, or millimeter-wave sensor, or may be measured using binocular vision or a motion stereo method. Further, it is also possible to extract a region other than the road surface from the camera image using the texture information of the road surface and assume that the region is a wall surface. Furthermore, it is also possible to measure only a part of the space model using a distance measurement sensor and model the other areas by a plane or a curved surface as described herein. Alternatively, the number of lanes on the road on which the vehicle is currently traveling may be obtained using camera images and communication, and the result may be used to determine the distance W between the wall surfaces. As a method of determining the number of lanes from a camera image, for example, there is a method of performing white line detection and determining the number of lanes based on the number of detected white lines.
[0125]
The shape of the space model can be switched based on GPS information and map data as in a car navigation system, time data from a clock or the like, and operation data on the vehicle's wipers, headlights, and the like. For example, when it is found from the GPS information and the map data that the vehicle is currently traveling in a tunnel, a space model in which a ceiling exists may be used. Alternatively, information that the vehicle is traveling in a tunnel can be obtained by communication means such as DSRC attached to the tunnel entrance or the like. In addition, since the headlights are normally turned on while traveling through a tunnel, the time data, the wiper data, and the headlight data can be used in combination: for example, if it is not nighttime (estimated from the time data) and there is no rainfall (wiper data) but the headlights are nevertheless turned on, it can be determined that the vehicle is traveling in a tunnel, and the space model can be switched.
[0126]
FIG. 19 is a flowchart showing the operations of the vehicle movement estimating unit 13, the space model estimating unit 14A, and the background flow estimating unit 15 according to the present embodiment. First, the left and right wheel velocities Vl, Vr of the host vehicle and the turning angle β of the front wheels at time t are determined by measuring the rotation pulses of both the left and right wheels and the turning angle of the steering wheel from time t-1 to time t (S61). Then, the motion vector T of the vehicle is estimated from the wheel speeds Vl, Vr and the turning angle β thus obtained and the known wheelbase l according to (Equation 7) using the above-mentioned Ackerman model (S62).
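Since (Equation 7) itself is not reproduced here, the following is only a sketch of a standard Ackermann (bicycle-model) approximation of step S62; the function name, the small-angle branch, and the time-step handling are assumptions rather than the patent's exact formulation.

    import math

    def estimate_motion(Vl, Vr, beta, wheelbase, dt):
        # Vl, Vr: left and right wheel speeds; beta: front-wheel turning angle in radians.
        v = 0.5 * (Vl + Vr)                   # vehicle speed at the rear-axle centre
        yaw_rate = v * math.tan(beta) / wheelbase
        dtheta = yaw_rate * dt                # heading change between time t-1 and time t
        if abs(dtheta) < 1e-6:                # nearly straight: pure forward translation
            return (v * dt, 0.0), dtheta
        R_turn = v / yaw_rate                 # turning radius
        dx = R_turn * math.sin(dtheta)        # translation expressed in the previous vehicle pose
        dy = R_turn * (1.0 - math.cos(dtheta))
        return (dx, dy), dtheta

For example, with Vl = Vr = 10 m/s and beta = 0, this sketch returns a pure forward translation of 10·dt and no rotation.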
[0127]
All motion vectors up to time t-1 have already been obtained by this point. Therefore, the trajectory of the own vehicle up to time t-1 is obtained by connecting the motion vectors obtained so far (S63). FIG. 20 shows an example of the trajectory up to time t-1 obtained in this manner. In FIG. 20, T indicates the current motion vector, and TR indicates the trajectory obtained by joining the motion vectors so far.
[0128]
Next, a spatial model is estimated (S64). Here, it is assumed that the road surface model and the wall surface model as shown in FIG. 17 are used. That is, as shown in FIG. 21, a plane including the trajectory TR is obtained as the road surface model MS, and the wall surfaces perpendicular to the road surface model MS at positions separated by a length W/2 to the left and right of the trajectory TR are the wall surface models MW1 and MW2.
[0129]
Next, the background flow is estimated (S65 to S68). First, an arbitrary point PreCi (PreXi, PreYi) on the camera image at time t-1 is projected onto the real world three-dimensional coordinates PreRi (PreXw, PreYw, PreZw) (S65). FIG. 22 is a diagram showing this processing. The angle θ is the angle of view of the camera 2. As described above, from the perspective projection conversion formulas shown in (Equation 1) and (Equation 2) alone, it is only known that the point PreCi on the camera image corresponds to the straight line LN passing through the focal point of the camera 2; the point cannot be projected onto a single point in the real world coordinate system. However, assuming that the vehicle 1 is traveling in the estimated space model, the point PreRi can be obtained as the intersection between the straight line LN and the space model (MW2 in FIG. 22).
[0130]
Next, by moving the vehicle 1 by the current motion vector T, the position of the own vehicle at time t is obtained, and the parameters of the coordinate conversion equation (Equation 2) are updated to match this position (S66). Then, the position at which the real world coordinates PreRi appear on the camera image at that camera position is calculated. The conversion from the real world coordinates onto the camera image can be realized by using (Equation 1) and the updated (Equation 2). The point on the camera image thus obtained is set as NextCi (NextXi, NextYi) (S67). FIG. 23 is a diagram schematically showing this processing.
[0131]
NextCi (NextXi, NextYi) obtained here indicates the position on the camera image at time t of the point PreRi on the assumption that it does not move from time t-1 to time t, that is, on the assumption that PreRi is a point on a background object. Therefore, the background flow BFL on the assumption that the point PreCi on the camera image at time t-1 is a part of the background can be obtained as (NextXi - PreXi, NextYi - PreYi) (S68). By performing such processing for all points on the camera image, a background flow as shown in FIG. 13 can be obtained.
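Steps S65 to S68 can be summarized in a short sketch in which the space-model intersection and the camera projection are left as hypothetical helper methods; their concrete forms correspond to (Equation 1), (Equation 2), (Equation 8), and (Equation 9), which are not reproduced here.

    def background_flow(pre_ci, camera_prev, camera_now, space_model):
        # S65: back-project the image point at time t-1 onto the space model (intersection with LN).
        pre_ri = space_model.intersect(camera_prev.ray_through(pre_ci))
        # S66 and S67: re-project the same world point with the camera pose at time t.
        next_ci = camera_now.project(pre_ri)
        # S68: the background flow is the displacement on the image.
        return (next_ci[0] - pre_ci[0], next_ci[1] - pre_ci[1])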
[0132]
FIGS. 24 to 26 show examples in which approaching objects are detected according to the present embodiment. FIG. 24 is an image captured by a camera installed behind the vehicle. The vehicle is about to turn right at the intersection, and a white passenger vehicle VC1 is approaching from behind. The only moving object on the image is the white passenger car VC1, and the passenger cars VC2 and VC3 are stopped.
[0133]
Here, the first conventional example described above is considered. In the first conventional example, first, as shown in FIG. 25A, a region L and a region R are divided and set in the horizontal direction. Then, of the detected optical flows, those that face left or lower left in the area L and those that face right or lower right in the area R are detected as approaching objects.
[0134]
Here, the areas AR1 and AR2 are assumed to be within the area L. The area AR1 includes the approaching passenger vehicle VC1, and the area AR2 includes the stationary passenger car VC2. However, since the passenger car VC1 in the area AR1 has an optical flow in the lower right direction as shown by the arrow, it is not detected as an approaching object. On the other hand, the passenger car VC2 in the area AR2 has an optical flow in the left direction as shown by the arrow because the own vehicle is traveling on a curve, and it is therefore detected as an approaching object. FIG. 25 (b) shows the result of performing such a process, where a region surrounded by a rectangle is a region detected as an approaching object. As described above, in the first conventional example, detection of an approaching object on a curve is not successful.
[0135]
On the other hand, according to the present embodiment, the background flows in the areas AR1 and AR2 are obtained as indicated by the arrows in the figure. That is, in the area AR1 including the approaching vehicle VC1, the optical flow is directed to the lower right as shown in FIG. 25A, whereas the background flow is directed to the left. Therefore, the area AR1 is determined to be an approaching object. On the other hand, in the area AR2 including the stationary passenger car VC2, the optical flow turns to the left as shown in FIG. 25A, and the background flow also turns to the left. Therefore, it is determined that the area AR2 is not an approaching object but the background. FIG. 26 (b) shows the result of performing such processing, where the area surrounded by a rectangle is the area detected as an approaching object. Thus, according to the present invention, unlike the conventional example, it is possible to reliably detect an approaching object even on a curve.
[0136]
In addition, vehicle monitoring apparatuses using a road surface model are described in JP-A-2000-74645 and JP-A-2001-266160. However, these techniques do not use the road surface model to obtain a background flow as in the present invention. In addition, they are not intended to detect an approaching object when traveling on a curve, and therefore address a problem different from that of the present invention.
[0137]
More specifically, in the former, first, an optical flow generated from another vehicle in the monitoring area is detected, and the relative relationship between the own vehicle and other nearby vehicles is monitored using the detected optical flow. The feature of this technique is that the area for detecting an optical flow is limited in order to shorten the processing time, and a road surface model is used to realize this. That is, the space model is not used for detecting the approaching object flow as in the present invention. As a matter of fact, this technique uses the method using the virtual FOE shown in the above-described fifth conventional example to detect an approaching object flow on a curve, and therefore has the same problems as the fifth conventional example. That is, accurate approaching object detection cannot be performed on a curve without a white line.
[0138]
The latter uses the three-dimensional movement of each point on the screen. That is, first, for each point on the screen, an optical flow, which is a two-dimensional movement on the screen, is obtained. Then, the three-dimensional movement of each point in the real world is calculated based on the obtained optical flow and the vehicle movement information. By temporally tracking the three-dimensional movement, a space model of the space in which the vehicle is actually traveling is estimated. Among the space models estimated in this way, those different from the movement of the road surface are detected as obstacles. However, since this technique calculates the motion of each point fully three-dimensionally, the calculation cost is extremely high, and it is difficult to realize.
[0139]
<Example of hardware configuration>
FIG. 27 is a diagram illustrating an example of a hardware configuration for implementing the present invention. In FIG. 27, for example, an image captured by the camera 2 installed behind the vehicle is converted into a digital signal by the image input unit 21 in the image processing device 20 and stored in the frame memory 22. Then, the DSP 23 detects an optical flow from the digitized image signal stored in the frame memory 22. The detected optical flow is supplied to the microcomputer 30 via the bus 43. On the other hand, the vehicle speed sensor 41 measures the traveling speed of the vehicle, and the steering angle sensor 42 measures the steering angle of the vehicle during traveling. Signals representing the measurement results are supplied to the microcomputer 30 via the bus 43.
[0140]
The microcomputer 30 includes a CPU 31, a ROM 32 storing a predetermined control program, and a RAM 33 storing calculation results of the CPU 31, and determines whether or not an approaching object exists in the image taken by the camera 2.
[0141]
That is, the CPU 31 first estimates the movement of the vehicle from the traveling speed signal and the steering angle signal supplied from the vehicle speed sensor 41 and the steering angle sensor 42. Next, based on the estimated movement of the vehicle, the trajectory that the vehicle has traveled so far is estimated. Since the past trajectory information is stored in the RAM 33, the CPU 31 obtains the trajectory up to the current time by connecting the estimated movement of the vehicle with the past trajectory information stored in the RAM 33. This new trajectory information is stored in the RAM 33.
[0142]
Then, the CPU 31 estimates the space model using the trajectory information stored in the RAM 33 and obtains a background flow. Then, by comparing the obtained background flow with the optical flow supplied from the image processing apparatus 20, the flow of the approaching object is detected, and the approaching object is detected.
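The per-frame processing order described for the CPU 31 could be organized roughly as in the sketch below. Every object used here (the DSP interface, the sensor reader, the RAM wrapper, the model builder, and the detector) is a hypothetical stand-in for the corresponding unit in FIG. 27, not an API defined by the patent.

    def process_frame(dsp, vehicle_sensors, ram, model_builder, detector, dt):
        optical_flows = dsp.read_optical_flows()          # supplied from the image processing device 20
        motion = vehicle_sensors.estimate_motion(dt)      # from the speed and steering angle signals
        trajectory = ram.load_trajectory()
        trajectory.append(motion)                         # connect the new motion vector to the past trajectory
        ram.store_trajectory(trajectory)
        space_model = model_builder.estimate(trajectory)  # road surface model plus wall surface models
        flows_bg = model_builder.background_flows(space_model, motion)
        return detector.compare(optical_flows, flows_bg)  # approaching-object flows and areas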
[0143]
(Third embodiment)
FIG. 28 is a block diagram conceptually showing the basic configuration of the vehicle monitoring device according to the third embodiment of the present invention. In FIG. 28, the same components as those in FIG. 1 are denoted by the same reference numerals as those in FIG. 1, and the detailed description thereof will be omitted. The difference from the first embodiment is that the optical flow detection unit 12A detects an optical flow using the background flow estimated by the background flow estimation unit 15. As a result, the calculation time of the optical flow can be reduced and the accuracy can be improved.
[0144]
The optical flow Vi, which is the motion of an object on the camera image, is represented by the sum of the motion Vb due to the actual motion of the target object and the relative motion Vc due to the motion of the camera itself, as in the following equation.
Vi = Vb + Vc
[0145]
Here, assuming that the target is not moving, that is, it is the background, Vb is 0 and Vc is equal to the background flow. If the target is a moving object, Vb depends on the movement vector of the target, but Vc is substantially equal to the background flow. This indicates that, when the moving amount of the object is not so large, the optical flow exists near the background flow. Therefore, when obtaining the optical flow, only the vicinity of the background flow is searched, so that the search area can be narrowed and the calculation time can be reduced.
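A minimal sketch of this restricted search is given below, assuming grayscale frames stored as numpy arrays, an 8 x 8 block, a window of plus or minus 4 pixels around the point predicted by the background flow, and a sum-of-absolute-differences criterion; none of these choices are fixed by the patent.

    import numpy as np

    def match_near_background_flow(prev_img, cur_img, x, y, bg_flow, block=8, radius=4):
        template = prev_img[y:y + block, x:x + block].astype(np.float32)
        cx, cy = int(round(x + bg_flow[0])), int(round(y + bg_flow[1]))   # point predicted by the background flow
        best_e, best_flow = None, (0.0, 0.0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                gx, gy = cx + dx, cy + dy
                if gx < 0 or gy < 0:
                    continue                              # the window fell outside the image
                candidate = cur_img[gy:gy + block, gx:gx + block].astype(np.float32)
                if candidate.shape != template.shape:
                    continue
                e = np.abs(candidate - template).sum()    # difference evaluation (sum of absolute differences)
                if best_e is None or e < best_e:
                    best_e, best_flow = e, (gx - x, gy - y)
        return best_flow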
[0146]
Further, the detection accuracy of the optical flow can be improved by the same method. This is particularly effective when a hierarchical image is used for optical flow detection. As described above, the hierarchical image is used to reduce the calculation time. However, the higher the hierarchical level, the lower the resolution of the image, and thus the higher the possibility that an error occurs in template matching. If an error occurs in the upper layer and an erroneous detection is performed, the error is not absorbed in the lower layer, and an optical flow different from the actual flow is detected.
[0147]
FIG. 29 is a diagram schematically illustrating a block matching result in a hierarchical image. In FIG. 29, image 1 is obtained by reducing the original image 0 and applying an LPF, and image 2 is obtained by further reducing image 1 and applying an LPF. Each rectangle in the image indicates a block on which matching has been performed, and the number in the rectangle indicates the value of the difference evaluation function between the block G and the template block F. That is, assuming that the size of a block is m × n, the luminance of each pixel of the template block F is f(i, j), and the luminance of each pixel of the block G is g(x, y, i, j), the difference evaluation function E(x, y) is represented by (Equation 10) or (Equation 11).
(Equation 10) (Equation 10)
[Equation 11] [Equation 11]
That is, the block having the smallest value of the difference evaluation function E (x, y) is the block corresponding to the template block F, and corresponds to the optical flow in the image. That is, the block having the smallest value of the difference evaluation function E (x, y) is the block corresponding to the template block F, and corresponds to the optical flow in the image.
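The two difference evaluation functions above remain as labels because the original equations are images. In block matching they are typically the sum of absolute differences and the sum of squared differences; the Python sketch below assumes that correspondence (an assumption, not stated in the text) and shows how E(x, y) would be computed.

import numpy as np

def sad_cost(f, g_block):
    # Assumed form of (Equation 10): E(x, y) = sum over i, j of |f(i, j) - g(x, y, i, j)|
    return int(np.abs(f.astype(np.int32) - g_block.astype(np.int32)).sum())

def ssd_cost(f, g_block):
    # Assumed form of (Equation 11): E(x, y) = sum over i, j of (f(i, j) - g(x, y, i, j))**2
    d = f.astype(np.int32) - g_block.astype(np.int32)
    return int((d * d).sum())

# The block G(x, y) that minimises E(x, y) is taken as the match for the
# template block F, and its displacement (x, y) is the optical flow.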
[0148]
First, block matching is performed on the image 2 having the lowest resolution. Now, assuming that the template block F is moving toward the upper right of the image, the consistency with the block G(1, -1) should originally be the highest, and the difference evaluation function E(1, -1) should be the minimum. However, suppose that E(-1, 1) becomes the minimum, as shown in FIG. 29, because of the reduction in resolution caused by the hierarchization, the effect of texture, the aperture problem, and the like. Then, in image 1, a block corresponding to block G(-1, 1) of image 2 is set as the search area, and in image 0 a block corresponding to the block of image 1 having the smallest difference evaluation function value is set as the search area. However, no block corresponding to the correct optical flow is contained in this search area, and therefore an erroneous optical flow is detected.
[0149]
Therefore, in the present embodiment, this problem is solved by adding blocks near the background flow to the search area at the time of block matching in the original image 0 having the highest resolution. As described above, there is a high possibility that the optical flow exists near the background flow. In addition, the difference between the optical flow and the background flow is important for detecting the approaching object flow; in other words, when the optical flow is not near the background flow, the point can be said to belong to a moving object. That is, by putting the vicinity of the background flow into the search area, it can be determined whether or not the optical flow is the background. For example, when the block that minimizes the difference evaluation function value lies near the background flow, the related optical flow is the background; when a block whose difference evaluation function value is smaller than that near the background flow exists outside the vicinity of the background flow, the optical flow can be determined to belong to a moving object.
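A hedged Python sketch of this idea: hierarchical block matching in which, at the finest level, the candidates propagated from the coarser levels are merged with candidates around the background flow, and the winning displacement indicates whether the point behaves like background. The pyramid handling, cost function, and classification rule are assumptions made for illustration.

import numpy as np

def best_match(img_prev, img_next, x, y, candidates, block=8):
    # Return the candidate displacement with the smallest SAD cost.
    f = img_prev[y:y + block, x:x + block].astype(np.int32)
    best_cost, best_d = None, (0, 0)
    for dx, dy in candidates:
        u, v = x + dx, y + dy
        if u < 0 or v < 0 or u + block > img_next.shape[1] or v + block > img_next.shape[0]:
            continue
        cost = np.abs(f - img_next[v:v + block, u:u + block].astype(np.int32)).sum()
        if best_cost is None or cost < best_cost:
            best_cost, best_d = cost, (dx, dy)
    return best_d

def neighbourhood(d, radius):
    return [(d[0] + dx, d[1] + dy) for dx in range(-radius, radius + 1)
                                   for dy in range(-radius, radius + 1)]

def hierarchical_flow(pyr_prev, pyr_next, x, y, bg_flow, radius=1):
    # pyr_*[0] is the full-resolution image 0; higher indices are coarser levels.
    d = (0, 0)
    for level in range(len(pyr_prev) - 1, 0, -1):
        s = 2 ** level
        d = best_match(pyr_prev[level], pyr_next[level], x // s, y // s,
                       neighbourhood(d, radius))
        d = (d[0] * 2, d[1] * 2)  # propagate the result to the next finer level
    bg = (int(round(bg_flow[0])), int(round(bg_flow[1])))
    # At the finest level, search both near the propagated candidate and near the background flow.
    candidates = neighbourhood(d, radius) + neighbourhood(bg, radius)
    flow = best_match(pyr_prev[0], pyr_next[0], x, y, candidates)
    is_background = abs(flow[0] - bg[0]) <= radius and abs(flow[1] - bg[1]) <= radius
    return flow, is_background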
[0150]
(Fourth embodiment)
FIG. 30 is a diagram conceptually showing the basic configuration of a vehicle monitoring device according to a fourth embodiment of the present invention. In FIG. 30, the same components as those of FIG. 1 are denoted by the same reference numerals as in FIG. 1, and detailed description thereof is omitted. The difference from the first embodiment is that the background flow estimating unit 15 is omitted, and the approaching object detection unit 16A detects an approaching object by obtaining the spatial movement of the object. In the approaching object detection unit 16A, the three-dimensional motion estimating unit 16c obtains the spatial motion of the object, instead of a planar motion such as the optical flow, from the optical flow Vi actually obtained from the camera image by the optical flow detecting unit 12, the motion vector T of the vehicle obtained by the own vehicle motion estimating unit 13, and the space model estimated by the space model estimating unit 14.
[0151]
The processing in the three-dimensional motion estimating unit 16c will be described with reference to FIGS. 31 and 32. FIG. 31 is a diagram schematically showing the relationship between the vehicle 1 and the space model at time t-1, and the same space models MS, MW1, and MW2 as those in FIG. 17 are assumed. However, it is assumed that the space model is represented in a real-world three-dimensional coordinate system. A camera image taken by the camera 2 is also shown. Here, as described above, a point Ri obtained by projecting an arbitrary point Ci on the camera image onto the real-world three-dimensional coordinates can be determined by using the perspective projection conversion formulas (Equation 1, Equation 2) and the space models MS, MW1, and MW2. T is the motion vector of the vehicle 1 from time t-1 to time t, obtained by the own vehicle motion estimating unit 13.
[0152]
FIG. 32 is a diagram schematically showing the relationship between the vehicle 1 and the space model at time t. Generally, when the time changes, the space model changes. Here, it is assumed that the optical flow detection unit 12 has found that the point Ci at time t-1 corresponds to the point NextCi at time t. At this time, similarly to the point Ri with respect to the point Ci, the point NextRi at which NextCi at time t is projected onto the real-world three-dimensional coordinates can be obtained. Therefore, the vector VRi by which the point Ri has moved up to time t can be obtained as in the following equation.
VRi = NextRi − Ri
[0153]
By the way, since the movement of the vehicle 1 from time t-1 to time t is obtained as the vector T, by obtaining the vector (VRi − T), the movement of the point Ci on the camera image in the real-world three-dimensional coordinate system, that is, the spatial flow, can be obtained. By performing this process for all points on the camera image, the movement of all points on the camera image in the real-world three-dimensional coordinates can be obtained.
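A minimal Python sketch of this spatial-flow computation, written under simplifying assumptions: the space model is reduced to the road-surface plane Y = 0, the camera intrinsics K and pose (R, t) are known, and the relation spatial flow = (NextRi − Ri) − T is applied directly as described above. All function and variable names are illustrative, not the patent's own.

import numpy as np

def project_to_road_surface(c, K, R, t):
    # Project image point c = (u, v) onto the road-surface model Y = 0.
    # K: 3x3 intrinsics; R, t: camera-to-world rotation and translation, so a
    # camera-coordinate point Pc corresponds to the world point R @ Pc + t.
    ray_cam = np.linalg.inv(K) @ np.array([c[0], c[1], 1.0])  # viewing ray in camera coordinates
    ray_world = R @ ray_cam                                   # ray direction in world coordinates
    cam_centre = t                                            # camera centre in world coordinates
    s = -cam_centre[1] / ray_world[1]                         # intersection with the plane Y = 0
    return cam_centre + s * ray_world                         # point Ri on the space model

def spatial_flow(Ci, NextCi, K, R, t, T_vehicle):
    # Movement of the scene point behind image point Ci between time t-1 and t:
    # VRi = NextRi - Ri, and the spatial flow is VRi - T with the ego-motion removed.
    Ri = project_to_road_surface(Ci, K, R, t)
    NextRi = project_to_road_surface(NextCi, K, R, t)
    VRi = NextRi - Ri
    return VRi - np.asarray(T_vehicle, dtype=float)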
[0154]
Of course, the space model may be obtained by various sensors or by communication as in the first embodiment, or another space model may be used.
[0155]
The approaching object flow detection unit 16d determines, from the movement of each point on the camera image in the real-world three-dimensional coordinate system obtained by the three-dimensional motion estimation unit 16c, that is, from the spatial flow, whether or not the point is approaching the own vehicle. That is, when the vector (VRi − T) is directed toward the vehicle, it is determined that the point Ci is the flow of an approaching object, and otherwise the point Ci is not the flow of an approaching object.
[0156]
Further, the approaching object can be detected by performing, in the noise removing unit 16e, the same processing as that of the noise removing unit 16b according to the first embodiment.
[0157]
In the description so far, it is assumed that the space model is described in the real-world three-dimensional coordinate system. However, the space model may instead be expressed in the camera coordinate system. In this case, the points Ci and NextCi on the camera image correspond to the points Ri and NextRi in the camera coordinate system, respectively. Since the origin of the camera coordinate system moves by an amount corresponding to the motion vector T of the vehicle 1 between time t-1 and time t, the movement VRi of the point Ci on the camera image can be obtained as follows.
VRi = NextRi − Ri − T
[0158]
When the vector VRi points toward the origin, it is determined that the point Ci is an approaching object flow, and otherwise the point Ci is not an approaching object flow. This processing is performed for all points on the camera image, and the approaching object can be detected by the noise removal unit 16e performing the noise removal processing.
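For the camera-coordinate formulation, the following small sketch expresses the approach test: VRi = NextRi − Ri − T is computed, and the point is flagged as an approaching-object flow when VRi points toward the origin (the camera). Checking the direction with a dot product and an angle margin is an illustrative choice, not something prescribed by the text.

import numpy as np

def is_approaching(Ri, NextRi, T, angle_margin_deg=30.0):
    # Approaching-object test in camera coordinates, following VRi = NextRi - Ri - T:
    # the flow is an approaching-object flow when it points toward the origin.
    Ri, NextRi, T = (np.asarray(v, dtype=float) for v in (Ri, NextRi, T))
    VRi = NextRi - Ri - T
    to_origin = -Ri                      # direction from the point toward the camera
    nv, no = np.linalg.norm(VRi), np.linalg.norm(to_origin)
    if nv == 0 or no == 0:
        return False
    cos_angle = float(VRi @ to_origin) / (nv * no)
    return cos_angle > np.cos(np.radians(angle_margin_deg))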
[0159]
<Combined use with an obstacle sensor>
The present invention detects an approaching object using image information. Therefore, compared with the case where an obstacle sensor such as a laser, infrared, or millimeter-wave sensor is used, more complicated determinations, such as whether an object is approaching or moving away, are possible. However, when an obstacle exists near the vehicle, it is more important to detect the simple information of whether or not there is an obstacle as quickly and accurately as possible than to obtain such complicated information.
[0160]
Therefore, the area near the vehicle may be detected by the obstacle sensor, and the wide area other than it may be detected using the method according to the present invention. This makes it possible to monitor the periphery of the vehicle quickly and accurately.
[0161]
FIG. 33 is a diagram illustrating examples of the installation of an obstacle sensor. In FIG. 33(a), an obstacle sensor 51 using a laser, infrared, millimeter wave, or the like is attached to the bumper, the emblem, or the like of the vehicle 1. In FIG. 33(b), obstacle sensors 52 are installed at the four corners of the vehicle 1, where the possibility of a contact accident is the highest. The obstacle sensors 52 may be installed below or above the bumper, or may be incorporated into the bumper or the vehicle body itself.
[0162]
In addition, since the obstacle sensor is greatly affected by weather such as rain, the use of the obstacle sensor may be stopped when rain is recognized from the operation information of the windshield wipers, and the method according to the present invention may be used instead. Thereby, the detection accuracy can be improved.
[0163]
Further, an area in which an approaching object has been detected by the method according to the present invention may be detected again by the obstacle sensor. As a result, the accuracy of detecting an approaching object can be improved, and false alarms can be prevented.
[0164]
Furthermore, for an area where it is determined that there is an obstacle as a result of the detection by the obstacle sensor, the method according to the present invention may be used to determine whether or not it is an approaching object. This makes it possible to improve the processing speed.
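As one possible arrangement of this combined use (the interfaces, ranges, and matching tolerances below are assumptions for illustration), the following Python sketch covers the near range with the obstacle sensor, the wide range with the image-based method, and re-checks image-based detections in the near range against the sensor to suppress false alarms.

def detect_around_vehicle(sensor_hits, camera_detections, near_range_m=3.0):
    # sensor_hits and camera_detections are lists of (distance_m, bearing_deg, info).
    alarms = []
    for dist, bearing, info in sensor_hits:
        if dist <= near_range_m:
            alarms.append(("obstacle_near", dist, bearing, info))
    for dist, bearing, info in camera_detections:
        if dist > near_range_m:
            alarms.append(("approaching_object", dist, bearing, info))
        else:
            # Near the vehicle, keep the image-based detection only if the sensor also sees it.
            confirmed = any(abs(b - bearing) < 10.0 and abs(d - dist) < 1.0
                            for d, b, _ in sensor_hits)
            if confirmed:
                alarms.append(("approaching_object_confirmed", dist, bearing, info))
    return alarms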
[0165]
Note that all or part of the functions of each unit of the monitoring apparatus of the present invention may be realized using dedicated hardware, or may be realized as software using a computer program.
[0166]
[The invention's effect]
As described above, according to the present invention, in a vehicle monitoring device that detects an approaching object using an optical flow, the detection accuracy is not reduced even on a curve, and even when traveling straight, an object running parallel to the own vehicle and a distant approaching object with little movement on the screen can be detected.
[Brief description of the drawings]
FIG. 1 is a block diagram illustrating a configuration of a monitoring device according to a first embodiment of the present invention.
FIG. 2 is a diagram for explaining a method of estimating the movement of the own vehicle, and is a conceptual diagram for explaining an Ackerman model.
FIG. 3 is a diagram for explaining a method of estimating the movement of the host vehicle, and is a conceptual diagram showing a two-dimensional movement of the vehicle.
FIG. 4 is a flowchart illustrating a flow of a background flow estimation process.
FIG. 5 is a diagram for describing a method of estimating a background flow, and is a conceptual diagram illustrating a motion when the vehicle turns.
FIG. 6 is a block diagram conceptually showing a configuration of an approaching object detection unit.
FIG. 7 is a flowchart illustrating an operation of a flow comparison unit.
FIG. 8 is a flowchart illustrating an operation of a noise removing unit.
FIG. 9 is a conceptual diagram for describing processing in a noise removing unit.
FIG. 10 is a conceptual diagram illustrating a method of estimating a background flow.
FIG. 11 is a flowchart showing a flow of processing of another example of background flow estimation.
FIG. 12 is a flowchart illustrating a flow of another example of the process of estimating the background flow.
FIG. 13 is an example of a camera image displaying a background flow according to the present invention.
FIG. 14 is a block diagram illustrating a configuration of a monitoring device according to a second embodiment of the present invention.
FIG. 15 is a diagram showing an example of a space model according to the present invention.
FIG. 16 is a diagram illustrating a relationship between a distance L and a background flow in the space model of FIG. 15.
FIG. 17 is a diagram showing another example of the space model according to the present invention.
FIG. 18 is a diagram illustrating a relationship between a width W and a background flow in the space model of FIG. 17.
FIG. 19 is a flowchart illustrating a flow of a background flow estimation process according to the second embodiment of the present invention.
FIG. 20 is a diagram for explaining a background flow estimation method according to the second embodiment of the present invention.
FIG. 21 is a diagram for explaining a method of estimating a background flow according to the second embodiment of the present invention.
FIG. 22 is a diagram for explaining a method of estimating a background flow according to the second embodiment of the present invention.
FIG. 23 is a diagram for explaining a method of estimating a background flow according to the second embodiment of the present invention.
FIG. 24 is an example of an image of a camera installed in a vehicle.
FIG. 25 is a diagram showing a result of detecting an approaching object from the camera image of FIG. 24 by the first conventional example.
FIG. 26 is a diagram showing a result of detecting an approaching object from the camera image of FIG. 24 according to the present invention.
FIG. 27 is a diagram illustrating an example of a hardware configuration according to the present invention.
FIG. 28 is a block diagram illustrating a configuration of a monitoring device according to a third embodiment of the present invention.
FIG. 29 is a conceptual diagram for describing a problem in a hierarchical image.
FIG. 30 is a block diagram illustrating a configuration of a monitoring device according to a fourth embodiment of the present invention.
FIG. 31 is a diagram for explaining processing according to the fourth embodiment of the present invention.
FIG. 32 is a diagram for explaining processing according to the fourth embodiment of the present invention.
FIG. 33 is a diagram illustrating an example of an installation position of an obstacle sensor according to the present invention.
FIG. 34 is an example of a camera image showing the rear side of the vehicle, and is a diagram for describing a problem during traveling on a curve in the first conventional example.
FIG. 35 is a diagram illustrating a relationship between a camera image and three-dimensional coordinates in the real world.
FIG. 36 is a diagram showing a relationship between a camera image and a real world coordinate system in a conventional example.
FIG. 37 is a diagram illustrating a relationship between a camera image and a real world coordinate system according to the present invention.
FIG. 38 is an example in which an optical flow of an approaching vehicle is superimposed and displayed on a camera image taken while traveling on a curve.
FIG. 39 is a background flow of the image in FIG. 38.
FIG. 40 is an example in which a flow obtained by subtracting a background flow is superimposed and displayed on a camera image.
FIG. 41 is a conceptual diagram for describing a method of detecting the movement of a vehicle using a camera image.
[Explanation of symbols]
1 Vehicle
2 Camera
11 Camera
12, 12A Optical flow detector
13 Vehicle motion estimation unit
14, 14A Space model estimator
15 Background flow estimator
16, 16A Approaching object detector
51, 52 Obstacle sensor
Vi Optical flow
T Vehicle motion vector
MS Road surface model
MW, MW1, MW2 Wall model

Claims (14)

  1. A monitoring device using a camera that captures the surroundings of a vehicle, wherein
    an optical flow is obtained from an image taken by the camera,
    a background flow, which is the optical flow of the image on the assumption of a background, is obtained based on the movement of the vehicle, and
    the optical flow is compared with the background flow to detect the movement of an object around the vehicle.
  2. The monitoring device according to claim 1, wherein the background flow is obtained using a space model that models the space being photographed by the camera.
  3. The monitoring device according to claim 2, wherein the space model is generated based on distance data of each object photographed by the camera.
  4. The monitoring device according to claim 3, wherein the vehicle is provided with an obstacle sensor, and the distance data is measured by the obstacle sensor.
  5. The monitoring device according to claim 2, wherein the space model includes at least a road surface model obtained by modeling the traveling road surface.
  6. The monitoring device according to claim 2, wherein the space model includes at least a wall surface model assuming a wall surface perpendicular to the traveling road surface.
  7. The monitoring device according to claim 6, wherein the wall surface is assumed to be on the rear side of the vehicle.
  8. The monitoring device according to claim 1, wherein, when the optical flow is compared with the background flow, it is determined whether or not the magnitude of the optical flow is larger than a predetermined value, and the comparison is performed using the angle difference when the magnitude is larger than the predetermined value, and without using the angle difference otherwise.
  9. The monitoring device according to claim 8, wherein the predetermined value is set according to the magnitude of the background flow at the corresponding position on the image.
  10. The monitoring device according to claim 1, wherein an approaching object candidate flow is identified from the optical flows by comparing the optical flow with the background flow, an approaching object candidate area is generated by associating approaching object candidate flows located near one another, and, when the area of the approaching object candidate area is smaller than a predetermined value, the approaching object candidate flow relating to the approaching object candidate area is determined to be noise.
  11. A monitoring device using a camera that captures the surroundings of a vehicle, wherein
    an optical flow is obtained from an image taken by the camera,
    a space flow, which is the movement of a point on the image in real-world coordinates, is obtained based on the optical flow, the movement of the vehicle, and a space model that models the space being photographed by the camera, and
    the movement of an object around the vehicle is detected based on the space flow.
  12. A monitoring method comprising: obtaining an optical flow from an image taken by a camera that captures the surroundings of a vehicle; obtaining, based on the movement of the vehicle, a background flow that is the optical flow of the image on the assumption of a background; and comparing the optical flow with the background flow to detect the movement of an object around the vehicle.
  13. The monitoring method according to claim 12, wherein the movement of the vehicle is estimated using outputs of a vehicle speed sensor and a steering angle sensor provided in the vehicle.
  14. A program for causing a computer to execute: a procedure of obtaining an optical flow for an image taken by a camera that captures the surroundings of a vehicle; a procedure of obtaining, based on the movement of the vehicle, a background flow that is the optical flow of the image on the assumption of a background; and a procedure of comparing the optical flow with the background flow to detect the movement of an object around the vehicle.
JP2003130008A 2002-05-09 2003-05-08 Monitoring device, monitoring method and monitoring program Active JP3776094B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2002134583 2002-05-09
JP2003130008A JP3776094B2 (en) 2002-05-09 2003-05-08 Monitoring device, monitoring method and monitoring program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003130008A JP3776094B2 (en) 2002-05-09 2003-05-08 Monitoring device, monitoring method and monitoring program

Publications (2)

Publication Number Publication Date
JP2004056763A true JP2004056763A (en) 2004-02-19
JP3776094B2 JP3776094B2 (en) 2006-05-17

Family

ID=31948903

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003130008A Active JP3776094B2 (en) 2002-05-09 2003-05-08 Monitoring device, monitoring method and monitoring program

Country Status (1)

Country Link
JP (1) JP3776094B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180014572A (en) 2016-08-01 2018-02-09 삼성전자주식회사 Method for processing event signal and event-based sensor performing the same

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06203161A (en) * 1993-01-05 1994-07-22 Fujitsu Ltd Extracting method for object from animation image and device therefor
JPH06282655A (en) * 1993-03-30 1994-10-07 Toyota Motor Corp Device for recognizing moving object
JPH0750769A (en) * 1993-08-06 1995-02-21 Yazaki Corp Backward side supervising method for vehicle
JP2000074645A (en) * 1998-08-27 2000-03-14 Yazaki Corp Device and method for monitoring periphery
JP2000090243A (en) * 1998-09-14 2000-03-31 Yazaki Corp Periphery monitoring device and method therefor
JP2001006096A (en) * 1999-06-23 2001-01-12 Honda Motor Co Ltd Peripheral part monitoring device for vehicle
JP2001266160A (en) * 2000-03-22 2001-09-28 Toyota Motor Corp Method and device for recognizing periphery
JP2002083297A (en) * 2000-06-28 2002-03-22 Matsushita Electric Ind Co Ltd Object recognition method and object recognition device

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7693303B2 (en) 2004-06-15 2010-04-06 Panasonic Corporation Monitoring system and vehicle surrounding monitoring system
US7916899B2 (en) 2004-06-15 2011-03-29 Panasonic Corporation Monitoring system and vehicle surrounding monitoring system
US7512251B2 (en) 2004-06-15 2009-03-31 Panasonic Corporation Monitoring system and vehicle surrounding monitoring system
EP2182730A2 (en) 2004-06-15 2010-05-05 Panasonic Corporation Monitor and vehicle peripheriy monitor
KR100738522B1 (en) 2004-12-21 2007-07-11 삼성전자주식회사 Apparatus and method for distinction between camera movement and object movement and extracting object in video surveillance system
JP2014241592A (en) * 2005-01-07 2014-12-25 クアルコム,インコーポレイテッド Optical flow based tilt sensor
DE102006010735B4 (en) * 2005-03-09 2016-03-10 Mitsubishi Jidosha Kogyo K.K. Vehicle environment monitoring device
US7925441B2 (en) 2005-03-09 2011-04-12 Mitsubishi Jidosha Kogyo Kabushiki Kaisha Vehicle periphery monitoring apparatus
JP2006285910A (en) * 2005-04-05 2006-10-19 Nissan Motor Co Ltd On-vehicle object detecting device and object detecting method
JP4529768B2 (en) * 2005-04-05 2010-08-25 日産自動車株式会社 On-vehicle object detection device and object detection method
JPWO2007000999A1 (en) * 2005-06-27 2009-01-22 パイオニア株式会社 Image analysis apparatus and image analysis method
JP4493050B2 (en) * 2005-06-27 2010-06-30 パイオニア株式会社 Image analysis apparatus and image analysis method
WO2007000999A1 (en) * 2005-06-27 2007-01-04 Pioneer Corporation Image analysis device and image analysis method
US8086046B2 (en) 2005-06-27 2011-12-27 Pioneer Corporation Image analysis device and image analysis method
WO2007029455A1 (en) * 2005-09-07 2007-03-15 Pioneer Corporation Scene monotonousness calculation device and method
US7602945B2 (en) 2005-09-07 2009-10-13 Hitachi, Ltd. Driving support apparatus
US7792327B2 (en) 2005-12-06 2010-09-07 Nissan Motor Co., Ltd. Apparatus and method for detecting a road boundary
US7899211B2 (en) 2005-12-07 2011-03-01 Nissan Motor Co., Ltd. Object detecting system and object detecting method
JP4622889B2 (en) * 2006-03-01 2011-02-02 トヨタ自動車株式会社 Image processing apparatus and image processing method
JP2007233755A (en) * 2006-03-01 2007-09-13 Toyota Motor Corp Image processor and image processing method
JP2007272511A (en) * 2006-03-31 2007-10-18 Casio Comput Co Ltd Information transmission system, imaging device, information output method, and information output program
JP2007300181A (en) * 2006-04-27 2007-11-15 Denso Corp Periphery monitoring apparatus and periphery monitoring method and program thereof
US8036424B2 (en) 2006-04-27 2011-10-11 Denso Corporation Field recognition apparatus, method for field recognition and program for the same
JP4676373B2 (en) * 2006-04-27 2011-04-27 株式会社デンソー Peripheral recognition device, peripheral recognition method, and program
WO2007132902A1 (en) * 2006-05-17 2007-11-22 Kagoshima University Object detector
JP2007334859A (en) * 2006-05-17 2007-12-27 Denso It Laboratory Inc Object detector
US7667581B2 (en) 2006-05-24 2010-02-23 Nissan Motor Co., Ltd. Pedestrian detector and detecting method using change of velocity of object in image
JP2007320024A (en) * 2006-06-01 2007-12-13 Samsung Electronics Co Ltd Anti-collision system, device and method for mobile robot remote control
JP2012045706A (en) * 2006-06-01 2012-03-08 Samsung Electronics Co Ltd Device and method of preventing collision for remote control of mobile robot
US7853372B2 (en) 2006-06-01 2010-12-14 Samsung Electronics Co., Ltd. System, apparatus, and method of preventing collision of remote-controlled mobile robot
JP2008027138A (en) * 2006-07-20 2008-02-07 Nissan Motor Co Ltd Vehicle monitoring device
JP2008066953A (en) * 2006-09-06 2008-03-21 Sanyo Electric Co Ltd Image monitoring apparatus
JP2008269073A (en) * 2007-04-17 2008-11-06 Denso Corp Object detector for vehicle
JP2008282106A (en) * 2007-05-08 2008-11-20 Fujitsu Ltd Method for detecting obstacle and obstacle detector
US8300887B2 (en) 2007-05-10 2012-10-30 Honda Motor Co., Ltd. Object detection apparatus, object detection method and object detection program
WO2009005025A1 (en) * 2007-07-03 2009-01-08 Konica Minolta Holdings, Inc. Moving object detection device
JP2009086748A (en) * 2007-09-27 2009-04-23 Saxa Inc Monitoring device and program
JP2009083632A (en) * 2007-09-28 2009-04-23 Denso Corp Moving object detecting device
JP2009181557A (en) * 2008-02-01 2009-08-13 Hitachi Ltd Image processor and vehicle detector equipped with the same
JP4533936B2 (en) * 2008-02-01 2010-09-01 日立オートモティブシステムズ株式会社 Image processing apparatus and vehicle detection apparatus including the same
JP2009210424A (en) * 2008-03-04 2009-09-17 Nissan Motor Co Ltd Object detection device for vehicle and moving object determination method for vehicle
WO2010058821A1 (en) 2008-11-19 2010-05-27 クラリオン株式会社 Approaching object detection system
CN102257533A (en) * 2008-11-19 2011-11-23 歌乐牌株式会社 Approaching object detection system
US8712097B2 (en) 2008-11-19 2014-04-29 Clarion Co., Ltd. Approaching object detection system
JP2010132053A (en) * 2008-12-03 2010-06-17 Koito Mfg Co Ltd Headlight control device
JP2010165352A (en) * 2009-01-16 2010-07-29 Honda Research Inst Europe Gmbh System and method for detecting object movement based on multiple three-dimensional warping and vehicle having system
JP2010204805A (en) * 2009-03-02 2010-09-16 Konica Minolta Holdings Inc Periphery-monitoring device and method
JP2010282615A (en) * 2009-05-29 2010-12-16 Honda Research Inst Europe Gmbh Object motion detection system based on combining 3d warping technique and proper object motion (pom) detection
KR101343331B1 (en) 2009-08-04 2013-12-19 아이신세이끼가부시끼가이샤 Vehicle-surroundings awareness support device
US8964034B2 (en) 2009-08-04 2015-02-24 Aisin Seiki Kabushiki Kaisha Vehicle surroundings awareness support device
WO2011016367A1 (en) * 2009-08-04 2011-02-10 アイシン精機株式会社 Vehicle-surroundings awareness support device
JP2011035777A (en) * 2009-08-04 2011-02-17 Aisin Aw Co Ltd Support system for perception of vehicle surroundings
US8755634B2 (en) 2009-08-12 2014-06-17 Nec Corporation Obstacle detection device and method and obstacle detection system
JP2011128838A (en) * 2009-12-17 2011-06-30 Panasonic Corp Image display device
JP2012198857A (en) * 2011-03-23 2012-10-18 Denso It Laboratory Inc Approaching object detector and approaching object detection method
JPWO2013118191A1 (en) * 2012-02-10 2015-05-11 三菱電機株式会社 Driving support device and driving support method
US9852632B2 (en) 2012-02-10 2017-12-26 Mitsubishi Electric Corporation Driving assistance device and driving assistance method
WO2013118191A1 (en) * 2012-02-10 2013-08-15 三菱電機株式会社 Driving assistance device and driving assistance method
JP2013205925A (en) * 2012-03-27 2013-10-07 Fuji Heavy Ind Ltd Vehicle exterior environment recognition device, and vehicle exterior environment recognition method
JP2013205924A (en) * 2012-03-27 2013-10-07 Fuji Heavy Ind Ltd Vehicle exterior environment recognition device, and vehicle exterior environment recognition method
JP2014165810A (en) * 2013-02-27 2014-09-08 Fujitsu Ten Ltd Parameter acquisition device, parameter acquisition method and program
JP2015088092A (en) * 2013-11-01 2015-05-07 富士通株式会社 Movement amount estimation device and movement amount estimation method
JP2015170133A (en) * 2014-03-06 2015-09-28 富士通株式会社 Trajectory estimation system, trajectory estimation method and program
US10867401B2 (en) 2015-02-16 2020-12-15 Application Solutions (Electronics and Vision) Ltd. Method and device for the estimation of car ego-motion from surround view images
JP2016177718A (en) * 2015-03-23 2016-10-06 富士通株式会社 Object detection apparatus, object detection method, and information processing program
JP2016190603A (en) * 2015-03-31 2016-11-10 いすゞ自動車株式会社 Road gradient estimation device and method of estimating road gradient
JP2017084019A (en) * 2015-10-26 2017-05-18 トヨタ自動車東日本株式会社 Object detection device, object detection method, and, object detection program

Also Published As

Publication number Publication date
JP3776094B2 (en) 2006-05-17

Similar Documents

Publication Publication Date Title
US10387733B2 (en) Processing apparatus, processing system, and processing method
US10055650B2 (en) Vehicle driving assistance device and vehicle having the same
US10445928B2 (en) Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types
US9330320B2 (en) Object detection apparatus, object detection method, object detection program and device control system for moveable apparatus
US9443154B2 (en) Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications
US10402664B2 (en) Processing apparatus, processing system, processing program, and processing method
KR101565006B1 (en) apparatus for providing around view and Vehicle including the same
US20160098839A1 (en) Estimating distance to an object using a sequence of images recorded by a monocular camera
US9884623B2 (en) Method for image-based vehicle localization
CN106054174B (en) It is used to cross the fusion method of traffic application using radar and video camera
JP5689907B2 (en) Method for improving the detection of a moving object in a vehicle
JP5867273B2 (en) Approaching object detection device, approaching object detection method, and computer program for approaching object detection
US7859652B2 (en) Sight-line end estimation device and driving assist device
US20150332103A1 (en) Processing apparatus, computer program product, and processing method
DE60126382T2 (en) Method and device for detecting objects
DE602004011164T2 (en) Device and method for displaying information
US8933797B2 (en) Video-based warning system for a vehicle
JP4809019B2 (en) Obstacle detection device for vehicle
US8190355B2 (en) Driving assistance and monitoring
JP3739693B2 (en) Image recognition device
US8175331B2 (en) Vehicle surroundings monitoring apparatus, method, and program
US9074906B2 (en) Road shape recognition device
US6411898B2 (en) Navigation device
US7612800B2 (en) Image processing apparatus and method
JP4871909B2 (en) Object recognition apparatus and object recognition method

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20050901

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20051122

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060105

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20060207

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20060221

R150 Certificate of patent or registration of utility model

Ref document number: 3776094

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100303

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110303

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120303

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130303

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140303

Year of fee payment: 8