JP2009025932A - Obstacle recognition device - Google Patents

Obstacle recognition device

Info

Publication number
JP2009025932A
Authority
JP
Japan
Prior art keywords
image
obstacle
vehicle
road surface
monitoring range
Legal status
Granted
Application number
JP2007186549A
Other languages
Japanese (ja)
Other versions
JP4854619B2 (en)
Inventor
Toshio Ito
敏夫 伊東
Current Assignee
Daihatsu Motor Co Ltd
Original Assignee
Daihatsu Motor Co Ltd
Application filed by Daihatsu Motor Co Ltd
Priority to JP2007186549A
Publication of JP2009025932A
Application granted
Publication of JP4854619B2
Status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To recognize an obstacle reliably and stably by a simple method through simple processing of captured images.

SOLUTION: Images of substantially the same road surface area, photographed from a plurality of different directions by a photographing means 3 of the traveling host vehicle 1, are acquired by an image acquisition means; a prescribed monitoring range included in the captured image from each direction is extracted and identified by an identification means of an image processing means 4; and, based on the identification result, an obstacle perpendicular to the road surface ahead of or behind the host vehicle 1 is recognized stably and reliably by a recognition means of the image processing means 4 through simple detection of differences between the images, without performing complicated image processing or the like.

COPYRIGHT: (C)2009, JPO&INPIT

Description

The present invention relates to an obstacle recognition device that recognizes obstacles on the road on which the host vehicle travels by processing captured images.

Conventionally, in realizing pre-crash safety for a vehicle (damage mitigation, collision avoidance, and the like), how to recognize obstacles ahead of and behind the traveling host vehicle is an important issue.

As a device for recognizing this kind of obstacle, a conventional apparatus has been proposed in which a camera mounted on the host vehicle photographs one direction (for example, ahead of the host vehicle) at fixed intervals, a projective transformation is applied to the images captured at successive times ta and tb and their difference is taken, feature points exhibiting a temporal shift are detected from the difference, and the optical flow of motion vectors is detected from these feature points to recognize obstacles such as a vehicle ahead of the host vehicle (objects standing perpendicular to the road surface) (see, for example, Patent Document 1).
Japanese Patent No. 3463858 (for example, claim 1, paragraphs [0014], [0041]-[0045], [0118], FIGS. 1 to 3)

In the conventional apparatus described in Patent Document 1, by applying a projective transformation to images captured at successive times and taking their difference, the characters and figures of road markings drawn on the road surface (paint on the road surface) are removed from the difference image, the motion vectors are detected from that difference image, and obstacles on the road can be detected without misrecognizing road markings. However, this approach requires extremely complicated image processing to detect the optical flow from images viewed from a single direction such as ahead of the host vehicle, and because obstacles are recognized from slight motion vectors in the image, it is not easy to recognize them reliably and stably.

Moreover, in the image of an obstacle, the lower end portion close to the road surface becomes dark under the influence of the obstacle's own shadow and the like. Furthermore, when the projective transformation is applied, the shape of that lower end portion changes little with changes in the distance between the host vehicle and the obstacle, owing to the characteristics of the projective transformation. Consequently, in the difference image the outline of the obstacle's lower end near the road surface has almost disappeared, and it is therefore also difficult to determine, from the image distance to the obstacle's lower end in the difference image, the distance between the obstacle and the host vehicle that is needed for collision avoidance and prediction.

Further, with the conventional apparatus, if the obstacle is, for example, a preceding vehicle traveling ahead of the host vehicle at the same speed, the distance between the obstacle and the host vehicle does not change and almost no difference arises, so such obstacles cannot be recognized.

That is, the conventional obstacle recognition device of this type uses images captured from a single direction, such as ahead of the host vehicle, and recognizes obstacles from slight differences caused by temporal changes in the distance and shape between the host vehicle and an obstacle such as a preceding vehicle. It therefore not only requires complicated image processing to detect optical flow; recognition itself is not easy, obstacles may go unrecognized depending on the situation, and stable and reliable obstacle recognition cannot be achieved.

An object of the present invention is to recognize obstacles reliably and stably by a simple method through simple processing of captured images.

To achieve the above object, the obstacle recognition device of the present invention comprises: photographing means capable of photographing the road surface as viewed from a plurality of directions of the traveling host vehicle; image acquisition means for acquiring images of substantially the same road surface area captured by the photographing means from a plurality of different directions; identification means for extracting and identifying a predetermined monitoring range included in the captured images from each direction acquired by the image acquisition means; and recognition means for recognizing, based on the identification result of the identification means, an image portion that differs between the captured images of the monitoring range as an obstacle perpendicular to the road surface (claim 1).

Preferably, in addition to extracting the monitoring range, the identification means performs a projective transformation to identify the monitoring range of each captured image (claim 2).

The obstacle recognition device of the present invention may also, in the configuration of claim 1, further comprise road surface information acquisition means for acquiring road surface information such as the shape of the road surface on which the host vehicle travels, the identification means performing, in addition to the extraction of the monitoring range, image processing based on the road surface information to identify the monitoring range of each captured image (claim 3).

In the invention of claim 1, taking the plurality of directions to be the two directions ahead of and behind the host vehicle for simplicity of explanation, the image acquisition means acquires, for example, an image of a certain road surface area ahead of the host vehicle captured by the photographing means at time tm (a front image) and an image of substantially the same road surface area behind the host vehicle captured by the photographing means at time tn, after the host vehicle has passed over that road surface area (a rear image).

The identification means then extracts substantially the same monitoring range from the front image and the rear image acquired by the image acquisition means and performs identification based on the differences between those images.

Here, if the monitoring range is set, for example, to a range in which obstacles ahead of and behind the host vehicle may exist, then when there is an obstacle ahead of the host vehicle, the obstacle appears in the monitoring range of the front image but not in that of the rear image. Conversely, when there is an obstacle behind the host vehicle, no obstacle appears in the monitoring range of the front image, but one does appear in the monitoring range of the rear image.

That is, when an obstacle exists ahead of or behind the host vehicle, the obstacle is contained in the monitoring range of only one of the front image and the rear image, and the contents of the two monitoring ranges differ greatly.

Therefore, without performing complicated image processing such as optical flow detection, an obstacle ahead of or behind the host vehicle can be recognized easily, stably, and reliably by simply detecting differences between the images. In this case, even obstacles that are preceding or following vehicles traveling at the same speed as the host vehicle can, of course, be recognized reliably.

When no obstacle exists either ahead of or behind the host vehicle, the monitoring ranges of both the front image and the rear image contain only similar road-surface imagery with no obstacle, so the images match and it can be reliably recognized that no obstacle exists ahead of or behind the host vehicle. When obstacles exist both ahead of and behind the host vehicle, obstacles appear in the monitoring ranges of both images, but they are distinct objects; moreover, even for vehicles of the same type, the front and rear shapes usually differ greatly, so their shape, size, position, and so on never coincide. In this case, too, the two images do not match, so the presence of obstacles ahead of and behind the host vehicle can be reliably recognized.

The same applies not only when the plurality of directions is any two of the front, rear, left, and right of the host vehicle, but also when it is any three or more of them: based on the presence or absence of an obstacle in the monitoring range of each captured image, obstacles can be recognized easily and reliably by the same simple image-comparison processing as described above.
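The comparison described above can be sketched as a simple difference check between two aligned monitoring ranges. The following Python fragment is a minimal illustration only, assuming the monitoring ranges have already been extracted and aligned as equal-sized grayscale grids; the function name, pixel values, and threshold are hypothetical, not taken from the patent:

```python
def differs(range_a, range_b, threshold=30):
    """Return True if two aligned monitoring-range images differ
    by more than `threshold` in brightness at any pixel."""
    return any(
        abs(pa - pb) > threshold
        for row_a, row_b in zip(range_a, range_b)
        for pa, pb in zip(row_a, row_b)
    )

# Toy 2x3 grayscale monitoring ranges: road surface plus a bright
# obstacle patch ahead, versus the same road area seen later from behind.
front_range = [[90, 90, 90],
               [90, 200, 90]]   # bright patch: obstacle ahead
rear_range  = [[90, 90, 90],
               [90, 90, 90]]    # obstacle absent behind

obstacle_present = differs(front_range, rear_range)
print(obstacle_present)  # True
```

When the two ranges contain only the same road surface, `differs` returns False, corresponding to the "no obstacle" identification result.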

According to the invention of claim 2, the projective transformation performed by the identification means yields, for each captured image, an image of a plane looking down on the monitoring range from above (a bird's-eye plane).

In these bird's-eye images, an obstacle perpendicular to the road surface within the monitoring range changes shape because of the difference in capture times, whereas road markings painted on the road surface within the monitoring range hardly change shape.

The identification means can therefore remove the road markings from the bird's-eye images after the projective transformation and identify obstacles even more accurately.

According to the invention of claim 3, instead of performing the projective transformation, the identification means performs image processing based on the road surface information from the road surface information acquisition means to extract obstacles and road markings in the monitoring range of each captured image, and can identify obstacles easily and accurately by removing the road markings on the basis of the differences and similarities between those images.

Next, to describe the present invention in more detail, an embodiment corresponding to claims 1 and 2 will be described in detail with reference to FIGS. 1 to 10.

FIG. 1 shows the block configuration of an obstacle recognition device 2 mounted on the host vehicle 1, FIG. 2 shows a configuration example of the photographing means of the obstacle recognition device 2, FIG. 3 is a flowchart for explaining the operation of the obstacle recognition device 2, FIG. 4 shows a processing example of its captured images, and FIG. 5 shows the identification result.

FIG. 6 is an explanatory diagram of the projective transformation, FIG. 7 illustrates an example in which only the area ahead of the host vehicle 1 is photographed for recognition, and FIG. 8 illustrates the projective transformation of the photographing result of FIG. 7. FIG. 9 illustrates a processing example of the captured images of FIG. 7 when no obstacle is present, and FIG. 10 illustrates a processing example of the captured images of FIG. 7 when an obstacle is present.

(Configuration)
As shown in FIG. 1, the obstacle recognition device 2 mounted on the vehicle 1 comprises photographing means 3, image processing means 4 of microcomputer configuration that processes and identifies the captured images, and data storage means 5 that rewritably stores the captured images and the like.

The photographing means 3 consists of a monochrome or color monocular camera capable of photographing the road surface as viewed from a plurality of directions of the traveling host vehicle 1, and outputs the frame images (or field images) captured moment by moment.

The plurality of directions refers to the directions around the host vehicle 1 as seen from it; in practice, these are all or some (two or more) of the front, rear, left, and right directions of the host vehicle 1.

In the present embodiment, for simplicity of explanation, the photographing directions of the photographing means 3 are taken to be the two directions ahead of and behind the host vehicle 1.

To photograph the road surface in the plurality of directions, the photographing means 3 is formed, for example, by providing a monocular camera for each photographing direction. In the present embodiment, in which the plurality of directions are the two directions ahead of and behind the host vehicle 1, monocular cameras 3a and 3b are mounted, as shown in FIG. 2, at upper positions at the front and rear of the interior of the host vehicle 1, each aimed from above at the same appropriate angle obliquely downward at the road surface, thereby forming the photographing means 3.

The monocular camera 3a is a camera mounted on the vehicle 1 as a so-called pre-crash sensor, and the monocular camera 3b is, for example, a camera that also serves as a back sensor.

The photographing means 3 may also capture images of the road surface obliquely below around the entire circumference of the host vehicle 1. In that case, for example, a conical mirror is provided in the upper part of the vehicle interior of the host vehicle 1, an image of the entire circumference of the traveling host vehicle 1 is reflected in this mirror, and that reflected image is captured by one or more monocular cameras.

Next, the image processing means 4 forms the image acquisition means, identification means, and recognition means of the present invention, and repeatedly executes the recognition processing program of steps A1 to A6 in FIG. 3 while the host vehicle 1 is traveling.

The image acquisition means acquires images of substantially the same road surface area captured by the photographing means 3 from a plurality of different directions. In the present embodiment it operates in steps A1 and A2 of FIG. 3: it captures, for example, the image P(t1) of the road surface area ahead of the host vehicle 1 in FIG. 4 photographed by the monocular camera 3a at time t1 (hereinafter, the front image) and holds it in the data storage means 5, and then captures the image P(t2) of substantially the same road surface area behind the host vehicle 1 in FIG. 4 photographed by the monocular camera 3b at time t2, delayed by a time determined from the vehicle speed obtained from a vehicle speed sensor (not shown) and the like (hereinafter, the rear image), and holds it in the data storage means 5.
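The delay between t1 and t2 at which the rear camera sees the same road patch follows from the spacing of the two cameras' ground footprints and the vehicle speed. The patent only says the time is "determined from the vehicle speed and the like"; the following is a minimal sketch under that reading, with an invented 6 m footprint spacing and uniform speed assumed:

```python
def rear_capture_delay(footprint_spacing_m, speed_mps):
    """Time for the vehicle to carry the rear camera's view onto the
    road patch the front camera photographed (uniform speed assumed)."""
    if speed_mps <= 0:
        raise ValueError("vehicle must be moving forward")
    return footprint_spacing_m / speed_mps

# e.g. camera footprints 6 m apart, vehicle at 36 km/h = 10 m/s
delay_s = rear_capture_delay(6.0, 10.0)
print(delay_s)  # 0.6
```

In practice the delay would be recomputed continuously from the vehicle speed sensor so that P(t2) covers substantially the same road area as P(t1).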

The identification means extracts and identifies a predetermined monitoring range included in the captured images from a plurality of directions acquired by the image acquisition means. In the present embodiment it operates in steps A3 and A4 of FIG. 3: it applies the projective transformation described later to at least the road surface portion of the front image P(t1) and the rear image P(t2), and extracts and identifies a predetermined monitoring range, including any obstacle, from both images P(t1) and P(t2).

The recognition means operates in step A5 of FIG. 3; based on the identification result of the identification means, it recognizes image portions that differ between the monitoring-range images P(t1) and P(t2) as obstacles perpendicular to the road surface, and sends the recognition result to collision prediction processing means (not shown) of the host vehicle 1 for damage mitigation, collision avoidance, and the like.

Next, the projective transformation and related processing will be described concretely.

First, the projective transformation is applied in order to remove unwanted road markings from the images P(t1) and P(t2). In general, as shown in FIG. 6, it is a transformation in which a point P(x, y) in the x-y coordinate system of a plane L is projected, with respect to a projection center O, to a point P'(u, v) in the u-v coordinate system of another plane L'. Under this transformation, the points P, A, B, C, and D on the plane L are projected onto the plane L' as the points P', A', B', C', and D'.

The projective transformation is expressed by equations (1) and (2) below using the parameters a11, a12, a13, a21, a22, a23, a31, a32, and a33.

However, the parameters a11, ..., a33 are subject to the condition of equation (3) below.
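Equations (1) to (3) themselves are not reproduced in this text. A nine-parameter planar projective transformation is conventionally written as u = (a11·x + a12·y + a13)/(a31·x + a32·y + a33) and v = (a21·x + a22·y + a23)/(a31·x + a32·y + a33), with a normalization condition on the parameters; the sketch below assumes that standard form, and its parameter values are illustrative only:

```python
def project(params, x, y):
    """Apply a 9-parameter planar projective transformation to (x, y).

    `params` is ((a11, a12, a13), (a21, a22, a23), (a31, a32, a33)).
    Assumed form: u = (a11*x + a12*y + a13) / (a31*x + a32*y + a33),
    and similarly for v with the second row of parameters.
    """
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = params
    w = a31 * x + a32 * y + a33
    u = (a11 * x + a12 * y + a13) / w
    v = (a21 * x + a22 * y + a23) / w
    return u, v

# Identity transformation leaves points unchanged.
identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(project(identity, 3.0, 4.0))  # (3.0, 4.0)

# A pure translation of the plane by (+2, -1).
shift = ((1, 0, 2), (0, 1, -1), (0, 0, 1))
print(project(shift, 3.0, 4.0))  # (5.0, 3.0)
```

With parameters calibrated from the camera pose, the same mapping warps the oblique road-surface view into the bird's-eye plane used below.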

Next, the removal of road markings by this projective transformation will be described.

First, as shown in FIG. 7, suppose that only a monocular camera 3aa, identical to the monocular camera 3a, is mounted on the host vehicle 1, and that while the host vehicle 1 is traveling, the camera 3aa photographs a fixed road surface range ahead of the vehicle obliquely from above, yielding a time series of captured images P(t) of the area ahead. A projective transformation, with the plane L as the imaging plane and the plane L' as the bird's-eye plane after transformation, is then applied to these captured images P(t), converting each captured image P(t), as shown for example in FIG. 8, into a bird's-eye image P(t)' viewed from above.

In this case, in the bird's-eye image P(t)' after the projective transformation, the temporal change in shape differs between an obstacle (such as a preceding vehicle) α perpendicular to the road surface in the captured image P(t) and the road markings β and β* painted on the road surface. The road markings β are the white lines on the left and right of the driving lane, and the road markings β* are arrows, numerals, and the like within the lane.

That is, when the image (front image) P(t11) captured by the monocular camera 3aa at time t = t11 and the image (front image) P(t22) captured at time t = t22, Δt seconds later, are each projectively transformed into the bird's-eye images P(t11)' and P(t22)', the road markings β and β* lying in the road plane hardly change shape after the transformation even though the capture times t11 and t22 differ, whereas the obstacle α, which has height, changes shape after the transformation as the capture times t11 and t22 differ.

Therefore, when the brightness difference of each pixel between the images P(t11)' and P(t22)' is computed for identification, the road markings β and β* are substantially removed.

With the road markings β and β* removed, the identification result is "no image" if no obstacle α is present, and "difference image present", owing to the difference in the shape of the obstacle α, if one is present; this difference in identification results makes it possible to recognize the obstacle α.
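The marking-removal step amounts to a per-pixel absolute brightness difference of the two bird's-eye images: pixels of flat road markings coincide and cancel, while pixels of the height-distorted obstacle survive. A toy illustration in Python, with pixel values and names invented for the example:

```python
def brightness_difference(img_a, img_b):
    """Per-pixel absolute brightness difference of two aligned images."""
    return [[abs(pa - pb) for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Bird's-eye images at t11 and t22: the lane line (value 255) stays put,
# while the obstacle patch (value 180) shifts between the two frames.
bird_t11 = [[255, 50, 180],
            [255, 50,  50]]
bird_t22 = [[255, 50,  50],
            [255, 50, 180]]

diff = brightness_difference(bird_t11, bird_t22)
print(diff)  # [[0, 0, 130], [0, 0, 130]] - markings cancel, obstacle remains
```

An all-zero difference image corresponds to the "no image" identification result; any surviving nonzero region corresponds to "difference image present".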

To explain with an actual example of captured images: when no preceding vehicle constituting an obstacle α is present within the monitoring range ahead of the host vehicle 1, the image P(tx1) at time tx1 in FIG. 9, obtained by photographing a fixed road surface area ahead of the vehicle with the monocular camera 3aa, and the image P(ty1) at time ty1, Δt seconds later, contain, for example, only the road markings β and β* in the monitoring ranges x and y ahead of the vehicle, and in their projectively transformed images P*(tx1) and P*(ty1) only the road markings β and β* are present, with substantially the same shape (including dimensions).

On the other hand, when a preceding vehicle constituting an obstacle α is present within the monitoring range ahead of the host vehicle 1, the image P(tx2) at time tx2 in FIG. 10, obtained by photographing a fixed road surface area ahead of the vehicle with the monocular camera 3aa, and the image P(ty2) at time ty2, Δt seconds later, contain, for example, the obstacle α and the road marking β in the monitoring ranges x and y ahead of the vehicle; in their projectively transformed images P*(tx2) and P*(ty2), the obstacle α appears with differing shapes while the road marking β appears with substantially the same shape.

When the brightness difference of each pixel between the images P*(tx1) and P*(ty1) in FIG. 9 is computed, the identification image P1xy in that figure is obtained. Similarly, when the brightness difference of each pixel between the images P*(tx2) and P*(ty2) in FIG. 10 is computed, the identification image P2xy in that figure is obtained.

In the identification image P1xy, the road markings β and β*, which have substantially the same shape in the monitoring ranges x and y, cancel each other and substantially disappear. In the identification image P2xy, the road markings β and β* likewise cancel and substantially disappear, but the obstacle α, whose shape differs between the two images, leaves a contour-like difference image.

Because of this difference between the identification images P1xy and P2xy, it is in principle possible to recognize an obstacle α ahead of the host vehicle 1 even from images captured from the same direction at successive times with the monocular camera 3aa. Likewise, by mounting a monocular camera similar to the monocular camera 3b on the host vehicle 1 and applying the same projective transformation to images of the area behind the vehicle captured at successive times, obstacles behind the host vehicle 1 can in principle also be recognized.

However, when an obstacle α is present, for example, the successively captured images P(tx2) and P(ty2) both contain the target obstacle α, as is clear from FIG. 10, and the difference in the shape of the obstacle α between the transformed images P*(tx2) and P*(ty2) is also slight; the identification-result image P2xy based on that difference therefore shows only a small area for the obstacle α and is not distinct.

On the other hand, when no obstacle α is present, as is clear from FIG. 9, the shapes and positions of the road markings β and β* in the transformed images P*(tx1) and P*(ty1) of the successively captured images P(tx1) and P(ty1) do not coincide perfectly, and in practice an image based on these mismatches remains in the identification-result image P1xy.

For this reason, a method that recognizes obstacles by projectively transforming images captured from the same direction at successive times is prone to misrecognizing the obstacle α depending on the driving environment and the like, and it is difficult to recognize the obstacle α reliably and stably.

Moreover, as the identification-result image P2xy makes clear, the lower end of the obstacle α changes little in shape and its outline is almost lost. It is therefore also difficult to measure, on the image P2xy, the image distance to the lower end of the obstacle α and thereby obtain the distance between the obstacle α and the host vehicle 1.

In the present invention, therefore, as described above, the photographing means 3 photographs the same road surface area from a plurality of different directions as the host vehicle 1 travels.

Specifically, in this embodiment, using the monocular cameras 3a and 3b of FIG. 2, the monocular camera 3a photographs a given road surface area as the front image P(t1) at, for example, time t1, and at time t2, after a lapse of several seconds determined by the travel speed of the vehicle 1 and the like, the monocular camera 3b photographs the same road surface area as the rear image P(t2). The image acquisition means then takes in the front image P(t1) at time t1 and the rear image P(t2) at time t2.
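The delay t2 − t1 after which the rear camera sees the road patch imaged by the front camera follows directly from the vehicle speed and the camera geometry. A minimal sketch of that calculation (the look-ahead and look-behind distances are hypothetical example values, not figures from the specification):

```python
def capture_delay_s(lookahead_m: float, lookbehind_m: float, speed_mps: float) -> float:
    """Time for the vehicle to carry the rear camera's view onto the road
    patch the front camera imaged: the patch lies lookahead_m ahead at t1
    and must appear lookbehind_m behind the vehicle at t2."""
    if speed_mps <= 0:
        raise ValueError("vehicle must be moving forward")
    return (lookahead_m + lookbehind_m) / speed_mps

# e.g. patch 10 m ahead at t1, re-imaged 5 m behind at t2, at 50 km/h:
delay = capture_delay_s(10.0, 5.0, 50 / 3.6)   # on the order of a second
```

This matches the "several seconds" order of magnitude mentioned above for typical urban speeds and camera placements.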

At this time, if a preceding vehicle (automobile) is present ahead of the vehicle 1 as the obstacle α, the front image P(t1) contains the obstacle α but the rear image P(t2) does not, as shown in FIG. 4. Likewise, if a following vehicle (motorcycle) is present behind the vehicle 1 as another obstacle α′, the rear image P(t2) contains the obstacle α′ but the front image P(t1) does not, as shown in FIG. 4.

The identification means then, in this embodiment, projectively transforms the images P(t1) and P(t2) as they are into overhead-plane (bird's-eye) images. When the images P(t1) and P(t2) on the imaging planes of the monocular cameras 3a and 3b contain the white lines of the road marking β, it is preferable for image comparison that those white lines be detected by well-known Hough-transform lane-width recognition or the like, and that the parameters a11–a33 be set so that the detected white lines become vertical lines in the transformed images. The white lines may of course be either continuous or broken. When traveling on a road without white lines, the projective transformation may be performed with, for example, preset standard parameters.
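The parameters a11–a33 form a 3×3 projective (homography) matrix applied to homogeneous pixel coordinates. A minimal numpy sketch of that mapping; the matrix values below are purely illustrative, not calibration values from the patent:

```python
import numpy as np

def project_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 3x3 projective transformation (parameters a11..a33) to
    Nx2 pixel coordinates: [u', v', w']^T = H [u, v, 1]^T, followed by
    division by w' to return to the image plane."""
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T        # N x 3 homogeneous result
    return homog[:, :2] / homog[:, 2:3]         # perspective divide

# Illustrative matrix only: the shear and perspective terms stand in for
# parameters chosen so detected white lines become vertical in the
# bird's-eye view.
H = np.array([[1.0, -0.5,   100.0],
              [0.0,  0.2,     0.0],
              [0.0, -0.001,   1.0]])
birds_eye = project_points(H, np.array([[320.0, 240.0]]))
```

With the identity matrix the transformation leaves points unchanged, which is a convenient sanity check when fitting the a11–a33 parameters.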

Furthermore, from the projectively transformed images of P(t1) and P(t2), the identification means extracts, as the monitoring range in which an obstacle is expected, the central part of the road surface below the horizon within the lane in which the vehicle 1 travels, forming for example the front and rear predicted images P*(t1) and P*(t2) of FIG. 4. The two images P*(t1) and P*(t2) are then superimposed with, for example, their lane widths aligned, the brightness difference is computed pixel by pixel, and the differences between them are detected for identification.
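The pixel-wise brightness differencing over the aligned monitoring range can be sketched as follows; the brightness threshold is an assumed illustrative value, not one specified in the patent:

```python
import numpy as np

def identify(front_bev: np.ndarray, rear_bev: np.ndarray,
             thresh: int = 30) -> np.ndarray:
    """Superimpose the aligned bird's-eye predicted images P*(t1) and
    P*(t2) and flag pixels whose brightness differs: the shared road
    surface cancels out, while an obstacle seen in only one view remains."""
    diff = np.abs(front_bev.astype(np.int16) - rear_bev.astype(np.int16))
    return diff > thresh          # boolean mask of candidate obstacle pixels

road = np.full((4, 4), 100, dtype=np.uint8)       # plain road surface
with_obstacle = road.copy()
with_obstacle[1:3, 1:3] = 200                     # bright patch in one view only
mask = identify(with_obstacle, road)
```

When both views show only road surface, the mask is empty, which is exactly the blank identification result described below for the obstacle-free case.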

The identification-result image obtained by superimposing P*(t1) and P*(t2) then becomes essentially the obstacle-α portion of the front image P*(t1) if an obstacle exists only ahead, as shown in FIG. 5(a), and essentially the obstacle-α′ portion of the rear image P*(t2) if an obstacle exists only behind, as shown in FIG. 5(b). If no obstacle α, α′ exists either ahead or behind, the identification result is a so-called blank image; if obstacles α, α′ exist both ahead and behind, the identification result is, for example, the superposition of the images of FIGS. 5(a) and 5(b).

Even during night or tunnel driving, the headlights and taillights of the vehicle 1 and the obstacle α secure sufficient brightness in the portions needed to identify the travel path, so identification can be performed without problems, just as in daytime.

The recognition means then determines, for each portion where the brightness of the predicted images P*(t1) and P*(t2) differs, that an obstacle α or α′ perpendicular to the road surface, such as a preceding or following vehicle, exists on the side of the predicted image whose brightness differs from its surroundings, and sends the recognition result to processing means (not shown) of the vehicle 1 for damage mitigation, collision avoidance, and the like.

When no obstacles α, α′ exist ahead or behind, the predicted images P*(t1) and P*(t2) are similar images of road surface only, no portions of differing brightness exist between them, and it is recognized that no obstacle exists either ahead or behind. When obstacles α, α′ exist both ahead and behind, the identification-result image is, for example, the superposition of the images of FIGS. 5(a) and 5(b); since the obstacles α and α′ in P*(t1) and P*(t2) are distinct objects, and even vehicles of the same type usually differ greatly between their front and rear views, their shapes, sizes, and positions never coincide. A plurality of obstacle images α, α′ are therefore obtained as the recognition result, and it is recognized that obstacles exist both ahead and behind.

Next, the recognized obstacles α, α′ have clear outlines, including their lower ends, in the identification-result image. The recognition means can therefore accurately determine, on the image, the distance (inter-vehicle distance) from the lower end of each obstacle α, α′ to the vehicle 1, detecting the moment-to-moment distance between the vehicle 1 and the obstacles α, α′; this detected distance is likewise sent to the damage-mitigation and collision-avoidance processing means described above.
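On a bird's-eye image each pixel row corresponds to a fixed longitudinal distance, so the inter-vehicle distance follows directly from the lowest (nearest) obstacle row. A minimal sketch; the metres-per-pixel scale and the row of the host vehicle are assumed calibration values:

```python
import numpy as np

def range_to_obstacle(mask: np.ndarray, m_per_px: float, ego_row: int) -> float:
    """Distance from the host vehicle to the nearest obstacle, read off
    the lowest obstacle row of the bird's-eye mask; ego_row is the image
    row corresponding to the host vehicle itself."""
    rows = np.flatnonzero(mask.any(axis=1))
    if rows.size == 0:
        raise ValueError("no obstacle in monitoring range")
    lower_end = rows.max()                    # nearest obstacle pixel row
    return (ego_row - lower_end) * m_per_px

mask = np.zeros((100, 20), dtype=bool)
mask[40:55, 8:12] = True                      # obstacle blob, lower end at row 54
gap_m = range_to_obstacle(mask, m_per_px=0.25, ego_row=99)
```

Because the lower end stays sharp in the identification result, this row-to-distance lookup is stable frame to frame, which is what makes the moment-to-moment distance usable for collision prediction.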

Accordingly, in this embodiment, obstacles α, α′ ahead of, behind, or both ahead of and behind the vehicle 1 can be recognized stably and reliably by simple difference detection between the forward and rearward captured images, without complicated image processing such as optical-flow detection.

In this case, the obstacles α, α′ can of course be recognized reliably even when they are a preceding or following vehicle traveling at the same speed as the vehicle 1.

Also, because the captured images P(t1) and P(t2) are each projectively transformed into the overhead-plane images P*(t1) and P*(t2), the obstacle α can be recognized without problems even when the travel path undulates or when the vehicle 1 is traveling uphill or downhill.

The obstacle α is not limited to a moving body such as a vehicle or a person; a stationary object such as a utility pole or a signboard can be recognized in the same way.

It is also possible to recognize the obstacles α, α′ in the images P(t1), P(t2) by detecting horizontal and vertical edges with a so-called edge-histogram method, but that approach has difficulty when the obstacles α, α′ are thin vertical objects that produce few horizontal edges, such as people, motorcycles, and utility poles. The present invention, by contrast, requires no horizontal or vertical edge detection and can recognize the obstacles α, α′ reliably even when they are thin vertical objects.

Next, because the images of the obstacles α, α′ obtained as the identification result are sharp and can be recognized clearly in their entirety, including their lower ends, the distances from the vehicle 1 to the obstacles α, α′, which change moment to moment and are important for collision prediction, can be accurately detected and tracked from the lengths measured on the image from the lower ends of the obstacles α, α′ to the vehicle 1.

The present invention is not limited to the embodiment described above; various modifications other than those described may be made without departing from its spirit.

For example, in the above embodiment the identification means first applies the projective transformation to the images captured from different directions and then extracts the monitoring range to recognize obstacles; alternatively, the identification means may first extract the monitoring range from the images captured from different directions and then apply the projective transformation to the monitoring range to recognize obstacles.

Next, the above embodiment recognizes forward and rearward obstacles α, α′ from the front image P(t1) and rear image P(t2) of the road surface photographed from the front and rear of the vehicle 1, but the combination of photographing directions is not limited to front and rear; for example, it may be two directions consisting of the front or rear plus the left or right side, or any three or more of the front, rear, left, and right directions.

Specifically, when combining the rear and a side direction to prevent, for example, catch-in collisions with an obstacle such as a motorcycle when turning right or left, the monocular camera 3b described above is mounted on the vehicle 1 as shown in FIG. 11, and monocular cameras 3c, 3d are attached to one or both of the left and right side mirrors of the vehicle 1, or the like, as the photographing means.

Taking the case of preventing a catch-in collision when turning left, which is relevant when the vehicle 1 is right-hand drive: the monocular camera 3c on the left side mirror photographs the road surface as viewed from the left side at time t1, and several seconds later, at time t2, the monocular camera 3b photographs the same road surface area as viewed from the rear.

At this time, the left part of the rear image P(t2) of the monocular camera 3b covers substantially the same range (the monitoring range) as the image captured by the monocular camera 3c at time t1 (hereinafter, the left-side image P(t1)), photographed from a different direction; if an obstacle exists to the side, it is contained only in the left-side image P(t1), and if an obstacle exists behind, it is contained only in the rear image P(t2).

The image acquisition means, identification means, and recognition means of the image processing means 4 then process the left-side image P(t1) and the rear image P(t2) as in the above embodiment, identifying the monitoring range in the overhead-plane images of the two and recognizing obstacles.

To prevent a catch-in collision when turning right, the monocular camera 3d on the right side mirror photographs the road surface from the right at time t1, and several seconds later, at time t2, the monocular camera 3b photographs the same road surface area from the rear, and obstacles are recognized in the same manner.

When recognizing obstacles from images captured from any three or more of the front, rear, left, and right directions, the common part of substantially the same road surface area photographed at different times from the different directions is extracted as the monitoring range, and obstacles can be recognized by the image acquisition means, identification means, and recognition means of the image processing means 4 as in the above embodiment.

Next, in the above embodiment the identification means recognized obstacles by applying the projective transformation to the images captured from different directions. Alternatively, the vehicle 1 may further be provided with road surface information acquisition means, such as a car navigation device, that acquires road surface information (map data) such as the road-surface shape of the travel path of the vehicle 1; the projective-transformation processing may then be omitted from the identification means of the image processing means 4, and the identification means may, in addition to extracting the monitoring range, perform image processing based on the road surface information to identify the monitoring range of each captured image and recognize obstacles (corresponding to claim 3).

Specifically, for the road surface ranges of the photographed front image P(t1) and rear image P(t2), the identification means first performs image processing based on the road surface information from the road surface information acquisition means to form images of the road surface as viewed from the front and rear (hereinafter, map images). These map images contain no road markings painted on the road surface.

The differences between the front image P(t1) and rear image P(t2) and their respective map images are then taken, reducing P(t1) and P(t2) to images of essentially only the obstacles and the road markings.

Further, the brightness differences between the front image P(t1) and rear image P(t2) thus reduced to obstacle-and-road-marking images are taken, either directly or after applying the required coordinate transformation; the road markings drop out of those differences, so the obstacles are identified easily and accurately.
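The claim-3 variant above amounts to two differencing stages: subtracting each view's map image leaves obstacles plus markings, and the cross-view difference then cancels the markings seen in both views. A minimal sketch with synthetic arrays; the threshold and pixel values are purely illustrative:

```python
import numpy as np

def isolate_obstacles(front, rear, map_front, map_rear, thresh=20):
    """Two-stage differencing: subtracting the map image from each view
    leaves obstacles and painted markings; the symmetric difference of
    the two reduced views then cancels markings (present in both) and
    keeps obstacles (each present in only one view)."""
    f = np.abs(front.astype(int) - map_front.astype(int)) > thresh
    r = np.abs(rear.astype(int) - map_rear.astype(int)) > thresh
    return f ^ r                  # markings drop out, obstacles remain

map_img = np.full((4, 4), 90)                 # markings-free map image
front = map_img.copy(); front[0, 0] = 255     # obstacle ahead only
rear = map_img.copy();  rear[3, 3] = 255      # obstacle behind only
front[2, 2] = 180; rear[2, 2] = 180           # road marking, seen in both views
mask = isolate_obstacles(front, rear, map_img, map_img)
```

The marking pixel survives each per-view subtraction but cancels in the cross-view step, which is the easy-and-accurate identification the text describes.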

In this case as well, the photographing directions may be two directions other than front and rear, or three or more directions.

The configurations and operations (processing procedures) of the photographing means 3, the image processing means 4, and so on are not limited to those of the above embodiment.

The present invention can be applied to obstacle recognition for a wide variety of vehicles.

FIG. 1 is a block diagram of one embodiment of the present invention.
FIG. 2 is an explanatory diagram of a configuration example of the imaging means of FIG. 1.
FIG. 3 is a flowchart for explaining the operation of FIG. 1.
FIG. 4 is an explanatory diagram of a processing example of the captured images of FIG. 1.
FIG. 5 is an explanatory diagram of the identification result of FIG. 4.
FIG. 6 is an explanatory diagram of the projective transformation.
FIG. 7 is an explanatory diagram of a photographing example in which only the front is photographed for recognition.
FIG. 8 is an explanatory diagram of the projective transformation of the photographing result of FIG. 7.
FIG. 9 is an explanatory diagram of a processing example of the captured images when the obstacle of FIG. 7 is absent.
FIG. 10 is an explanatory diagram of a processing example of the captured images when the obstacle of FIG. 7 is present.
FIG. 11 is an explanatory diagram of a configuration example of another imaging means of the present invention.

Explanation of symbols

1 Host vehicle
2 Obstacle recognition device
3 Photographing means
4 Image processing means
α, α′ Obstacles

Claims (3)

An obstacle recognition device comprising:
photographing means capable of photographing the road surface from a plurality of directions of the traveling host vehicle;
image acquisition means for acquiring captured images, taken by the photographing means from a plurality of different directions, of substantially the same road surface area;
identification means for extracting and identifying a predetermined monitoring range included in the captured images from the plurality of directions acquired by the image acquisition means; and
recognition means for recognizing, on the basis of the identification result of the identification means, image portions that differ between the captured images of the monitoring range as obstacles perpendicular to the road surface.
The obstacle recognition device according to claim 1, wherein the identification means, in addition to extracting the monitoring range, performs a projective transformation to identify the monitoring range of each captured image.
The obstacle recognition device according to claim 1, further comprising road surface information acquisition means for acquiring road surface information such as the road-surface shape of the travel path of the host vehicle, wherein the identification means, in addition to extracting the monitoring range, performs image processing based on the road surface information to identify the monitoring range of each captured image.
JP2007186549A 2007-07-18 2007-07-18 Obstacle recognition device Expired - Fee Related JP4854619B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007186549A JP4854619B2 (en) 2007-07-18 2007-07-18 Obstacle recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007186549A JP4854619B2 (en) 2007-07-18 2007-07-18 Obstacle recognition device

Publications (2)

Publication Number Publication Date
JP2009025932A true JP2009025932A (en) 2009-02-05
JP4854619B2 JP4854619B2 (en) 2012-01-18

Family

ID=40397701

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007186549A Expired - Fee Related JP4854619B2 (en) 2007-07-18 2007-07-18 Obstacle recognition device

Country Status (1)

Country Link
JP (1) JP4854619B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012185540A (en) * 2011-03-03 2012-09-27 Honda Elesys Co Ltd Image processing device, image processing method, and image processing program
WO2022220770A3 (en) * 2021-04-16 2022-11-17 Mpg Maki̇ne Prodüksi̇yon Grubu Maki̇ne İmalat Sanayi̇ Ve Ti̇caret Anoni̇m Şi̇rketi̇ A situational awareness and control system developed for vehicles

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03282707A (en) * 1990-03-30 1991-12-12 Mazda Motor Corp Environment recognition device for mobile vehicle
JPH05297941A (en) * 1992-04-21 1993-11-12 Daihatsu Motor Co Ltd Road shape detecting method
JPH10222679A (en) * 1997-02-04 1998-08-21 Toyota Motor Corp Vehicle image processor
JP2001167394A (en) * 1999-12-09 2001-06-22 Toyota Central Res & Dev Lab Inc Obstacle judging device



Also Published As

Publication number Publication date
JP4854619B2 (en) 2012-01-18


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100611

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110707

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110719

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110912

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20111018


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20111025

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20141104

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees