JP2001114047A - Vehicle-surrounding situation indication device - Google Patents

Vehicle-surrounding situation indication device

Info

Publication number
JP2001114047A
Authority
JP
Japan
Prior art keywords
road surface
image
road
vehicle
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP29777799A
Other languages
Japanese (ja)
Other versions
JP3301421B2 (en)
Inventor
Masamichi Nakagawa
雅通 中川
Shusaku Okamoto
修作 岡本
Kazuo Nobori
一生 登
Atsushi Morimura
森村  淳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to JP29777799A priority Critical patent/JP3301421B2/en
Publication of JP2001114047A publication Critical patent/JP2001114047A/en
Application granted granted Critical
Publication of JP3301421B2 publication Critical patent/JP3301421B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Abstract

PROBLEM TO BE SOLVED: To show the areas of objects that have height above the road-surface plane by composing images observed from desired viewpoints, using the images of an on-vehicle camera. SOLUTION: Images photographed at different positions are projected onto the road-surface plane by a road-surface projecting means 105, and a road-surface-outside area extracting means 106 determines where the overlaid projections differ, thereby extracting the areas of objects not on the road surface. The outlines of these areas are further classified into outlines originating from the objects themselves and outlines generated by projection distortion. A road-surface-outside area indicator composing means 107 adds these outlines to an image, converted from the projected images, observed from a desired viewpoint, so that the surroundings of the vehicle are presented to the driver in an easily understandable manner.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

TECHNICAL FIELD OF THE INVENTION

The present invention relates to a device that displays the situation around a vehicle in a form the driver can readily understand, so that the driver can easily grasp the surroundings and perform accurate, safe driving operations.

[0002]

DESCRIPTION OF THE RELATED ART

A typical conventional vehicle-surroundings monitoring apparatus photographs the surroundings of the vehicle with one or several cameras, converts the images, and displays the situation around the vehicle as a single image. A first conventional example is disclosed in Japanese Patent Laid-Open No. 3-99952. In that apparatus, a plurality of cameras are installed on a vehicle and their images are mapped by perspective transformation onto a planar coordinate system such as the road surface. If the viewpoint is then placed above the center of the vehicle looking downward, the driver can see at a glance the relationship between the vehicle and its surroundings.

[0003]

FIG. 2 is a block diagram showing an embodiment of a conventional vehicle-surroundings monitoring apparatus. Images from cameras 1 to N (201) are input to an image conversion unit 202, transformed to other coordinates by perspective transformation, and combined into a single image by an image display unit 203, which is displayed on a TV monitor 204 installed at the driver's seat. The image display unit 203 can also, for example, shift the displayed position of the own vehicle away from the center of the screen according to signals for the gear position, vehicle speed, and turn-signal operation, so as to widen the displayed region of the surroundings the driver wants to see.

[0004]

A second conventional example, which develops this further, is disclosed in Japanese Patent Laid-Open No. 7-186833. In that invention, when the surroundings are presented to the driver, the road-surface portion and the remaining portion are distinguished in advance. The road-surface portion is converted by coordinate transformation into the image that would be observed from a viewpoint above the vehicle center looking downward, while the portions off the road surface are taken from the camera video as-is, resized appropriately, and superimposed at the appropriate locations on the converted image. This accurately informs the driver of obstacles around the vehicle, in particular other vehicles approaching from behind.

[0005]

PROBLEMS TO BE SOLVED BY THE INVENTION

However, the conventional vehicle-surroundings monitoring apparatus described above assumes that every object in the image lies on the road surface and transforms the image into a coordinate system referenced to the road. Features that adhere to the road surface and have no height component, such as white lines, arrows and characters painted on the road, and pedestrian crossings, are therefore projected accurately to their positions on the road surface. Conversely, for an object with height above the road surface, such as a vehicle or a person, the height component is converted into a depth component along the camera's line of sight, so the taller the object, the larger the distortion of its projected image. Consequently, when the image projected onto the road surface is viewed from above looking down, objects with height appear greatly distorted, and it can be difficult to relate them to the real objects.

[0006]

FIG. 3(a) shows how the projected image of an object that is not on the road surface is distorted. When an image of the rectangular parallelepiped in FIG. 3(a), taken by the camera, is projected onto the road surface, the projection spreads out from the parallelepiped on the side away from the camera. This projected image is shown in gray in the figure.

[0007]

The second conventional example, which aims to avoid these problems, assumes that the road surface has a uniform color and extracts the portions of that specific color as road surface, thereby separating the road-surface region from the rest; objects off the road surface are cut out as they appear in the original camera image and pasted into the composite image. In practice, however, the road-surface region is not a single color but a texture containing various patterns, so separating road surface from non-road-surface by color alone is difficult.

[0008]

Moreover, because a cut-out off-road object is an image from the original camera's viewpoint, it does not match a road-surface image synthesized for a different viewpoint, such as one looking down from above: since the viewpoints differ, the perspective directions of the images disagree, and grasping the situation at a glance can be difficult. This is illustrated in the figures. FIG. 3(b) shows an image of cars on a road taken by an on-vehicle camera; objects appear smaller with distance, the right side of the car on the left of the frame is visible, and the left side of the car on the right is visible. FIG. 3(c) is this image converted into a view looking straight down at the road surface from above; because the view is straight down, the white lines of the road become straight parallel lines. The car regions of FIG. 3(b) are cut out, scaled to match their positions, and composited onto this road surface. Scaling alone, however, leaves one side of each car visible at an angle, so the car's orientation does not match the overhead viewpoint: a car that should be parallel to the white lines of the road appears to point in the direction of the arrow in FIG. 3(c).

[0009]

SUMMARY OF THE INVENTION

The present invention solves these problems. Its object is to provide a vehicle-surrounding situation indication device that, when presenting to the driver the surroundings captured by a camera mounted on the vehicle, presents the regions other than the road surface, where distortion arises, in an easily understandable form.

[0010]

MEANS FOR SOLVING THE PROBLEMS

To solve the above problems, the vehicle-surrounding situation indication device of the present invention has the following configuration. The basic configuration according to claim 1 comprises: image input means for photographing and inputting the situation around the vehicle; road-surface projecting means for creating a road-surface projection image by projecting the image from the image input means onto the road surface, using camera parameters that describe the position, direction, and characteristics of the camera that captured it; road-surface-outside area extracting means for extracting the regions of objects other than the road surface from the road-surface projection image; and road-surface indicator composing means for creating, from the extracted off-road regions, an indicator that distinguishes objects other than the road surface and compositing it onto the road-surface projection image.

[0011]

In the vehicle-surrounding situation indication device according to claim 1, a road-surface projection image is generated from images of the vehicle's surroundings, and the regions of objects other than the road surface are extracted. By compositing an indicator onto the road-surface projection image so that the extracted off-road regions are clearly distinguished from the other regions, the device can present the driver with an image of the surroundings seen from an arbitrary viewpoint while making objects other than the road surface easy to identify.

[0012]

In the vehicle-surrounding situation indication device according to claim 2, projection images are created by projecting image data photographed at different positions onto the road surface, and the regions of objects other than the road surface are extracted by detecting the differences between those projection images.

[0013]

In the vehicle-surrounding situation indication device according to claim 9, it is determined whether an extracted off-road region is the region of an actual object or one produced by projection distortion, and the outline of the off-road region is composited onto the road-surface projection image according to the result. The driver is thus presented with an image of the surroundings seen from an arbitrary viewpoint in which objects other than the road surface are easy to identify.

[0014]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

An embodiment of the present invention is described below with reference to the drawings.

[0015]

FIG. 1 is a block diagram showing an example of the basic configuration of the device according to claim 1. The vehicle-surrounding situation indication device of the present invention basically comprises: image input means 101 that photographs the vehicle's surroundings with a camera installed on the vehicle; camera parameters 102, which are information such as the position, direction, and angle of view at the time the input image was captured; image processing means 103 that performs predetermined image processing on the input image; and display means 104 that displays the processed image of the surroundings.

[0016]

The image processing means 103 consists of the following functional blocks: road-surface projecting means 105 that creates a road-surface projection image by projecting the input image onto the road surface; road-surface-outside area extracting means 106 that extracts the regions of objects other than the road surface from the projection image; and road-surface indicator composing means 107 that creates, from the extracted off-road regions, an indicator distinguishing objects other than the road surface and composites it onto the road-surface projection image.

[0017]

The processing of the present invention is described next, component by component.

[0018]

The image input means 101 photographs the surroundings with a camera mounted on the vehicle. To obtain images taken from different positions, several cameras may be installed at different positions on the vehicle, a single camera may photograph at different times while the vehicle moves, or both.

[0019]

The camera parameters 102 are data such as the focal length of the camera and its position and direction at the time of shooting, corresponding to each image captured by the image input means 101. They are explained with reference to FIG. 4, a conceptual diagram showing the positional relationship between the vehicle, the road surface, and the camera.

[0020]

The relationship between the camera coordinate system Xe-Ye-Ze, whose origin Oe is the lens center of the camera and whose Z axis is the optical-axis direction, and the world coordinate system Xw-Yw-Zw, taken with respect to the vehicle so that its origin Ow lies on the road-surface plane on which the vehicle rests and the road-surface plane coincides with the Xw-Yw plane, is expressed with a rotation matrix R and a translation vector T as in (Equation 1).

[0021]

A point (xe, ye, ze) in the camera coordinate system and the coordinates (u, v) of its corresponding image point are related through the focal length f of the camera as in (Equation 2).

[0022]

(Equation 1)  (xe, ye, ze)^T = R (Xw, Yw, Zw)^T + T

[0023]

(Equation 2)  u = f · xe / ze,   v = f · ye / ze
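As a concrete illustration, Equations 1 and 2 can be sketched in a few lines of code. The pose values below (lens centre 1.5 m above the road, optical axis pointing straight down, f = 500 pixels) are hypothetical examples chosen for the sketch, not values from the patent:

```python
import numpy as np

def world_to_image(Pw, R, T, f):
    """Map a world point (Xw, Yw, Zw) to image coordinates (u, v).

    Equation 1: Pe = R @ Pw + T          (world -> camera coordinates)
    Equation 2: u = f*xe/ze, v = f*ye/ze (pinhole projection)
    """
    xe, ye, ze = R @ np.asarray(Pw, dtype=float) + T
    return f * xe / ze, f * ye / ze

# Hypothetical pose: camera 1.5 m above the world origin, looking
# straight down at the road plane Zw = 0.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
T = np.array([0.0, 0.0, 1.5])
f = 500.0

u, v = world_to_image((0.3, 0.0, 0.0), R, T, f)  # road point 0.3 m from the origin
# u = 100.0, v = 0.0
```

For a road point the depth ze is simply the camera height, so u grows linearly with the point's Xw offset, as expected from the pinhole model.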

[0024]

When the vehicle does not move, the origin Ow of the world coordinate system does not move, so the relationship among multiple cameras can be expressed by an R and T for each camera. When the vehicle moves and the same camera shoots from a different position, the origin of the world coordinate system moves with the vehicle, so the relationship between the image data before and after the motion is expressed as the relationship between the two world coordinate systems. Let Pw(Xw, Yw, Zw) be a point in the world coordinate system before the motion and Pw'(Xw', Yw', Zw') the corresponding point after it; their relationship is expressed with a translation vector Tm and a rotation matrix Rm. If all points are assumed to lie on the road surface and the vehicle is assumed to move on that same surface, then Zw and Zw' are 0, the rotation matrix Rm reduces to a 2-by-2 matrix representing rotation about the Zw axis, and the translation vector Tm reduces to a two-dimensional vector of motion in the Xw and Yw directions. This relationship is shown in (Equation 3).

[0025]

(Equation 3)  (Xw', Yw')^T = Rm (Xw, Yw)^T + Tm,   with Zw = Zw' = 0

[0026]

Rm and Tm can be obtained, for example, from vehicle-motion information supplied by sensors, such as the vehicle speed, steering angle, and rotation counts of the left and right tires. A satellite positioning system (GPS) can also be used.

[0027]

In summary, the focal length f, the rotation matrix R and translation vector T of the camera coordinate system, and the rotation matrix Rm and translation vector Tm between world coordinate systems are the camera parameters 102, which represent the camera's position, direction, angle of view, and so on.

[0028]

Next, the projection of the image data onto the road-surface projection image by the road-surface projecting means 105 is explained with reference to FIG. 5. FIG. 5(a) is a schematic diagram showing a camera on a vehicle photographing a rectangular parallelepiped on the road surface.

[0029]

If every object appearing in the camera image is assumed to lie on the road surface (the Xw-Yw plane), the relationship between a point Pw(Xw, Yw, 0) on an object and the corresponding image point Pv(u, v) is expressed, using (Equation 1) and (Equation 2), as (Equation 4).

[0030]

(Equation 4)  (Xw, Yw, 0)^T = R^-1 (ze · (u/f, v/f, 1)^T − T), with ze determined by the constraint Zw = 0 (a planar projective mapping between (u, v) and (Xw, Yw))

[0031]

Using (Equation 4), each pixel (u, v) of the image captured by the camera in FIG. 5(a) is projected onto the road-surface Xw-Yw plane. The Xw-Yw plane is also represented as a two-dimensional array, and a projection image is obtained by assigning to each array element the camera-image pixel that corresponds to it by (Equation 4). FIG. 5(b) shows the projection image converted in this way. The hatched area at the top of FIG. 5(b) is outside the camera's field of view, where the camera image has no data; such areas are filled with data marking them as out of range. The black dot indicates the camera position. The rectangular parallelepiped, which is not entirely on the road surface, is projected with distortion because the projection assumes everything lies on the road.

[0032]

As one technique for extracting the regions of objects other than the road surface, such as the parallelepiped, the road-surface-outside area extracting means 106 finds the regions where the road-surface projections of images photographed from different positions differ. The processing is explained with figures. FIG. 6(a) is the projection onto the road surface of an image taken at camera position P1, and FIG. 6(b) is the projection for camera position P2. The drawing of the own vehicle in the figures is added only to make the positional relationships easier to understand and is not part of the actual projection images.

[0033]

Next, the projection images of FIGS. 6(a) and (b) are superimposed at the same position on the road-surface plane using (Equation 3). As (Equation 3) shows, this transformation is a two-dimensional translation and rotation, so the alignment can be performed as two-dimensional image processing. FIG. 6(c) shows FIG. 6(a) rotated, translated, and overlaid with FIG. 6(b) as the reference. Features on the road surface, such as white lines, coincide without displacement, but the projections of objects off the road surface are displaced because the shape of the distortion differs with camera position. If the difference between the two projections is then computed, it is zero wherever both show road surface, since both are imaging the same thing. In the region of an off-road object in one projection, however, the other projection shows either road surface or, even if it also shows an off-road region, a differently distorted part of the object, so the difference is not zero. For example, if the difference in luminance is computed for each pixel, it is nearly zero where both projections show road surface, while differences arise in the region of the parallelepiped. FIG. 6(d) shows this difference: the white lines and the like cancel to zero and disappear, and only the part of the parallelepiped that is not on the road surface remains with nonzero values. Even within the region of an off-road object, the colors of different parts may happen to coincide and produce interior areas where the difference is zero, but if the target objects are assumed to contain no holes, the outline of the region can be extracted by filling in such small interior areas. The region outside the road surface has thereby been extracted.

[0034]

The difference can be computed not only from luminance but also from differences in a color space, or from differences between local texture patterns around each pixel rather than single-pixel comparisons.

[0035]

When the relationship between the two shooting positions in (Equation 3) is derived from the vehicle speed, steering angle, left and right wheel rotation counts, GPS position data, and the like, the positional relationship may not be accurate enough, and the superposition may be misaligned; differences then arise even in road-surface regions. In that case, after overlaying the projections with the rotation matrix Rm and translation vector Tm of (Equation 3), the overlap is shifted around that position by small displacements and rotation angles, the difference described above is computed each time, and the position minimizing the difference over the whole image is adopted. This makes it possible to extract off-road regions even when Rm and Tm are available only with poor accuracy. For example, the absolute value of the luminance difference is taken at each pixel, and the position minimizing the sum of these absolute values over all pixels of the image is found.

[0036]

The description so far has dealt with moving the vehicle and shooting with the same camera, but regions other than the road surface can be extracted by the same procedure when images are taken by several cameras at different positions on the vehicle.

[0037]

From the region of FIG. 6(d) extracted by the road-surface-outside area extracting means 106, the road-surface-outside area indicator composing means 107 composites an indicator in a form easy for the driver to understand onto the road-surface projection image. As an example, the outline of the off-road region of FIG. 6(d) is obtained by edge processing or the like and drawn in red or another color on the road-surface projection image of FIG. 6(b); the result is the image shown in FIG. 7(a). This projection image is presented to the driver on the display means 104. In FIG. 7(a), an illustration of the vehicle is overlaid at the vehicle's own position so that the positional relationships are easier to grasp. Besides the outline, the parts of the projection image corresponding to the off-road region can also be composited with a specific color, hatching, and so on.

[0038]

Displaying the outlines of objects other than the road surface emphasizes the regions with which the vehicle might collide during driving, making them easy to recognize at a glance. Also, as shown in FIG. 7(b), displaying only the outlines within a fixed distance of the vehicle makes nearby objects needing the most attention easier to grasp. It is also possible to change the outline color with distance, to blink nearby outlines, and so on.

[0039]

FIG. 7(a) displays the off-road region including the distortion due to projection. If the parts of it that are likely to be the true outline of an object can be identified, the display becomes still more useful for driving decisions. A method is therefore described next that extracts the parts likely to be the true outline by assuming that objects off the road surface are bounded by surfaces perpendicular to the road.

[0040] FIG. 8(a) shows the projection of a rectangular parallelepiped onto the road surface. When the image of the dotted parallelepiped 801 captured from viewpoint 802 is projected onto the road surface, the projected image 803 takes a shape distorted according to the distance of each point of the parallelepiped from the viewpoint, its height, and so on. The area the parallelepiped actually occupies on the road surface is the dotted region 801. Similarly, FIG. 8(b) shows the projected image 805 obtained by projecting the image of the same parallelepiped captured from viewpoint 804. FIG. 8(c) shows the off-road area extracted from the difference between FIGS. 8(a) and 8(b). The contour of this area is classified as front or back with respect to viewpoints 802 and 804. The set of points directly visible from a viewpoint, in other words the points for which the straight line connecting the viewpoint to the contour point crosses no other contour, is taken as the front contour, and the rest as the back contour. For viewpoint 802, for example, the partial contours 806 and 807 delimited by the white circles are front contours, while the partial contours 808 and 809 are back contours. Contour 810, to which the straight line from the viewpoint is tangent, is classified as a circumscribed contour. Similarly, for viewpoint 804 the front contours are 806, 809, and 810, the back contour is 808, and the circumscribed contour is 807.

[0041] Here, the true contour of the object is contained in the front contours. In FIG. 8(a), for example, the true contour corresponds to the part 806; likewise it is 806 in FIG. 8(b), and in FIG. 8(c) it consists of two parts, one of which is hidden inside the region. The reason is that a surface perpendicular to the road is always distorted so as to stretch away from the viewpoint, so a contour visible from the viewpoint coincides with the true contour. Further, as FIGS. 8(a) and 8(b) show, a circumscribed contour is very likely the stretched image of a vertical edge whose lower end is the end point nearest the viewpoint. The height of that part can therefore be computed from the length of the circumscribed contour.
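The height computation can be made concrete under a pinhole camera over a flat road (the formula below follows from similar triangles and is not stated in the patent): a vertical edge of height h whose base is at ground distance d from a camera at height H projects onto the road as a radial segment of length L = d·h/(H − h), which inverts to h = H·L/(L + d).

```python
def edge_height(cam_height, base_dist, stretch_len):
    """Height of a vertical edge recovered from its road-plane
    projection.  An edge of height h with base at ground distance d
    from a camera at height H projects onto the road as a radial
    segment from d to d*H/(H - h), of length L = d*h/(H - h);
    inverting gives h = H*L/(L + d)."""
    return cam_height * stretch_len / (stretch_len + base_dist)
```

For example, with the camera 2 m above the road, a 4 m stretch whose near end is 4 m away corresponds to an edge 1 m tall.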

[0042] What is actually obtained as a region is not FIG. 8(a) or 8(b) individually but their union, FIG. 8(c). With the front/back classification from a single viewpoint alone, a region that happens to be the distorted image from the other viewpoint may therefore be classified as front; contour 809 with respect to viewpoint 804 is such a case. Accordingly, the parts that are front contours for both viewpoints are extracted as the true contour of the object; in FIG. 8(c), 806 is extracted as the true contour. The circumscribed contours 807 and 810 are likewise very likely vertical edges, so the height at those points is computed from their lengths.

[0043] FIG. 9(a) shows the result of extracting and displaying, by the above method, only the parts considered to be true contours among the contours drawn with thick lines in FIG. 7(a). By displaying the off-road areas on the image from the desired viewpoint with the contours caused by distortion removed, the parts requiring attention during driving become clear, and comparing the displayed contour with the position of the own vehicle makes it easier to grasp the distance to the object. If the shape of the target object is known in advance, the shape of the object can also be overwritten at that position by matching it against the true contour part.

[0044] By obtaining the height from the circumscribed contours in this way, the approximate height of an object can also be indicated on the projected image. FIG. 9(b) is an example in which the height of a region is displayed at the parts where the height can most likely be obtained.

[0045] FIG. 10 shows the projected image of a columnar object such as a pole. Here too, by performing the contour classification described above, the base part where the pole stands can be extracted as a front contour, and height information can be displayed from the circumscribed contour.

[0046] The above method of obtaining the true contour part from the contour of an off-road area can also be applied to an off-road area obtained from a single image, as in Conventional Example 2.

[0047]

[Effects of the Invention] As described above, according to the invention of claim 1, when an image from an arbitrary viewpoint is synthesized using the images from cameras mounted on the vehicle, the road surface index synthesizing means combines an index distinguishing objects other than the road surface with the road surface projection image, so that objects other than the road surface can be presented to the driver in an easily recognizable form.

[0048] Further, by using the off-road area extraction method of the invention according to claim 9, the following problem of the conventional plane-based method is addressed: because a height component is converted into a depth component along the camera's line of sight, an object with height above the road surface is greatly distorted when projected onto the road. From the distorted projected image of such an object, the parts corresponding to its true contour can now be extracted and displayed in an easily understandable form.

[0049] Further, according to the invention of claim 11, the height of an object can be computed for certain parts and combined with the road surface projection image.

[0050] The driver can therefore recognize the surrounding situation, such as the positions of objects off the road surface, far more easily than when only the composite road surface projection image is presented, and more accurate driving operations can be expected.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing a basic configuration example of the vehicle surrounding situation presentation device according to claim 1 of the present invention.

FIG. 2 is a block diagram showing a configuration example of a conventional vehicle surroundings monitoring device.

FIG. 3(a) is an explanatory diagram of the distortion of the projected image of an object outside the road surface; FIGS. 3(b) and 3(c) are explanatory diagrams of the paste-in composition of off-road objects by a conventional vehicle surroundings monitoring device.

FIG. 4 is a diagram showing the relationship between the camera coordinate system, the world coordinate system, the road surface, and the camera parameters.

FIG. 5 is a diagram showing an example of the road surface projection of a rectangular parallelepiped.

FIG. 6 is a diagram showing the processing flow of the off-road area extraction means 106.

FIG. 7 is a diagram showing a method of displaying off-road areas.

FIG. 8 is a diagram showing the classification of the contours of an off-road area.

FIG. 9 is a diagram showing an example of displaying true contours using the contour classification.

FIG. 10 is a diagram showing an example of displaying a columnar object using the contour classification.

[Explanation of Symbols]

101 image input means; 102 camera parameters; 103 image processing means; 104 display means; 105 road surface projection means; 106 off-road area extraction means; 107 off-road area index synthesizing means


Claims (11)

[Claims]

1. A vehicle surrounding situation presentation device comprising: image input means for capturing and inputting images of the situation around a vehicle; road surface projection means for creating a road surface projection image by projecting and transforming the images from the image input means onto the road surface using camera parameters indicating the position, orientation, and characteristics of the cameras that captured the images; off-road area extraction means for extracting areas of objects other than the road surface from the road surface projection image; and road surface index synthesizing means for creating, from the extracted off-road areas, an index that distinguishes objects other than the road surface and combining the index with the road surface projection image.

2. The vehicle surrounding situation presentation device according to claim 1, wherein the off-road area extraction means extracts the off-road areas by superimposing road surface projection images obtained by projecting images captured at different positions onto the road surface.

3. The vehicle surrounding situation presentation device according to claim 2, wherein the off-road area extraction means superimposes the road surface projection images using information on the position of the vehicle.

4. The vehicle surrounding situation presentation device according to claim 3, wherein the off-road area extraction means obtains the information on the position of the vehicle from at least one of a vehicle speed sensor, a steering angle sensor, tire rotation counts, and satellite positioning.

5. The vehicle surrounding situation presentation device according to claim 2, wherein the off-road area extraction means superimposes the road surface projection images at the position where their difference is minimized, by two-dimensionally rotating and translating the projection images.

6. The vehicle surrounding situation presentation device according to claim 2, wherein the off-road area extraction means superimposes the road surface projection images using the information on the position of the vehicle and then corrects the alignment to the position where the difference is minimized, by two-dimensionally rotating and translating the projection images.

7. The vehicle surrounding situation presentation device according to claim 1, wherein the off-road index synthesizing means generates a contour of the off-road area and combines it with the road surface projection image as the index.

8. The vehicle surrounding situation presentation device according to claim 1, wherein the off-road index synthesizing means combines the off-road area with the road surface projection image in a representation that differs according to the distance from the shooting position.

9. The vehicle surrounding situation presentation device according to claim 7, wherein the off-road index synthesizing means classifies the contour into parts caused by distortion and parts of the object contour, and combines the contour of the off-road area with the road surface projection image based on the classification result.

10. The vehicle surrounding situation presentation device according to claim 9, wherein the off-road index synthesizing means classifies the contour of the off-road area into parts caused by distortion and object contours according to the relationship with the camera position, and combines them with the road surface projection image.

11. The vehicle surrounding situation presentation device according to claim 7, wherein the off-road index synthesizing means obtains the height of a part of an object from the contour of the off-road area and combines it with the road surface projection image.
JP29777799A 1999-10-20 1999-10-20 Vehicle surrounding situation presentation device Expired - Lifetime JP3301421B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP29777799A JP3301421B2 (en) 1999-10-20 1999-10-20 Vehicle surrounding situation presentation device


Publications (2)

Publication Number Publication Date
JP2001114047A true JP2001114047A (en) 2001-04-24
JP3301421B2 JP3301421B2 (en) 2002-07-15

Family

ID=17851053

Family Applications (1)

Application Number Title Priority Date Filing Date
JP29777799A Expired - Lifetime JP3301421B2 (en) 1999-10-20 1999-10-20 Vehicle surrounding situation presentation device

Country Status (1)

Country Link
JP (1) JP3301421B2 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002359839A (en) * 2001-03-29 2002-12-13 Matsushita Electric Ind Co Ltd Method and device for displaying image of rearview camera
JP2002373327A (en) * 2001-06-13 2002-12-26 Denso Corp Apparatus for processing image around vehicle and recording medium
JP2003030627A (en) * 2001-07-16 2003-01-31 Denso Corp Vehicle peripheral image processor and recording medium
JP2003104122A (en) * 2001-09-28 2003-04-09 Clarion Co Ltd On-vehicle information device
JP2003132349A (en) * 2001-10-24 2003-05-09 Matsushita Electric Ind Co Ltd Drawing device
WO2003107273A1 (en) * 2002-06-12 2003-12-24 松下電器産業株式会社 Drive assisting system
JP2005347863A (en) * 2004-05-31 2005-12-15 Nissan Motor Co Ltd Device and method for displaying periphery of car
JP2006253872A (en) * 2005-03-09 2006-09-21 Toshiba Corp Apparatus and method for displaying vehicle perimeter image
JP2007195061A (en) * 2006-01-20 2007-08-02 Toyota Motor Corp Image processor
JP2007235642A (en) * 2006-03-02 2007-09-13 Hitachi Ltd Obstruction detecting system
JP2008085710A (en) * 2006-09-28 2008-04-10 Sanyo Electric Co Ltd Driving support system
JP2008205914A (en) * 2007-02-21 2008-09-04 Alpine Electronics Inc Image processor
US7432799B2 (en) 2004-11-09 2008-10-07 Alpine Electronics, Inc. Driving support apparatus and driving support method
WO2009131152A1 (en) * 2008-04-23 2009-10-29 コニカミノルタホールディングス株式会社 Three-dimensional image processing camera and three-dimensional image processing system
WO2010044127A1 (en) * 2008-10-16 2010-04-22 三菱電機株式会社 Device for detecting height of obstacle outside vehicle
US8294563B2 (en) 2009-08-03 2012-10-23 Alpine Electronics, Inc. Vehicle-surrounding image display apparatus and vehicle-surrounding image display method
WO2012144053A1 (en) * 2011-04-21 2012-10-26 トヨタ自動車株式会社 Vehicle periphery obstacle display device and vehicle periphery obstacle display method
US10744941B2 (en) 2017-10-12 2020-08-18 Magna Electronics Inc. Vehicle vision system with bird's eye view display


Also Published As

Publication number Publication date
JP3301421B2 (en) 2002-07-15


Legal Events

Date Code Title Description
TRDD Decision of grant or rejection written
R151 Written notification of patent or utility model registration

Ref document number: 3301421

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080426

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090426

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100426

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110426

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120426

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130426

Year of fee payment: 11

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140426

Year of fee payment: 12

EXPY Cancellation because of completion of term