JPH07186833A - Surrounding state displaying device for vehicle - Google Patents

Surrounding state displaying device for vehicle

Info

Publication number
JPH07186833A
JPH07186833A JP5355345A JP35534593A
Authority
JP
Japan
Prior art keywords
image
road surface
surface area
vehicle
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP5355345A
Other languages
Japanese (ja)
Other versions
JP3381351B2 (en)
Inventor
Kazunori Noso
千典 農宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Priority to JP35534593A priority Critical patent/JP3381351B2/en
Publication of JPH07186833A publication Critical patent/JPH07186833A/en
Application granted granted Critical
Publication of JP3381351B2 publication Critical patent/JP3381351B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

PURPOSE: To inform the driver of the surrounding state more accurately, so that proper decisions can be made during driving, by displaying images of objects that have height, such as a preceding vehicle or an obstacle, without distortion. CONSTITUTION: An image of the vehicle's surroundings is input to an image input part 101. The input image is separated into a road region and a non-road region by a road detecting part 104. Only the road region is coordinate-transformed by a coordinate converting part 106; the non-road region is not coordinate-transformed but is translated and enlarged/reduced by an image cut-out part 105. The two images are combined by a synthesizing part 107, and by displaying the result on a display part 103, the input image of the non-road region, such as an obstacle or a preceding vehicle, is recognized in a natural state.

Description

【発明の詳細な説明】Detailed Description of the Invention

【0001】[0001]

【産業上の利用分野】この発明は,先行車両や障害物等
の高さのある物体画像も歪みなく表示可能として車両の
周囲状況を的確に検出し,該周囲状況を運転者に対して
より正確に表示する車両用周囲状況表示装置に関する。
BACKGROUND OF THE INVENTION 1. Field of the Invention: The present invention relates to a vehicle surroundings display device that can display images of objects having height, such as preceding vehicles and obstacles, without distortion, thereby detecting the situation around the vehicle accurately and presenting it to the driver more accurately.

【0002】[0002]

【従来の技術】従来における車両用周囲状況表示装置と
して,例えば,特開平3−99952号公報に開示され
ている「車両用周囲状況モニタ」がある。これは,車両
に設置された複数台のカメラ画像を,逆射影変換によっ
てあたかも真上からみた画像に変換し,複数画像を合成
しながら表示し,自車両と周囲環境との位置関係を運転
者に対して充分認識できるようにしたものである。
2. Description of the Related Art: A conventional vehicle surroundings display device is, for example, the "vehicle surroundings monitor" disclosed in Japanese Patent Laid-Open No. 3-99952. It converts the images from a plurality of cameras mounted on the vehicle, by inverse projective transformation, into images as if seen from directly above, and displays them while combining the multiple images, so that the driver can fully recognize the positional relationship between the host vehicle and the surrounding environment.

【0003】[0003]

【発明が解決しようとする課題】しかしながら,上記に
示されるような従来における「車両用周囲状況モニタ」
にあっては,画像中の物体がすべて道路面上にあるもの
と仮定して座標変換を行うため,真に道路面上の物体,
例えば,白線や路面に描かれた矢印や横断歩道等の表示
については,前方距離が線形となるように変換されるの
で,距離の把握については容易となるが,先行車両や障
害物等の高さのある物体に対しては画像が大きく歪んで
表示されるため,表示画像上の物体と実際の物体とを対
応させることが容易ではないという問題点があった。
In the conventional "vehicle surroundings monitor" described above, however, the coordinate transformation is performed on the assumption that every object in the image lies on the road surface. For features that really are on the road surface, such as white lines, arrows painted on the road, and pedestrian crossings, the transformation makes the forward distance linear, so distances are easy to grasp; but objects that have height, such as a preceding vehicle or an obstacle, are displayed with large distortion, so it is not easy to associate an object on the displayed image with the actual object.

【0004】この発明は,上記に鑑みてなされたもので
あって,先行車両や障害物等の高さのある物体画像も歪
みなく表示し,運転者に対して運転時における判断を的
確に行えるように周囲状況をより正確に知らせることを
目的とする。
The present invention has been made in view of the above, and its object is to display images of objects having height, such as preceding vehicles and obstacles, without distortion, and thereby to inform the driver of the surrounding situation more accurately so that decisions during driving can be made properly.

【0005】[0005]

【課題を解決するための手段】この発明は,上記の目的
を達成するために,車両周囲の状況を撮像し,入力する
画像入力手段と,前記画像入力手段からの入力画像を路
面領域と非路面領域とに分離する路面領域検出手段と,
前記画像入力手段からの入力画像を座標変換する座標変
換手段と,前記路面領域検出手段により分離された非路
面領域の画像を切り出す非路面領域抽出手段と,前記座
標変換手段により座標変換された画像と前記非路面領域
抽出手段により切り出された画像とを合成する画像合成
手段と,前記画像合成手段による合成画像を表示する画
像表示手段とを具備する車両用周囲状況表示装置を提供
するものである。
SUMMARY OF THE INVENTION: To achieve the above object, the present invention provides a vehicle surroundings display device comprising: image input means for capturing and inputting the situation around the vehicle; road-surface area detecting means for separating the input image from the image input means into a road-surface area and a non-road-surface area; coordinate converting means for coordinate-transforming the input image from the image input means; non-road-surface area extracting means for cutting out the image of the non-road-surface area separated by the road-surface area detecting means; image synthesizing means for combining the image coordinate-transformed by the coordinate converting means with the image cut out by the non-road-surface area extracting means; and image display means for displaying the image combined by the image synthesizing means.

【0006】[0006]

【作用】この発明に係る車両用周囲状況表示装置は,画
像入力手段により車両周囲の撮像画像を入力し,該入力
画像を路面領域検出手段により路面領域と非路面領域と
に分離し,路面領域のみ座標変換し,非路面領域に対し
ては座標変換を行わず平行移動や拡大/縮小を行い,こ
れら2つの処理画像を画像合成手段により合成させて画
像表示手段に表示することにより,障害物や先行車両な
どの非路面領域の入力画像を自然な状態で認識させる。
In the vehicle surroundings display device according to the present invention, a captured image of the vehicle's surroundings is input by the image input means, and the input image is separated into a road-surface area and a non-road-surface area by the road-surface area detecting means. Only the road-surface area is coordinate-transformed; the non-road-surface area is not coordinate-transformed but is translated and enlarged/reduced. These two processed images are combined by the image synthesizing means and displayed on the image display means, so that the input image of the non-road-surface area, such as an obstacle or a preceding vehicle, is recognized in a natural state.

【0007】[0007]

【実施例】以下,この発明に係る車両用周囲状況表示装
置の一実施例を添付図面に基づいて説明する。図1は,
この発明に係る車両用周囲状況表示装置の概略構成を示
すブロック図であり,車両の周囲を撮像した画像を入力
する画像入力部101と,該入力画像に対して所定の画
像処理(検出,領域特定,座標変換,合成)を実行する
画像処理部102と,該画像処理された周囲状況の画像
情報を表示する表示部103とから大きく構成されてい
る。
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS: An embodiment of the vehicle surroundings display device according to the present invention is described below with reference to the accompanying drawings. FIG. 1 is a block diagram showing the schematic configuration of the device, which consists broadly of an image input unit 101 that inputs an image of the vehicle's surroundings, an image processing unit 102 that performs predetermined image processing (detection, region identification, coordinate transformation, and composition) on the input image, and a display unit 103 that displays the processed image of the surroundings.

【0008】具体的には,画像入力部101として,車
体前方に設置するカメラを用い,表示部103として,
運転席近傍に設置するTVモニタのような2次元ディス
プレイを用いる。
Specifically, a camera mounted at the front of the vehicle body is used as the image input unit 101, and a two-dimensional display such as a TV monitor installed near the driver's seat is used as the display unit 103.

【0009】また,画像処理部102は,以下の機能ブ
ロックにより構成されている。すなわち,画像処理部1
02は,画像入力部101を介して入力された画像を路
面領域と非路面領域とに分離する路面検出部104と,
該路面検出部104により分離された非路面領域におけ
る入力画像を切り出す画像切り出し部105と,入力画
像を座標変換する座標変換部106と,上記画像切り出
し部105からの切り出し画像と,上記座標変換部10
6による座標変換処理後の画像とを合成する合成部10
7とから構成されている。
The image processing unit 102 is composed of the following functional blocks: a road surface detection unit 104 that separates the image input through the image input unit 101 into a road-surface region and a non-road-surface region; an image cut-out unit 105 that cuts out the input image in the non-road-surface region separated by the road surface detection unit 104; a coordinate conversion unit 106 that coordinate-transforms the input image; and a synthesizing unit 107 that combines the image cut out by the image cut-out unit 105 with the image produced by the coordinate conversion by the coordinate conversion unit 106.

【0010】次に,動作について説明する。まず,画像
入力部101(カメラ)により車両周囲の画像が撮像さ
れ,画像処理部102に入力される。該画像情報入力
後,路面検出部104により路面領域と非路面領域とに
分割される。また,座標変換部106により入力画像が
逆射影変換されるが,この場合,非路面領域については
そのままの状態で表示処理を行う。なお,これらの処理
は画像切り出し部105および合成部107により実行
される。
Next, the operation is described. First, an image of the vehicle's surroundings is captured by the image input unit 101 (camera) and input to the image processing unit 102. After input, the road surface detection unit 104 divides the image into a road-surface region and a non-road-surface region. The coordinate conversion unit 106 applies an inverse projective transformation to the input image; the non-road-surface region, however, is displayed as it is. These processes are carried out by the image cut-out unit 105 and the synthesizing unit 107.

【0011】図2は,図1に示した車両用周囲状況表示
装置による入力画像および表示画像の例を示す説明図で
あり,図2(a)は入力画像,図2(b)は表示画像を
それぞれ示している。上記図2(b)において,画像下
部の左右三角形のハッチング部分は入力画像の存在しな
い部分であり,また,先行車両の上部ハッチング部分
は,先行車両によってその前方の画像が隠されているた
めに表示できない部分を示している。
FIG. 2 shows an example of an input image and a display image produced by the vehicle surroundings display device of FIG. 1; FIG. 2(a) shows the input image and FIG. 2(b) the display image. In FIG. 2(b), the hatched triangles at the lower left and right of the image are portions for which no input image exists, and the hatched portion above the preceding vehicle is a portion that cannot be displayed because the view in front of it is hidden by the preceding vehicle.

【0012】図3は,図1に示した画像処理部102に
よる画像処理動作を示すフローチャートであり,図4
は,上記図3の各処理に対応する表示画面である。な
お,ここでは,車両前方の画像を例にとって説明してい
るが,側方や後方あるいは後側方や前側方等の表示処理
についても全く同様である。
FIG. 3 is a flowchart showing the image processing operation of the image processing unit 102 shown in FIG. 1, and FIG. 4 shows the display screens corresponding to the steps of FIG. 3. The image in front of the vehicle is used as an example here, but the display processing for the sides, the rear, the rear sides, the front sides, and so on is exactly the same.

【0013】まず,全体的な処理の流れについて説明す
る。処理が開始されると,図4(a)に示すように,画
像入力部101を介してカラー画像の入力が実行され
(S301),該入力画像情報が座標変換部106によ
り座標変換され,俯瞰図画像となる(S302)。さら
に,画面下端部で色の認識を行い(S303),図4
(b)に示すように画面全体で同一色を抽出する(S3
04)。その後,図4(c)に示すように膨張および収
縮処理(後述)を実行し(S305),非路面領域に対
してラベリング処理を行い(S306),各領域毎に境
界を検出する(S307)。
First, the overall flow of processing is described. When processing starts, a color image is input through the image input unit 101 as shown in FIG. 4(a) (S301), and the input image is coordinate-transformed by the coordinate conversion unit 106 into a bird's-eye view image (S302). Next, the color at the lower edge of the screen is recognized (S303), and pixels of the same color are extracted over the entire screen as shown in FIG. 4(b) (S304). Then, as shown in FIG. 4(c), expansion and contraction processing (described later) is executed (S305), labeling is applied to the non-road-surface regions (S306), and the boundary of each region is detected (S307).

【0014】次に,図4(d)に示すように各領域毎に
直線検出を実行し(S308),図4(e)に示すよう
に,画面上の障害物領域を消去する(S309)。その
後,上記入力画像から画像の切り出し処理および拡大あ
るいは縮小処理を実行し(S310),合成部107に
より俯瞰図画像へ合成する(S311)。続いて,図4
(f)に示すように,適正車間距離マーカの描画処理を
実行し(S312),表示部103において,上記一連
の処理を経た画像情報を表示する(S313)。
Next, straight-line detection is executed for each region as shown in FIG. 4(d) (S308), and the obstacle regions on the screen are erased as shown in FIG. 4(e) (S309). Then, cut-out and enlargement or reduction processing is applied to the input image (S310), and the synthesizing unit 107 composites the result into the bird's-eye view image (S311). Finally, as shown in FIG. 4(f), a proper inter-vehicle distance marker is drawn (S312), and the display unit 103 displays the image that has passed through this series of processes (S313).

【0015】さらに,上記処理について詳述する。ま
ず,座標変換部106の処理動作について説明する。入
力画像をA(x,y)とし,座標変換によってB(i,
j)を得るものとして説明する。なお,理解を容易にす
るため,カメラ(画像入力部101)は路面に対して水
平方向に設置されているものとする。路面からのカメラ
の高さがH,レンズの焦点距離をFとすると,前方Zで
カメラの横手方向Xにある路面(水平と仮定)上の点
は,カメラ上では, x=F・X/Z y=F・H/Z ・・・(1) として撮像される。
The above processing is now described in detail, beginning with the operation of the coordinate conversion unit 106. Let the input image be A(x, y), and let B(i, j) be obtained from it by coordinate transformation. For ease of understanding, the camera (image input unit 101) is assumed to be mounted horizontally with respect to the road surface. With H the height of the camera above the road surface and F the focal length of the lens, a point on the (assumed horizontal) road surface at forward distance Z and lateral position X is imaged on the camera at x = F·X/Z, y = F·H/Z ... (1).

【0016】このとき,路面上の前方方向の座標Zと横
手方向の座標Xと,表示画像の座標i,jとをそれぞれ
対応させ, i=L・X j=M・Z ・・・(2) となるように表示するものとする。ただし,LとMとは
適切な比例定数である。また,前方や後方の表示では,
L>Mの関係であることが望ましい。これは,前方につ
いては100m位まで表示する必要があるのに対し,横
方向には数十m程度表示すればよいためである。
The forward coordinate Z and the lateral coordinate X on the road surface are associated with the display-image coordinates i, j so that i = L·X, j = M·Z ... (2), where L and M are appropriate proportionality constants. For forward and rearward display it is desirable that L > M, because the display must extend to about 100 m ahead, whereas several tens of meters suffice in the lateral direction.

【0017】上記(1)式と(2)式により, x=F・M・i/(L・j) y=F・H・M/j ・・・(3) となる変換を実行する。すなわち, B(i,j)=A(F・M・i/(L・j),F・H・
M/j) となる座標変換を実行することにより,入力画像に撮像
されている物体がすべて路面上の点であれば,真上から
見たような画像B(i,j)を得ることができるもので
ある。
From equations (1) and (2) above, the transformation x = F·M·i/(L·j), y = F·H·M/j ... (3) is executed; that is, by executing the coordinate transformation B(i, j) = A(F·M·i/(L·j), F·H·M/j), an image B(i, j) that looks as if seen from directly above can be obtained, provided every object captured in the input image is a point on the road surface.
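As an illustration, the transformation of Eq. (3) can be sketched as a per-pixel lookup. The constants F, H, L, M and the output size below are illustrative assumptions, not values from the patent, and only the half-plane i ≥ 0 is computed for brevity.

```python
import numpy as np

def inverse_perspective(A, F=200.0, H=1.2, L=4.0, M=2.0, out_shape=(120, 160)):
    """Bird's-eye view via Eq. (3): B(i, j) = A(F*M*i/(L*j), F*H*M/j).

    A is the camera image indexed A[y, x]; row j of B corresponds to the
    forward distance Z = j/M, column i to the lateral position X = i/L.
    """
    h_out, w_out = out_shape
    h_in, w_in = A.shape[:2]
    B = np.zeros(out_shape, dtype=A.dtype)
    for j in range(1, h_out):              # j = 0 would be infinite distance
        y = F * H * M / j                  # source row for display row j
        for i in range(w_out):
            x = F * M * i / (L * j)        # source column for display column i
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w_in and 0 <= yi < h_in:
                B[j, i] = A[yi, xi]
    return B
```

Because the source row y falls off as 1/j, a distant strip of the input is stretched over many display rows, which is why a small distant obstacle appears large in B(i, j), as paragraph [0024] notes.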

【0018】次に,路面検出部104の処理動作につい
て説明する。なお,本実施例における路面領域と非路面
領域との分離はカラー画像で行うものとする。また,こ
の部分は,例えば,特願平3−90043号公報に開示
されている障害物検出の方法を用いてもよい。カラー画
像を用いることによって,より正確な非路面領域の検出
が可能となる。
Next, the processing of the road surface detection unit 104 is described. In this embodiment the separation into road-surface and non-road-surface regions is performed on a color image. For this part, the obstacle detection method disclosed in Japanese Patent Application No. 3-90043 may also be used. Using a color image makes the detection of the non-road-surface region more accurate.

【0019】まず,カラー画像の入力後,画面の下端部
分で色を認識する。該画面の下端部分は車両の直前に相
当するため,路面である確率が非常に高い。次に,画面
全体で画面下端部と同様な色の部分を抽出する。該同様
の色をもつ部分が路面領域であり,その他の部分を非路
面領域として認識する。
First, after inputting a color image, the color is recognized at the lower end portion of the screen. Since the lower end portion of the screen corresponds to the front of the vehicle, the probability of being a road surface is very high. Next, the same color part as the bottom edge of the screen is extracted from the entire screen. The portion having the same color is the road surface area, and the other portions are recognized as the non-road surface area.

【0020】具体的には,カラー画像がR(赤),G
(緑),B(青)の3原色で表されているものとする
と,例えば, V1=R/(R+G+B) V2=G/(R+G+B) V3=B/(R+G+B) と変換した後,上記V1,V2,V3をベクトルとみな
して,類似度(内積)を計算すればよい。
Specifically, if the color image is represented by the three primary colors R (red), G (green), and B (blue), it is converted, for example, as V1 = R/(R+G+B), V2 = G/(R+G+B), V3 = B/(R+G+B), and (V1, V2, V3) is then treated as a vector whose similarity (inner product) is computed.

【0021】すなわち,画面下端部分で基準となるV
1,V2,V3を求める。次に,画面の各画素で同様に
V1,V2,V3を求め,基準となるV1,V2,V3
との内積を計算し,ある閾値により2値化処理を実行
し,該閾値以上の画素を路面領域とする。また,上記の
他に,輝度,色相,彩度に分離されているカラー情報を
用いるには,色相と彩度とをベクトルの要素と考えて,
上記と同様の処理を行ってもよい。
That is, the reference values V1, V2, V3 are computed at the lower edge of the screen. V1, V2, V3 are then computed in the same way for each pixel of the screen, their inner product with the reference (V1, V2, V3) is calculated, binarization is performed with a certain threshold, and pixels at or above the threshold are taken as the road-surface region. When the color information is instead separated into luminance, hue, and saturation, the same processing may be performed with hue and saturation as the elements of the vector.
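A minimal sketch of this similarity test follows. The number of reference rows and the threshold value are illustrative assumptions, and the unit-normalization before the inner product is a convenience added here (the patent only specifies an inner product followed by thresholding).

```python
import numpy as np

def road_mask(rgb, threshold=0.995, ref_rows=8):
    """Binarize road vs. non-road by inner-product similarity of the
    normalized color vector (V1, V2, V3) with a reference taken from
    the bottom of the frame, as described above."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                          # guard against black pixels
    v = rgb / s                              # (V1, V2, V3) per pixel
    ref = v[-ref_rows:].reshape(-1, 3).mean(axis=0)
    # scale both sides to unit length so one threshold works for any color
    v_unit = v / np.linalg.norm(v, axis=2, keepdims=True)
    ref_unit = ref / np.linalg.norm(ref)
    sim = (v_unit * ref_unit).sum(axis=2)    # inner product per pixel
    return sim >= threshold                  # True = road-surface pixel
```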

【0022】また,路面領域の検出は,入力画像に対し
て行ってもよいが,上記画像(B(i,j))に対して
行ってもよい。画像B(i,j)に対して行った方が効
果的であるため,以下においては,画像B(i,j)に
対して路面領域の検出を実行するものとして説明する。
The detection of the road surface area may be performed on the input image, but may be performed on the image (B (i, j)). Since it is more effective to perform the processing on the image B (i, j), the following description will be made assuming that the road surface area is detected on the image B (i, j).

【0023】上記処理においては,白線等の路面上に描
かれた路面色以外のものも非路面領域として認識される
ため,路面領域を2値画像処理により膨張処理すること
によって,白線部分を一旦路面領域に変換し,さらに,
収縮処理を実行する。この膨張および収縮の2つの処理
によって,白線等の細い線やノイズ成分は路面領域に吸
収される。先行車両等の障害物は,ある程度の大きさを
もっているため,膨張・収縮処理を行っても元の大きさ
が保持される。
In the above processing, features drawn on the road in colors other than the road-surface color, such as white lines, are also recognized as non-road-surface region. The road-surface region is therefore expanded by binary image processing, temporarily converting the white-line portions into road-surface region, and contraction processing is then executed. These two operations, expansion and contraction, absorb thin lines such as white lines and noise components into the road-surface region. Obstacles such as a preceding vehicle have a certain size, so their original size is retained even after the expansion/contraction processing.

【0024】なお,入力画像では遠方の障害物は小さく
撮像されているため,膨張・収縮処理により路面領域に
変換されている可能性があるが,座標変換後における画
像を用いることにより,遠方でも大きな物体として変換
されているため,路面領域に吸収されることがなくな
る。
In the input image, a distant obstacle is captured small and might therefore be converted into road-surface region by the expansion/contraction processing; by using the image after the coordinate transformation, however, even a distant object has already been transformed into a large one and is no longer absorbed into the road-surface region.

【0025】また,上記の場合,遠方の物体は画面上方
に映り,座標変換により画面上方になるほど拡大するよ
うな変換を行う。したがって,画面上部の領域は,自然
と大きく判断されるため,入力画像に小さく映った物体
を,遠方に位置するために小さいのか,物体そのものが
小さいのかの判断は特に不要となる。
In this case a distant object appears in the upper part of the screen, and the coordinate transformation enlarges objects increasingly toward the top of the screen. Regions in the upper part of the screen are therefore naturally rendered large, so there is no need to judge whether an object that appears small in the input image is small because it is far away or because the object itself is small.

【0026】さらに,上記膨張・収縮処理について詳細
に説明する。図5は,この膨張・収縮処理を示す説明図
である。まず,膨張処理は,図5(b)に示す如く,2
値画像において,ある注目画素の近傍8画素のうち,1
つでも“1”であれば,“1”として処理する。すなわ
ち,図5(a)におけるa〜iのOR論理をとる。な
お,3画素以内の“0”領域は消滅する。
The expansion/contraction processing is now described in detail. FIG. 5 is an explanatory diagram of this processing. In the expansion processing, as shown in FIG. 5(b), a pixel of interest in the binary image is set to "1" if even one of its eight neighboring pixels is "1"; that is, the OR of a through i in FIG. 5(a) is taken. "0" regions of three pixels or less disappear.

【0027】また,収縮処理は,図5(c)に示すよう
に,注目画素と近傍8画素の計9画素が,すべて“1”
のときにのみ“1”として処理し,他の場合は“0”と
して処理する。膨張回数だけ収縮すれば,消滅しなかっ
た領域は,ほぼ元の大きさになる。
In the contraction processing, as shown in FIG. 5(c), a pixel is set to "1" only when all nine pixels, the pixel of interest and its eight neighbors, are "1"; otherwise it is set to "0". Contracting as many times as the expansion was applied restores the regions that did not disappear to approximately their original size.

【0028】なお,路面領域を“1”,非路面領域を
“0”として処理する場合は膨張・収縮となり,逆の場
合は,収縮→膨張の順に処理を実行することにより全く
同様の結果を得ることができる。
When the road-surface region is processed as "1" and the non-road-surface region as "0", the order is expansion then contraction; in the opposite case, exactly the same result can be obtained by executing contraction first and then expansion.
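The 3×3 OR/AND operations above can be sketched directly with array shifts; pure NumPy is used here rather than a morphology library so the code mirrors Fig. 5, and the repetition count n is illustrative.

```python
import numpy as np

def dilate(mask):
    """One expansion step: a pixel becomes 1 if any of the 9 pixels of its
    3x3 neighborhood (a..i in Fig. 5(a)) is 1 -- the OR described above."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask):
    """One contraction step: 1 only when all 9 pixels of the window are 1."""
    p = np.pad(mask, 1, constant_values=True)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def absorb_thin_lines(road, n=1):
    """Expand the road region n times, then contract n times: thin non-road
    features (white lines, noise) are absorbed, larger regions keep their size."""
    for _ in range(n):
        road = dilate(road)
    for _ in range(n):
        road = erode(road)
    return road
```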

【0029】また,障害物には,例えば,先行車両のよ
うに垂直や水平のエッジ成分をもつ物体が多く,特に,
路面領域との境界部分は垂直や水平の直線となる場合が
多い。したがって,まず,非路面領域をラベリング処理
により領域分離を行う。その後,各非路面領域におい
て,路面領域との境界部分をエッジとして検出する。次
に,該境界部分に直線を適合させ,障害物領域を確定す
る。また,この場合における直線適合は,一つの領域に
対して3本の直線を当てはめることにより実行される。
Many obstacles, such as a preceding vehicle, are objects with vertical and horizontal edge components; in particular, the boundary with the road-surface region is often a vertical or horizontal straight line. Therefore, the non-road-surface regions are first separated by labeling. Then, in each non-road-surface region, the boundary with the road-surface region is detected as an edge. Next, straight lines are fitted to the boundary to determine the obstacle region. The line fitting in this case is executed by fitting three straight lines to each region.

【0030】第1の直線適合は,水平方向の傾きをもつ
直線を検出する。非路面領域と路面領域の境界において
水平な直線部分をもつのは,先行車両など障害物の下端
である場合が多い。したがって,検出された直線を直線
1とすると,直線1は, j=b により表すことができる。
The first line fit detects a straight line with horizontal inclination. A horizontal straight segment on the boundary between the non-road-surface and road-surface regions is most often the lower edge of an obstacle such as a preceding vehicle. Denoting the detected line as line 1, it can be represented by j = b.

【0031】また,第2と第3の直線適合は,障害物の
右端と左端を認識するためのものである。入力画像(A
(x,y))においては,障害物の右端と左端は共に垂
直な直線成分をもつが,画像B(i,j)では視点(原
点)から放射状に延びる直線に変換される。すなわち, i=a・j 上の境界点(エッジ点)の個数をカウントする。aをあ
る範囲内において変化させ,エッジ点のカウント数が閾
値を越える最も左側の直線の傾きをaL とし,最も右側
の直線の傾きをaR として,共に直線検出結果とする。
The second and third line fits are for recognizing the right and left edges of the obstacle. In the input image A(x, y) the right and left edges of an obstacle both have vertical straight-line components, but in the image B(i, j) they are transformed into straight lines radiating from the viewpoint (origin). The number of boundary points (edge points) on the line i = a·j is therefore counted while a is varied within a certain range; the slope of the leftmost line whose edge-point count exceeds a threshold is taken as aL, the slope of the rightmost such line as aR, and both are output as line detection results.

【0032】このように直線を各非路面領域を用いて検
出する。なお,前方や後方の画像においては,先行車両
や後続車両の下端部分は,画面上水平な直線である可能
性が高い。側方の表示においては,カメラの設置角度に
よっては,側方車が画像上水平に撮像されるとは限らな
いため,直線1は水平な直線の検出ではなく,ある定め
られた角度をもつ直線の検出を行えばよいことになる。
In this way straight lines are detected for each non-road-surface region. In forward and rearward images, the lower edges of preceding and following vehicles are very likely to be horizontal straight lines on the screen. In a side view, depending on the mounting angle of the camera, a vehicle to the side is not necessarily imaged horizontally, so line 1 should detect not a horizontal line but a line at a certain predetermined angle.

【0033】次に,非路面領域の切り出し処理について
説明する。まず,切り出し処理の前に画像B(i,j)
における非路面領域を消去しておく。これは,先行車両
等の高さのある物体より遠方の路面上の点は,先行車両
等によって隠されるため,画像化できないためである。
したがって,見やすさの向上を図るために,上記隠され
た部分については,固定の色によって消去(塗りつぶ
し)する。画面下端部において検出した路面色を用いて
塗りつぶすことにより,見やすい画像を得ることができ
る。
Next, the cut-out processing of the non-road-surface region is described. First, before the cut-out, the non-road-surface regions in the image B(i, j) are erased. This is because points on the road surface beyond a tall object such as a preceding vehicle are hidden by that object and cannot be imaged. To improve legibility, the hidden portions are therefore erased (filled in) with a fixed color. Filling with the road-surface color detected at the lower edge of the screen yields an easy-to-view image.

【0034】また,上記における消去は,各領域におい
て検出された3本の直線の内部について実行する。すな
わち,消去は,j>bにおいて, aL ・j<i<aR ・j となる範囲を対象として塗りつぶし処理を実行する。
The erasure above is executed inside the three straight lines detected in each region; that is, the fill is applied to the range aL·j < i < aR·j for j > b.

【0035】次に,非路面領域において,入力画像を切
り出す。上記(3)式から, yD =F・H・M/b が切り出す画像A(x,y)の最下端である。また,左
端は, xL =F・M・aL /L となり,一方,右端は, xR =F・M・aR /L となる。
Next, the input image is cut out in the non-road-surface region. From equation (3), yD = F·H·M/b is the lowermost edge of the image A(x, y) to be cut out; the left edge is xL = F·M·aL/L, while the right edge is xR = F·M·aR/L.

【0036】なお,上端は適切な固定値とする。例え
ば,xL −xR の何倍かを切り出す領域のy方向の幅
(高さ)としてもよい。このように,画像A(x,y)
から四角形の領域を切り出す。
The upper edge is set to an appropriate fixed value; for example, some multiple of xL − xR may be used as the y-direction width (height) of the cut-out region. In this way a rectangular region is cut out of the image A(x, y).

【0037】次に,合成部107の処理動作について詳
述する。上記のようにして切り出した四角形の領域を,
画像B(i,j)内の塗りつぶしを行った領域に拡大縮
小を実行し,転送する。これは,画像A(x,y)で
は,遠方の障害物は小さく撮像されるが,画像B(i,
j)では同じ大きさになるためである。直線適合結果か
ら,j=bとした場合, iL =aL ・b iR =aR ・b が画像B(i,j)における障害物領域の下端であるか
ら,iR −iL とxR −xL とが同じ大きさになるよう
に拡大あるいは縮小すればよい。こうして,点(xR
D )と点(iR ,b)が一致するように切り出した画
像を,画像B(i,j)にはめ込む処理を実行する。
Next, the processing of the synthesizing unit 107 is described in detail. The rectangular region cut out as above is enlarged or reduced and transferred into the filled-in region of the image B(i, j). This is necessary because a distant obstacle is captured small in the image A(x, y) but has its full size in the image B(i, j). From the line-fitting results, at j = b, iL = aL·b and iR = aR·b give the lower edge of the obstacle region in the image B(i, j), so the cut-out is enlarged or reduced so that iR − iL and xR − xL have the same size. The cut-out image is then fitted into the image B(i, j) so that the point (xR, yD) coincides with the point (iR, b).
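The scale-and-paste step can be sketched as follows. Nearest-neighbor resampling and the clamping at the image top are implementation choices of this sketch, not specified in the patent, and iL ≥ 0 is assumed for simplicity.

```python
import numpy as np

def paste_cutout(B, cut, a_l, a_r, b):
    """Resize the rectangle cut from A(x, y) so its width equals
    iR - iL = (aR - aL)*b, then paste it into B(i, j) with its lower edge
    on the line j = b, aligning (xR, yD) with (iR, b) as described above."""
    i_l, i_r = int(round(a_l * b)), int(round(a_r * b))
    scale = (i_r - i_l) / cut.shape[1]
    h = int(round(cut.shape[0] * scale))
    # nearest-neighbour resize of `cut` to (h, i_r - i_l)
    ys = np.minimum((np.arange(h) / scale).astype(int), cut.shape[0] - 1)
    xs = np.minimum((np.arange(i_r - i_l) / scale).astype(int), cut.shape[1] - 1)
    resized = cut[np.ix_(ys, xs)]
    top = max(b - h, 0)                     # clamp if the cut-out exceeds the top
    B[top:b, i_l:i_r] = resized[h - (b - top):, :]
    return B
```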

【0038】そして,画像B(i,j)を表示すること
により,周囲状況が表示される。さらに,画像B(i,
j)に,水平線を1本追記することにより,適正車間距
離を表示することも可能である。画像B(i,j)のj
軸は,車両前方方向の距離に対応する。車速から適正車
間距離を求め,その距離に対応するjに水平線を描けば
よい。水平線の位置と先行車両との画像上における位置
から,容易に先行車両までの距離を把握することが可能
となる。
Then, by displaying the image B(i, j), the surrounding situation is displayed. Further, the proper inter-vehicle distance can be displayed by drawing one additional horizontal line in B(i, j). The j axis of the image B(i, j) corresponds to the distance in the forward direction of the vehicle; the proper inter-vehicle distance is obtained from the vehicle speed, and a horizontal line is drawn at the corresponding j. From the position of this line and the position of the preceding vehicle in the image, the distance to the preceding vehicle can be grasped easily.
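Converting the proper inter-vehicle distance to a display row needs only j = M·Z from Eq. (2). The headway rule below (reaction distance plus a braking term) and its constants are illustrative assumptions; the patent says only that the distance is derived from the vehicle speed.

```python
def marker_row(speed_mps, M=4.0, reaction_time=1.5, decel=6.0):
    """Row j of the proper-distance marker: j = M * Z, with Z a
    speed-dependent proper inter-vehicle distance (assumed rule)."""
    proper_z = speed_mps * reaction_time + speed_mps ** 2 / (2.0 * decel)
    return int(round(M * proper_z))
```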

【0039】このように,本実施例では,入力画像を真
上から見た画像に射影変換する際,非路面領域について
はカメラからみた画像をそのままの状態で表示できるよ
うにすることによって,画面を見て障害物が容易に認識
することができるため,ブレーキングや操縦等の運転判
断が的確に実行でき,例えば,スムーズな車庫入れ運転
等が可能となる。
As described above, in this embodiment, when the input image is projectively transformed into an image viewed from directly above, the non-road-surface regions are displayed as seen from the camera. Obstacles can therefore be recognized easily on the screen, driving decisions such as braking and steering can be executed accurately, and, for example, smooth garage parking becomes possible.

【0040】[0040]

【発明の効果】以上説明したように,この発明に係る車
両用周囲状況表示装置によれば,入力画像を路面領域と
非路面領域とに分離し,路面領域のみ座標変換し,非路
面領域に対しては座標変換を行わず平行移動や拡大/縮
小を行い,これら2つの処理画像を合成させることによ
り,先行車両や障害物等の高さのある物体画像も歪みな
く表示可能としたため,先行車両や障害物等の高さのあ
る物体画像も歪みなく表示し,運転者に対して運転時に
おける判断を的確に行えるように周囲状況をより正確に
知らせることができる。
As described above, according to the vehicle surroundings display device of the present invention, the input image is separated into a road-surface area and a non-road-surface area; only the road-surface area is coordinate-transformed, while the non-road-surface area is translated and enlarged/reduced without coordinate transformation, and the two processed images are combined. Images of objects having height, such as preceding vehicles and obstacles, can therefore be displayed without distortion, and the driver can be informed of the surrounding situation more accurately so that decisions during driving can be made properly.

【図面の簡単な説明】[Brief description of drawings]

【図1】この発明に係る車両用周囲状況表示装置の概略
構成を示すブロック図である。
FIG. 1 is a block diagram showing a schematic configuration of a vehicle surrounding condition display device according to the present invention.

【図2】図1に示した車両用周囲状況表示装置の入力画
像および表示画像の例を示す説明図である。
FIG. 2 is an explanatory diagram showing an example of an input image and a display image of the vehicle surroundings display device shown in FIG.

【図3】図1に示した車両用周囲状況表示装置の画像処
理動作を示すフローチャートである。
FIG. 3 is a flowchart showing an image processing operation of the vehicle surroundings display device shown in FIG.

【図4】図3に示したフローチャートの各処理動作に対
応する表示画面を示す説明図である。
FIG. 4 is an explanatory diagram showing a display screen corresponding to each processing operation of the flowchart shown in FIG.

【図5】図1に示した車両用周囲状況表示装置の膨張・
収縮処理を示す説明図である。
FIG. 5 is an explanatory diagram showing the expansion/contraction processing of the vehicle surroundings display device shown in FIG. 1.

【符号の説明】[Explanation of symbols]

101 画像入力部 102 画像処理部 103 表示部 104 路面検出部 105 画像切り出し部 106 座標変換部 107 合成部
[Reference Signs List] 101 image input unit; 102 image processing unit; 103 display unit; 104 road surface detection unit; 105 image cut-out unit; 106 coordinate conversion unit; 107 synthesizing unit

Claims (1)

[Claims]
1. A vehicle surroundings display device comprising: image input means for capturing and inputting an image of the situation around the vehicle; road surface area detection means for separating the input image from the image input means into a road surface area and a non-road surface area; coordinate conversion means for performing coordinate conversion of the input image from the image input means; non-road surface area extraction means for cutting out the image of the non-road surface area separated by the road surface area detection means; image synthesizing means for combining the image coordinate-converted by the coordinate conversion means with the image cut out by the non-road surface area extraction means; and image display means for displaying the combined image produced by the image synthesizing means.
JP35534593A 1993-12-24 1993-12-24 Ambient situation display device for vehicles Expired - Fee Related JP3381351B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP35534593A JP3381351B2 (en) 1993-12-24 1993-12-24 Ambient situation display device for vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP35534593A JP3381351B2 (en) 1993-12-24 1993-12-24 Ambient situation display device for vehicles

Publications (2)

Publication Number Publication Date
JPH07186833A true JPH07186833A (en) 1995-07-25
JP3381351B2 JP3381351B2 (en) 2003-02-24

Family

ID=18443403

Family Applications (1)

Application Number Title Priority Date Filing Date
JP35534593A Expired - Fee Related JP3381351B2 (en) 1993-12-24 1993-12-24 Ambient situation display device for vehicles

Country Status (1)

Country Link
JP (1) JP3381351B2 (en)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2259220A2 (en) 1998-07-31 2010-12-08 Panasonic Corporation Method and apparatus for displaying image
EP2309453A2 (en) 1998-07-31 2011-04-13 Panasonic Corporation Image displaying apparatus and image displaying method
EP2267656A2 (en) 1998-07-31 2010-12-29 Panasonic Corporation Image displaying apparatus und image displaying method
US8077202B2 (en) 1998-10-08 2011-12-13 Panasonic Corporation Driving-operation assist and recording medium
US8111287B2 (en) 1998-10-08 2012-02-07 Panasonic Corporation Driving-operation assist and recording medium
US9272731B2 (en) 1998-10-08 2016-03-01 Panasonic Intellectual Property Corporation Of America Driving-operation assist and recording medium
US7277123B1 (en) 1998-10-08 2007-10-02 Matsushita Electric Industrial Co., Ltd. Driving-operation assist and recording medium
JP2002335524A (en) * 1999-09-20 2002-11-22 Matsushita Electric Ind Co Ltd Driving support device
US6993159B1 (en) 1999-09-20 2006-01-31 Matsushita Electric Industrial Co., Ltd. Driving support system
GB2361376A (en) * 1999-09-30 2001-10-17 Toyoda Automatic Loom Works Image conversion device for vehicle rearward-monitoring device
GB2361376B (en) * 1999-09-30 2004-07-28 Toyoda Automatic Loom Works Image conversion device for vehicle rearward-monitoring device
US6985171B1 (en) 1999-09-30 2006-01-10 Kabushiki Kaisha Toyoda Jidoshokki Seisakusho Image conversion device for vehicle rearward-monitoring device
WO2001024527A1 (en) * 1999-09-30 2001-04-05 Kabushiki Kaisha Toyoda Jidoshokki Seisakusho Image conversion device for vehicle rearward-monitoring device
JP2012019552A (en) * 2001-03-28 2012-01-26 Panasonic Corp Driving support device
DE10296593B4 (en) * 2001-03-28 2017-02-02 Panasonic Intellectual Property Management Co., Ltd. Driving support device
US7218758B2 (en) 2001-03-28 2007-05-15 Matsushita Electric Industrial Co., Ltd. Drive supporting device
JP2002359838A (en) * 2001-03-28 2002-12-13 Matsushita Electric Ind Co Ltd Device for supporting driving
JP2003132349A (en) * 2001-10-24 2003-05-09 Matsushita Electric Ind Co Ltd Drawing device
JP2004213489A (en) * 2003-01-07 2004-07-29 Nissan Motor Co Ltd Driving support device for vehicle
US7512251B2 (en) 2004-06-15 2009-03-31 Panasonic Corporation Monitoring system and vehicle surrounding monitoring system
US7693303B2 (en) 2004-06-15 2010-04-06 Panasonic Corporation Monitoring system and vehicle surrounding monitoring system
EP2182730A2 (en) 2004-06-15 2010-05-05 Panasonic Corporation Monitor and vehicle peripheriy monitor
US7916899B2 (en) 2004-06-15 2011-03-29 Panasonic Corporation Monitoring system and vehicle surrounding monitoring system
JP2007235642A (en) * 2006-03-02 2007-09-13 Hitachi Ltd Obstruction detecting system
WO2007111377A1 (en) * 2006-03-27 2007-10-04 Sanyo Electric Co., Ltd. Drive assistance device
US9308862B2 (en) 2006-03-27 2016-04-12 Panasonic Intellectual Property Management Co., Ltd. Drive assistance device
JP2008177856A (en) * 2007-01-18 2008-07-31 Sanyo Electric Co Ltd Bird's-eye view image provision apparatus, vehicle, and bird's-eye view image provision method
US8330816B2 (en) 2007-02-21 2012-12-11 Alpine Electronics, Inc. Image processing device
JP2008205914A (en) * 2007-02-21 2008-09-04 Alpine Electronics Inc Image processor
EP2193957A2 (en) 2008-11-28 2010-06-09 Aisin Seiki Kabushiki Kaisha Bird's-eye image generating apparatus
JP2012004693A (en) * 2010-06-15 2012-01-05 Clarion Co Ltd Driving support device
JP2014126990A (en) * 2012-12-26 2014-07-07 Yamaha Motor Co Ltd Obstacle detection device and vehicle using the same
JP2015221662A (en) * 2014-05-22 2015-12-10 ドクター エンジニール ハー ツェー エフ ポルシェ アクチエンゲゼルシャフトDr. Ing. h.c.F. Porsche Aktiengesellschaft Method for presenting vehicle environment on display apparatus, display apparatus, system comprising plural image capturing units and display apparatus, and computer program
CN108136954A (en) * 2015-09-14 2018-06-08 法雷奥照明公司 For projecting image onto the projecting method for motor vehicles in projection surface
CN108136954B (en) * 2015-09-14 2021-06-11 法雷奥照明公司 Projection method for a motor vehicle for projecting an image onto a projection surface
CN107914714A (en) * 2017-11-16 2018-04-17 北京经纬恒润科技有限公司 The display methods and device of a kind of vehicle running state
CN111310663A (en) * 2020-02-17 2020-06-19 北京三快在线科技有限公司 Road fence detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
JP3381351B2 (en) 2003-02-24

Similar Documents

Publication Publication Date Title
JP3381351B2 (en) Ambient situation display device for vehicles
JP4309920B2 (en) Car navigation system, road marking identification program, and road marking identification method
US8305431B2 (en) Device intended to support the driving of a motor vehicle comprising a system capable of capturing stereoscopic images
US10178314B2 (en) Moving object periphery image correction apparatus
KR102404149B1 (en) Driver assistance system and method for object detection and notification
US11518390B2 (en) Road surface detection apparatus, image display apparatus using road surface detection apparatus, obstacle detection apparatus using road surface detection apparatus, road surface detection method, image display method using road surface detection method, and obstacle detection method using road surface detection method
JP2009017157A (en) Image processor, method and program
WO2013161028A1 (en) Image display device, navigation device, image display method, image display program and recording medium
JP2010018102A (en) Driving support device
JP2008262333A (en) Road surface discrimination device and road surface discrimination method
GB2550472A (en) Adaptive display for low visibility
CN115273023A (en) Vehicle-mounted road pothole identification method and system and automobile
CN114339185A (en) Image colorization for vehicle camera images
US20240078815A1 (en) Device and method for recognizing obstacles for a vehicle
JP2002321579A (en) Warning information generating method and vehicle side image generating device
Habib et al. Lane departure detection and transmission using Hough transform method
JP2007140828A (en) Sign recognition method
US20240071104A1 (en) Image processing device, image processing method, and recording medium
JP4574157B2 (en) Information display device and information display method
JP2611326B2 (en) Road recognition device for vehicles
JPH06348991A (en) Traveling environment recognizer for traveling vehicle
JPH07195978A (en) Vehicle surroundings display unit
JP2003085535A (en) Position recognition method for road guide sign
JPH0676065A (en) Method and device for recognizing road circumstances
JP3380436B2 (en) Recognition method of vehicles, etc.

Legal Events

Date Code Title Description
FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20071220

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20081220

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20081220

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20091220

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101220

Year of fee payment: 8

LAPS Cancellation because of no payment of annual fees