JPH06348991A - Traveling environment recognizer for traveling vehicle - Google Patents

Traveling environment recognizer for traveling vehicle

Info

Publication number
JPH06348991A
JPH06348991A (application number JP5132219A / JP13221993A)
Authority
JP
Japan
Prior art keywords
color image
image
area
traveling
dimensional space
Prior art date
Legal status
Pending
Application number
JP5132219A
Other languages
Japanese (ja)
Inventor
Mikao Nakajima
実香夫 中島
Toshihiro Toda
敏宏 戸田
Current Assignee
Sumitomo Electric Industries Ltd
Original Assignee
Sumitomo Electric Industries Ltd
Priority date
Filing date
Publication date
Application filed by Sumitomo Electric Industries Ltd filed Critical Sumitomo Electric Industries Ltd
Priority to JP5132219A priority Critical patent/JPH06348991A/en
Publication of JPH06348991A publication Critical patent/JPH06348991A/en
Pending legal-status Critical Current

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)
  • Navigation (AREA)
  • Image Processing (AREA)

Abstract

PURPOSE: To provide a traveling environment recognizer which can accurately recognize the traveling lane of a vehicle, the preceding vehicles on the lane, the traffic control signs, etc.

CONSTITUTION: A data converting part 2 projects the picture elements of a color image, supplied from a color input part 1 of a color image pickup device mounted on a traveling vehicle, into a three-dimensional space different from the one that takes the R (red), G (green), and B (blue) values as its three axes. The three-dimensional space into which each picture element is projected is then divided into plural areas by a dividing part 3. The part 3 takes the center value of each divided area as the value of every picture element belonging to that area, and divides the color image into plural areas based on the picture element values calculated by the part 2. An image recognizing part 4 recognizes, for each divided area, the traveling lane and the preceding vehicles and traffic control signs on it by matching against actual views. The objects recognized by the part 4 are shown on a display part 5.

Description

Detailed Description of the Invention

[0001]

[Industrial Field of Application] The present invention relates to a traveling environment recognition device that is mounted on a moving vehicle or the like and that recognizes the traveling lane of a road, white lines, preceding vehicles, and the like from a color image obtained by capturing the scenery ahead of the vehicle's traveling area.

[0002]

[Prior Art] Means have been developed for assisting the safe driving operation of a vehicle driver by capturing the scene ahead of a traveling automobile and recognizing the vehicle's traveling lane and the like from the captured image (JP-A-4-134503, JP-A-62-221800). JP-A-4-134503 discloses a technique in which an image of the area ahead of the automobile, captured by a video camera such as a CCD camera mounted on the automobile, is subjected to filtering and color extraction to extract the white line region; the extracted region is binarized and stored in a frame memory, and the vehicle's road region and road direction are detected from this image. JP-A-62-221800 discloses a technique in which pixels satisfying certain conditional expressions are extracted from a similarly captured image, and the vehicle's traveling lane is identified according to the extracted white line and yellow line regions.

[0003]

[Problems to be Solved by the Invention] In conventional devices of this type, road markings such as white or yellow lines are extracted in order to find the vehicle's traveling lane, and the region bounded by the extracted markings is identified as the traveling lane. On an actual road, however, road markings may be unrecognizable because of road-surface damage or the like, in which case the traveling lane cannot be identified.

[0004] Meanwhile, techniques that recognize the traveling lane by applying fixed conditional expressions to density values cannot make stable judgments when, for example, lighting conditions change, and may mistake white or yellow vehicles appearing in the image for road markings. With conventional techniques it is therefore virtually impossible to individually recognize the surrounding vehicles, road signs, and the like in the captured image.

[0005]

[Means for Solving the Problems] A traveling environment recognition device for a moving vehicle according to the present invention comprises: a color image pickup device that captures the scene ahead of the moving vehicle to obtain a color image; a region division unit that divides the pixels of the color image obtained by the color image pickup device into a plurality of regions based on color similarity; and an image recognition unit that recognizes the traveling environment based on the region division result.

[0006]

[Operation] In the present invention, dividing the pixels of the color image based on color similarity makes it possible to recognize the presence or absence of white lines and yellow lines, as well as signs, directly and accurately without misidentification.

[0007]

[Embodiments] The present invention will now be described concretely with reference to the drawings showing its embodiments. FIG. 1 is a block diagram showing the main configuration of a traveling environment recognition device for a moving vehicle such as an automobile according to the present invention. In the figure, 1 denotes a color image input unit composed of a color image pickup device such as a CCD camera. The color image captured by this device is input to the data conversion unit 2 as the three primary color values R (red), G (green), and B (blue) of each pixel. FIG. 2 is an explanatory diagram of a color image captured by the on-board CCD camera: 11 is the vehicle's own traveling lane, 12 is the adjacent lane, and 13 is the white line marking the boundary between the two lanes. A preceding vehicle 14 is imaged on the traveling lane 11 together with its shadow region 14a. Soundproof walls 15 and 16 run along the left edge of the traveling lane 11 and the right edge of the adjacent lane 12, respectively. The data conversion unit 2 projects the R, G, B values of each pixel of the color image received from the color image input unit 1 into a three-dimensional space different from the coordinate system whose three axes are R, G, and B: namely, a coordinate system whose three axes are a luminance parameter related to color similarity and two parameters that facilitate separating saturation and lightness. It then divides this space into regions, replaces each pixel with the representative value defined for its region, and outputs the result to the region division unit 3.

[0008] FIG. 3 illustrates the processing performed by the data conversion unit 2. FIG. 3(a) plots pixels in a three-dimensional coordinate system whose axes are the R, G, B values; it shows as examples a pixel P(10, 10, 10) whose R, G, B values are all 10 and a pixel Q(20, 20, 20) whose values are all 20. The data conversion unit 2 converts each pixel from this O-RGB coordinate system into values in the O-XYZ coordinate system shown in FIG. 3(b), as follows. In the O-RGB coordinate system of FIG. 3(a), the line through the origin O and the pixels P and Q contains all pixels whose R, G, B values are equal and that differ only in luminance. If the X axis is chosen to coincide with this line, pixels differing only in luminance are projected onto the X axis, while pixels differing in lightness and saturation are projected onto the Y and Z axes. The conversion from the O-RGB coordinate system to the O-XYZ coordinate system proceeds as follows. As shown in FIG. 3(b), the R and G axes are each rotated by 45° about the B axis; the axis obtained from the R axis is called the r axis and the axis obtained from the G axis the g axis, giving the O-rgB coordinate system. Next, in this O-rgB coordinate system, the B and r axes are each rotated by -θ about the g axis; the axis obtained from the r axis becomes the X axis and the axis obtained from the B axis becomes the Z axis, yielding the O-XYZ coordinate system. Here θ satisfies tan θ = 1/√2.
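The two rotations above compose into a single linear map. As a sketch (the function name and use of NumPy are my assumptions; the patent specifies only the geometry), the composite transform sends the gray diagonal R = G = B onto the X axis:

```python
import numpy as np

SQRT2, SQRT3, SQRT6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)

def rgb_to_xyz(rgb):
    """Composite of the two rotations in [0008]: 45 deg about the B axis,
    then -theta (tan theta = 1/sqrt(2)) about the g axis.  X carries
    luminance along the gray diagonal R = G = B; Y and Z carry the
    chromatic content."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    x = (r + g + b) / SQRT3        # gray-diagonal (luminance) axis
    y = (g - r) / SQRT2            # first chromatic axis
    z = (2.0 * b - r - g) / SQRT6  # second chromatic axis
    return np.stack([x, y, z], axis=-1)
```

For the pixel P(10, 10, 10) of FIG. 3(a) this yields (10 × √3, 0, 0), consistent with the text.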

[0009] As a result, the X axis of the O-XYZ coordinate system passes through the point P shown in FIG. 3(a); the R, G, B values of a pixel at any point on the X axis are equal, and differences in pixel position along the X axis represent differences in luminance. On the Y and Z axes, large absolute Y and Z values indicate that a pixel is chromatic, while small absolute Y and Z values indicate that it is achromatic. The region division unit 3 then divides the three-dimensional O-XYZ space onto which the input pixels are projected into a plurality of regions. FIG. 4 illustrates this processing: the X, Y, and Z axes are divided into L, M, and N intervals respectively, yielding L × M × N regions. For example, if the R, G, B primary color values each take 256 gradations from 0 to 255, the corresponding range of values along the X axis is 0 to 255 × √3.
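The chromatic/achromatic reading of Y and Z reduces to a small predicate; the tolerance value below is my assumption, since the patent gives no threshold:

```python
def is_achromatic(y, z, tol=2.0):
    """Per [0009]: a pixel is achromatic (gray) when both |Y| and |Z|
    are small, and chromatic when either is large.  tol is illustrative."""
    return abs(y) <= tol and abs(z) <= tol
```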

[0010] Next, the values from 0 to 255 × √3 are converted into logarithmic values. For example, taking 10 as the base, dividing the range from -∞ to log₁₀(255 × √3) into (L - 1) equal parts divides the X axis into L intervals in total. The Y and Z axes are treated in the same way, being divided on the basis of their logarithmic values. The X, Y, Z values of the pixel P, for example, can be expressed as (10 × √3, 0, 0).
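One way to realize this logarithmic division is sketched below, under the assumption that the open lower end of the log scale (log 0 = -∞) is handled by a catch-all first bin beneath a finite floor; the floor value and function names are illustrative:

```python
import numpy as np

def log_bins(max_value, L, floor=1.0):
    """Divide [0, max_value] into L bins as in [0010]: bin 0 catches
    [0, floor), and bins 1..L-1 are log10-equally spaced up to max_value,
    i.e. (L - 1) equal parts of the finite log range.  The floor of 1.0
    is an assumption (pixel values are integers)."""
    edges = np.logspace(np.log10(floor), np.log10(max_value), L)

    def bin_index(v):
        # right-sided search so a value equal to an edge falls in the bin above
        return min(int(np.searchsorted(edges, v, side="right")), L - 1)

    return edges, bin_index
```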

[0011] Suppose that a pixel projected into this XYZ space lies, as shown in FIG. 3(c), in a region whose X values range from 5 to 20 and whose Y and Z values each range from -5 to 2. Its X value is then replaced by 12, the central value of 5 and 20 taken as the representative value, and its Y and Z values are each replaced by -3, taking the central value of 2 and -5 as the representative value. The pixel P thus takes the X, Y, Z values (12, -3, -3).

[0012] The captured color image is thereby expressed as a set of pixels taking at most L × M × N distinct values, and is output to the region division unit 3. The region division unit 3 divides the color image into regions based on the X, Y, Z values of each pixel obtained by the data conversion unit 2, and the region-divided image is output to the image recognition unit 4. The image recognition unit 4 matches the features of the divided regions against the known data it holds and associates each region with an actual structure. For a color image such as that of FIG. 2, for example, the following knowledge is used to associate each region with a real-world structure:
(1) The vehicle's own traveling lane lies below the horizon and is larger than any other region below the horizon.
(2) Regions with the same X, Y, Z values as the traveling lane are road surface.
(3) Regions extending above the horizon are three-dimensional objects.
(4) A moving vehicle in the image casts a shadow at its boundary with the road surface.
(5) White and yellow lines appear as parallel line segments extending upward in the image.
(6) The X value of white-line pixels is larger than the X value of roadway pixels.
By additionally using information such as the real-world size of a region estimated from its position in the image, the traveling environment ahead of the vehicle can be recognized. In this way the image recognition unit 4 recognizes the environment ahead of the vehicle from its own data and the values of the regions produced by the region division unit 3. The recognition result is presented to the driver through the display unit 5 in the manner shown in FIG. 5.
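Rules (1) through (3) lend themselves to a direct procedural sketch. The Region fields, label strings, and horizon-row convention below are illustrative assumptions, not data structures specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class Region:
    xyz: tuple    # representative (X, Y, Z) value of the region
    top_row: int  # highest image row reached (0 = top of the image)
    area: int     # number of pixels in the region

def label_regions(regions, horizon_row):
    """Apply rules (1)-(3) of [0012]: the largest region below the horizon
    is the own lane, regions with the same X, Y, Z values are road surface,
    and regions reaching above the horizon are three-dimensional objects."""
    below = [i for i, r in enumerate(regions) if r.top_row > horizon_row]
    lane = max(below, key=lambda i: regions[i].area) if below else None  # rule (1)
    labels = []
    for i, r in enumerate(regions):
        if i == lane:
            labels.append("own lane")
        elif lane is not None and r.xyz == regions[lane].xyz:
            labels.append("road surface")   # rule (2): same X, Y, Z as the lane
        elif r.top_row <= horizon_row:
            labels.append("3-D object")     # rule (3): extends above the horizon
        else:
            labels.append("unknown")
    return labels
```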

[0013] FIG. 5 illustrates the image displayed on the display unit 5: in the figure, 21 is the traveling lane region of the vehicle and 22 is the region occupied by the preceding vehicle. Although the above embodiment divides the three-dimensional space into L × M × N regions, L, M, and N are not limited to particular values and may be set as required.

[0014]

[Effects of the Invention] As described above, the device of the present invention divides the R, G, B primary color values of each pixel of a captured color image into a plurality of regions based on color similarity, and the traveling environment can easily be recognized from the region division result; the present invention thus has excellent effects.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing a traveling environment recognition device for a moving vehicle according to the present invention.

FIG. 2 is an explanatory diagram of an image captured by the color image pickup device shown in FIG. 1.

FIG. 3 is an explanatory diagram showing the conversion performed by the data conversion unit shown in FIG. 1.

FIG. 4 is an explanatory diagram showing the division performed by the data conversion unit shown in FIG. 1.

FIG. 5 is an explanatory diagram of an image displayed on the display unit shown in FIG. 1.

[Explanation of Symbols]

1 Color image input unit
2 Data conversion unit
3 Region division unit
4 Image recognition unit
5 Display unit

Continuation of the front page: (51) Int. Cl.5 H04N 7/18 G

Claims (3)

[Claims]

1. A traveling environment recognition device for a moving vehicle, comprising: a color image pickup device that captures the scene ahead of the moving vehicle to obtain a color image; a region division unit that divides the pixels of the color image obtained by the color image pickup device into a plurality of regions based on color similarity; and an image recognition unit that recognizes the traveling environment based on the region division result.
2. The traveling environment recognition device for a moving vehicle according to claim 1, wherein the region division unit is configured to divide the image into a plurality of regions based on values obtained by projecting each pixel of the color image into a three-dimensional space different from the space whose three axes are the pixel's R (red), G (green), and B (blue) values.
3. The traveling environment recognition device for a moving vehicle according to claim 2, wherein the region division unit projects the R, G, B values of each pixel of the color image into a three-dimensional space whose three axes are three parameters related to color similarity, divides each axis of this space into L, M, and N intervals respectively so as to divide the space into L × M × N regions, replaces each pixel with the central value, taken as the representative value, of the region onto which it is projected, and divides the image into regions based on the replaced values.
JP5132219A 1993-06-02 1993-06-02 Traveling environment recognizer for traveling vehicle Pending JPH06348991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP5132219A JPH06348991A (en) 1993-06-02 1993-06-02 Traveling environment recognizer for traveling vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP5132219A JPH06348991A (en) 1993-06-02 1993-06-02 Traveling environment recognizer for traveling vehicle

Publications (1)

Publication Number Publication Date
JPH06348991A true JPH06348991A (en) 1994-12-22

Family

ID=15076178

Family Applications (1)

Application Number Title Priority Date Filing Date
JP5132219A Pending JPH06348991A (en) 1993-06-02 1993-06-02 Traveling environment recognizer for traveling vehicle

Country Status (1)

Country Link
JP (1) JPH06348991A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0810496A2 (en) * 1996-05-31 1997-12-03 Société SAGEM Method and device for the identification and localisation of fixed objects along a path
FR2749419A1 (en) * 1996-05-31 1997-12-05 Sagem METHOD AND DEVICE FOR IDENTIFYING AND LOCATING FIXED OBJECTS ALONG A ROUTE
EP0810496A3 (en) * 1996-05-31 1998-08-19 Société SAGEM Method and device for the identification and localisation of fixed objects along a path
WO2000030056A1 (en) * 1998-11-14 2000-05-25 Daimlerchrysler Ag Device and method for recognizing traffic signs
WO2000030024A3 (en) * 1998-11-14 2001-11-29 Daimler Chrysler Ag Method for increasing the power of a traffic sign recognition system
EP1327969A1 (en) * 2002-01-11 2003-07-16 Audi Ag Vehicle with means for recognising traffic road signs
US7804980B2 (en) 2005-08-24 2010-09-28 Denso Corporation Environment recognition device
JP2010067053A (en) * 2008-09-11 2010-03-25 Honda Motor Co Ltd Vehicle environment recognition apparatus
US8670605B2 (en) 2010-03-18 2014-03-11 Ricoh Company, Ltd. Identification method of data point distribution area on coordinate plane and recording medium

Similar Documents

Publication Publication Date Title
CN104951775B (en) Railway highway level crossing signal region security intelligent identification Method based on video technique
US11380111B2 (en) Image colorization for vehicular camera images
KR20120072020A (en) Method and apparatus for detecting run and road information of autonomous driving system
JP3381351B2 (en) Ambient situation display device for vehicles
JP2014096135A (en) Moving surface boundary recognition device, mobile equipment control system using the same, moving surface boundary recognition method, and program for recognizing moving surface boundary
CN102314601A (en) Use nonlinear optical to remove by the shade in the image of catching based on the camera of vehicle according to constant nuclear
JP4762026B2 (en) Road sign database construction device
WO2021026855A1 (en) Machine vision-based image processing method and device
JPH06348991A (en) Traveling environment recognizer for traveling vehicle
CN106340031A (en) Method and device for detecting moving object
JP4821399B2 (en) Object identification device
CN110688876A (en) Lane line detection method and device based on vision
JP2018055591A (en) Information processing apparatus, information processing method and program
US7346193B2 (en) Method for detecting object traveling direction
JP2611326B2 (en) Road recognition device for vehicles
JPH0520593A (en) Travelling lane recognizing device and precedence automobile recognizing device
JP2003085535A (en) Position recognition method for road guide sign
JP3378476B2 (en) Vehicle recognition method
JPH0652554B2 (en) Road sign recognition device
JP2610884B2 (en) Color shading pattern matching method
JP3380436B2 (en) Recognition method of vehicles, etc.
JPS62237591A (en) Color pattern matching system
JP2611325B2 (en) Road recognition device for vehicles
Bak et al. Traffic light recognition with HUV-histogram from daytime driving-view images
TWI521468B (en) Instant traffic lights recognition method and the method used in car aided recognition system