JPH1097699A - Obstacle detecting device for vehicle - Google Patents

Obstacle detecting device for vehicle

Info

Publication number
JPH1097699A
JPH1097699A (application number JP8248241A / JP24824196A)
Authority
JP
Japan
Prior art keywords
area
obstacle
extracting
edge
vehicle
Prior art date
Legal status
Pending
Application number
JP8248241A
Other languages
Japanese (ja)
Inventor
Makoto Nishida
誠 西田
Current Assignee
Toyota Motor Corp
Original Assignee
Toyota Motor Corp
Priority date
Filing date
Publication date
Application filed by Toyota Motor Corp filed Critical Toyota Motor Corp
Priority to JP8248241A priority Critical patent/JPH1097699A/en
Publication of JPH1097699A publication Critical patent/JPH1097699A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an obstacle detecting device for a vehicle that extracts horizontal and vertical edges in a shorter time and with improved processing efficiency, by limiting the region for extracting horizontal edges on the basis of the left and right guide lines and the region for extracting vertical edges on the basis of the extracted horizontal edges. SOLUTION: The device is provided with horizontal edge extracting means M2, which extracts horizontal edges in the region between the recognized left and right guide lines; region estimating means M3, which estimates, from the horizontal edges, a region where an obstacle may exist; vertical edge extracting means M4, which extracts vertical edges within that possibility region; and judging means M5, which judges the obstacle from the extracted vertical edges. Because each edge-extraction region is confined to a part of the captured image, the time the device needs to extract horizontal and vertical edges is shortened, and its processing efficiency and processing speed are improved.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention: The present invention relates to an obstacle detecting device for a vehicle, and more particularly to a device that detects an obstacle from an image of the road ahead of the vehicle.

[0002]

2. Description of the Related Art: Obstacle detecting apparatuses that perform image processing on an image captured ahead of the vehicle to detect obstacles such as a preceding vehicle have been proposed. For example, Japanese Patent Application No. 4-151343 discloses a device that captures an image ahead of the vehicle, detects horizontal edges and vertical edges in the image, and judges as an obstacle any region in which both a run of continuous horizontal edges and a run of continuous vertical edges exist.

[0003]

SUMMARY OF THE INVENTION: In the conventional apparatus, horizontal edges and vertical edges are detected over the entire captured image before the obstacle judgment is made. Edge detection is therefore also performed in regions where no obstacle can exist, so processing efficiency is poor and processing takes time.

[0004] The present invention has been made in view of the above points. Its object is to provide an obstacle detecting device for a vehicle in which the region for extracting horizontal edges is limited on the basis of the left and right guide lines, and the region for extracting vertical edges is limited on the basis of the extracted horizontal edges, thereby shortening the time required for extracting horizontal and vertical edges and improving processing efficiency.

[0005]

MEANS FOR SOLVING THE PROBLEM: As shown in FIG. 1, the invention of claim 1 is an obstacle detecting device for a vehicle that detects an obstacle from an image M0 captured ahead of the vehicle, comprising: guide line recognizing means M1 that recognizes the left and right guide lines in the image M0; horizontal edge extracting means M2 that extracts, within the region between the recognized left and right guide lines, horizontal edges at which the luminance difference between vertically adjacent pixels is large; region estimating means M3 that estimates, from the extracted horizontal edges, a possibility region where an obstacle may exist; vertical edge extracting means M4 that extracts, within the estimated possibility region, vertical edges at which the luminance difference between horizontally adjacent pixels is large; and judging means M5 that judges the obstacle from the extracted vertical edges.

[0006] In this way, the region for extracting horizontal edges is limited by the recognized left and right guide lines, and the possibility region estimated from the horizontal edge results is the only region in which vertical edges are extracted. Each edge-extraction region is therefore only a part of the captured image, so the time required for extraction is shortened, processing efficiency improves, and high-speed processing becomes possible.

[0007]

DESCRIPTION OF THE PREFERRED EMBODIMENTS: FIG. 2 is a block diagram of an embodiment of the apparatus of the present invention. A video camera 10 captures an image of the road ahead of the host vehicle. The image signal is supplied to an image input circuit 11, A/D converted, and supplied to an interface circuit 12. Under the control of a CPU 20, the image data read out of the interface circuit 12 is supplied through a bus 14 to a video RAM 15 and stored there.

[0008] Under the control of the CPU 20, a correlation processor 21 computes the correlation between template image data read out of a RAM 16 and search-area image data read out of the video RAM 15, and supplies the resulting correlation value to the CPU 20. The CPU 20 executes a processing program stored in a ROM 18 to judge obstacles such as a preceding vehicle in the image, using the RAM 16 as a work area. The CPU 20 also outputs the obstacle judgment result through an interface circuit 17 to an external system such as an automatic driving control system.

[0009] FIG. 3 is a flowchart of the obstacle judgment processing executed by the CPU 20. This processing is executed at fixed time intervals. In step S10, the image data of one frame captured by the video camera is transferred from the interface circuit 12 into the video RAM 15. Next, in step S12, which corresponds to the guide line recognizing means M1, white line recognition is performed. The guide lines are not limited to white lines; they may also be yellow no-passing lines.

[0010] As shown in FIG. 4(A), search areas 33_1 to 33_2n (where n is, for example, 9) are provided for the white lines 31 and 32, which are the guide lines in the road image 30, and the vertical and horizontal position of each search area 33_1 to 33_2n is fixed within the image 30.

[0011] When one frame is 512 × 512 pixels, each search area 33_1 to 33_2n consists of, for example, 32 pixels horizontally by 23 pixels vertically. In addition, 2n templates, one for each of the search areas 33_1 to 33_2n, are stored in the RAM 16; each template consists of 16 pixels horizontally by 8 pixels vertically. FIG. 4(B) shows the template corresponding to the search area 33_1, drawn enlarged relative to FIG. 4(A); 35 is the white line portion and 30 is the asphalt (background) portion.

[0012] When the correlation processor 21 is supplied with the image data of the search area 33_1 and the image data of the corresponding template, it computes the correlation between them and transfers the position of the white line portion in the search area 33_1, together with the correlation value, to the CPU 20. The correlation between each of the search areas 33_2 to 33_2n and its corresponding template is computed in the same way, and the white line position and correlation value for each search area are transferred to the CPU 20. From these search results the CPU 20 selects only those whose correlation value is at or below a predetermined reference value, so that they are recognized as white line. Based on the white line positions in the selected search areas, it then obtains the position and slope of each of the white lines 31 and 32, for example by the least-squares method, further obtains the position of the vanishing point 34, and thereby completes the white line recognition.
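The least-squares step above can be sketched as follows. Regressing x on y (rather than y on x) suits lane lines, which are close to vertical in the image, and the vanishing point then follows as the intersection of the two fitted lines. The function names and the x = a·y + b parameterization are illustrative assumptions, not taken from the patent.

```python
def fit_line_x_on_y(points):
    """Least-squares fit of x = a*y + b to white-line sample points
    (list of (x, y) tuples). Regressing x on y keeps the slope small
    and well conditioned for near-vertical lane lines."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

def vanishing_point(left, right):
    """Intersection of two lines given in x = a*y + b form,
    i.e. solve a1*y + b1 = a2*y + b2 for y."""
    (a1, b1), (a2, b2) = left, right
    y = (b2 - b1) / (a1 - a2)
    return a1 * y + b1, y
```

With the two fitted lines in hand, the lane-boundary X coordinate at any row y is simply a·y + b, which is what the region estimation of step S22 needs.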

[0013] When the white line recognition is finished, the processing proceeds to step S14, which corresponds to the horizontal edge extracting means M2, and horizontal edges are detected in the region between the white lines 31 and 32, that is, inside the white lines. A horizontal edge is a position at which the luminance difference between vertically adjacent pixels is at least a predetermined threshold.

[0014] Next, in step S16, the detected horizontal edges are projected onto the Y axis. Projection here means counting, for each Y coordinate, the horizontal-edge pixels that share that Y coordinate but have different X coordinates. The processing then proceeds to step S18, where the Y-axis positions at which the Y-axis projection value is at least a threshold TH(y) are extracted and designated region candidate positions Cy(k) in descending order of Y coordinate; at the same time the number Nc of region candidate positions Cy(k) is obtained. Suppose, for example, that for the image shown in FIG. 5 the Y-axis projection of the horizontal edges is as shown by the solid line in FIG. 6. The threshold TH(y) is a function of the Y coordinate whose value increases as the Y coordinate increases, as shown by the dash-dot line; this is because the larger the Y-axis position, the closer the obstacle is to the host vehicle, and the larger the Y-axis projection value of its horizontal edges. In this example the region candidate positions Cy(0), Cy(1), and Cy(2) are extracted.
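Steps S16 and S18 reduce to a counting pass followed by a thresholded scan. The threshold is passed in as a function th(y) so that it can grow with y, as the dash-dot line in FIG. 6 does; the names are illustrative.

```python
def y_projection(edges, height):
    """Step S16: count horizontal-edge pixels per Y coordinate."""
    proj = [0] * height
    for _, y in edges:
        proj[y] += 1
    return proj

def candidate_positions(proj, th):
    """Step S18: Y positions whose projection value reaches the
    threshold TH(y), returned in descending Y order (nearest
    obstacle first) together with their count Nc."""
    cys = [y for y, v in enumerate(proj) if v >= th(y)]
    cys.sort(reverse=True)
    return cys, len(cys)
```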

[0015] Next, in step S20, a candidate counter k is reset to 0 and the processing proceeds to step S22. In step S22, which corresponds to the region estimating means M3, the region where an obstacle may exist is estimated. This possibility region is a rectangle whose lower edge is the region candidate position Cy(k). Let Ll(k) be the X coordinate of the point on the left white line 31 whose Y coordinate equals Cy(k), let Lr(k) be the X coordinate of the point on the right white line 32 whose Y coordinate equals Cy(k), and let Wy(k) be the distance between Lr(k) and Ll(k) multiplied by a predetermined coefficient α:

Wy(k) = α · (Lr(k) − Ll(k))

The region is then expressed by its upper-left and lower-right coordinates as follows.

[0016] (Ll(k), Cy(k) − Wy(k)), (Lr(k), Cy(k)). The coefficient α is, for example, α = 1/2: when Cy(k) is taken as the lower edge of a preceding vehicle, the height of the vehicle is almost always within 1/2 of the lane width (the distance between the white lines 31 and 32). In the image shown in FIG. 5, the possibility region corresponding to the region candidate Cy(0) is the inside of the rectangle defined by the points β1, β2, β3, and β4.
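With the stated assumption α = 1/2, the region estimation of step S22 reduces to a few lines. Here left_line and right_line stand for functions giving the white-line X coordinate at a given Y (for instance the fitted x = a·y + b lines); these helper names are hypothetical.

```python
def possibility_region(cy, left_line, right_line, alpha=0.5):
    """Step S22: rectangle where an obstacle may exist, with the
    region candidate position Cy(k) as its lower edge. alpha = 1/2
    reflects that a vehicle's height rarely exceeds half the lane
    width."""
    ll = left_line(cy)                 # Ll(k)
    lr = right_line(cy)                # Lr(k)
    wy = alpha * (lr - ll)             # Wy(k) = α·(Lr(k) − Ll(k))
    return (ll, cy - wy), (lr, cy)     # upper-left, lower-right
```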

[0017] The processing then proceeds to step S24, which corresponds to the vertical edge extracting means M4, and vertical edges are detected inside the possibility region. A vertical edge is a position at which the luminance difference between horizontally adjacent pixels is at least a predetermined threshold. Next, in step S26, the detected vertical edges are projected onto the X axis; that is, the vertical-edge pixels that share an X coordinate but have different Y coordinates are counted for each X coordinate. Then, in step S28, the X-axis positions at which the X-axis projection value is at least a predetermined threshold THx(y) are extracted and designated Vx(i) in ascending order of X-axis position. Suppose, for example, that for the possibility region β1, β2, β3, β4 of the image shown in FIG. 5, the X-axis projection of the vertical edges is as shown by the solid line in FIG. 7; comparison with the threshold THx(y), shown by the dash-dot line, extracts the positions Vx(0), Vx(1), and Vx(2), and at the same time the number Nx of extracted points Vx(i) is obtained. The threshold THx(y) is a function of the Y coordinate (the value of Cy(k)) whose value increases as Cy(k) increases.
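Steps S26 and S28 mirror the Y-axis projection, now along X and confined to the possibility region. A sketch under the same illustrative naming assumptions:

```python
def x_projection(vedges, width):
    """Step S26: count vertical-edge pixels per X coordinate.
    `vedges` holds (x, y) positions found inside the possibility
    region in step S24."""
    proj = [0] * width
    for x, _ in vedges:
        proj[x] += 1
    return proj

def extract_vx(proj, thx):
    """Step S28: X positions whose projection value reaches THx(y),
    in ascending order, together with their count Nx. `thx` is the
    threshold value for the current row Cy(k)."""
    vx = [x for x, v in enumerate(proj) if v >= thx]
    return vx, len(vx)
```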

[0018] Subsequently, in step S30, it is judged whether the number of extracted points Nx is greater than 1; if Nx > 1, the processing proceeds to step S32. In step S32, every pair of extraction positions Vx(i) is formed, and the X-direction width W(j) of each pair is obtained. When Vx(0), Vx(1), and Vx(2) have been extracted as in FIG. 7, the widths W(0) = |Vx(0) − Vx(1)|, W(1) = |Vx(0) − Vx(2)|, and W(2) = |Vx(1) − Vx(2)| are obtained.

[0019] Thereafter, in step S34, it is judged for each W(j) whether WTH1(y) < W(j) < WTH2(y) is satisfied. The thresholds WTH1(y) and WTH2(y) are each functions of the Y coordinate Cy(k) whose values increase as Cy(k) increases; WTH1(y) corresponds to the minimum width of a vehicle at the Y coordinate Cy(k), and WTH2(y) corresponds to the maximum width of a vehicle at the Y coordinate Cy(k). If any one of the widths W(j) (W(0) or W(1) or W(2)) satisfies WTH1(y) < W(j) < WTH2(y), the processing proceeds to step S36. If none of the W(j) satisfies it, the width of the detected object does not match a vehicle, so the object is regarded not as an obstacle but as noise, and the processing proceeds to step S38.
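Steps S32 and S34 together form a pairwise width gate; a sketch, with names chosen for illustration:

```python
from itertools import combinations

def vehicle_width_pairs(vx, wth1, wth2):
    """Form every pair of extracted X positions Vx(i), compute its
    X-direction width W(j) = |Vx(i1) - Vx(i2)| (step S32), and keep
    the pairs whose width lies strictly between the minimum and
    maximum vehicle widths WTH1(y) and WTH2(y) at this row
    (step S34)."""
    hits = []
    for x1, x2 in combinations(vx, 2):
        w = abs(x1 - x2)
        if wth1 < w < wth2:
            hits.append((min(x1, x2), max(x1, x2), w))
    return hits  # empty list → regarded as noise, not an obstacle
```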

[0020] In step S36, for a width W(j) that satisfied the condition of step S34, the pair of extraction positions Vx(i) that produced this width W(j) and the region candidate position Cy(k) at that time are output as the left and right edges and the lower edge of the obstacle, and the processing ends. Steps S30 to S40 above correspond to the judging means M5.

[0021] If, on the other hand, Nx ≤ 1 in step S30, the possibility region contains at most a single vertically extending edge, so it is regarded as containing no obstacle, and the processing proceeds to step S38, where the value of the subscript k is incremented by 1. The processing then proceeds to step S40, where it is judged whether k < Nc. If k < Nc, region candidate positions Cy(k) still remain, so the processing returns to step S22 and steps S22 to S40 are repeated. If k ≥ Nc, the processing of detecting the left and right edges of an obstacle from the possibility region has been completed for all region candidate positions Cy(k), so the processing ends.
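The candidate loop of steps S20 to S40 can be sketched as below. Here region_of stands for the region estimation of step S22, and vx_in for the vertical-edge extraction and X-axis projection of steps S24 to S28; both are assumed helpers, and all names are hypothetical rather than taken from the patent.

```python
from itertools import combinations

def judge_obstacles(cys, region_of, vx_in, wth1, wth2):
    """Steps S20-S40: for each region candidate Cy(k), estimate the
    possibility region, extract the Vx(i) positions there, and report
    the first pair whose width matches a vehicle."""
    for cy in cys:                          # S20/S38: k over candidates
        vx = vx_in(region_of(cy))           # S22-S28
        if len(vx) <= 1:                    # S30: single or no edge
            continue                        # → no obstacle here
        for x1, x2 in combinations(vx, 2):  # S32: all pairs W(j)
            w = abs(x1 - x2)
            if wth1 < w < wth2:             # S34: vehicle-width gate
                # S36: left edge, right edge, lower edge
                return min(x1, x2), max(x1, x2), cy
    return None                             # S40: all Cy(k) exhausted
```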

[0022] In this way, the region for extracting horizontal edges is limited by the recognized left and right white lines, and the possibility region estimated from the horizontal edge results is the only region in which vertical edges are extracted. Each edge-extraction region is therefore only a part of the captured image, so the time required for extraction is shortened, processing efficiency improves, and high-speed processing becomes possible.

[0023] Furthermore, because step S14 extracts horizontal edges only in the region between the white lines 31 and 32, horizontal edges of buildings and the like outside the white lines 31 and 32 are not detected, so there is no danger of mistaking such edges for the horizontal edges of an obstacle. The same applies to vertical edges, and the accuracy of obstacle detection is thereby improved.

[0024]

EFFECTS OF THE INVENTION: As described above, the invention of claim 1 is an obstacle detecting device for a vehicle that detects an obstacle from an image captured ahead of the vehicle, comprising: guide line recognizing means that recognizes the left and right guide lines in the image; horizontal edge extracting means that extracts, within the region between the recognized left and right guide lines, horizontal edges at which the luminance difference between vertically adjacent pixels is large; region estimating means that estimates, from the extracted horizontal edges, a possibility region where an obstacle may exist; vertical edge extracting means that extracts, within the estimated possibility region, vertical edges at which the luminance difference between horizontally adjacent pixels is large; and judging means that judges the obstacle from the extracted vertical edges.

[0025] Because the region for extracting horizontal edges is limited by the recognized left and right guide lines, and the possibility region estimated from the horizontal edge results is the only region in which vertical edges are extracted, each edge-extraction region is only a part of the captured image; the extraction time is therefore shortened, processing efficiency improves, and high-speed processing becomes possible.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a principle diagram of the present invention.

FIG. 2 is a block diagram of the present invention.

FIG. 3 is a flowchart of the obstacle detection processing of the present invention.

FIG. 4 is a diagram for explaining a captured image and a template.

FIG. 5 is a diagram for explaining a captured image.

FIG. 6 is a diagram showing the Y-axis projection values of horizontal edges.

FIG. 7 is a diagram showing the X-axis projection values of vertical edges.

EXPLANATION OF SYMBOLS

10 Video camera
11 Image input circuit
12, 17 Interface circuits
15 Video RAM
16 RAM
18 ROM
20 CPU
21 Correlation processor
M0 Image
M1 Guide line recognizing means
M2 Horizontal edge extracting means
M3 Region estimating means
M4 Vertical edge extracting means
M5 Judging means

Claims (1)

[Claim 1] An obstacle detecting device for a vehicle that detects an obstacle from an image captured ahead of the vehicle, characterized by comprising: guide line recognizing means for recognizing left and right guide lines in the image; horizontal edge extracting means for extracting, within a region between the recognized left and right guide lines, horizontal edges at which the luminance difference between vertically adjacent pixels is large; region estimating means for estimating, on the basis of the extracted horizontal edges, a possibility region where an obstacle may exist; vertical edge extracting means for extracting, within the estimated possibility region, vertical edges at which the luminance difference between horizontally adjacent pixels is large; and judging means for judging the obstacle on the basis of the extracted vertical edges.
JP8248241A 1996-09-19 1996-09-19 Obstacle detecting device for vehicle Pending JPH1097699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP8248241A JPH1097699A (en) 1996-09-19 1996-09-19 Obstacle detecting device for vehicle

Publications (1)

Publication Number Publication Date
JPH1097699A true JPH1097699A (en) 1998-04-14

Family

ID=17175264

Family Applications (1)

Application Number Title Priority Date Filing Date
JP8248241A Pending JPH1097699A (en) 1996-09-19 1996-09-19 Obstacle detecting device for vehicle

Country Status (1)

Country Link
JP (1) JPH1097699A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6792147B1 (en) 1999-11-04 2004-09-14 Honda Giken Kogyo Kabushiki Kaisha Object recognition system
US6788817B1 (en) 1999-11-04 2004-09-07 Honda Giken Kogyo Kabushikikaisha Object recognition system
US6990216B2 (en) 2000-09-22 2006-01-24 Nissan Motor Co., Ltd. Method and apparatus for estimating inter-vehicle distance using radar and camera
US7542835B2 (en) 2003-11-11 2009-06-02 Nissan Motor Co., Ltd. Vehicle image processing device
JP2006004188A (en) * 2004-06-17 2006-01-05 Daihatsu Motor Co Ltd Obstacle recognition method and obstacle recognition device
EP1615191A1 (en) 2004-07-05 2006-01-11 Nissan Motor Co., Ltd. Image processing system and method for front-view image sensor in a vehicle
US7623680B2 (en) 2004-07-05 2009-11-24 Nissan Motor Co., Ltd. Image processing system and method for front-view image sensor
US8184859B2 (en) 2005-08-18 2012-05-22 Fujitsu Limited Road marking recognition apparatus and method
JP2007052645A (en) * 2005-08-18 2007-03-01 Fujitsu Ltd Road marking recognition device and system
JP2007115198A (en) * 2005-10-24 2007-05-10 Aisin Aw Co Ltd Vehicle recognition method and in-vehicle device
JP4687381B2 (en) * 2005-10-24 2011-05-25 アイシン・エィ・ダブリュ株式会社 Vehicle recognition method and in-vehicle device
CN107255470A (en) * 2014-03-19 2017-10-17 能晶科技股份有限公司 Obstacle detector
CN107255470B (en) * 2014-03-19 2020-01-10 能晶科技股份有限公司 Obstacle detection device
JP2016164745A (en) * 2015-03-06 2016-09-08 富士通テン株式会社 Obstacle detection device and obstacle detection method

Similar Documents

Publication Publication Date Title
US8184859B2 (en) Road marking recognition apparatus and method
US7729516B2 (en) Ranging device utilizing image processing
JP3350296B2 (en) Face image processing device
JP5058002B2 (en) Object detection device
JP2007179386A (en) Method and apparatus for recognizing white line
JP2006329776A (en) Car position detection method, vehicle speed detection method, and device
CN111695540A (en) Video frame identification method, video frame cutting device, electronic equipment and medium
JPH11351862A (en) Foregoing vehicle detecting method and equipment
JPH1097699A (en) Obstacle detecting device for vehicle
JP2010061375A (en) Apparatus and program for recognizing object
JPH10162118A (en) Device and method for image processing
JPH11195127A (en) Method for recognizing white line and device therefor
JP4123138B2 (en) Vehicle detection method and vehicle detection device
JP2004118757A (en) Detector for traveling lane on road surface
JP2004034946A (en) Image processing device, parking support device, and image processing method
JP2829934B2 (en) Mobile vehicle environment recognition device
JP4529768B2 (en) On-vehicle object detection device and object detection method
JP2001330665A (en) On-vehicle object detector using radar and image processing
JP3319401B2 (en) Roadway recognition device
JP3345995B2 (en) Road white line detector
JP2962799B2 (en) Roadside detection device for mobile vehicles
JP4070450B2 (en) Forward vehicle recognition device and recognition method
JP4842301B2 (en) Pedestrian detection device and program
JP2001116545A (en) Distance image calculator
JP3331095B2 (en) In-vehicle image processing device