JP2000090268A - Vehicle area detecting method - Google Patents

Vehicle area detecting method

Info

Publication number
JP2000090268A
Authority
JP
Japan
Prior art keywords
edge
region
vehicle
vertical
horizontal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP10253957A
Other languages
Japanese (ja)
Inventor
Kazuyoshi Otsuka
和由 大塚
Shiyourin Kiyo
昭倫 許
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP10253957A priority Critical patent/JP2000090268A/en
Publication of JP2000090268A publication Critical patent/JP2000090268A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a vehicle area detecting method with improved detection accuracy for the edge regions belonging to a vehicle. SOLUTION: A set of connected edge regions is detected from the forward image obtained by image pickup (step S1). Next, the edge strength (PWR) and length (LEN) of each connected edge region, and the total of its horizontal and vertical variance values (VA), are calculated (steps S2-S6). The connected edge regions are then sorted in order of highest PWR, highest LEN, and smallest VA (step S7), and the top R connected edge regions (R being a predetermined number of regions to be detected) are judged to be edge regions belonging to the vehicle, yielding the vehicle candidate region (step S8). The vehicle candidate region can thus be detected with high accuracy.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a vehicle area detecting method, and more particularly to a vehicle area detecting method capable of detecting vehicle candidate regions with high accuracy.

[0002]

2. Description of the Related Art

As a device for judging safety in car navigation, devices that image the road ahead and recognize a preceding vehicle (hereinafter, a preceding vehicle) have been put to practical use. To distinguish a preceding vehicle from roadside trees and the like, such a device exploits the fact that the edge image obtained by differentiating the grayscale values of the captured image contains many horizontal-component and vertical-component edge features. That is, it uses the fact that the continuity and strength of the horizontal- and vertical-component edges belonging to the distant background are weak compared with those of the edges belonging to a vehicle candidate region. A vehicle candidate region is an image region, for example rectangular, detected in the forward image as a region where a vehicle is highly likely to be present. A conventional vehicle area detecting method is described below by way of example with reference to the drawings.
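As a rough illustration of the differentiation step described above, the sketch below splits a grayscale image into the two component edge images. This is not taken from the patent: the function name and the simple forward-difference filter are assumptions standing in for whatever derivative filter a real implementation would use. It follows the text's convention (stated later in the description) that the vertical-component edge image carries horizontal line features and the horizontal-component edge image carries vertical line features.

```python
# Illustrative sketch only: a forward-difference stand-in for the
# differentiation that produces the two component edge images.

def edge_components(img):
    """Return (vertical_component, horizontal_component) images.

    Following the text's convention: the vertical-component image
    (differencing along y) responds to horizontal line features, and
    the horizontal-component image (differencing along x) responds to
    vertical line features such as a vehicle's side edges.
    """
    h, w = len(img), len(img[0])
    vert = [[0] * w for _ in range(h)]
    horz = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y + 1 < h:                       # change along y
                vert[y][x] = abs(img[y + 1][x] - img[y][x])
            if x + 1 < w:                       # change along x
                horz[y][x] = abs(img[y][x + 1] - img[y][x])
    return vert, horz

# A sharp vertical boundary between columns 1 and 2 shows up only in
# the horizontal component image:
img = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
vert, horz = edge_components(img)
```

Thresholding the resulting magnitudes then gives the binary edge features the text refers to.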

[0003]

FIG. 3 is a schematic diagram showing an image captured ahead of a traveling vehicle (hereinafter, a forward image), for example with a CCD camera. In the conventional vehicle area detecting method, the grayscale values of the forward image are differentiated, and horizontal- and vertical-component edge features are detected from the resulting edge image. Regions where many edge features are distributed are judged to be edge regions belonging to the preceding vehicle, while regions where edge features are sparse, such as rows of trees, are judged not to belong to the preceding vehicle, and the vehicle candidate region 10 is thereby determined.

[0004]

PROBLEMS TO BE SOLVED BY THE INVENTION

However, the conventional vehicle area detecting method has the problem that the detection accuracy of the vehicle candidate region is low, because the detection accuracy of the edge regions belonging to the vehicle is low. In view of these circumstances, an object of the present invention is to provide a vehicle area detecting method with high detection accuracy for the edge regions belonging to a vehicle.

[0005]

MEANS FOR SOLVING THE PROBLEMS

In order to achieve the above object, a vehicle area detecting method according to the present invention forms an edge image based on a captured forward image, then forms connected edge regions based on the edge image, and detects the vehicle candidate region of a vehicle ahead from the set of connected edge regions. The method comprises: a first step of applying predetermined filter processing to the captured forward image; a second step of binarizing and labeling at least one of the vertical component and the horizontal component of each edge image obtained in the first step to form connected edge regions and thereby a set of connected edge regions; a third step of calculating, for each connected edge region, at least one of a horizontal variance value and a vertical variance value, at least one of a horizontal length and a vertical length, and an edge strength; and a fourth step of extracting, from the set of connected edge regions, the connected edge regions constituting the vehicle candidate region on the basis of at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength.

[0006]

The filter processing usually comprises noise removal processing, contrast improvement processing, and differentiation processing.

[0007]

Preferably, in the third step, at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength are calculated for each edge image constituting a connected edge region, and the respective average values are then calculated.

[0008]

In one example of the processing performed in the fourth step, for each of at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength, a set number of the highest-ranking connected edge regions are extracted from the set of regions as the connected edge regions constituting the vehicle candidate region. In another example, an evaluation value is calculated by multiplying each of these quantities by a predetermined weight constant, and a set number of connected edge regions are extracted from the set in descending order of the evaluation value as the connected edge regions constituting the vehicle candidate region.

[0009]

In still another example of the processing performed in the fourth step, a fifth step extracts, from the set of regions, a first set proportion of the connected edge regions in descending order of edge strength; a sixth step judges whether the number of connected edge regions extracted in the fifth step exceeds a first set number and, if so, further extracts a second set proportion of them ranked on at least one of the horizontal variance value and the vertical variance value (for a variance value, a smaller value ranking higher); and a seventh step judges whether the number of connected edge regions extracted in the sixth step exceeds a second set number and, if so, further extracts a third set proportion of them in descending order of at least one of the horizontal length and the vertical length.

[0010]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Embodiments of the present invention will now be described specifically and in more detail by way of example with reference to the accompanying drawings.

Embodiment 1

This embodiment is one embodiment of the present invention. FIG. 1 is a schematic diagram showing an image captured ahead of a traveling vehicle (hereinafter, a forward image), for example with a CCD camera, in this embodiment. In this embodiment, the forward image is processed by the procedure described below.

[0011]

FIG. 2 is a flowchart showing the processing procedure of this embodiment. First, predetermined filter processing such as noise removal, contrast improvement, and differentiation is applied to the captured forward image. Binarization and labeling are then applied to the detected vertical-component or horizontal-component edge image (differential image), and the set of connected edge regions thus formed is detected (step S1).
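A minimal sketch of the binarization and labeling in step S1, under stated assumptions: the patent specifies neither the threshold nor the connectivity, so the threshold value and 4-connectivity below are illustrative choices, and the function name is my own.

```python
# Sketch of step S1: binarize an edge (differential) image and group
# 4-connected edge pixels into labeled regions via breadth-first search.
from collections import deque

def label_edges(edge_img, thresh):
    """Binarize `edge_img` and label 4-connected components.

    Returns a dict mapping label number -> list of (x, y) pixels,
    i.e. the set of connected edge regions.
    """
    h, w = len(edge_img), len(edge_img[0])
    binary = [[1 if edge_img[y][x] >= thresh else 0 for x in range(w)]
              for y in range(h)]
    labels = {}
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                n += 1                       # new label
                labels[n] = []
                queue = deque([(x, y)])
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    labels[n].append((cx, cy))
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = cx + dx, cy + dy
                        if (0 <= nx < w and 0 <= ny < h
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
    return labels

# Two separated runs of strong edge pixels yield two labels:
edge_img = [
    [0, 9, 9, 0, 0, 9],
    [0, 0, 0, 0, 0, 9],
]
regions = label_edges(edge_img, thresh=5)
```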

[0012]

Next, the following evaluation values are calculated for each labeled pixel set after the labeling processing (steps S2 to S4). First, to evaluate the continuity of the vertical and horizontal edges, the area, length, horizontal variance value, and vertical variance value are calculated as follows.

(a) The area is the number of pixels carrying the same label:

S[n] = (total number of pixels belonging to label [n], the n-th label)

(b) The length is the difference between the maximum and minimum X or Y coordinates of the pixels carrying the same label:

Length in the vertical-component edge image : LEN[n] = Xn#max - Xn#min
Length in the horizontal-component edge image : LEN[n] = Yn#max - Yn#min

where Xn#max, Yn#max are the maximum X and Y coordinate values of the pixels belonging to label [n], and Xn#min, Yn#min are the minimum X and Y coordinate values of the pixels belonging to label [n].

(c) The horizontal variance value and the vertical variance value are calculated as the variance over the pixels carrying the same label, from the vertical-component and horizontal-component edge images respectively:

Horizontal variance value : VA[n] = (Σ(Yn(i) - Yn#av)^2) / S[n]
Vertical variance value : VA[n] = (Σ(Xn(i) - Xn#av)^2) / S[n]

where Xn(i), Yn(i) are the X and Y coordinate values of the i-th pixel belonging to label [n], and Xn#av, Yn#av are the averages of the X and Y coordinate values of the pixels belonging to label [n].
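The per-label statistics of steps S2-S4 can be sketched as follows. The function name is an assumption, and the squared deviation in the variance is assumed, completing the Σ(y - y_avg)^2 / S[n] form that the printed formula abbreviates.

```python
# Sketch of steps S2-S4: area S[n], length LEN[n], and variance VA[n]
# for one labeled pixel set.

def region_stats(pixels, horizontal_line=True):
    """pixels: list of (x, y) for one label.

    A horizontal line feature (from the vertical-component edge image)
    is measured by its x-extent and the variance of its y coordinates;
    a vertical line feature by its y-extent and x-variance.
    Returns (area, length, variance).
    """
    s = len(pixels)                              # S[n]: pixel count
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    if horizontal_line:
        length = max(xs) - min(xs)               # LEN[n] = Xmax - Xmin
        avg = sum(ys) / s
        var = sum((y - avg) ** 2 for y in ys) / s
    else:
        length = max(ys) - min(ys)               # LEN[n] = Ymax - Ymin
        avg = sum(xs) / s
        var = sum((x - avg) ** 2 for x in xs) / s
    return s, length, var

# A perfectly straight 4-pixel horizontal run has zero y-variance:
s, length, var = region_stats([(0, 3), (1, 3), (2, 3), (3, 3)])
```

A low variance thus indicates a straight, continuous edge of the kind a vehicle outline produces.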

[0013]

Further, for each label of each connected edge region, the average edge strength in the edge image is calculated over the pixels carrying the same label (step S5):

pn(x,y) = edge strength of a pixel belonging to label [n]
Edge strength : PWR[n] = Σpn(x,y) / S[n]

Note that horizontal line images are detected in the vertical-component edge image, and vertical line images in the horizontal-component edge image. Label numbers do not overlap between the vertical-component and horizontal-component edge images.
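Step S5 in sketch form, under the same assumptions as the earlier fragments (the function name is my own; `edge_img` holds the gradient magnitudes the text calls edge strength):

```python
# Sketch of step S5: PWR[n] = sum of pn(x, y) over the label's pixels,
# divided by the pixel count S[n].

def edge_strength(pixels, edge_img):
    """Mean edge-image value over the labeled pixels (x, y)."""
    return sum(edge_img[y][x] for x, y in pixels) / len(pixels)

edge_img = [[0, 8, 12]]
pwr = edge_strength([(1, 0), (2, 0)], edge_img)
```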

[0014]

After these evaluation values have been calculated, for each connected edge region the following totals are calculated from the evaluation values of the labels belonging to that region (step S6):

Total edge strength : PWR = ΣPWR[n]
Total length : LEN = ΣLEN[n]
Total of horizontal and vertical variance values : VA = ΣVA[n]

[0015]

The connected edge regions are then sorted in descending order of PWR, descending order of LEN, and ascending order of VA (step S7), and the top R connected edge regions (R being a predetermined number of regions to be detected) are judged to be edge regions belonging to the vehicle, whereby the vehicle candidate region 11 is detected (step S8). The vehicle candidate region can thereby be detected with high accuracy. A region such as region 12 (FIG. 1), for example, is efficiently excluded from the vehicle candidate regions.
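One plausible reading of steps S7-S8 is a lexicographic ranking, sketched below. The tie-breaking order and the example region values are assumptions, not from the patent.

```python
# Sketch of steps S7-S8 (Embodiment 1): rank regions by PWR descending,
# then LEN descending, then VA ascending, and keep the top R.

def select_candidates(regions, r):
    """regions: list of dicts with the 'PWR', 'LEN', 'VA' totals of
    step S6. Returns the R highest-ranking regions."""
    ranked = sorted(regions, key=lambda g: (-g["PWR"], -g["LEN"], g["VA"]))
    return ranked[:r]

# Hypothetical totals: strong straight edges (bumper, roofline) should
# outrank weak scattered edges (trees).
regions = [
    {"id": "bumper",   "PWR": 90, "LEN": 40, "VA": 1.0},
    {"id": "roofline", "PWR": 80, "LEN": 35, "VA": 2.0},
    {"id": "trees",    "PWR": 20, "LEN": 12, "VA": 9.0},
]
best = select_candidates(regions, 2)
```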

[0016]

Embodiment 2

In this embodiment, instead of step S7 of Embodiment 1, PWR, LEN, and VA are multiplied by preset weighting constants A, B, and C respectively, and their sum is calculated as an evaluation point value (POINT):

POINT = A * PWR + B * LEN + C * VA

The connected edge regions are then sorted in descending order of POINT, the top predetermined number of connected edge regions are judged to be edge regions belonging to the vehicle, and the vehicle candidate region is detected.
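The weighted score of Embodiment 2 can be sketched as below. The constants A, B, C are illustrative assumptions (C chosen negative, since elsewhere in the text a smaller variance ranks higher); the patent only says they are preset.

```python
# Sketch of Embodiment 2: a single weighted evaluation point value,
# POINT = A * PWR + B * LEN + C * VA, ranked in descending order.

A, B, C = 1.0, 2.0, -5.0   # assumed weights, not from the patent

def point(region):
    return A * region["PWR"] + B * region["LEN"] + C * region["VA"]

def select_by_point(regions, r):
    return sorted(regions, key=point, reverse=True)[:r]

regions = [
    {"id": "bumper",   "PWR": 90, "LEN": 40, "VA": 1.0},
    {"id": "roofline", "PWR": 80, "LEN": 35, "VA": 2.0},
    {"id": "trees",    "PWR": 20, "LEN": 12, "VA": 9.0},
]
top = select_by_point(regions, 2)
```

Collapsing the three criteria into one score avoids the tie-breaking order of Embodiment 1 at the cost of having to tune the weights.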

[0017]

Embodiment 3

In this embodiment, noting that in sorting the connected edge regions the evaluation values rank in importance in the order PWR, VA, LEN, the following processing is performed instead of step S7 of Embodiment 1. First, the connected edge regions are sorted in descending order of PWR, and roughly the top 20% are extracted. If the number of extracted connected edge regions exceeds R (R being a predetermined number of regions to be detected), the extracted regions are further sorted by VA and the top 50% are extracted (for a variance value, a smaller value ranks higher). If the number of extracted connected edge regions still exceeds R, they are further sorted in descending order of LEN, and the top R connected edge regions are extracted and taken as the edge regions belonging to the vehicle, determining the vehicle candidate region.
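The three-stage cascade of Embodiment 3 can be sketched as follows. The percentages and the PWR → VA → LEN order mirror the text; the integer rounding of the percentage cuts and the function name are assumptions.

```python
# Sketch of Embodiment 3: filter by PWR (top ~20%), then by smallest VA
# (top ~50%) if more than R regions remain, then by LEN (top R).

def cascade_select(regions, r):
    pool = sorted(regions, key=lambda g: g["PWR"], reverse=True)
    pool = pool[:max(1, len(pool) * 20 // 100)]      # top ~20% by PWR
    if len(pool) > r:
        pool = sorted(pool, key=lambda g: g["VA"])   # small VA ranks high
        pool = pool[:max(1, len(pool) * 50 // 100)]  # keep ~50%
    if len(pool) > r:
        pool = sorted(pool, key=lambda g: g["LEN"], reverse=True)[:r]
    return pool

# 20 hypothetical regions whose PWR, LEN, and VA all equal their id:
regions = [{"id": i, "PWR": i, "LEN": i, "VA": i} for i in range(20)]
survivors = cascade_select(regions, 1)
```

Because each stage only re-sorts the survivors of the previous one, the cheap dominant criterion (PWR) prunes most regions before the others are consulted.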

[0018]

EFFECTS OF THE INVENTION

According to the present invention, connected edge regions are formed from a captured forward image, and for each connected edge region at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength are calculated; the connected edge regions constituting the vehicle candidate region are then extracted on the basis of the calculated values. The vehicle candidate region can thereby be detected with high accuracy and efficiency.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram showing, in Embodiment 1, an image captured ahead of a traveling vehicle, for example with a CCD camera.

FIG. 2 is a flowchart showing the processing procedure of Embodiment 1.

FIG. 3 is a schematic diagram showing an image captured ahead of a traveling vehicle, for example with a CCD camera, in the conventional method.

EXPLANATION OF SYMBOLS

10 Vehicle candidate region
11 Vehicle candidate region
12 Region

Continuation of front page — F-terms (reference): 5B057 AA16 BA29 CE02 CE11 DA08 DC14 DC16; 5L096 BA04 EA35 EA43 FA06 FA32 FA33 FA64 GA02 GA34

Claims (6)

1. A vehicle area detecting method which forms an edge image based on a captured forward image, then forms connected edge regions based on the edge image, and detects a vehicle candidate region of a vehicle ahead from a set of connected edge regions, the method comprising: a first step of applying predetermined filter processing to the captured forward image; a second step of binarizing and labeling at least one of a vertical component and a horizontal component of each edge image obtained in the first step to form connected edge regions and thereby a set of connected edge regions; a third step of calculating, for each connected edge region, at least one of a horizontal variance value and a vertical variance value, at least one of a horizontal length and a vertical length, and an edge strength; and a fourth step of extracting, from the set of connected edge regions, the connected edge regions constituting the vehicle candidate region on the basis of at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength.
2. The vehicle area detecting method according to claim 1, wherein the filter processing comprises noise removal processing, contrast improvement processing, and differentiation processing.
3. The vehicle area detecting method according to claim 1 or 2, wherein in the third step at least one of a horizontal variance value and a vertical variance value, at least one of a horizontal length and a vertical length, and an edge strength are calculated for each edge image constituting a connected edge region, and the respective average values are further calculated.
4. The vehicle area detecting method according to any one of claims 1 to 3, wherein in the fourth step, for each of at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength, a set number of the highest-ranking connected edge regions are extracted from the set of regions as the connected edge regions constituting the vehicle candidate region.
5. The vehicle area detecting method according to any one of claims 1 to 3, wherein in the fourth step an evaluation value is calculated by multiplying each of at least one of the horizontal variance value and the vertical variance value, at least one of the horizontal length and the vertical length, and the edge strength by a predetermined weight constant, and a set number of connected edge regions are extracted from the set in descending order of the evaluation value as the connected edge regions constituting the vehicle candidate region.
6. The vehicle area detecting method according to any one of claims 1 to 3, wherein the fourth step comprises: a fifth step of extracting, from the set of regions, a first set proportion of the connected edge regions in descending order of edge strength; a sixth step of judging whether the number of connected edge regions extracted in the fifth step exceeds a first set number and, if so, further extracting a second set proportion of the connected edge regions ranked on at least one of the horizontal variance value and the vertical variance value; and a seventh step of judging whether the number of connected edge regions extracted in the sixth step exceeds a second set number and, if so, further extracting a third set proportion of the connected edge regions in descending order of at least one of the horizontal length and the vertical length.
JP10253957A 1998-09-08 1998-09-08 Vehicle area detecting method Pending JP2000090268A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP10253957A JP2000090268A (en) 1998-09-08 1998-09-08 Vehicle area detecting method


Publications (1)

Publication Number Publication Date
JP2000090268A true JP2000090268A (en) 2000-03-31

Family

ID=17258324

Family Applications (1)

Application Number Title Priority Date Filing Date
JP10253957A Pending JP2000090268A (en) 1998-09-08 1998-09-08 Vehicle area detecting method

Country Status (1)

Country Link
JP (1) JP2000090268A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005332130A (en) * 2004-05-19 2005-12-02 Sony Corp Image processor, image processing method, program for image processing method and recording medium with its program recorded thereon
JP4534594B2 (en) * 2004-05-19 2010-09-01 ソニー株式会社 Image processing apparatus, image processing method, program for image processing method, and recording medium recording program for image processing method
US7231288B2 (en) 2005-03-15 2007-06-12 Visteon Global Technologies, Inc. System to determine distance to a lead vehicle
CN105260701A (en) * 2015-09-14 2016-01-20 中电海康集团有限公司 Front vehicle detection method applied to complex scene
CN105260701B (en) * 2015-09-14 2019-01-29 中电海康集团有限公司 A kind of front vehicles detection method suitable under complex scene
