JP2005149250A - Vehicle detection method and vehicle detection system - Google Patents


Info

Publication number
JP2005149250A
JP2005149250A JP2003387412A JP2003387412A JP2005149250A JP 2005149250 A JP2005149250 A JP 2005149250A JP 2003387412 A JP2003387412 A JP 2003387412A JP 2003387412 A JP2003387412 A JP 2003387412A JP 2005149250 A JP2005149250 A JP 2005149250A
Authority
JP
Japan
Prior art keywords
vehicle
voting
candidate area
image
length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2003387412A
Other languages
Japanese (ja)
Inventor
Hitoomi Takizawa
仁臣 滝澤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daihatsu Motor Co Ltd
Original Assignee
Daihatsu Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daihatsu Motor Co Ltd filed Critical Daihatsu Motor Co Ltd
Priority to JP2003387412A priority Critical patent/JP2005149250A/en
Publication of JP2005149250A publication Critical patent/JP2005149250A/en
Withdrawn legal-status Critical Current


Abstract

PROBLEM TO BE SOLVED: To recognize and detect a vehicle ahead simply and quickly by stably extracting an abstracted vehicle feature quantity with a small-scale configuration that is robust against noise and the like.
SOLUTION: A rectangular prediction area enclosing the vehicle ahead is set as a vehicle candidate area in an image captured by a monocular camera mounted on the host vehicle. A noise-robust voting process is applied to the vertical edge image and the horizontal edge image of the area to form a plurality of vertical edge side voting planes and horizontal edge side voting planes with different voting positions. From the coordinates of the peak value on the composite voting plane into which the voting results are superposed, the vehicle center position, i.e. the center of the vehicle ahead in the vehicle width and vehicle height directions, is stably extracted as a vehicle feature quantity. The vehicle ahead is recognized and detected on the basis of this vehicle feature quantity.
COPYRIGHT: (C)2005,JPO&NCIPI

Description

本発明は、自車に搭載した画像センサの自車前方の撮影画像から、走行中の自車の前方の車両を、認識して検出する車両検出方法及び車両検出装置に関する。   The present invention relates to a vehicle detection method and a vehicle detection device for recognizing and detecting a vehicle ahead of a running vehicle from a captured image in front of the vehicle of an image sensor mounted on the vehicle.

従来、ASVと呼ばれる先進安全自動車(Advanced Safety Vehicle)等の車両は、画像センサを搭載し、この画像センサの撮影画像の画像処理により、自車前方の車両(以下、前方車両という)を認識して検出し、この検出に基づいて追突可否判定等を行っている。   Conventionally, a vehicle such as an advanced safety vehicle (ASV) is equipped with an image sensor, recognizes and detects a vehicle ahead of the host vehicle (hereinafter referred to as a preceding vehicle) by image processing of the images captured by the image sensor, and determines the possibility of a rear-end collision and the like on the basis of this detection.

そして、前記の画像処理は画像テンプレートマッチングのパターン認識処理であり、この認識処理により、前方車両の輪郭線を抽出し、抽出した輪郭線で定まる前方車両の車両パターンと、あらかじめ記憶した種々のプロトタイプの車両パターンとの一致不一致を判別し、車両特徴量としての車両パターンから前方車両を認識して検出する(例えば、特許文献1参照。)。   The above image processing is pattern recognition based on image template matching: the contour of the preceding vehicle is extracted, the vehicle pattern of the preceding vehicle determined by the extracted contour is compared with various prototype vehicle patterns stored in advance, and the preceding vehicle is recognized and detected from the vehicle pattern used as a vehicle feature quantity (see, for example, Patent Document 1).

特開平7−182484号公報(段落番号[0011]、図9)JP 7-182484 A (paragraph number [0011], FIG. 9)

前記従来のように前方車両の輪郭線を抽出してパターン認識を行う場合、抽出結果が走行環境の明るさの変化等のノイズ（外乱）の影響を容易に受けることから、輪郭線抽出のしきい値を走行環境の明るさの変化等に応じて動的に可変する必要があるが、このしきい値の可変により、同じ車両であっても、抽出した輪郭線に基づく車両パターンが変わり、場合によっては、該当するプロトタイプの車両パターンと大きく異なって前方車両の認識が困難になり、検出不能の事態を招来する虞がある。   When pattern recognition is performed by extracting the contour of the preceding vehicle as in the prior art, the extraction result is easily affected by noise (disturbance) such as changes in the brightness of the driving environment, so the contour extraction threshold must be varied dynamically according to such changes. However, varying this threshold changes the vehicle pattern based on the extracted contour even for the same vehicle, and in some cases the pattern differs greatly from the corresponding prototype vehicle pattern, making recognition of the preceding vehicle difficult and possibly leading to a failure of detection.

また、車両特徴量として、全天候についての全車種分のプロトタイプの車両パターンを用意することは事実上不可能であり、認識精度の向上を図ること等ができない問題点がある。   Further, it is practically impossible to prepare prototype vehicle patterns for all vehicle types as vehicle feature values, and there is a problem that recognition accuracy cannot be improved.

しかも、多数のプロトタイプの車両パターンに基づく、時間のかかる複雑なパターン認識の処理を行う必要があるため、小規模な構成で簡単かつ迅速に前方車両を認識して検出することができない問題点もある。   In addition, since time-consuming and complicated pattern recognition processing based on a large number of prototype vehicle patterns must be performed, there is also the problem that the preceding vehicle cannot be recognized and detected easily and quickly with a small-scale configuration.

そして、これらの問題を解消するため、ノイズ等に強くロバストで小規模な構成により、車両特徴量を抽象化（簡素化）して前方車両を認識することが望まれるが、その具体的構成は、なんら発明されていない。   To solve these problems, it is desirable to recognize the preceding vehicle by abstracting (simplifying) the vehicle feature quantity with a small-scale configuration that is robust against noise and the like, but no specific configuration for doing so has been devised.

本発明は、ノイズ等に強くロバストで小規模な構成により車両特徴量を抽象化して安定に抽出し、簡単かつ迅速に前方車両を認識して検出する具体的な構成の車両検出方法および車両検出装置を提供することを目的とする。   An object of the present invention is to provide a vehicle detection method and a vehicle detection device of a specific configuration that abstract and stably extract a vehicle feature quantity with a small-scale configuration robust against noise and the like, and that recognize and detect the preceding vehicle easily and quickly.

上記した目的を達成するために、本発明の車両検出方法は、自車に搭載した画像センサの自車前方の撮影画像に、前方車両を囲む矩形の予測領域を車両候補領域として設定し、前記車両候補領域の垂直エッジ画像につき、前記車両候補領域の左半分の垂直エッジを設定長T*右側に移動した位置に+1を投票し、前記車両候補領域の右半分の各垂直エッジを前記設定長T*左側に移動した位置に+1を投票して垂直エッジ側投票平面を形成することを、前記設定長T*をTmin≦T*≦Tmax、(Tmin、Tmaxは設定した最小長、最大長)の範囲で可変しながらくり返し、前記車両候補領域の水平エッジ画像につき、前記車両候補領域の下半分の水平エッジを設定長H*上側に移動した位置に+1を投票し、前記車両候補領域の上半分の各水平エッジを前記設定長H*下側に移動した位置に+1を投票して水平エッジ側投票平面を形成することを、前記設定長H*をHmin≦H*≦Hmax、(Hmin、Hmaxは設定した最小長、最大長)の範囲で可変しながらくり返し、前記両エッジ側投票平面の投票結果を重ね合わせた合成投票平面のピーク値の座標から前記前方車両の車幅方向及び車高方向の中央の車両センタ位置を車両特徴量として抽出し、前記車両特徴量から前記前方車両を認識して検出することを特徴としている(請求項1)。   In order to achieve the above object, the vehicle detection method of the present invention sets, in an image captured by an image sensor mounted on the host vehicle and showing the area ahead of the host vehicle, a rectangular prediction area surrounding the preceding vehicle as a vehicle candidate area; for the vertical edge image of the vehicle candidate area, votes +1 at the position obtained by moving each vertical edge in the left half of the vehicle candidate area to the right by a set length T* and votes +1 at the position obtained by moving each vertical edge in the right half of the vehicle candidate area to the left by the set length T* to form a vertical edge side voting plane, repeating this while varying the set length T* within the range Tmin ≦ T* ≦ Tmax (Tmin and Tmax being a set minimum length and maximum length); for the horizontal edge image of the vehicle candidate area, votes +1 at the position obtained by moving each horizontal edge in the lower half of the vehicle candidate area upward by a set length H* and votes +1 at the position obtained by moving each horizontal edge in the upper half of the vehicle candidate area downward by the set length H* to form a horizontal edge side voting plane, repeating this while varying the set length H* within the range Hmin ≦ H* ≦ Hmax (Hmin and Hmax being a set minimum length and maximum length); extracts, as a vehicle feature quantity, the vehicle center position at the center of the preceding vehicle in the vehicle width direction and the vehicle height direction from the coordinates of the peak value of a composite voting plane obtained by superimposing the voting results of both edge side voting planes; and recognizes and detects the preceding vehicle from the vehicle feature quantity (claim 1).

また、本発明の車両検出方法は、前方車両を囲む矩形の予測領域が、自車前方の撮影画像の垂直、水平のエッジが集中する領域であることを特徴とし（請求項2）、自車にレーザレーダ、ミリ波レーダ等の測距センサを搭載し、前記測距センサの測距対象の位置から自車前方の車両を囲む矩形の予測領域を設定することも特徴とする（請求項3）。   The vehicle detection method of the present invention is also characterized in that the rectangular prediction area surrounding the preceding vehicle is an area where the vertical and horizontal edges of the captured image ahead of the host vehicle are concentrated (claim 2), and in that a distance measuring sensor such as a laser radar or a millimeter wave radar is mounted on the host vehicle and the rectangular prediction area surrounding the vehicle ahead of the host vehicle is set from the position of the distance measurement target of the distance measuring sensor (claim 3).

さらに、本発明の車両検出方法は、画像センサが単眼カメラであることを特徴とする(請求項4)。   Furthermore, the vehicle detection method of the present invention is characterized in that the image sensor is a monocular camera.

つぎに、本発明の車両検出装置は、自車に搭載されて自車前方を撮影する画像センサと、前記画像センサの撮影画像に前方車両を囲む矩形の予測領域を車両候補領域として設定する候補領域設定手段と、前記車両候補領域の垂直エッジ画像につき、前記車両候補領域の左半分の垂直エッジを設定長T*右側に移動した位置に+1を投票し、前記車両候補領域の右半分の各垂直エッジを前記設定長T*左側に移動した位置に+1を投票して垂直エッジ側投票平面を形成することを、前記設定長T*をTmin≦T*≦Tmax、(Tmin、Tmaxは設定した最小長、最大長)の範囲で可変しながらくり返す垂直エッジ側投票処理手段と、前記車両候補領域の水平エッジ画像につき、前記車両候補領域の下半分の水平エッジを設定長H*上側に移動した位置に+1を投票し、前記車両候補領域の上半分の各水平エッジを前記設定長H*下側に移動した位置に+1を投票して水平エッジ側投票平面を形成することを、前記設定長H*をHmin≦H*≦Hmax、(Hmin、Hmaxは設定した最小長、最大長)の範囲で可変しながらくり返す水平エッジ側投票処理手段と、前記両エッジ側投票平面の投票結果を重ね合わせた合成投票平面を形成する投票結果合成手段と、前記合成投票平面のピーク値の座標から前記自車前方の車両の車幅方向及び車高方向の中央の車両センタ位置を車両特徴量として抽出する車両特徴量抽出手段とを備え、前記車両特徴量から前記前方車両を認識して検出するようにしたことを特徴としている(請求項5)。   Next, the vehicle detection device of the present invention comprises: an image sensor mounted on the host vehicle to capture images of the area ahead of the host vehicle; candidate area setting means for setting, in the image captured by the image sensor, a rectangular prediction area surrounding the preceding vehicle as a vehicle candidate area; vertical edge side voting processing means for, with respect to the vertical edge image of the vehicle candidate area, voting +1 at the position obtained by moving each vertical edge in the left half of the vehicle candidate area to the right by a set length T* and voting +1 at the position obtained by moving each vertical edge in the right half of the vehicle candidate area to the left by the set length T* to form a vertical edge side voting plane, and repeating this while varying the set length T* within the range Tmin ≦ T* ≦ Tmax (Tmin and Tmax being a set minimum length and maximum length); horizontal edge side voting processing means for, with respect to the horizontal edge image of the vehicle candidate area, voting +1 at the position obtained by moving each horizontal edge in the lower half of the vehicle candidate area upward by a set length H* and voting +1 at the position obtained by moving each horizontal edge in the upper half of the vehicle candidate area downward by the set length H* to form a horizontal edge side voting plane, and repeating this while varying the set length H* within the range Hmin ≦ H* ≦ Hmax (Hmin and Hmax being a set minimum length and maximum length); voting result combining means for forming a composite voting plane by superimposing the voting results of both edge side voting planes; and vehicle feature quantity extraction means for extracting, as a vehicle feature quantity, the vehicle center position at the center of the vehicle ahead of the host vehicle in the vehicle width direction and the vehicle height direction from the coordinates of the peak value of the composite voting plane, wherein the preceding vehicle is recognized and detected from the vehicle feature quantity (claim 5).

また、本発明の車両検出装置は、候補領域設定手段により、前方車両を囲む矩形の予測領域を、自車前方の撮影画像の垂直、水平のエッジが集中する領域としたことを特徴とし（請求項6）、自車にレーザレーダ等の測距センサを搭載し、候補領域設定手段により、前記測距センサの測距対象の位置から自車前方の車両を囲む矩形の予測領域を設定するようにしたことも特徴とする（請求項7）。   The vehicle detection device of the present invention is also characterized in that the candidate area setting means sets the rectangular prediction area surrounding the preceding vehicle as an area where the vertical and horizontal edges of the captured image ahead of the host vehicle are concentrated (claim 6), and in that a distance measuring sensor such as a laser radar is mounted on the host vehicle and the candidate area setting means sets the rectangular prediction area surrounding the vehicle ahead of the host vehicle from the position of the distance measurement target of the distance measuring sensor (claim 7).

さらに、本発明の車両検出装置は、画像センサが単眼カメラであることを特徴としている(請求項8)。   Furthermore, the vehicle detection device of the present invention is characterized in that the image sensor is a monocular camera.

まず、請求項1、5の構成によれば、画像センサの撮影画像の前方車両が矩形とみなせることから、矩形の予測領域を撮影画像に車両候補領域として設定し、この車両候補領域の垂直エッジ画像、水平エッジ画像を求める。   First, according to the configurations of claims 1 and 5, since the preceding vehicle in the image captured by the image sensor can be regarded as a rectangle, a rectangular prediction area is set in the captured image as a vehicle candidate area, and a vertical edge image and a horizontal edge image of this vehicle candidate area are obtained.

このとき、車両候補領域の前方車両の垂直エッジ、水平エッジは、ほとんどが、前方車両の車幅、車高それぞれの中央の位置に対して対称の位置である、車両の左右端部、上下端部に発生する。   At this time, most of the vertical and horizontal edges of the preceding vehicle in the vehicle candidate area occur at the left and right ends and at the upper and lower ends of the vehicle, which are positions symmetric about the center positions of the vehicle width and the vehicle height of the preceding vehicle, respectively.

そのため、設定長T*をTmin≦T*≦Tmaxの範囲で可変しながら、車両候補領域の左半分の垂直エッジを設定長T*右側に移動した位置に+1を投票し、車両候補領域の右半分の各垂直エッジを前記設定長T*左側に移動した位置に+1を投票して各垂直エッジ側投票平面を形成すると、設定長T*が画像上で前方車両の車幅の半分のときの垂直エッジ側投票平面は、前方車両の左半分のエッジの投票位置と右半分のエッジの投票位置とが前方車両の車幅方向の中央の位置で一致し、その位置の投票値が+2になる。   Therefore, when +1 is voted at the position obtained by moving each vertical edge in the left half of the vehicle candidate area to the right by the set length T* and +1 is voted at the position obtained by moving each vertical edge in the right half of the vehicle candidate area to the left by the set length T*, forming a vertical edge side voting plane while varying the set length T* within the range Tmin ≦ T* ≦ Tmax, then in the vertical edge side voting plane for which the set length T* equals half the vehicle width of the preceding vehicle on the image, the voting positions of the edges in the left half and of the edges in the right half of the preceding vehicle coincide at the center position in the vehicle width direction, and the voting value at that position becomes +2.

また、設定長H*をHmin≦H*≦Hmaxの範囲で可変しながら、車両候補領域の下半分の水平エッジを設定長H*上側に移動した位置に+1を投票し、車両候補領域の上半分の各水平エッジを設定長H*下側に移動した位置に+1を投票して各水平エッジ側投票平面を形成すると、設定長H*が画像上で前方車両の車高の半分のときの水平エッジ側投票平面は、下半分のエッジの投票位置と上半分のエッジの投票位置とが前方車両の車高方向の中央の位置で一致し、その位置の投票値が+2になる。   Similarly, when +1 is voted at the position obtained by moving each horizontal edge in the lower half of the vehicle candidate area upward by the set length H* and +1 is voted at the position obtained by moving each horizontal edge in the upper half of the vehicle candidate area downward by the set length H*, forming a horizontal edge side voting plane while varying the set length H* within the range Hmin ≦ H* ≦ Hmax, then in the horizontal edge side voting plane for which the set length H* equals half the vehicle height of the preceding vehicle on the image, the voting positions of the edges in the lower half and of the edges in the upper half coincide at the center position in the vehicle height direction, and the voting value at that position becomes +2.

したがって、それらの投票平面の投票結果を重ね合わせた合成投票平面は、前方車両の車幅方向及び車高方向の中央の車両センタ位置の投票値が+4のピーク値になり、このピーク値の座標から、前記の車幅方向及び車高方向の中央の車両センタ位置を、抽象化した前方車両の車両特徴量として安定に抽出することができ、この車両特徴量に基づき、前方車両を認識して検出することができる。   Therefore, in the composite voting plane obtained by superimposing the voting results of these voting planes, the voting value at the vehicle center position, i.e. the center of the preceding vehicle in the vehicle width direction and the vehicle height direction, becomes a peak value of +4; from the coordinates of this peak value, the vehicle center position in the vehicle width and vehicle height directions can be stably extracted as an abstracted vehicle feature quantity of the preceding vehicle, and the preceding vehicle can be recognized and detected on the basis of this vehicle feature quantity.

この場合、多数のプロトタイプのエッジパターンのデータ等を保持しておく必要がなく、しかも、パターン認識の時間のかかる複雑な処理の代わりに、ノイズ等に強い投票方式の簡単な処理を行えばよく、ノイズ等に強いロバストで小規模かつ安価な構成により車両特徴量を抽象化して安定に抽出し、抽出した車両特徴量から簡単かつ迅速に前方車両を認識して検出することができる、具体的な構成を提供することができる。   In this case, there is no need to hold data of a large number of prototype edge patterns, and instead of the time-consuming and complicated processing of pattern recognition, it suffices to perform the simple processing of a noise-resistant voting method; a specific configuration can therefore be provided that abstracts and stably extracts the vehicle feature quantity with a robust, small-scale and inexpensive configuration resistant to noise and the like, and that recognizes and detects the preceding vehicle easily and quickly from the extracted vehicle feature quantity.

また、請求項2、6の構成によれば、自車前方の撮影画像において、前方車両の領域に垂直、水平のエッジが集中して発生することから、自車前方の撮影画像の垂直、水平のエッジが集中する領域を予測領域とし、この領域を車両候補領域に設定することにより、画像センサの撮影画像の画像処理のみにより、車両候補領域を、前方車両を含む適切な領域に設定し、前方車両を確実に検出することができる。   According to the configurations of claims 2 and 6, since vertical and horizontal edges occur concentrated in the region of the preceding vehicle in the captured image ahead of the host vehicle, by taking the region where the vertical and horizontal edges of the captured image are concentrated as the prediction area and setting this region as the vehicle candidate area, the vehicle candidate area can be set to an appropriate region including the preceding vehicle by image processing of the image sensor's captured image alone, and the preceding vehicle can be detected reliably.

さらに、請求項3、7の構成によれば、自車に搭載した測距センサの測距位置は、多くの場合、前方車両のリフレクタ等の位置であり、それらの位置を囲むように予測領域を設定することにより、車両候補領域を、前方車両を含む適切な領域に設定し、前方車両を確実に検出することができる。   Furthermore, according to the configurations of claims 3 and 7, the positions measured by the distance measuring sensor mounted on the host vehicle are in many cases the positions of reflectors and the like of the preceding vehicle; by setting the prediction area so as to surround those positions, the vehicle candidate area can be set to an appropriate region including the preceding vehicle, and the preceding vehicle can be detected reliably.

つぎに、請求項4、8の構成によれば、画像センサをステレオカメラより安価な単眼カメラで形成することができ、一層安価かつ簡素な構成で前方車両を正確に認識して検出することができる。   According to the configurations of claims 4 and 8, the image sensor can be formed by a monocular camera, which is less expensive than a stereo camera, so the preceding vehicle can be accurately recognized and detected with an even less expensive and simpler configuration.

つぎに、本発明をより詳細に説明するため、その一実施形態について、図1〜図6にしたがって詳述する。   Next, in order to describe the present invention in more detail, an embodiment thereof will be described in detail with reference to FIGS.

図1は車両検出装置のブロック図、図2は図1の動作説明用のフローチャート、図3は図1の動作説明用の処理説明用の実測図、図4は図1の垂直エッジ側の投票処理の説明図、図5は図1の水平エッジ側の投票処理の説明図、図6は図1の合成投票平面の説明図である。   FIG. 1 is a block diagram of the vehicle detection device, FIG. 2 is a flowchart for explaining the operation of FIG. 1, FIG. 3 is an actually measured diagram for explaining the processing of FIG. 1, FIG. 4 is an explanatory diagram of the voting process on the vertical edge side in FIG. 1, FIG. 5 is an explanatory diagram of the voting process on the horizontal edge side in FIG. 1, and FIG. 6 is an explanatory diagram of the composite voting plane of FIG. 1.

<構成>
まず、図1の車両検出装置の構成について説明する。
<Configuration>
First, the configuration of the vehicle detection device in FIG. 1 will be described.

この装置は、図1に示すように、先進安全自動車(ASV)等の自車(車両)1の前部に、画像センサとしての2次元固体撮像素子(CCD)構成の単眼カメラ2を搭載する。   As shown in FIG. 1, this apparatus mounts a monocular camera 2 having a two-dimensional solid-state image sensor (CCD) configuration as an image sensor at the front of a host vehicle (vehicle) 1 such as an advanced safety vehicle (ASV).

この単眼カメラ2は、自車前方をくり返し撮影し、例えばモノクロームの撮影画像のデジタルデータをマイクロコンピュータからなる認識処理用のECU3にリアルタイムに送る。   This monocular camera 2 repeatedly shoots in front of the host vehicle and sends, for example, digital data of a monochrome captured image to the ECU 3 for recognition processing, which is composed of a microcomputer, in real time.

さらに、ECU3及びメモリ4により、前方車両を認識して検出する認識処理部5が形成され、そのマイクロコンピュータが予め設定された図2の車両検出のステップS1〜S8の認識処理プログラムを実行することにより、認識処理部5がつぎの(a)〜(h)の各手段を備える。   Further, the ECU 3 and the memory 4 form a recognition processing unit 5 that recognizes and detects the preceding vehicle, and by having its microcomputer execute the preset recognition processing program of vehicle detection steps S1 to S8 in FIG. 2, the recognition processing unit 5 is provided with the following means (a) to (h).

(a)候補領域設定手段
この手段は、画像センサ2の撮影画像に前方車両を囲む矩形の予測領域を車両候補領域として設定する。
(A) Candidate area setting means This means sets a rectangular prediction area surrounding the preceding vehicle in the captured image of the image sensor 2 as a vehicle candidate area.

この領域設定においては、画像の垂直、水平のエッジが車両部分に集中して発生することから、撮影画像のエッジ分布を検出して垂直、水平のエッジが集中している画像上の領域を検出し、検出領域の全部又は一部を含む設定した大きさの矩形の領域を予測領域とし、この予測領域を車両候補領域に設定する。   In this area setting, the vertical and horizontal edges of the image are concentrated on the vehicle part, so the edge distribution of the captured image is detected and the area on the image where the vertical and horizontal edges are concentrated is detected. Then, a rectangular area having a set size including all or part of the detection area is set as a prediction area, and this prediction area is set as a vehicle candidate area.
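By way of illustration only, the following sketch shows one way such a candidate area could be derived (a minimal example assuming an 8-bit grayscale image held in a NumPy array; the edge threshold, the 50% concentration criterion and the width-to-height ratio are illustrative assumptions, not values taken from this publication).

import numpy as np

def set_candidate_area(gray, edge_thresh=30, aspect_h_over_w=0.75):
    # Vertical edges respond to horizontal intensity changes, horizontal edges to vertical ones.
    v_edge = np.abs(np.diff(gray.astype(np.int32), axis=1)) > edge_thresh
    h_edge = np.abs(np.diff(gray.astype(np.int32), axis=0)) > edge_thresh
    # Column / row histograms of edge counts indicate where edges concentrate.
    col_score = v_edge.sum(axis=0)
    row_score = h_edge.sum(axis=1)
    cols = np.where(col_score > 0.5 * col_score.max())[0]
    rows = np.where(row_score > 0.5 * row_score.max())[0]
    left, right = int(cols.min()), int(cols.max())
    # Size the rectangle to roughly one vehicle using the edge-concentrated width
    # and an assumed vehicle height-to-width ratio.
    width = right - left + 1
    height = int(width * aspect_h_over_w)
    bottom = int(rows.max())
    top = max(0, bottom - height)
    return top, bottom, left, right  # rectangular vehicle candidate area Q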

(b)垂直エッジ画像形成手段
この手段は、撮影画像の車両候補領域の輝度変化から車両候補領域の垂直エッジを検出し、検出したエッジの垂直エッジ画像を形成する。
(B) Vertical edge image forming means This means detects the vertical edge of the vehicle candidate area from the luminance change of the vehicle candidate area of the photographed image, and forms a vertical edge image of the detected edge.

(c)水平エッジ画像形成手段
この手段は、撮影画像の車両候補領域の輝度変化から車両候補領域の水平エッジを検出し、検出したエッジの水平エッジ画像を形成する。
(C) Horizontal edge image forming means This means detects the horizontal edge of the vehicle candidate area from the luminance change of the vehicle candidate area of the photographed image, and forms a horizontal edge image of the detected edge.

(d)垂直エッジ側投票処理手段
この手段は、車両候補領域の垂直エッジ画像について投票方式の投票を行う。
(D) Vertical edge side voting processing means This means performs voting by voting on the vertical edge image of the vehicle candidate area.

この投票は、車両候補領域の左半分の垂直エッジを設定長T*右側に移動した位置に+1を投票し、車両候補領域の右半分の各垂直エッジを設定長T*左側に移動した位置に+1を投票することで、横軸方向を車幅方向（画像の左右方向）、縦軸方向を車高方向（画像の上下方向）とする二次元平面の垂直エッジ側投票平面を形成するものである。   In this voting, +1 is voted at the position obtained by moving each vertical edge in the left half of the vehicle candidate area to the right by the set length T*, and +1 is voted at the position obtained by moving each vertical edge in the right half of the vehicle candidate area to the left by the set length T*, thereby forming a vertical edge side voting plane, i.e. a two-dimensional plane whose horizontal axis is the vehicle width direction (left-right direction of the image) and whose vertical axis is the vehicle height direction (up-down direction of the image).

そして、この投票の結果から、前方車両の車幅方向の中央の位置を検出するため、垂直エッジ側投票処理手段は、設定長T*をTmin≦T*≦Tmax、（Tmin、Tmaxは設定した最小長、最大長）の範囲で、例えば、設定された単位長ずつ可変しながら前記の投票をくり返し、設定長T*を少しずつ変えた複数個の垂直エッジ側投票平面を形成する。   Then, in order to detect the center position of the preceding vehicle in the vehicle width direction from the result of this voting, the vertical edge side voting processing means repeats the above voting while varying the set length T* within the range Tmin ≦ T* ≦ Tmax (Tmin and Tmax being a set minimum length and maximum length), for example by a set unit length at a time, thereby forming a plurality of vertical edge side voting planes with the set length T* changed little by little.

(e)水平エッジ側投票処理手段
この手段は、車両候補領域の水平エッジ画像について投票方式の投票を行う。
(E) Horizontal edge side voting processing means This means performs a voting voting on the horizontal edge image of the vehicle candidate area.

この投票は、車両候補領域の下半分の水平エッジを設定長H*上側に移動した位置に+1を投票し、車両候補領域の上半分の水平エッジを設定長H*下側に移動した位置に+1を投票することで、横軸方向、縦軸方向が垂直エッジ側投票平面と同一の水平エッジ側投票平面を形成するものである。   In this voting, +1 is voted at the position obtained by moving each horizontal edge in the lower half of the vehicle candidate area upward by the set length H*, and +1 is voted at the position obtained by moving each horizontal edge in the upper half of the vehicle candidate area downward by the set length H*, thereby forming a horizontal edge side voting plane whose horizontal and vertical axes are the same as those of the vertical edge side voting plane.

そして、この投票の結果から、前方車両の車高方向の中央の位置を検出するため、水平エッジ側投票処理手段は、設定長H*をHmin≦H*≦Hmax、（Hmin、Hmaxは設定した最小長、最大長）の範囲で、例えば、設定された単位長ずつ可変しながら前記の投票をくり返し、設定長H*を少しずつ変えた複数個の水平エッジ側投票平面を形成する。   Then, in order to detect the center position of the preceding vehicle in the vehicle height direction from the result of this voting, the horizontal edge side voting processing means repeats the above voting while varying the set length H* within the range Hmin ≦ H* ≦ Hmax (Hmin and Hmax being a set minimum length and maximum length), for example by a set unit length at a time, thereby forming a plurality of horizontal edge side voting planes with the set length H* changed little by little.

なお、垂直エッジ側、水平エッジ側の各投票平面は、実際には、横軸方向及び縦軸方向に設定した単位長の間隔で投票点がマトリクス状に設定され、各投票点に、垂直エッジ又は水平エッジの有無に基づく投票が行われる。   In practice, each of the vertical edge side and horizontal edge side voting planes has voting points arranged in a matrix at intervals of the unit length set in the horizontal and vertical axis directions, and voting is performed at each voting point based on the presence or absence of a vertical edge or a horizontal edge.
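As a rough sketch of the voting described in (d) and (e) above (an illustrative implementation only, assuming binary edge images of the candidate area stored as NumPy arrays and a voting grid with a unit length of one pixel; the function names are assumptions for illustration), the edges of one half of the area are shifted toward the other half and +1 is accumulated at the shifted positions for each set length:

import numpy as np

def vertical_voting_planes(v_edge, t_min, t_max, step=1):
    # v_edge: binary vertical-edge image of the candidate area (rows x cols).
    rows, cols = v_edge.shape
    half = cols // 2
    planes = []
    for t in range(t_min, t_max + 1, step):
        plane = np.zeros((rows, cols), dtype=np.int32)
        ys, xs = np.nonzero(v_edge[:, :half])                            # edges in the left half
        np.add.at(plane, (ys, np.clip(xs + t, 0, cols - 1)), 1)          # moved right by T*, vote +1
        ys, xs = np.nonzero(v_edge[:, half:])                            # edges in the right half
        np.add.at(plane, (ys, np.clip(xs + half - t, 0, cols - 1)), 1)   # moved left by T*, vote +1
        planes.append(plane)
    return planes

def horizontal_voting_planes(h_edge, h_min, h_max, step=1):
    # h_edge: binary horizontal-edge image of the candidate area.
    rows, cols = h_edge.shape
    half = rows // 2
    planes = []
    for h in range(h_min, h_max + 1, step):
        plane = np.zeros((rows, cols), dtype=np.int32)
        ys, xs = np.nonzero(h_edge[half:, :])                            # edges in the lower half
        np.add.at(plane, (np.clip(ys + half - h, 0, rows - 1), xs), 1)   # moved up by H*, vote +1
        ys, xs = np.nonzero(h_edge[:half, :])                            # edges in the upper half
        np.add.at(plane, (np.clip(ys + h, 0, rows - 1), xs), 1)          # moved down by H*, vote +1
        planes.append(plane)
    return planes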

(f)投票結果合成手段
この手段は、各垂直エッジ側投票平面と各水平エッジ側投票平面とを重ね合わせて、両エッジ側投票平面の投票結果を重ね合わせた合成投票平面を形成する。
(F) Voting Result Combining Means This means superimposes each vertical edge side voting plane and each horizontal edge side voting plane to form a composite voting plane obtained by superimposing the voting results of both edge side voting planes.

(g)車両特徴量抽出手段
この手段は、合成投票平面のピーク値の座標から、前方車両の車幅及び車高の半分の位置である、車幅方向、車高方向の中央の車両センタ位置を、車両特徴量として抽出する。
(G) Vehicle feature quantity extraction means This means extracts, as a vehicle feature quantity, the vehicle center position at the center in the vehicle width direction and the vehicle height direction, i.e. the position at half the vehicle width and half the vehicle height of the preceding vehicle, from the coordinates of the peak value of the composite voting plane.
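Continuing the sketch given under (d) and (e), the combination of the planes and the extraction of the peak coordinate might look as follows (again illustrative only; the planes are assumed to have identical size and the vehicle center is simply taken at the argmax of the summed votes).

import numpy as np

def extract_vehicle_center(vertical_planes, horizontal_planes):
    # Superimpose every vertical-edge and horizontal-edge voting plane.
    composite = np.zeros_like(vertical_planes[0])
    for plane in vertical_planes + horizontal_planes:
        composite += plane
    # The peak of the composite plane is taken as the vehicle center (Tc, Hc).
    hc, tc = np.unravel_index(np.argmax(composite), composite.shape)
    return (int(tc), int(hc)), composite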

(h)車両認識手段
この手段は、車両特徴量としての車両センタ位置から、前方車両の存在及びその位置を把握して前方車両を認識し、検出する。
(H) Vehicle Recognizing Means This means recognizes and detects the forward vehicle by grasping the presence and position of the forward vehicle from the vehicle center position as the vehicle feature amount.

そして、この検出の結果が、認識処理部6のECU4から自車1の衝突可能性の有無を判定する衝突判定処理のECUに送られ、このECUが先行車等との衝突の可能性を判定し、衝突回避に必要な走行、操舵の制御を行う。   Then, the result of this detection is sent from the ECU 4 of the recognition processing unit 6 to the ECU of the collision determination process for determining the possibility of collision of the host vehicle 1, and this ECU determines the possibility of collision with the preceding vehicle or the like. Then, control of traveling and steering necessary for collision avoidance is performed.

<動作>
つぎに、前記の構成に基づく図1の装置の動作について、図2のフローチャート等を参照して説明する。
<Operation>
Next, the operation of the apparatus of FIG. 1 based on the above configuration will be described with reference to the flowchart of FIG.

まず、自車1の走行中、単眼カメラ2が自車1の前方をくり返し撮影し、この撮影により、例えば図3の撮影画像Piのような撮影画像がECU3に取り込まれると、ECU3の候補領域設定手段が動作し、図2のステップS1により、例えば、撮影画像Piを二次元フィルタ処理して撮影画像Piのエッジを検出し、画像Piの垂直、水平のエッジが集中している領域の横幅又は高さと、設定された車幅・車高の比とに基づき、この領域の横幅又は高さを基準にしたほぼ車両1台分の大きさの矩形の領域を予測領域とし、この予測領域を、例えば図3の車両候補領域Qに設定する。   First, while the host vehicle 1 is traveling, the monocular camera 2 repeatedly captures images of the area ahead of the host vehicle 1. When a captured image such as the captured image Pi of FIG. 3 is taken into the ECU 3, the candidate area setting means of the ECU 3 operates and, in step S1 of FIG. 2, for example, applies two-dimensional filtering to the captured image Pi to detect its edges; then, based on the width or height of the region where the vertical and horizontal edges of the image Pi are concentrated and on a set vehicle width to vehicle height ratio, it takes a rectangular region of roughly the size of one vehicle, referenced to the width or height of that region, as the prediction area, and sets this prediction area as, for example, the vehicle candidate area Q in FIG. 3.

つぎに、図2のステップS2、S3により、垂直エッジ画像形成手段、水平エッジ画像形成手段が動作し、車両候補領域Qについて、例えば図3の垂直エッジ画像Pa、水平エッジ画像Pbを形成する。   Next, in steps S2 and S3 in FIG. 2, the vertical edge image forming unit and the horizontal edge image forming unit operate to form, for example, the vertical edge image Pa and the horizontal edge image Pb in FIG.

さらに、図2のステップS4、S5により、垂直エッジ側投票処理手段、水平エッジ側投票処理手段が動作して投票を行い、垂直エッジ画像Paについての投票平面（垂直エッジ側投票平面）、水平エッジ画像Pbについての投票平面（水平エッジ側投票平面）を形成する。   Further, in steps S4 and S5 of FIG. 2, the vertical edge side voting processing means and the horizontal edge side voting processing means operate to perform the voting, forming a voting plane for the vertical edge image Pa (vertical edge side voting plane) and a voting plane for the horizontal edge image Pb (horizontal edge side voting plane).

ところで、ハフ（Hough）変換に代表されるように、投票方式のアルゴリズムはノイズに強く、ロバストであることが多くの実験で確認されている。   Incidentally, as typified by the Hough transform, voting-type algorithms have been confirmed by many experiments to be robust against noise.

そこで、本発明は、ノイズに強く、ロバストな投票方式により、車両が「おおよそ左右対称でおおよそ矩形（四角形）」と近似できることを利用し、前方車両の最適な特徴量として、その車幅方向及び車高方向の中央の車両センタ位置を精度よく検出し、この検出に基づいて前方車両を認識して検出する。   Therefore, the present invention uses a noise-resistant, robust voting method and exploits the fact that a vehicle can be approximated as "roughly left-right symmetric and roughly rectangular (quadrangular)"; it accurately detects, as the optimum feature quantity of the preceding vehicle, the vehicle center position at the center in the vehicle width direction and the vehicle height direction, and recognizes and detects the preceding vehicle based on this detection.

そのため、垂直エッジ側投票処理手段、水平エッジ側投票処理手段は、つぎに説明するように投票処理を行って垂直エッジ側投票平面、水平エッジ側投票平面を形成する。   Therefore, the vertical edge side voting processing unit and the horizontal edge voting processing unit perform voting processing as will be described below to form a vertical edge side voting plane and a horizontal edge side voting plane.

まず、説明を簡単にするため、前方車両の垂直エッジ画像Paが、例えば図4に示すように、前方車両の左右端部の位置、すなわち、車幅（横幅）方向である図中のT軸方向の位置Tl、TrのエッジGaの画像であって、前方車両の水平エッジ画像Pbが、例えば図5に示すように、前方車両の上下端部の位置、すなわち、車高（高さ）方向である図中のH軸方向の位置Hu、HdのエッジGbの画像であるとする。   First, for simplicity of explanation, assume that the vertical edge image Pa of the preceding vehicle is, as shown for example in FIG. 4, an image of edges Ga at positions Tl and Tr in the T-axis direction in the figure, i.e. the vehicle width (lateral) direction, corresponding to the left and right ends of the preceding vehicle, and that the horizontal edge image Pb of the preceding vehicle is, as shown for example in FIG. 5, an image of edges Gb at positions Hu and Hd in the H-axis direction in the figure, i.e. the vehicle height direction, corresponding to the upper and lower ends of the preceding vehicle.

この場合、前方車両の画像上での車幅をt、車高をhとすると、2本の垂直エッジGaの位置Tl、Trは、前方車両のT軸方向の中央の位置Tcに対して、車幅tの半分t/2離れた対称位置であり、2本の水平エッジGbの位置Hu、Hdは、前方車両のH軸方向の中央の位置Hcに対して、車高hの半分h/2離れた対称位置である。   In this case, when the vehicle width of the preceding vehicle on the image is t and its vehicle height is h, the positions Tl and Tr of the two vertical edges Ga are symmetric positions separated by half the vehicle width, t/2, from the center position Tc of the preceding vehicle in the T-axis direction, and the positions Hu and Hd of the two horizontal edges Gb are symmetric positions separated by half the vehicle height, h/2, from the center position Hc of the preceding vehicle in the H-axis direction.
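Written out with the H axis oriented as in FIGS. 4 and 5 (so that Hu = Hc + h/2 and Hd = Hc − h/2, while Tl = Tc − t/2 and Tr = Tc + t/2), the shifted voting positions coincide exactly when the set lengths reach half the vehicle size on the image:

Tl + T* = (Tc − t/2) + t/2 = Tc,   Tr − T* = (Tc + t/2) − t/2 = Tc   (for T* = t/2)
Hd + H* = (Hc − h/2) + h/2 = Hc,   Hu − H* = (Hc + h/2) − h/2 = Hc   (for H* = h/2)

so each of the two planes accumulates a vote of +2 at the center position, and their superposition a peak of +4.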

一方、前方車両の車幅t、車高hによらず、中央の位置Tc、Hcに投票が行われるようにするため、設定長T*の最小値Tmin、最大値Tmaxは、例えば、垂直エッジ画像Paの横幅（車両候補領域Qの横幅）の1/4、3/4に設定され、同様に、設定長H*の最小値Hmin、最大値Hmaxは、例えば、水平エッジ画像Pbの横幅（車両候補領域Qの横幅）の1/4、3/4に設定される。   On the other hand, so that votes are cast at the center positions Tc and Hc regardless of the vehicle width t and vehicle height h of the preceding vehicle, the minimum value Tmin and maximum value Tmax of the set length T* are set, for example, to 1/4 and 3/4 of the width of the vertical edge image Pa (the width of the vehicle candidate area Q), and similarly the minimum value Hmin and maximum value Hmax of the set length H* are set, for example, to 1/4 and 3/4 of the width of the horizontal edge image Pb (the width of the vehicle candidate area Q).

そして、垂直エッジ側投票処理手段は、設定長T*（Tmin≦T*≦Tmax）に基づき、垂直エッジ画像Paの左側の垂直エッジGaを設定長T*右側に移動した位置Tl+T*に+1を投票し、垂直エッジ画像Paの右側の垂直エッジGaを設定長T*左側に移動した位置Tr−T*に+1を投票し、図4の垂直エッジ側投票平面Raを形成することを、設定長T*を最小長Tminから最大長Tmax又はその逆に単位量ずつ可変しながらくり返す。   Then, based on the set length T* (Tmin ≦ T* ≦ Tmax), the vertical edge side voting processing means votes +1 at the position Tl+T*, obtained by moving the left vertical edge Ga of the vertical edge image Pa to the right by the set length T*, and votes +1 at the position Tr−T*, obtained by moving the right vertical edge Ga of the vertical edge image Pa to the left by the set length T*, thereby forming the vertical edge side voting plane Ra of FIG. 4, and repeats this while varying the set length T* by a unit amount from the minimum length Tmin to the maximum length Tmax or vice versa.

このくり返しにおいて、設定長T*=t/2になると、図4のT*=t/2の投票平面Raに示すように、両側の垂直エッジGaの投票位置が中央の位置Tcになり、投票平面Raは、位置Tcの投票値が+2の平面になる。   In this repetition, when the set length becomes T* = t/2, the voting positions of the vertical edges Ga on both sides coincide at the center position Tc, as shown in the voting plane Ra for T* = t/2 in FIG. 4, and the voting plane Ra becomes a plane whose voting value at the position Tc is +2.

同様に、水平エッジ側投票処理手段は、設定長H*（Hmin≦H*≦Hmax）に基づき、水平エッジ画像Pbの下側の水平エッジGbを設定長H*上側に移動した位置Hd+H*に+1を投票し、水平エッジ画像Pbの上側の水平エッジGbを設定長H*下側に移動した位置Hu−H*に+1を投票し、図5の水平エッジ側投票平面Rbを形成することを、設定長H*を最小長Hminから最大長Hmax又はその逆に単位量ずつ可変しながらくり返す。   Similarly, based on the set length H* (Hmin ≦ H* ≦ Hmax), the horizontal edge side voting processing means votes +1 at the position Hd+H*, obtained by moving the lower horizontal edge Gb of the horizontal edge image Pb upward by the set length H*, and votes +1 at the position Hu−H*, obtained by moving the upper horizontal edge Gb of the horizontal edge image Pb downward by the set length H*, thereby forming the horizontal edge side voting plane Rb of FIG. 5, and repeats this while varying the set length H* by a unit amount from the minimum length Hmin to the maximum length Hmax or vice versa.

このくり返しにおいて、設定長H*=h/2になると、図5のH*=h/2の投票平面Rbに示すように、両側の水平エッジGbの投票位置が中央の位置Hcになり、このときの投票平面Rbは、位置Hcの投票値が+2の平面になる。   In this repetition, when the set length H * = h / 2, as shown in the voting plane Rb of H * = h / 2 in FIG. 5, the voting positions of the horizontal edges Gb on both sides become the center position Hc. The voting plane Rb at that time is a plane in which the voting value at the position Hc is +2.

つぎに、図2のステップS6により、投票結果合成手段が、各垂直エッジ側投票平面の投票結果と各水平エッジ側投票平面の投票結果とを重ね合わせ、例えば、図6に示す合成投票平面Rabを形成する。   Next, in step S6 of FIG. 2, the voting result synthesizing means superimposes the voting result of each vertical edge side voting plane and the voting result of each horizontal edge side voting plane, for example, the synthetic voting plane Rab shown in FIG. Form.

このとき、設定長T*=t/2の垂直エッジ側投票平面Raの投票結果(+2)と、設定長H*=h/2の水平エッジ側投票平面Rbの投票結果(+2)との重ね合わせにより、合成投票平面Rabの座標(T、H)=(Tc、Hc)の車両センタ位置の投票結果が、+4のピーク値(最大値)になる。   At this time, the voting result (+2) of the vertical edge side voting plane Ra having the set length T * = t / 2 and the voting result (+2) of the horizontal edge side voting plane Rb having the set length H * = h / 2 are overlapped. As a result, the voting result at the vehicle center position at the coordinates (T, H) = (Tc, Hc) of the combined voting plane Rab becomes a peak value (maximum value) of +4.

なお、実際には、ピーク値の投票位置が車両センタ位置（Tc、Hc）に一致するとは限らないが、設定値T*、H*を、Tmin≦T*≦Tmax、Hmin≦H*≦Hmaxに可変して投票位置が微妙に違う投票平面を複数個用意し、それらを重ね合わせると、いわゆる畳み込み効果で、前記のピーク値の座標（Tc、Hc）を中心とした微小範囲に必ずピーク値が出現し、その座標から車両センタ位置（Tc、Hc）を精度よく検出することができる。   In practice, the voting position of the peak value does not always coincide with the vehicle center position (Tc, Hc); however, by varying the set values T* and H* within Tmin ≦ T* ≦ Tmax and Hmin ≦ H* ≦ Hmax, preparing a plurality of voting planes whose voting positions differ slightly, and superimposing them, a peak value always appears, through a so-called convolution effect, within a very small range centered on the coordinates (Tc, Hc), and the vehicle center position (Tc, Hc) can be detected accurately from those coordinates.
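One simple way to exploit this effect numerically (a sketch under the assumption that the composite plane is available as a NumPy array; the 5-point window is an arbitrary illustrative choice) is to refine the raw argmax with a vote-weighted centroid of its neighbourhood, so that the estimate is not tied to a single voting point:

import numpy as np

def refine_peak(composite, window=5):
    # Coarse peak from the raw argmax of the composite voting plane.
    hc, tc = np.unravel_index(np.argmax(composite), composite.shape)
    r = window // 2
    y0, y1 = max(0, hc - r), min(composite.shape[0], hc + r + 1)
    x0, x1 = max(0, tc - r), min(composite.shape[1], tc + r + 1)
    patch = composite[y0:y1, x0:x1].astype(np.float64)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = patch.sum()
    # Vote-weighted centroid around the peak gives a sub-grid estimate of (Tc, Hc).
    return float((xs * patch).sum() / total), float((ys * patch).sum() / total)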

そして、合成投票平面Rabの各投票点の投票値を、投票値に比例した輝度で表すと、合成された実際の投票結果は、例えば、図3の合成投票平面Rabに示すようになり、この図3の投票平面Rabにおいても、ほぼその中央のピーク値の座標（Tc、Hc）を中心とした微小範囲にピーク値が出現している。   When the voting value at each voting point of the composite voting plane Rab is represented by a luminance proportional to the voting value, the actual combined voting result is as shown, for example, in the composite voting plane Rab of FIG. 3; in this voting plane Rab of FIG. 3 as well, the peak value appears within a very small range centered approximately on the peak coordinates (Tc, Hc) near its center.

つぎに、図2のステップS7により、車両特徴量抽出手段が動作し、ピーク値の座標(Tc、Hc)を前方車両の車両特徴量として抽出する。   Next, in step S7 in FIG. 2, the vehicle feature amount extraction unit operates to extract the peak value coordinates (Tc, Hc) as the vehicle feature amount of the preceding vehicle.

そして、この車両特徴量と自車1と前方車両との測距距離等とに基づき、前方車両の実際の車両センタ位置を把握し、前方車両を認識して検出し、この検出の結果を衝突判定処理のECUに送る。   Then, based on this vehicle feature quantity and the measured distance between the host vehicle 1 and the preceding vehicle and the like, the actual vehicle center position of the preceding vehicle is determined, the preceding vehicle is recognized and detected, and the result of this detection is sent to the ECU of the collision determination process.
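A common way of carrying out this last step, shown purely as an assumption since the publication does not specify the camera model, is the pinhole projection, which converts the image-plane center (Tc, Hc) and the measured distance into lateral and vertical offsets of the preceding vehicle:

def image_center_to_world(tc, hc, distance_z, fx, fy, cx, cy):
    # Pinhole model (assumed): fx, fy are focal lengths in pixels, (cx, cy) is the principal point.
    x_lateral = (tc - cx) * distance_z / fx
    y_vertical = (hc - cy) * distance_z / fy
    return x_lateral, y_vertical, distance_z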

以上の処理の繰り返しにより、自車1の走行中に、画像テンプレートマッチングのパターン認識処理のような全天候についての全車種分のプロトタイプのエッジパターンを用意することなく、ノイズに強くロバストな投票方式の処理により、単純で計算量が少なく、安価かつ簡素な構成で、安定に、前方車両の車幅方向及び車高方向の中央の車両センタ位置を、前方車両の車両特徴量として抽出することができ、この車両特徴量に基づき、安定かつ正確に、前方車両を認識して検出することができる。   By repeating the above processing while the host vehicle 1 is traveling, the vehicle center position at the center of the preceding vehicle in the vehicle width direction and the vehicle height direction can be stably extracted as the vehicle feature quantity of the preceding vehicle by a noise-resistant, robust voting process that is simple and computationally light, with an inexpensive and simple configuration, and without preparing prototype edge patterns for all vehicle types under all weather conditions as in pattern recognition processing based on image template matching; based on this vehicle feature quantity, the preceding vehicle can be recognized and detected stably and accurately.

また、画像センサをステレオカメラより安価な単眼カメラ2により形成したため、一層安価に形成できる利点もある。   Further, since the image sensor is formed by the monocular camera 2 which is cheaper than the stereo camera, there is an advantage that it can be formed at a lower cost.

そして、この車両検出結果に基づき、安定した正確な車両認識や衝突有無の判断等が行え、先進安全自動車（ASV）等の信頼性を向上し、交通安全に寄与することができる。   Based on this vehicle detection result, stable and accurate vehicle recognition, determination of whether a collision will occur, and the like can be performed, improving the reliability of advanced safety vehicles (ASV) and the like and contributing to traffic safety.

ところで、前記の実施形態の場合、自車前方の車両を囲む矩形の予測領域を、単眼カメラ2の撮影画像の垂直及び水平のエッジが集中する領域としたが、自車1にレーザレーダ、ミリ波レーダ等の測距センサを搭載し、例えば、この測距センサが左右方向（車幅方向）にスキャンして反射波を受信することにより、前方車両のリフレクタ等の反射点のスキャン位置を、測距対象の車幅方向の位置として検出し、これらの位置から予測領域を設定してもよい。   In the embodiment described above, the rectangular prediction area surrounding the vehicle ahead of the host vehicle is the area where the vertical and horizontal edges of the image captured by the monocular camera 2 are concentrated; however, a distance measuring sensor such as a laser radar or a millimeter wave radar may be mounted on the host vehicle 1, and, for example, by having this sensor scan in the left-right direction (vehicle width direction) and receive reflected waves, the scan positions of reflection points such as the reflectors of the preceding vehicle may be detected as positions of the distance measurement target in the vehicle width direction, and the prediction area may be set from these positions.
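Such a radar-based prediction area could be derived roughly as follows (an illustrative sketch assuming the reflection points have already been projected into image coordinates with a calibrated camera; the margin factor and aspect ratio are arbitrary assumptions).

def candidate_area_from_reflections(points_uv, width_margin=1.4, aspect_h_over_w=0.8):
    # points_uv: image coordinates (u, v) of radar reflection points, e.g. the rear reflectors.
    us = [u for u, v in points_uv]
    vs = [v for u, v in points_uv]
    width = (max(us) - min(us)) * width_margin      # widen so the rectangle encloses the vehicle
    height = width * aspect_h_over_w
    center_u = (max(us) + min(us)) / 2.0
    bottom_v = max(vs)                              # reflectors sit near the lower part of the vehicle
    left, right = center_u - width / 2.0, center_u + width / 2.0
    top = bottom_v - height
    return top, bottom_v, left, right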

そして、本発明は上記した実施形態に限定されるものではなく、その趣旨を逸脱しない限りにおいて上述したもの以外に種々の変更を行うことが可能であり、例えば、画像センサは単眼カメラ2に限られるものではなく、ステレオカメラ等であってもよい。   The present invention is not limited to the embodiment described above, and various modifications other than those described above can be made without departing from its spirit; for example, the image sensor is not limited to the monocular camera 2 and may be a stereo camera or the like.

ところで、自車1の装備部品数を少なくするため、単眼カメラ2等は追従走行制御、ブレーキ制御等の他の制御のセンサ等に兼用する場合にも適用することができる。   Incidentally, in order to reduce the number of components mounted on the host vehicle 1, the present invention can also be applied to the case where the monocular camera 2 and the like also serve as sensors for other controls such as following-distance control and brake control.

図1: 一実施形態のブロック図である。   FIG. 1 is a block diagram of one embodiment.
図2: 図1の動作説明用のフローチャートである。   FIG. 2 is a flowchart for explaining the operation of FIG. 1.
図3: 図1の処理説明用の実測図である。   FIG. 3 is an actually measured diagram for explaining the processing of FIG. 1.
図4: 図1の垂直エッジ側の投票処理の説明図である。   FIG. 4 is an explanatory diagram of the voting process on the vertical edge side in FIG. 1.
図5: 図1の水平エッジ側の投票処理の説明図である。   FIG. 5 is an explanatory diagram of the voting process on the horizontal edge side in FIG. 1.
図6: 図1の合成投票平面の説明図である。   FIG. 6 is an explanatory diagram of the composite voting plane of FIG. 1.

符号の説明   Explanation of symbols

1 自車 (host vehicle)
2 単眼カメラ (monocular camera)
5 認識処理部 (recognition processing unit)
Pi 撮影画像 (captured image)
Pa 垂直エッジ画像 (vertical edge image)
Pb 水平エッジ画像 (horizontal edge image)
Q 車両候補領域 (vehicle candidate area)
Ra 垂直エッジ側投票平面 (vertical edge side voting plane)
Rb 水平エッジ側投票平面 (horizontal edge side voting plane)
Rab 合成投票平面 (composite voting plane)

Claims (8)

1. A vehicle detection method comprising: setting, in an image captured by an image sensor mounted on a host vehicle and showing the area ahead of the host vehicle, a rectangular prediction area surrounding a vehicle ahead of the host vehicle as a vehicle candidate area; for the vertical edge image of the vehicle candidate area, voting +1 at the position obtained by moving each vertical edge in the left half of the vehicle candidate area to the right by a set length T* and voting +1 at the position obtained by moving each vertical edge in the right half of the vehicle candidate area to the left by the set length T* to form a vertical edge side voting plane, and repeating this while varying the set length T* within a range Tmin ≦ T* ≦ Tmax (Tmin and Tmax being a set minimum length and maximum length); for the horizontal edge image of the vehicle candidate area, voting +1 at the position obtained by moving each horizontal edge in the lower half of the vehicle candidate area upward by a set length H* and voting +1 at the position obtained by moving each horizontal edge in the upper half of the vehicle candidate area downward by the set length H* to form a horizontal edge side voting plane, and repeating this while varying the set length H* within a range Hmin ≦ H* ≦ Hmax (Hmin and Hmax being a set minimum length and maximum length); extracting, as a vehicle feature quantity, the vehicle center position at the center of the vehicle ahead in the vehicle width direction and the vehicle height direction from the coordinates of the peak value of a composite voting plane obtained by superimposing the voting results of both edge side voting planes; and recognizing and detecting the vehicle ahead of the host vehicle from the vehicle feature quantity.
2. The vehicle detection method according to claim 1, wherein the rectangular prediction area surrounding the vehicle ahead of the host vehicle is an area in which vertical and horizontal edges of the captured image ahead of the host vehicle are concentrated.
3. The vehicle detection method according to claim 1, wherein a distance measuring sensor such as a laser radar is mounted on the host vehicle, and the rectangular prediction area surrounding the vehicle ahead of the host vehicle is set from the position of a distance measurement target of the distance measuring sensor.
4. The vehicle detection method according to any one of claims 1 to 3, wherein the image sensor is a monocular camera.
5. A vehicle detection device comprising: an image sensor mounted on a host vehicle for capturing an image of the area ahead of the host vehicle; candidate area setting means for setting, in the image captured by the image sensor, a rectangular prediction area surrounding a vehicle ahead of the host vehicle as a vehicle candidate area; vertical edge side voting processing means for, with respect to the vertical edge image of the vehicle candidate area, voting +1 at the position obtained by moving each vertical edge in the left half of the vehicle candidate area to the right by a set length T* and voting +1 at the position obtained by moving each vertical edge in the right half of the vehicle candidate area to the left by the set length T* to form a vertical edge side voting plane, and repeating this while varying the set length T* within a range Tmin ≦ T* ≦ Tmax (Tmin and Tmax being a set minimum length and maximum length); horizontal edge side voting processing means for, with respect to the horizontal edge image of the vehicle candidate area, voting +1 at the position obtained by moving each horizontal edge in the lower half of the vehicle candidate area upward by a set length H* and voting +1 at the position obtained by moving each horizontal edge in the upper half of the vehicle candidate area downward by the set length H* to form a horizontal edge side voting plane, and repeating this while varying the set length H* within a range Hmin ≦ H* ≦ Hmax (Hmin and Hmax being a set minimum length and maximum length); voting result combining means for forming a composite voting plane by superimposing the voting results of both edge side voting planes; and vehicle feature quantity extraction means for extracting, as a vehicle feature quantity, the vehicle center position at the center of the vehicle ahead of the host vehicle in the vehicle width direction and the vehicle height direction from the coordinates of the peak value of the composite voting plane, wherein the vehicle ahead of the host vehicle is recognized and detected from the vehicle feature quantity.
6. The vehicle detection device according to claim 5, wherein the candidate area setting means sets the rectangular prediction area surrounding the vehicle ahead of the host vehicle as an area in which vertical and horizontal edges of the captured image ahead of the host vehicle are concentrated.
7. The vehicle detection device according to claim 5, wherein a distance measuring sensor such as a laser radar is mounted on the host vehicle, and the candidate area setting means sets the rectangular prediction area surrounding the vehicle ahead of the host vehicle from the position of a distance measurement target of the distance measuring sensor.
8. The vehicle detection device according to any one of claims 5 to 7, wherein the image sensor is a monocular camera.
JP2003387412A 2003-11-18 2003-11-18 Vehicle detection method and vehicle detection system Withdrawn JP2005149250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003387412A JP2005149250A (en) 2003-11-18 2003-11-18 Vehicle detection method and vehicle detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003387412A JP2005149250A (en) 2003-11-18 2003-11-18 Vehicle detection method and vehicle detection system

Publications (1)

Publication Number Publication Date
JP2005149250A true JP2005149250A (en) 2005-06-09

Family

ID=34694772

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003387412A Withdrawn JP2005149250A (en) 2003-11-18 2003-11-18 Vehicle detection method and vehicle detection system

Country Status (1)

Country Link
JP (1) JP2005149250A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008065707A1 (en) * 2006-11-28 2008-06-05 Fujitsu Limited Image data recognizing method, image processing device, and image data recognizing program
JPWO2008065707A1 (en) * 2006-11-28 2010-03-04 富士通株式会社 Image data recognition method, image processing apparatus, and image data recognition program
JP4743277B2 (en) * 2006-11-28 2011-08-10 富士通株式会社 Image data recognition method, image processing apparatus, and image data recognition program
US8204278B2 (en) 2006-11-28 2012-06-19 Fujitsu Limited Image recognition method
KR101205565B1 (en) 2008-01-29 2012-11-27 주식회사 만도 Method for Dectecting Front and Rear Vehicle by Using Image
JP2014153914A (en) * 2013-02-08 2014-08-25 Mega Chips Corp Object detection device, program and integrated circuit

Similar Documents

Publication Publication Date Title
JP5470886B2 (en) Object detection device
US7899211B2 (en) Object detecting system and object detecting method
US20020134151A1 (en) Apparatus and method for measuring distances
JP5561064B2 (en) Vehicle object recognition device
EP2921992A2 (en) Image processing device, drive support system, image processing method, and program
KR101176693B1 (en) Method and System for Detecting Lane by Using Distance Sensor
JP6743882B2 (en) Image processing device, device control system, imaging device, image processing method, and program
JP2007255977A (en) Object detection method and object detector
JP2015143979A (en) Image processor, image processing method, program, and image processing system
US10803605B2 (en) Vehicle exterior environment recognition apparatus
CN107950023B (en) Vehicle display device and vehicle display method
JP4052291B2 (en) Image processing apparatus for vehicle
JP4123138B2 (en) Vehicle detection method and vehicle detection device
JP4067340B2 (en) Object recognition device and object recognition method
JP4074577B2 (en) Vehicle detection method and vehicle detection device
JP4340000B2 (en) Object recognition device
JP6631691B2 (en) Image processing device, device control system, imaging device, image processing method, and program
JP2017215743A (en) Image processing device, and external world recognition device
JP5248388B2 (en) Obstacle risk calculation device, method and program
JP2005149250A (en) Vehicle detection method and vehicle detection system
JP2001082954A (en) Image processing device and image processing distance- measuring method
JPH10269365A (en) Characteristic extracting method, and object recognition device using the method
CN109923586B (en) Parking frame recognition device
US11420855B2 (en) Object detection device, vehicle, and object detection process
JP6569416B2 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method, and image processing program

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20070206