JP4753765B2 - Obstacle recognition method - Google Patents


Publication number
JP4753765B2
Authority
JP
Japan
Prior art keywords
radar
magnification
camera
side detection
obstacle
Prior art date
Legal status
Expired - Fee Related
Application number
JP2006093603A
Other languages
Japanese (ja)
Other versions
JP2007274037A (en)
Inventor
仁臣 滝澤
Current Assignee
Daihatsu Motor Co Ltd
Original Assignee
Daihatsu Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Daihatsu Motor Co Ltd
Priority to JP2006093603A
Publication of JP2007274037A
Application granted
Publication of JP4753765B2


Description

The present invention relates to an obstacle recognition method in which a radar and a camera mounted on a vehicle repeatedly scan the area ahead of the host vehicle, and obstacles such as a preceding vehicle ahead of the host vehicle are recognized by sensor-fusion recognition processing of the radar and the camera.

In general, a vehicle equipped with a driving support system such as Adaptive Cruise Control (ACC) is required to recognize obstacles ahead of the host vehicle, such as a preceding vehicle or a road sign, in order to realize a so-called collision-mitigation automatic braking function.

However, in conventional ACC-equipped vehicles the sensor that gathers information about the area ahead of the host vehicle is in many cases a single sensor such as a laser radar or a millimeter-wave radar, with the drawback that obstacles can be recognized only in relatively easy scenes such as expressways.

Furthermore, even when sensor-fusion recognition processing using multiple sensors is adopted to improve recognition performance, eliminate the above drawback, and recognize obstacles ahead of the host vehicle in a variety of driving scenes, the sensors do not sample data at exactly the same time: there is an offset of several tens of milliseconds (ms) between the scans and detections of the individual sensors, so the influence of the coordinate offset between the radar scan and the camera image cannot be avoided and the recognition accuracy cannot be improved.

The present applicant has therefore already filed an obstacle recognition method and an obstacle recognition apparatus that, by sensor-fusion recognition processing based on the radar ranging results and the camera images, can accurately recognize obstacles such as a preceding vehicle ahead of the host vehicle without being affected by the coordinate offset between the radar scan and the camera image (see, for example, Patent Document 1).

In outline, the previously filed obstacle recognition method and apparatus repeatedly scan the area ahead of the host vehicle with a radar and a camera mounted on the vehicle; obtain a radar-side detection magnification from the change in the measured distance to an obstacle such as a preceding vehicle ahead of the host vehicle based on the radar scan results; obtain a camera-side detection magnification, by image processing of the camera's images of the area ahead, based on the minimum of the residual sums between the horizontal and vertical image histograms of the captured image and reference image histograms scaled by each magnification; determine, by sensor-fusion recognition processing of the radar and the camera, whether the image in the obstacle region (candidate region) of the captured image is an object standing vertically on the road surface, such as a preceding vehicle, from whether the horizontal and vertical magnifications that minimize the residual sums are equal; and recognize as an obstacle a road-surface vertical object for which the radar-side detection magnification and the camera-side detection magnification are equal.

In this case, an obstacle scales up or down at the same magnification in both the horizontal and vertical directions of the captured image, and the residual sums reach their minima at the same horizontal and vertical magnification. Something that is not an obstacle, such as a manhole cover or a cat's eye (road stud) on the road surface, does not change at the same magnification in the horizontal and vertical directions of the captured image, and the magnifications at which the horizontal and vertical residual sums reach their minima differ.

Therefore, whether the candidate object in the candidate region is a road-surface vertical object such as an obstacle standing on the road surface can be determined from whether the magnifications minimizing the residual sums of the horizontal and vertical image histograms are equal; and further, whether the road-surface vertical object is an obstacle can be determined from the agreement or disagreement of the camera-side and radar-side detection magnifications, allowing the obstacle to be recognized from the image of the candidate object in the candidate region.
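The two-stage decision described above can be sketched as follows. This is only an illustrative sketch: the function name, the tolerance value, and the use of a single tolerance for both comparisons are assumptions, not part of the method as claimed.

```python
def classify_candidate(k_horiz, k_vert, k_radar, tol=0.05):
    """Sketch of the fusion decision: a road-surface vertical object scales
    equally in both image directions, and a true obstacle additionally has a
    camera-side magnification matching the radar-side one (names/tol assumed)."""
    # Horizontal and vertical residual-sum-minimizing magnifications differ:
    # a flat road marking such as a manhole cover or a cat's eye (road stud).
    if abs(k_horiz - k_vert) > tol:
        return "not a road-surface vertical object"
    # Road-surface vertical object: recognized as an obstacle only if the
    # camera-side magnification also agrees with the radar-side magnification.
    k_camera = (k_horiz + k_vert) / 2
    if abs(k_camera - k_radar) <= tol:
        return "obstacle"
    return "road-surface vertical object, not confirmed as obstacle"
```

For example, equal camera-side magnifications that also match the radar-side magnification yield an obstacle, while unequal horizontal and vertical magnifications rule out a vertical object immediately.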

Moreover, only simple, computationally light calculations of the horizontal and vertical histograms of image edges are required, so the image processing involves very little computation, and the above coordinate offset between the radar and the camera is automatically taken into account in the recognition.

Japanese Patent Laid-Open No. 2005-329779 (abstract; claims 1 and 13; paragraphs [0055]-[0123]; FIG. 1, etc.)

In the previously filed obstacle recognition method and apparatus, the camera-side and radar-side detection magnifications are compared, and whether a road-surface vertical object is an obstacle is determined from their agreement or disagreement. However, the detection schemes of the camera and the radar differ: the camera obtains its image immediately, whereas the radar obtains its scan results only after sweeping the area ahead of the host vehicle, so even if the camera and the radar start detection simultaneously, the radar's detection lags behind the camera's.

It is therefore difficult to compare the radar-side and camera-side detection magnifications based on radar and camera detection results from exactly the same instant.

Consequently, even though the radar and the camera are in fact tracking an obstacle such as a preceding vehicle reliably, the radar-side and camera-side detection magnifications may fail to agree, and the object may be judged not to be an obstacle.

An object of the present invention is to improve the accuracy of recognizing obstacles ahead of the host vehicle by avoiding, as far as possible, misrecognition (misjudging an obstacle as not being one) caused by a mismatch between the radar-side and camera-side detection magnifications due to the detection time offset between the radar and the camera. A further object is to reliably prevent misrecognition when the recognition target is clearly not a road-surface vertical object.

To achieve the above objects, the obstacle recognition method of the present invention repeatedly scans the area ahead of the host vehicle with a radar and a camera mounted on the vehicle; obtains a radar-side detection magnification from the change in the measured distance to an obstacle such as a preceding vehicle ahead of the host vehicle based on the radar scan results; obtains a camera-side detection magnification, by image processing of the camera's images of the area ahead, based on the minimum of the residual sums between the horizontal and vertical image histograms of the captured image and reference image histograms at each magnification; determines, by sensor-fusion recognition processing of the radar and the camera, whether the image in the obstacle region of the captured image is a road-surface vertical object such as a preceding vehicle from whether the horizontal and vertical magnifications minimizing the residual sums are equal; and recognizes as the obstacle a road-surface vertical object for which the radar-side and camera-side detection magnifications are equal. In this method, on condition that the horizontal and vertical magnifications minimizing the residual sums of the horizontal and vertical image histograms are approximately equal, an error within a predetermined range between the radar-side and camera-side detection magnifications is tolerated, the two magnifications are judged to be equal, and the road-surface vertical object is recognized as the obstacle (claim 1).

The obstacle recognition method of the present invention is further characterized in that, in the method of claim 1, when the horizontal and vertical magnifications minimizing the residual sums are approximately equal and both exceed the radar-side detection magnification, indicating that the obstacle is approaching at close range, the error range within which the camera-side and radar-side detection magnifications are judged equal is widened (claim 2).

Furthermore, the obstacle recognition method of the present invention is characterized in that, when the radar-side detection magnification exceeds both the horizontal and vertical magnifications minimizing the residual sums, and either of the differences between the radar-side detection magnification and those horizontal and vertical magnifications exceeds a set threshold, so that the image in the obstacle region is highly likely not a road-surface vertical object, the error range within which the camera-side and radar-side detection magnifications are judged equal is narrowed (claim 3).
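The adaptive tolerance of claims 2 and 3 can be sketched minimally as follows; the base tolerance, the widening and narrowing factors, and the threshold are assumed example values, not values stated in the patent.

```python
def equality_tolerance(k_radar, k_horiz, k_vert,
                       base_tol=0.05, threshold=0.15):
    """Return the error range within which the camera-side and radar-side
    detection magnifications are judged equal (all numeric values assumed)."""
    # Claim 2: both camera-side magnifications exceed the radar-side one,
    # i.e. the obstacle is approaching at close range and the detection
    # time offset matters more, so widen the tolerated error range.
    if k_horiz > k_radar and k_vert > k_radar:
        return base_tol * 2
    # Claim 3: the radar-side magnification exceeds both camera-side
    # magnifications and either difference exceeds the threshold; the
    # target is likely not a road-surface vertical object, so narrow it.
    if (k_radar > k_horiz and k_radar > k_vert
            and (k_radar - k_horiz > threshold
                 or k_radar - k_vert > threshold)):
        return base_tol / 2
    return base_tol
```

The design point is that the same comparison is kept throughout; only the tolerated error range moves with the evidence for or against a vertical object.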

First, according to the configuration of claim 1, on condition that the horizontal and vertical magnifications minimizing the residual sums of the horizontal and vertical image histograms are approximately equal, a difference between the radar-side and camera-side detection magnifications is tolerated within a predetermined range when recognizing obstacles ahead of the host vehicle. Even if there is a detection time offset between the radar and the camera, the offset is thus absorbed, and misrecognition (misjudging an obstacle as not being one) caused by the resulting mismatch between the radar-side and camera-side detection magnifications can be avoided as far as possible.

In this case, since the difference between the radar-side and camera-side detection magnifications is tolerated only on condition that the magnifications minimizing the residual sums of the horizontal and vertical image histograms are approximately equal, this tolerance avoids, as far as possible, misrecognizing as an obstacle something that is not one, such as a road-surface reflector like a manhole cover or a cat's eye (road stud).

Accordingly, without misrecognizing road-surface reflectors such as manhole covers or cat's eyes (road studs) as obstacles, the situation in which an obstacle such as a preceding vehicle is misjudged as not being an obstacle even though the radar and the camera are tracking it reliably can be prevented, and the obstacle recognition accuracy can be improved.

Next, according to the configuration of claim 2, when an obstacle approaches and the detection time offset between the radar and the camera becomes larger, the latest camera-side horizontal and vertical magnifications both exceed the latest radar-side detection magnification; the error range within which the camera-side and radar-side detection magnifications are judged equal is therefore widened, so that the above misrecognition caused by the detection time offset can be avoided even more reliably and the obstacle recognition accuracy further improved.

Furthermore, according to the configuration of claim 3, when the latest radar-side detection magnification exceeds both the latest camera-side horizontal and vertical magnifications, and in addition either of the differences between the radar-side detection magnification and the camera-side horizontal and vertical magnifications exceeds the set threshold, so that the recognition target is clearly unlikely to be a road-surface vertical object, the error range is conversely narrowed; misrecognition of the target as an obstacle is thereby reliably prevented, and the obstacle recognition accuracy is improved still further.

Next, in order to describe the present invention in more detail, one embodiment will be described with reference to FIGS. 1 to 14.

FIG. 1 is a block diagram of an obstacle recognition apparatus mounted on a vehicle (host vehicle) 1; FIG. 2 is a flowchart explaining its operation; FIG. 3 is a detailed flowchart of part of FIG. 2; FIG. 4 is an explanatory diagram of candidate-region detection based on the ranging results of a laser radar 2; FIG. 5 is an explanatory diagram of edge-image processing of an image captured by a monocular camera 3; and FIG. 6 is an explanatory diagram of an example of an edge histogram obtained from the edge image of FIG. 4.

FIG. 7 is an explanatory diagram of the image processing; FIGS. 8 and 9 are, respectively, a schematic diagram of driving conditions and a schematic diagram of camera imaging magnification for explaining the magnification calculation; FIG. 10 is a characteristic diagram of an example of the horizontal and vertical residual sums (individual residual sums) and the residual sum integrating them (integrated residual sum); FIG. 11 is a detailed flowchart of another part of FIG. 2; and FIG. 12 is a characteristic diagram of an example of calculated magnifications.

Further, FIG. 13 is an explanatory diagram of the influence of the detection time offset between the radar-side and camera-side detection magnifications when the obstacle is at a relatively large distance ahead of the host vehicle, and FIG. 14 is an explanatory diagram of the same influence when the obstacle has come close.

(Configuration)
The obstacle recognition apparatus of FIG. 1 includes, as the ranging radar that scans ahead of the vehicle 1, a general-purpose scanning laser radar (hereinafter simply "laser radar") 2, which is less expensive than a radio-wave radar, and, as the camera (image sensor) that images the area ahead of the vehicle, a compact and inexpensive monocular camera 3 of monochrome CCD construction. Needless to say, a radio-wave radar such as a millimeter-wave radar may be provided instead of the laser radar 2.

After the engine of the host vehicle 1 is started with the ignition key, the laser radar 2 repeatedly scans the area ahead by sweeping a laser pulse and outputs the ranging results to the microcomputer-based control ECU 4 of the host vehicle 1, while the monocular camera 3 continuously images the area ahead and outputs the captured image to the control ECU 4 as, for example, 8-bit-per-pixel luminance data.

Together with a memory unit 5 that holds various information (data) in freely readable and writable form, the control ECU 4 forms a recognition operation unit 6 serving as the recognition processing means described later. By sensor-fusion recognition processing of the laser radar 2 and the monocular camera 3, it determines whether the image in the obstacle region of the captured image is a road-surface vertical object such as a preceding vehicle from whether the horizontal and vertical magnifications minimizing the residual sums described later are equal, and recognizes as an obstacle such as a preceding vehicle a road-surface vertical object for which the radar-side and camera-side detection magnifications are equal. To this end, upon engine start it executes the preset obstacle recognition program of steps S1 to S7 in FIG. 2 (executing steps S41 to S45 of FIG. 3 in step S4), and provides the following means (a) to (h) in software.

(a) Candidate-region setting means
This means clusters the latest ranging results of the laser radar 2 and sets obstacle candidate regions ahead of the host vehicle. Specifically, based on the distance measurement data of the many reflection points of the laser radar 2, reflection points at close distances are clustered together so that, as shown in the screen view of the image P captured by the monocular camera 3 in FIG. 4(a), one or more cluster regions enclosed by frame lines are formed within the scanning range of the laser radar 2, and a cluster region matching the features of an obstacle is set as a candidate region Cc for an obstacle such as a preceding vehicle.

At this time, as shown in the schematic diagram of the radar ranging region in FIG. 4(b), an obstacle ahead of the host vehicle, such as the preceding vehicle enclosed by the frame in the figure, is normally captured by the candidate region Cc.
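As a toy illustration of the clustering step, consecutive range readings can be grouped by distance; the one-dimensional grouping rule and the gap value below are simplifications assumed for illustration, whereas the actual means clusters reflection points from a two-dimensional laser sweep.

```python
def cluster_ranges(ranges, gap=1.0):
    """Group consecutive radar range readings [m] whose neighbours lie
    within `gap` of each other; each group stands in for one cluster
    region from which a candidate region Cc would be chosen."""
    clusters, current = [], [ranges[0]]
    for r in ranges[1:]:
        if abs(r - current[-1]) <= gap:
            current.append(r)       # close to previous point: same cluster
        else:
            clusters.append(current)
            current = [r]           # large jump: start a new cluster
    clusters.append(current)
    return clusters
```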

(b) Relative-distance measuring means
This means measures, from the ranging results of the radar scan of the candidate region Cc, the moment-to-moment relative distance Z between the host vehicle 1 and the candidate object α of the obstacle in the candidate region Cc.

(c) Histogram computing means
This means computes the histograms of the horizontal and vertical binarized image edges of the candidate region Cc, as the target feature of the latest image P captured by the monocular camera 3, as the latest horizontal and vertical edge histograms Y and X. It further scales the stored histograms Ym and Xm of the horizontal and vertical binarized image edges, held in updatable form as the candidate-object feature, along their respective horizontal axes by each magnification K in a set range (0 < Kmin ≤ K ≤ Kmax, where Kmin is the minimum and Kmax the maximum), to compute reference edge histograms Yr (= K·Ym) and Xr (= K·Xm) for each magnification K.

At this time, in order to lighten the computational load on the control ECU 4 and speed up processing, the latest horizontal and vertical edge histograms Y and X are formed by summing horizontal and vertical image-edge information obtained by differentiating and binarizing the latest image P captured by the camera.

Specifically, for each frame in which an image P is obtained, the 8-bit-per-pixel luminance data of the candidate region Cc of the latest image P is differentiated and binarized in both the horizontal and vertical directions; as shown in FIG. 5, the 8-bit-per-pixel image of the candidate region Cc is converted into the edge image of that figure (gray portions are horizontal edges, white portions vertical edges), and this edge image is binarized to obtain horizontal and vertical 1-bit-per-pixel image-edge information in which an obstacle such as a preceding vehicle is the candidate object α.

Further, as shown in FIG. 6, the horizontal and vertical image-edge information is summed, horizontally for the horizontal image edges and vertically for the vertical image edges, to form the latest horizontal and vertical edge histograms Y and X, whose horizontal axes are the vertical and horizontal pixel positions, respectively.
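The histogram formation can be sketched for a single binary edge map as follows; this is an assumed simplification (the actual apparatus sums separate horizontal-edge and vertical-edge maps, and the 3×4 toy image is invented for illustration).

```python
def edge_histograms(edge):
    """edge: 2-D list of 0/1 pixels (rows x cols) from a binarized edge image.
    Y sums along each row (so it is indexed by vertical pixel position);
    X sums along each column (indexed by horizontal pixel position)."""
    rows, cols = len(edge), len(edge[0])
    Y = [sum(edge[r][c] for c in range(cols)) for r in range(rows)]
    X = [sum(edge[r][c] for r in range(rows)) for c in range(cols)]
    return Y, X
```

Only these one-dimensional sums are carried forward, which is what keeps the subsequent magnification search computationally light.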

Next, as shown in FIG. 7, the stored histograms (stored edge histograms) Ym and Xm of the horizontal and vertical binarized image edges serving as the candidate-object feature are formed by weighted averaging of the horizontal and vertical edge histograms computed for previous frames, in order to suppress the influence of sudden fluctuations in the reference edge histograms Yr and Xr of each horizontal and vertical magnification K, which are formed by scaling the two histograms Ym and Xm by K along their respective horizontal (position) axes.

The magnification K and the relative distance Z are related as follows.

That is, as shown in the driving schematic of FIG. 8, suppose the relative distance Z of the preceding vehicle 7 serving as the obstacle is Z at time t and Z + ΔZ at time t + Δt, and, in the imaging schematic of FIG. 9, the actual vehicle width W appears on the imaging surface of the monocular camera 3 as widths K·ω and ω at times t and t + Δt, respectively. The relative distances Z and Z + ΔZ are the distances from the lens focal point of the monocular camera 3 to the preceding vehicle 7, so if the distance from the lens focal point of the monocular camera 3 to the CCD light-receiving surface (image plane) is Cf, the following expressions (1) and (2) hold, and the relation between the relative distance Z and the magnification K is given by expression (3).

Cf : ω = (Z + ΔZ) : W   (1)

Cf : K·ω = Z : W   (2)

Z = ΔZ / (K − 1)   (3)
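Expression (3) can be checked numerically under the pinhole model of expressions (1) and (2); the focal distance, vehicle width, and distances below are assumed example values, not figures from the specification.

```python
# From (1)-(2): the on-image width is Cf * W / distance, so the
# magnification between the two frames is K = (Z + dZ) / Z, and
# expression (3) recovers Z from dZ and K alone.
Cf, W = 0.008, 1.7            # assumed focal distance and real width [m]
Z, dZ = 40.0, 2.0             # assumed relative distance and its change [m]
w_t  = Cf * W / Z             # image width K*omega at time t (distance Z)
w_dt = Cf * W / (Z + dZ)      # image width omega at time t + dt
K = w_t / w_dt                # = (Z + dZ) / Z
Z_recovered = dZ / (K - 1)    # expression (3)
```

Note that Cf and W cancel out of K, which is why the distance can be recovered from the image magnification without knowing the object's real width.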

Kmin and Kmax are set as in the following expressions (4) and (5). In these expressions, Z is the relative distance, ΔZ_max is the maximum change of the relative distance Z (the change when the host vehicle 1 is at maximum acceleration and the preceding vehicle 7 at maximum deceleration), and ΔZ_min is the minimum change of the relative distance Z (the change when the host vehicle 1 is at maximum deceleration and the preceding vehicle 7 at maximum acceleration); the maximum acceleration and maximum deceleration are each set based on experiments and the like.

Kmax = 1 + ΔZ_max / Z   (4)

Kmin = 1 + ΔZ_min / Z   (5)
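Expressions (4) and (5) can be transcribed directly; the numeric values in the check below are assumed, and the physical interpretation in the comments follows the text above.

```python
def magnification_bounds(Z, dZ_max, dZ_min):
    """Expressions (4)-(5): the search range of the magnification K follows
    from the largest and smallest physically possible change of the relative
    distance Z over one frame interval (dZ_max / dZ_min set experimentally)."""
    Kmax = 1 + dZ_max / Z   # host at maximum acceleration, lead at maximum deceleration
    Kmin = 1 + dZ_min / Z   # host at maximum deceleration, lead at maximum acceleration
    return Kmin, Kmax
```

Because the bounds scale with 1/Z, the search range tightens for distant targets and widens as the target comes close.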

(d) Residual-sum computing means
This means translates the reference edge histograms Yr and Xr of each magnification K along their respective horizontal axes y and x, for example within the predicted movement ranges My and Mx of FIG. 7, and computes the horizontal and vertical individual residual sums Vy and Vx, each the sum of absolute differences between the latest edge histogram Y or X and the reference edge histogram Yr or Xr of each magnification K, together with the integrated residual sum V obtained by adding the two individual residual sums Vy and Vx, and finds their respective minima Vy_min, Vx_min, and Vmin shown in FIGS. 10(a), (b), and (c).

Note that Ky0, Kx0, and K0 in FIG. 10 are the magnifications K that give the minima Vy_min, Vx_min, and Vmin.
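A minimal sketch of the residual-sum search over candidate magnifications follows; the nearest-neighbour scaling and the toy histograms are assumptions, and the translation of the reference histogram within the predicted movement range is omitted for brevity.

```python
def best_magnification(latest, stored, candidates):
    """For each candidate K, scale the stored histogram along its axis,
    take the sum of absolute differences against the latest histogram
    (the individual residual sum), and keep the K giving the minimum."""
    def scale(hist, K):                     # nearest-neighbour stretch by K
        n = len(hist)
        return [hist[min(int(i / K), n - 1)] for i in range(n)]
    best_k, best_v = None, float("inf")
    for K in candidates:
        v = sum(abs(a - b) for a, b in zip(latest, scale(stored, K)))
        if v < best_v:
            best_k, best_v = K, v
    return best_k, best_v
```

The returned best_k plays the role of Ky0 or Kx0 above: the magnification at which the scaled stored histogram best overlays the latest one.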

The reference edge histograms Yr and Xr of each magnification K are translated along their respective horizontal axes y and x within the predicted movement ranges My and Mx for the following reason.

That is, if the host vehicle 1 underwent no pitching motion due to vibration and an accurate position could always be detected from the output of the laser radar 2 without time delay, translating the reference edge histograms Yr and Xr would be unnecessary. In practice, however, it is difficult to make the ranging time of the laser radar 2 coincide exactly with the image-capture time of the monocular camera 3, and the host vehicle 1 also pitches vertically due to vibration and the like. To absorb the resulting misalignment of the coordinate axes of the laser radar 2 and the monocular camera 3 and detect the magnifications Ky0, Kx0, K0, etc. accurately, it is necessary to translate the reference edge histograms Yr and Xr within the predicted range and search for the positions giving the magnifications Ky0, Kx0, and so on.

The predicted movement ranges My and Mx are set based on experiments and the like, taking the pitching motion caused by the aforementioned vibration into account, and the reference edge histograms Yr and Xr are translated along their horizontal axes (the y and x directions) within the predicted movement ranges My and Mx.
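The translation search described above can be sketched as a sum of absolute differences minimized over integer bin shifts; all names here are illustrative assumptions, not identifiers from the patent:

```python
def residual_sum(latest, reference, shift_range):
    """Sum of absolute differences between two equal-length edge histograms,
    minimized over integer translations within the predicted movement range
    (cf. My, Mx in the document). Bins shifted outside the window are
    treated as empty."""
    n = len(latest)
    best = None
    for s in range(-shift_range, shift_range + 1):
        total = 0
        for i in range(n):
            j = i + s
            ref = reference[j] if 0 <= j < n else 0
            total += abs(latest[i] - ref)
        if best is None or total < best:
            best = total
    return best

# Identical histograms offset by one bin: the shift search recovers a
# zero residual, which a comparison without translation would miss.
assert residual_sum([0, 2, 5, 2, 0], [2, 5, 2, 0, 0], shift_range=2) == 0
```

This is why the translation matters: the same target displaced by pitching or timing skew still yields a small residual at its true magnification.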

Specifically, the predicted movement range Mx is determined by the pitching angle range of the host vehicle 1 and the relative distance Z, and varies with the relative distance Z; the predicted movement range My is determined by the yawing angle range of the host vehicle 1, the expected lateral movement range of the candidate object α, and the relative distance Z, and likewise varies with the relative distance Z.

If the candidate region Cc does not mistakenly enclose a road-surface object (an object not vertical to the road surface) such as a manhole cover, a road stud, or a road marking, but instead encloses an object standing vertically on the road surface such as a preceding vehicle, the radar search plane and the camera imaging plane coincide. In this case, the graphs of the magnification characteristics of the residual sums Vy, Vx, and V take the hyperbola-like shape of FIGS. 10(a) to 10(c), that is, a downward-convex quadratic-curve shape with a minimum, and moreover the magnifications Ky0 and Kx0 are equal (Ky0 = Kx0). If, on the other hand, the candidate region Cc mistakenly encloses a manhole cover, road stud, road marking, or the like, the region is incorrect and the radar search plane and the camera imaging plane do not belong to the same plane, so the graphs of the magnification characteristics of the residual sums Vy, Vx, and V do not take the downward-convex quadratic-curve shape, and the magnifications satisfy Ky0 ≠ Kx0.

This means therefore obtains the minimum values Vy_min, Vx_min, and Vmin of the residual sums Vy, Vx, and V, and detects the magnifications Ky0, Kx0, and K0c (= K0) at which they occur.

In practice, values approximately equal to the minimum values Vy_min, Vx_min, and Vmin of the residual sums Vy, Vx, and V (values within a set error range of each minimum) are treated as the minimum values Vy_min, Vx_min, and Vmin, and the magnifications at those values are detected as the magnifications Ky0, Kx0, and K0c (= K0) of the minimum values.

(e) First object discrimination means
This means forms the road-surface-vertical-object discrimination means of claim 3, and determines whether the candidate object α is an object standing vertically on the road surface according to whether at least the magnifications Ky0 and Kx0, at which the horizontal and vertical individual residual sums Vy and Vx reach their minimum values Vy_min and Vx_min, are equal (Ky0 = Kx0).

In practice, as described above, based on the magnifications at the approximate minimum values of the residual sums Vy, Vx, and V detected as the magnifications Ky0, Kx0, and K0c (= K0) of the minimum values Vy_min, Vx_min, and Vmin, it is determined whether the candidate object α is an object standing vertically on the road surface according to whether the horizontal and vertical magnifications Ky0 and Kx0 are approximately equal.

(f) Magnification detection means
This means detects the magnification K0 at which the integrated residual sum V reaches the minimum value Vmin as the camera-side detection magnification K0c of the obstacle.

For a roadside object that is not an obstacle, the stored information and the observed information change greatly, and the minimum value Vmin of the integrated residual sum V becomes equal to or greater than a predetermined roadside-object detection threshold Va. To prevent misrecognition caused by roadside objects and improve recognition accuracy, it is therefore preferable to detect the camera-side detection magnification K0c only when the minimum value Vmin is smaller than the roadside-object threshold Va.

(g) Second object discrimination means
This means determines, according to whether the camera-side detection magnification K0c equals the radar-side detection magnification K0r of the obstacle calculated from the change in the relative distance Z (K0c = K0r), whether the road-surface vertical object of the candidate object α is a valid detection target such as a preceding vehicle, or an invalid detection target such as spray from a preceding vehicle or dirt adhering to the laser radar 2.

That is, if the candidate object α is an invalid detection target such as spray from a preceding vehicle or dirt adhering to the laser radar 2, the search by the laser radar 2 would misrecognize the candidate object α as an obstacle even though no obstacle actually exists. Therefore, if the radar-side detection magnification K0r observed by the laser radar 2 and the camera-side detection magnification K0c measured from the captured image do not take the same value (K0c ≠ K0r), the road-surface vertical object of the candidate object α is judged to be an invalid detection target; only when K0c = K0r is the candidate object α judged to be a valid detection target, thereby preventing misrecognition.

(h) Recognition processing means
Based on the above discrimination of road-surface vertical objects and valid detection targets, this means basically determines, through the sensor-fusion recognition processing of the laser radar 2 and the monocular camera 3, whether the image in the candidate region Cc (obstacle region) of the captured image P shows an object standing vertically on the road surface such as a preceding vehicle, according to whether the magnifications Ky0 and Kx0 at which the horizontal and vertical individual residual sums Vy and Vx reach their minimum values Vy_min and Vx_min are equal (Ky0 = Kx0), and recognizes as an obstacle a candidate object α that is a road-surface vertical object for which the radar-side detection magnification and the camera-side detection magnification are equal.

That is, since Kx0 = Ky0 and K0c = K0r, it is judged that the ranging result based on the search by the laser radar 2 is correct and that the candidate region Cc is certainly an obstacle region, and the candidate object α is recognized as an obstacle from the image in the candidate region Cc.

However, if obstacles were recognized only by comparing the radar-side detection magnification K0r with the camera-side detection magnification K0c and judging from their agreement or disagreement whether the road-surface vertical object is an obstacle, then, because the detection methods of the laser radar 2 and the monocular camera 3 differ and, as described above, the detection by the laser radar 2 lags behind that of the monocular camera 3, this detection-time lag would cause the radar-side detection magnification K0r and the camera-side detection magnification K0c to disagree, particularly when an obstacle approaches, even though the laser radar 2 and the monocular camera 3 are in fact firmly tracking an obstacle such as a preceding vehicle, raising the likelihood of misrecognizing it as not being an obstacle.

Accordingly, the recognition processing means has a function of, when the magnifications Ky0 and Kx0 at which the horizontal and vertical individual residual sums Vy and Vx reach their minimum values Vy_min and Vx_min are approximately equal and the candidate object α is an object standing vertically on the road surface (an obstacle such as a preceding vehicle), tolerating an error within a predetermined range between the radar-side detection magnification K0r and the camera-side detection magnification K0c, judging the two magnifications equal, and recognizing the road-surface vertical object as an obstacle such as a preceding vehicle.

In this case, when the magnifications Ky0 and Kx0 are approximately equal and both are greater than the radar-side detection magnification K0r, the detection by the laser radar 2 lags behind that of the monocular camera 3, so that for a relatively approaching obstacle the latest detection result of the monocular camera 3 is obtained in a state closer than the detection result of the laser radar 2. It is therefore judged that the obstacle is approaching at close range, and the error range within which the camera-side detection magnification K0c and the radar-side detection magnification K0r are judged equal is widened.

Specifically, in this embodiment, the image confidence GCF (%) of equation (6) below is calculated, and by using this image confidence GCF as the obstacle recognition accuracy, the predetermined range is equivalently set variably, and an obstacle is recognized while tolerating an error between the radar-side detection magnification K0r and the camera-side detection magnification K0c.

At the same time, when the recognition target is clearly not a road-surface vertical object but is highly likely to be a road-surface reflector such as a manhole cover or cat's eye (road stud), or a ghost target, the image confidence GCF is actively lowered, equivalently making the predetermined range extremely narrow and reliably preventing misrecognition as an obstacle.

GCF = 100 − (Gx·|Kx − Kr| + Gy·|Ky − Kr| + Gxy·|Kx − Ky|) (6)

In equation (6), Kx denotes the camera-side (image-side) vertical magnification Kx0, Ky denotes the camera-side (image-side) horizontal magnification Ky0, Kr denotes the radar-side detection magnification K0r, and Gx, Gy, and Gxy denote coefficients.

The closer the image confidence GCF is to 100 (%), the more "vehicle-like" the target is, in other words the more likely it is a "road-surface vertical object" constituting an obstacle; the closer the image confidence GCF is to 0, the more likely it is a "road-surface reflector" such as a manhole cover or cat's eye (road stud), or a "ghost target".
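Equation (6) can be sketched directly; the coefficient values in the example are assumptions (the patent sets Gx, Gy, Gxy experimentally):

```python
def image_confidence(kx, ky, kr, gx, gy, gxy):
    # Equation (6): GCF in percent. kx, ky are the camera-side vertical and
    # horizontal magnifications (Kx0, Ky0); kr is the radar-side detection
    # magnification (K0r); gx, gy, gxy are experimentally set coefficients.
    return 100.0 - (gx * abs(kx - kr) + gy * abs(ky - kr) + gxy * abs(kx - ky))

# When all three magnifications agree, GCF is 100 (%); any disagreement,
# especially between Kx and Ky (weighted by the large Gxy), pulls it toward 0.
agree = image_confidence(1.05, 1.05, 1.05, gx=200, gy=200, gxy=800)
mismatch = image_confidence(1.05, 1.15, 1.05, gx=200, gy=200, gxy=800)
```

A single scalar in [0, 100] makes the tolerance band between K0c and K0r tunable through the coefficients alone, which is the ease-of-adjustment point the embodiment emphasizes.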

The coefficients Gx, Gy, and Gxy are variably set as follows, according to the magnitude relations among the magnifications Kx, Ky, and Kr, by the processing of steps S41 to S45 in FIG. 3.

(i) Based on the initial setting in step S41, normally Gx = m1, Gy = n1, and Gxy = mn1. Here m1, n1, and mn1 are constants set by experiments or the like, for example m1 = n1 and m1, n1 << mn1. Gxy is made larger than Gx and Gy because a recognition target whose horizontal and vertical magnifications Kx and Ky do not match is extremely unlikely to be a "road-surface vertical object" such as a preceding vehicle even if Kx and Ky are greater than Kr, so the reliability of obstacle recognition from the radar-side detection magnification K0r and the camera-side detection magnification K0c is lowered.

(ii) When the horizontal and vertical magnifications Ky and Kx agree within a set error range and are moreover greater than the radar-side detection magnification Kr, the recognition target (obstacle) is approaching as described above. To equivalently widen the predetermined range further and tolerate a larger time lag of the laser-radar-side detection magnification K0r, steps S42 and S43 reduce the coefficients to Gx = m2 (< m1), Gy = n2 (< n1), and Gxy = mn2 (<< mn1) on condition that Kx > Kr, Ky > Kr, |Kx − Kr| < Δa, |Ky − Kr| < Δb, |Kx − Ky| < Δc, and Kr > Kr(near) hold. Here m2, n2, mn2, Δa, Δb, Δc, and Kr(near) are threshold constants set by experiments or the like. By setting the constants m2, n2, and mn2, the image confidence GCF is made less likely to fall.

(iii) On the other hand, when the radar-side detection magnification Kr is greater than both image-side magnifications Kx and Ky (Kr > Kx, Kr > Ky) and either of the differences |Kr − Kx| and |Kr − Ky| exceeds a set threshold constant Δd, the recognition target is highly likely to be not a "road-surface vertical object" but a "road-surface reflector" or "ghost target". Steps S44 and S45 therefore enlarge the coefficients to Gx = m3 (> m1), Gy = n3 (> n1), and Gxy = mn3 (≥ mn1), lowering the reliability of obstacle recognition from the radar-side detection magnification K0r and the camera-side detection magnification K0c and equivalently making the predetermined range extremely narrow. Here m3, n3, and mn3 are also constants set by experiments or the like.
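The three cases (i) to (iii) can be sketched as a single selection function. The threshold and constant values, the dictionary layout, and the function name are all assumptions for illustration; the patent sets m1...mn3 and Δa...Δd experimentally:

```python
def select_coefficients(kx, ky, kr, thr, consts):
    """Sketch of steps S41-S45: pick (Gx, Gy, Gxy) from the magnitude
    relations among Kx, Ky, Kr. 'thr' holds the thresholds
    (da, db, dc, dd, kr_near); 'consts' maps each case (1: normal,
    2: approaching, 3: reflector/ghost) to its (m, n, mn) triple."""
    gx, gy, gxy = consts[1]                      # step S41: initial setting, case (i)
    if (kx > kr and ky > kr and
            abs(kx - kr) < thr["da"] and abs(ky - kr) < thr["db"] and
            abs(kx - ky) < thr["dc"] and kr > thr["kr_near"]):
        gx, gy, gxy = consts[2]                  # steps S42-S43: case (ii), target approaching
    elif kr > kx and kr > ky and (
            abs(kr - kx) > thr["dd"] or abs(kr - ky) > thr["dd"]):
        gx, gy, gxy = consts[3]                  # steps S44-S45: case (iii), likely reflector/ghost
    return gx, gy, gxy

thr = {"da": 0.1, "db": 0.1, "dc": 0.02, "dd": 0.1, "kr_near": 1.0}
consts = {1: (200, 200, 800), 2: (100, 100, 400), 3: (400, 400, 800)}
# Camera magnifications slightly ahead of the lagging radar one: case (ii),
# so the smaller coefficients keep GCF from falling.
approaching = select_coefficients(1.08, 1.07, 1.02, thr, consts)
```

Note how case (ii) shrinks the coefficients (tolerating the radar lag) while case (iii) enlarges them (vetoing reflectors), exactly the widening/narrowing of the "predetermined range" described above.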

(Operation)
Next, the operation of the recognition calculation unit 6 will be described, mainly with reference to FIGS. 2 and 11.

After the engine of the host vehicle 1 is started, in step S1 of FIG. 2 the candidate region setting means executes clustering processing and sets, moment by moment, a candidate region Cc regarded as the region of an obstacle such as the preceding vehicle 7; at the same time, the relative distance measuring means measures the relative distance Z moment by moment.

In step S2 of FIG. 2, the histogram calculation means applies image differentiation and binarization to each captured image P (for each field) to obtain its image edge information; in the subsequent calculations, instead of using the 8-bit-per-pixel luminance values as they are, the 1-bit-per-pixel edge information is used, reducing the amount of information to 1/8 and the amount of computation accordingly.

Then, in step S3 of FIG. 2, the histogram calculation means calculates, for each field, the latest horizontal and vertical edge histograms Y and X and the horizontal and vertical reference edge histograms Yr and Xr of each magnification K, and computes the feature quantity (vector) on the image of the candidate object α for the region Cc set by the clustering processing.

Here, the magnification K varies in the range from the minimum value Kmin to the maximum value Kmax set by the calculations of equations (4) and (5), and its unit step (calculation step width K_step) is set to, for example, 0.04 in consideration of the processing load of the control ECU 4 and the like.

Further, letting the latest vertical edge histograms X at times t, t+1, … be the current candidate-object feature quantities (vectors) Xt, Xt+1, …, and letting the stored edge histograms Xm held updatably in the memory unit 5 at times t, t+1, … be the feature quantities (vectors) Pt, Pt+1, … up to the previous time, the feature quantities Pt, Pt+1, … are calculated by a weighted moving average of the vertical edge histograms of previously calculated frames, taking the temporal continuity (correlation) of the captured images P into account so as to suppress the influence of sudden fluctuations, and are rewritten updatably so that the latest values are held.

Specifically, for example, the feature quantity Pt+1 of the stored edge histogram Xm at time t+1 is calculated by the weighted average of equation (7) below, based on the latest edge histogram Xt and the feature quantity Pt at time t, and is held until time t+1. Here H in the equation is a hysteresis coefficient set to an appropriate value with 0 < H < 1.

Pt+1 = H × Xt + (1 − H) × Pt (7)

The stored horizontal edge histogram Ym is likewise held while being updated with temporal hysteresis in the same manner.
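The update of equation (7) is a standard exponential smoothing; a minimal sketch (the value of H and all names are assumptions):

```python
def update_stored_histogram(latest, stored, h):
    """Equation (7): Pt+1 = H * Xt + (1 - H) * Pt, applied bin by bin,
    with hysteresis coefficient 0 < H < 1."""
    return [h * x + (1.0 - h) * p for x, p in zip(latest, stored)]

p = [0.0, 0.0, 0.0]
for _ in range(3):  # repeated fields pull the store toward the observation
    p = update_stored_histogram([8.0, 4.0, 0.0], p, h=0.5)
print(p)  # [7.0, 3.5, 0.0]
```

A one-field glitch therefore shifts the stored histogram by only a fraction H, which is the suppression of sudden fluctuations described above.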

When the horizontal and vertical edge histograms (Yt, Xt), (Yt+1, Xt+1), … serving as the candidate-object feature quantities at each time differ greatly from the held horizontal and vertical stored edge histograms Pt (= Ymt, Xmt), Pt+1 (= Ymt+1, Xmt+1), …, it can be judged either that a candidate object α different from the previous one has been recognized, or that the image features of the candidate object α have changed locally for some reason.

Next, the residual sums Vy, Vx, and V and the minimum values Vy_min, Vx_min, and Vmin based on the candidate-object feature quantities Pt, Pt+1, … at each time are calculated by the residual sum calculation means in step S3.

Specifically, this calculation consists, for example, of steps Q1 to Q11 in FIG. 11. When step Q1 confirms that the magnification K, gradually increased from the initial value Kmin, is within the search range Kmin ≤ K ≤ Kmax, step Q2 calculates the vertical individual residual sum Vx at that magnification K, and step Q3 determines whether the calculated individual residual sum Vx is smaller than the minimum value Vx_min set to an initial value at the start of the calculation; if Vx < Vx_min, the process moves to step Q4, updates the minimum value Vx_min to the calculated individual residual sum Vx, and updates the vertical detection magnification Kx0 to that magnification K.

Further, step Q5 calculates the horizontal individual residual sum Vy at that magnification K, and step Q6 determines whether the calculated individual residual sum Vy is smaller than the minimum value Vy_min set to an initial value at the start of the calculation; if Vy < Vy_min, the process moves to step Q7, updates the minimum value Vy_min to the calculated individual residual sum Vy, and also updates the horizontal detection magnification Ky0 to that magnification K.

Next, step Q8 calculates the integrated residual sum V (= Vx + Vy) from the individual residual sums Vx and Vy calculated at that magnification K, and step Q9 determines whether the calculated integrated residual sum V is smaller than the minimum value Vmin set to an initial value at the start of the calculation; if V < Vmin, the process moves to step Q10, updates the minimum value Vmin to the calculated integrated residual sum V, and updates the detection magnification K0 to that magnification K.

Then, step Q11 increments the magnification K stepwise to K + K_step, and the process returns to step Q1 and repeats from there. When step Q3 finds Vx ≥ Vx_min, the minimum of the individual residual sum Vx has been detected, so step Q4 is skipped and the process moves to step Q5; similarly, when step Q6 finds Vy ≥ Vy_min, the minimum of the individual residual sum Vy has been detected, so step Q7 is skipped and the process moves to step Q8.

Further, when step Q9 finds V ≥ Vmin, the minimum of the integrated residual sum V has been detected; the residual sums Vy, Vx, and V and the minimum values Vy_min, Vx_min, and Vmin at that point are detected as feature quantities of the candidate object α, the magnifications Ky0, Kx0, and K0 are detected, and the processing of FIG. 11 ends.

In steps Q3, Q6, and Q9, the residual sums Vx, Vy, and V are calculated while translating the vertical and horizontal reference edge histograms Xr and Yr of each magnification K along their horizontal axes (the x and y directions) within the predicted movement ranges Mx and My.
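The Q1 to Q11 sweep can be sketched as below. The residual-sum evaluations are passed in as callables, since in the embodiment they depend on the image data; everything here is an illustrative reconstruction, not the patent's code, and for simplicity the sketch tracks the minima over the whole range rather than stopping when a residual begins to rise:

```python
def sweep_magnifications(k_min, k_max, k_step, vx_at, vy_at):
    """Sketch of the FIG. 11 loop: sweep the magnification K from Kmin to
    Kmax, tracking the K that minimizes the vertical sum Vx, the horizontal
    sum Vy, and the integrated sum V = Vx + Vy. vx_at / vy_at stand in for
    the residual-sum evaluations at magnification K."""
    vx_min = vy_min = v_min = float("inf")
    kx0 = ky0 = k0 = None
    k = k_min
    while k <= k_max + 1e-9:             # step Q1: confirm K is in the search range
        vx = vx_at(k)                    # step Q2
        if vx < vx_min:                  # steps Q3-Q4
            vx_min, kx0 = vx, k
        vy = vy_at(k)                    # step Q5
        if vy < vy_min:                  # steps Q6-Q7
            vy_min, ky0 = vy, k
        v = vx + vy                      # step Q8
        if v < v_min:                    # steps Q9-Q10
            v_min, k0 = v, k
        k += k_step                      # step Q11
    return (kx0, vx_min), (ky0, vy_min), (k0, v_min)

# Residual sums shaped like the downward-convex curves of FIG. 10,
# both minimized near K = 1.04:
res = sweep_magnifications(0.95, 1.05, 0.01,
                           vx_at=lambda k: (k - 1.04) ** 2,
                           vy_at=lambda k: (k - 1.04) ** 2)
```

When the two individual minima land at (nearly) the same K, as in this example, the Ky0 = Kx0 condition of the first object discrimination means is satisfied.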

Based on the calculation results and the like, when the candidate region Cc was an appropriate region, the quadratic-curve characteristics shown, for example, in FIG. 12 were obtained as measured characteristics of the residual sums Vy, Vx, and V against the magnification K.

When the processing of FIG. 11 ends, the first object discrimination means determines whether the candidate object α is a road-surface vertical object according to whether the magnifications Ky0 and Kx0 of the minimum values Vy_min and Vx_min obtained in step S3 are equal (Ky0 = Kx0), and the magnification detection means detects the magnification K0 obtained in step S3 as the camera-side detection magnification K0c.

The second object discrimination means also detects, based on the change in the relative distance Z from the previous time, the magnification K obtained from equation (3) as the radar-side detection magnification K0r, and determines from whether K0c = K0r holds whether the candidate object α is a valid detection target such as a preceding vehicle, or an invalid detection target such as spray from a preceding vehicle or dirt adhering to the laser radar 2.

The process then moves from step S3 to step S4, calculates the image confidence GCF through steps S41 to S45 of FIG. 3, and moves to step S5; based on the discrimination of a road-surface vertical object (Ky0 = Kx0) and a valid detection target (K0c = K0r), taking the image confidence GCF into account, the recognition processing means detects that the candidate region Cc is an appropriate region and recognizes the candidate object α from the image of the candidate region Cc as an obstacle (a vehicle (preceding vehicle) in FIG. 2).

Next, the process moves to step S6. In this embodiment, to further improve recognition accuracy, the next observed position of the candidate region Cc of an obstacle such as a preceding vehicle is predicted from the current observed position using a Kalman filter and held rewritably in the memory unit 5; for example, in the next iteration of step S1, the validity of the candidate region Cc is examined from the error between this predicted position and the actually observed position, and when the error is too large, the candidate region Cc is invalidated, making the radar cluster detection robust.

Also, to allow comparison between the feature quantities obtained this time and those calculated next time, each feature quantity obtained this time is held rewritably in the memory unit 5; for example, in step S3, the validity of a calculated feature quantity is examined by comparison with the held previous feature quantity, and when the error is too large, the calculated feature quantity is invalidated to improve recognition accuracy.

Then, until the recognition processing is terminated by a control-mode switch operated by the driver or by stopping the engine of the host vehicle 1, the process returns from step S6 through step S7 to steps S1 and S2 and repeats steps S1 to S7, recognizing obstacles such as the preceding vehicle 7 moment by moment.

In this way, simple image calculation processing can prevent misrecognition of obstacles caused by road-surface reflections from cat's eyes, manholes, and the like, or by ghost targets, and the same effects as in the earlier application are obtained: changes (handover) of the recognition target can be accommodated, no knowledge about vehicle images is required at all, and no algorithm switching between day and night is needed. Moreover, even when there is a detection-time lag between the laser radar 2 and the monocular camera 3, that lag is tolerated on condition that the horizontal and vertical magnifications Kx and Ky approximately agree, so misrecognition as a non-obstacle, caused by the disagreement between the radar-side detection magnification K0r and the camera-side detection magnification K0c due to the detection-time lag between the laser radar 2 and the monocular camera 3, can be avoided as far as possible; in particular, misrecognition can be avoided extremely well when an obstacle approaches, which is when the detection-time lag between the laser radar 2 and the monocular camera 3 becomes larger.

Further, when an obstacle such as a preceding vehicle is recognized based on the image certainty factor GCF as in the present embodiment, it suffices, as described above, to set the threshold constants in the calculation formula for the image certainty factor GCF, so the recognition accuracy for obstacles such as a preceding vehicle can easily be improved.

That is, as shown in FIG. 13(a), when the obstacle candidate α in the candidate region Cc of the photographed image P is relatively far from the host vehicle and appears small, a time lag between the detections of the laser radar 2 and the monocular camera 3 has little effect: the error between the radar-side detection magnification K0r and the camera-side detection magnification K0c caused by that lag is small, and, as shown in FIG. 13(b), so is the resulting difference in recognition accuracy between them. However, as shown in FIG. 14(a), when the candidate α is relatively close to the host vehicle and appears large, the detection time lag has a large effect: the error between K0r and K0c grows, and, as shown in FIG. 14(b), so does the difference in recognition accuracy between them.
Note that lr, lx, and ly in FIGS. 13(b) and 14(b) show the recognition-accuracy characteristics derived from the radar-side detection magnification K0r and from the camera-side magnifications Kx and Ky, respectively.
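Why the same time lag hurts more at close range can be shown numerically. This is a sketch under simplifying assumptions (constant closing speed, ideal range measurements); the parameter names z0, v, frame, and lag are illustrative, not from the patent.

```python
def magnification(z_prev, z_curr):
    # Apparent image size scales inversely with range, so the frame-to-frame
    # magnification is the ratio of the previous range to the current one.
    return z_prev / z_curr

def lag_error(z0, v=10.0, frame=0.1, lag=0.05):
    """Magnification mismatch caused by a camera sampling delay.

    z0:    range [m] at the radar's first sample
    v:     closing speed [m/s]
    frame: interval [s] between two successive samples
    lag:   camera sampling delay [s] behind the radar
    """
    z = lambda t: z0 - v * t                          # constant-closing-speed model
    k_radar = magnification(z(0.0), z(frame))          # radar-side K0r
    k_camera = magnification(z(lag), z(lag + frame))   # camera-side K0c
    return abs(k_radar - k_camera)

# The same time lag yields a far larger magnification error at close range.
print(lag_error(z0=60.0))  # distant obstacle: error is tiny
print(lag_error(z0=6.0))   # close obstacle: error is two orders larger
```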

In such a case, by tolerating, under predetermined conditions, the error between the radar-side detection magnification K0r and the camera-side detection magnification K0c caused by the detection time lag between the laser radar 2 and the monocular camera 3, the false judgment that the target is not an obstacle, based on the mismatch between K0r and K0c, can be avoided as far as possible without increasing false recognition of obstacles due to road-surface reflections from cat's eyes (road studs), manholes, and the like, or due to ghost targets. In particular, false recognition at the approach of an obstacle, where the detection time lag between the laser radar 2 and the monocular camera 3 has a larger effect, can be avoided very well.

Moreover, the term Gxy·|Kx−Ky| in equation (6) prevents, still more reliably, road-surface manhole covers, cat's eyes (road studs), and the like from being falsely recognized as obstacles.

Furthermore, when it is clearly highly probable that the recognition target is not an object standing vertically on the road surface, Gx and Gy in equation (6) are increased to actively lower the image certainty factor GCF, and the error tolerance is conversely narrowed; this reliably prevents the target from being falsely recognized as an obstacle and further improves the obstacle recognition accuracy.
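Since equation (6) itself is not reproduced in this excerpt, the following is only a hypothetical reconstruction showing how the named quantities act: the Gxy·|Kx−Ky| penalty suppresses flat road markings, and larger gains Gx, Gy actively pull the certainty down.

```python
def image_certainty(kx, ky, k0r, gx=1.0, gy=1.0, gxy=5.0):
    """Hypothetical shape of an image certainty factor GCF.

    The patent's equation (6) is not given in this excerpt; only its
    penalty term Gxy*|Kx - Ky| and the gains Gx, Gy are named.  This
    sketch merely shows how such penalties drive the certainty down.
    """
    penalty = (gx * abs(kx - k0r)       # horizontal disagreement with radar
               + gy * abs(ky - k0r)     # vertical disagreement with radar
               + gxy * abs(kx - ky))    # flat markings give Kx != Ky
    return max(0.0, 1.0 - penalty)

# Raising Gx, Gy (done when the target is likely not a vertical object)
# lowers the certainty, which effectively narrows the error tolerance.
assert image_certainty(1.1, 1.1, 1.0, gx=4.0, gy=4.0) < \
       image_certainty(1.1, 1.1, 1.0, gx=1.0, gy=1.0)
```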

Therefore, while road-surface manhole covers, cat's eyes (road studs), and the like are not falsely recognized as obstacles, the situation in which a target is falsely judged not to be an obstacle even though the laser radar 2 and the monocular camera 3 are firmly capturing an obstacle such as a preceding vehicle can be prevented, and the obstacle recognition accuracy can be significantly improved.

Then, by controlling, for example, the automatic brake system of the host vehicle 1 based on this recognition result, a significant reduction in traffic accidents can be expected.

The present invention is not limited to the embodiment described above, and various modifications other than those described can be made without departing from its spirit. For example, the radar serving as the distance-measuring sensor need not be a scanning laser radar; a similarly scanning, inexpensive ultrasonic radar may be used, and in some cases a radio-wave radar such as a millimeter-wave radar may be used. Likewise, the camera serving as the imaging sensor is not limited to a monocular camera.

Furthermore, the obstacle-recognition processing program of the control ECU 4 may of course differ from those of FIGS. 2, 3, and 10, and any method may be used to calculate the camera-side detection magnification K0c based on the minimum of the residual sums between the horizontal and vertical image histograms Ym and Xm and the reference image histograms Yr and Xr at each magnification. The method of tolerating an error within a predetermined range between the radar-side detection magnification K0r and the camera-side detection magnification K0c is likewise not limited to the method of calculating the image certainty factor GCF described above; for example, a certainty factor omitting the term Gxy·|Kx−Ky| of equation (6) may be used, or a threshold for the tolerance range may simply be set appropriately.
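One possible form of the residual-sum search for the camera-side magnification can be sketched as follows. It assumes one-dimensional edge histograms and a simple absolute-difference residual; the helper name, the candidate list, and the resampling scheme are assumptions, not the patent's exact formulation.

```python
import numpy as np

def camera_side_magnification(hist_ref, hist_now, candidates):
    """Pick the magnification whose rescaled reference histogram best
    matches the current edge histogram (minimum residual sum)."""
    n = len(hist_now)
    best_k, best_res = None, np.inf
    for k in candidates:
        # Rescale the reference histogram's axis by magnification k,
        # resampling it onto the current histogram's bins.
        src = np.arange(n) / k
        scaled = np.interp(src, np.arange(len(hist_ref)), hist_ref,
                           left=0.0, right=0.0)
        res = np.abs(hist_now - scaled).sum()   # residual sum
        if res < best_res:
            best_k, best_res = k, res
    return best_k
```

With an edge pattern that has grown by a factor of 1.2 between frames, the search recovers that factor as the magnification minimizing the residual sum.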

Incidentally, the present invention can also be applied to the case where, in order to reduce the number of parts fitted to the host vehicle 1, the laser radar 2, the monocular camera 3, and so on of FIG. 1 also serve as sensors for other controls such as follow-up running control and brake control.

Brief Description of the Drawings
FIG. 1 is a block diagram of one embodiment of the present invention.
FIG. 2 is a flowchart for explaining the operation of FIG. 1.
FIG. 3 is a detailed flowchart of a part of FIG. 2.
FIG. 4 is an explanatory diagram of candidate-region detection based on the radar ranging result of FIG. 1.
FIG. 5 is an explanatory diagram of edge-image processing of the camera-photographed image of FIG. 1.
FIG. 6 is an explanatory diagram of an example of an edge histogram obtained from the edge image of FIG. 4.
FIG. 7 is an explanatory diagram of the image processing of FIG. 1.
FIG. 8 is a schematic running diagram for explaining the magnification calculation of FIG. 1.
FIG. 9 is a schematic diagram for explaining the camera-side detection magnification of FIG. 1.
FIG. 10 is a characteristic diagram of an example of the individual residual sums and the integrated residual sum of FIG. 1.
FIG. 11 is a detailed flowchart of another part of FIG. 2.
FIG. 12 is a characteristic diagram of an example of the magnifications calculated in FIG. 1.
FIG. 13 is an explanatory diagram of the effect of the detection time lag on the radar-side detection magnification and the camera-side detection magnification when an obstacle is at a relatively far distance ahead of the host vehicle.
FIG. 14 is an explanatory diagram of the effect of the detection time lag on the radar-side detection magnification and the camera-side detection magnification when an obstacle has approached.

Explanation of Symbols

1 Host vehicle
2 Scanning laser radar
3 Monocular camera
4 Control ECU
5 Memory unit
6 Recognition calculation unit

Claims (3)

1. An obstacle recognition method in which an area ahead of a host vehicle is repeatedly searched by a radar and a camera provided in the vehicle,
a radar-side detection magnification is obtained from a change in the measured distance to an obstacle, including a preceding vehicle ahead of the host vehicle, based on the search result of the radar,
a camera-side detection magnification is obtained by image processing of the camera's photographing result ahead of the host vehicle, based on the minimum of residual sums between the photographed image ahead of the host vehicle and reference image histograms at respective magnifications,
it is determined, by sensor-fusion recognition processing of the radar and the camera, whether the image of the obstacle region in the photographed image is an object standing vertically on the road surface, such as a preceding vehicle, according to whether the horizontal and vertical magnifications minimizing the residual sums become equal, and
the road-surface vertical object for which the radar-side detection magnification and the camera-side detection magnification become equal is recognized as the obstacle,
the method being characterized in that, on the condition that the horizontal and vertical magnifications minimizing the residual sums of the horizontal and vertical image histograms are substantially equal, an error within a predetermined range between the radar-side detection magnification and the camera-side detection magnification is tolerated, the radar-side detection magnification and the camera-side detection magnification are judged to be equal, and the road-surface vertical object is recognized as the obstacle.
2. The obstacle recognition method according to claim 1, characterized in that, when the horizontal and vertical magnifications minimizing the residual sums are substantially equal and both become larger than the radar-side detection magnification, so that a short-range approach of the obstacle is detected, the error range within which the camera-side detection magnification and the radar-side detection magnification are judged to be equal is widened.
3. The obstacle recognition method according to claim 1 or 2, characterized in that, when the radar-side detection magnification becomes larger than both the horizontal and vertical magnifications minimizing the residual sums, and either of the differences between the radar-side detection magnification and those horizontal and vertical magnifications becomes larger than a set threshold, so that the image of the obstacle region is highly likely not to be the road-surface vertical object, the error range within which the camera-side detection magnification and the radar-side detection magnification are judged to be equal is narrowed.
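The adaptive tolerance of claims 2 and 3 can be sketched as follows; the numeric tolerance values and the threshold are hypothetical, and claim 2's additional condition that Kx and Ky be substantially equal is noted but not repeated here.

```python
def tolerance(k0r, kx, ky, base=0.10, widen=0.20, narrow=0.05, thresh=0.15):
    """Adaptive tolerance on |K0r - K0c| sketched from claims 2 and 3.

    Claim 2: both image magnifications exceed the radar-side one, i.e. the
    obstacle is closing in, where the detection time lag matters most,
    so the tolerance is widened.
    Claim 3: the radar-side magnification exceeds both image magnifications
    with either difference above a threshold, i.e. the target is likely
    not a vertical object, so the tolerance is narrowed.
    """
    if kx > k0r and ky > k0r:
        return widen
    if k0r > kx and k0r > ky and (k0r - kx > thresh or k0r - ky > thresh):
        return narrow
    return base
```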
JP2006093603A 2006-03-30 2006-03-30 Obstacle recognition method Expired - Fee Related JP4753765B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006093603A JP4753765B2 (en) 2006-03-30 2006-03-30 Obstacle recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006093603A JP4753765B2 (en) 2006-03-30 2006-03-30 Obstacle recognition method

Publications (2)

Publication Number Publication Date
JP2007274037A JP2007274037A (en) 2007-10-18
JP4753765B2 true JP4753765B2 (en) 2011-08-24

Family

ID=38676434

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006093603A Expired - Fee Related JP4753765B2 (en) 2006-03-30 2006-03-30 Obstacle recognition method

Country Status (1)

Country Link
JP (1) JP4753765B2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5416026B2 (en) * 2010-04-23 2014-02-12 本田技研工業株式会社 Vehicle periphery monitoring device
JP5455772B2 (en) * 2010-04-27 2014-03-26 本田技研工業株式会社 Vehicle periphery monitoring device
JP5655497B2 (en) * 2010-10-22 2015-01-21 トヨタ自動車株式会社 Obstacle recognition device and obstacle recognition method
DE102013210928A1 (en) * 2013-06-12 2014-12-18 Robert Bosch Gmbh Method for distinguishing between real obstacles and apparent obstacles in a driver assistance system for motor vehicles
KR102299793B1 (en) * 2015-11-05 2021-09-08 주식회사 만도 Driving assistant apparatus and driving assistant method
CN109283538B (en) * 2018-07-13 2023-06-13 上海大学 Marine target size detection method based on vision and laser sensor data fusion
CN109444911B (en) * 2018-10-18 2023-05-05 哈尔滨工程大学 Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
CN109633688B (en) * 2018-12-14 2019-12-24 北京百度网讯科技有限公司 Laser radar obstacle identification method and device
CN109765563B (en) * 2019-01-15 2021-06-11 北京百度网讯科技有限公司 Ultrasonic radar array, obstacle detection method and system
JP7156195B2 (en) * 2019-07-17 2022-10-19 トヨタ自動車株式会社 object recognition device
CN111709356B (en) * 2020-06-12 2023-09-01 阿波罗智联(北京)科技有限公司 Method and device for identifying target area, electronic equipment and road side equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002159445A (en) * 2000-11-27 2002-06-04 Asahi Optical Co Ltd Electronic endoscope and scope of electronic endoscope
JP4070437B2 (en) * 2001-09-25 2008-04-02 ダイハツ工業株式会社 Forward vehicle recognition device and recognition method
JP2003244640A (en) * 2002-02-18 2003-08-29 Hiroshi Kondo Security system
JP2003329601A (en) * 2002-05-10 2003-11-19 Mitsubishi Rayon Co Ltd Apparatus and method for inspecting defect
JP2004117071A (en) * 2002-09-24 2004-04-15 Fuji Heavy Ind Ltd Vehicle surroundings monitoring apparatus and traveling control system incorporating the same
JP4175087B2 (en) * 2002-10-31 2008-11-05 日産自動車株式会社 Vehicle external recognition device
JP4407920B2 (en) * 2004-05-19 2010-02-03 ダイハツ工業株式会社 Obstacle recognition method and obstacle recognition device

Also Published As

Publication number Publication date
JP2007274037A (en) 2007-10-18

Similar Documents

Publication Publication Date Title
JP4753765B2 (en) Obstacle recognition method
EP3208635B1 (en) Vision algorithm performance using low level sensor fusion
US10922561B2 (en) Object recognition device and vehicle travel control system
US10345443B2 (en) Vehicle cruise control apparatus and vehicle cruise control method
JP6795027B2 (en) Information processing equipment, object recognition equipment, device control systems, moving objects, image processing methods and programs
JP5297078B2 (en) Method for detecting moving object in blind spot of vehicle, and blind spot detection device
US6670912B2 (en) Method for detecting stationary object located above road
US10366295B2 (en) Object recognition apparatus
JP4407920B2 (en) Obstacle recognition method and obstacle recognition device
US11119210B2 (en) Vehicle control device and vehicle control method
US10422871B2 (en) Object recognition apparatus using a plurality of object detecting means
JP6614247B2 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method and program
JP2002096702A (en) Vehicle-to-vehicle distance estimation device
JP2008123462A (en) Object detector
KR101326943B1 (en) Overtaking vehicle warning system and overtaking vehicle warning method
US8160300B2 (en) Pedestrian detecting apparatus
US11151395B2 (en) Roadside object detection device, roadside object detection method, and roadside object detection system
JP2019194614A (en) On-vehicle radar device, area detection device and area detection method
CN108027237B (en) Periphery recognition device
JPH08156723A (en) Vehicle obstruction detecting device
JP2008151659A (en) Object detector
WO2019065970A1 (en) Vehicle exterior recognition device
JP4407921B2 (en) Obstacle recognition method and obstacle recognition device
JP2002008019A (en) Railway track recognition device and rolling stock using railway track recognition device
CN114084129A (en) Fusion-based vehicle automatic driving control method and system

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20081112

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20101215

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110322

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110426

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110524

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110524

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140603

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees