JP2007172504A - Adhering matter detection device and adhering matter detection method - Google Patents

Adhering matter detection device and adhering matter detection method

Info

Publication number
JP2007172504A
JP2007172504A (application JP2005372441A)
Authority
JP
Japan
Prior art keywords
image
area
detection
search area
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2005372441A
Other languages
Japanese (ja)
Other versions
JP4798576B2 (en)
Inventor
Kousuke Sakagami
航介 坂上
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daihatsu Motor Co Ltd
Original Assignee
Daihatsu Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daihatsu Motor Co Ltd filed Critical Daihatsu Motor Co Ltd
Priority to JP2005372441A
Publication of JP2007172504A
Application granted
Publication of JP4798576B2
Expired - Fee Related
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To detect, with an extremely small image-processing load, that matter adhering to the host vehicle or to the camera is present in an image captured by a camera photographing the area ahead of the host vehicle.

SOLUTION: An image feature detection means detects pixel points of an image feature from the image data of images captured by a monocular camera 2, and the detection results are held in a storage means. A coincidence point detection means detects the pixel points at which the detection results of the image feature detection means coincide continuously for at least a prescribed time. A search region setting means sets, in the captured image, a basic search region and a sub search region shifted from it, and a judgment means counts, for each area of both search regions, the coincidence points detected by the coincidence point detection means and detects an adhering-matter area whose count is at or above a threshold, judging that adhering matter is present. All of this requires only extremely light processing.

COPYRIGHT: (C)2007, JPO&INPIT

Description

The present invention relates to an adhering matter detection device and an adhering matter detection method for detecting matter adhering to a host vehicle or to a camera that appears in images captured by the camera photographing the area ahead of the host vehicle.

Conventionally, a vehicle equipped with a color or monochrome camera such as a CCD monocular camera or a stereo camera for recognizing targets ahead of the host vehicle continuously photographs the area ahead of the vehicle with that camera while the vehicle is running, applies known image processing such as differential binarization to each successive captured image (luminance image), and recognizes targets such as preceding vehicles and white lines from the resulting differential binary images.

The camera is mounted, for example, on the rearview mirror mounting post inside the cabin and photographs the area ahead of the vehicle (outside the cabin) through its imaging lens and the vehicle's windshield.

Consequently, matter adhering to the host vehicle or to the camera, such as stickers on the windshield, mud or raindrops deposited on the windshield, or dust on the imaging lens, may appear in the captured image as noise, and this noise causes recognition errors and the like.

To address this, it has been proposed to detect the pixel points of contours (edges) as a feature of the captured image, connect them to form contour line segments, and identify the contour of the adhering matter from the fact that the shape and position of those segments do not change over time, thereby detecting the noise caused by the adhering matter (see, for example, Patent Document 1).
Japanese Patent No. 365541 (Claims, [0007], [0018]-[0023], FIG. 1, FIG. 3, etc.)

When, as in this prior art, the pixel points of contours (edges) in the captured image are connected to form contour line segments and the contour of the adhering matter in the captured image is identified from those segments to detect its presence, complicated image processing such as the connection and contour segmentation described above is required, and the image-processing load becomes large.

An object of the present invention is to detect, with an extremely small image-processing load, the presence of matter adhering to the host vehicle or to the camera in images captured by a camera photographing the area ahead of the host vehicle.

To achieve the above object, the adhering matter detection device of the present invention is a device that detects the presence of adhering matter in images captured ahead of the host vehicle by a camera mounted on the host vehicle, and comprises: image feature detection means for detecting pixel points of an image feature from the image data of the captured image; storage means for holding the detection results of the image feature detection means; coincidence point detection means for comparing the detection result of the image feature detection means for the image data of the latest captured image with the detection results, held in the storage means, of the image feature detection means for the image data of past captured images, and detecting the pixel points at which the detection results of the image feature detection means coincide continuously for at least a prescribed time; search region setting means for setting, in the captured image, a basic search region consisting of a plurality of areas into which the captured image is partitioned in a matrix, and a sub search region in which the basic search region is shifted vertically and horizontally to an arbitrary position so that each area of its matrix partition straddles areas of the basic search region; and judgment means for counting, for each area of both search regions, the number of coincidence points detected by the coincidence point detection means, detecting an adhering-matter area whose count is at or above a threshold, and judging that adhering matter is present (claim 1).

The adhering matter detection method of the present invention is a method of detecting adhering matter in images captured ahead of the host vehicle by a camera mounted on the host vehicle, in which: pixel points of an image feature are detected from the image data of the captured image and the detection results are held; the feature detection result for the image data of the latest captured image is compared with the held feature detection results for the image data of past captured images, and the pixel points at which the detection results coincide continuously for at least a prescribed time are detected; a basic search region consisting of a plurality of areas into which the captured image is partitioned in a matrix, and a sub search region in which the basic search region is shifted vertically and horizontally to an arbitrary position so that each area of its matrix partition straddles areas of the basic search region, are set in the captured image; and the number of detected coincidence points in each area of both search regions is counted, an adhering-matter area whose count is at or above a threshold is detected, and adhering matter is judged to be present (claim 2).

According to the inventions of claims 1 and 2, the pixel points of adhering matter for which an image feature is detected continuously for a prescribed period or longer in the captured images of the camera are counted separately in units of the areas of the basic search region, which partitions the captured image into a matrix, and of the areas of the sub search region, which is the basic search region shifted vertically and horizontally to an arbitrary position.

Even if the pixel points of the adhering matter lie at a position where the partition of the basic search region spreads them across several areas, the partition of the sub search region places them within a single area, so the count of that area is not reduced. Conversely, even if the pixel points are spread across several areas under the partition of the sub search region, the partition of the basic search region places them within a single area, so its count is not reduced. For adhering matter, therefore, the pixel-point count of the area at its position in either the basic or the sub search region reaches the threshold, and its presence can be detected reliably.
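As a minimal numeric illustration of this coverage argument (the cell size, threshold, and point coordinates below are hypothetical values chosen for the example, not taken from the embodiment), a small cluster of persistent edge points that straddles a cell boundary of the basic partition is split below the threshold there, yet falls into a single cell of the half-cell-shifted partition:

```python
# Hypothetical 1-D illustration: a 6-point cluster around x = 16 straddles the
# boundary between basic cells [8..15] and [16..23] (cell size 8), so each basic
# cell counts only 3 points; the sub partition, shifted by half a cell (4),
# collects all 6 points in one cell, and that count reaches the threshold.
CELL = 8            # cell edge length (hypothetical)
SHIFT = CELL // 2   # sub partition offset: half a cell
THRESHOLD = 5       # minimum count for an adhering-matter cell (hypothetical)

cluster_x = [14, 15, 15, 16, 16, 17]   # persistent edge points, 1-D for brevity

def counts_per_cell(points, offset):
    """Count points per cell of a partition shifted by `offset`."""
    counts = {}
    for x in points:
        cell = (x - offset) // CELL
        counts[cell] = counts.get(cell, 0) + 1
    return counts

basic = counts_per_cell(cluster_x, 0)      # {1: 3, 2: 3} -> no cell reaches 5
sub = counts_per_cell(cluster_x, SHIFT)    # {1: 6}       -> one cell reaches 5
print(any(v >= THRESHOLD for v in basic.values()))  # False
print(any(v >= THRESHOLD for v in sub.values()))    # True
```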

Furthermore, instead of connecting and grouping the contour (edge) pixel points of the captured image into contour line segments and identifying the contour of the adhering matter from those segments to detect its presence as in the prior art, the presence of adhering matter is judged from counts of pixel points per area into which the captured image is partitioned. The processing is simple and the image-processing load is light, so the presence of matter adhering to the host vehicle or to the camera can be detected in the images captured by the camera photographing the area ahead of the vehicle with an extremely small image-processing load.

Next, to describe the present invention in more detail, an embodiment will be described with reference to FIGS. 1 to 7.

FIG. 1 is a block diagram of the host vehicle 1, FIG. 2 shows the sensing state of a monocular camera 2 mounted on the host vehicle 1, and FIGS. 3(a) and 3(b) show captured images Pn and Pn+1 of the monocular camera 2 at times tn and tn+1 that contain adhering matter TG.

FIG. 4 is an explanatory diagram of the pixel points of a captured image handled by a control ECU described later, FIGS. 5(a) and 5(b) are an explanatory diagram of edge pixel points and an enlarged view of part of it, FIGS. 6(a) and 6(b) are explanatory diagrams of the basic search region and the sub search region, and FIG. 7 is a flowchart explaining the processing of the control ECU.

As shown in FIG. 1, the host vehicle 1 detects targets such as preceding vehicles ahead of it and white lines on the road surface, judges the possibility of collision with such targets, and performs warning display, automatic brake control, steering avoidance control, and the like. For this purpose it carries a color or monochrome CCD monocular camera 2 and a radar 3 such as a laser radar or millimeter-wave radar, and repeatedly scans the area ahead of the vehicle with the monocular camera 2 and the radar 3.

The running state of the host vehicle 1 is also detected by various sensors such as a vehicle speed sensor (not shown), a steering angle sensor 4, and a yaw rate sensor 5.

A control ECU 6 of microcomputer configuration that controls the entire host vehicle 1 processes the successive images captured by the monocular camera 2 to detect targets such as preceding vehicles ahead of the host vehicle, processes the successive scan results of the radar 3 to detect similar targets, and recognizes the targets and judges the possibility of collision and the like by well-known sensor fusion processing based on both detection results and the outputs of the sensors 4, 5, and so on. Based on this judgment and on the setting of a control switch 7, the vehicle normally performs so-called following travel (constant inter-vehicle distance) through accelerator-opening control by a throttle control unit 8, brake control by a brake control unit 9, and AT control by an AT control unit 10. When the possibility of collision becomes high, a display/warning unit 11 warns the driver, automatic braking is performed through the units 8 to 10, and, if necessary, collision avoidance control of the power steering is performed by a steering control unit 12.

As shown in FIG. 2, the monocular camera 2 is mounted, for example, on the support base of the rearview mirror 13 inside the cabin of the host vehicle 1 so as to face forward, and photographs the imaging range between the solid lines a and b in the figure, looking obliquely downward.

As shown in FIGS. 3(a) and 3(b), when matter adhering to the host vehicle 1 or to the monocular camera 2, such as stickers on the windshield 14, mud or raindrops deposited on the windshield 14, or dust on the imaging lens of the monocular camera 2, appears in the captured images as noise, the adhering matter TG occupies the same position in the successive captured images, for example in the images Pn and Pn+1 at times tn and tn+1.

When the host vehicle speed reaches a set value or higher, the control ECU 6 executes an adhering matter detection program as one of the programs that process the successive images captured by the monocular camera 2, and provides the following means. When the vehicle runs at a low speed below the set value or is stopped, for example at a traffic signal, the portions of the captured images Pn and Pn+1 other than the adhering matter TG also remain unchanged and detection of the adhering matter TG becomes difficult, so the adhering matter detection program is not executed in those situations.

(a) Image feature detection means
This means detects pixel points of an image feature from the image data of the captured image and holds the detection results for a set time, rewritably, in a RAM or the like serving as the storage means of the control ECU 6. Specifically, the image data of the captured image is differential binary image data (edge binary image data) of the luminance of the captured image, and this means detects the luminance-edge pixel points at which the differential binary value, which is the image feature of this data, is "1", indicating the presence of an edge.

If, as shown in FIG. 4, the luminance-edge pixel points that become "1" were examined one pixel point at a time for every pixel point p in the vertical and horizontal matrix of the region of interest ROI near the center of the captured image Pt, the number of pixel points p to be processed would be extremely large, increasing the processing load on the control ECU 6; moreover, even stationary adhering matter TG may shift slightly in position from one captured image Pt to the next depending on the running state.

For this reason, as shown in FIGS. 5(a) and 5(b), in this embodiment detection pixel points pi serving as search points are set, for example, at intervals of 8 pixel points p vertically and horizontally, and each detection pixel point pi is judged to be an edge ("1") when the number of "1"s among the nine (3 x 3) pixel points, namely pi itself and the eight pixel points p surrounding it, is at or above a set number. Only the detection pixel points pi determined in this way are then searched for the presence of an edge ("1"). The dashed arrow in FIG. 5(a) indicates the scanning direction (detection direction).
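A rough sketch of this sparse sampling is shown below. The 8-point spacing and the 3 x 3 neighbourhood come from the embodiment, while the vote count of 5, the NumPy representation, and the function name are assumptions for illustration only:

```python
import numpy as np

def detect_edge_points(edge_binary, step=8, min_votes=5):
    """Sparse edge sampling over a differential (edge) binary image.

    Detection points pi are placed every `step` pixels; pi is treated as an
    edge when the 3 x 3 neighbourhood (pi plus its 8 surrounding pixels)
    contains at least `min_votes` ones.  Returns a boolean map over the pi grid.
    """
    h, w = edge_binary.shape
    ys = np.arange(step, h - step, step)
    xs = np.arange(step, w - step, step)
    edge_points = np.zeros((len(ys), len(xs)), dtype=bool)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            votes = edge_binary[y - 1:y + 2, x - 1:x + 2].sum()
            edge_points[i, j] = votes >= min_votes
    return edge_points
```

Here `edge_binary` would be the differential binary image of one frame restricted to the region of interest ROI; the neighbourhood vote also gives some tolerance to the slight frame-to-frame shifts mentioned above.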

(b) Coincidence point detection means
This means compares the detection result of the image feature detection means for the image data of the latest (currently obtained) image captured by the monocular camera 2 with the detection results, held in the storage means, of the image feature detection means for the image data of past captured images (up to a fixed time before the present), and detects as edge pixel points the detection pixel points pi at which the detection results of the image feature detection means coincide continuously for at least a prescribed time. The fixed time and the prescribed time are set by experiment or the like and are, for example, on the order of seconds.
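One simple realisation of this persistence test, shown below, keeps a per-point run-length counter instead of storing every past detection map: the counter is incremented while a detection point stays an edge and reset when it does not, which is equivalent for a "continuously coinciding" test. The frame rate and the duration are assumed values:

```python
import numpy as np

FPS = 30                  # assumed camera frame rate
PERSIST_SEC = 2.0         # assumed prescribed time, on the order of seconds
PERSIST_FRAMES = int(FPS * PERSIST_SEC)

class CoincidencePointDetector:
    """Tracks how long each detection point pi has stayed an edge."""

    def __init__(self, shape):
        self.run_length = np.zeros(shape, dtype=np.int32)

    def update(self, edge_points):
        # edge_points: boolean pi map of the latest frame (see detect_edge_points)
        self.run_length = np.where(edge_points, self.run_length + 1, 0)
        # True where the edge result has coincided continuously for long enough
        return self.run_length >= PERSIST_FRAMES
```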

(c) Search region setting means
This means sets, in the captured image Pt, a basic search region consisting of a plurality of areas into which the captured image Pt is partitioned in a matrix, and a sub search region in which the basic search region is shifted vertically and horizontally to an arbitrary position so that each area of its matrix partition straddles areas of the basic search region.

Specifically, as shown in FIG. 6(a), it sets a basic search region Km in which the region of interest ROI of the captured image Pt is partitioned into matrix areas α, and, as shown in FIG. 6(b), a sub search region Ks in which the region of interest ROI is partitioned into matrix areas β shifted from the basic search region Km vertically and horizontally by a length shorter than one area α, for example by half the length of an area α.

The areas α and β are of the same size, each containing a plurality of detection pixel points pi vertically and horizontally. A plurality of sub search regions Ks with different shift amounts may be set, but in this embodiment only one is set in consideration of the processing load and the like.
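The two partitions can be sketched as a pair of index mappings from a detection point pi to its area in the basic grid Km and in the half-area-shifted sub grid Ks. The area size in detection points is an assumed value:

```python
CELL = 4            # assumed area size: detection points pi per side of an area
SHIFT = CELL // 2   # sub search region Ks shifted by half an area, vertically and horizontally

def basic_cell(i, j):
    """Area (row, col) of detection point (i, j) in the basic search region Km."""
    return i // CELL, j // CELL

def sub_cell(i, j):
    """Area (row, col) of the same detection point in the shifted sub search region Ks."""
    return (i - SHIFT) // CELL, (j - SHIFT) // CELL
```

Points in the half-area margin of Ks receive negative indices here; in the embodiment that margin corresponds to the peripheral strip that the optional outer search region Ke of FIG. 8 would cover.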

(d) Judgment means
This means counts, for each area α and β of the two search regions Km and Ks, the number of coincidence points detected by the coincidence point detection means, detects the areas α and β of adhering matter TG whose counts are at or above a set threshold, and judges that adhering matter is present.
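A sketch of the judgment step, reusing `basic_cell` and `sub_cell` from the previous sketch: persistent edge points are counted per area in both grids, and any area whose count reaches the threshold is reported. The threshold value is an assumption:

```python
import numpy as np

COUNT_THRESHOLD = 5   # assumed minimum number of persistent edge points per area

def find_deposit_areas(persistent, cell_fn):
    """Count persistent edge points per area (cell_fn is basic_cell or sub_cell)
    and return the areas whose count is at or above the threshold."""
    counts = {}
    for i, j in zip(*np.nonzero(persistent)):
        cell = cell_fn(int(i), int(j))
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: n for cell, n in counts.items() if n >= COUNT_THRESHOLD}

def deposit_present(persistent):
    """Adhering matter is judged present if either grid has at least one flagged area."""
    return bool(find_deposit_areas(persistent, basic_cell)
                or find_deposit_areas(persistent, sub_cell))
```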

With the above means, the control ECU 6 repeatedly executes, for example, the detection processes of steps S1 to S3 in FIG. 7.

In the fixed point detection of step S1, the image feature detection means detects the detection pixel points pi at which the differential binary image is "1" (edge present), and the coincidence point detection means detects as edge pixel points the detection pixel points pi at which the detection results of the image feature detection means coincide continuously for at least the prescribed time.

In the fixed area detection of step S2, the edge pixel points in each of the areas α and β set by the search region setting means are determined, and the areas α and β containing edge pixel points are detected.

In the adhering matter detection of step S3, the judgment means detects the areas α and β of adhering matter TG whose counts are at or above the set threshold, and judges that adhering matter is present if at least one such area α or β exists.
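Tying the sketches above together, one per-frame driver corresponding to steps S1 to S3 might look as follows; the speed gate, the grid size, and the helper names come from the earlier sketches and are assumptions rather than the ECU's actual program:

```python
SPEED_MIN_KMH = 30.0   # assumed set value below which the program is not run

# Grid shape must match the detection-point grid produced for the ROI (assumed size).
detector = CoincidencePointDetector(shape=(40, 60))

def process_frame(edge_binary, vehicle_speed_kmh):
    """One iteration of the adhering matter detection program."""
    if vehicle_speed_kmh < SPEED_MIN_KMH:
        return False                                # skipped at low speed or standstill
    edge_points = detect_edge_points(edge_binary)   # S1: edge sampling at points pi
    persistent = detector.update(edge_points)       # S1: points coinciding long enough
    return deposit_present(persistent)              # S2 + S3: per-area counts vs. threshold
```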

In this way, the presence of adhering matter can be detected by counting, in units of the areas α and β into which the captured image Pt is partitioned, the edge pixel points that appear at the same position for a prescribed time or longer. The processing is simpler than the conventional complicated image processing and the image-processing load is light, so the presence of matter adhering to the host vehicle 1 or to the monocular camera 2 can be detected in the images Pt captured by the monocular camera 2 photographing the area ahead of the vehicle with an extremely small image-processing load. Moreover, the position of the adhering matter TG in the image can be determined from the areas α and β that contain the edge pixel points.

Therefore, the adhering matter TG can be excluded from the captured image Pt, preventing erroneous recognition of targets and the like.

Furthermore, since the basic search region Km and the sub search region Ks are both set and the edge pixel points are counted separately in units of their respective areas α and β to detect adhering matter TG, even edge pixel points of adhering matter TG located where the partition of the basic search region Km spreads them across several areas α and reduces the count of each area α fall within a single area β under the partition of the sub search region Ks, so their count is not reduced there; likewise, edge pixel points located where the partition of the sub search region Ks spreads them across several areas β fall within a single area α under the partition of the basic search region Km, so their count is not reduced there. For adhering matter TG, therefore, the edge-pixel-point count of the area α or β at its position in either the basic search region Km or the sub search region Ks always reaches the threshold, its presence can be detected reliably, and reliability is greatly improved.

The present invention is not limited to the embodiment described above, and various modifications other than those described may be made without departing from its spirit. For example, when the basic search region Km and the sub search region Ks are set, the upper, lower, left, and right peripheral portions that do not reach the full width of an area α or β would go unused; therefore, as shown by the hatched upper, lower, left, and right peripheral portions in FIG. 8, outer peripheral search regions Ke may be set there as a second sub search region, and these outer peripheral regions Ke may also be partitioned into areas γ in the same manner as the basic search region Km and the sub search region Ks so that the presence or absence of adhering matter TG is detected in units of areas, further improving the detection accuracy for adhering matter TG.

In the present invention, the basic search region Km and the sub search regions Ks and Ke may of course differ from one another in size, shape, and the like.

The sizes and numbers of the areas α and β of the basic search region Km and the sub search region Ks, the continuous time used to recognize adhering matter TG, and the like may of course be set appropriately based on experiments and the like.

The present invention can likewise be applied when a stereo camera or the like is mounted on the host vehicle 1 instead of the monocular camera 2.

FIG. 1 is a block diagram of one embodiment of the present invention.
FIG. 2 is a perspective view of a mounting example of the monocular camera of FIG. 1.
FIGS. 3(a) and 3(b) are explanatory diagrams of examples of successive images captured by the monocular camera of FIG. 2.
FIG. 4 is an explanatory diagram of the pixel points of a captured image handled by the control ECU of FIG. 1.
FIGS. 5(a) and 5(b) are an explanatory diagram of edge pixel points and an enlarged view of part of it.
FIGS. 6(a) and 6(b) are explanatory diagrams of the basic search region and the sub search region.
FIG. 7 is a flowchart explaining the processing of the control ECU of FIG. 1.
FIG. 8 is an explanatory diagram of another example of setting search regions in the captured image.

Explanation of symbols

1 Host vehicle
2 Monocular camera
6 Control ECU
TG Adhering matter

Claims (2)

1. An adhering matter detection device that detects the presence of adhering matter in images captured ahead of a host vehicle by a camera mounted on the host vehicle, comprising: image feature detection means for detecting pixel points of an image feature from image data of the captured image; storage means for holding detection results of the image feature detection means; coincidence point detection means for comparing the detection result of the image feature detection means for the image data of the latest captured image with the detection results, held in the storage means, of the image feature detection means for the image data of past captured images, and detecting the pixel points at which the detection results of the image feature detection means coincide continuously for at least a prescribed time;
search region setting means for setting, in the captured image, a basic search region consisting of a plurality of areas into which the captured image is partitioned in a matrix, and a sub search region in which the basic search region is shifted vertically and horizontally to an arbitrary position so that each area into which the captured image is partitioned in a matrix straddles areas of the basic search region; and
judgment means for counting, for each area of both search regions, the number of coincidence points detected by the coincidence point detection means, detecting an adhering-matter area whose count is at or above a threshold, and judging that adhering matter is present.
2. An adhering matter detection method for detecting adhering matter in images captured ahead of a host vehicle by a camera mounted on the host vehicle, comprising:
detecting pixel points of an image feature from image data of the captured image and holding the detection results;
comparing the feature detection result for the image data of the latest captured image with the held feature detection results for the image data of past captured images, and detecting the pixel points at which the detection results coincide continuously for at least a prescribed time;
setting, in the captured image, a basic search region consisting of a plurality of areas into which the captured image is partitioned in a matrix, and a sub search region in which the basic search region is shifted vertically and horizontally to an arbitrary position so that each area into which the captured image is partitioned in a matrix straddles areas of the basic search region; and
counting the number of detected coincidence points in each area of both search regions, detecting an adhering-matter area whose count is at or above a threshold, and judging that adhering matter is present.
JP2005372441A 2005-12-26 2005-12-26 Attachment detection device Expired - Fee Related JP4798576B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005372441A JP4798576B2 (en) 2005-12-26 2005-12-26 Attachment detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005372441A JP4798576B2 (en) 2005-12-26 2005-12-26 Attachment detection device

Publications (2)

Publication Number Publication Date
JP2007172504A true JP2007172504A (en) 2007-07-05
JP4798576B2 JP4798576B2 (en) 2011-10-19

Family

ID=38298947

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005372441A Expired - Fee Related JP4798576B2 (en) 2005-12-26 2005-12-26 Attachment detection device

Country Status (1)

Country Link
JP (1) JP4798576B2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05205175A (en) * 1992-01-24 1993-08-13 Hitachi Ltd Body detection device
JPH10269365A (en) * 1997-03-24 1998-10-09 Omron Corp Characteristic extracting method, and object recognition device using the method
JP2001204013A (en) * 2000-01-21 2001-07-27 Nippon Seiki Co Ltd Object detector
JP2003259358A (en) * 2002-03-06 2003-09-12 Nissan Motor Co Ltd Apparatus and method for detecting dirt on camera

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012038048A (en) * 2010-08-06 2012-02-23 Alpine Electronics Inc Obstacle detecting device for vehicle
JP2017187861A (en) * 2016-04-01 2017-10-12 キヤノン株式会社 Information processor and control method thereof
JP2018060422A (en) * 2016-10-06 2018-04-12 株式会社Soken Object detection device
JP2019133333A (en) * 2018-01-30 2019-08-08 株式会社デンソーテン Adhesion detection apparatus and adhesion detection method
JP7210882B2 (en) 2018-01-30 2023-01-24 株式会社デンソーテン Attached matter detection device and attached matter detection method
JP2019159346A (en) * 2018-03-07 2019-09-19 オムロン株式会社 Imaging apparatus
CN115382821A (en) * 2022-08-29 2022-11-25 石家庄开发区天远科技有限公司 Vehicle-mounted camera cleaning device and method

Also Published As

Publication number Publication date
JP4798576B2 (en) 2011-10-19

Similar Documents

Publication Publication Date Title
US11393217B2 (en) Vehicular vision system with detection and tracking of objects at the side of a vehicle
US11787338B2 (en) Vehicular vision system
US9774790B1 (en) Method for enhancing vehicle camera image quality
JP5421072B2 (en) Approaching object detection system
JP3716623B2 (en) Thermal detector
US11532233B2 (en) Vehicle vision system with cross traffic detection
JP2003296736A (en) Device for detecting obstacle and method thereof
JP2011175468A (en) Boundary line detection device
JP2010039634A (en) Image processor
US20180114078A1 (en) Vehicle detection device, vehicle detection system, and vehicle detection method
JP4798576B2 (en) Attachment detection device
JP3868915B2 (en) Forward monitoring apparatus and method
WO2011016257A1 (en) Distance calculation device for vehicle
JP4601376B2 (en) Image abnormality determination device
JP3942289B2 (en) Vehicle monitoring device
JP3532896B2 (en) Smear detection method and image processing apparatus using the smear detection method
JP2006036048A (en) Vehicle light controlling device
JP2003271968A (en) Device and method for detecting mobile-body
JP2007156919A (en) Road section line detector
JP3735468B2 (en) Mobile object recognition device
JPH09240397A (en) Informing device of vehicle in rear side direction
JP2006286010A (en) Obstacle detecting device and its method
JP2004341979A (en) Obstacle detection device and obstacle detecting method

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080612

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20100715

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100907

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20101104

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110726

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110726

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140812

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees