JP2002175534A - Method for detecting road white line - Google Patents

Method for detecting road white line

Info

Publication number
JP2002175534A
JP2002175534A (application JP2000371645A)
Authority
JP
Japan
Prior art keywords
white line
white
edge
luminance
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2000371645A
Other languages
Japanese (ja)
Other versions
JP3589293B2 (en)
Inventor
Satoshi Terakubo
敏 寺久保
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sumitomo Electric Industries Ltd
Original Assignee
Sumitomo Electric Industries Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sumitomo Electric Industries Ltd filed Critical Sumitomo Electric Industries Ltd
Priority to JP2000371645A priority Critical patent/JP3589293B2/en
Publication of JP2002175534A publication Critical patent/JP2002175534A/en
Application granted granted Critical
Publication of JP3589293B2 publication Critical patent/JP3589293B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To reliably detect the position where a white line actually exists, using as simple an algorithm and as inexpensive an equipment configuration as possible.

SOLUTION: The luminance spatial differential values of the pixels in an image captured by an on-vehicle camera are computed, white line edges are detected from the positions where those values show extrema, and detected edges with similar luminance values are collected as white line candidates 1 and 2, which are grouped according to their positional relationships to detect the white line. Because the pixel positions where the luminance spatial differential value (the difference between adjacent luminance values) shows an extremum are used, edges are detected stably, unaffected by absolute luminance or contrast. A portion whose luminance varies because of a shadow, fading, or the like is not forcibly detected together with its neighbors but as a fragmented white line candidate; the candidates are then integrated according to their positional relationships (direction and position), so the white line can be detected without being affected by disturbances even when it is broken or faded.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

TECHNICAL FIELD OF THE INVENTION: The present invention relates to a road white line detection method for analyzing a forward image captured by an on-vehicle camera and recognizing the boundary lines of the driving lane (white lines, yellow lines, solid lines, broken lines, and other variants exist, but they are collectively referred to herein as "white lines").

[0002]

DESCRIPTION OF THE RELATED ART: In realizing driver assistance systems for automobiles, much is expected of image-based road environment recognition. One form of road environment recognition is automatic recognition of lane white lines, for which various methods have been developed. Oike, "Road White Line Recognition by a Model-Based Recognition Method" (IEICE Technical Report PRMU99-211, January 2000), proposes white line recognition using a string model that requires no binarization.

[0003] Ninomiya, Takahashi, and Ota, "A High-Speed Pattern Matching Method and Its Application to Lane Detection" (Proceedings of the 3rd Next-Generation Lane Display/Recognition Technology Workshop, Ninken 3-4-1, November 1999), propose a method that performs road-structure matching by fitting templates.

[0004]

PROBLEMS TO BE SOLVED BY THE INVENTION: The former "model-based..." method requires no binarization and keeps the hardware simple, but because it uses a continuous active contour model it cannot distinguish solid lines from broken lines; moreover, its two models are defined to converge on the left and right white lines respectively, with no provision for crossing a white line, so it cannot handle lane changes.

[0005] The latter "high-speed pattern matching..." method is robust against faded white lines and bad weather and has low computational cost, but it must hold many road-shape templates and therefore requires a huge amount of memory. The present invention therefore aims to realize a road white line detection method that can be implemented with as simple an algorithm and as inexpensive an equipment configuration as possible, that reliably detects the position where a white line actually exists, and that can also be applied to advanced vehicle control in the future.

[0006]

MEANS FOR SOLVING THE PROBLEMS: The road white line detection method of the present invention computes the luminance spatial differential value of each pixel in an image captured by an on-vehicle camera, extracts white line edges based on the positions where that value shows an extremum, collects detected edges with similar luminance values into white line candidates, groups the collected candidates based on their positional relationships, and detects one white line on each of the left and right sides (claim 1).

[0007] According to this method, edge detection does not use the commonly employed binarization approach; instead it adopts the pixel positions where the luminance spatial differential value (the difference between adjacent luminance values) shows an extremum, so edges are detected stably regardless of absolute luminance or contrast. Then, by collecting detected edges with similar luminance values into white line candidates, portions whose luminance differs because of shadows, fading, and the like are not forcibly lumped together but are detected as fragmented white line candidates.

[0008] Furthermore, because the white line candidates are integrated based on their positional relationships (direction and position), white lines can be detected even when they are broken or faded, without being affected by disturbances. In particular, unlike methods that fit templates to detect white lines, detection is based on the actual white line positions, so the white line position is recognized correctly. Finally, one white line is extracted on each of the left and right sides. When the extrema are sought, it is preferable to determine the edge extraction range using the mean and standard deviation of the luminance spatial differential values (claim 2).

[0009] The edge extraction range is then determined dynamically for each image rather than being fixed, so the influence of disturbances found mainly on the road surface (stains, ruts, and the like) can be largely excluded, and the method becomes robust to changes in brightness. As the criteria for grouping white line candidates, it is preferable to use the lateral offset direction and the joining angle between candidates (claim 3). Since the candidates are integrated using the shape features of the white line as drawn on the road surface (information in the image plane), the method is not easily affected by disturbances on the road surface or in the background.

[0010] In the procedure for detecting one white line on each side, it is preferable to compute, from the Hough transform parameters, the positional similarity of the integrated groups between temporally consecutive images and to extract the single white line group with the highest similarity (claim 4). This determines one continuous white line; if the similarity is small, the white line can be judged to have been lost. Moreover, because Hough transform parameters are used, the same simple evaluation expression can be applied regardless of how the white line appears, ensuring uniform parameter sensitivity. (If, for example, the coefficients a and b of the linear expression y = ax + b were used as parameters, the parameter weights would have to change near a = ∞ and near a = 0.) It is also preferable to set an overlap region at the center of the image range in which the left and right white line candidates are sought (claim 5).

[0011] This allows a smooth transition from the left white line to the right white line, or vice versa, when the vehicle changes lanes.

[0012]

DESCRIPTION OF THE PREFERRED EMBODIMENTS: Embodiments of the present invention are described below in detail with reference to the accompanying drawings. 1. Definition of the image data (see FIG. 1): The coordinate system of an image captured by the on-vehicle camera has its origin (0, 0) at the upper left, with the x axis pointing right and the y axis pointing down. The image size is 640 pixels horizontally (x direction) and 480 pixels vertically (y direction).

[0013] Pixel values are 8-bit, expressed in 256 levels from 0 to 255, where 0 is black and 255 is white. The pixel value at position (x, y) is written Im[y][x]. FIG. 2 is an overall flowchart of the white line detection process. The process is performed each time one image is captured; images are captured at 30 frames per second (step S1). Each step is described below. 2. Preprocessing. 2.1 Reduction: Every other pixel is sampled in both the x and y directions, which reduces the processing load and shortens the processing time.

[0014] 2.2 Smoothing: The entire image is smoothed by convolving the pixel values with the operator below.

[0015]

(Equation 1)

[0016] 3. Edge detection: The range in which white lines may exist is limited to the lower half of the screen, and the following processing is applied only to that region. 3.1 Computing the pixel edge intensity: Based on the spatial derivative (difference), the edge intensity at position (x, y) is defined as follows and used for edge detection.

[0017] Edge intensity = (Im[y][x+3] + Im[y][x+2] − Im[y][x−2] − Im[y][x−3]) / 10. The space of edge intensity values is called the "edge space". This expression is equivalent to averaging a one-dimensional Prewitt-type spatial derivative (−1 0 1) over the five pixels around the point of interest. 3.2 Edge detection: Edges are detected by selecting the pixels that show extrema in the edge space. Each selected edge carries a signed (±) value, on the basis of which it is grouped in later processing (described below).
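The edge-intensity operator can be sketched in a few lines (a minimal illustration; the helper name `edge_intensity` and the toy pixel row are ours, not the patent's):

```python
def edge_intensity(row, x):
    """Edge intensity at column x of one image row, as defined in the
    patent: (Im[x+3] + Im[x+2] - Im[x-2] - Im[x-3]) / 10.  This equals
    a one-dimensional Prewitt derivative (-1 0 1) averaged over the
    five pixels centred on x."""
    return (row[x + 3] + row[x + 2] - row[x - 2] - row[x - 3]) / 10.0

# A step edge from dark (50) to bright (200) between columns 7 and 8:
row = [50] * 8 + [200] * 8
print(edge_intensity(row, 8))  # 30.0 -> strong positive response at the step
print(edge_intensity(row, 3))  # 0.0  -> no response in a flat region
```

Note that the operator responds with a sign: a dark-to-bright transition gives a positive extremum and a bright-to-dark transition a negative one, which is exactly the ± value used for grouping.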

[0018] However, to exclude the influence of road-surface noise and the like, the extremum search range is limited. Specifically, the mean and standard deviation of the edge intensities in the lower half of the screen are computed, and only values outside the following range are searched for extrema. Upper bound: mean edge intensity + 1.5 × standard deviation. Lower bound: mean edge intensity − 1.5 × standard deviation. If the edge intensity distribution in the edge space follows a normal distribution, the range between these bounds contains about 86.6% of all pixels; as a result, extremum pixels are extracted from the roughly 6.7% of pixels falling in each of the upper and lower search ranges. This speeds up the processing.
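This dynamic thresholding can be sketched as follows (the function names are ours, and using the population standard deviation `pstdev` is an assumption — the patent does not specify which estimator it uses):

```python
import statistics

def extremum_search_bounds(intensities):
    """Mean +/- 1.5 x standard deviation of the edge intensities in the
    lower half of the screen; only values OUTSIDE these bounds are
    searched for extrema."""
    mean = statistics.mean(intensities)
    sd = statistics.pstdev(intensities)
    return mean - 1.5 * sd, mean + 1.5 * sd

def extremum_candidates(intensities):
    """Indices whose edge intensity falls outside the search bounds."""
    lo, hi = extremum_search_bounds(intensities)
    return [i for i, v in enumerate(intensities) if v < lo or v > hi]

# Mostly flat road surface with one strong +/- edge pair at the end:
print(extremum_candidates([0] * 20 + [30, -30]))  # [20, 21]
```

Because the bounds are recomputed per image, a uniformly dark or bright frame still yields a small candidate set, which is what makes the step robust to brightness changes.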

[0019] 4. Extraction of white line candidates: The edges extracted above are grouped into white line candidates (contiguous pieces of a white line) by the following procedure. (1) An edge on the bottom row of the image is registered as the start point G0 of a white line candidate. (2) An edge that is a neighbor of the start point G0 and whose intensity is similar to that of G0 is registered as a new member G of the candidate; this step is then repeated. The start point G0 and the members G registered from it together constitute one white line candidate.

[0020] Neighboring pixels are searched on up to three lines above the current one; beyond three lines, the edge is registered as the start point of a new white line candidate. The horizontal search range is (distance to the searched line) × (±3 pixels). For example, if there is no edge on the line immediately above and the second line up is searched, ±6 pixels are examined: relative to position (x, y), the search on the second line up runs from (x−6, y−2) to (x+6, y−2).
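The neighbor-search rule can be written compactly (a sketch; the return convention is ours):

```python
def search_window(x, y, lines_up):
    """Horizontal search range when looking `lines_up` rows above (x, y):
    (distance to the searched line) x (+/-3 pixels).  Beyond three
    lines no window exists, and a new white line start point is
    registered instead."""
    if not 1 <= lines_up <= 3:
        return None
    half = 3 * lines_up
    return (x - half, x + half), y - lines_up

# The patent's example: no edge one line up, so search the second line up.
print(search_window(10, 100, 2))  # ((4, 16), 98): (x-6, y-2) .. (x+6, y-2)
print(search_window(10, 100, 4))  # None: register a new start point instead
```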

[0021] Intensity similarity is judged by whether an edge is within ±50% of the edge intensity of the start point G0. 5. Grouping of white line candidates: The extracted white line candidates are grouped by the following procedure. (1) Candidates with at least a certain number of constituent pixels are sorted into left-white-line and right-white-line candidates. The left and right search areas are overlapped so that tracking can continue as far as possible even during a lane change, when the left and right white lines swap positions.

[0022] Specifically, the lower limit on the number of constituent pixels of a white line candidate is 3. If the x coordinate of the start point G0 is at most 160 + α and the edge intensity is negative, the candidate is judged to be the right edge of the left white line; if the x coordinate of G0 is at least 161 − α and the edge intensity is positive, it is judged to be the left edge of the right white line. The coordinate 160 is the halfway point, given that the image width after reduction in the x direction is 640 / 2 = 320 pixels.
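The left/right selection rule can be sketched as below (the value of α and the half-width 160 follow the text; the function name is ours). The sign condition keeps the two tests mutually exclusive even inside the overlap zone:

```python
ALPHA = 32  # overlap width: about 10% of the 320-pixel reduced image width

def classify_start_point(x0, intensity):
    """Sort a white line candidate by its start point G0."""
    if x0 <= 160 + ALPHA and intensity < 0:
        return "left"   # right edge of the left white line
    if x0 >= 161 - ALPHA and intensity > 0:
        return "right"  # left edge of the right white line
    return None

print(classify_start_point(100, -12))  # left
print(classify_start_point(250, 12))   # right
print(classify_start_point(170, -12))  # left (inside the overlap zone)
```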

[0023] α accounts for the overlap and is set to about 10% of the image width (32 pixels). (2) White line candidates are grouped in order from the bottom (largest y coordinate). The conditions are: (a) the relationship between the straight line obtained by extending the lower, reference candidate and the start point of the candidate under inspection lies within the coupling range; and (b) the difference θ between the slopes of the two lines is within a predetermined angle range.

[0024] The "coupling range" is defined as shown in FIG. 3(a): let L be the horizontal line segment drawn from the start point G0 of white line candidate 2, the candidate under inspection, and let P be the point where L crosses the straight line obtained by extending the lower, reference white line candidate; the length G0P must lie within ± a predetermined number of pixels, taken as positive when P is to the right of G0 and negative when P is to its left. In FIG. 3(a) the predetermined number is 10. The "predetermined angle range" is −5° to +20° when P is to the right of G0, as shown in FIG. 3(b), and −20° to +5° when P is to the left, as shown in FIG. 3(c).
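The two grouping conditions can be sketched as one predicate (representing the near-vertical reference candidate as a function x = f(y) is our assumption, not the patent's formulation):

```python
def may_join(ref_x_at, g0, slope_diff_deg, max_offset=10):
    """Grouping test for two white line candidates.
    ref_x_at(y): x coordinate of the extended reference line at height y.
    g0: start point (x, y) of the candidate under inspection.
    slope_diff_deg: angle difference between the two lines."""
    x0, y0 = g0
    offset = ref_x_at(y0) - x0          # signed G0P: + when P is right of G0
    if abs(offset) > max_offset:        # condition (a): coupling range
        return False
    if offset >= 0:                     # condition (b): side-dependent range
        return -5 <= slope_diff_deg <= 20
    return -20 <= slope_diff_deg <= 5

ref = lambda y: 0.5 * y                 # reference candidate: x = 0.5 y
print(may_join(ref, (46, 100), 10))     # True: offset +4 px, angle in range
print(may_join(ref, (46, 100), 25))     # False: angle outside -5..+20 deg
print(may_join(ref, (70, 100), 0))      # False: offset -20 px exceeds +/-10
```

The asymmetric angle ranges encode the expectation that a lane marking leaning toward P keeps curving in the same direction.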

[0025] 6. Three-dimensional coordinate conversion: The start point G0 of each white line group is converted from screen coordinates to real-space three-dimensional coordinates using the following equations.

Z (distance in the vehicle travel direction) = Dividend(η, sinψ, cosψ) / Divider(η, sinψ, cosψ)
X (lateral distance) = Conv2X(ξ, Z)
Conv2X(ξ, Z) = (ξ / F)[(Z cosψ) + (H sinψ)]
Divider(η, sinψ, cosψ) = sinψ − (η / F) cosψ
Dividend(η, sinψ, cosψ) = H[cosψ + (η / F) sinψ]

Here H is the camera mounting height (m), F the lens focal length (m), and ψ the depression angle of the camera optical axis (rad); ξ and η are the distances (m) from the center of the imaging surface (the origin), with the y axis positive upward and the x axis positive rightward.
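The conversion follows directly from the equations (the sample values of H, F, and ψ below are ours, chosen only to illustrate):

```python
import math

def image_to_road(xi, eta, H, F, psi):
    """Convert imaging-plane coordinates (xi, eta), in metres from the
    image centre (eta positive upward, xi positive rightward), to road
    coordinates: Z forward along the vehicle path, X lateral."""
    s, c = math.sin(psi), math.cos(psi)
    dividend = H * (c + (eta / F) * s)
    divider = s - (eta / F) * c
    Z = dividend / divider              # distance ahead of the camera
    X = (xi / F) * (Z * c + H * s)      # lateral distance
    return X, Z

# Sanity check: at the image centre (xi = eta = 0) the optical axis
# meets the road at Z = H / tan(psi).
H, F, psi = 1.2, 0.008, math.radians(6)
X, Z = image_to_road(0.0, 0.0, H, F, psi)
print(X)                                  # 0.0
print(abs(Z - H / math.tan(psi)) < 1e-9)  # True
```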

[0026] FIG. 4 shows images before and after the three-dimensional coordinate conversion: (a) the edge image captured by the camera, and (b) the image after conversion. 7. Extraction of the most reliable left and right white lines: One white line is extracted on each side from the grouped candidates. FIG. 5 shows a plurality of white line groups in real space, and FIG. 6 is a flowchart of the procedure in this section.

[0027] (1) Among the candidates whose start point G0, after three-dimensional coordinate conversion, lies within ±4 m laterally and 30 m ahead (see the thick frame in FIG. 5), the one whose converted length L1, L2, ... is greatest is extracted (step S66). (2) On the screen before the three-dimensional coordinate conversion, white line candidates with at least a fixed number of constituent pixels (for example, 5) are selected, and their Hough transform parameters are computed (step S67).

[0028] Here, the Hough transform maps a straight line x cosθ + y sinθ = r, expressed in xy orthogonal coordinates, to a point in the (r, θ) plane. The origin of the polar coordinates is the upper left of the screen for left white line candidates, as shown in FIG. 7(a), and the upper right for right candidates, as shown in FIG. 7(b); the Hough transform parameters are r and θ. In FIG. 7 the dotted line is the white line extracted from the previously captured frame and the solid line the white line extracted from the current frame, so two white lines with different acquisition times are drawn together.

[0029] (3) Based on the Hough transform parameters, the similarity to the white line candidate detected in the previous frame is computed, and the most similar candidate is extracted separately for each side as the current white line (step S62). Similarity is judged with the value 100|θ0 − θ1| + |r0 − r1|, where r0, θ0 are the Hough transform parameters of the previous white line candidate, r1, θ1 those of the current candidate, and 100 is a weighting coefficient (held constant). The smaller this judgment value, the higher the similarity.
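The frame-to-frame matching step, sketched (the candidate list and helper names are ours; the weight 100 and the loss threshold 30 come from the text):

```python
def judgment_value(prev, cand, weight=100):
    """100 * |theta0 - theta1| + |r0 - r1|; smaller means more similar."""
    r0, th0 = prev
    r1, th1 = cand
    return weight * abs(th0 - th1) + abs(r0 - r1)

def pick_current_line(prev, candidates, lost_above=30):
    """Pick the (r, theta) candidate most similar to the previous
    frame's line, or None when even the best exceeds the threshold,
    i.e. the white line has been lost."""
    best = min(candidates, key=lambda c: judgment_value(prev, c))
    return None if judgment_value(prev, best) > lost_above else best

print(pick_current_line((100, 0.5), [(105, 0.52), (160, 0.9)]))  # (105, 0.52)
print(pick_current_line((100, 0.5), [(160, 0.9)]))               # None
```

Working in (r, θ) rather than slope/intercept is what keeps this single expression well-behaved for both near-vertical and near-horizontal lines.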

[0030] If the judgment value exceeds 30, the white line position is considered lost (NO in step S63). In that case the process jumps to step S66 and searches again for the best candidate by method (1). Alternatively, when the white line position is lost in FIG. 6, the process may discard the Hough transform parameters without proceeding to step S66 and start from step S66 in the next cycle.

[0031] The Hough transform parameters of the left and right white lines thus extracted are updated (step S64), and the left and right white lines are determined (step S65). 8. Road shape and white line position estimation: The following procedure, described with reference to FIG. 8, converts the white line position information into data. (1) Each of the left and right white lines is regarded as a circular arc, and its center and radius R are computed.

[0032] (2) Estimation of the left and right white line positions: The coordinates of the two nearest points on each of the left and right white lines are extended linearly to estimate the white line positions S directly below the camera.
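Step (2) is a one-line linear extrapolation (a sketch; the (X, Z) point format in metres is our assumption, matching the road coordinates of section 6):

```python
def position_below_camera(p_near, p_far):
    """Extend the two nearest (X, Z) points of a detected white line
    linearly to Z = 0 to estimate the lateral position S directly
    below the camera."""
    (x1, z1), (x2, z2) = p_near, p_far
    return x1 - z1 * (x2 - x1) / (z2 - z1)  # x evaluated at z = 0

# Line drifting 0.5 m rightward over 10 m of travel:
print(position_below_camera((1.5, 5.0), (2.0, 15.0)))  # 1.25
```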

[0033]

EFFECT OF THE INVENTION: As described above, the road white line detection method of the present invention can reliably detect the position where a white line actually exists with as simple an algorithm and as inexpensive an equipment configuration as possible. It is therefore well suited to advanced vehicle travel control in the future.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a definition diagram of the image data.

FIG. 2 is an overall flowchart of the white line detection process.

FIG. 3 illustrates the conditions for integrating white line candidates: (a) the positional relationship between the lower, reference white line candidate and the candidate under inspection; (b) and (c) the angular relationship between the two straight lines.

FIG. 4 shows images before and after the three-dimensional coordinate conversion: (a) the edge image captured by the camera; (b) the image after conversion.

FIG. 5 shows the frame used to extract initial white line candidates after three-dimensional coordinate conversion.

FIG. 6 is a flowchart of the procedure for extracting the left and right white lines one by one.

FIG. 7 explains the Hough transform parameters r and θ: (a) a left white line candidate; (b) a right white line candidate.

FIG. 8 shows the procedure for converting white line position information into data.

EXPLANATION OF SYMBOLS
S: left and right white line positions directly below the camera
R: radius of the left and right white lines
r, θ: Hough transform parameters
L1, L2: lengths of white line candidates after three-dimensional coordinate conversion

Claims (5)

CLAIMS
1. A road white line detection method comprising: computing a luminance spatial differential value for the pixels of an image captured by an on-vehicle camera; extracting white line edges based on the positions where that value shows an extremum; collecting detected edges with similar luminance values into white line candidates; grouping the collected white line candidates based on their positional relationships; and detecting one white line separately on each of the left and right sides.
2. The road white line detection method according to claim 1, wherein, when the extrema are sought, an edge extraction range is determined using the mean and the standard deviation of the luminance spatial differential values.
3. The road white line detection method according to claim 1, wherein the lateral offset direction and the joining angle between white line candidates are used as the criteria for grouping the white line candidates.
4. The road white line detection method according to claim 1, wherein the procedure for detecting one white line separately on each side comprises computing, based on Hough transform parameters, the positional similarity of integrated groups between temporally consecutive images and selecting the white line group with the highest similarity.
5. The road white line detection method according to claim 1, wherein an overlap region is set at the center of the range on the image in which lines serving as left and right white line candidates are sought.
JP2000371645A 2000-12-06 2000-12-06 Road white line detection method Expired - Fee Related JP3589293B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2000371645A JP3589293B2 (en) 2000-12-06 2000-12-06 Road white line detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2000371645A JP3589293B2 (en) 2000-12-06 2000-12-06 Road white line detection method

Publications (2)

Publication Number Publication Date
JP2002175534A true JP2002175534A (en) 2002-06-21
JP3589293B2 JP3589293B2 (en) 2004-11-17

Family

ID=18841337

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2000371645A Expired - Fee Related JP3589293B2 (en) 2000-12-06 2000-12-06 Road white line detection method

Country Status (1)

Country Link
JP (1) JP3589293B2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5304804B2 (en) 2011-01-12 2013-10-02 株式会社デンソー Boundary detection device and boundary detection program

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9643605B2 (en) 2002-05-03 2017-05-09 Magna Electronics Inc. Vision system for vehicle
US11203340B2 (en) 2002-05-03 2021-12-21 Magna Electronics Inc. Vehicular vision system using side-viewing camera
US10683008B2 (en) 2002-05-03 2020-06-16 Magna Electronics Inc. Vehicular driving assist system using forward-viewing camera
US10351135B2 (en) 2002-05-03 2019-07-16 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US10118618B2 (en) 2002-05-03 2018-11-06 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9834216B2 (en) 2002-05-03 2017-12-05 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9555803B2 (en) 2002-05-03 2017-01-31 Magna Electronics Inc. Driver assistance system for vehicle
US7298918B2 (en) 2003-03-24 2007-11-20 Minolta Co., Ltd. Image processing apparatus capable of highly precise edge extraction
US10187615B1 (en) 2004-04-15 2019-01-22 Magna Electronics Inc. Vehicular control system
US11503253B2 (en) 2004-04-15 2022-11-15 Magna Electronics Inc. Vehicular control system with traffic lane detection
US9609289B2 (en) 2004-04-15 2017-03-28 Magna Electronics Inc. Vision system for vehicle
US11847836B2 (en) 2004-04-15 2023-12-19 Magna Electronics Inc. Vehicular control system with road curvature determination
US9736435B2 (en) 2004-04-15 2017-08-15 Magna Electronics Inc. Vision system for vehicle
US9428192B2 (en) 2004-04-15 2016-08-30 Magna Electronics Inc. Vision system for vehicle
US9948904B2 (en) 2004-04-15 2018-04-17 Magna Electronics Inc. Vision system for vehicle
US10462426B2 (en) 2004-04-15 2019-10-29 Magna Electronics Inc. Vehicular control system
US10015452B1 (en) 2004-04-15 2018-07-03 Magna Electronics Inc. Vehicular control system
US10306190B1 (en) 2004-04-15 2019-05-28 Magna Electronics Inc. Vehicular control system
US10110860B1 (en) 2004-04-15 2018-10-23 Magna Electronics Inc. Vehicular control system
US9191634B2 (en) 2004-04-15 2015-11-17 Magna Electronics Inc. Vision system for vehicle
US10735695B2 (en) 2004-04-15 2020-08-04 Magna Electronics Inc. Vehicular control system with traffic lane detection
JP2008021161A (en) * 2006-07-13 2008-01-31 Mitsubishi Fuso Truck & Bus Corp Driving state determining device
US10071676B2 (en) 2006-08-11 2018-09-11 Magna Electronics Inc. Vision system for vehicle
US11396257B2 (en) 2006-08-11 2022-07-26 Magna Electronics Inc. Vehicular forward viewing image capture system
US11623559B2 (en) 2006-08-11 2023-04-11 Magna Electronics Inc. Vehicular forward viewing image capture system
US11951900B2 (en) 2006-08-11 2024-04-09 Magna Electronics Inc. Vehicular forward viewing image capture system
US10787116B2 (en) 2006-08-11 2020-09-29 Magna Electronics Inc. Adaptive forward lighting system for vehicle comprising a control that adjusts the headlamp beam in response to processing of image data captured by a camera
US11148583B2 (en) 2006-08-11 2021-10-19 Magna Electronics Inc. Vehicular forward viewing image capture system
US8655081B2 (en) 2008-01-11 2014-02-18 Nec Corporation Lane recognition system, lane recognition method, and lane recognition program
WO2009088035A1 (en) 2008-01-11 2009-07-16 Nec Corporation Lane recognition system, lane recognition method, and lane recognition program
EP2765532A2 (en) 2013-02-08 2014-08-13 MegaChips Corporation Object detection apparatus, program, and integrated circuit
CN106467105A (en) * 2015-08-10 2017-03-01 富士重工业株式会社 Lane detection device
US10000210B2 (en) 2015-08-10 2018-06-19 Subaru Corporateon Lane recognition apparatus
JP2017037472A (en) * 2015-08-10 2017-02-16 富士重工業株式会社 Lane recognition device
JP7453008B2 (en) 2020-02-06 2024-03-19 フォルシアクラリオン・エレクトロニクス株式会社 Image processing device and image processing method
WO2022118422A1 (en) * 2020-12-03 2022-06-09 日本電気株式会社 Line position estimation device, method, and program
JPWO2022123641A1 (en) * 2020-12-08 2022-06-16
JP7209916B2 (en) 2020-12-08 2023-01-20 三菱電機株式会社 Rail detection device and rail detection method
WO2022123641A1 (en) * 2020-12-08 2022-06-16 三菱電機株式会社 Rail detection device and rail detection method
JP2023011400A (en) * 2021-07-12 2023-01-24 株式会社デンソーテン Lane detection apparatus, lane detection method, and lane detection program
JP7149385B1 (en) 2021-07-12 2022-10-06 株式会社デンソーテン Lane detection device, lane detection method, and lane detection program

Also Published As

Publication number Publication date
JP3589293B2 (en) 2004-11-17

Similar Documents

Publication Publication Date Title
US11958197B2 (en) Visual navigation inspection and obstacle avoidance method for line inspection robot
KR101569919B1 (en) Apparatus and method for estimating the location of the vehicle
US8611585B2 (en) Clear path detection using patch approach
JP4157620B2 (en) Moving object detection apparatus and method
CN110866903B (en) Ping-pong ball identification method based on Hough circle transformation technology
CN110287779A (en) Detection method, device and the equipment of lane line
JP2003228711A (en) Lane mark recognition method
CN109784344A (en) A kind of non-targeted filtering method of image for ground level mark identification
CN104834889A (en) Marking line detection system and marking line detection method
KR20110047797A (en) Apparatus and Method for Building and Updating a Map for Mobile Robot Localization
JP2008158958A (en) Road surface determination method and road surface determination device
JP2007179386A (en) Method and apparatus for recognizing white line
CN109815831B (en) Vehicle orientation obtaining method and related device
JP3589293B2 (en) Road white line detection method
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN110197494A (en) A kind of pantograph contact point real time detection algorithm based on monocular infrared image
CN111832388B (en) Method and system for detecting and identifying traffic sign in vehicle running
JP3656056B2 (en) Interrupting vehicle detection device and method
CN103198491A (en) Indoor visual positioning method
CN113221739B (en) Monocular vision-based vehicle distance measuring method
Takahashi et al. A robust lane detection using real-time voting processor
JP3629935B2 (en) Speed measurement method for moving body and speed measurement device using the method
CN116665097A (en) Self-adaptive target tracking method combining context awareness
JP2002150302A (en) Road surface recognition device
JPH05151341A (en) Running route detecting device for vehicle

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20040412

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20040511

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20040629

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20040728

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20040810

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080827

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090827

Year of fee payment: 5

LAPS Cancellation because of no payment of annual fees