JP3862015B2 - Automotive radar equipment - Google Patents

Automotive radar equipment

Info

Publication number
JP3862015B2
Authority
JP
Japan
Prior art keywords
image
vehicle
line segment
area
direction vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2002311680A
Other languages
Japanese (ja)
Other versions
JP2004144671A (en)
Inventor
Koji Horibe (堀部 剛治)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Corp
Priority to JP2002311680A
Priority to US10/681,840 (US6831591B2)
Publication of JP2004144671A
Application granted
Publication of JP3862015B2
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • G01S 7/4026: Means for monitoring or calibrating of parts of a radar system; antenna boresight
    • G01S 7/403: Antenna boresight in azimuth, i.e. in the horizontal plane
    • G01S 7/4034: Antenna boresight in elevation, i.e. in the vertical plane
    • G01S 7/4091: Monitoring or calibrating by simulation of echoes using externally generated reference signals, e.g. via remote reflector or transponder, during normal radar operation
    • G01S 7/4972: Alignment of sensor (lidar monitoring or calibrating)
    • G01S 13/867: Combination of radar systems with cameras
    • G01S 13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S 2013/93271: Sensor installation details in the front of the vehicles
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 3/7864: T.V. type tracking systems (direction-finders using electromagnetic waves other than radio waves)

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A radar device mounted on a vehicle includes a camera for obtaining images including the road surface in front of or behind the vehicle on which it is mounted, a sensor for obtaining at least positional information on a target object of detection, such as another vehicle in front or behind, and a control unit for correcting the direction of the center axis of the sensor based on the image obtained by the camera. The control unit detects a line segment along the lane in which the vehicle is traveling, obtains a vector indicating the direction of that line segment in a road-fixed coordinate system, and corrects the horizontal and vertical directions of the center axis of the sensor so that it coincides with the direction of the detected vector.

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a radar device mounted on a vehicle.
[0002]
[Prior art]
In recent years, automobiles are sometimes equipped with functions for monitoring a preceding vehicle or following it, in order to improve ease and safety of driving. To realize such functions, a radar device having a sensor such as a laser radar or millimeter-wave radar is mounted on the vehicle to acquire at least the position information of a detection target (a preceding vehicle or an obstacle such as a guardrail) existing in front of or behind the host vehicle.
In such an on-vehicle radar device, it is preferable to appropriately correct the central axis of the scanning range or of the detection range (hereinafter sometimes called the optical axis) so that the preceding vehicle is not missed even when driving on a curve. Patent Document 1 discloses a conventional technique for this purpose.
That technique addresses the risk of missing a preceding vehicle that is driving on a curve or is not traveling in the center of its lane: it corrects the optical axis, based on the curvature of the curve and the lateral positions of the host vehicle and the preceding vehicle, so that the preceding vehicle is not missed.
[0003]
[Patent Document 1]
JP-A-9-218265
[0004]
[Problems to be solved by the invention]
However, the above prior art assumes that the curve curvature is substantially constant, as on a flat road without gradient or on a highway. Consequently, while a preceding vehicle is rarely missed when the road is flat and the curvature is substantially constant, the preceding vehicle can still be missed, even after the optical axis correction of the prior art, where there are gradients as in urban areas or where the curvature changes finely. The reason is twofold. First, the prior art determines the direction of the optical axis to be corrected from the curve curvature and the lateral positions of the host and preceding vehicles, and therefore cannot correct the optical axis in the vertical direction (it cannot determine the ideal vertical orientation); the vertical position of the optical axis remains fixed, so on a gradient the preceding vehicle may leave the detection area vertically and become undetectable. Second, the prior art computes the optimum optical axis on the assumption of constant curvature, so on an S-curve, for example, whose curvature changes finely, a large lateral (horizontal) error arises (the difference between the computed optical-axis direction and the actually optimal one), and the preceding vehicle easily leaves the corrected detection area laterally.
Therefore, an object of the present invention is to provide an on-vehicle radar device that corrects the central axis of its sensor to the optimal direction even on roads with gradients or S-curves, and thus rarely misses a preceding vehicle.
[0005]
[Means for Solving the Problems]
An in-vehicle radar device according to the present invention is an in-vehicle radar device having a sensor for acquiring at least position information of a detection target existing in front of or behind the host vehicle, comprising imaging means for acquiring an image including the road surface in front of or behind the host vehicle, and control means for correcting the orientation of the central axis of the sensor based on the image obtained by the imaging means.
The control means includes a line segment detection unit that detects, from the image, a line segment along the lane in which the host vehicle is traveling; a direction vector detection unit that obtains the direction vector, in real coordinates, of the line segment obtained by the line segment detection unit; and a correction control unit that executes control to correct the orientation of the central axis vertically and horizontally so that it coincides with the direction vector obtained by the direction vector detection unit.
[0006]
Here, “detection target” basically means a preceding or following vehicle (including motorcycles and other vehicles that are not four-wheeled), but obstacles other than vehicles (such as guardrails) may also be included.
The “sensor” is a device that irradiates a detection area with, for example, electromagnetic waves or sound waves and detects the target from the reflected waves (specifically, a laser radar, a millimeter-wave radar, or the like). The sensor need not be a scanning type; it may be non-scanning, provided it has directivity and a specific detection area.
The “central axis of the sensor” is the central axis of the detection area; for a scanning sensor it means the central axis of the scanning range.
A “line segment along the lane in which the host vehicle is traveling” is a line (a so-called edge) forming part of the image of a mark on the road surface on one or both sides of that lane (a white line, yellow line, double line, broken line, or the like), of a guardrail, a median strip, a protective wall, or a boundary with a sidewalk, or an optical flow obtained from the change of such images over time; it runs in the tangential direction of the lane in which the host vehicle is traveling.
[0007]
The detection of the line segment (its extraction from the image) can be achieved with considerable reliability by so-called edge extraction when the contrast is clear, as with a white line. However, in order to exclude image components such as dirt on the road surface or buildings around the road and so detect the line segment reliably, or in order to detect low-contrast features (for example, the edge of a median strip) with reasonable reliability after lowering the detection threshold, it is preferable to restrict in advance the position or region in the image in which the line segment is sought, or the angular range of the line segment, to plausible conditions. For example, as shown in FIG. 5(c), the detection region and angle may be limited to an angular range of 30° to 60° counterclockwise from the lower side of the frame about an origin at the lower left of the image frame, and of 120° to 150° counterclockwise from the extension of the lower side about an origin at the lower right. It is easy and natural to set the imaging direction and angle of view of the imaging means (camera) so that the positions and directions of the edges on both sides of the lane in which the vehicle is traveling (the white lines, median strip, and the like that bound the lane) fall within such conditions; by then excluding from the detection range whatever falls outside these conditions, only the intended line segments can be detected more reliably and easily.
[0008]
According to the on-vehicle radar device of the present invention, the central axis of the sensor is corrected toward the real-coordinate direction of a line segment along the lane in which the host vehicle is traveling, found in an image of the scene in front of or behind the vehicle. Detection targets (especially preceding or following vehicles) are therefore rarely missed even on roads of non-constant curvature or with gradients. For example, if a line segment such as a white line is detected at a predetermined distance from the vehicle (a predetermined vertical position in the image) and the central axis of the sensor is corrected toward the real-coordinate direction of that segment, the central axis can be corrected vertically and horizontally to follow the three-dimensional direction of the road ahead of or behind the vehicle, independently of the curvature or gradient of the road on which the vehicle is traveling at that moment. In particular, because the central-axis direction moves up and down with the gradient of the road ahead or behind, the present invention enables vertical correction, which was not performed conventionally, so a preceding vehicle or the like is hard to miss even on a road with gradients.
To cope well with road conditions involving S-curves and ups and downs, it is preferable, as in the aspects described later, to detect a plurality of such line segments and to determine from their directions the optimal direction vector, namely the one giving the best view over the whole road ahead of or behind the host vehicle.
[0009]
In a preferred aspect of the present invention, the line segment detection unit has a function of obtaining, as the line segments, one or more pairs lying on both sides of the lane in which the host vehicle is traveling, and the direction vector detection unit obtains a vanishing point on the image as the intersection of the extensions of a pair of line segments and derives the direction vector from that vanishing point.
With this aspect, even a simple configuration with a single camera as the imaging means can obtain the direction vector by simple processing, as shown in the embodiments described later.
[0010]
When the direction vector is obtained from a vanishing point as described above, the following further aspects are preferable.
In a first aspect, the device includes an area dividing unit that divides the image obtained by the imaging means into a plurality of small regions; the line segment detection unit performs line segment detection in each small region, thereby detecting a plurality of line segment pairs; and the direction vector detection unit obtains a plurality of vanishing points on the image from these pairs and selects, among them, the vanishing point that maximizes the area of the common region between the triangular region on the image whose vertices are that vanishing point and two points on the nearest pair of line segments, and the image region of the lane in which the host vehicle is traveling (hereinafter sometimes called the road region); the direction vector is then derived from the selected vanishing point. The road region can be obtained, for example, as the region enclosed by the plurality of line segments.
[0011]
In a second aspect, the line segment detection unit detects the pair of line segments in the lowest strip region along the lower side of the image, and the direction vector detection unit obtains a vanishing point on the image from that pair, discretizes the vanishing point over a rectangular region of prescribed size centered on it to determine a plurality of coordinates on the image, selects among these coordinates the one that maximizes the area of the common region between the triangular region on the image whose vertices are that coordinate and two points on the pair of line segments in the lowest strip region, and the image region of the lane in which the host vehicle is traveling, and derives the direction vector from the selected coordinate.
[0012]
With the first or second aspect, the central axis of the sensor is corrected toward the point, among the plurality of vanishing points (or the plurality of coordinates obtained by discretizing a single vanishing point), that gives the best view over the whole road, so detection targets (especially preceding or following vehicles) are even harder to miss on roads whose curvature changes in a complicated way, such as S-curves, or on sharply undulating roads.
In addition, the second aspect computes only one vanishing point, which has the advantage of a reduced computational load.
[0013]
In the first aspect, it is preferable that the direction vector detection unit, based on the position information of a detection target (at least a preceding or following vehicle) obtained by the sensor, identifies the small regions in which the target overlaps, or may overlap, the line segment on the image, and excludes those small regions from the vanishing point computation. This prevents a situation in which, for example, a preceding vehicle at the edge of the lane (including a lane-to-lane boundary) hides the line segment at that position so that it cannot be detected properly and correction becomes impossible or degraded.
[0014]
The correction of the present invention may be performed constantly (for example, periodically) while the host vehicle is running or while the ignition switch is on. Alternatively, for processing efficiency, it may be performed only when needed: the control processing means starts a new control for correcting the orientation of the central axis when one or more of the following is detected: the horizontal position of the detection target has changed, the speed of the host vehicle has changed, the host vehicle has tilted relative to the horizontal, or the host vehicle has changed lanes.
[0015]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
(First embodiment)
FIG. 1 illustrates the configuration of the on-vehicle radar device of this example: FIG. 1(a) is a block diagram of the overall configuration, and FIG. 1(b) is a functional block diagram of the image processing unit.
[0016]
First, the overall configuration will be described.
As shown in FIG. 1A, this apparatus includes a camera 1 (imaging means), a sensor 2, an image processing unit 3, and a controller 4.
The camera 1 is an imaging means composed of a known device such as a CCD or CMOS sensor; in this case, only one camera is installed. The camera 1 is mounted facing obliquely downward toward the front (or the rear) of the host vehicle so that images including the road surface ahead of (or behind) the vehicle are obtained (see FIG. 6).
The sensor 2 is a laser radar, for example.
The image processing unit 3 is a circuit including a microcomputer; it determines the optimum direction of the optical axis of the sensor 2 from the image obtained by the camera 1 and outputs it.
The controller 4, also a circuit including a microcomputer, controls an actuator (not shown) to move the optical axis of the sensor 2 to the optimum direction output by the image processing unit 3. The image processing unit 3 and the controller 4 may also be configured as a single unit.
[0017]
Next, details of the image processing unit 3 will be described. The image processing unit 3 functionally includes the elements (distortion correction unit 11, band region division unit 12, white line detection unit 13, vanishing point detection unit 14, and scanning region calculation unit 15) illustrated in FIG.
Here, the distortion correction unit 11 corresponds to step S1 of FIG. 2 described later; it corrects the image data obtained by the camera 1 so as to eliminate the influence of the lens distortion of the camera 1.
The band area dividing unit 12 corresponds to step S2 of FIG. 2; to cope accurately with road conditions such as S-curves whose curvature is not constant, it divides the image into a plurality of small regions containing line segments (in this case, horizontally long strip regions).
The white line detection unit 13 corresponds to steps S3 to S5 of FIG. 2; it detects the white lines on both sides of the lane on the road on which the vehicle is traveling (possibly including yellow or broken lines) as straight line segments in each region.
The vanishing point detection unit 14 corresponds to step S6 of FIG. 2; it obtains a vanishing point on the image from the line segments detected in each region.
The scanning area calculation unit 15 corresponds to steps S8 to S13 of FIG. 2; it obtains the optimum optical-axis direction from the vanishing points calculated for the regions.
[0018]
Next, the operation of this apparatus (mainly the processing contents of the image processing unit 3) will be described with reference to FIG. 2.
The image processing unit 3 repeats the following series of processes (steps S1 to S14), for example, periodically.
That is, first, in step S1, the image data from the camera 1 are read and distortion correction is performed: for example, the pixel coordinates are converted from (x, y) to (x′, y′) by the relational expression shown in Equation 1, in which κ is a negative constant.
[0019]
[Equation 1]
(equation shown as an image in the original; not reproduced in this text)
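Equation 1 survives only as an image in this text. Since a single negative constant κ maps (x, y) to (x′, y′), a plausible reconstruction is the one-parameter radial (barrel) model sketched below in Python; the exact form of the patent's formula, the centering of the coordinates, and the value of κ are assumptions here.

```python
def correct_distortion(x, y, kappa=-2.0e-7):
    """Map distorted pixel coordinates (x, y) to corrected (x', y').

    Assumes the one-parameter radial model x' = x * (1 + kappa * r^2),
    y' = y * (1 + kappa * r^2), with kappa < 0 and (x, y) measured from
    the optical center; Equation 1 itself is not reproduced in the text.
    """
    r2 = x * x + y * y
    scale = 1.0 + kappa * r2
    return x * scale, y * scale
```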
[0020]
Next, in step S2, image data are set for dividing the image into N horizontally long strip regions so as to include the road, that is, as shown in FIG. 3(a), so as to include at least the lower region of the image frame where the road image exists. In this case, the width of each strip region equals the width of the whole image, and its height is the height of the lower region of the image divided into N equal parts. In this example N = 7, i.e., the image is divided into seven strip regions 1 to 7. The strip regions here correspond to the small regions of the present invention. A minimal sketch of this division follows.
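The sketch below assumes that the lower region of the frame is its bottom half, a detail the text leaves to FIG. 3(a).

```python
def strip_regions(height, width, N=7):
    """Divide the lower part of an image into N equal horizontal strips
    (step S2). Strip 1 is the lowest. Treating the 'lower region' as the
    bottom half of the frame is an assumption for illustration.
    """
    y0 = height // 2                      # top of the road-bearing region
    h = (height - y0) // N                # height of one strip
    return [(height - (i + 1) * h, height - i * h, 0, width)
            for i in range(N)]            # (y_top, y_bottom, x_left, x_right)
```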
[0021]
Next, the processing of steps S3 to S6 is performed for each strip region.
First, in step S3, edge extraction is performed using an edge extraction filter, for example the rightward Sobel filter shown in Equation 2. In Equation 2, f(x, y) denotes the pixel intensity at coordinates (x, y) of the image, and g(x, y) the filtered value (the difference between the intensities of adjacent pixels).
[0022]
[Equation 2]
(equation shown as an image in the original; not reproduced in this text)
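Equation 2 is likewise shown only as an image; the standard rightward Sobel kernel, which responds to vertical edges such as white-line borders, is the natural reading of a right-facing Sobel filter, so the sketch below assumes it. The double loop is deliberately plain; a library convolution would normally be used.

```python
import numpy as np

# Standard rightward (vertical-edge) Sobel kernel; assumed to be the
# filter shown as an image in Equation 2.
SOBEL_RIGHT = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

def edge_extract(f):
    """Compute g(x, y), the Sobel response of image f(x, y) (step S3)."""
    h, w = f.shape
    g = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g[y, x] = np.sum(f[y - 1:y + 2, x - 1:x + 2] * SOBEL_RIGHT)
    return g
```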
[0023]
Next, in step S4, white line candidates are extracted. The edge image g(x, y) computed by Equation 2 is binarized with a predetermined threshold (for example, 128 in the case of 256 gray levels), and the coordinates of the inner edges of the white lines on both sides of the road (white line candidate points) are obtained. Specifically, scanning the edge image from left to right, if a coordinate exceeds the threshold and is followed, within a prescribed width B, by a coordinate that falls below it, the first coordinate (where the threshold was exceeded) is taken as the left edge of the left white line and the later coordinate (where the value fell below the threshold) as its right edge (the inner side of the left white line). The left edge of the right white line (its inner side) is obtained by the same processing, and the coordinates of these inner sides are recorded as white line candidate points. Normally a plurality of candidate points are obtained, one per pixel row of the strip region. The restriction by the prescribed width B excludes features much wider than an actual white line, so that only white line edges are detected with high reliability.
In this extraction, it is preferable to restrict in advance the position, region, or angular range in which white lines are sought, so that image components such as dirt on the road surface or the outlines of buildings around the road are not mistakenly detected as white lines; for example, the limiting conditions shown in FIG. 5(c), described above, may be applied.
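The left-to-right scan just described can be sketched as follows; the threshold of 128 comes from the text, while the function names and the value of the prescribed width B are illustrative assumptions.

```python
def white_line_candidates(row, threshold=128, B=30):
    """Find candidate white-line inner edges in one pixel row of the
    edge image (sketch of step S4). A stripe is a rise above `threshold`
    followed within B pixels by a fall below it. Returns the inner-edge
    x-coordinates of the leftmost and rightmost stripes, or None.
    """
    stripes, x = [], 0
    while x < len(row):
        if row[x] > threshold:                       # candidate left edge
            fall = next((x2 for x2 in range(x + 1, min(x + B, len(row)))
                         if row[x2] < threshold), None)
            if fall is not None:                     # fell back within width B
                stripes.append((x, fall))
                x = fall
            else:                                    # wider than B: not a line,
                while x < len(row) and row[x] > threshold:
                    x += 1                           # skip the whole bright run
        x += 1
    if len(stripes) < 2:
        return None
    return stripes[0][1], stripes[-1][0]   # inner sides of left and right lines
```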
[0024]
Next, in step S5, a Hough transform is applied to the white line candidate points, and the two best-fitting straight lines are obtained as the inner tangents of the two white lines in each small region (the line segments of the present invention, or their extensions).
The Hough transform maps each candidate point (x, y) on the image (the x-y plane) to the Hough curve on the ρ-θ plane given by Equation 3, and extracts straight lines on the image from the coordinates of intersections of these curves. Here, the two best-fitting lines are identified by finding the two intersections (ρa, θa) and (ρb, θb) through which the most Hough curves obtained from the candidate points pass; once the values of ρ and θ are fixed, Equation 3 determines a straight line on the image. FIG. 5(b) shows the positional relationship between (x, y) and (ρ, θ).
[0025]
[Equation 3]
(equation shown as an image in the original; not reproduced in this text)
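Equation 3 is shown only as an image, but the ρ-θ Hough mapping is standard: a point (x, y) votes for all lines ρ = x cos θ + y sin θ passing through it. A minimal accumulator over the candidate points, with illustrative bin counts:

```python
import numpy as np

def hough_peaks(points, n_theta=180, rho_max=400.0, n_rho=400):
    """Accumulate Hough votes for candidate points and return the two
    strongest (rho, theta) cells, i.e. the two best-fitting lines of
    step S5. Bin counts and ranges are illustrative.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cols = np.arange(n_theta)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)        # Equation 3
        idx = np.round((rho + rho_max) * (n_rho - 1) / (2.0 * rho_max)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[idx[ok], cols[ok]] += 1
    order = np.argsort(acc, axis=None)[::-1]
    peaks = []
    for flat in order[:2]:               # two strongest cells; real code would
        i, j = divmod(flat, n_theta)     # also suppress neighbours of a peak
        peaks.append((i * 2.0 * rho_max / (n_rho - 1) - rho_max, thetas[j]))
    return peaks
```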
[0026]
In step S6, the coordinates of the vanishing point on the image are calculated from the data of the two straight lines obtained in step S5. Since the vanishing point is the intersection of the two lines, it is obtained from the data (ρa, θa) and (ρb, θb) by Equation 4.
[0027]
[Equation 4]
(equation shown as an image in the original; not reproduced in this text)
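Equation 4 is also an image in the original; since the vanishing point is the intersection of the lines ρa = X cos θa + Y sin θa and ρb = X cos θb + Y sin θb, it can be recovered by solving that 2 × 2 linear system:

```python
import numpy as np

def vanishing_point(rho_a, theta_a, rho_b, theta_b):
    """Intersection (X, Y) of two lines given in (rho, theta) form,
    i.e. the vanishing point of step S6 (a reconstruction of Equation 4).
    Raises LinAlgError if the lines are parallel.
    """
    A = np.array([[np.cos(theta_a), np.sin(theta_a)],
                  [np.cos(theta_b), np.sin(theta_b)]])
    b = np.array([rho_a, rho_b])
    return np.linalg.solve(A, b)
```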
[0028]
Next, in step S7, it is determined whether steps S3 to S6 have been executed fewer than N times (that is, whether they have not yet been completed for all strip regions). If so, the process returns to step S3 and repeats for the next strip region; once all strip regions have been processed, it proceeds to step S8.
FIG. 3(b) shows an example of vanishing points obtained by the above processing. The thick lines are the white lines of the road, the broken lines are the extensions of the inner sides (line segments) of the white lines obtained for each small region, and the numbers (1) to (6) mark the vanishing points (in this case, the seventh vanishing point (7) could not be obtained).
[0029]
Next, in step S8, road region detection is performed: based on the inner white line coordinates obtained by the processing of steps S3 and S4, the road region enclosed by the two white lines (for example, the hatched region in FIG. 4(a)) is detected.
Next, the processing of steps S9 and S10 is performed for each strip region.
First, in step S9, triangular regions with a common base are detected, each having one of the (at most N) vanishing points as its upper vertex and points on the near white lines on both sides as its lower vertices. For strip region 1, for example, as hatched in FIG. 4(b), the triangle has the vanishing point obtained for strip region 1 in step S6 as its upper vertex and the lower ends of the two straight lines obtained for strip region 1 in step S5 (the inner edges of the two white lines) as its lower vertices (the endpoints of the base). For strip region 2 onward, triangles are detected in the same way, with each vanishing point as the upper vertex and the lower ends of the two lines obtained for strip region 1 as the lower vertices. In FIG. 4(b) the inner lower ends of the two white lines lie at the lower corners of the image frame, so the base of the triangle coincides with the lower side of the frame, but this is not always so: if the lower ends of the white lines lie inside the width of the frame, the base is a segment shorter than the lower side of the frame. Moreover, the lower vertices need not be the lower ends of the inner edges of the white lines in strip region 1; they may be, for example, the upper ends of the white lines in strip region 1 (points on the boundary between strip regions 1 and 2), or points on the outer edges of the white lines in strip region 1. If no white line exists in strip region 1, points on the white lines in the next strip region 2, for example, may serve as the lower vertices.
[0030]
Next, in step S10, the AND area (the area of the overlapping portion) of the road region obtained in step S8 and the triangular region obtained in step S9 is computed.
Next, in step S11, as in step S7, it is determined whether steps S9 and S10 have been completed for all strip regions; if so, the process proceeds to step S12, and otherwise it returns to step S9. In step S12, the strip region Nmax whose AND area from step S10 is largest is found. The following description assumes that strip region 6 was obtained as Nmax.
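One way to realize the overlap test of steps S9 to S12 is to rasterize the road region and each candidate triangle as boolean pixel masks and count their common pixels. The sketch below assumes such masks and uses a sign test for triangle membership; construction of the road mask itself is left out.

```python
import numpy as np

def triangle_mask(shape, apex, base_l, base_r):
    """Boolean mask of the triangle with a vanishing point as apex and
    the near white-line points as base vertices (step S9); points are
    (x, y) tuples."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    def side(p, q):   # sign of the cross product (q - p) x (pixel - p)
        return (q[0] - p[0]) * (yy - p[1]) - (q[1] - p[1]) * (xx - p[0])
    s1, s2, s3 = side(apex, base_l), side(base_l, base_r), side(base_r, apex)
    return (((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) |
            ((s1 <= 0) & (s2 <= 0) & (s3 <= 0)))

def best_vanishing_point(road_mask, vps, base_l, base_r):
    """Index of the vanishing point maximizing the AND area with the
    road region (steps S10 to S12)."""
    areas = [np.count_nonzero(road_mask &
                              triangle_mask(road_mask.shape, vp, base_l, base_r))
             for vp in vps]
    return int(np.argmax(areas))
```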
[0031]
In step S13, the vanishing point coordinates (X6, Y6) of strip region Nmax (strip region 6) are converted into the road coordinate system (xr, yr, zr), and the direction vector V6 of the vanishing point in road coordinates is obtained. Since the sensor 2 (a laser radar or the like) is usually attached to the radiator of the vehicle (above the bumper), its coordinate system differs slightly, strictly speaking, from the road coordinate system; nevertheless, when correcting the direction of the sensor 2, the two coordinate systems can be treated as identical and the direction corrected using road-coordinate data without any particular problem.
[0032]
With the signs and directions of the coordinate systems defined as shown in FIG. 6, the relations in Equations 5 and 6 hold between the road coordinate system (xr, yr, zr), the camera coordinate system (xc, yc, zc), and the image coordinate system (X, Y). Here, R is a 3 × 3 matrix and T a 3 × 1 matrix, both set in advance by camera calibration, and F in Equation 6 is the focal length of the camera 1.
[0033]
[Equation 5]
(equation shown as an image in the original; not reproduced in this text)
[0034]
[Equation 6]
(equation shown as an image in the original; not reproduced in this text)
[0035]
Therefore, the vanishing point coordinates (X6, Y6) can be converted into the road coordinate system (xr, yr, zr) as follows.
First, (X6, Y6) is substituted for (X, Y) in Equation 6 and converted into the camera coordinate system (xc, yc, zc), as shown in Equation 7, in which zc = k.
[0036]
[Equation 7]
(equation shown as an image in the original; not reproduced in this text)
[0037]
Next, substituting Equation 7 into Equation 5 converts the point into the road coordinate system (xr, yr, zr), as shown in Equation 8. The value of k in Equation 8 may be calculated by applying a suitable constraint; for example, k may be obtained by substituting the measurable range of the laser radar (for example, 150 m) for zr in Equation 8 and set to that value.
[0038]
[Equation 8]
(equation shown as an image in the original; not reproduced in this text)
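Equations 5 to 8 appear only as images, but the surrounding text fixes their structure: calibration matrices R and T relate the road and camera frames, Equation 6 is the pinhole projection with focal length F, Equation 7 back-projects with zc = k, and k is pinned by forcing zr to the radar range. The sketch below assumes the convention cam = R · road + T for Equation 5; the sign convention is not recoverable from the text and is an assumption.

```python
import numpy as np

def vanishing_point_to_road(X, Y, F, R, T, zr_target=150.0):
    """Direction vector V6 in road coordinates from the image coordinates
    (X, Y) of a vanishing point (step S13, Equations 6 to 8).

    Assumes Equation 5 uses cam = R @ road + T, so road = R^-1 @ (cam - T);
    pinning k via the radar range (150 m) follows the text, the rest is
    an assumption.
    """
    v = np.array([X / F, Y / F, 1.0])    # cam = k * v when zc = k (Equation 7)
    Rinv = np.linalg.inv(R)
    a = Rinv @ v                          # road = k * a - b      (Equation 8)
    b = Rinv @ np.asarray(T).reshape(3)
    k = (zr_target + b[2]) / a[2]         # constrain zr to the radar range
    road = k * a - b
    return road / np.linalg.norm(road)    # unit vector V6
```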
[0039]
The unit vector derived from the road-coordinate-system coordinates of the vanishing point obtained as above is the direction vector V6.
Finally, in step S14, the optimum position of the optical axis is obtained from the direction vector V6 and output to the controller 4. In response, the controller 4 moves the optical axis to the commanded optimum position, so that the orientation of the optical axis is corrected to coincide with the direction vector V6.
FIG. 5(a) shows the scanning range (scanning region) of the laser radar 2. The coordinate origin O is the center of scanning in the normal (uncorrected) state; (-Sx, Sx) and (-Sy, Sy) define the normal scanning range, and (-Sxmax, Sxmax) and (-Symax, Symax) the maximum scanning range. Thus the maximum correction by which the center (optical axis) of the scanning range of the laser radar 2 can be moved in the xl direction (laterally) is Sxmax - Sx, and in the yl direction (vertically) Symax - Sy.
[0040]
Therefore, the optical axis position (xL, yL) in the xl and yl directions, obtained from the components (xr, yr, zr) of the unit vector V6 by the relation shown in Equation 9, is the optimum position of the optical axis.
[0041]
[Equation 9]
(equation shown as an image in the original; not reproduced in this text)
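Equation 9 is again only an image. A plausible reading, from the description of FIG. 5(a), is that the lateral and vertical components of the unit vector are converted to scan-center angles and clipped to the maximum correction amounts Sxmax - Sx and Symax - Sy; the angular conversion and the numeric limits below are assumptions, not the patent's formula.

```python
import numpy as np

def optimal_axis(v6, Sx=0.15, Sx_max=0.25, Sy=0.08, Sy_max=0.15):
    """Optical-axis position (xL, yL) from the unit vector V6 = (xr, yr, zr),
    clipped to the maximum correction amounts (one reading of Equation 9;
    the conversion to angles and the limits, in radians, are illustrative).
    """
    xr, yr, zr = v6
    max_dx, max_dy = Sx_max - Sx, Sy_max - Sy
    xL = float(np.clip(np.arctan2(xr, zr), -max_dx, max_dx))   # lateral (xl)
    yL = float(np.clip(np.arctan2(yr, zr), -max_dy, max_dy))   # vertical (yl)
    return xL, yL
```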
[0042]
According to the radar device described above, the central axis of the sensor is corrected toward the real-coordinate direction of a line segment (here, the inner edge of a white line) along the lane in which the host vehicle is traveling, found in an image of the scene ahead of or behind the vehicle, so detection targets (especially preceding or following vehicles) are rarely missed even on roads of non-constant curvature or with gradients. In particular, in this embodiment the central axis of the sensor is corrected toward the vanishing point, among the plurality obtained by dividing the image into regions, that gives the best view over the whole road, so detection targets are even harder to miss on roads whose curvature changes in a complicated way, such as S-curves, or on sharply undulating roads.
Moreover, since the present invention determines the direction vector via a vanishing point, even a simple configuration with a single camera as the imaging means can obtain the direction vector by the simple processing described above.
[0043]
(Second embodiment)
Next, a second embodiment will be described. This embodiment differs in part of the processing performed by the image processing unit 3 and is otherwise the same as the first embodiment. FIG. 8 is a flowchart showing the processing of this example. Steps identical to those of the first embodiment (FIG. 2) are denoted by the same reference numerals, and redundant description is omitted (the same applies to the third embodiment described later).
In this case, a vanishing point is obtained only for the lowest belt-shaped region 1 (N = 1) (steps S3 to S6).
Then, in step S21 following step S8 (road area detection), as shown in FIG. 7A, the vanishing point for N = 1 is discretized within a rectangular region centered on that vanishing point, yielding a plurality (D) of coordinates (points) on the image. For example, in FIG. 7A, candidate points are generated at a prescribed interval in five columns horizontally and three rows vertically (here D = 15), centered on the vanishing point for N = 1.
[0044]
Next, in steps S22 to S26, as in steps S9 to S13 of FIG. 2, a triangular area on the image is formed for each of the D coordinates, with that coordinate as the upper vertex and points on the white lines on both sides ahead as the lower vertices. The coordinate that maximizes the area of the common region with the road area obtained in step S8 is then selected, and the direction vector is obtained from the selected coordinate.
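A minimal sketch of steps S21 through S26 follows, assuming OpenCV is available for polygon rasterization; the grid spacing, the 5 x 3 layout, and the helper names are illustrative.

import numpy as np
import cv2  # assumed available for polygon rasterization

def candidate_grid(vp, n_cols=5, n_rows=3, step=10):
    """Step S21: discretize the region around the N = 1 vanishing point
    into n_cols x n_rows candidate coordinates at a prescribed pixel
    interval (all values here are illustrative)."""
    vx, vy = vp
    return [(vx + (i - n_cols // 2) * step, vy + (j - n_rows // 2) * step)
            for j in range(n_rows) for i in range(n_cols)]

def best_candidate(cands, left_pt, right_pt, road_mask):
    """Steps S22-S26: for each candidate apex, rasterize the triangle whose
    base vertices are points on the left and right white lines, and keep
    the apex whose triangle shares the largest area with the road mask
    (a boolean image from step S8)."""
    best, best_area = None, -1
    for c in cands:
        tri = np.zeros(road_mask.shape, np.uint8)
        pts = np.array([c, left_pt, right_pt], dtype=np.int32)
        cv2.fillConvexPoly(tri, pts, 1)
        area = int((tri.astype(bool) & road_mask).sum())
        if area > best_area:
            best, best_area = c, area
    return best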
According to the second embodiment, in addition to the same effects as the first embodiment, the following effect is obtained: since only one vanishing point is calculated, the computational load is reduced.
[0045]
(Third embodiment)
Next, a third embodiment will be described. This embodiment is likewise characterized by part of the processing of the image processing unit 3. In this example, however, as indicated by the broken line in FIG. 1A, the configuration must allow information on the detection results of the laser radar 2 (at least the position information of other vehicles) to be input to the image processing unit 3, for example from the controller 4.
FIG. 9 is a flowchart showing the processing contents of this example.
In this case, step S31 is provided after step S2. In step S31, it is judged, based on the information on the detection results of the laser radar 2, whether another vehicle is present in the belt-shaped region for which a vanishing point is to be obtained and whether that vehicle may overlap the line segment. If another vehicle is present, steps S3 to S6 are skipped, the process proceeds to step S7, and no vanishing point is calculated for that belt-shaped region.
[0046]
The determination of whether another vehicle is present in a belt-shaped region is performed, for example, as follows. The position on the image at which the other vehicle exists is obtained from the detection results of the laser radar 2 (the position information of the other vehicle), a histogram is formed in the vicinity of that position, and the approximate width and height of the other vehicle on the image are determined, thereby specifying the image area occupied by the other vehicle. Then, for example, the overlap ratio between this area and each belt-shaped region may be calculated, and another vehicle judged to be present in any belt-shaped region whose overlap is at or above a predetermined ratio.
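A sketch of this exclusion test, assuming the vehicle's image area has already been reduced to an axis-aligned box; the 0.2 threshold is an illustrative value, not one the patent specifies.

def overlap_ratio(vehicle_box, belt_box):
    """Fraction of the vehicle's image area that falls inside a belt-shaped
    region; both boxes are (x0, y0, x1, y1) in pixels."""
    ax0, ay0, ax1, ay1 = vehicle_box
    bx0, by0, bx1, by1 = belt_box
    w = max(0, min(ax1, bx1) - max(ax0, bx0))
    h = max(0, min(ay1, by1) - max(ay0, by0))
    area = (ax1 - ax0) * (ay1 - ay0)
    return (w * h) / area if area > 0 else 0.0

def belts_to_exclude(vehicle_box, belts, thresh=0.2):
    """Step S31 sketch: indices of belt regions overlapping the
    radar-detected vehicle by at least 'thresh' are skipped when
    computing vanishing points."""
    return [i for i, b in enumerate(belts)
            if overlap_ratio(vehicle_box, b) >= thresh]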
[0047]
According to this embodiment, any belt-shaped region in which a preceding vehicle or the like is present is excluded from the processing for obtaining the direction vector (vanishing point); for example, when an image such as that of FIG. 7B is obtained, regions 4 and 5 are excluded. Consequently, when a preceding vehicle or the like is at the edge of the lane (including the boundary between lanes) as in FIG. 7B, the line segment at that position (here, the edge of the white line) being hidden by the preceding vehicle cannot cause detection to fail and thereby prevent the correction, or prevent a good correction, from being performed.
[0048]
The present invention is not limited to the embodiments described above; various aspects and modifications are possible.
For example, the image processing unit 3 in the above embodiments may be configured as shown in FIG. 1C, a high-level functional block diagram of the image processing unit 3. In this case, the image processing unit 3 includes a preprocessing unit 21, a region division unit 22, a line segment detection unit 23, a direction vector detection unit 24, and a calculation unit 25.
The preprocessing unit 21 corresponds to the distortion correction unit 11 in FIG. 1B. In addition to the lens distortion correction (geometric correction) described above, it may perform brightness correction such as gamma correction and contrast enhancement, and noise removal such as median filtering, as necessary.
[0049]
The region division unit 22 is not limited to dividing the image into horizontally long belt-shaped regions; it may divide the image into vertically long belt-shaped regions, or into a grid. When the image is divided in this way, the question is how to associate pairs of line segments (segments that lie on both sides of the lane and converge to the same vanishing point); it suffices to pair the line segments detected in regions at laterally symmetric positions. For example, when dividing into vertically long belt-shaped regions, the first vanishing point is found as the intersection of the extensions of the line segment detected in the leftmost belt-shaped region of the image and the line segment detected in the rightmost belt-shaped region, the second vanishing point as the intersection of the extensions of the segments detected in the second belt-shaped regions from the left and from the right, and so on, obtaining the vanishing points sequentially.
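For the symmetric-pair association just described, the vanishing point itself is an ordinary line-line intersection, sketched below; the segment-endpoint representation and the parallel-line guard are illustrative choices.

def vanishing_point(seg_left, seg_right):
    """Intersection of the extensions of a symmetric pair of segments,
    each given as ((x0, y0), (x1, y1)); returns None if the lines are
    (near-)parallel and no finite vanishing point exists."""
    (x1, y1), (x2, y2) = seg_left
    (x3, y3), (x4, y4) = seg_right
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2        # cross terms of the two lines
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / den
    py = (a * (y3 - y4) - (y1 - y2) * b) / den
    return (px, py)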
The division widths used when dividing the image need not be equal or constant. The number of divisions N may also be other than N = 7 as in the above embodiment, and may be variable.
[0050]
The line segment detection unit 23 corresponds to the white line detection unit 13 in FIG. 1B but is not limited to that form; as described above, besides white lines, images of guardrails and the like, or an optical flow obtained from the temporal change of such images, may be detected as the line segment of the present invention.
The direction vector detection unit 24 corresponds to the vanishing point detection unit 14 and part of the scanning area calculation unit 15 in FIG. 1B, but an aspect that specifies the direction vector of the line segment without obtaining a vanishing point is also possible. For example, a plurality of cameras may be installed, the three-dimensional positions of points on the line segment obtained by the principle of triangulation, and the direction of the line segment in real coordinates calculated from them. In such a mode the direction vector is determined from a single line segment (in other words, per line segment), so it is unnecessary to detect a pair of line segments as in the above embodiments.
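A sketch of this triangulation alternative, using the standard linear (DLT) method with two assumed 3 x 4 projection matrices; the segment direction in real coordinates is then the normalized difference of two triangulated points on it.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Standard linear (DLT) triangulation of one image point seen by two
    cameras with assumed 3x4 projection matrices P1 and P2; returns the
    3-D point in real coordinates."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize

def segment_direction(P1, P2, near_uv1, near_uv2, far_uv1, far_uv2):
    """Direction vector of a lane line segment from two triangulated
    points on it, normalized to a unit vector."""
    d = (triangulate(P1, P2, far_uv1, far_uv2)
         - triangulate(P1, P2, near_uv1, near_uv2))
    return d / np.linalg.norm(d)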
The calculation unit 25 obtains the optimum optical axis direction from the calculated direction vector and corresponds to the scanning region calculation unit 15 in FIG. 1B.
[0051]
In the embodiments, a plurality of direction vectors (vanishing points) are obtained from the plurality of line segments produced by dividing the image, and the direction vector that maximizes the overlap between the triangular area described above and the road area is selected. However, selecting the final direction vector from a plurality of line segments is not limited to this aspect. For example, line segments may be detected at a plurality of positions along the length of the road, such as a segment formed by a white line relatively far from the host vehicle (the upper region of the image) and a segment formed by a white line close to the host vehicle (the lower region of the image), and the central axis of the sensor corrected toward their average direction (including modes in which no vanishing point is obtained); a sketch follows. Although the average direction is not necessarily the best direction, the entire road can be surveyed and a preceding vehicle is less likely to be overlooked than when the optical axis is corrected on the assumption of constant curvature, as in the prior art.
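A minimal sketch of the averaging variant mentioned above; the input is assumed to be unit direction vectors already obtained per segment.

import numpy as np

def average_direction(vectors):
    """Average several unit direction vectors (e.g. from a far and a near
    white-line segment) and renormalize; the sensor's central axis would
    then be corrected toward this mean direction."""
    m = np.asarray(vectors, dtype=float).mean(axis=0)
    return m / np.linalg.norm(m)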
[0052]
In the embodiments, the image processing unit 3 and the controller 4 constitute the control means of the present invention, and the process of, for example, FIG. 2 is repeated as described above. As also noted, however, the optical axis correction may be performed only when necessary (for example, when a change in the horizontal position of the preceding vehicle is detected).
Further, in the third embodiment, any small area in which another vehicle exists is excluded from the processing for obtaining the direction vector (vanishing point), on the assumption that the other vehicle may overlap the line segment. However, even in a small area where another vehicle exists, if it can be determined that the vehicle does not overlap the line segment (for example, when it can be determined that the vehicle is at the horizontal center of the lower part of the image and does not lie on the line segment), that small area need not be excluded from the processing (that is, only small areas in which another vehicle overlaps the line segment are excluded).
[0053]
【The invention's effect】
According to the on-vehicle radar device of the present invention, the central axis of the sensor is corrected toward the real-coordinate direction of the line segment along the lane in which the host vehicle is traveling, detected in an image ahead of or behind the host vehicle, so it is difficult to miss a detection target (especially a preceding or following vehicle) even on a road whose curvature is not constant or on a road with a slope.
[Brief description of the drawings]
FIG. 1 is a diagram illustrating a configuration of an in-vehicle radar device.
FIG. 2 is a flowchart of an optical axis correction process (first embodiment).
FIG. 3 is a diagram for explaining region division and vanishing points of an image.
FIG. 4 is a diagram illustrating a road area and a triangular area in an image.
FIG. 5 is a diagram for explaining a scanning range of a laser radar and the like.
FIG. 6 is a diagram showing the relationship between the sign and direction of each coordinate system.
FIG. 7 is a diagram illustrating another example.
FIG. 8 is a flowchart of an optical axis correction process (second embodiment).
FIG. 9 is a flowchart of optical axis correction processing (third embodiment).
[Explanation of symbols]
1 Camera (imaging means)
2 Laser radar (sensor)
3 Image processing unit (control means)
4 Controller (control means, correction control unit)
23 Line segment detection unit
24 Direction vector detection unit

Claims (6)

1. An on-vehicle radar device having a sensor for acquiring at least position information of a detection target existing in front of or behind a host vehicle, comprising:
imaging means for acquiring an image including the road surface in front of or behind the host vehicle; and
control means for correcting the orientation of the central axis of the sensor based on the image obtained by the imaging means,
wherein the control means includes:
a line segment detection unit that detects, from the image, a line segment along the lane in which the host vehicle is traveling;
a direction vector detection unit that obtains a direction vector in real coordinates of the line segment obtained by the line segment detection unit; and
a correction control unit that executes control for correcting the orientation of the central axis vertically and horizontally so as to coincide with the direction vector obtained by the direction vector detection unit.
2. The on-vehicle radar device according to claim 1, wherein the line segment detection unit has a function of obtaining one or more pairs of the line segments located on both sides of the lane in which the host vehicle is traveling, and the direction vector detection unit obtains a vanishing point on the image as the intersection of the extension lines of a pair of the line segments and obtains the direction vector from the vanishing point.
3. The on-vehicle radar device according to claim 2, further comprising an area division unit that divides the image obtained by the imaging means into a plurality of small areas, wherein the line segment detection unit performs the detection of the line segment for each small area to detect a plurality of pairs of the line segments, and the direction vector detection unit obtains a plurality of vanishing points on the image based on the plurality of pairs of line segments, selects from among them the vanishing point that maximizes the area of the common region between a triangular region on the image, whose vertices are the vanishing point and points on the pair of line segments located nearest the host vehicle, and the region on the image of the lane in which the host vehicle is traveling, and obtains the direction vector from the selected vanishing point.
4. The on-vehicle radar device according to claim 2, wherein the line segment detection unit detects a pair of the line segments in a lowermost belt-shaped region along the lower side of the image, and the direction vector detection unit obtains a vanishing point on the image based on the pair of line segments in the lowermost belt-shaped region, determines a plurality of coordinates on the image by discretizing the vanishing point within a predetermined rectangular region, selects from among these coordinates the coordinate that maximizes the area of the common region between a triangular region on the image, whose vertices are the coordinate and points on the pair of line segments in the lowermost belt-shaped region, and the region on the image of the lane in which the host vehicle is traveling, and obtains the direction vector from the selected coordinate, the rectangular region being a rectangular region of a prescribed size centered on the vanishing point before discretization.
5. The on-vehicle radar device according to claim 3, wherein the direction vector detection unit specifies, based on the position information of the detection target obtained by the sensor, a small area in which the detection target overlaps or may overlap the line segment on the image, and excludes the small area from the areas for which the vanishing point is obtained.
6. The on-vehicle radar device according to any one of claims 1 to 5, wherein the control means starts new control for correcting the orientation of the central axis upon satisfaction of any one or more of the following conditions: detection that the horizontal position of the detection target has changed, detection that the speed of the host vehicle has changed, detection that the host vehicle has tilted with respect to the horizontal, and detection that the host vehicle has changed lanes.

Legal Events

Date        Code  Title
2005-01-26  A621  Written request for application examination
2006-03-27  A977  Report on retrieval
2006-06-14  A131  Notification of reasons for refusal
2006-08-02  A521  Request for written amendment filed
            TRDD  Decision of grant or rejection written
2006-09-06  A01   Written decision to grant a patent or to grant a registration (utility model)
2006-09-19  A61   First payment of annual fees (during grant procedure)
            R150  Certificate of patent or registration of utility model
            FPAY  Renewal fee payment (payment until 2010-10-06; year of fee payment: 4)
            LAPS  Cancellation because of no payment of annual fees