JP2012003604A - Mobile body detector and mobile body detection method - Google Patents

Mobile body detector and mobile body detection method

Info

Publication number
JP2012003604A
Authority
JP
Japan
Prior art keywords
image
moving body
motion vector
vehicle
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2010139543A
Other languages
Japanese (ja)
Other versions
JP5612915B2 (en)
Inventor
Kiyoyuki Kawai
清幸 川井
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Alpine Automotive Technology Inc
Original Assignee
Toshiba Alpine Automotive Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Alpine Automotive Technology Inc filed Critical Toshiba Alpine Automotive Technology Inc
Priority to JP2010139543A priority Critical patent/JP5612915B2/en
Priority to US13/094,345 priority patent/US20110298988A1/en
Priority to CN2011101116586A priority patent/CN102270344A/en
Publication of JP2012003604A publication Critical patent/JP2012003604A/en
Application granted granted Critical
Publication of JP5612915B2 publication Critical patent/JP5612915B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a mobile body detector that more reliably detects a mobile body moving in any direction.

SOLUTION: A mobile body detector includes: a motion vector generation part that takes in a camera image photographed by a camera mounted on a vehicle and generates motion vectors at a plurality of points P in the image; a self-vehicle movement parameter estimation part that estimates the rotary components (Rx, Ry, Rz) of the self-vehicle movement parameters by assuming that, when the inclination of the motion vector at each point P is corrected by those rotary components, it equals the inclination of the line from P to the vanishing point; and a mobile body determination part that corrects the inclination of the motion vector at an arbitrary point Q in the image using the rotary components, compares the corrected inclination with the inclination of the line joining Q and the vanishing point, and detects a mobile body moving in a direction different from the self-vehicle's direction of travel when the degree of agreement is low.

Description

The present invention relates to a moving body detection apparatus and a moving body detection method that capture video around a vehicle with a camera mounted on the vehicle and detect a moving body by analyzing motion vectors.

Conventionally, moving body detection devices are known in which a plurality of cameras mounted on a vehicle photograph the area ahead of, behind, or beside the host vehicle, and a moving body is detected using the optical flow or motion vectors of the photographed images.

For example, Patent Document 1 discloses a moving body detection apparatus that can distinguish stationary objects from moving subjects even when the camera position moves, as with an in-vehicle camera. In Patent Document 1, the motion vectors of observation targets in the observed image are extracted, the intersection of the extensions of the motion vectors (FOE: Focus of Expansion) is computed, and a moving body is determined from the temporal change in the position of the FOE.

Patent Document 2 discloses an image processing apparatus that, when observing ahead of and behind the vehicle with an in-vehicle camera to detect obstacles, removes the influence of shaking caused by road surface unevenness and steering. In Patent Document 2, since the distance in the vicinity of the FOE can be regarded as infinite, the translational component of camera motion becomes zero for motion vectors in this region and only the rotational component remains; the amount of shake (rotational component) is therefore obtained from the motion vectors of stationary objects near the FOE, and the shake is corrected.

Patent Document 3 discloses a moving image analysis method that, on the implicit assumption that stationary objects dominate the image, compares the motion vectors predicted from the host vehicle movement parameters with the observed motion vectors, iteratively fits the parameters until the two match to obtain the host vehicle motion parameters, and determines a subject whose predicted and measured motion vectors differ greatly to be a moving body.

Patent Document 4 discloses a moving body detection method and apparatus that calculates the optical flow of a moving image taken from a moving camera and estimates the host vehicle movement parameters (rotational component R, translational component T) by analyzing the optical flow. In the example of Patent Document 4, an inverse-distance index (translational component Tz / distance z) is defined from the analysis result, the expected range of the index is set in advance assuming distant subjects, and a moving body is determined when the measured index value falls outside that range.

Patent Document 5 discloses a monitoring device that obtains host vehicle movement information from external sensors, estimates a background flow using a spatial model, compares it with the measured optical flow, and determines regions that do not match to be moving bodies.

However, in in-vehicle applications the camera itself moves, so even a stationary subject appears as a moving image on the screen. This makes it difficult to reliably determine whether a subject is a stationary object or a moving body. Moreover, the methods of Patent Documents 1 to 5 are scene dependent: even if a moving body can be detected in one scene, it may not be detectable in another.

For example, in Patent Document 1, when a moving body moves along a radial line toward the FOE, the FOE remains constant and no temporal change occurs, so the moving body cannot be detected. Furthermore, when the host vehicle's rotation is large, the FOE itself cannot be defined and no moving body can be detected. In Patent Document 2, the shake cannot be corrected if there is no suitable subject near the FOE in the image.

In Patent Document 3, the movement parameters of the host vehicle (camera) are estimated from the motion vectors of the whole screen on the premise that stationary objects dominate the image, so the host vehicle movement parameters cannot be obtained correctly in scenes where most of the image is occupied by moving bodies. Patent Document 4 can detect moving bodies heading toward the FOE, but it presupposes that no subject exists within a certain distance, so it cannot be applied to camera systems that monitor the short range near the camera. Patent Document 5 requires, in addition to the camera, sensors such as wheel speed, steering angle, gyro, or yaw sensors.

Patent Document 1: JP 8-194822 A
Patent Document 2: JP 2002-112252 A
Patent Document 3: JP 5-233813 A
Patent Document 4: JP 6-203163 A
Patent Document 5: JP 2004-56763 A

In conventional in-vehicle moving body detection devices, the camera itself moves, so even a stationary subject appears as a moving image on the screen, and it is difficult to reliably determine whether it is a stationary object or a moving body. Patent Documents 1 to 5 also have the problem of scene dependency: a moving body detectable in one scene may be undetectable in another.

In view of these circumstances, an object of the present invention is to provide a moving body detection apparatus and a moving body detection method that can more reliably detect moving bodies moving in any direction.

A moving body detection apparatus according to the embodiment of claim 1 comprises: an image input unit that takes in a camera image photographed by a camera mounted on a vehicle; a motion vector generation unit that processes the image from the image input unit and generates motion vectors at a plurality of points P in the image; a host vehicle movement parameter estimation unit that estimates the rotational components (Rx, Ry, Rz) of the host vehicle movement parameters by assuming that, when the inclination of the motion vector at each point P is corrected by those rotational components, it equals the inclination from the point P to the vanishing point; and a moving body determination unit that corrects the inclination of the motion vector at an arbitrary point Q in the image using the rotational components (Rx, Ry, Rz), compares the corrected inclination with the inclination of the line joining the point Q and the vanishing point, detects a moving body whose direction differs from the host vehicle's direction of movement when the degree of agreement is low, and determines the subject to be a stationary object or a moving body moving radially toward the vanishing point when the degree of agreement is high.

A moving body detection apparatus according to the embodiment of claim 8 comprises: an image input unit that takes in a camera image photographed by a camera mounted on a vehicle; a motion vector generation unit that processes the image from the image input unit and generates motion vectors of the image; a host vehicle movement parameter estimation unit that obtains the motion vectors at a plurality of points P in the image and the vanishing point on which the extended motion vectors obtained by extending those motion vectors concentrate, and estimates the rotational components (Rx, Ry, Rz) and translational components of the host vehicle movement parameters from the motion vectors at the points P and the vanishing point; and a moving body determination unit that sets an estimated distance weighting coefficient z0 associating the camera viewing angle with the road surface, calculates an estimated host vehicle translational component from the average of the translational components multiplied by the coefficient z0, predicts the motion vector at an arbitrary point Q in the image from the rotational components (Rx, Ry, Rz), the translational components, and the coefficient z0, compares it with the measured motion vector, and determines a moving body when the degree of agreement is low.

A moving body detection method according to the embodiment of claim 10 takes in a camera image photographed by a camera mounted on a vehicle, processes the image from the image input unit to generate motion vectors at a plurality of points P in the image, estimates the rotational components (Rx, Ry, Rz) of the host vehicle movement parameters by assuming that, when the inclination of the motion vector at each point P is corrected by those rotational components, it equals the inclination from the point P to the vanishing point, corrects the inclination of the motion vector at an arbitrary point Q in the image using the rotational components (Rx, Ry, Rz), compares the corrected inclination with the inclination of the line joining the point Q and the vanishing point, detects a moving body whose direction differs from the host vehicle's direction of movement when the degree of agreement is low, and determines the subject to be a stationary object or a moving body moving radially toward the vanishing point when the degree of agreement is high.

A moving body detection method according to the embodiment of claim 15 takes in a camera image photographed by a camera mounted on a vehicle, processes the image from the image input unit to generate motion vectors at a plurality of points P in the image and to obtain the vanishing point on which the extended motion vectors obtained by extending those motion vectors concentrate, estimates the rotational components (Rx, Ry, Rz) and translational components of the host vehicle movement parameters from the motion vectors and the vanishing point, sets an estimated distance weighting coefficient z0 associating the camera viewing angle with the road surface, calculates an estimated host vehicle translational component from the average of the translational components multiplied by the coefficient z0, predicts the motion vector at an arbitrary point Q in the image from the rotational components (Rx, Ry, Rz), the translational components, and the coefficient z0, and determines a moving body when the degree of agreement with the measured motion vector is low.

According to the moving body detection apparatus of the embodiments of the present invention, a moving body moving in any direction can be detected more reliably.

Fig. 1 is a block diagram showing the configuration of the moving body detection apparatus according to the first embodiment. Fig. 2 is a block diagram showing the detailed configuration of the moving body detection unit in the first embodiment. Fig. 3 is a flowchart explaining the operation of the moving body detection apparatus according to the first embodiment. Fig. 4 is an explanatory diagram showing an example of image region selection by the image region selection unit. Fig. 5 is an explanatory diagram showing the coordinate system relevant to the motion vector generation process. Fig. 6 is a histogram used for selecting the rotational component Ry. Fig. 7 is an explanatory diagram showing the operation of the z0 setting unit. Fig. 8 is an explanatory diagram showing an example of motion vectors corrected for the rotational component. Fig. 9 is an explanatory diagram showing another example of motion vectors corrected for the rotational component. Fig. 10 is a block diagram showing the moving body detection apparatus according to the second embodiment.

Hereinafter, a moving body detection apparatus according to an embodiment of the present invention will be described with reference to the drawings.

Fig. 1 is a block diagram showing the configuration of a moving body detection apparatus 100 according to an embodiment of the present invention. In Fig. 1, the moving body detection unit 10 processes images captured by the cameras 30 mounted on the vehicle 1 to detect moving bodies, and displays the detection results on the display unit 40. The cameras 30 are mounted on the vehicle 1 and capture video of the vehicle's surroundings. Fig. 1 shows an example in which cameras 30 are mounted at the front, rear, and other positions of the vehicle 1.

Fig. 2 is a block diagram showing the detailed configuration of the moving body detection unit 10. The moving body detection unit 10 has a control unit 11 that controls the operation of the moving body detection unit 10, an image input unit 12 that takes in camera images, an image region selection unit 13 that selects image regions containing image information useful for the moving body detection process, and a motion vector generation unit 14 that generates motion vectors for the image regions selected by the image region selection unit 13.

It also has a stationary object region candidate estimation unit 15 that estimates stationary object region candidates from the image selected by the image region selection unit 13 (an image in which stationary objects and moving bodies are mixed); a motion scalar determination unit 16 that, when no stationary region candidate can be found, regards the screen as entirely moving or entirely stationary and determines the regions where motion occurs to be moving bodies; and an FOE update unit 17 that obtains and updates the FOE (Focus of Expansion) when the host vehicle moves without rotating.

It further has a host vehicle movement parameter estimation unit 18 that estimates the host vehicle movement parameters (including the rotational components Ωu, Ωv and the translational components Tx, Ty, Tz) based on the image within the region selected by the stationary object region candidate estimation unit 15, and a z0 setting unit 19 that sets an estimated distance weighting coefficient z0 as an approximate distance in order to estimate the translational components (Tz, Tx, Ty).

It additionally has a moving body determination unit 20 that focuses on the FOE; a translational direction moving body detection unit 21 that detects moving bodies heading radially toward the FOE; and an approach determination unit 22 that receives the moving body information determined by the motion scalar determination unit 16, the moving body determination unit 20, and the translational direction moving body detection unit 21, and determines moving bodies based on the logical sum (OR) of these results.

Fig. 3 is a flowchart explaining the operation of the moving body detection apparatus according to the first embodiment of the present invention; the operation is described below with reference to the related figures. The moving body detection unit 10 operates under the control of the control unit 11, which consists of a microprocessor including a CPU, ROM, RAM, and the like. The control unit 11 issues control signals to each block so that the operations described below are performed (in Fig. 2, the control signal lines are shown as dotted lines).

First, in step S1, the image input unit 12 acquires a camera image captured by the camera 30. In step S2, the image region selection unit 13 selects an image region containing image information useful for the subsequent processing. A feature point extraction method can be used to judge which image information is useful. For example, as shown in Fig. 4, the ground extending infinitely in the image converges toward a line called the vanishing line, and a region including this vanishing line is selected. When the host vehicle moves without rotating (for example, moving in the direction of arrow A), extending the multiple motion vectors in the image yields a point on which the extended motion vectors concentrate; this point is the vanishing point (FOE).

In the next step S3, the motion vector generation unit 14 generates motion vectors for the region selected by the image region selection unit 13. A gradient method, a block matching method, or the like can be used for the motion vector generation process.
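As a rough illustration of the block matching option mentioned here (a sketch only, not the patent's implementation; the frame data, block size, and search range below are hypothetical), the motion vector of one block can be estimated by an exhaustive SAD search:

```python
import numpy as np

def block_matching_vector(prev, curr, top, left, block=8, search=4):
    """Estimate the motion vector of one block by exhaustive SAD search.

    prev, curr: 2-D grayscale frames (np.ndarray).
    Returns (du, dv): horizontal/vertical displacement of the block
    from prev to curr, in pixels.
    """
    ref = prev[top:top + block, left:left + block].astype(np.int64)
    best, best_uv = None, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            t, l = top + dv, left + du
            if t < 0 or l < 0 or t + block > curr.shape[0] or l + block > curr.shape[1]:
                continue  # candidate window falls outside the frame
            cand = curr[t:t + block, l:l + block].astype(np.int64)
            sad = np.abs(ref - cand).sum()  # sum of absolute differences
            if best is None or sad < best:
                best, best_uv = sad, (du, dv)
    return best_uv
```

In a real system the search would be run for many blocks over the selected region, yielding the field of motion vectors used in the later steps.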

For example, as shown in Fig. 5, consider an xyz camera coordinate system in which the optical axis of the camera 30 is the z-axis, the axis horizontal to the road surface is the x-axis, and the axis perpendicular to the road surface is the y-axis. With the camera center at Oc(0, 0, 0) and focal length f, the projection plane z = f is denoted by 200, and a point P' in world coordinates is projected onto the point P on this projection plane 200. Ow denotes a coordinate system fixed to the ground in the world coordinate system.

The motion vector V(u, v) at a screen point P(x, y) accompanying the movement of the host vehicle can be expressed by the well-known equations (1) and (2).

[Equations (1) and (2) appear as images in the original document.]

Here, the host vehicle movement parameters are the rotational components Ωu, Ωv and the translational components Tx, Ty, Tz. Tx, Ty, and Tz are the translational velocities in the x-, y-, and z-axis directions, respectively. Furthermore, Ωu and Ωv are expressed in terms of Rx, Ry, and Rz by equations (3) and (4), where Rx, Ry, and Rz are the rotational components about the x-, y-, and z-axes.

[Equations (3) and (4) appear as images in the original document.]
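Since equations (1)-(4) are only present as images, they cannot be reproduced verbatim. As a reconstruction (an assumption: these are the standard ego-motion optical-flow equations consistent with the surrounding description, and the patent's sign conventions may differ), they take the form:

```latex
\begin{align}
u &= \frac{x\,T_z - f\,T_x}{z} + \Omega_u \tag{1} \\
v &= \frac{y\,T_z - f\,T_y}{z} + \Omega_v \tag{2} \\
\Omega_u &= \frac{xy}{f}\,R_x - \left(f + \frac{x^2}{f}\right)R_y + y\,R_z \tag{3} \\
\Omega_v &= \left(f + \frac{y^2}{f}\right)R_x - \frac{xy}{f}\,R_y - x\,R_z \tag{4}
\end{align}
```

Here (u, v) is the image motion at P(x, y), z is the depth of the corresponding scene point, and f is the focal length.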

As is well known, when the rotational components of the motion vector V(u, v) at a point P(x, y) are corrected away and the inclination of the remainder is obtained, it coincides with the inclination of the line from P(x, y) toward the FOE. That is, equation (5) holds.

[Equation (5) appears as an image in the original document.]
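The collinearity property stated here can be written out as follows (a reconstruction, since the equation image is not reproduced; it uses the standard flow model and the vanishing point coordinates x0 = f·Tx/Tz, y0 = f·Ty/Tz given later in the text):

```latex
u - \Omega_u = \frac{T_z}{z}\left(x - \frac{f\,T_x}{T_z}\right) = \frac{T_z}{z}\,(x - x_0),
\qquad
v - \Omega_v = \frac{T_z}{z}\,(y - y_0)
\quad\Longrightarrow\quad
\frac{v - \Omega_v}{u - \Omega_u} = \frac{y - y_0}{x - x_0} \tag{5}
```

In words: the rotation-corrected motion vector at any static point P points along the line joining P and the FOE.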

The image selected by the image region selection unit 13 (an image in which stationary objects and moving bodies are mixed) is input to the stationary object region candidate estimation unit 15, and in step S4 the stationary object regions are estimated. In an in-vehicle camera, Rx and Rz are often very small, so applying the approximation

Rx = Rz = 0

yields equations (6) and (7).

[Equations (6) and (7) appear as images in the original document.]

From equations (5), (6), and (7), Ry can be obtained for each pixel P from the point P(x, y), the motion vector V(u, v), and the FOE (x0, y0). For stationary objects, Ry is common. These Ry values are obtained for a plurality of image points, a histogram of Ry such as that shown in Fig. 6 is generated, and the regions having a common Ry near the peak of the histogram are selected as stationary object region candidates.
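A sketch of this per-point Ry estimation and histogram voting follows. It is an illustration only: since equations (6) and (7) appear as images, the closed form for Ry below is reconstructed from the standard flow model under Rx = Rz = 0 (so Ωu = -(f + x²/f)·Ry and Ωv = -(xy/f)·Ry), and the focal length, bin count, and tolerance are hypothetical.

```python
import numpy as np

def ry_from_flow(x, y, u, v, x0, y0, f):
    """Per-point yaw estimate under the approximation Rx = Rz = 0.

    Solves the collinearity condition (v - Ov)*(x - x0) = (u - Ou)*(y - y0)
    for Ry, with Ou = -(f + x^2/f)*Ry, Ov = -(x*y/f)*Ry (reconstruction).
    """
    num = u * (y - y0) - v * (x - x0)
    den = (x * y / f) * (x - x0) - (f + x * x / f) * (y - y0)
    return num / den

def static_candidates(points, flows, x0, y0, f, bins=50, tol_bins=1):
    """Histogram the per-point Ry values and keep points near the peak bin."""
    ry = np.array([ry_from_flow(x, y, u, v, x0, y0, f)
                   for (x, y), (u, v) in zip(points, flows)])
    hist, edges = np.histogram(ry, bins=bins)
    peak = int(np.argmax(hist))
    lo = edges[max(peak - tol_bins, 0)]
    hi = edges[min(peak + 1 + tol_bins, bins)]
    mask = (ry >= lo) & (ry <= hi)  # True = stationary-object candidate
    return mask, ry
```

For stationary points all Ry estimates coincide, so they pile into one histogram bin; independently moving subjects scatter and fall outside the peak.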

However, when the host vehicle speed is high, Rz arises and should be taken into account. In this case, applying the approximation

Rx = 0

yields equations (8) and (9),

[Equations (8) and (9) appear as images in the original document.]

and Ry and Rz can be obtained from a plurality of nearby motion vectors. In this case, histograms of Ry and Rz are generated, and the regions having common Ry and Rz near the histogram peaks are selected as stationary object region candidates.

When no stationary region candidate can be found (NO in step S5), the screen is regarded as entirely moving or entirely stationary, and the motion scalar determination unit 16 determines the regions where motion occurs to be moving bodies. That is, in step S6 the motion scalar determination unit 16 judges the magnitude of the motion (the scalar quantity), and in step S7, when the scalar quantity exceeds a preset threshold (YES), the region is determined to be a motion region. A vector quantity is expressed by a direction and a magnitude; the magnitude alone is called a scalar (scalar quantity). Even when the stationary object area on the screen is small, if a peak of the Ry histogram for the stationary object region can be found, that region can be determined to be a stationary object region.
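The scalar (magnitude) thresholding of steps S6 and S7 can be sketched as follows (the threshold value is hypothetical):

```python
import numpy as np

def motion_regions(flow, thresh=1.5):
    """Flag pixels whose motion magnitude (the scalar quantity) exceeds a threshold.

    flow: array of shape (H, W, 2) holding (u, v) per pixel.
    thresh: magnitude threshold in pixels/frame (hypothetical value).
    Returns a boolean (H, W) mask of motion regions.
    """
    mag = np.hypot(flow[..., 0], flow[..., 1])  # sqrt(u^2 + v^2)
    return mag > thresh
```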

In step S8, when the host vehicle moves without rotating, the FOE update unit 17 obtains the FOE as the intersection of the extension lines of the motion vectors and updates it. In an in-vehicle camera, translational motion occurs only in the direction of the vehicle's central axis, so the vanishing point (FOE) might be expected to be invariant; however, shifts of the optical axis can arise when there are many passengers or a heavy load, or through aging of the mounting position of the camera 30, and a shift of the optical axis of the camera 30 appears as a shift of the vanishing point (FOE). Therefore, the FOE is updated in step S8.

That is, within a stationary object region candidate, when the rotational components are zero and the motion vector at a point P(x, y) is (un, vn), letting (x0, y0) be a point on the extension line of the motion vector starting from P gives equation (10).

[Equation (10) appears as an image in the original document.]

Expressing this in matrix form for n points P gives equation (11).

[Equation (11) appears as an image in the original document.]

The vanishing point (x0, y0) is then obtained from the plurality of points P using the least squares method and updated as the new vanishing point. Vanishing point updating is therefore limited to times when the vehicle is in purely translational motion, but since vanishing point shifts do not occur frequently, this poses no practical problem. Since the mounting angle of the camera 30 can be obtained from the vanishing point, the vanishing point update data is also sent to the z0 setting unit 19 described later and used as calibration data.
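A sketch of this least-squares FOE update follows. Since each motion vector's extension line passes through the FOE, each static point gives the linear constraint v·(x0 - x) = u·(y0 - y), i.e. v·x0 - u·y0 = v·x - u·y; stacking n points gives an overdetermined system for (x0, y0). The point data in the test are hypothetical.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares vanishing point (FOE) from motion vectors at static points.

    points: list of (x, y); flows: list of (u, v).
    Returns the FOE estimate (x0, y0).
    """
    pts = np.asarray(points, dtype=float)
    fl = np.asarray(flows, dtype=float)
    A = np.column_stack([fl[:, 1], -fl[:, 0]])       # rows [v, -u]
    b = fl[:, 1] * pts[:, 0] - fl[:, 0] * pts[:, 1]  # v*x - u*y
    (x0, y0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0
```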

If the optical axis of the camera 30 points downward at a depression angle θ relative to the road surface, then for a host-vehicle translation speed T the translation speeds in the z-axis and y-axis directions are
Tz = T cosθ, Ty = T sinθ.
Since the vanishing point (x0, y0) can be expressed as
x0 = fTx/Tz, y0 = fTy/Tz,
it follows that
y0 = f · tanθ,
so the camera depression angle can be obtained from the vanishing point. Similarly, considering the x-axis direction, the angle φ between the vehicle center line and the optical axis is given by
x0 = f · tanφ.
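The relations y0 = f·tanθ and x0 = f·tanφ invert directly. A small sketch (names are illustrative, not from the patent):

```python
import math

def camera_angles_from_foe(x0, y0, f):
    """Invert y0 = f*tan(theta) and x0 = f*tan(phi) to recover the
    camera depression angle theta and the yaw offset phi from the
    vanishing point (x0, y0) and focal length f (all in pixels)."""
    theta = math.atan2(y0, f)  # depression angle of the optical axis
    phi = math.atan2(x0, f)    # angle between vehicle center line and optical axis
    return theta, phi
```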

In step S9, the host-vehicle movement parameter estimation unit 18 estimates the host-vehicle movement parameters Rx, Ry, Rz using the motion vectors V(un, vn) of the n points P(xn, yn) in the stationary object region selected by the stationary region candidate estimation unit 15. That is, from equations (1)-(4) the following equation (12) is obtained for the n points P(xn, yn).

Figure 2012003604

The rotation components Rx, Ry, Rz of the host-vehicle movement parameters are estimated from equation (12) by the least-squares method, using singular value decomposition or the like. The translation components obtained from equations (1)-(4) with the estimated rotation components are available only in the forms Tx/z, Ty/z, Tz/z, i.e. multiplied by the reciprocal of the distance z to the subject. In step S10, the z0 setting unit 19 therefore sets an estimated distance weighting coefficient z0 as an approximate distance in order to estimate the translation speeds Tx, Ty, Tz in the x-, y-, and z-axis directions.

That is, as shown in FIG. 7, the coefficient z0 is set in association with the camera viewing angle (downward depression angle θ) corresponding to the road-surface distance from the camera 30. In the example of FIG. 7, z0 = 1 is set for image regions whose viewing angle corresponds to a road-surface distance of 1 to 1.5 m. Similarly, z0 = 1.5 for 1.5 to 2 m, z0 = 2 for 2 to 3 m, z0 = 3 for 3 to 4 m, and z0 = 4 for 4 m to infinity. Once the angle θ is known, the distance from the host vehicle to the subject M is known.

Each pixel of the camera image corresponds one-to-one with a camera viewing angle. The estimated distance weighting coefficient z0 relating the camera viewing angle to the road surface can therefore be computed for each pixel in advance and stored in an LUT (look-up table) or the like.

Although FIG. 7 illustrates a procedure in which z0 is preset discretely, z0 may instead be set continuously in association with the camera viewing angle, as z0 = 1/z. In that case, since camera resolution is limited, z0 is fixed at a constant value beyond a certain distance z.
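The per-pixel LUT described above can be sketched as follows. The camera height, focal length, tilt, principal point, and bin edges below are assumed values for illustration only (the patent specifies only the FIG. 7 distance bins):

```python
import math

CAMERA_HEIGHT = 1.2            # camera height above the road [m] (assumption)
FOCAL_PX = 800.0               # focal length in pixels (assumption)
TILT_RAD = math.radians(20.0)  # optical-axis depression angle theta (assumption)

def road_distance(row, cy):
    """Road-surface distance seen by an image row (flat-road model)."""
    # Viewing angle of this row = tilt + angle off the optical axis.
    angle = TILT_RAD + math.atan2(row - cy, FOCAL_PX)
    if angle <= 0:
        return float("inf")  # at or above the horizon
    return CAMERA_HEIGHT / math.tan(angle)

def z0_for(dist):
    """Discrete weighting coefficient, following the FIG. 7 example."""
    for upper, z0 in ((1.5, 1.0), (2.0, 1.5), (3.0, 2.0), (4.0, 3.0)):
        if dist < upper:
            return z0
    return 4.0  # 4 m to infinity

# Precompute once per pixel row -- the LUT mentioned in the text.
CY = 240  # principal-point row for a 480-line image (assumption)
Z0_LUT = [z0_for(road_distance(r, CY)) for r in range(480)]
```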

By setting the distance z0 in association with the camera viewing angle corresponding to the road-surface distance in this way, the host-vehicle movement parameter estimation unit 18 obtains
Tz = z0·(Tz/z), Tx = z0·(Tx/z), Ty = z0·(Ty/z),
and averages these over multiple points to calculate the estimated translation components Tzm, Txm, Tym.

Meanwhile, in step S11 the FOE-based moving body determination unit 20 performs the following processing. For an arbitrary point Q(x, y) in the image, the host-vehicle movement parameters Rx, Ry, Rz are substituted into equations (1)-(4) to obtain the slope after motion correction, equation (13).

Figure 2012003604

The slope of the straight line from Q(x, y) to the FOE (x0, y0) is given by equation (14).

Figure 2012003604

If comparison of the slopes of equations (13) and (14) in step S11 shows low agreement (NO in step S12), the extension line of the motion vector with its rotation corrected (by the rotational components Ωu, Ωv) does not pass through the FOE, as shown in FIG. 8, so the point is judged to belong to a moving body whose direction of movement differs from that of the host vehicle.

Camera shake appears as the host-vehicle rotation components Rx and Rz. Because the determination is made after the rotation has been corrected, as in equation (13), shaking caused by driving on gravel roads, over steps, and the like is compensated even when rotation components (mainly Rx and Rz) occur, so the detection is unaffected by vehicle shake.

Conversely, if the slopes of equations (13) and (14) agree closely (YES in step S12), the rotation-corrected extension line of the motion vector passes through the FOE, as shown in FIG. 9. In this case the point may belong not only to a stationary object but also to a moving body moving radially with respect to the FOE. The translational-direction moving body detection unit 21 therefore performs the following operation.

The translational-direction moving body detection unit 21 receives the information on points not detected as moving bodies by the FOE-based moving body determination unit 20, together with the estimated host-vehicle translation component Tzm obtained by the host-vehicle movement parameter estimation unit 18. In step S13 it obtains Tz/z for a point Q(x, y) from equations (1)-(4) and the rotation components Rx, Ry, Rz, and compares z0(Tz/z) with the average estimated translation component Tzm; in step S14, if the two differ by more than a preset threshold, the point is judged to be a moving body. In other words, anything moving faster than its surroundings is judged to be a moving body coming toward the host vehicle.
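The comparison in steps S13-S14 can be sketched as a single predicate. The relative threshold of 0.3 below is illustrative, not a value from the patent:

```python
def is_translational_moving_body(tz_over_z, z0, tzm, rel_threshold=0.3):
    """Sketch of steps S13-S14: a point whose distance-weighted translation
    component z0*(Tz/z) deviates from the scene-average estimate Tzm by more
    than a threshold moves differently from the static background and is
    judged a moving body."""
    return abs(z0 * tz_over_z - tzm) > rel_threshold * abs(tzm)
```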

As can be seen from equations (1)-(4), at long range (large z) the translation component Tz/z is small, so the motion vector is small; at short range (small z) Tz/z is large, so the motion vector is large. Left uncorrected, this makes distant moving bodies hard to detect and nearby stationary objects prone to false detection. Introducing the estimated distance weighting coefficient z0 makes distant moving bodies easier to detect and nearby stationary objects less likely to be falsely detected.

In step S15, the approach determination unit 22 receives the moving bodies determined by the motion scalar determination unit 16 (the result of step S7), by the FOE-based moving body determination unit 20 (the result of step S12), and by the translational-direction moving body detection unit 21 (the result of step S14), and takes their logical OR to determine moving bodies. Since the motion vector direction is known for anything judged to be a moving body, the approach direction is determined; moving bodies heading toward the host vehicle are output preferentially as high-risk, and a warning is shown on the display unit 40 if any moving body is approaching the host vehicle.
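The combination performed in step S15 can be sketched as follows (a minimal illustration; the function name and boolean interface are invented):

```python
def classify(scalar_hit, foe_hit, translational_hit, approaching_host):
    """Sketch of step S15: OR the three detector outputs (steps S7, S12,
    S14); among detected moving bodies, those approaching the host vehicle
    are flagged as high priority for the warning display."""
    is_moving = scalar_hit or foe_hit or translational_hit
    high_priority = is_moving and approaching_host
    return is_moving, high_priority
```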

As described above, the first embodiment can detect moving bodies heading toward the FOE, and thus can detect moving bodies moving in any direction. It also provides stable detection even with many passengers or a heavy load, and remains stable without being affected by host-vehicle vibration (camera shake) on gravel roads and the like.

Next, a moving body detection apparatus according to a second embodiment will be described with reference to FIG. 10. In the second embodiment, the FOE-based moving body determination unit 20 is replaced by a motion vector prediction unit 23, and the translational-direction moving body detection unit 21 is replaced by a moving body determination unit 24.

As in the first embodiment, the host-vehicle movement parameter estimation unit 18 outputs the rotation components Rx, Ry, Rz and the average translation components Tzm, Txm, Tym. The motion vector prediction unit 23 calculates predicted vectors u′, v′ for a point Q(x, y) based on equations (1)-(4), using Tzm/z0, Txm/z0, Tym/z0 in place of Tz/z, Tx/z, Ty/z.

The moving body determination unit 24 compares the measured motion vector u, v with the predicted motion vector u′, v′, and judges a moving body when the agreement is low. With a vehicle-mounted camera, Tx and Ty are often very small, so Tx/z, Ty/z and Txm, Tym may be omitted.
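The second-embodiment comparison of measured and predicted vectors can be sketched as follows. The relative threshold is illustrative, not a value from the patent:

```python
import math

def is_moving_by_prediction(u, v, u_pred, v_pred, rel_threshold=0.5):
    """Second-embodiment sketch: compare the measured motion vector (u, v)
    with the vector (u', v') predicted from the estimated ego-motion;
    a large relative discrepancy marks a moving body."""
    err = math.hypot(u - u_pred, v - v_pred)
    scale = max(math.hypot(u_pred, v_pred), 1e-6)  # avoid division by zero
    return err / scale > rel_threshold
```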

The present invention is not limited to the embodiments described above; various modifications are possible without departing from the scope of the claims.

100 … moving body detection apparatus
10 … moving body detection unit
11 … control unit
12 … image input unit
13 … image region selection unit
14 … motion vector generation unit
15 … stationary region candidate estimation unit
16 … motion scalar determination unit
17 … FOE update unit
18 … host-vehicle movement parameter estimation unit
19 … z0 setting unit
20 … moving body determination unit (FOE-based)
21 … translational-direction moving body detection unit
22 … approach determination unit
23 … motion vector prediction unit
24 … moving body determination unit
30 … camera
40 … display unit

Claims (15)

1. A moving body detection apparatus comprising:
an image input unit that captures a camera image taken by a camera mounted on a vehicle;
a motion vector generation unit that processes the image from the image input unit to generate motion vectors at a plurality of points P of the image;
a host-vehicle movement parameter estimation unit that estimates the rotation components (Rx, Ry, Rz) of the host-vehicle movement parameters on the assumption that the slope of the motion vector at each point P, once corrected by the rotation components (Rx, Ry, Rz), equals the slope from the point P to a vanishing point; and
a moving body determination unit that corrects the slope of the motion vector at an arbitrary point Q in the image using the rotation components (Rx, Ry, Rz), compares the corrected slope with the slope of the line connecting the point Q and the vanishing point, detects a moving body whose direction of movement differs from that of the host vehicle when the agreement is low, and judges a stationary object or a moving body moving radially toward the vanishing point when the agreement is high.

2. The moving body detection apparatus according to claim 1, wherein, when the rotation components are 0, the host-vehicle movement parameter estimation unit updates the vanishing point with the intersection of the extension lines of the motion vectors of the plurality of points P.

3. The moving body detection apparatus according to claim 1, further comprising a stationary region candidate estimation unit that estimates a stationary object region from the motion vectors of the plurality of points P of the image,
wherein the host-vehicle movement parameter estimation unit estimates the rotation components (Rx, Ry, Rz) of the host-vehicle movement parameters from the motion vectors of the plurality of points P in the estimated stationary object region and the vanishing point.

4. The moving body detection apparatus according to claim 3, wherein the stationary region candidate estimation unit obtains, from the motion vectors of the plurality of points P in the estimated stationary object region and the vanishing point, at least Ry of the rotation components Ry and Rz due to the steering angle of the host vehicle, and selects as stationary object region candidates the neighborhood of the peak of a histogram of the rotation component Ry (or Ry and Rz) of the points.

5. The moving body detection apparatus according to claim 3, further comprising a translational-direction moving body detection unit that receives image information not detected as a moving body by the moving body determination unit, obtains the translation component Tz/z of the host-vehicle movement parameters from the motion vector and the rotation components (Rx, Ry, Rz) at an arbitrary point Q in the stationary object region candidate, sets an estimated distance weighting coefficient z0 relating the camera viewing angle to the road surface, compares the translation component z0(Tz/z) multiplied by the coefficient z0 with an estimated host-vehicle translation component Tzm, and detects a moving body when the agreement is low.

6. The moving body detection apparatus according to claim 5, wherein the translational-direction moving body detection unit obtains the translation component Tz/z of the host-vehicle movement parameters from the motion vectors and the rotation components (Rx, Ry, Rz) of a plurality of points P in the stationary object region candidate, and takes the average of the translation components z0(Tz/z) multiplied by the coefficient z0 as the estimated host-vehicle translation component Tzm.

7. The moving body detection apparatus according to claim 5 or 6, wherein the translational-direction moving body detection unit presets, as the estimated distance weighting coefficient z0, a correspondence that associates a distance on the road surface with each pixel position of the camera image.

8. A moving body detection apparatus comprising:
an image input unit that captures a camera image taken by a camera mounted on a vehicle;
a motion vector generation unit that processes the image from the image input unit to generate motion vectors of the image;
a host-vehicle movement parameter estimation unit that obtains the motion vectors of a plurality of points P of the image and the vanishing point on which the extensions of those motion vectors converge, and estimates the rotation components (Rx, Ry, Rz) and the translation components of the host-vehicle movement parameters from the motion vectors of the plurality of points P and the vanishing point; and
a moving body determination unit that sets an estimated distance weighting coefficient z0 relating the camera viewing angle to the road surface, calculates an estimated host-vehicle translation component from the average of the translation components multiplied by the coefficient z0, predicts the motion vector at an arbitrary point Q in the image from the rotation components (Rx, Ry, Rz), the translation components, and the coefficient z0, and judges the point to belong to a moving body when its agreement with the measured motion vector is low.

9. The moving body detection apparatus according to claim 8, wherein a correspondence associating a distance on the road surface with each pixel position of the camera image is preset as the estimated distance weighting coefficient z0.

10. A moving body detection method comprising:
capturing a camera image taken by a camera mounted on a vehicle;
processing the image to generate motion vectors at a plurality of points P of the image;
estimating the rotation components (Rx, Ry, Rz) of the host-vehicle movement parameters on the assumption that the slope of the motion vector at each point P, once corrected by the rotation components (Rx, Ry, Rz), equals the slope from the point P to a vanishing point; and
correcting the slope of the motion vector at an arbitrary point Q in the image using the rotation components (Rx, Ry, Rz), comparing the corrected slope with the slope of the line connecting the point Q and the vanishing point, detecting a moving body whose direction of movement differs from that of the host vehicle when the agreement is low, and judging a stationary object or a moving body moving radially toward the vanishing point when the agreement is high.

11. The moving body detection method according to claim 10, wherein, when the rotation components are 0, the vanishing point is updated with the intersection of the extension lines of the motion vectors of the plurality of points P.

12. The moving body detection method according to claim 10, comprising:
estimating a stationary object region from the motion vectors of the plurality of points P of the image;
obtaining, from the motion vectors of the plurality of points P in the estimated stationary object region and the vanishing point, at least Ry of the rotation components Ry and Rz due to the steering angle of the host vehicle; and
selecting as stationary object region candidates the neighborhood of the peak of a histogram of the rotation component Ry (or Ry and Rz) of the points.

13. The moving body detection method according to claim 12, comprising:
inputting image information not detected as a moving body;
obtaining the translation component Tz/z of the host-vehicle movement parameters from the motion vector at an arbitrary point Q in the stationary object region candidate and the rotation components (Rx, Ry, Rz); and
setting an estimated distance weighting coefficient z0 relating the camera viewing angle to the road surface, comparing the translation component z0(Tz/z) multiplied by the coefficient z0 with an estimated host-vehicle translation component Tzm, and detecting a moving body when the agreement is low.

14. The moving body detection method according to claim 13, wherein the translation component Tz/z of the host-vehicle movement parameters is obtained from the motion vectors of a plurality of points P in the stationary object region candidate and the rotation components (Rx, Ry, Rz), and the average of the translation components z0(Tz/z) multiplied by the coefficient z0 is taken as the estimated host-vehicle translation component Tzm.

15. A moving body detection method comprising:
capturing a camera image taken by a camera mounted on a vehicle;
processing the image to generate motion vectors at a plurality of points P of the image and obtaining the vanishing point on which the extensions of those motion vectors converge;
estimating the rotation components (Rx, Ry, Rz) and the translation components of the host-vehicle movement parameters from the motion vectors and the vanishing point; and
setting an estimated distance weighting coefficient z0 relating the camera viewing angle to the road surface, calculating an estimated host-vehicle translation component from the average of the translation components multiplied by the coefficient z0, predicting the motion vector at an arbitrary point Q in the image from the rotation components (Rx, Ry, Rz), the translation components, and the coefficient z0, and judging a moving body when its agreement with the measured motion vector is low.
JP2010139543A 2010-06-04 2010-06-18 Moving body detection apparatus and moving body detection method Active JP5612915B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2010139543A JP5612915B2 (en) 2010-06-18 2010-06-18 Moving body detection apparatus and moving body detection method
US13/094,345 US20110298988A1 (en) 2010-06-04 2011-04-26 Moving object detection apparatus and moving object detection method
CN2011101116586A CN102270344A (en) 2010-06-04 2011-04-29 Moving object detection apparatus and moving object detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010139543A JP5612915B2 (en) 2010-06-18 2010-06-18 Moving body detection apparatus and moving body detection method

Publications (2)

Publication Number Publication Date
JP2012003604A true JP2012003604A (en) 2012-01-05
JP5612915B2 JP5612915B2 (en) 2014-10-22

Family

ID=45535492

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010139543A Active JP5612915B2 (en) 2010-06-04 2010-06-18 Moving body detection apparatus and moving body detection method

Country Status (1)

Country Link
JP (1) JP5612915B2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10299655A (en) * 1997-02-18 1998-11-10 Calsonic Corp Piston assembly for swash plate type compressor
JP2013238497A (en) * 2012-05-15 2013-11-28 Toshiba Alpine Automotive Technology Corp Automatic calibration apparatus of in-vehicle camera
WO2014142367A1 (en) * 2013-03-13 2014-09-18 엘지전자 주식회사 Electronic device and method for controlling same
JP2016091528A (en) * 2014-11-05 2016-05-23 バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド Image segmentation method and image segmentation device
WO2016151976A1 (en) * 2015-03-26 2016-09-29 パナソニックIpマネジメント株式会社 Moving body detection device, image processing device, moving body detection method, and integrated circuit
JP2020145501A (en) * 2019-03-04 2020-09-10 トヨタ自動車株式会社 Information processor, detection method and program
US10943141B2 (en) 2016-09-15 2021-03-09 Mitsubishi Electric Corporation Object detection device and object detection method
US11023748B2 (en) 2018-10-17 2021-06-01 Samsung Electronics Co., Ltd. Method and apparatus for estimating position
US11367213B2 (en) 2019-09-20 2022-06-21 Samsung Electronics Co., Ltd. Method and apparatus with location estimation
CN117032237A (en) * 2023-08-16 2023-11-10 淮安永道智能科技有限公司 Universal motion control method for omnidirectional chassis

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06282655A (en) * 1993-03-30 1994-10-07 Toyota Motor Corp Device for recognizing moving object
JPH09114987A (en) * 1995-10-13 1997-05-02 Toyota Motor Corp Moving object detector
JP2008042759A (en) * 2006-08-09 2008-02-21 Auto Network Gijutsu Kenkyusho:Kk Image processing apparatus
JP2008203992A (en) * 2007-02-16 2008-09-04 Omron Corp Detection device, method, and program thereof
WO2010050095A1 (en) * 2008-10-31 2010-05-06 株式会社小糸製作所 Headlamp controller
JP2010132053A (en) * 2008-12-03 2010-06-17 Koito Mfg Co Ltd Headlight control device



Also Published As

Publication number Publication date
JP5612915B2 (en) 2014-10-22

Similar Documents

Publication Publication Date Title
JP5612915B2 (en) Moving body detection apparatus and moving body detection method
US20110298988A1 (en) Moving object detection apparatus and moving object detection method
US10726576B2 (en) System and method for identifying a camera pose of a forward facing camera in a vehicle
US8184160B2 (en) Image processor, driving assistance system, and out-of-position detecting method
JP4970926B2 (en) Vehicle periphery monitoring device
WO2017135081A1 (en) Vehicle-mounted camera calibration system
JP2007129560A (en) Object detector
JP2007104538A (en) In-vehicle dead angle video image display device
US20170259830A1 (en) Moving amount derivation apparatus
JP2008182652A (en) Camera posture estimation device, vehicle, and camera posture estimating method
JP5959311B2 (en) Data deriving apparatus and data deriving method
JP5624370B2 (en) Moving body detection apparatus and moving body detection method
JP4894771B2 (en) Calibration apparatus and calibration method
JP2008085710A (en) Driving support system
JP6947066B2 (en) Posture estimator
JP5173551B2 (en) Vehicle perimeter monitoring apparatus and camera mounting position / posture information setting correction method applied thereto
JP2008309519A (en) Object detection device using image processing
JP6693314B2 (en) Vehicle approaching object detection device
US9827906B2 (en) Image processing apparatus
JP2009139324A (en) Travel road surface detecting apparatus for vehicle
JP2005318568A (en) Image compensation device and image compensation method
JP5588332B2 (en) Image processing apparatus for vehicle and image processing method for vehicle
JP2011209070A (en) Image processor
WO2017090097A1 (en) Outside recognition device for vehicle
JP2020035158A (en) Attitude estimation device and calibration system

Legal Events

Date Code Title Description

A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621; effective date: 20130513)
A977 Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007; effective date: 20140130)
A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131; effective date: 20140401)
A521 Request for written amendment filed (JAPANESE INTERMEDIATE CODE: A523; effective date: 20140527)
TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01; effective date: 20140826)
A61 First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61; effective date: 20140905)
R150 Certificate of patent or registration of utility model (ref document number: 5612915; country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
S111 Request for change of ownership or part of ownership (JAPANESE INTERMEDIATE CODE: R313113)
R350 Written notification of registration of transfer (JAPANESE INTERMEDIATE CODE: R350)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)