JP2011070593A - Vehicle periphery monitoring device - Google Patents


Info

Publication number
JP2011070593A
Authority
JP
Japan
Prior art keywords
obstacle
feature point
image
processing unit
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2009223423A
Other languages
Japanese (ja)
Other versions
JP5240149B2 (en)
Inventor
Shigeya Sasane (成哉 笹根)
Keisuke Kaminan (恵資 上南)
Hideaki Arai (秀昭 新井)
Takahiro Maemura (高広 前村)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Motors Corp
Original Assignee
Mitsubishi Motors Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Motors Corp
Priority to JP2009223423A
Publication of JP2011070593A
Application granted
Publication of JP5240149B2
Legal status: Active
Anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a vehicle periphery monitoring device that makes it easy to grasp an obstacle displayed with distortion on a bird's-eye view video, by emphasizing the obstacle on the bird's-eye view video.
SOLUTION: This vehicle periphery monitoring device captures the bird's-eye view video at fixed time intervals, extracts a feature point P with a feature point extraction processing part 3, and tracks the movement of the extracted feature point P with a feature point tracking processing part 4, calculating the optical flow of the feature point P to obtain its motion vector. A three-dimensional measurement processing part 5 calculates the relative motion information between the host vehicle and the feature point P and the three-dimensional coordinate information of the feature point P from the change in the position of the feature point P on the bird's-eye view video, an obstacle detection processing part 6 detects an obstacle from the calculated relative motion information and three-dimensional coordinate information, an emphasized region drawing processing part 8 determines and emphasizes a region on the bird's-eye view video containing the detected obstacle, and a display part 10 displays the result.
COPYRIGHT: (C) 2011, JPO & INPIT

Description

The present invention relates to a vehicle periphery monitoring device that displays, on a bird's-eye view video, the region of an obstacle relative to the host vehicle.

Conventionally, there are vehicle periphery monitoring devices in which a plurality of imaging devices are mounted on a vehicle, the images captured by the imaging devices are combined into a bird's-eye view image of the entire surroundings of the vehicle, and the result is displayed on a monitor at the driver's seat, so that driving operations in a parking lot or the like can be performed easily and safely.
In such a conventional vehicle periphery monitoring device, a front camera, a rear camera, a right camera, and a left camera are attached to the vehicle.
The videos captured by the front, rear, right, and left cameras are viewpoint-converted and then combined to generate a bird's-eye view video.
The bird's-eye view video generated in this way is displayed on a bird's-eye view display screen and used to assist driving operations.

As such a vehicle periphery monitoring device, one has been provided that includes an imaging unit, a display unit that provides information to the driver, a start scene selection unit, a vehicle speed sensor, a shift position detection unit, a turn signal (winker) detection unit, a steering angle sensor, and a screen mode switching unit.
This vehicle periphery monitoring device can change the rear-side area to be shown according to the turn signal information and the steering angle information; in particular, when a turn signal input is detected in high-speed mode, the composite image in the turn signal direction is displayed enlarged.
The imaging unit captures the vehicle surroundings with individual cameras attached to a plurality of camera mounting portions installed on the vehicle, and the display unit provides information to the driver.
The start scene selection unit selects the normal driving scene from among a plurality of driving scenes when the engine is started, the vehicle speed sensor detects the vehicle speed, and the shift position detection unit detects the shift position.
The turn signal detection unit detects the direction of the turn signal, the steering angle sensor detects the steering angle of the steering wheel, and the screen mode switching unit switches the display screen configuration based on the information detected by the sensors and detection units (see Patent Document 1).

Patent Document 1: JP 2002-109697 A

Therefore, in the conventional vehicle periphery monitoring device, for a target of interest, particularly a three-dimensional target, the image is distorted at the viewpoint conversion and image synthesis stages, and it is difficult to grasp the sense of distance to the target displayed on the bird's-eye view display screen.

The present invention has been made in view of such circumstances, and an object thereof is to provide a vehicle periphery monitoring device that emphasizes an obstacle on the bird's-eye view video and thereby makes it easy to grasp an obstacle displayed with distortion on the bird's-eye view video.

The invention according to claim 1 is a vehicle periphery monitoring device that displays a bird's-eye view video of the vehicle surroundings captured by a plurality of imaging devices mounted on the vehicle, the device comprising: a plurality of imaging devices mounted at a plurality of locations on the vehicle, each imaging the vehicle surroundings in a different direction from its location; video synthesis means for synthesizing a bird's-eye view video centered on the vehicle from the videos of the vehicle surroundings captured by the imaging devices; a feature point extraction processing unit that extracts, as feature points, image corners, image edges, locations where the image signal level changes, or locations where the hue changes in the bird's-eye view video synthesized by the video synthesis means; a feature point tracking processing unit that tracks changes in the positions of the extracted feature points on the bird's-eye view video from the time-series video of the bird's-eye view video; a three-dimensional measurement processing unit that calculates feature point information, including the height of each feature point in three-dimensional coordinates, based on the changes in position of the feature points tracked on the bird's-eye view video; an obstacle detection processing unit that determines, based on the calculated feature point information, whether the image from which a feature point was extracted is an obstacle; a display unit that displays the bird's-eye view video synthesized by the video synthesis means; an emphasized region drawing processing unit that determines a region containing the image determined to be an obstacle according to the imaging position of that image, and draws the determined region with emphasis as an emphasized region; and an emphasized region addition unit that superimposes the emphasized region on the bird's-eye view video and causes the display unit to display it.

According to the present invention, an obstacle on the bird's-eye view video can be displayed with emphasis in accordance with the characteristic that the obstacle appears distorted on the bird's-eye view video. The distorted state of the obstacle is thus made clear, and a vehicle periphery monitoring device can be provided with which an obstacle displayed with distortion on the bird's-eye view video can be grasped easily.

FIG. 1 is a block diagram showing the configuration of a vehicle periphery monitoring device according to an embodiment of the present invention.
FIG. 2 is an explanatory diagram showing the imaging ranges of the cameras of the vehicle periphery monitoring device of the embodiment and the bird's-eye view video generated by viewpoint-converting and combining the videos of these imaging ranges.
FIG. 3 is an explanatory diagram showing the camera position and the distortion of the bird's-eye view video in the vehicle periphery monitoring device of the embodiment.
FIG. 4 is an explanatory diagram showing the method of determining the emphasized region in the bird's-eye view video of the vehicle periphery monitoring device of the embodiment.
FIG. 5 is an explanatory diagram showing an example of the frame display indicating the emphasized region in the bird's-eye view video of the vehicle periphery monitoring device of the embodiment.
FIG. 6 is an explanatory diagram showing an example of another display form of the frame indicating the emphasized region in the bird's-eye view video of the vehicle periphery monitoring device of the embodiment.
FIG. 7 is an explanatory diagram showing the relationship between the three-dimensional spatial coordinates of an object and its position on the bird's-eye view video, for explaining obstacle detection in the vehicle periphery monitoring device of the embodiment.
FIG. 8 is an explanatory diagram showing an example of the bird's-eye view video at time t and the bird's-eye view video at time t+FΔt, for explaining obstacle detection in the vehicle periphery monitoring device of the embodiment.
FIG. 9 is an explanatory diagram showing, as vectors, the motion of feature points extracted from the bird's-eye view video in the vehicle periphery monitoring device of the embodiment.
FIG. 10 is a flowchart showing the operation of the vehicle periphery monitoring device of the embodiment.
FIG. 11 is an explanatory diagram showing an example of an image on the bird's-eye view video in the vehicle periphery monitoring device of the embodiment.
FIG. 12 is an explanatory diagram showing obstacle area A based on distance and obstacle area B based on estimated collision time in the vehicle periphery monitoring device of the embodiment.
FIG. 13 is a flowchart showing the operations of the three-dimensional measurement processing unit and the obstacle detection processing unit in the vehicle periphery monitoring device of the embodiment.
FIG. 14 is a diagram showing the equations and matrices relating to the constraint matrix.
FIG. 15 is an explanatory diagram of the partial derivatives A and B.

Embodiments of the present invention will be described below. FIG. 1 is a block diagram showing the configuration of a vehicle periphery monitoring device according to an embodiment of the present invention. This vehicle periphery monitoring device includes a time-series video capture unit 2 for the bird's-eye view video 1, a frame memory 3 that temporarily stores the captured bird's-eye view video in units of frames, a feature point extraction processing unit 4, a feature point tracking processing unit 5, a three-dimensional measurement processing unit 6, an obstacle detection processing unit 7, an emphasized region drawing processing unit 8, an emphasized region addition unit 9, and a display unit 10.
The bird's-eye view video 1 is generated by viewpoint-converting the videos captured by the front, rear, right, and left cameras attached to the vehicle and then combining the images.

FIG. 2 is an explanatory diagram showing the imaging ranges captured by the front camera 114, the rear camera 112, the right camera 111, and the left camera 113 attached to the vehicle 11, and the bird's-eye view video, centered on the host vehicle and displayed on a monitor device, that is generated by viewpoint-converting and combining the videos of these imaging ranges.
The front camera 114 is, for example, attached to the center of the front grille at the front of the vehicle 11 and images the area ahead of the host vehicle.
The rear camera 112 is, for example, attached to the rear of the vehicle 11 where the rear window is provided and images the area behind the host vehicle.
The right camera 111 is, for example, attached to the tip of the door mirror support that supports the right door mirror and images the right side of the vehicle 11.
The left camera 113 is, for example, attached to the tip of the door mirror support that supports the left door mirror and images the left side of the vehicle 11.
Reference numeral 12 denotes the imaging range of the front camera 114, 13 the imaging range of the right camera 111, 14 the imaging range of the rear camera 112, and 15 the imaging range of the left camera 113.
Reference numeral 22 denotes a front partial bird's-eye view image obtained by viewpoint conversion, by viewpoint conversion means (not shown), of the video of the imaging range 12 of the front camera 114 and displayed on the monitor device. Reference numeral 23 denotes a right-side partial bird's-eye view image similarly viewpoint-converted from the video of the imaging range 13 of the right camera 111, 24 a rear partial bird's-eye view image similarly obtained from the imaging range 14 of the rear camera 112, and 25 a left-side partial bird's-eye view image similarly obtained from the imaging range 15 of the left camera 113.
The front partial bird's-eye view image 22, the right-side partial bird's-eye view image 23, the rear partial bird's-eye view image 24, and the left-side partial bird's-eye view image 25 are combined by image synthesis means (not shown) into a bird's-eye view video that looks down on the surroundings of the host vehicle 11 from above, with the host vehicle at the center.
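
The synthesis step described above can be pictured with a small sketch. This is not the patent's implementation; it assumes that a planar homography mapping each camera image onto the ground plane is available from an offline calibration, and the function and variable names are illustrative.

```python
# Illustrative sketch only (not the patent's implementation): combining four
# camera frames into one ground-plane (bird's-eye) view. The homographies
# mapping each camera image to the ground plane are assumed to come from an
# offline calibration and are hypothetical placeholders here.
import cv2
import numpy as np

def make_birds_eye(frames, homographies, out_size=(600, 800)):
    """frames/homographies: dicts keyed by 'front', 'rear', 'right', 'left'."""
    w, h = out_size
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    for key in ('front', 'rear', 'right', 'left'):
        warped = cv2.warpPerspective(frames[key], homographies[key], (w, h))
        mask = warped.any(axis=2)       # pixels actually covered by this camera
        canvas[mask] = warped[mask]     # simple overwrite; real systems blend the seams
    return canvas
```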

Returning to FIG. 1, the time-series video capture unit 2 captures, at predetermined time intervals, the bird's-eye view video combined by the image synthesis means described with reference to FIG. 2 as the time-series video 1.
The frame memory 3 temporarily stores the captured bird's-eye view video in units of frames.
The feature point extraction processing unit 4 extracts, from the time-series video 1 of the bird's-eye view video, locations whose position changes are relatively easy to track with image processing techniques, such as image corners or image edges, as feature points.
The feature point tracking processing unit 5 tracks the change in the position of each feature point on the bird's-eye view video as time passes.
The three-dimensional measurement processing unit 6 calculates relative motion information and three-dimensional coordinate information of each feature point from the change in its position on the bird's-eye view video.
The relative motion information and the three-dimensional coordinate information are calculated using the following equations (1) and (2), which model the motion of a target object on the bird's-eye view image.

[Equation (1): shown as an image in the original publication, not reproduced here]

[Equation (2): shown as an image in the original publication, not reproduced here]

In this driving support device, since the motion of a target object in the bird's-eye view image from a monocular camera differs depending on the viewpoint, that is, on the height of the camera, the motion of the target object on the bird's-eye view image according to the camera height is modeled, and the model is estimated against the actual video; the target object is thereby measured three-dimensionally and obstacles are detected.

The obstacle detection processing unit 7 determines obstacles based on the calculated relative motion information and three-dimensional coordinate information. In this determination, obstacles are detected from the feature points falling in an obstacle area defined by the distance to the host vehicle and an obstacle area defined by the estimated time to collision.
The emphasized region drawing processing unit 8 determines a region containing the image determined to be an obstacle by the obstacle detection processing unit 7 according to the imaging position of that image, and performs processing for drawing the determined region with emphasis as an emphasized region.
The emphasized region addition unit 9 superimposes the emphasized region on the bird's-eye view video and displays it on the display unit 10.
The viewpoint conversion means and the image synthesis means, the time-series video capture unit 2, the feature point extraction processing unit 4, the feature point tracking processing unit 5, the three-dimensional measurement processing unit 6, the obstacle detection processing unit 7, the emphasized region drawing processing unit 8, and the emphasized region addition unit 9 are realized by an ECU configured with a microcomputer that controls each unit.
This ECU performs the various kinds of processing needed to realize the functions specific to this vehicle periphery monitoring device, including viewpoint conversion and image synthesis of the videos captured by the front camera 114, the rear camera 112, the right camera 111, and the left camera 113.
The display unit 10 displays the bird's-eye view video combined by the video synthesis means.
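
The way these components fit together can be summarized in a schematic processing loop; the function names below are illustrative placeholders for the units of FIG. 1, not identifiers from the patent.

```python
# A schematic processing loop matching the block diagram of FIG. 1. Function and
# variable names are illustrative assumptions, not identifiers from the patent.
import time

def monitoring_loop(capture, extractor, tracker, measurer, detector,
                    drawer, display, dt=0.1):
    prev_frame = None
    while True:
        frame = capture()                      # time-series capture unit 2 (bird's-eye video 1)
        points = extractor(frame)              # feature point extraction unit 4
        if prev_frame is not None:
            tracks = tracker(prev_frame, frame, points)   # feature point tracking unit 5
            motion, coords = measurer(tracks)             # three-dimensional measurement unit 6
            obstacles = detector(motion, coords)          # obstacle detection unit 7
            overlay = drawer(frame, obstacles)            # emphasized region drawing unit 8
            display(overlay)                   # emphasized region addition unit 9 + display unit 10
        prev_frame = frame
        time.sleep(dt)                         # fixed capture interval Δt
```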

Next, the operation will be described.
In FIG. 3, reference numeral 201 denotes a camera mounted at a mounting height H. FIG. 3(A) shows an example of an image of a three-dimensional object 202 captured by the camera 201. FIG. 3(B) shows the image of FIG. 3(A) after viewpoint conversion into a bird's-eye view image; the three-dimensional object of the image in FIG. 3(A) appears noticeably distorted.
As shown in FIG. 7, the camera 201 is placed at height H on the z-axis. In FIG. 7, reference numeral 300 denotes an object whose three-dimensional spatial coordinates are [X_p, Y_p, Z_p]^T. Reference numeral 301 denotes the image of the object on the bird's-eye view video, whose three-dimensional spatial coordinates are [x_p, y_p, 0]^T. Reference numeral 302 denotes the planar bird's-eye view video containing the captured object.
The operation of the vehicle periphery monitoring device of this embodiment will be described with reference to the flowchart shown in FIG. 10. When the object and the host vehicle are in relative motion with relative velocity [V_X, V_Y]^T and yaw rate ω, the relationship shown in equation (2) above holds.
The vectors m(t+Δt) and n(t+Δt) in equation (2) represent the relative motion, storing the relative velocity and the yaw rate, and the vector s_P can be converted uniquely into the three-dimensional coordinates of the feature point P using equation (1) above.
In this embodiment, the feature point P on the bird's-eye view video is converted into three-dimensional coordinates using the relationships of equations (1) and (2), a three-dimensional object relative to the host vehicle is identified, and obstacles are thereby detected. Then, an emphasized region is determined on the bird's-eye view video based on the position of the camera that captured the image of the detected obstacle, and the determined emphasized region is drawn superimposed on the bird's-eye view video.

The detection of obstacles and the determination and drawing of the emphasized region on the bird's-eye view video for a detected obstacle will now be described with reference to the flowchart shown in FIG. 10.
The bird's-eye view video obtained by viewpoint conversion and image synthesis as shown in FIG. 2 is captured from the time-series video capture unit 2 at fixed time intervals Δt (step S1). The captured bird's-eye view video is temporarily stored in the frame memory 3.
The feature point extraction processing unit 4 then extracts feature points P from the bird's-eye view video (step S2). As feature points P, the feature point extraction processing unit 4 extracts locations whose position changes are relatively easy to track with image processing techniques, such as image corners or image edges.

FIG. 11 is an explanatory diagram showing an example of an image on the bird's-eye view video.
For example, assume that the image on the bird's-eye view video captured by the cameras lies on coordinates in which, as shown in FIG. 11, the upper left of the image data is the origin, the horizontal axis is the x-axis, and the vertical axis is the y-axis.
The feature point extraction processing unit 4 obtains the partial derivatives A and B of the image I in the x-axis and y-axis directions, respectively, as shown in FIG. 15. It then obtains the spatial arrangement matrix at every pixel of the image I.
The feature point extraction processing unit 4 then calculates the eigenvalues of the spatial arrangement matrix and selects, as feature points, the locations whose eigenvalues reach a predetermined value and are therefore recognized as having a feature. The feature point tracking processing unit 5 then tracks the temporal change of the feature points in the image on the bird's-eye view video obtained as described above.
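
A minimal sketch of this kind of corner extraction (image gradients, a per-pixel spatial arrangement matrix, an eigenvalue threshold) is shown below; the window size and threshold value are assumed, since the patent only refers to a predetermined value.

```python
# Minimal sketch of the feature extraction described above: image gradients,
# a per-pixel 2x2 spatial arrangement (structure) matrix, and an eigenvalue
# threshold. Window size and threshold are assumed values, not from the patent.
import cv2
import numpy as np

def extract_feature_points(gray, win=5, thresh=1e4):
    """gray: single-channel bird's-eye image. Returns Nx2 (x, y) feature points."""
    A = cv2.Sobel(gray, cv2.CV_32F, 1, 0)      # partial derivative in x (A)
    B = cv2.Sobel(gray, cv2.CV_32F, 0, 1)      # partial derivative in y (B)
    # Elements of the structure matrix, averaged over a local window
    Axx = cv2.boxFilter(A * A, -1, (win, win))
    Axy = cv2.boxFilter(A * B, -1, (win, win))
    Ayy = cv2.boxFilter(B * B, -1, (win, win))
    # Smaller eigenvalue of [[Axx, Axy], [Axy, Ayy]] at each pixel (Shi-Tomasi style)
    tmp = np.sqrt(((Axx - Ayy) / 2.0) ** 2 + Axy ** 2)
    lam_min = (Axx + Ayy) / 2.0 - tmp
    ys, xs = np.where(lam_min > thresh)        # keep locations with a strong corner response
    return np.stack([xs, ys], axis=1).astype(np.float32)
```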

FIG. 8 is an explanatory diagram showing an example of the bird's-eye view video at time t and the bird's-eye view video at time t+FΔt, which form the time-series video 1 captured from the time-series video capture unit 2 of the vehicle periphery monitoring device of this embodiment.
As shown in FIGS. 8(A) and 8(B), the feature point tracking processing unit 5 tracks how the feature points P extracted by the feature point extraction processing unit 4 move.
The tracking processing of the feature point tracking processing unit 5 is performed as follows.
That is, from the positional relationship between a feature point P on the bird's-eye view video at time t shown in FIG. 8(A) and the same feature point P on the bird's-eye view video at time t+FΔt shown in FIG. 8(B), it tracks how the feature point P has moved and to what position (step S3).
In this tracking of the feature point P, the optical flow of the feature point P is calculated to obtain its motion vector.
FIG. 9 is an explanatory diagram showing the motion vectors of the feature points extracted from the bird's-eye view video of FIG. 8 in the vehicle periphery monitoring device of this embodiment.
The optical flow of a feature point P is calculated as follows.
The change in the coordinates of each feature point P common to the bird's-eye view images captured at the predetermined period and time interval is detected, and it is determined whether the feature point P has moved; if its coordinates have changed, the direction and magnitude of the movement, that is, of the coordinate change of the feature point P, are calculated.
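
As a concrete illustration, sparse optical flow between two bird's-eye frames can be computed with pyramidal Lucas-Kanade tracking; the patent does not name a specific optical-flow algorithm, so this is just one common choice.

```python
# Sketch of sparse optical-flow tracking between two bird's-eye frames using
# pyramidal Lucas-Kanade. The patent does not name a specific optical-flow
# algorithm; this is one common choice, with assumed window and pyramid settings.
import cv2
import numpy as np

def track_feature_points(prev_gray, next_gray, prev_pts):
    prev_pts = np.asarray(prev_pts, dtype=np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      prev_pts, None,
                                                      winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    p0, p1 = prev_pts[ok].reshape(-1, 2), next_pts[ok].reshape(-1, 2)
    vectors = p1 - p0                          # motion vector of each feature point P
    return p0, p1, vectors
```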

Next, the three-dimensional measurement processing unit 6 calculates the relative motion information between the host vehicle and each feature point P and the three-dimensional coordinate information of the feature point P from the change in the position of the feature point P on the bird's-eye view video, using equations (1) and (2) (step S4).

Further, the obstacle detection processing unit 7 detects an obstacle from the relative motion information and the three-dimensional coordinate information calculated by the three-dimensional measurement processing unit 6 (step S5). For the detected obstacle, the obstacle detection processing unit 7 determines the obstacle region on the bird's-eye view video. For the determined obstacle region, the emphasized region drawing processing unit 8 determines the emphasized region based on the position of the camera that captured the image of the obstacle region and draws it with emphasis. The emphasized region addition unit 9 then draws the emphasized region over the bird's-eye view video, and the display unit 10 displays the result (step S6).
FIG. 4 is an explanatory diagram showing the method of determining the emphasized region in the bird's-eye view video of the vehicle periphery monitoring device of this embodiment.
In FIG. 4, reference numeral 311 denotes the camera position, and 312 denotes the distance, or depth, between the camera and the obstacle. Reference numeral 313 denotes the obstacle region, and 314 denotes the emphasized region containing the obstacle. The emphasized region 314 is displayed surrounded by a frame 315. The emphasized region 314 is determined as follows.
First, the obstacle detection processing unit 7 determines the obstacle region 313 on the bird's-eye view video.
Next, two circumscribing tangent lines 501 and 502 to the obstacle region 313 are determined on the bird's-eye view video, with the camera position 311 as their end point.
Further, the depth 312 of the obstacle with respect to the camera position 311 on the bird's-eye view video, that is, the minimum distance between the camera position 311 and the obstacle in the obstacle region 313, is determined.
Next, the frame 315 surrounding the emphasized region 314 is determined from the two circumscribing tangent lines 501 and 502 and the minimum distance between the camera position 311 and the obstacle in the obstacle region 313. At this time, as shown in FIG. 5, the frame 315 is displayed in a color that differs according to the distance between the camera position 311 and the obstacle in the obstacle region 313. For example, if the depth is large (far from the vehicle), it is displayed in orange, and if the depth is small (close to the vehicle), it is displayed in red.
FIG. 5 is an explanatory diagram showing an example of the frame display indicating the emphasized region 314 in the bird's-eye view video of the vehicle periphery monitoring device of this embodiment.
Instead of determining and displaying the frame 315 so as to surround the whole emphasized region 314 as shown in FIG. 5, only the camera-side portion of the frame of the emphasized region 314 may be displayed, as shown in FIG. 6. FIG. 6 is an explanatory diagram showing an example of another display form of the frame 315 indicating the emphasized region 314.
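
A sketch of this construction is given below: the two circumscribing lines are taken as the extreme bearing angles from the camera position to the obstacle region, the depth is the minimum distance, and the frame color is chosen from the depth. The distance threshold, the colors, and the choice of the outer edge of the frame are assumptions for illustration.

```python
# Sketch of the emphasized-region construction described above. The near/far
# threshold, the colours, and the outer edge of the frame (taken at the farthest
# obstacle point) are assumptions; angle wrap-around across ±π is ignored.
import numpy as np

def emphasized_region(camera_pos, obstacle_pts, near_depth=100.0):
    """camera_pos: (x, y) in bird's-eye pixels; obstacle_pts: Nx2 points of region 313."""
    rel = np.asarray(obstacle_pts, dtype=float) - np.asarray(camera_pos, dtype=float)
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    dists = np.hypot(rel[:, 0], rel[:, 1])
    left, right = angles.min(), angles.max()   # bearings of circumscribing lines 501, 502
    depth = dists.min()                        # minimum camera-to-obstacle distance 312
    far = dists.max()                          # outer edge of the frame (an assumption)
    color = (0, 0, 255) if depth < near_depth else (0, 165, 255)   # red near, orange far (BGR)
    # Corners of the frame 315, expressed around the camera position
    corners = [(depth, left), (depth, right), (far, right), (far, left)]
    frame = [(camera_pos[0] + r * np.cos(a), camera_pos[1] + r * np.sin(a)) for r, a in corners]
    return frame, depth, color
```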

Next, the derivation of equation (2), the motion model of a three-dimensional object on the bird's-eye view video, and the derivation of equation (3) from equation (2) will be described; the motion of the target object on the bird's-eye view image according to the camera height is modeled, and the model is estimated against the actual video so that the target object is measured three-dimensionally and obstacles are detected.

[Equations (2) and (3): shown as images in the original publication, not reproduced here]

As shown in FIG. 3, let the position vector X_p of a feature point p in three-dimensional real space be [X_p, Y_p, Z_p]^T, and let the camera be at position [0, 0, H] at time t. Here, H is the height of the camera above the road surface.
When the position vector x_p of the feature point p on the bird's-eye view video is [x_p, y_p, 0]^T, equation (1) holds between the vector X_p = [X_p, Y_p, Z_p]^T and the vector x_p = [x_p, y_p, 0]^T.
Here, putting k_p = H/(H − Z_p), equation (1) can be rewritten as equation (4) below.

[Equation (4): shown as an image in the original publication, not reproduced here]

On the other hand, a true bird's-eye view corresponds to the camera being at infinite height (H = ∞), so k_p = 1, and equation (4) becomes equation (5) below.

[Equation (5): shown as an image in the original publication, not reproduced here]
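
For readability, the relation described above can be written out as follows; this is reconstructed from the surrounding definitions rather than copied from the original equation images, and the numerical values in the example are arbitrary.

```latex
% Relation between the bird's-eye position and the true ground-plane position,
% reconstructed from the surrounding text (the original equation images are not reproduced):
\[
  \mathbf{x}_p = \begin{bmatrix} x_p \\ y_p \\ 0 \end{bmatrix}
              = k_p \begin{bmatrix} X_p \\ Y_p \\ 0 \end{bmatrix},
  \qquad k_p = \frac{H}{H - Z_p}.
\]
% Example: with camera height H = 2 m and a feature point at height Z_p = 1 m,
% k_p = 2, so the point appears twice as far from the camera footprint on the
% bird's-eye image as its true ground position, which is the distortion of tall objects.
```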

That is, the feature point position vector x_p = [x_p, y_p, 0]^T appearing in the bird's-eye view video is the true bird's-eye position [X_p, Y_p, 0]^T multiplied by the coefficient k_p, which depends on the camera height and the height of the feature point.
Now, suppose that at time t the object p is at the position of vector X_p = [X_p, Y_p, Z_p]^T in three-dimensional real space relative to the camera and is moving relative to it with yaw rate ω and relative velocities V_X and V_Y. It is assumed that there is no bouncing, rolling, or pitching motion.
Then the position vector X_p(t+Δt) = [X_p(t+Δt), Y_p(t+Δt), Z_p(t+Δt)]^T of the object p in three-dimensional real space at time t+Δt is given by equation (6) below.

[Equation (6): shown as an image in the original publication, not reproduced here]

From equation (6), if there is no bouncing, rolling, or pitching motion, then Z_p(t) = Z_p(t+Δt), and from k_p = H/(H − Z_p) the coefficient k_p is constant over time. Equation (6) can therefore be rewritten as equation (7) below.

[Equation (7): shown as an image in the original publication, not reproduced here]

The position vectors of the object p on the bird's-eye view video at times t and t+Δt, x_p(t) = [x_p(t), y_p(t), 0]^T and x_p(t+Δt) = [x_p(t+Δt), y_p(t+Δt), 0]^T, are then given by equation (8) below from equation (5), since the coefficient k_p is constant over time.

[Equation (8): shown as an image in the original publication, not reproduced here]

Substituting the position vector x_p(t+Δt) = [x_p(t+Δt), y_p(t+Δt), 0]^T of the object p on the bird's-eye view video at time t+Δt into equations (7) and (8) gives equation (9).

[Equation (9): shown as an image in the original publication, not reproduced here]

As a result, the following relational expression (10) holds between the position vectors x_p(t) = [x_p(t), y_p(t), 0]^T and x_p(t+Δt) = [x_p(t+Δt), y_p(t+Δt), 0]^T of the object p on the bird's-eye view video at times t and t+Δt.

[Equation (10): shown as an image in the original publication, not reproduced here]
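
The practical content of this relation can be illustrated numerically. The sketch below assumes the planar rigid-motion form implied by the derivation (rotation by ωΔt plus a translation that appears scaled by k_p in the bird's-eye image); it is an illustration of the height-dependent flow, not the patent's exact formula.

```python
# Numerical illustration (an assumption-based sketch, not the patent's exact
# formula): under the planar relative motion described above, the bird's-eye
# position of a feature point is rotated by ω·Δt and translated by k_p·V·Δt,
# so points above the road (k_p > 1) show a larger apparent translation than
# road-surface points (k_p = 1) under the same vehicle motion.
import numpy as np

def birds_eye_update(x_p, k_p, omega, v, dt):
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    R = np.array([[c, -s], [s, c]])
    return R @ np.asarray(x_p) + k_p * np.asarray(v) * dt

v, dt = (0.0, 1.0), 0.5                       # relative velocity [V_X, V_Y] and interval Δt
ground = birds_eye_update([1.0, 2.0], 1.0, 0.0, v, dt)   # Z_p = 0   -> k_p = 1
raised = birds_eye_update([1.0, 2.0], 2.0, 0.0, v, dt)   # Z_p = H/2 -> k_p = 2
print(ground, raised)   # the raised point moves twice as far; this height-dependent
                        # flow is what the three-dimensional measurement exploits
```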

Now, suppose that the relative motion of the object p with respect to the camera is in fact caused by the camera moving while the object p remains stationary, that is, by the motion of the host vehicle. Suppose also that, including the object p, there are P feature points in total on stationary objects appearing in the bird's-eye view video.
In this case, since the relative motion with respect to stationary objects is common to all of them, the position vectors of the feature points on the bird's-eye view video at times t and t+Δt, x_p(t) = [x_p(t), y_p(t), 0]^T and x_p(t+Δt) = [x_p(t+Δt), y_p(t+Δt), 0]^T, 1 < p < P, are given by equation (11) below from equation (10).

[Equation (11): shown as an image in the original publication, not reproduced here]

Equation (11) can be rewritten as equation (12) below.

[Equation (12): shown as an image in the original publication, not reproduced here]

Here, equations (13) and (14) below hold; the vectors m(t+Δt), n(t+Δt) and c_x(t+Δt), c_y(t+Δt) represent the relative rotational and translational motion between the camera and the object from time t to time t+Δt, and from equation (4) the vector s_p(t) can essentially be regarded as the position of the feature point p in three-dimensional real space at time t.

[Equation (13): shown as an image in the original publication, not reproduced here]

[Equation (14): shown as an image in the original publication, not reproduced here]

If the relative motion information m(t+2Δt), n(t+2Δt), c_x(t+2Δt), c_y(t+2Δt) between the camera and the object from time t to time t+2Δt is obtained, then the position vectors of the feature points on the bird's-eye view video at time t+2Δt, x_p(t+2Δt) = [x_p(t+2Δt), y_p(t+2Δt), 0]^T, 1 < p < P, are given by equation (15) below from equation (12).

[Equation (15): shown as an image in the original publication, not reproduced here]

Combining equations (12) and (15) gives equation (16) below.

[Equation (16): shown as an image in the original publication, not reproduced here]

Now, suppose that the positions of P feature points in total are observed on the bird's-eye view video over the interval from time t to t+FΔt at time interval Δt. This observation is performed by the feature point extraction and tracking. Extending equation (16) then gives the following relational expression (17).

[Equation (17): shown as an image in the original publication, not reproduced here]

Equation (17) is the mathematical model of the motion of a three-dimensional object on the bird's-eye view video.
Equation (17) can be rewritten as a matrix product as shown in equation (18) below.

[Equation (18): shown as an image in the original publication, not reproduced here]

Here, the matrices W, M, and S are given by equation (19) below; the matrix W contains the positions of the feature points on the bird's-eye view video from time t to t+FΔt obtained by the feature point extraction and tracking, the matrix M represents the relative motion from time t to t+FΔt, and the matrix S contains the position of each feature point in three-dimensional real space at time t.

[Equation (19): shown as an image in the original publication, not reproduced here]

Using the relation of equation (18), the relative motion and the positions in three-dimensional real space are obtained from the changes in the positions of the feature points on the bird's-eye view video.
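
In summary, and using the matrix dimensions stated in the following paragraphs (the detailed entries appear only in the equation (19) image of the original publication):

```latex
% Factorization form of the model, with the dimensions given in the text.
% The detailed entries of W, M, S appear in equation (19) of the original
% publication (an image) and are not reproduced here.
\[
  \underbrace{W}_{2F \times P} \;=\; \underbrace{M}_{2F \times 3}\,\underbrace{S}_{3 \times P},
\]
% where W stacks the observed bird's-eye positions of the P feature points over
% the F frames, M holds the per-frame relative motion (rotation and translation),
% and S holds, per feature point, (x_p(t), y_p(t), k_p), i.e. effectively its
% three-dimensional position at time t.
```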

Next, an outline of the model estimation method will be described.
To split the matrix W, which represents the changes in the positions of the feature points on the bird's-eye view video, into the matrix M representing the relative motion and the matrix S representing the three-dimensional structure, based on W = MS in equation (18), the following procedure is used.
From equation (19), the matrix W is a matrix of size 2F × P, the matrix M is a matrix of size 2F × 3 built from the vectors m and n, and the matrix S is a matrix of size 3 × P because the vectors s are three-dimensional. The matrix W is therefore a matrix of rank 3. Singular value decomposition is first applied to the matrix W as follows.

[Equation (20): shown as an image in the original publication, not reproduced here]

Here, U is a 2F × H orthogonal matrix, Σ is an H × H diagonal matrix, V^T is an H × P orthogonal matrix, and H is given by H = min{2F, P}. The diagonal elements of Σ are assumed to be arranged in descending order, σ_1 > σ_2 > … > σ_p.
Since the matrix W has rank 3, σ_4 and the following singular values can be regarded as noise components, so three columns of the matrix U are extracted as shown in equation (21).

[Equation (21): shown as an image in the original publication, not reproduced here]

From this, a provisional decomposition result can be obtained as shown in equation (22).

[Equation (22): shown as an image in the original publication, not reproduced here]
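
A minimal numerical sketch of this rank-3 factorization step (the provisional decomposition of equation (22), before the constraint matrix A of the next paragraph is applied) follows; splitting the singular values evenly between the two factors is a common convention and an assumption here.

```python
# Minimal sketch of the rank-3 factorization W ≈ M̂·Ŝ by singular value
# decomposition, i.e. the provisional decomposition of equation (22). Splitting
# the singular values evenly between the two factors is an assumed convention.
import numpy as np

def provisional_factorization(W):
    U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], sigma[:3], Vt[:3, :]   # keep rank-3 part; σ_4... treated as noise
    M_hat = U3 * np.sqrt(s3)                       # 2F x 3 provisional motion matrix
    S_hat = np.sqrt(s3)[:, None] * Vt3             # 3 x P provisional structure matrix
    return M_hat, S_hat

W = np.random.rand(10, 8)        # e.g. F = 5 frames, P = 8 feature points (toy data)
M_hat, S_hat = provisional_factorization(W)
print(np.linalg.norm(W - M_hat @ S_hat))           # rank-3 approximation error
```

The constraint matrix A of equations (23) to (27) would then be applied to this provisional pair to obtain the final motion and structure matrices.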

Next, the constraint matrix is calculated. The decomposition result of equation (22) satisfies (Expression 1) in FIG. 14, but the obtained matrices of (Matrix 1) in FIG. 14 do not, in general, satisfy (Expression 2) in FIG. 14. For example, even if (Matrix 1) in FIG. 14 is transformed using an appropriate matrix A of size 3 × 3 as shown in equation (23) below, W = MS is still satisfied. The matrix A therefore needs to be chosen appropriately.

[Equation (23): shown as an image in the original publication, not reproduced here]

To choose the matrix A appropriately, the following constraint conditions are introduced.
(a) Constraint condition on the rotation component
From equation (19), the matrix M is composed as shown in equation (24) below.

[Equation (24): shown as an image in the original publication, not reproduced here]

Here, since the vectors m and n represent rotation components, as illustrated by equation (13), the following constraint condition exists.

[Equation (25): shown as an image in the original publication, not reproduced here]

(b) Constraint condition from the road surface
From equation (19), the matrix S is composed as follows.

[Equation (26): shown as an image in the original publication, not reproduced here]

From equation (14), the vector s_p(t) is s_p(t) = [x_p(t), y_p(t), k_p]^T, and k_p is given by k_p = H/(H − Z_p). Here, under the condition Z_p < H, k_p increases as Z_p increases; that is, as long as the object is below the camera, the higher the object, the larger the value of k_p.
k_p takes its minimum value when the object is on the road surface, that is, when Z_p = 0, and then k_p = 1. Since the bird's-eye view video always shows the road surface, feature points satisfying k_p = 1 are always present on the bird's-eye view video. That is, the following constraint condition exists for k_p.

[Equation (27): shown as an image in the original publication, not reproduced here]

The matrix A should thus be determined so as to satisfy equations (25) and (27).

Next, the three-dimensional measurement processing of the three-dimensional measurement processing unit 6 and the obstacle detection processing of the obstacle detection processing unit 7 will be described further.
FIG. 12 is an explanatory diagram showing obstacle area A, defined by the distance to the host vehicle 11, and obstacle area B, defined by the estimated time to collision, which are detected from the three-dimensional information and relative motion information of three-dimensional objects obtained from the bird's-eye view video.
FIG. 13 is a flowchart showing the operations of the three-dimensional measurement processing unit 6 and the obstacle detection processing unit 7 in the vehicle periphery monitoring device of this embodiment.
In the three-dimensional measurement processing of the three-dimensional measurement processing unit 6, the three-dimensional coordinate information of the feature points on the bird's-eye view video and the relative motion information between the feature points and the host vehicle 11 are calculated from equations (1) and (2) as described above (step S41).
The three-dimensional coordinate information of the feature points is obtained as the three-dimensional position matrix S (step S42), and the relative motion information between the feature points and the host vehicle 11 is obtained as the relative motion matrix M (step S43).
Threshold processing based on the height in the z-axis direction is then applied to the three-dimensional coordinate information of the feature points obtained as the three-dimensional position matrix S (step S44).
In this threshold processing, feature points whose height information in the z-axis direction is not zero but has some value are classified as obstacle candidate feature points based on a predetermined height reference value (step S45), and feature points whose height information in the z-axis direction is zero are classified as non-obstacle feature points (step S46).
For the obstacle candidate feature points, the distance to the host vehicle 11 is known from their three-dimensional coordinate information, so threshold processing is applied to the distance between the host vehicle 11 and each obstacle candidate feature point based on a predetermined distance reference value (step S47). The feature points are thereby classified into far-distance obstacle feature points and near-distance obstacle feature points (steps S48 and S49).
The feature points classified as near-distance obstacle feature points are, for example, the feature points contained in obstacle area A shown in FIG. 12.
For the feature points classified as far-distance obstacle feature points, threshold processing is applied to the estimated time to collision, based on a predetermined estimated-collision-time reference value and the relative motion matrix M obtained in step S43 (step S51). The feature points are thereby classified into low-risk obstacle feature points and early-arrival obstacle feature points (steps S52 and S53).
The early-arrival obstacle feature points are, for example, the feature points contained in obstacle area B shown in FIG. 12.
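
The classification flow of FIG. 13 can be sketched as follows; all threshold values and the time-to-collision computation are illustrative assumptions, since the patent only states that predetermined reference values are used.

```python
# Sketch of the classification flow of FIG. 13 (steps S44 to S53). All threshold
# values and the time-to-collision formula are illustrative assumptions.
import numpy as np

def classify_feature_points(coords, rel_vel, h_ref=0.05, d_ref=3.0, ttc_ref=2.0):
    """coords: Nx3 feature-point positions [X, Y, Z] relative to the vehicle;
    rel_vel: Nx2 relative velocities [V_X, V_Y] taken from the relative motion matrix M."""
    labels = []
    for (x, y, z), (vx, vy) in zip(coords, rel_vel):
        if z <= h_ref:                                   # S44-S46: height threshold
            labels.append('non_obstacle')
            continue
        dist = np.hypot(x, y)
        if dist <= d_ref:                                # S47-S49: distance threshold
            labels.append('near_obstacle')               # obstacle area A
            continue
        closing_speed = max(-(x * vx + y * vy) / dist, 1e-6)   # speed toward the vehicle
        ttc = dist / closing_speed                       # S51: estimated time to collision
        labels.append('early_arrival' if ttc <= ttc_ref else 'low_risk')   # area B / low risk
    return labels
```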

As described above, the vehicle periphery monitoring device of this embodiment detects obstacles based on the identified low-risk obstacle feature points and early-arrival obstacle feature points. The emphasized region drawing processing unit 8 can then display the emphasized region on the bird's-eye view video with the display color of the frame surrounding the emphasized region containing the obstacle varied according to whether the obstacle is a low-risk obstacle or an early-arrival obstacle.

According to this embodiment, the obstacle region on the bird's-eye view video can be displayed with emphasis in accordance with the characteristic that obstacles appear distorted on the bird's-eye view video, so the distorted state of an obstacle is made clear, and a vehicle periphery monitoring device can be provided with which obstacles displayed with distortion on the bird's-eye view video can be grasped easily.
In addition, since an obstacle displayed with distortion on the bird's-eye view video is clarified and emphasized by surrounding it with a frame, a vehicle periphery monitoring device can be provided with which the location and state of the distorted obstacle on the bird's-eye view video can be grasped easily.
Furthermore, the display color of the frame 315 surrounding the emphasized region 314 can be varied according to the distance between the camera position 311 and the obstacle, and according to whether the obstacle is a low-risk obstacle or an early-arrival obstacle. Therefore, not only is an obstacle displayed with distortion on the bird's-eye view video clarified and emphasized, but whether the obstacle is near to or far from the host vehicle can be grasped intuitively on the bird's-eye view video. A vehicle periphery monitoring device can also be provided with which the positional relationship between the obstacle and the host vehicle, taking their relative motion into account, can be grasped intuitively on the bird's-eye view video.

4... feature point extraction processing unit, 5... feature point tracking processing unit, 6... three-dimensional measurement processing unit, 7... obstacle detection processing unit, 8... emphasized region drawing processing unit, 9... emphasized region addition unit, 10... display unit, 11... host vehicle, 114... front camera (imaging device), 112... rear camera (imaging device), 111... right camera (imaging device), 113... left camera (imaging device).

Claims (8)

1. A vehicle periphery monitoring device comprising:
a plurality of imaging devices mounted at a plurality of locations on a vehicle, each imaging the vehicle surroundings in a different direction from its location;
video synthesis means for synthesizing a bird's-eye view video centered on the vehicle from the videos of the vehicle surroundings captured by the imaging devices;
a feature point extraction processing unit that extracts, as feature points, locations in the bird's-eye view video synthesized by the video synthesis means whose position changes are relatively easy to track with image processing techniques, such as image corners or image edges;
a feature point tracking processing unit that tracks changes in the positions, on the bird's-eye view video, of the feature points extracted by the feature point extraction processing unit from the time-series video of the bird's-eye view video;
a three-dimensional measurement processing unit that calculates feature point information, including the height of each feature point in three-dimensional coordinates, based on the changes in the position on the bird's-eye view video of the feature points tracked by the feature point tracking processing unit;
an obstacle detection processing unit that determines, based on the feature point information calculated by the three-dimensional measurement processing unit, whether the image from which a feature point was extracted is an obstacle;
a display unit that displays the bird's-eye view video synthesized by the video synthesis means;
an emphasized region drawing processing unit that determines a region containing the image determined to be an obstacle by the obstacle detection processing unit according to the imaging position of the image determined to be the obstacle, and draws the determined region with emphasis as an emphasized region; and
an emphasized region addition unit that superimposes the emphasized region on the bird's-eye view video and causes the display unit to display it.
2. The vehicle periphery monitoring device according to claim 1, wherein the emphasized region is determined based on circumscribing lines drawn from the camera position that captures the image determined to be the obstacle by the obstacle detection processing unit to the image region determined to be the obstacle, and on a depth which is the distance on the bird's-eye view video between the image region and the camera position.
3. The vehicle periphery monitoring device according to claim 2, wherein the emphasized region determined by the emphasized region drawing processing unit based on the circumscribing lines and the depth is emphasized by defining at least a part of, or the whole of, its periphery with a frame based on the circumscribing lines and the depth.
4. The vehicle periphery monitoring device according to claim 3, wherein the frame defining the emphasized region has a display color that differs according to the depth.
5. The driving support device according to claim 1, wherein the three-dimensional measurement processing unit models, based on the feature points, the movement of a target object on the bird's-eye view image according to the height of the imaging device, and calculates feature point information including the height of each feature point in three-dimensional coordinates by estimating the model against the actual video.
6. The vehicle periphery monitoring device according to claim 1, wherein the determination by the obstacle detection processing unit of whether an image is an obstacle identifies obstacle candidate feature points, which become obstacle candidates, based on a predetermined height reference value from the information on the height of the feature points in three-dimensional coordinates, and determines, based on the identified obstacle candidate feature points, whether the image from which the feature points were extracted is an obstacle.
7. The vehicle periphery monitoring device according to claim 1, wherein the determination by the obstacle detection processing unit of whether an image is an obstacle identifies obstacle candidate feature points based on a predetermined height reference value from the information on the height of the feature points in three-dimensional coordinates, determines, based on a predetermined distance reference value, whether the distance between each identified obstacle candidate feature point and the vehicle falls within a near-range obstacle range or a far-range obstacle range with respect to the vehicle, classifies the obstacle candidate feature points into far-range obstacle feature points and near-range obstacle feature points from the determination result, and determines, based on the classification result, whether the image from which the near-range obstacle feature points were extracted is an obstacle at a short distance from the vehicle.
8. The vehicle periphery monitoring device according to claim 1, wherein the feature point information calculated by the three-dimensional measurement processing unit includes relative motion information, on the bird's-eye view video, between the vehicle and the feature points, and the determination by the obstacle detection processing unit of whether an image is an obstacle identifies obstacle candidate feature points based on a predetermined height reference value from the information on the height of the feature points in three-dimensional coordinates, determines, based on a predetermined distance reference value, whether the distance between each identified obstacle candidate feature point and the vehicle falls within a near-range obstacle range or a far-range obstacle range with respect to the vehicle, classifies the obstacle candidate feature points into far-range obstacle feature points and near-range obstacle feature points from the determination result, judges, for the identified far-range obstacle feature points, the predicted collision time using the relative motion information calculated by the three-dimensional measurement processing unit and a predetermined predicted-collision-time reference value, classifies them from the judgment result into low-risk obstacle feature points and early-arrival obstacle feature points, and determines, based on the classification result, whether the image from which the early-arrival obstacle feature points were extracted is an obstacle that reaches the vehicle early.
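Claims 6 to 8 above describe a staged decision: a height reference value filters obstacle candidates, a distance reference value separates near-range from far-range candidates, and a predicted-collision-time reference value splits the far-range candidates into low-risk and early-arrival obstacles. The sketch below illustrates that chain; the concrete threshold values, the `FeaturePoint` fields, and the label strings are assumptions for illustration only.

```python
from dataclasses import dataclass

# Illustrative reference values; the claims only require that such values exist.
HEIGHT_REF_M = 0.15    # minimum height above the road plane to count as an obstacle candidate
DISTANCE_REF_M = 3.0   # boundary between the near-range and far-range obstacle ranges
TTC_REF_S = 3.0        # boundary between early-arrival and low-risk obstacles

@dataclass
class FeaturePoint:
    height_m: float           # estimated height of the point in three-dimensional coordinates
    distance_m: float         # bird's-eye-view distance between the point and the own vehicle
    closing_speed_mps: float  # relative speed toward the vehicle, from the relative motion information

def classify(fp: FeaturePoint) -> str:
    if fp.height_m < HEIGHT_REF_M:
        return "not_an_obstacle"        # too low: road texture, lane markings, and the like
    if fp.distance_m <= DISTANCE_REF_M:
        return "near_obstacle"          # claim 7: obstacle at a short distance from the vehicle
    if fp.closing_speed_mps > 0:
        ttc_s = fp.distance_m / fp.closing_speed_mps   # predicted collision time
        if ttc_s <= TTC_REF_S:
            return "early_arrival_obstacle"            # claim 8: reaches the vehicle early
    return "low_risk_obstacle"
```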
JP2009223423A 2009-09-28 2009-09-28 Vehicle periphery monitoring device Active JP5240149B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009223423A JP5240149B2 (en) 2009-09-28 2009-09-28 Vehicle periphery monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009223423A JP5240149B2 (en) 2009-09-28 2009-09-28 Vehicle periphery monitoring device

Publications (2)

Publication Number Publication Date
JP2011070593A true JP2011070593A (en) 2011-04-07
JP5240149B2 JP5240149B2 (en) 2013-07-17

Family

ID=44015790

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009223423A Active JP5240149B2 (en) 2009-09-28 2009-09-28 Vehicle periphery monitoring device

Country Status (1)

Country Link
JP (1) JP5240149B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170133743A (en) 2016-05-26 2017-12-06 현대자동차주식회사 Vehicle control system based on user input and method thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004104478A (en) * 2002-09-10 2004-04-02 Auto Network Gijutsu Kenkyusho:Kk Device and method for assisting parking
JP2007027948A (en) * 2005-07-13 2007-02-01 Nissan Motor Co Ltd Apparatus and method for monitoring vehicle periphery
JP2007267343A (en) * 2006-03-01 2007-10-11 Nissan Motor Co Ltd Providing apparatus for peripheral image of vehicle, and method therefor
JP2008048094A (en) * 2006-08-14 2008-02-28 Nissan Motor Co Ltd Video display device for vehicle, and display method of video images in vicinity of the vehicle
JP2008219063A (en) * 2007-02-28 2008-09-18 Sanyo Electric Co Ltd Apparatus and method for monitoring vehicle's surrounding
JP2009093332A (en) * 2007-10-05 2009-04-30 Nissan Motor Co Ltd Vehicle peripheral image processor and vehicle peripheral circumstance presentation method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013239015A (en) * 2012-05-15 2013-11-28 Sharp Corp Parking support device, and parking support method and program
US9025819B2 (en) 2012-10-31 2015-05-05 Hyundai Motor Company Apparatus and method for tracking the position of a peripheral vehicle
US9933618B2 (en) 2014-10-07 2018-04-03 Fanuc Corporation Cleaning apparatus and system including cleaning apparatus
US10703375B2 (en) 2015-05-30 2020-07-07 Leia Inc. Vehicle monitoring system
WO2016195647A1 (en) 2015-05-30 2016-12-08 Leia Inc. Vehicle monitoring system
EP3303088A4 (en) * 2015-05-30 2019-01-23 LEIA Inc. Vehicle monitoring system
US11203346B2 (en) 2015-05-30 2021-12-21 Leia Inc. Vehicle monitoring system
US11750768B2 (en) * 2016-04-26 2023-09-05 Denso Corporation Display control apparatus
US20210306590A1 (en) * 2016-04-26 2021-09-30 Denso Corporation Display control apparatus
CN106899834A (en) * 2017-03-11 2017-06-27 王昶皓 A kind of tracing type astronomy bird appreciation system
CN106878684B (en) * 2017-03-11 2019-12-03 邱昰霖 A kind of astronomy bird appreciation platform
CN106899834B (en) * 2017-03-11 2019-03-29 王昶皓 A kind of tracing type astronomy bird appreciation system
CN106878684A (en) * 2017-03-11 2017-06-20 邱昰霖 A kind of astronomical bird appreciation platform
JP2019018743A (en) * 2017-07-19 2019-02-07 日産自動車株式会社 Locus estimation method and locus estimation device
JP7005978B2 (en) 2017-07-19 2022-02-10 日産自動車株式会社 Trajectory estimation method and trajectory estimation device
CN113844363A (en) * 2020-06-26 2021-12-28 丰田自动车株式会社 Vehicle periphery monitoring device, monitoring method, and non-transitory storage medium
CN113844363B (en) * 2020-06-26 2023-08-08 丰田自动车株式会社 Vehicle periphery monitoring device, monitoring method, and non-transitory storage medium

Also Published As

Publication number Publication date
JP5240149B2 (en) 2013-07-17

Similar Documents

Publication Publication Date Title
JP5240149B2 (en) Vehicle periphery monitoring device
JP5187292B2 (en) Vehicle periphery monitoring device
JP4899424B2 (en) Object detection device
JP5603835B2 (en) Vehicle perimeter monitoring device
JP5418661B2 (en) Vehicle periphery monitoring device
JP5109691B2 (en) Analysis device
US8406472B2 (en) Method and system for processing image data
WO2011108198A1 (en) Surrounding area monitoring device for vehicle
JP2001213254A (en) Side monitoring device for vehicle
JP6171593B2 (en) Object tracking method and system from parallax map
JP5003395B2 (en) Vehicle periphery image processing apparatus and vehicle periphery state presentation method
JP4609603B2 (en) 3D information display device and 3D information display method
JP4797877B2 (en) VEHICLE VIDEO DISPLAY DEVICE AND VEHICLE AROUND VIDEO DISPLAY METHOD
JP6708730B2 (en) Mobile
JP2009040108A (en) Image display control device and image display control system
JP2011048520A (en) Device and method for monitoring vehicle periphery
JP2008158640A (en) Moving object detection apparatus
JP2012198857A (en) Approaching object detector and approaching object detection method
JP5251814B2 (en) Driving assistance device
JP2012252501A (en) Traveling path recognition device and traveling path recognition program
JP4967758B2 (en) Object movement detection method and detection apparatus
JP4553071B1 (en) 3D information display device and 3D information display method
CN108290499B (en) Driver assistance system with adaptive ambient image data processing
JP2014044730A (en) Image processing apparatus
JP2012256159A (en) Approaching object detecting device and method for detecting approaching object

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20110907

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20121219

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20121225

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20130118

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20130305

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20130318

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20160412

Year of fee payment: 3

R151 Written notification of patent or utility model registration

Ref document number: 5240149

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20160412

Year of fee payment: 3

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350