JP6861599B2 - Peripheral monitoring device - Google Patents

Peripheral monitoring device

Info

Publication number
JP6861599B2
Authority
JP
Japan
Prior art keywords
three-dimensional object
bird's-eye view image
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2017166766A
Other languages
Japanese (ja)
Other versions
JP2019047244A (en)
Inventor
雄太 大泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Faurecia Clarion Electronics Co Ltd
Original Assignee
Clarion Co Ltd
Faurecia Clarion Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Clarion Co Ltd, Faurecia Clarion Electronics Co Ltd filed Critical Clarion Co Ltd
Priority to JP2017166766A
Publication of JP2019047244A
Application granted
Publication of JP6861599B2
Legal status: Active
Anticipated expiration

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Description

The present invention relates to a peripheral monitoring device that monitors the periphery of a vehicle.

Conventionally, there is a known technique for realizing safer driving by imaging the surroundings of a vehicle with an imaging unit provided on the vehicle and presenting the driver with a bird's-eye view image obtained by viewpoint conversion of the captured image. However, a three-dimensional object cannot be identified from a bird's-eye view image alone. A known technique therefore obtains a difference image from at least two bird's-eye view images generated at different times and recognizes three-dimensional objects existing around the vehicle on the basis of that difference image (see Patent Document 1).

However, when a bird's-eye view image is displayed to the driver, three-dimensional objects appear as if they had fallen over onto the road surface. Consequently, among the three-dimensional objects around the vehicle, floating objects that are not in contact with the road surface appear farther away than their actual positions, owing to the characteristics of the bird's-eye view image.

As one measure to solve this problem, there is a technique that recognizes the position of a floating object by using a bird's-eye view image in which the captured three-dimensional object is viewpoint-converted onto the road surface together with a bird's-eye view image in which it is viewpoint-converted onto a horizontal plane higher than the installation position of the imaging unit (see Patent Document 2). As another, there is a technique that recognizes the position of a floating object by comparing a plurality of viewpoint-converted bird's-eye view images captured by a plurality of imaging units installed at different heights (see Patent Document 3).

Patent Document 1: Japanese Unexamined Patent Application Publication No. 2006-253872
Patent Document 2: Japanese Unexamined Patent Application Publication No. 2009-93332
Patent Document 3: Japanese Unexamined Patent Application Publication No. 2009-188635

However, with the techniques disclosed in Patent Documents 2 and 3, the position at which a floating object would be projected onto the road surface directly below it cannot be identified on the bird's-eye view image. Consequently, if the driver relies on the bird's-eye view image and brings the vehicle close to the floating object, the vehicle may come into contact with it.

The present invention has been made in view of the above problem, and an object of the present invention is to provide a peripheral monitoring device capable of preventing a vehicle from coming into contact with floating objects existing around the vehicle.

A peripheral monitoring device according to the present invention comprises: an imaging unit, attached to a vehicle traveling on a road surface, that images the periphery of the vehicle including a floating object not in contact with the road surface; a bird's-eye view image generation unit that viewpoint-converts two images, obtained by the imaging unit imaging the periphery and changing with the movement of the vehicle, to generate a first bird's-eye view image and a second bird's-eye view image; a first three-dimensional object region extraction unit that divides the first bird's-eye view image at predetermined angular intervals, centered on the position of the imaging unit, by first radial lines extending radially from the imaging unit, evaluates the verticality of the floating object with respect to the road surface, and extracts from the resulting plurality of three-dimensional object regions a first three-dimensional object region containing the floating object; a second three-dimensional object region extraction unit that divides the second bird's-eye view image at predetermined angular intervals, centered on the position of the imaging unit, by second radial lines extending radially from the imaging unit, evaluates the verticality of the floating object with respect to the road surface, and extracts from the resulting plurality of three-dimensional object regions a second three-dimensional object region containing the floating object; and a projection coordinate point identification unit that, on a superimposed bird's-eye view image in which the first bird's-eye view image and the second bird's-eye view image are superimposed, identifies the point where a first straight line extending along the first radial line from the center of gravity of the floating object in the first three-dimensional object region intersects a second straight line extending along the second radial line from the center of gravity of the floating object in the second three-dimensional object region, as a projection coordinate point at which the floating object is projected onto the road surface directly below it.

According to the peripheral monitoring device of the present invention configured in this way, the vehicle can be prevented from coming into contact with floating objects existing around the vehicle.

FIG. 1 is a block diagram showing the schematic configuration of a peripheral monitoring device according to an embodiment of the present invention.
FIG. 2 is a hardware block diagram showing the hardware configuration of the peripheral monitoring device according to the embodiment.
FIG. 3 is a block diagram showing the schematic configuration of a projection coordinate point acquisition unit according to the embodiment.
FIGS. 4 and 5 are explanatory diagrams (parts 1 and 2) of the processing performed by the first three-dimensional object region extraction unit.
FIGS. 6 and 7 are explanatory diagrams (parts 1 and 2) of the processing performed by the second three-dimensional object region extraction unit.
FIG. 8 is an explanatory diagram of the processing performed by the projection coordinate point identification unit.
FIGS. 9 to 16 are explanatory diagrams (parts 1 to 8) of the processing performed by the projection coordinate point output unit.

Hereinafter, a peripheral monitoring device according to an embodiment of the present invention will be described with reference to the drawings.

<Schematic configuration of the peripheral monitoring device>
FIG. 1 shows the schematic configuration of a peripheral monitoring device according to an embodiment of the present invention. The peripheral monitoring device 100 is mounted on a vehicle 10 traveling on a road surface. The peripheral monitoring device 100 mainly comprises an imaging unit 12, an image input unit 20, a bird's-eye view image generation unit 30, a three-dimensional object detection unit 40, a projection coordinate point acquisition unit 50, and a projection coordinate point output unit 60.

The imaging unit 12 is attached to the left side of the vehicle 10. The imaging unit 12 images a left observation range, including the road surface immediately adjacent to the vehicle 10, over a 180° field of view (angle of view).

The image input unit 20 converts the two images obtained by the imaging unit 12 imaging the left observation range at two different times, (t−Δt) and (t), into an original image 70(t−Δt) and an original image 70(t) in a digital image format that a computer can handle. The image input unit 20 inputs the original image 70(t−Δt) and the original image 70(t) to the bird's-eye view image generation unit 30.

The bird's-eye view image generation unit 30 viewpoint-converts the original image 70(t−Δt) and the original image 70(t) to generate a first bird's-eye view image 72(t−Δt) and a second bird's-eye view image 72(t) as seen from a predetermined viewpoint position.
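The patent does not prescribe a particular viewpoint-conversion algorithm. A common way to realize this step is inverse perspective mapping with a planar homography; the following is a minimal sketch under that assumption, using OpenCV, with the function and variable names being illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def to_birds_eye(original: np.ndarray, homography: np.ndarray,
                 out_size: tuple[int, int]) -> np.ndarray:
    """Viewpoint-convert a camera image onto the road plane.

    `homography` is the 3x3 matrix mapping camera pixels to
    road-plane (bird's-eye) pixels; it would be calibrated offline
    from the camera's mounting pose and intrinsics.
    `out_size` is (width, height) of the bird's-eye image.
    """
    return cv2.warpPerspective(original, homography, out_size)

# Applying this to the two frames taken at (t - dt) and (t) yields
# the two bird's-eye views 72(t - dt) and 72(t) used downstream.
```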

Details of the three-dimensional object detection unit 40, the projection coordinate point acquisition unit 50, and the projection coordinate point output unit 60 will be described later.

<Hardware configuration>
FIG. 2 is a hardware block diagram showing the hardware configuration of the peripheral monitoring device according to an embodiment of the present invention. The hardware configuration is described below with reference to FIG. 2.

The peripheral monitoring device 100 comprises an ECU (Electronic Control Unit) 110, a left camera 12a, a vehicle state sensor 140, and a monitor 150 (display unit), all mounted on the vehicle 10.

The left camera 12a constitutes the imaging unit 12 (FIG. 1).

The vehicle state sensor 140 is composed of a steering angle sensor and a distance sensor. By detecting the behavior of the vehicle 10, the vehicle state sensor 140 calculates the amount and direction of movement of the vehicle 10.
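The patent leaves the odometry computation unspecified; a minimal dead-reckoning sketch under a simple bicycle-model assumption (the names and the wheelbase value are illustrative) could look like this:

```python
import math

def dead_reckon(x: float, y: float, heading: float,
                distance: float, steering_angle: float,
                wheelbase: float = 2.7) -> tuple[float, float, float]:
    """Update the vehicle pose from one sensor sample.

    `distance` is the travel reported by the distance sensor and
    `steering_angle` (radians) comes from the steering angle sensor.
    A bicycle model converts them into a pose increment, giving the
    movement amount and direction used by the later image alignment.
    """
    heading += distance * math.tan(steering_angle) / wheelbase
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading
```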

The monitor 150 displays the output of the projection coordinate point output unit 60 (FIG. 1).

The ECU 110 comprises a CPU (Central Processing Unit) 112, a camera interface 114, a sensor input interface 116, an image processing module 118, a memory 120, and a display control unit 122.

The CPU 112 sends and receives the necessary data and executes programs.

The camera interface 114 is connected to the CPU 112 via a bus 124. The camera interface 114 controls the left camera 12a.

The sensor input interface 116 is connected to the CPU 112 via the bus 124. The sensor input interface 116 acquires the measurement results of the vehicle state sensor 140.

The image processing module 118 is connected to the CPU 112 via the bus 124. The image processing module 118 executes image processing according to a program built into the module.

The memory 120 is connected to the CPU 112 via the bus 124. The memory 120 stores intermediate results of image processing, necessary constants, programs, and the like.

The display control unit 122 constitutes the projection coordinate point output unit 60 (FIG. 1). The display control unit 122 is connected to the CPU 112 via the bus 124. The display control unit 122 controls the monitor 150.

The image input unit 20, the bird's-eye view image generation unit 30, the three-dimensional object detection unit 40, the projection coordinate point acquisition unit 50, and the projection coordinate point output unit 60 described with reference to FIG. 1 are controlled by software that realizes the operations described later. This software is stored in the memory 120 (FIG. 2) and executed as needed.

<Schematic configuration of the projection coordinate point acquisition unit>
FIG. 3 shows the schematic configuration of a projection coordinate point acquisition unit according to an embodiment of the present invention. The schematic configuration of the projection coordinate point acquisition unit is described below with reference to FIG. 3.

The projection coordinate point acquisition unit 50 comprises a first three-dimensional object region extraction unit 51, a second three-dimensional object region extraction unit 52, and a projection coordinate point identification unit 53.

The first three-dimensional object region extraction unit 51 comprises a region division unit 51A and a verticality evaluation unit 51B.

The region division unit 51A divides the first bird's-eye view image 72(t−Δt) shown in FIG. 4 at predetermined angular intervals, centered on the position of the left camera 12a, by first radial lines extending radially from the left camera 12a (FIG. 4).

The verticality evaluation unit 51B evaluates, among the plurality of three-dimensional object regions divided by the region division unit 51A, the verticality of the bars 85 and 86 (FIG. 4) contained in specific three-dimensional object regions with respect to the road surface 80 (FIG. 4). Of course, the verticality evaluation is not limited to the bars 85 and 86 and may also be performed on the supports 81, 82, 83, and 84 (FIG. 4).

The second three-dimensional object region extraction unit 52 comprises a region division unit 52A and a verticality evaluation unit 52B.

The region division unit 52A divides the second bird's-eye view image 72(t) shown in FIG. 6 at predetermined angular intervals, centered on the position of the left camera 12a, by second radial lines extending radially from the left camera 12a (FIG. 6).

The verticality evaluation unit 52B evaluates, among the plurality of three-dimensional object regions divided by the region division unit 52A, the verticality of the bars 85 and 86 (FIG. 6) contained in specific three-dimensional object regions with respect to the road surface 80 (FIG. 6).

Details of the projection coordinate point identification unit 53 will be described later.

<Description of each process performed by the peripheral monitoring device>
The processing performed by the peripheral monitoring device 100 is described below, step by step.

<Description of the processing performed by the three-dimensional object detection unit 40>
First, the processing performed by the three-dimensional object detection unit 40 is described with reference to FIGS. 4 and 6.

The first bird's-eye view image 72(t−Δt) shown in FIG. 4 is a bird's-eye view image obtained by the left camera 12a imaging, at time (t−Δt), the left observation range including the road surface 80 immediately adjacent to the vehicle 10, followed by viewpoint conversion. The vehicle 10 appearing in the lower part of the first bird's-eye view image 72(t−Δt) is, for example, a vehicle rendered by CG (Computer Graphics).

The second bird's-eye view image 72(t) shown in FIG. 6 is a bird's-eye view image obtained by the left camera 12a imaging, at time (t), the left observation range including the road surface 80 immediately adjacent to the vehicle 10, followed by viewpoint conversion. The vehicle 10 appearing in the lower part of the second bird's-eye view image 72(t) is, for example, a vehicle rendered by CG.

As shown in FIGS. 4 and 6, four supports 81, 82, 83, and 84 stand upright on the road surface 80. A bar 85 as a linear body (floating object) spans the tips of the supports 81 and 82. A bar 86 as a linear body (floating object) spans the tips of the supports 83 and 84.

As shown in FIGS. 4 and 6, a puddle P of water collected on the road surface 80 extends around the foot portions 83a and 84a of the supports 83 and 84. As shown in FIGS. 4 and 6, light from the supports 83 and 84 is specularly reflected at the surface of the puddle P formed on the road surface 80.

As a result, the viewpoint-converted bird's-eye view image of the supports 83 and 84 themselves and the viewpoint-converted bird's-eye view image of the supports 87 and 88 reflected in the surface of the puddle P merge into one. Consequently, as shown in FIGS. 4 and 6, the two supports 83 and 84 standing on the road surface 80 appear closer to the vehicle 10 than their actual positions.

The three-dimensional object detection unit 40 translates or rotates the first bird's-eye view image 72(t−Δt) shown in FIG. 4 so as to match the amount and direction of movement of the vehicle 10 during the short time interval Δt, thereby generating a virtual second predicted bird's-eye view image 72'(t) at time (t). FIG. 12 shows an example of the second predicted bird's-eye view image 72'(t).

Specifically, based on the supports 81 to 84 and the bars 85 and 86 depicted on the road surface 80 in the second bird's-eye view image 72(t) shown in FIG. 6 and in the second predicted bird's-eye view image 72'(t) shown in FIG. 12, the three-dimensional object detection unit 40 calculates how far the supports 81 to 84 and the bars 85 and 86 have translated and rotated in response to the movement of the vehicle 10. Based on these translation and rotation amounts, the three-dimensional object detection unit 40 aligns the supports 81 to 84 and the bars 85 and 86 in the second bird's-eye view image 72(t) with those in the second predicted bird's-eye view image 72'(t).

After the alignment, the three-dimensional object detection unit 40 performs a subtraction process that subtracts the second bird's-eye view image 72(t) shown in FIG. 6 from the second predicted bird's-eye view image 72'(t) shown in FIG. 12, obtaining a difference image.

Even as the vehicle 10 moves, the foot portion 83a of the support 83 and the foot portion 84a of the support 84 appear at substantially the same positions in the second predicted bird's-eye view image 72'(t) and the second bird's-eye view image 72(t). The portions removed in the difference image between the two can therefore be inferred to be the foot portion 83a of the support 83 and the foot portion 84a of the support 84. Accordingly, the positions of the foot portion 83a of the support 83, which merges in the bird's-eye view image with the support 87 reflected in the surface of the puddle P, and of the foot portion 84a of the support 84, which likewise merges with the reflected support 88, can be extracted.

In extracting the positions of the foot portions 83a and 84a of the supports 83 and 84, the three-dimensional object detection unit 40 may instead generate a virtual first predicted bird's-eye view image 72'(t−Δt) at time (t−Δt). FIG. 10 shows an example of the first predicted bird's-eye view image 72'(t−Δt).

Specifically, based on the supports 81 to 84 and the bars 85 and 86 depicted on the road surface 80 in the first bird's-eye view image 72(t−Δt) shown in FIG. 4 and in the first predicted bird's-eye view image 72'(t−Δt) shown in FIG. 10, the three-dimensional object detection unit 40 calculates how far the supports 81 to 84 and the bars 85 and 86 have translated and rotated in response to the movement of the vehicle 10. Based on these translation and rotation amounts, the three-dimensional object detection unit 40 aligns the supports 81 to 84 and the bars 85 and 86 in the first bird's-eye view image 72(t−Δt) with those in the first predicted bird's-eye view image 72'(t−Δt).

After the alignment, the three-dimensional object detection unit 40 performs a subtraction process that subtracts the first bird's-eye view image 72(t−Δt) shown in FIG. 4 from the first predicted bird's-eye view image 72'(t−Δt) shown in FIG. 10, obtaining a difference image. The portions removed in this difference image can likewise be inferred to be the foot portion 83a of the support 83 and the foot portion 84a of the support 84, just as in the difference image between the second predicted bird's-eye view image 72'(t) shown in FIG. 12 and the second bird's-eye view image 72(t) shown in FIG. 6.

The luminance of the difference image obtained by the subtraction process between a predicted bird's-eye view image and an actually obtained bird's-eye view image, as described above, tends to fall below a threshold in regions where only the road surface 80 is present and to exceed the threshold in regions where three-dimensional objects such as the supports 81 to 84 and the bars 85 and 86 are present. Exploiting this tendency, the three-dimensional object detection unit 40 detects regions where the luminance of the difference image exceeds the threshold as three-dimensional object presence regions.
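As a concrete illustration of the motion-compensated differencing described above, here is a minimal sketch, assuming grayscale bird's-eye views and that the vehicle motion has already been expressed as a 2D rigid transform of the bird's-eye plane; OpenCV function names are real, all other names are illustrative.

```python
import cv2
import numpy as np

def detect_solid_regions(bev_prev: np.ndarray, bev_curr: np.ndarray,
                         dx: float, dy: float, dtheta_deg: float,
                         center: tuple[float, float],
                         threshold: int = 30) -> np.ndarray:
    """Warp the previous bird's-eye view by the vehicle motion to get
    the predicted view, subtract the current view, and threshold.

    Road-surface pixels cancel out in the difference; raised objects
    (supports, bars) do not, so the returned binary mask marks
    candidate three-dimensional object presence regions.
    """
    h, w = bev_prev.shape[:2]
    # 2x3 rigid transform: rotation about the camera position plus translation.
    m = cv2.getRotationMatrix2D(center, dtheta_deg, 1.0)
    m[0, 2] += dx
    m[1, 2] += dy
    predicted = cv2.warpAffine(bev_prev, m, (w, h))  # e.g. predicted view 72'(t)
    diff = cv2.absdiff(predicted, bev_curr)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask
```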

<Description of the processing performed by the first three-dimensional object region extraction unit 51>
Next, the processing performed by the first three-dimensional object region extraction unit 51 is described with reference to FIGS. 4 and 5.

(Description of the processing performed by the region division unit 51A)
First, as shown in FIG. 4, the region division unit 51A divides the first bird's-eye view image 72(t−Δt) by seven first radial lines RL10, RL11, RL12, RL13, RL14, RL15, RL16, and RL17 extending radially from the left camera 12a, at equal angular intervals (30°) centered on the position of the left camera 12a.

The first radial lines RL10 to RL17 thereby divide the first bird's-eye view image 72(t−Δt), as shown in FIG. 4, into six three-dimensional object regions 72(t−Δt)_1, 72(t−Δt)_2, 72(t−Δt)_3, 72(t−Δt)_4, 72(t−Δt)_5, and 72(t−Δt)_6.

The first radial lines RL10 to RL17 serve as an index of the verticality of the supports 81 to 84 and the bars 85 and 86 with respect to the road surface 80. Because a support stands vertically on the road surface 80, in the bird's-eye view its longitudinal direction coincides with the direction in which the first radial line extends.
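The angular division can be pictured as binning every bird's-eye-view pixel by its bearing from the camera position. A minimal sketch under that reading (names are illustrative; 30° steps as in FIG. 4):

```python
import numpy as np

def sector_labels(height: int, width: int,
                  cam_xy: tuple[float, float],
                  step_deg: float = 30.0) -> np.ndarray:
    """Label each bird's-eye-view pixel with the index of the angular
    sector (bounded by consecutive radial lines) that contains it.

    For a side camera with a 180-degree field of view, all imaged
    pixels lie in one half-plane, so folding bearings into [0, 180)
    is sufficient for this sketch.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    angles = np.degrees(np.arctan2(ys - cam_xy[1], xs - cam_xy[0])) % 180.0
    return (angles // step_deg).astype(np.int32)  # 0..5 for 30 deg steps
```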

(Description of the processing performed by the verticality evaluation unit 51B)
As shown in FIG. 5, the verticality evaluation unit 51B evaluates the verticality of the supports 81 to 84 and the bars 85 and 86 with respect to the road surface 80. The verticality evaluation unit 51B distinguishes the supports 81 and 82 from the bar 85 within the single connected region formed by the supports 81 and 82 and the bar 85. Similarly, within the single connected region formed by the supports 83 and 84, the bar 86, and the supports 87 and 88 reflected in the surface of the puddle P, the verticality evaluation unit 51B distinguishes the combined mass of the supports 83 and 87, the combined mass of the supports 84 and 88, and the bar 86.

One way to distinguish supports from bars is, for example, to divide a connected region into a plurality of blocks, perform edge detection in the vertical or horizontal direction for each block, and use the similarity of the resulting edge directions. In the following, the verticality evaluation of the bars 85 and 86 with respect to the road surface 80 and the verticality evaluation of the supports 81 to 84 with respect to the road surface 80 are described separately.

(Verticality evaluation of the bars 85 and 86 with respect to the road surface 80)
As shown in FIG. 5, the verticality evaluation unit 51B determines whether the angle between the longitudinal vector V1 of the bar 85 and the radial direction vector of the left camera 12a exceeds a predetermined reference angle. If the angle between the two vectors exceeds the reference angle, the verticality evaluation unit 51B determines that the bar 85 does not stand vertically on the road surface 80. Based on this determination, the first three-dimensional object region extraction unit 51 (FIG. 3) extracts the three-dimensional object regions 72(t−Δt)_1 and 72(t−Δt)_2 containing the bar 85. The bar 85 becomes a target for obtaining the center of gravity G1 (FIG. 8) in the projection coordinate point identification unit 53 (FIG. 3) described later.

Likewise, as shown in FIG. 5, the verticality evaluation unit 51B determines whether the angle between the longitudinal vector V2 of the bar 86 and the radial direction vector of the left camera 12a exceeds the predetermined reference angle. If the angle between the two vectors exceeds the reference angle, the verticality evaluation unit 51B determines that the bar 86 does not stand vertically on the road surface 80. Based on this determination, the first three-dimensional object region extraction unit 51 (FIG. 3) extracts the three-dimensional object regions 72(t−Δt)_5 and 72(t−Δt)_6 containing the bar 86. The bar 86 becomes a target for obtaining the center of gravity G21 (FIG. 8) in the projection coordinate point identification unit 53 (FIG. 3) described later.

(Verticality evaluation of the supports 81 to 84 with respect to the road surface 80)
As shown in FIG. 5, the verticality evaluation unit 51B evaluates the verticality of the supports 81 to 84 with respect to the road surface 80 in the same way as for the bars 85 and 86. The supports 81 to 84 of this embodiment stand vertically on the road surface 80. Therefore, the direction of the longitudinal vector V3 of the support 81 coincides with the radial direction of the left camera 12a.

The direction of the longitudinal vector V4 of the support 82 likewise coincides with the radial direction of the left camera 12a. So do the direction of the longitudinal vector V5 of the support 83 together with the support 87 reflected in the surface of the puddle P, and the direction of the longitudinal vector V6 of the support 84 together with the reflected support 88.

In FIG. 5, the acute angle between the longitudinal vector of each support and the vector along the first radial line is judged to be within the reference angle. The supports 81 to 84 are therefore excluded from the targets for which the center of gravity is obtained in the projection coordinate point identification unit 53 (FIG. 3) described later.
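The verticality test in this section reduces to comparing an object's longitudinal direction with the direction of the radial line through it. A minimal sketch of that comparison (the helper name is_vertical and the 15° value are illustrative; the patent only says the reference angle is predetermined):

```python
import numpy as np

def is_vertical(longitudinal: np.ndarray, radial: np.ndarray,
                reference_deg: float = 15.0) -> bool:
    """Return True if the object's longitudinal direction lies within
    the reference angle of the radial direction, i.e. the object is
    judged to stand vertically on the road surface.
    """
    a = longitudinal / np.linalg.norm(longitudinal)
    b = radial / np.linalg.norm(radial)
    # Acute angle between the two directions (the sign of the axis is irrelevant).
    angle = np.degrees(np.arccos(np.clip(abs(a @ b), -1.0, 1.0)))
    return angle <= reference_deg

# A bar spanning two supports fails this test (its axis runs roughly
# across the radial lines), so its region is extracted as containing
# a floating object, while upright supports pass and are excluded.
```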

<Description of the processing performed by the second three-dimensional object region extraction unit 52>
Next, the processing performed by the second three-dimensional object region extraction unit 52 is described with reference to FIGS. 6 and 7. The region division unit 52A and verticality evaluation unit 52B of the second three-dimensional object region extraction unit 52 operate in the same way as the region division unit 51A and verticality evaluation unit 51B of the first three-dimensional object region extraction unit 51, respectively, so parts of the description are omitted.

(Description of the processing performed by the region division unit 52A)
First, as shown in FIG. 6, the region division unit 52A divides the second bird's-eye view image 72(t) by seven second radial lines RL20, RL21, RL22, RL23, RL24, RL25, RL26, and RL27 extending radially from the left camera 12a, at equal angular intervals θ2 (= 30°) centered on the position of the left camera 12a. The second bird's-eye view image 72(t) is thereby divided into six three-dimensional object regions 72(t)_1, 72(t)_2, 72(t)_3, 72(t)_4, 72(t)_5, and 72(t)_6.

(Description of the processing performed by the verticality evaluation unit 52B)
Next, the verticality evaluation unit 52B evaluates the verticality of the supports 81 to 84 and the bars 85 and 86 with respect to the road surface 80. In the following, the verticality evaluation of the bars 85 and 86 and that of the supports 81 to 84 are described separately.

(Verticality evaluation of the bars 85 and 86 with respect to the road surface 80)
As shown in FIG. 7, the verticality evaluation unit 52B determines whether the angle between the longitudinal vector V21 of the bar 85 and the radial direction vector of the left camera 12a exceeds the predetermined reference angle. In FIG. 7 the angle between the two vectors exceeds the reference angle, so the second three-dimensional object region extraction unit 52 extracts the three-dimensional object regions 72(t)_1 and 72(t)_2 containing the bar 85. The bar 85 becomes a target for obtaining the center of gravity G2 (FIG. 8) in the projection coordinate point identification unit 53 (FIG. 3) described later.

As shown in FIG. 7, the verticality evaluation unit 52B likewise determines whether the angle between the longitudinal vector V22 of the bar 86 and the radial direction vector of the left camera 12a exceeds the predetermined reference angle. In FIG. 7 the angle between the two vectors exceeds the reference angle, so the second three-dimensional object region extraction unit 52 (FIG. 3) extracts the three-dimensional object regions 72(t)_4 to 72(t)_6 containing the bar 86. The bar 86 becomes a target for obtaining the center of gravity G22 (FIG. 8) in the projection coordinate point identification unit 53 (FIG. 3) described later.

(Verticality evaluation of the supports 81 to 84 with respect to the road surface 80)
As shown in FIG. 7, the verticality evaluation unit 52B evaluates the verticality of the supports 81 to 84 with respect to the road surface 80 in the same way as for the bars 85 and 86. As described above, the supports 81 to 84 stand vertically on the road surface 80. Therefore, the direction of the longitudinal vector V23 of the support 81 coincides with the radial direction of the left camera 12a.

The direction of the longitudinal vector V24 of the support 82 likewise coincides with the radial direction of the left camera 12a. So do the direction of the longitudinal vector V25 of the support 83 together with the support 87 reflected in the surface of the puddle P, and the direction of the longitudinal vector V26 of the support 84 together with the reflected support 88.

In FIG. 7, the acute angle between the longitudinal vector of each support and the vector along the second radial line is judged to be within the reference angle. The supports 81 to 84 are therefore excluded from the targets for which the center of gravity is obtained in the projection coordinate point identification unit 53 (FIG. 3) described later.

<Description of the processing performed by the projection coordinate point identification unit 53>
Next, the processing performed by the projection coordinate point identification unit 53 is described with reference to FIG. 8.

The superimposed bird's-eye view image SVI shown in FIG. 8 is obtained by moving and rotating the first bird's-eye view image 72(t−Δt) shown in FIG. 4 and the second bird's-eye view image 72(t) shown in FIG. 6 so that the two are aligned, and then superimposing them. The vehicle 10 drawn with a chain line in the lower part of the superimposed bird's-eye view image SVI appears at the same position as in the lower part of the first bird's-eye view image 72(t−Δt) shown in FIG. 4. The vehicle 10 drawn with a solid line appears at the same position as in the lower part of the second bird's-eye view image 72(t) shown in FIG. 6.

In FIG. 8, for clarity, the supports 81 to 84 and the bars 85 and 86 contained in the first bird's-eye view image 72(t−Δt) are drawn hatched with diagonal lines, and those contained in the second bird's-eye view image 72(t) are drawn shaded. Also for clarity, different hatching is used in FIG. 8 for the supports 87 and 88 reflected in the surface of the puddle P in the first bird's-eye view image 72(t−Δt) and for those in the second bird's-eye view image 72(t).

As shown in FIG. 8, the projection coordinate point identification unit 53 identifies, on the superimposed bird's-eye view image SVI, the point where the first straight line L1 and the first straight line L21 intersect as the projection coordinate point SP1 at which the bar 85 is projected onto the road surface 80 directly below it. The first straight line L1 extends from the center of gravity G1 of the diagonally hatched bar 85 in the radial direction of the left camera 12a mounted on the vehicle 10 drawn with the chain line. The first straight line L21 extends from the center of gravity G2 of the shaded bar 85 in the radial direction of the left camera 12a mounted on the vehicle 10 drawn with the solid line.

As shown in FIG. 8, the projection coordinate point identification unit 53 identifies, on the superimposed bird's-eye view image SVI, the point where the second straight line L2 and the second straight line L22 intersect as the projection coordinate point SP2 at which the bar 86 is projected onto the road surface 80 directly below it. The second straight line L2 extends from the center of gravity G21 of the diagonally hatched bar 86 in the radial direction of the left camera 12a mounted on the vehicle 10 drawn with the chain line. The second straight line L22 extends from the center of gravity G22 of the shaded bar 86 in the radial direction of the left camera 12a mounted on the vehicle 10 drawn with the solid line.
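Geometrically, each projection coordinate point is the intersection of two lines, each passing through one camera position and the corresponding center of gravity of the bar. A minimal 2D line-intersection sketch (names and example coordinates are illustrative):

```python
import numpy as np

def projection_point(cam1: np.ndarray, g1: np.ndarray,
                     cam2: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Intersect the line through (cam1, g1) with the line through
    (cam2, g2) on the superimposed bird's-eye view plane.

    cam1/cam2 are the camera positions before and after the vehicle
    moved; g1/g2 are the centers of gravity of the floating object in
    the corresponding bird's-eye views. Parallel lines (a singular
    system) would mean no unique intersection exists.
    """
    d1, d2 = g1 - cam1, g2 - cam2
    # Solve cam1 + s*d1 == cam2 + t*d2 for (s, t).
    a = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    s, _ = np.linalg.solve(a, cam2 - cam1)
    return cam1 + s * d1  # the projected coordinate point, e.g. SP1

# Example with made-up coordinates:
# sp1 = projection_point(np.array([0.0, 0.0]), np.array([2.0, 1.0]),
#                        np.array([1.0, 0.0]), np.array([2.0, 2.0]))
```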

<Description of the processing performed by the projection coordinate point output unit 60>
Next, the processing performed by the projection coordinate point output unit 60 is described with reference to FIGS. 9 to 16.

(Verticality determination processing using the bird's-eye view image and the first predicted bird's-eye view image)
First, as shown in FIGS. 9 and 10, the projection coordinate point output unit 60 determines the verticality, with respect to the road surface 80, of the three-dimensional objects contained in the three-dimensional object regions that include the projection coordinate points, using the second bird's-eye view image 72(t) and the first predicted bird's-eye view image 72'(t−Δt) generated by translating or rotating the second bird's-eye view image 72(t).

In the following, the verticality determination processing for the three-dimensional object region containing the projection coordinate point SP1 and that for the region containing the projection coordinate point SP2 are described separately.

(Verticality determination processing for the three-dimensional object region containing the projection coordinate point SP1)
First, the verticality determination processing for the three-dimensional object region containing the projection coordinate point SP1 is described with reference to FIGS. 9 and 10.

As shown in FIG. 9, the projection coordinate point output unit 60 extracts, from the second bird's-eye view image 72(t), the three-dimensional object region 72(t)_1 containing the projection coordinate point SP1 identified by the projection coordinate point identification unit 53. As shown in FIG. 9, the projection coordinate point output unit 60 computes the composite vector SV1 of the longitudinal vector V31 of the bar 85 and the longitudinal vector V32 of the support 81, both contained in the three-dimensional object region 72(t)_1.

As shown in FIG. 10, the projection coordinate point output unit 60 extracts, from the first predicted bird's-eye view image 72'(t−Δt), the three-dimensional object region 72'(t−Δt)_2 containing the projection coordinate point SP1' corresponding to the projection coordinate point SP1 identified by the projection coordinate point identification unit 53. As shown in FIG. 10, the three-dimensional object region 72'(t−Δt)_2 contains only the longitudinal vector V31' of the bar 85, so the projection coordinate point output unit 60 does not compute a composite vector.

(First determination processing)
Next, as shown in FIG. 9, the projection coordinate point output unit 60 performs first determination processing, which determines whether the angle between the composite vector SV1 and the radial direction vector of the left camera 12a is within the predetermined reference angle.

(Second determination processing)
Subsequently, as shown in FIGS. 9 and 10, the projection coordinate point output unit 60 performs second determination processing, which determines whether the angle between the composite vector SV1 and the vector V31' is within the predetermined reference angle.

Only when the determination conditions of both the first determination processing and the second determination processing are satisfied does the projection coordinate point output unit 60 extract the projection coordinate point SP1 from the second bird's-eye view image 72(t) as a target to be displayed on the monitor 150.
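A minimal sketch of this two-stage gating, assuming the longitudinal vectors of a region have already been extracted (all names and the reference angle are illustrative):

```python
import numpy as np

def angle_between_deg(u: np.ndarray, v: np.ndarray) -> float:
    """Acute angle in degrees between two 2D directions."""
    c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def passes_output_check(vectors_in_region: list[np.ndarray],
                        vector_in_predicted: np.ndarray,
                        radial: np.ndarray,
                        reference_deg: float = 15.0) -> bool:
    """Gate a projection coordinate point for display: the composite
    (summed) longitudinal vector of its region must lie within the
    reference angle of (1) the camera's radial direction and (2) the
    corresponding vector in the predicted bird's-eye view.
    """
    composite = np.sum(vectors_in_region, axis=0)  # e.g. SV1 = V31 + V32
    return (angle_between_deg(composite, radial) <= reference_deg and
            angle_between_deg(composite, vector_in_predicted) <= reference_deg)
```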

(Verticality determination processing for the three-dimensional object region containing the projection coordinate point SP2)
Next, the verticality determination processing for the three-dimensional object region containing the projection coordinate point SP2 is described with reference to FIGS. 9 and 10.

As shown in FIG. 9, the projection coordinate point output unit 60 extracts, from the second bird's-eye view image 72(t), the three-dimensional object region 72(t)_5 containing the projection coordinate point SP2 identified by the projection coordinate point identification unit 53. As shown in FIG. 9, the three-dimensional object region 72(t)_5 contains only the longitudinal vector V33 of the bar 86, so the projection coordinate point output unit 60 does not compute a composite vector.

As shown in FIG. 10, the projection coordinate point output unit 60 extracts, from the first predicted bird's-eye view image 72'(t−Δt), the three-dimensional object region 72'(t−Δt)_6 containing the projection coordinate point SP2' corresponding to the projection coordinate point SP2 identified by the projection coordinate point identification unit 53. The projection coordinate point output unit 60 computes the composite vector SV2' of the longitudinal vector V33' of the bar 86 and the longitudinal vector V34' of the combined mass of the support 84 and the support 88 reflected in the surface of the puddle P, both contained in the three-dimensional object region 72'(t−Δt)_6.

(Third determination processing)
Next, as shown in FIG. 9, the projection coordinate point output unit 60 performs third determination processing, which determines whether the angle between the vector V33 and the radial direction vector of the left camera 12a is within the predetermined reference angle.

(Fourth determination processing)
Subsequently, as shown in FIGS. 9 and 10, the projection coordinate point output unit 60 performs fourth determination processing, which determines whether the angle between the vector V33 and the composite vector SV2' is within the predetermined reference angle.

Only when the determination conditions of both the third determination processing and the fourth determination processing are satisfied does the projection coordinate point output unit 60 extract the projection coordinate point SP2 from the second bird's-eye view image 72(t) as a target to be displayed on the monitor 150.

(Verticality determination processing using the bird's-eye view image and the second predicted bird's-eye view image)
Next, as shown in FIGS. 11 and 12, the projection coordinate point output unit 60 determines the verticality, with respect to the road surface 80, of the three-dimensional objects contained in the three-dimensional object regions that include the projection coordinate points, using the first bird's-eye view image 72(t−Δt) and the second predicted bird's-eye view image 72'(t) generated by translating or rotating the first bird's-eye view image 72(t−Δt). In the following, the verticality determination processing for the three-dimensional object region containing the projection coordinate point SP1 and that for the region containing the projection coordinate point SP2 are described separately.

(Verticality determination processing for the three-dimensional object region containing the projection coordinate point SP1)
First, the verticality determination processing for the three-dimensional object region containing the projection coordinate point SP1 is described with reference to FIGS. 11 and 12.

As shown in FIG. 11, the projection coordinate point output unit 60 extracts, from the first bird's-eye view image 72(t−Δt), the three-dimensional object region 72(t−Δt)_2 containing the projection coordinate point SP1 identified by the projection coordinate point identification unit 53. As shown in FIG. 11, the three-dimensional object region 72(t−Δt)_2 contains only the longitudinal vector V35 of the bar 85, so the projection coordinate point output unit 60 does not compute a composite vector.

As shown in FIG. 12, the projected coordinate point output unit 60 extracts, from the second predicted bird's-eye view image 72'(t), the three-dimensional object region 72'(t)_1 containing the projected coordinate point SP1' corresponding to the projected coordinate point SP1 specified by the projected coordinate point specifying unit 53. As shown in FIG. 12, the projected coordinate point output unit 60 calculates the composite vector SV3' of the longitudinal vector V35' of the bar 85 and the longitudinal vector V36' of the support column 81, both contained in the three-dimensional object region 72'(t)_1.

(Fifth determination process)
Next, as shown in FIG. 11, the projected coordinate point output unit 60 performs the fifth determination process, which determines whether the angle formed by the vector V35 and the radial-direction vector of the left camera 12a is within a predetermined reference angle.

(Sixth determination process)
Subsequently, as shown in FIGS. 11 and 12, the projected coordinate point output unit 60 performs the sixth determination process, which determines whether the angle formed by the vector V35 and the composite vector SV3' is within a predetermined reference angle.

The projected coordinate point output unit 60 extracts the projected coordinate point SP1 from the first bird's-eye view image 72(t−Δt) as a target to be displayed on the monitor 150 only when the determination conditions of both the fifth and sixth determination processes are satisfied.

(Verticality determination process for the three-dimensional object region containing the projected coordinate point SP2)
Next, the verticality determination process for the three-dimensional object region containing the projected coordinate point SP2 is described with reference to FIGS. 11 and 12.

As shown in FIG. 11, the projected coordinate point output unit 60 extracts, from the second bird's-eye view image 72(t), the three-dimensional object region 72(t)_6 containing the projected coordinate point SP2 specified by the projected coordinate point specifying unit 53. As shown in FIG. 11, the projected coordinate point output unit 60 calculates the composite vector SV4 of the longitudinal vector V37 of the bar 86 contained in the three-dimensional object region 72(t)_6 and the longitudinal vector V38 of the merged mass of the support column 83 and the support column 87 reflected on the surface of the puddle P, contained in the three-dimensional object region 72(t)_4.

As shown in FIG. 12, the projected coordinate point output unit 60 extracts, from the first predicted bird's-eye view image 72'(t−Δt), the three-dimensional object region 72'(t−Δt)_5 containing the projected coordinate point SP2' corresponding to the projected coordinate point SP2 specified by the projected coordinate point specifying unit 53. The three-dimensional object region 72'(t−Δt)_5 contains only the longitudinal vector V37' of the bar 86; therefore, the projected coordinate point output unit 60 does not calculate a composite vector.

(Seventh determination process)
Next, as shown in FIG. 11, the projected coordinate point output unit 60 performs the seventh determination process, which determines whether the angle formed by the composite vector SV4 and the radial-direction vector of the left camera 12a is within a predetermined reference angle.

(Eighth determination process)
Subsequently, as shown in FIGS. 11 and 12, the projected coordinate point output unit 60 performs the eighth determination process, which determines whether the angle formed by the composite vector SV4 and the vector V37' is within a predetermined reference angle.

The projected coordinate point output unit 60 extracts the projected coordinate point SP2 from the first bird's-eye view image 72(t−Δt) as a target to be displayed on the monitor 150 only when the determination conditions of both the seventh and eighth determination processes are satisfied.

(Display control process using a bird's-eye view image and a predicted bird's-eye view image)
Subsequently, the display control process performed by the projected coordinate point output unit 60 is described with reference to FIGS. 13 to 16.

First, as shown in FIGS. 13 and 14, the projected coordinate point output unit 60 performs the display control process for the monitor 150 using the first bird's-eye view image 72(t−Δt) and the second predicted bird's-eye view image 72'(t), which is generated by translating or rotating the first bird's-eye view image 72(t−Δt).
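Generating a predicted bird's-eye view image amounts to applying the vehicle's motion between frames as a rigid transform to the earlier image. Below is a minimal sketch, assuming the motion (translation in pixels and yaw change in degrees) is available from odometry; the function name, parameters, and the choice of rotation center (passed as an (x, y) tuple) are all hypothetical, not the patent's implementation.

```python
import cv2

def predict_birds_eye_view(prev_bev, dx_px, dy_px, dtheta_deg, center_xy):
    """Rigidly move an earlier bird's-eye view image by the vehicle motion."""
    h, w = prev_bev.shape[:2]
    # Rotation about the camera position in bird's-eye view coordinates.
    m = cv2.getRotationMatrix2D(center_xy, dtheta_deg, 1.0)
    # Fold the translation into the same 2x3 affine matrix.
    m[0, 2] += dx_px
    m[1, 2] += dy_px
    return cv2.warpAffine(prev_bev, m, (w, h))
```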

The first bird's-eye view image 72(t−Δt) shown in FIG. 13 is displayed on the monitor 150. The coordinate points SP1 and SP2 in the first bird's-eye view image 72(t−Δt) are the projected coordinate points at which the bars 85 and 86 are projected onto the road surface 80 directly below them, selected as display targets for the monitor 150 as a result of the first to seventh determination processes described above. The coordinate point SP3 indicates the position of the foot of the support column 81, and the coordinate point SP4 indicates the position of the foot of the support column 82.

The coordinate point SP5 indicates the position of the center of gravity of the bar 85. The coordinate point SP6 indicates the position of the foot of the support column 83, and the coordinate point SP7 indicates the position of the foot of the support column 84. The coordinate point SP8 indicates the position of the center of gravity of the bar 86. The coordinate point SP9 indicates the position of the tip of the support column 87 reflected on the surface of the puddle P, and the coordinate point SP10 indicates the position of the tip of the support column 88 reflected on the surface of the puddle P. For example, the coordinate points SP6 and SP7 are obtained as coordinate points at the positions of the portions removed in the difference image between the second predicted bird's-eye view image 72'(t) and the second bird's-eye view image 72(t) described above. In FIG. 13, the coordinate points SP1 to SP10 are indicated by white stars.

The coordinate points SP1' and SP2' appearing in the second predicted bird's-eye view image 72'(t) shown in FIG. 14 correspond to the coordinate points SP1 and SP2 appearing in the first bird's-eye view image 72(t−Δt). Similarly, the coordinate points SP3', SP4', SP5', SP6', SP7', SP8', SP9', and SP10' correspond to the coordinate points SP3, SP4, SP5, SP6, SP7, SP8, SP9, and SP10, respectively. In FIG. 14, the coordinate points SP1' to SP10' are indicated by black stars.

Next, as shown in FIG. 15, the projected coordinate point output unit 60 compares the interval between each pair of corresponding coordinate points with a threshold value. FIG. 15 shows a state in which only the coordinate points SP1 to SP10 and the coordinate points SP1' to SP10' are displayed on the superimposed bird's-eye view image obtained by moving and rotating the first bird's-eye view image 72(t−Δt) and the second predicted bird's-eye view image 72'(t) to align their positions and then superimposing them. As shown in FIG. 15, the projected coordinate point output unit 60 compares each of the intervals D1 to D10 with a predetermined threshold value.

The intervals D1 through D10 are the intervals, in the traveling direction of the vehicle 10, between the coordinate points SP1 through SP10 and the corresponding coordinate points SP1' through SP10', respectively.

In FIG. 15, since the interval D1 is substantially zero, the coordinate point SP1 and the corresponding coordinate point SP1' are indicated by the same star, and the interval D1 is not shown. Since the interval D2 is also substantially zero, the coordinate point SP2 and the corresponding coordinate point SP2' are likewise indicated by the same star, and the interval D2 is not shown. Similarly, the intervals D3, D4, D6, and D7 are each substantially zero, so these intervals are also not shown. As shown in FIG. 15, the projected coordinate point output unit 60 determines that each of the intervals D1, D2, D3, D4, D6, and D7 is equal to or less than the threshold value. Based on this determination, the coordinate points SP1', SP2', SP3', SP4', SP6', and SP7' are left displayed on the superimposed bird's-eye view image, as shown in FIG. 16.

The projected coordinate point output unit 60 determines that each of the intervals D5, D8, D9, and D10 exceeds the threshold value. Based on this determination, the coordinate points SP5', SP8', SP9', and SP10' are erased from the superimposed bird's-eye view image, as shown in FIG. 16.
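The interval check above can be read as a per-point filter in the traveling direction. The following is a minimal sketch with hypothetical names; the threshold is treated as a tuning parameter, and the traveling direction is assumed to be one image axis.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in superimposed bird's-eye view pixels

def filter_predicted_points(observed: Dict[str, Point],
                            predicted: Dict[str, Point],
                            threshold_px: float,
                            travel_axis: int = 1) -> Dict[str, Point]:
    """Keep a predicted point only if its traveling-direction interval to the
    corresponding observed point is at or below the threshold."""
    kept = {}
    for name, p_obs in observed.items():
        p_pred = predicted.get(name + "'")
        if p_pred is None:
            continue
        interval = abs(p_pred[travel_axis] - p_obs[travel_axis])
        if interval <= threshold_px:  # e.g. D1-D4, D6, D7 in FIG. 15
            kept[name + "'"] = p_pred
    return kept
```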

As described above, in the peripheral monitoring device 100 of the present embodiment, the projected coordinate point specifying unit 53 specifies, on the superimposed bird's-eye view image SVI, the point at which the first straight line L1 and the first straight line L21 intersect as the projected coordinate point SP1 at which the bar 85 is projected onto the road surface 80 directly below the bar 85. The projected coordinate point specifying unit 53 likewise specifies the point at which the second straight line L2 and the second straight line L22 intersect as the projected coordinate point SP2 at which the bar 86 is projected onto the road surface 80 directly below the bar 86. The specified projected coordinate points SP1 and SP2 are displayed on the monitor 150 on the superimposed bird's-eye view image by the projected coordinate point output unit 60. The driver of the vehicle 10 can therefore clearly grasp the positions of the bars 85 and 86 through the superimposed bird's-eye view image shown on the monitor 150, which helps prevent the vehicle 10 from coming into contact with the bars 85 and 86 present around the vehicle 10.
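The intersection of the two straight lines can be computed in closed form from a point and a direction vector for each line. A minimal sketch with a hypothetical helper follows; the coordinates in the usage line are illustrative only.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect lines p1 + s*d1 and p2 + u*d2; None if nearly parallel."""
    p1, d1, p2, d2 = (np.asarray(x, dtype=float) for x in (p1, d1, p2, d2))
    # Solve [d1 | -d2] @ [s, u]^T = p2 - p1 for the line parameters.
    a = np.column_stack((d1, -d2))
    if abs(np.linalg.det(a)) < 1e-9:
        return None
    s, _ = np.linalg.solve(a, p2 - p1)
    return p1 + s * d1

# Illustrative values only: a centroid and a radial direction from each of the
# two superimposed images; the returned point plays the role of SP1.
sp = line_intersection((3.0, 1.0), (1.0, 2.0), (6.0, 0.0), (-1.0, 3.0))
```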

Further, according to the peripheral monitoring device 100 of the present embodiment, the projected coordinate point output unit 60 determines that each of the intervals D5 and D8 exceeds the threshold value and erases the coordinate points SP5' and SP8' from the superimposed bird's-eye view image. The coordinate point SP5' corresponds to the coordinate point SP5, which indicates the position of the center of gravity of the bar 85, and the coordinate point SP8' corresponds to the coordinate point SP8, which indicates the position of the center of gravity of the bar 86. Owing to the characteristics of the bird's-eye view image, the coordinate points SP5 and SP8 indicating the positions of the centers of gravity of the bars 85 and 86 appear farther away than the actual positions of the centers of gravity. Erasing such coordinate points, which differ from the actual positions, from the superimposed bird's-eye view image therefore improves usability.

In the above embodiment, an example in which the field-of-view range (angle of view) of the imaging unit 12 is set to 180° was shown. However, the present invention is not limited to this; the field-of-view range (angle of view) of the imaging unit 12 may be set to an angle other than 180°. For example, the field-of-view range (angle of view) of the imaging unit 12 may be set to 120°, in which case the three-dimensional object regions 72(t−Δt)_1 and 72(t−Δt)_6 shown in FIG. 4 become invisible regions outside the field of view of the imaging unit 12.

In the above embodiment, an example in which the projected coordinate point output unit 60 displays on the monitor 150 the projected coordinate point obtained by projecting the bar onto the road surface 80 directly below the bar was shown. However, the present invention is not limited to this; the projected coordinate point output unit 60 may instead output the projected coordinate point by voice.

In the above embodiment, an example in which the imaging unit 12 is constituted by the left camera 12a attached to the left side of the vehicle 10 was shown. However, the present invention is not limited to this. For example, the imaging unit 12 may be constituted by a front camera attached to the front of the vehicle 10, a rear camera attached to the rear of the vehicle 10, or a right camera attached to the right side of the vehicle 10.

In the above embodiment, an example in which the three-dimensional object detection unit 40 detects a three-dimensional object region using the difference in luminance between bird's-eye view images was shown. However, the present invention is not limited to this. For example, the three-dimensional object detection unit 40 may detect a three-dimensional object region using the similarity of edge detection results (edge strength and edge direction) between bird's-eye view images, or by dividing each bird's-eye view image into a plurality of small blocks and using the similarity of grayscale histograms or of histograms of edge detection results obtained from each small block.
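One possible reading of the histogram-based alternative is sketched below; this is not the patent's implementation, the block size and similarity threshold are hypothetical tuning parameters, and the inputs are assumed to be single-channel (grayscale) images.

```python
import cv2

def dissimilar_blocks(bev_a, bev_b, block=32, min_correl=0.8):
    """Return (row, col) indices of blocks whose grayscale histograms
    correlate poorly between the two bird's-eye view images."""
    h, w = bev_a.shape[:2]
    hits = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ha = cv2.calcHist([bev_a[y:y+block, x:x+block]], [0], None, [32], [0, 256])
            hb = cv2.calcHist([bev_b[y:y+block, x:x+block]], [0], None, [32], [0, 256])
            if cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL) < min_correl:
                hits.append((y // block, x // block))  # candidate 3-D object block
    return hits
```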

In the above embodiment, an example in which the three-dimensional object region extraction unit is composed of the first three-dimensional object region extraction unit 51 and the second three-dimensional object region extraction unit 52 was shown. However, the present invention is not limited to this. For example, the three-dimensional object region extraction unit may be composed of a single three-dimensional object region extraction unit.

Although the embodiment of the present invention has been described above in detail with reference to the drawings, the embodiment is merely an example of the present invention, and the present invention is not limited to the configuration of the embodiment. It goes without saying that design changes and the like within a range not departing from the gist of the present invention are included in the present invention.

10 ... Vehicle
12 ... Imaging unit
12a ... Left camera
30 ... Bird's-eye view image generation unit
50 ... Projected coordinate point acquisition unit
51 ... First three-dimensional object region extraction unit
51A ... Region division unit
51B ... Verticality evaluation unit
52 ... Second three-dimensional object region extraction unit
52A ... Region division unit
52B ... Verticality evaluation unit
53 ... Projected coordinate point specifying unit
72(t−Δt), 72(t) ... Bird's-eye view images
72'(t−Δt), 72'(t) ... Predicted bird's-eye view images
72(t−Δt)_1, 72(t−Δt)_2, 72(t−Δt)_3, 72(t−Δt)_4, 72(t−Δt)_5, 72(t−Δt)_6, 72(t)_1, 72(t)_2, 72(t)_3, 72(t)_4, 72(t)_5, 72(t)_6 ... Three-dimensional object regions
80 ... Road surface
81, 82, 83, 84 ... Support columns
85, 86 ... Bars (floating objects)
87, 88 ... Support columns reflected on the surface of the puddle
100 ... Peripheral monitoring device
122 ... Display control unit
150 ... Monitor (display unit)
G1, G2, G21, G22 ... Centers of gravity
L1, L21 ... First straight lines
L2, L22 ... Second straight lines
RL10, RL11, RL12, RL13, RL14, RL15, RL16, RL17 ... First radiations
RL20, RL21, RL22, RL23, RL24, RL25, RL26, RL27 ... Second radiations
SP1, SP1', SP2, SP2' ... Projected coordinate points
SV1, SV2', SV3', SV4 ... Composite vectors
SVI ... Superimposed bird's-eye view image
V1, V2, V3, V4, V5, V6, V21, V22, V23, V24, V25, V26, V31, V31', V32, V33, V33', V34' ... Vectors
θ1, θ2 ... Equal angles (predetermined angles)

Claims (5)

A peripheral monitoring device comprising:
an imaging unit attached to a vehicle traveling on a road surface, the imaging unit imaging the periphery of the vehicle including a floating object that is not in contact with the road surface;
a bird's-eye view image generation unit that generates a first bird's-eye view image and a second bird's-eye view image by viewpoint-converting two images that are obtained by the imaging unit imaging the periphery and that can change according to the movement of the vehicle;
a first three-dimensional object region extraction unit that evaluates the verticality of the floating object with respect to the road surface and extracts a first three-dimensional object region containing the floating object from among a plurality of three-dimensional object regions into which the first bird's-eye view image is divided, at predetermined angular intervals about the position of the imaging unit, by first radiations extending radially from the imaging unit;
a second three-dimensional object region extraction unit that evaluates the verticality of the floating object with respect to the road surface and extracts a second three-dimensional object region containing the floating object from among a plurality of three-dimensional object regions into which the second bird's-eye view image is divided, at predetermined angular intervals about the position of the imaging unit, by second radiations extending radially from the imaging unit; and
a projected coordinate point specifying unit that specifies, on a superimposed bird's-eye view image in which the first bird's-eye view image and the second bird's-eye view image are superimposed, the point at which a first straight line extending along the first radiation from the center of gravity of the floating object contained in the first three-dimensional object region intersects a second straight line extending along the second radiation from the center of gravity of the floating object contained in the second three-dimensional object region, as a projected coordinate point at which the floating object is projected onto the road surface directly below the floating object.
In the peripheral monitoring device according to claim 1, the device comprises:
a display unit mounted on the vehicle; and
a display control unit that displays, on the display unit, the superimposed bird's-eye view image including the projected coordinate point specified by the projected coordinate point specifying unit.
In the peripheral monitoring device according to claim 2,
when the specification of the projected coordinate point by the projected coordinate point specifying unit is completed, the display control unit erases from the display unit the coordinate point indicating the position of the center of gravity of the floating object contained in the first three-dimensional object region and the coordinate point indicating the position of the center of gravity of the floating object contained in the second three-dimensional object region.
In the peripheral monitoring device according to claim 2 or claim 3,
the imaging unit images, as the floating object, a linear body that crosses a support column erected on the road surface at a position higher than the road surface, and
the display control unit displays the superimposed bird's-eye view image including the projected coordinate point on the display unit when the angle formed by a radial vector extending radially from the imaging unit and the composite vector of the longitudinal vector of the support column contained in the first three-dimensional object region or the second three-dimensional object region and the longitudinal vector of the linear body contained in the first three-dimensional object region or the second three-dimensional object region is within a predetermined reference angle.
In the peripheral monitoring device according to any one of claims 1 to 4,
the imaging unit images, as the floating object, a linear body that crosses a support column erected on the road surface at a position higher than the road surface,
the first three-dimensional object region extraction unit evaluates the verticality of the linear body with respect to the road surface by determining whether the angle formed by the longitudinal vector of the linear body contained in a specific three-dimensional object region, among the plurality of three-dimensional object regions into which the first bird's-eye view image is divided, and the radial-direction vector of the imaging unit exceeds a predetermined reference angle, and
the second three-dimensional object region extraction unit evaluates the verticality of the linear body with respect to the road surface by determining whether the angle formed by the longitudinal vector of the linear body contained in a specific three-dimensional object region, among the plurality of three-dimensional object regions into which the second bird's-eye view image is divided, and the radial-direction vector of the imaging unit exceeds a predetermined reference angle.
JP2017166766A 2017-08-31 2017-08-31 Peripheral monitoring device Active JP6861599B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017166766A JP6861599B2 (en) 2017-08-31 2017-08-31 Peripheral monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2017166766A JP6861599B2 (en) 2017-08-31 2017-08-31 Peripheral monitoring device

Publications (2)

Publication Number Publication Date
JP2019047244A JP2019047244A (en) 2019-03-22
JP6861599B2 true JP6861599B2 (en) 2021-04-21

Family

ID=65816614

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2017166766A Active JP6861599B2 (en) 2017-08-31 2017-08-31 Peripheral monitoring device

Country Status (1)

Country Link
JP (1) JP6861599B2 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5938940B2 (en) * 2012-02-24 2016-06-22 日産自動車株式会社 Three-dimensional object detection device
JP6371553B2 (en) * 2014-03-27 2018-08-08 クラリオン株式会社 Video display device and video display system

Also Published As

Publication number Publication date
JP2019047244A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
US10685246B2 (en) Systems and methods for curb detection and pedestrian hazard assessment
USRE47559E1 (en) Parking area detecting apparatus and method thereof
US20190235073A1 (en) Navigation based on radar-cued visual imaging
US9619719B2 (en) Systems and methods for detecting traffic signs
EP3096286B1 (en) Image processing apparatus, image processing method, and computer program product
US10402665B2 (en) Systems and methods for detecting traffic signs
EP3110145A1 (en) External-environment recognition system, vehicle, and camera-dirtiness detection method
US9336595B2 (en) Calibration device, method for implementing calibration, and camera for movable body and storage medium with calibration function
CN107848415A (en) Display control unit, display device and display control method
WO2010047226A1 (en) Lane line detection device, lane line detection method, and lane line detection program
JP6139465B2 (en) Object detection device, driving support device, object detection method, and object detection program
WO2017002209A1 (en) Display control device, display control method, and display control program
JP2017032483A (en) Calibration device, calibration method and program
CN107004250B (en) Image generation device and image generation method
JP2013196387A (en) Image processor and image processing method
WO2019021876A1 (en) In-vehicle camera calibration device and method
JP6861599B2 (en) Peripheral monitoring device
CN116772730B (en) Crack size measurement method, computer storage medium and system
JP7095559B2 (en) Bound line detection device and lane marking method
JP2018077713A (en) Lane marking detection system
EP3227827B1 (en) Driver assistance system, motor vehicle and method for classifying a flow vector
JP5062316B2 (en) Lane marking device, lane marking detection method, and lane marking detection program
JP4231883B2 (en) Image processing apparatus and method
CN114644014A (en) Intelligent driving method based on lane line and related equipment
JP4847303B2 (en) Obstacle detection method, obstacle detection program, and obstacle detection apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20191126

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20200924

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20201006

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20201116

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20210309

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20210330

R150 Certificate of patent or registration of utility model

Ref document number: 6861599

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150