WO2016185678A1 - Blind-spot display device and blind-spot display method - Google Patents

Blind-spot display device and blind-spot display method

Info

Publication number
WO2016185678A1
WO2016185678A1 (PCT/JP2016/002215)
Authority
WO
WIPO (PCT)
Prior art keywords
image
blind spot
region
display device
peripheral
Prior art date
Application number
PCT/JP2016/002215
Other languages
French (fr)
Japanese (ja)
Inventor
大貴 五藤
勝之 福田
明 江頭
Original Assignee
株式会社デンソー (DENSO Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー (DENSO Corporation)
Publication of WO2016185678A1 publication Critical patent/WO2016185678A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/24 Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view in front of the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 Arrangements for holding or mounting radio sets, television sets, telephones, or the like; Arrangement of controls thereof

Definitions

  • the present disclosure relates to a blind spot display device and a blind spot display method for displaying an image of a blind spot area caused by a pillar.
  • the present disclosure has been made in view of the above points, and aims to provide a blind spot display device and a blind spot display method capable of presenting an image of a blind spot area or the like to the driver in an easy-to-understand manner without complicating the vehicle design.
  • the blind spot display device includes an image acquisition unit, an image generation unit, and an image display unit.
  • the image acquisition unit acquires a peripheral image obtained by imaging a vehicle peripheral region having a blind spot region outside the vehicle, which is generated when a driver's field of view is blocked by a frame including a vehicle pillar.
  • the image generation unit generates a boundary visualized image in which the region boundary between the blind spot region and the adjacent region in the vehicle peripheral region is visualized using the peripheral image acquired by the image acquisition unit as an original image.
  • the image display unit displays the boundary visualized image generated by the image generation unit on a display device provided in the vehicle interior.
  • an image of a blind spot area or the like can be presented to the driver in an easy-to-understand manner without complicating the vehicle design.
  • the blind spot display method includes: acquiring a peripheral image capturing a vehicle peripheral area having a blind spot area outside the vehicle, caused when the driver's field of view is blocked by a frame including a vehicle pillar; generating, using the acquired peripheral image as an original image, a boundary visualized image in which the region boundary between the blind spot region and the adjacent region in the vehicle peripheral region is visualized; and displaying the generated boundary visualized image on a display device provided in the vehicle interior.
  • FIG. 1 is a block diagram illustrating an overall configuration of a blind spot display device according to an embodiment of the present disclosure.
  • FIG. 2 is a top view of a vehicle equipped with a blind spot display device.
  • FIG. 3 is a view showing the interior of a vehicle equipped with a blind spot display device.
  • FIG. 4 is a block diagram showing a functional configuration of the ECU,
  • FIG. 5 is a flowchart of image generation processing according to the first embodiment of the present disclosure.
  • FIG. 6A is a diagram showing a boundary visualized image including a contour line superimposed image.
  • FIG. 6B is a diagram showing a boundary visualized image including a transparent superimposed image.
  • FIG. 7 is a flowchart of image generation processing according to the second embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating a boundary visualized image including a mask superimposed image.
  • a blind spot display device 1 shown in FIG. 1 includes an ECU (Electronic Control Unit) 2, a right camera 3, a left camera 4, a brightness detection unit 5, a right display 6, a left display 7, and a driver camera 8.
  • the right camera 3 and the left camera 4 have a function of imaging the vehicle peripheral area including the blind spot area of the left and right front pillars 31 and 32 shown in FIG.
  • the blind spot region is a region outside the vehicle that is generated when the driver's field of view is blocked by the vehicle frame including the left and right front pillars 31 and 32.
  • Each of the right camera 3 and the left camera 4 is configured by a CMOS camera or the like, captures an image of the area within the range of its imaging angle of view F (that is, the vehicle peripheral area), and outputs the captured image (hereinafter referred to as a "peripheral image") to the ECU 2.
  • of the vehicle peripheral area, the area other than the blind spot area is referred to as the adjacent area, and the boundary between the blind spot area and the adjacent area is referred to as the area boundary.
  • the ECU 2 is an electronic control unit that controls the entire apparatus.
  • the ECU 2 is configured mainly by the CPU 10 and includes a memory 11 such as a ROM, a RAM, and a flash memory, an input signal circuit, an output signal circuit, a power supply circuit, and the like.
  • the CPU 10, based on a program stored in the memory 11, acquires peripheral images from the right camera 3 and the left camera 4, and performs various processes such as generating, for each acquired peripheral image used as an original image, a boundary visualized image in which the region boundary between the blind spot region and the adjacent region in the vehicle peripheral region is visualized.
  • the right display 6 and the left display 7 each have a function of displaying the boundary visualized image generated by the ECU 2.
  • the right display 6 is configured by a liquid crystal display or the like provided near the right front pillar 31 in the vehicle interior, and displays a boundary visualized image whose original image is the peripheral image of the right camera 3.
  • the left display 7 is configured by a liquid crystal display or the like provided in the vicinity of the left front pillar 32 in the vehicle interior, and displays a boundary visualized image using a peripheral image of the left camera 4 as an original image.
  • the brightness detection unit 5 is configured by an illuminance sensor or the like that detects brightness outside the vehicle, and outputs the detection result to the ECU 2.
  • the driver camera 8 is arranged in the passenger compartment so as to capture a face area including the driver's eyes.
  • using a known gaze detection technique, the driver camera 8 detects the three-dimensional positions of, for example, the inner corner of the eye or the corneal reflection as a reference point and the iris or pupil as a moving point among the driver's eyes in the captured image, and detects the driver's line of sight based on the position of the moving point relative to the reference point. Basically, for example, if the iris of the left eye is far from the inner corner, the driver is looking to the left; if the iris of the left eye is close to the inner corner, the driver is looking to the right.
  • the driver's line-of-sight direction can be obtained in a three-dimensional space from the three-dimensional relative position of the moving point with respect to the reference point. Accordingly, the driver camera 8 functions as an eye point detection unit.
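As a rough illustration of this reference-point/moving-point principle, the line-of-sight direction can be taken as the unit vector from the reference point (e.g. the inner corner of the eye or the corneal reflection) toward the moving point (the iris or pupil center). The following minimal Python sketch is illustrative only; the function name and coordinate conventions are assumptions, not part of the disclosure.

```python
import math

def gaze_direction(reference_point, moving_point):
    """Return the unit vector from the reference point toward the moving
    point; both are (x, y, z) tuples in a common 3-D coordinate frame."""
    dx = moving_point[0] - reference_point[0]
    dy = moving_point[1] - reference_point[1]
    dz = moving_point[2] - reference_point[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    if norm == 0.0:
        raise ValueError("reference and moving points coincide")
    return (dx / norm, dy / norm, dz / norm)

# Iris center displaced to the left of and in front of the reference point
direction = gaze_direction((0.0, 0.0, 0.0), (-3.0, 0.0, 4.0))  # -> (-0.6, 0.0, 0.8)
```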
  • the driver camera 8 outputs information indicating the driver's eye position and line-of-sight direction thus detected (hereinafter referred to as “driver camera information”) to the ECU 2.
  • the body shape DB 9 stores body shape data indicating the three-dimensional position of each part of the frame such as a front pillar constituting the vehicle body.
  • the body shape DB 9 may be built in the memory 11.
  • the ECU 2 functionally includes an image acquisition unit 21, an image generation unit 22, and an image display unit 23. Note that processing for realizing the functions as the image acquisition unit 21, the image generation unit 22, and the image display unit 23 is executed by the CPU 10 based on a program stored in the memory 11.
  • the image acquisition unit 21 acquires peripheral images from each of the right camera 3 and the left camera 4 in time series, and supplies the acquired peripheral images to the image generation unit 22.
  • the image generation unit 22 executes a process (hereinafter referred to as “image generation process”) for generating a boundary visualized image using the peripheral image supplied from the image acquisition unit 21 as an original image for each of the right camera 3 and the left camera 4. Then, each of the generated boundary visualized images is supplied to the image display unit 23 in time series.
  • the image display unit 23 displays the boundary visualized images supplied from the image generation unit 22 in time series for each of the right camera 3 and the left camera 4 as videos on the right display 6 and the left display 7, respectively. Specifically, the boundary visualized video corresponding to the right camera 3 is displayed on the right display 6, and the boundary visualized video corresponding to the left camera 4 is displayed on the left display 7.
  • image generation processing executed by the image generation unit 22 will be described with reference to the flowchart of FIG. 5. This processing is started repeatedly at predetermined intervals while, for example, a switch (not shown) for inputting start and stop operations of the blind spot display function is on. In the following description, to avoid complexity, the processing target is simply described as the peripheral image supplied from the image acquisition unit 21, without particularly distinguishing between the right camera 3 and the left camera 4 or between the right display 6 and the left display 7.
  • the image generation unit 22 first acquires driver camera information from the driver camera 8 in step (hereinafter simply referred to as “S”) 110.
  • the body shape data is read from the body shape DB 9.
  • a blind spot area when the driver views the vehicle peripheral area is set on the peripheral image.
  • the driver's eye position is read from the driver camera information, and the front pillar position is extracted from the body shape data.
  • the blind spot area corresponding to the vector from the driver's eye position to the front pillar position is projected from world coordinates onto the camera image.
  • the vector can also be corrected based on the driver's line-of-sight direction.
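To make this world-coordinate-to-camera-image conversion concrete, the following top-view sketch computes which image columns the pillar occludes as seen from the driver's eye. It is a deliberately simplified model under assumed conditions (the camera is placed at the eye position and a linear angle-to-column mapping is used); all names and numbers are hypothetical, not taken from the disclosure.

```python
import math

def blind_spot_columns(eye, pillar_left, pillar_right,
                       cam_heading_deg, cam_fov_deg, image_width):
    """Top-view (x, y) sketch: project the angular sector occluded by the
    pillar, as seen from the driver's eye, onto camera image columns.
    Assumes the camera shares the eye position (a simplification) and a
    linear angle-to-column mapping."""
    def bearing(p):
        # Angle from straight ahead (+y), positive toward the right (+x)
        return math.degrees(math.atan2(p[0] - eye[0], p[1] - eye[1]))

    a1, a2 = sorted((bearing(pillar_left), bearing(pillar_right)))

    def to_col(angle_deg):
        # Map [heading - fov/2, heading + fov/2] onto [0, image_width]
        rel = (angle_deg - (cam_heading_deg - cam_fov_deg / 2)) / cam_fov_deg
        return max(0, min(image_width, round(rel * image_width)))

    return to_col(a1), to_col(a2)

# Pillar edges roughly 10 degrees either side of straight ahead,
# camera with a 100-degree field of view centred straight ahead
cols = blind_spot_columns((0.0, 0.0), (-0.176, 1.0), (0.176, 1.0),
                          0.0, 100.0, 1000)
```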
  • the contour line superimposed image shown in FIG. 6A is generated by superimposing the contour line image corresponding to the region boundary on the peripheral image.
  • as the contour image, an image that allows the region boundary of the peripheral image to show through is preferable, and an image whose image attributes, such as hue, brightness, saturation, and luminance, are set in advance to default values is used.
  • a transparent superimposed image shown in FIG. 6B is generated by superimposing a transparent image corresponding to the blind spot area on the peripheral image.
  • a transparent superimposed image is generated by performing a known filter process on the blind spot area portion of the peripheral image.
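One plausible form of such a filter process is a per-pixel alpha blend that tints the blind-spot portion with a body-like color while letting the scene show through. The sketch below is an assumption for illustration; the tint, alpha value, and data layout are not specified in the disclosure.

```python
def blend_blind_spot(image, blind_mask, tint=(180, 40, 40), alpha=0.3):
    """Sketch of a transparent superimposed image: pixels inside the
    blind-spot mask are alpha-blended with a body-colour tint so the scene
    shows through a ghosted pillar. `image` is an H x W list of (r, g, b)
    tuples; `blind_mask` is an H x W list of booleans. Tint and alpha are
    illustrative defaults, not values from the patent."""
    out = []
    for row, mrow in zip(image, blind_mask):
        new_row = []
        for (r, g, b), inside in zip(row, mrow):
            if inside:
                r = round((1 - alpha) * r + alpha * tint[0])
                g = round((1 - alpha) * g + alpha * tint[1])
                b = round((1 - alpha) * b + alpha * tint[2])
            new_row.append((r, g, b))
        out.append(new_row)
    return out

img = [[(100, 100, 100), (100, 100, 100)]]   # one grey 2-pixel row
mask = [[True, False]]                        # left pixel is in the blind spot
blended = blend_blind_spot(img, mask)
```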
  • the transparent image preferably allows the blind spot area portion of the peripheral image to show through, and has a vehicle body color or a color imitating translucent acrylic so as to remind the driver of the frame portion including the front pillar.
  • as the image attributes of the transparent image, default values set in advance are used.
  • an attribute change process for changing the image attributes of the contour superimposed image and the transparent superimposed image is performed, the peripheral image including the attribute-changed contour superimposed image and transparent superimposed image (that is, the boundary visualized image) is supplied to the image display unit 23, and this process ends.
  • the brightness and/or luminance of each of the contour superimposed image and the transparent superimposed image is changed according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 5, when the illuminance of the vehicle peripheral region is low, the brightness and/or luminance of each of the contour superimposed image and the transparent superimposed image is increased, and when the illuminance of the vehicle peripheral region is high, the brightness and/or luminance of each of the contour superimposed image and the transparent superimposed image is lowered.
  • the image attributes relating to each of the outline superimposed image and the transparent superimposed image are changed so that the contrast with respect to the surrounding image is higher than a predetermined value.
  • when the hue, lightness, saturation, and/or luminance of the peripheral image supplied from the image acquisition unit 21 are low, the hue, lightness, saturation, and/or luminance of each of the contour superimposed image and the transparent superimposed image are increased; when they are high, they are lowered.
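A minimal sketch of this attribute change process, assuming simple illuminance thresholds and gain factors (all values are illustrative, not from the disclosure), might look like:

```python
def adjust_luminance(base_luminance, ambient_lux,
                     low_lux=50.0, high_lux=5000.0,
                     gain_dark=1.4, gain_bright=0.7):
    """Raise the superimposed images' luminance when the surroundings are
    dark and lower it when they are bright, clamping to [0, 255].
    Thresholds and gains are illustrative assumptions."""
    if ambient_lux < low_lux:        # dark: brighten the overlay
        value = base_luminance * gain_dark
    elif ambient_lux > high_lux:     # bright: dim the overlay
        value = base_luminance * gain_bright
    else:                            # mid-range: leave unchanged
        value = base_luminance
    return max(0.0, min(255.0, value))

night = adjust_luminance(150.0, 10.0)     # dark surroundings -> 210.0
day = adjust_luminance(150.0, 20000.0)    # bright surroundings -> 105.0
```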
  • the transmission image corresponding to the blind spot area is superimposed on the peripheral image.
  • the second embodiment is different from the first embodiment in that a mask image corresponding to an adjacent region is superimposed on a peripheral image when generating a boundary visualized image.
  • the image generation unit 22 first acquires driver camera information from the driver camera 8 in S210.
  • the body shape data is read from the body shape DB9.
  • a contour line superimposed image shown in FIG. 6A is generated by superimposing a contour line image corresponding to the region boundary on the peripheral image.
  • the mask superimposed image shown in FIG. 8 is generated by superimposing the mask image corresponding to the adjacent region on the peripheral image.
  • a mask superimposed image is generated by performing a known filter process on the adjacent region portion of the peripheral image.
  • the adjacent region portion of the peripheral image is made inconspicuous so as to remind the driver that it is a portion other than the blind spot region in the vehicle peripheral region, that is, a region that the driver can recognize directly.
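A minimal sketch of such a mask superimposition, assuming a simple per-pixel attenuation of the adjacent region (the attenuation factor and data layout are illustrative assumptions, not from the disclosure):

```python
def mask_adjacent_region(image, blind_mask, dim=0.25):
    """Sketch of the second embodiment's mask superimposed image: pixels
    OUTSIDE the blind-spot mask (the adjacent region the driver can see
    directly) are dimmed so that only the blind-spot portion stands out.
    `image` is an H x W list of (r, g, b) tuples; `blind_mask` is an
    H x W list of booleans."""
    out = []
    for row, mrow in zip(image, blind_mask):
        out.append([
            (r, g, b) if inside else
            (round(r * dim), round(g * dim), round(b * dim))
            for (r, g, b), inside in zip(row, mrow)
        ])
    return out

img = [[(200, 120, 80), (200, 120, 80)]]   # one 2-pixel row
mask = [[True, False]]                      # left pixel is in the blind spot
masked = mask_adjacent_region(img, mask)
```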
  • as the image attributes of the mask image, default values set in advance are used.
  • an attribute change process for changing the image attributes of the contour superimposed image and the mask superimposed image is performed, the peripheral image including the attribute-changed contour superimposed image and mask superimposed image (that is, the boundary visualized image) is supplied to the image display unit 23, and this process ends.
  • the brightness and/or luminance of each of the contour superimposed image and the mask superimposed image is changed according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 5, when the illuminance of the vehicle peripheral region is low, the brightness and/or luminance of each of the contour superimposed image and the mask superimposed image is increased, and when the illuminance of the vehicle peripheral region is high, it is lowered.
  • the image attributes relating to each of the contour superimposed image and the mask superimposed image are changed so that the contrast with respect to the surrounding image is high.
  • when the hue, lightness, saturation, and/or luminance of the peripheral image supplied from the image acquisition unit 21 are low, the hue, lightness, saturation, and/or luminance of each of the contour superimposed image and the mask superimposed image are increased; when they are high, they are lowered.
  • the image attributes of the contour superimposed image and the transparent superimposed image, or the contour superimposed image and the mask superimposed image are changed, but the present invention is not limited to this.
  • at least one image attribute of the contour superimposed image, the transparent superimposed image, and the mask superimposed image may be changed.
  • the boundary visualized image is generated by visualizing the region boundary between the blind spot region of the left and right front pillars 31 and 32 and the adjacent region.
  • the present invention is not limited to this.
  • the boundary visualized image may be generated in the same manner for the blind spot areas of other pillars, such as a center pillar and a rear pillar.
  • the functions of one component in the above embodiment may be distributed among a plurality of components, or the functions of a plurality of components may be integrated into one component. Further, at least a part of the configuration of the above embodiment may be replaced with a known configuration having the same function. Moreover, a part of the configuration of the above embodiment may be omitted.
  • the present disclosure can also be realized in various forms, such as a system including the blind spot display device 1 as a constituent element, one or more programs for causing a computer to function as the blind spot display device 1, one or more media recording at least a part of such a program, and a blind spot display method.

Abstract

A blind-spot display device equipped with an image acquisition unit (21), an image generation unit (22), and an image display unit (23). The image acquisition unit (21) acquires a peripheral image capturing a vehicle periphery region which has a blind spot region outside the vehicle caused by blockage of the driver's field of view by a vehicle frame including a pillar. The image generation unit (22) generates a boundary-visible image which makes visible the region boundary between the blind spot region and the region adjacent to it in the vehicle periphery region, using the peripheral image acquired by the image acquisition unit as its original image. The image display unit (23) displays the boundary-visible image generated by the image generation unit on a display device provided inside the vehicle compartment.

Description

Blind spot display device and blind spot display method

Cross-reference of related applications
This application is based on Japanese Patent Application No. 2015-100161 filed on May 15, 2015, the contents of which are incorporated herein by reference.
The present disclosure relates to a blind spot display device and a blind spot display method for displaying an image of a blind spot area caused by a pillar.
Conventionally, a technology has been proposed in which a display is provided on the inner side of each of a pair of left and right front pillars of a vehicle, and an image of the blind spot area outside the vehicle caused by the front pillars blocking the driver's field of view is displayed on the display (see Patent Literature 1).
Patent Literature 1: International Publication No. WO 2009/157446
In the above conventional technology, if an image of the blind spot area is to be presented to the driver in an easy-to-understand manner, for example so that the pillar appears transparent, the display must be arranged along the pillar; thus there is a problem that the vehicle design, including the shape and arrangement of the display and the shape of frames such as the pillars, becomes complicated.
The present disclosure has been made in view of the above points, and aims to provide a blind spot display device and a blind spot display method capable of presenting an image of a blind spot area or the like to the driver in an easy-to-understand manner without complicating the vehicle design.
A blind spot display device according to one aspect of the present disclosure includes an image acquisition unit, an image generation unit, and an image display unit. The image acquisition unit acquires a peripheral image capturing a vehicle peripheral region having a blind spot region outside the vehicle, caused when the driver's field of view is blocked by a frame including a vehicle pillar.
The image generation unit generates a boundary visualized image in which the region boundary between the blind spot region and the adjacent region in the vehicle peripheral region is visualized, using the peripheral image acquired by the image acquisition unit as an original image. The image display unit displays the boundary visualized image generated by the image generation unit on a display device provided in the vehicle interior.
According to such a configuration, visualizing the region boundary in the displayed image makes it possible to let the driver intuitively know which image portion corresponds to the blind spot region, and further to let the driver instantly understand the correspondence between the blind spot region and the vehicle peripheral region (or the adjacent region), regardless of the shape or arrangement of the display device. Therefore, according to the present disclosure, an image of the blind spot area or the like can be presented to the driver in an easy-to-understand manner without complicating the vehicle design.
A blind spot display method according to another aspect of the present disclosure includes: acquiring a peripheral image capturing a vehicle peripheral area having a blind spot area outside the vehicle, caused when the driver's field of view is blocked by a frame including a vehicle pillar; generating, using the acquired peripheral image as an original image, a boundary visualized image in which the region boundary between the blind spot region and the adjacent region in the vehicle peripheral region is visualized; and displaying the generated boundary visualized image on a display device provided in the vehicle interior.
According to this blind spot display method, effects similar to those already described for the blind spot display device according to the above aspect of the present disclosure can be obtained.
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram illustrating the overall configuration of a blind spot display device according to an embodiment of the present disclosure; FIG. 2 is a top view of a vehicle equipped with the blind spot display device; FIG. 3 is a view showing the interior of a vehicle equipped with the blind spot display device; FIG. 4 is a block diagram showing the functional configuration of the ECU; FIG. 5 is a flowchart of image generation processing according to the first embodiment of the present disclosure; FIG. 6A is a diagram showing a boundary visualized image including a contour line superimposed image; FIG. 6B is a diagram showing a boundary visualized image including a transparent superimposed image; FIG. 7 is a flowchart of image generation processing according to the second embodiment of the present disclosure; and FIG. 8 is a diagram showing a boundary visualized image including a mask superimposed image.
Hereinafter, embodiments to which the present disclosure is applied will be described with reference to the drawings.
(First Embodiment)

A blind spot display device 1 shown in FIG. 1 includes an ECU (Electronic Control Unit) 2, a right camera 3, a left camera 4, a brightness detection unit 5, a right display 6, a left display 7, a driver camera 8, and a body shape DB (Data Base) 9.
The right camera 3 and the left camera 4 have a function of imaging the vehicle peripheral area including the blind spot areas of the left and right front pillars 31 and 32 shown in FIG. 2. The blind spot area is an area outside the vehicle caused when the driver's field of view is blocked by the vehicle frame including the left and right front pillars 31 and 32. The right camera 3 and the left camera 4 are each configured by a CMOS camera or the like; each captures an image of the area within the range of its imaging angle of view F (that is, the vehicle peripheral area) and outputs the captured image (hereinafter referred to as a "peripheral image") to the ECU 2. Hereinafter, of the vehicle peripheral area, the area other than the blind spot area is referred to as the adjacent area, and the boundary between the blind spot area and the adjacent area is referred to as the area boundary.
Returning to FIG. 1, the ECU 2 is an electronic control unit that controls the entire apparatus. The ECU 2 is configured mainly of the CPU 10, and includes a memory 11 such as ROM, RAM, and flash memory, an input signal circuit, an output signal circuit, a power supply circuit, and the like. In the ECU 2, the CPU 10, based on a program stored in the memory 11, acquires peripheral images from the right camera 3 and the left camera 4, and performs various processes such as generating, for each acquired peripheral image used as an original image, a boundary visualized image in which the region boundary between the blind spot region and the adjacent region in the vehicle peripheral region is visualized.
The right display 6 and the left display 7 each have a function of displaying a boundary visualized image generated by the ECU 2. For example, as shown in FIG. 3, the right display 6 is configured by a liquid crystal display or the like provided near the right front pillar 31 in the vehicle interior, and displays a boundary visualized image whose original image is the peripheral image of the right camera 3. Similarly, the left display 7 is configured by a liquid crystal display or the like provided near the left front pillar 32 in the vehicle interior, and displays a boundary visualized image whose original image is the peripheral image of the left camera 4.
The brightness detection unit 5 is configured by an illuminance sensor or the like that detects the brightness outside the vehicle, and outputs the detection result to the ECU 2.
The driver camera 8 is arranged in the passenger compartment so as to capture a face area including the driver's eyes. Using a known gaze detection technique, the driver camera 8 detects the three-dimensional positions of, for example, the inner corner of the eye or the corneal reflection as a reference point and the iris or pupil as a moving point among the driver's eyes in the captured image, and detects the driver's line of sight based on the position of the moving point relative to the reference point. Basically, for example, if the iris of the left eye is far from the inner corner, the driver is looking to the left; if the iris of the left eye is close to the inner corner, the driver is looking to the right. By applying this basic principle, the driver's line-of-sight direction can be obtained in three-dimensional space from the three-dimensional relative position of the moving point with respect to the reference point. Accordingly, the driver camera 8 functions as an eye point detection unit. The driver camera 8 outputs information indicating the driver's eye positions and line-of-sight direction thus detected (hereinafter referred to as "driver camera information") to the ECU 2.
The body shape DB 9 stores body shape data indicating the three-dimensional positions of the parts of the frame, such as the front pillars, that make up the vehicle body. The body shape DB 9 may also be built in the memory 11.

Next, the functional configuration of the ECU 2 will be described with reference to the block diagram of FIG. 4.

The ECU 2 functionally includes an image acquisition unit 21, an image generation unit 22, and an image display unit 23. The processing that realizes the functions of these units is executed by the CPU 10 on the basis of a program stored in the memory 11.

The image acquisition unit 21 acquires peripheral images from each of the right camera 3 and the left camera 4 in time series, and supplies the acquired peripheral images to the image generation unit 22.
For each of the right camera 3 and the left camera 4, the image generation unit 22 executes processing (hereinafter "image generation processing") that generates a boundary-visualized image with the peripheral image supplied from the image acquisition unit 21 as the original image, and supplies each generated boundary-visualized image to the image display unit 23 in time series.

The image display unit 23 displays the boundary-visualized images supplied in time series from the image generation unit 22 as video on the right display 6 and the left display 7: the boundary-visualized video corresponding to the right camera 3 is shown on the right display 6, and the boundary-visualized video corresponding to the left camera 4 on the left display 7.

Next, the image generation processing executed by the image generation unit 22 will be described with reference to the flowchart of FIG. 5. This processing is started repeatedly at a predetermined cycle while, for example, a switch (not shown) for starting and stopping the blind-spot display function is on. To avoid clutter, the following description does not distinguish between the right camera 3 and the left camera 4, or between the right display 6 and the left display 7, and simply treats the peripheral image supplied from the image acquisition unit 21 as the processing target.
When this processing starts, the image generation unit 22 first acquires driver camera information from the driver camera 8 in step (hereinafter simply "S") 110.

Next, in S120, body shape data is read from the body shape DB 9.

Next, in S130, the blind-spot region seen when the driver views the vehicle peripheral region is set on the peripheral image, based on the positional relationship between the driver's eye position and the front pillar position. The driver's eye position is read from the driver camera information, and the front pillar position is extracted from the body shape data. Specifically, using the camera parameters of the cameras 3 and 4, the blind-spot region corresponding to the vector from the driver's eye position to the front pillar position is converted from world coordinates into camera-image coordinates. The vector can also be corrected based on the driver's gaze direction.
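The specification does not detail the world-to-camera-image conversion in S130. As one plausible sketch under a pinhole camera model, the world-coordinate points bounding the eye-to-pillar sight lines could be projected into the peripheral image as follows; the intrinsic matrix `K`, rotation `R`, and translation `t` are stand-ins for the unspecified camera parameters of the cameras 3 and 4.

```python
import numpy as np

def project_to_image(points_world, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates with a
    pinhole model: x_cam = R X + t, then (u, v) = K x_cam / depth."""
    X = np.asarray(points_world, float).T                 # 3xN
    x_cam = R @ X + np.asarray(t, float).reshape(3, 1)    # camera frame
    uvw = K @ x_cam
    return (uvw[:2] / uvw[2]).T                           # Nx2 pixels

# Example intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A point on the optical axis lands at the principal point.
print(project_to_image([[0.0, 0.0, 2.0]], K, np.eye(3), np.zeros(3)))
```

Projecting all vertices of the blind-spot region this way yields the polygon on the peripheral image whose interior is treated as the blind-spot portion in the later steps.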
Next, in S140, the contour-line superimposed image shown in FIG. 6A is generated by superimposing a contour-line image corresponding to the region boundary on the peripheral image. The contour-line image is preferably one that lets the region boundary of the peripheral image show through, and its image attributes such as hue, lightness, saturation, and luminance are set to default values in advance.

Next, in S150, the transmission-superimposed image shown in FIG. 6B is generated by superimposing a transmission image corresponding to the blind-spot region on the peripheral image. Specifically, the transmission-superimposed image is generated by applying known filter processing to the blind-spot portion of the peripheral image. The transmission image preferably has the colour of the vehicle body or a colour imitating translucent acrylic, so as to remind the driver of the frame portion including the front pillar; its image attributes are set in advance to default values for letting the blind-spot portion of the peripheral image show through.
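S140 and S150 amount to two per-pixel blends over the peripheral image: a strong contour colour along the region boundary and a semi-transparent body-coloured tint over the blind-spot region. The colours and alpha values below are illustrative placeholders, not values from the specification.

```python
import numpy as np

def blend(image, mask, color, alpha):
    """Alpha-blend `color` onto the pixels selected by the HxW boolean
    `mask`; pixels outside the mask are left unchanged."""
    out = image.astype(float)
    out[mask] = (1.0 - alpha) * out[mask] + alpha * np.asarray(color, float)
    return out.astype(image.dtype)

def boundary_visualized(peripheral, blind_mask, boundary_mask):
    """Semi-transparent overlay on the blind-spot region (S150), then a
    near-opaque contour line along the region boundary (S140)."""
    img = blend(peripheral, blind_mask, color=(60, 60, 200), alpha=0.35)
    return blend(img, boundary_mask, color=(255, 230, 0), alpha=0.9)
```

With `blind_mask` obtained from the projected blind-spot polygon and `boundary_mask` from its outline, the result corresponds to the composite of FIGS. 6A and 6B.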
Next, in S160, attribute change processing that changes the image attributes of the contour-line superimposed image and the transmission-superimposed image is performed, the peripheral image containing the processed contour-line superimposed image and transmission-superimposed image (that is, the boundary-visualized image) is supplied to the image display unit 23, and the processing ends.

For example, the attribute change processing changes the lightness and/or luminance of the contour-line superimposed image and the transmission-superimposed image according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 5, when the illuminance of the vehicle peripheral region is low, the lightness and/or luminance of each image is raised, and when the illuminance of the vehicle peripheral region is high, it is lowered.

As another example, the attribute change processing changes the image attributes of the contour-line superimposed image and the transmission-superimposed image so that their contrast with the peripheral image exceeds a predetermined value. Specifically, when the hue, lightness, saturation, and/or luminance of the peripheral image supplied from the image acquisition unit 21 are low, the corresponding attributes of the contour-line superimposed image and the transmission-superimposed image are raised; when they are high, the corresponding attributes are lowered.
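The two attribute-change rules can be approximated by a single gain applied to the overlay colour: raise its lightness/luminance when the ambient illuminance reported by the brightness detection unit 5 is low, lower it when the illuminance is high. The thresholds and gain values below are invented placeholders; the specification fixes only the direction of the adjustment.

```python
import numpy as np

# Hypothetical thresholds; the patent only distinguishes "low" vs "high".
LOW_LUX, HIGH_LUX = 50.0, 10_000.0

def overlay_gain(ambient_lux):
    """Brightness gain for the superimposed images: above 1 in the
    dark, below 1 in bright surroundings, 1 in between."""
    if ambient_lux < LOW_LUX:
        return 1.5
    if ambient_lux > HIGH_LUX:
        return 0.6
    return 1.0

def adjust_overlay(overlay_rgb, ambient_lux):
    """Scale the overlay colour by the gain, clipped to 8-bit range."""
    g = overlay_gain(ambient_lux)
    return np.clip(np.asarray(overlay_rgb, float) * g, 0, 255).astype(np.uint8)
```

The contrast-based variant would compute the gain from the mean luminance of the peripheral image instead of the illuminance sensor, with the same low-raise/high-lower direction.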
According to the first embodiment described in detail above, the following effects are obtained.

(1a) Visualizing the region boundary in the display image lets the driver intuitively grasp which image portion corresponds to the blind-spot region, and lets the driver instantly understand the correspondence between the blind-spot region and the vehicle peripheral region (or the adjacent region) regardless of the shape or placement of the displays 6 and 7. The blind-spot region can therefore be presented to the driver clearly as video without complicating the vehicle design.

(2a) Because the blind-spot region seen when the driver views the vehicle peripheral region is set on the peripheral image based on the positional relationship between the driver's eye position and the front pillar position, the contour line representing the region boundary, for example, can be moved in the display image to follow the driver's eye position, and the blind-spot region can be presented to the driver more intuitively as video.

(3a) Superimposing a contour-line image corresponding to the region boundary on the peripheral image makes the region boundary clearly visible to the driver in the display image.

(4a) Superimposing a transmission image corresponding to the blind-spot region on the peripheral image presents the blind-spot region to the driver more intuitively in the display image.

(5a) Changing the image attributes of the contour-line superimposed image and the transmission-superimposed image, for example resetting them so that contrast is high against the brightness of the vehicle peripheral region or against the peripheral image, improves visibility in the display image.
(Second Embodiment)
Since the basic configuration of the second embodiment is the same as that of the first embodiment, description of the shared configuration is omitted and the differences are described below.
In the first embodiment described above, a transmission image corresponding to the blind-spot region was superimposed on the peripheral image when generating the boundary-visualized image. The second embodiment differs in that, when generating the boundary-visualized image, a mask image corresponding to the adjacent region is superimposed on the peripheral image instead.

Next, the image generation processing that the image generation unit 22 of the second embodiment executes in place of the image generation processing of the first embodiment (FIG. 5) will be described with reference to the flowchart of FIG. 7. The processing in S210 to S240 of FIG. 7 is the same as that in S110 to S140 of FIG. 5, so its description is partly abbreviated.
When this processing starts, the image generation unit 22 first acquires driver camera information from the driver camera 8 in S210.

Next, in S220, body shape data is read from the body shape DB 9.

Next, in S230, the blind-spot region seen when the driver views the vehicle peripheral region is set on the peripheral image based on the positional relationship between the driver's eye position and the front pillar position.

Next, in S240, the contour-line superimposed image shown in FIG. 6A is generated by superimposing a contour-line image corresponding to the region boundary on the peripheral image.

Next, in S250, the mask-superimposed image shown in FIG. 8 is generated by superimposing a mask image corresponding to the adjacent region on the peripheral image. Specifically, the mask-superimposed image is generated by applying known filter processing to the adjacent-region portion of the peripheral image. The mask image has image attributes set in advance to default values that make the adjacent-region portion of the peripheral image inconspicuous, so as to remind the driver that this portion is the part of the vehicle peripheral region other than the blind-spot region, or a region the driver can see directly.
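The mask of S250 is the complement of the transparent overlay of the first embodiment: instead of tinting the blind-spot region, the adjacent region is made inconspicuous, for instance by dimming it. The dim factor below is an illustrative placeholder for the unspecified default mask attributes.

```python
import numpy as np

def mask_adjacent(peripheral, blind_mask, dim=0.4):
    """Dim everything outside the blind-spot region so that the
    blind-spot portion stands out (cf. FIG. 8); blind_mask is an
    HxW boolean array marking the blind-spot pixels."""
    out = peripheral.astype(float)
    out[~blind_mask] *= dim          # darken the adjacent region only
    return out.astype(peripheral.dtype)
```

A production implementation might desaturate or blur the adjacent region rather than darken it; any filter that leaves the blind-spot portion untouched fits the description above.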
Next, in S260, attribute change processing that changes the image attributes of the contour-line superimposed image and the mask-superimposed image is performed, the peripheral image containing the processed contour-line superimposed image and mask-superimposed image (that is, the boundary-visualized image) is supplied to the image display unit 23, and the processing ends.

For example, the attribute change processing changes the lightness and/or luminance of the contour-line superimposed image and the mask-superimposed image according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 5, when the illuminance of the vehicle peripheral region is low, the lightness and/or luminance of each image is raised, and when the illuminance of the vehicle peripheral region is high, it is lowered.

As another example, the attribute change processing changes the image attributes of the contour-line superimposed image and the mask-superimposed image so that their contrast with the peripheral image is high. Specifically, when the hue, lightness, saturation, and/or luminance of the peripheral image supplied from the image acquisition unit 21 are low, the corresponding attributes of the contour-line superimposed image and the mask-superimposed image are raised; when they are high, the corresponding attributes are lowered.
According to the second embodiment described in detail above, the following effects are obtained in addition to effects (1a) to (3a) of the first embodiment.

(1b) Superimposing a mask image corresponding to the adjacent region on the peripheral image presents the blind-spot region to the driver more intuitively in the display image.

(2b) Changing the image attributes of the contour-line superimposed image and the mask-superimposed image, for example resetting them so that contrast is high against the brightness of the vehicle peripheral region or against the peripheral image, improves visibility in the display image.
(Other Embodiments)
The embodiments of the present disclosure have been described above, but the present disclosure is not limited to these embodiments and can take various forms.
In the above embodiments, the image attributes of the contour-line superimposed image and the transmission-superimposed image, or of the contour-line superimposed image and the mask-superimposed image, were changed, but the disclosure is not limited to this. For example, at least one image attribute of the contour-line superimposed image, the transmission-superimposed image, and the mask-superimposed image may be changed.

In the above embodiments, a boundary-visualized image was generated that visualizes the region boundary between the blind-spot regions of the left and right front pillars 31 and 32 and the adjacent regions, but the disclosure is not limited to this. For example, boundary-visualized images may be generated in the same way for the blind-spot regions of other pillars, such as the center pillars and rear pillars.

The functions of one component in the above embodiments may be distributed over a plurality of components, and the functions of a plurality of components may be integrated into one component. At least part of the configuration of the above embodiments may be replaced with a known configuration having the same functions, and part of the configuration of the above embodiments may be omitted. At least part of the configuration of one of the above embodiments may also be added to or substituted for the configuration of another of the above embodiments. All aspects contained in the technical idea specified solely by the wording of the claims are embodiments of the present disclosure.

Besides the blind-spot display device 1 described above, the present disclosure can also be realized in various other forms: a system having the blind-spot display device 1 as a component, one or more programs for causing a computer to function as the blind-spot display device 1, one or more media on which at least part of such a program is recorded, a blind-spot display method, and so on.
The present disclosure has been described with reference to embodiments, but it is to be understood that the disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent scope. In addition, various combinations and forms, as well as other combinations and forms including only a single element, or more or fewer elements, also fall within the scope and spirit of the present disclosure.

Claims (9)

  1.  A blind-spot display device comprising:
     an image acquisition unit (21) that acquires a peripheral image capturing a vehicle peripheral region containing a blind-spot region outside the vehicle, the blind-spot region arising where a driver's view is blocked by a frame including a pillar of the vehicle;
     an image generation unit (22) that, using the peripheral image acquired by the image acquisition unit as an original image, generates a boundary-visualized image in which a region boundary between the blind-spot region and an adjacent region of the vehicle peripheral region is visualized; and
     an image display unit (23) that displays the boundary-visualized image generated by the image generation unit on a display device provided in an interior of the vehicle.
  2.  The blind-spot display device according to claim 1, further comprising:
     an eye-point detection unit (8) that detects a position of the driver's eyes,
     wherein the image generation unit (22) sets, on the peripheral image, the blind-spot region seen when the driver views the vehicle peripheral region, based on a positional relationship between the eye position detected by the eye-point detection unit (8) and a position of the pillar.
  3.  The blind-spot display device according to claim 1 or 2,
     wherein the image generation unit (22), when generating the boundary-visualized image, superimposes a contour-line image corresponding to the region boundary on the peripheral image.
  4.  The blind-spot display device according to any one of claims 1 to 3,
     wherein the image generation unit (22), when generating the boundary-visualized image, superimposes a transmission image corresponding to the blind-spot region on the peripheral image.
  5.  The blind-spot display device according to any one of claims 1 to 4,
     wherein the image generation unit (22), when generating the boundary-visualized image, superimposes a mask image corresponding to the adjacent region on the peripheral image.
  6.  The blind-spot display device according to any one of claims 1 to 5,
     wherein the image generation unit (22), when generating the boundary-visualized image, changes at least one image attribute among hue, lightness, saturation, and luminance of a superimposed image in which a contour-line image corresponding to the region boundary, a transmission image corresponding to the blind-spot region, or a mask image corresponding to the adjacent region is superimposed on the peripheral image.
  7.  The blind-spot display device according to claim 6,
     wherein the image generation unit (22) changes at least one of the lightness and the luminance among the image attributes of the superimposed image according to brightness of the vehicle peripheral region.
  8.  The blind-spot display device according to claim 6 or 7,
     wherein the image generation unit (22) changes the image attributes of the superimposed image so that contrast with the peripheral image acquired by the image acquisition unit (21) is higher than a predetermined value.
  9.  A blind-spot display method comprising:
     acquiring a peripheral image capturing a vehicle peripheral region containing a blind-spot region outside the vehicle, the blind-spot region arising where a driver's view is blocked by a frame including a pillar of the vehicle (21);
     generating, using the peripheral image as an original image, a boundary-visualized image in which a region boundary between the blind-spot region and an adjacent region of the vehicle peripheral region is visualized (22); and
     displaying the boundary-visualized image on a display device provided in an interior of the vehicle (23).

PCT/JP2016/002215 2015-05-15 2016-04-27 Blind-spot display device and blind-spot display method WO2016185678A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015100161A JP2016215725A (en) 2015-05-15 2015-05-15 Dead angle display device and dead angle display method
JP2015-100161 2015-05-15

Publications (1)

Publication Number Publication Date
WO2016185678A1 true WO2016185678A1 (en) 2016-11-24

Family

ID=57319735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/002215 WO2016185678A1 (en) 2015-05-15 2016-04-27 Blind-spot display device and blind-spot display method

Country Status (2)

Country Link
JP (1) JP2016215725A (en)
WO (1) WO2016185678A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220185182A1 (en) * 2020-12-15 2022-06-16 Panasonic Automotive Systems Company Of America, Division Of Panasonic Corporation Of North America Target identification for vehicle see-through applications

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010058742A (en) * 2008-09-05 2010-03-18 Mazda Motor Corp Vehicle drive assisting device
JP2010221980A (en) * 2009-03-25 2010-10-07 Aisin Seiki Co Ltd Surroundings monitoring device for vehicle
JP2014198531A (en) * 2013-03-29 2014-10-23 アイシン精機株式会社 Image display controller, image display system, and display unit


Also Published As

Publication number Publication date
JP2016215725A (en) 2016-12-22

Similar Documents

Publication Publication Date Title
JP5983693B2 (en) Mirror device with display function and display switching method
WO2015015919A1 (en) Vehicle periphery monitoring device
WO2018047400A1 (en) Vehicle display control device, vehicle display system, vehicle display control method, and program
KR20170135952A (en) A method for displaying a peripheral area of a vehicle
US11034305B2 (en) Image processing device, image display system, and image processing method
JP2016055782A5 (en)
JP2017034453A (en) Image processing apparatus, image display system, and image processing method
JP6589796B2 (en) Gesture detection device
JP2018022958A (en) Vehicle display controller and vehicle monitor system
JP2018121287A (en) Display control apparatus for vehicle, display system for vehicle, display control method for vehicle, and program
JP2017097608A (en) Image recognition device
JP5562498B1 (en) Room mirror, vehicle blind spot support device using the room mirror, and display image adjustment method of the room mirror or vehicle blind spot support device
WO2016185678A1 (en) Blind-spot display device and blind-spot display method
WO2016185677A1 (en) Vehicle periphery display device and vehicle periphery display method
JP2017098785A (en) Display controller and display control program
CN109415020B (en) Luminance control device, luminance control system and luminance control method
JP2019047296A (en) Display device, display method, control program, and electronic mirror system
JP2016199266A (en) Mirror device with display function and display switching method
JP2017138645A (en) Sight-line detection device
JP5617678B2 (en) Vehicle display device
JP2021033872A (en) Display control device for vehicle, display control method for vehicle, and program
JP2016082329A (en) Video processing apparatus, and on-vehicle video processing system
WO2018193579A1 (en) Image recognition device
WO2016129241A1 (en) Display control device and display system
JP2018002152A (en) Mirror device with display function and display switching method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16796075

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16796075

Country of ref document: EP

Kind code of ref document: A1