WO2016185677A1 - Vehicle periphery display device and vehicle periphery display method - Google Patents


Info

Publication number: WO2016185677A1
Application number: PCT/JP2016/002214
Authority: WO (WIPO PCT)
Prior art keywords: image, vehicle, display, region, peripheral
Other languages: French (fr), Japanese (ja)
Inventor: Daiki Goto (大貴 五藤)
Original assignee: DENSO Corporation (株式会社デンソー)
Application filed by DENSO Corporation
Publication of WO2016185677A1

Classifications

    • B60R1/28: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle with an adjustable field of view
    • B60R1/23: Real-time viewing arrangements for drivers or passengers using optical image capturing systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle with a predetermined field of view
    • B60R11/02: Arrangements for holding or mounting radio sets, television sets, telephones, or the like; arrangement of controls thereof
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits characterised by the display of a graphic pattern, e.g. using an all-points-addressable (APA) memory
    • H04N7/18: Closed-circuit television (CCTV) systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to a vehicle periphery display device and a vehicle periphery display method for displaying an image of an external periphery region of the vehicle.
  • An object of the present disclosure is to provide a vehicle periphery display device and a vehicle periphery display method that can present images of a plurality of regions in the external peripheral region of a vehicle to the driver in an easily understandable manner, without increasing the number of display devices.
  • the vehicle periphery display device includes an image acquisition unit, an image display unit, a situation determination unit, and a display area change unit.
  • the image acquisition unit acquires a first peripheral image and a second peripheral image obtained by imaging a first region and a second region that are different from each other in the external peripheral region of the vehicle.
  • the image display unit displays the first peripheral image and the second peripheral image acquired by the image acquisition unit on one screen of a display device provided in the vehicle interior.
  • the situation determination unit determines a traveling situation of the vehicle.
  • The display area changing unit treats the display area of the display device in which the first peripheral image is displayed as a main display area and the area in which the second peripheral image is displayed as a sub display area, and causes the image display unit to enlarge the sub display area when a predetermined traveling condition is satisfied.
  • Because a plurality of peripheral images are displayed on the display screen with a master-slave relationship, the driver can intuitively grasp which image portion corresponds to each external peripheral region of the vehicle. Furthermore, by enlarging the sub display area according to the traveling situation of the vehicle, the driver can instantly understand which area needs to be checked in the current driving scene, and an image of that area can be presented to the driver with good visibility. Therefore, according to the present disclosure, images of a plurality of areas in the external peripheral region of the vehicle can be presented to the driver in an easily understandable manner without increasing the number of display devices.
  • The vehicle periphery display method acquires a first peripheral image and a second peripheral image obtained by capturing a first region and a second region that are different from each other in the external peripheral region of the vehicle, and displays them on one screen of a display device provided in the vehicle interior, with the area in which the first peripheral image is displayed serving as the main display area and the area in which the second peripheral image is displayed serving as the sub display area.
  • When a predetermined traveling situation of the vehicle is satisfied, the sub display area is enlarged.
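As a rough sketch of the method above, the following Python fragment models one screen split into a main display area and a sub display area that is enlarged while a traveling condition holds. The `Layout` type, the function name `update_layout`, and the 20 %/50 % sub-area ratios are illustrative assumptions, not values taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Layout:
    main_w: int   # width (pixels) of the main display area
    sub_w: int    # width (pixels) of the sub display area on the same screen

def update_layout(screen_w: int, traveling_condition_met: bool) -> Layout:
    """Split one screen between the main and sub display areas.

    The first peripheral image (rear side) occupies the main area; the
    second peripheral image (front side) occupies the sub area, which is
    enlarged while the traveling condition holds.
    """
    sub_ratio = 0.5 if traveling_condition_met else 0.2  # assumed ratios
    sub_w = int(screen_w * sub_ratio)
    return Layout(main_w=screen_w - sub_w, sub_w=sub_w)
```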
  • FIG. 1 is a block diagram illustrating an overall configuration of a vehicle periphery display device according to an embodiment of the present disclosure.
  • FIG. 2 is a top view of a vehicle equipped with a vehicle periphery display device
  • FIG. 3 is a diagram showing the interior of a vehicle equipped with a vehicle periphery display device
  • FIG. 4 is a block diagram showing a functional configuration of the ECU
  • FIG. 5 is a flowchart of image generation processing according to the first embodiment of the present disclosure.
  • FIG. 6A is a diagram showing a boundary visualized image including a contour line superimposed image;
  • FIG. 6B is a diagram showing a boundary visualization image including a transparent superimposed image
  • FIG. 7 is a flowchart of the display control process.
  • FIG. 8A is a diagram illustrating a case where the sub display area is reduced and displayed in the display example of the left display 7.
  • FIG. 8B is a diagram showing a case where the sub display area is enlarged and displayed in the display example of the left display 7.
  • FIG. 9A is a diagram showing a case where the sub display area is reduced and displayed in the display example of the right display 6;
  • FIG. 9B is a diagram showing a case where the sub display area is enlarged and displayed in the display example of the right display 6;
  • FIG. 10 is a flowchart of image generation processing according to the second embodiment of the present disclosure.
  • FIG. 11 is a diagram illustrating a boundary visualized image including a mask superimposed image.
  • (First embodiment) The vehicle periphery display device 1 shown in FIG. 1 includes an ECU (Electronic Control Unit) 2, a rear side camera 3, a front side camera 4, an ADAS locator 5, a right display 6, a left display 7, a vehicle speed sensor 8, a direction indicator 9, a brightness detection unit 15, a driver camera 18, and a body shape DB (Data Base) 19.
  • The rear side cameras 3 are mounted on the right and left sides of the vehicle shown in FIG. 2, near the positions where side mirrors are normally installed, and consist of a right rear camera 3A and a left rear camera 3B that respectively image the rear side regions (corresponding to the first region) of the vehicle. Each of the right rear camera 3A and the left rear camera 3B is configured by a CMOS camera or the like, images a region within the range of the imaging field angle G (that is, the rear side region), and outputs the captured image (hereinafter referred to as the “first peripheral image”) to the ECU 2.
  • No side mirrors are installed in this vehicle; instead, the displays 6 and 7 displaying the first peripheral images form, together with the rear side cameras 3, a so-called electronic mirror.
  • the front side camera 4 includes a right front camera 4A and a left front camera 4B that respectively image a front side region (corresponding to a second region) of the vehicle including a blind spot region of the right front pillar 31 and the left front pillar 32.
  • the blind spot area is a peripheral area outside the vehicle that is generated when the driver's field of view is blocked by the vehicle frame including the left and right front pillars 31 and 32.
  • Each of the right front camera 4A and the left front camera 4B is configured by a CMOS camera or the like, images a region within the range of the imaging field angle F (that is, the front side region), and outputs the captured image (hereinafter referred to as the “second peripheral image”) to the ECU 2.
  • a region other than the blind spot region is referred to as an adjacent region
  • a boundary between the blind spot region and the adjacent region is referred to as a blind spot region boundary.
  • the ECU 2 is an electronic control unit that controls the entire apparatus.
  • the ECU 2 is configured mainly by the CPU 10 and includes a memory 11 such as a ROM, a RAM, and a flash memory, an input signal circuit, an output signal circuit, a power supply circuit, and the like.
  • Based on a program stored in the memory 11, the CPU 10 performs various processes, such as acquiring the first peripheral image and the second peripheral image from the rear side camera 3 and the front side camera 4, and setting the display areas of the left and right displays 6 and 7 in which the acquired peripheral images are respectively displayed.
  • the right display 6 and the left display 7 have a function of displaying the first peripheral image and the second peripheral image in the display area set by the ECU 2, respectively.
  • The right display 6 is configured by a liquid crystal display or the like provided near the right front pillar 31 in the vehicle interior, and displays the first peripheral image of the right rear camera 3A and the second peripheral image of the right front camera 4A.
  • The left display 7 is configured by a liquid crystal display or the like provided near the left front pillar 32 in the vehicle interior, and displays the first peripheral image of the left rear camera 3B and the second peripheral image of the left front camera 4B.
  • the ADAS locator 5 is a well-known device used in the advanced driver assistance system, and detects the current position of the vehicle using a global positioning system (so-called GPS) or the like.
  • the ADAS locator 5 has a map database (DB) including road map information in association with position information such as latitude and longitude.
  • the road map information is a table-like DB in which link information of links constituting a road is associated with node information of nodes connecting the links. Since the link information includes link length, width, connection node, curve information, etc., the road shape can be detected using the road map information.
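Because each link carries its length and curve information, the road shape ahead can be checked by walking the links in front of the current position. The sketch below assumes a simplified link record of (length in metres, curve flag); the real map DB schema is not given in the text.

```python
def curve_ahead(links, lookahead_m):
    """Return True if a curve link starts within `lookahead_m` metres.

    `links` is an ordered list of (length_m, is_curve) tuples for the
    links ahead of the vehicle, nearest first (an assumed simplification
    of the link information in the map DB).
    """
    travelled = 0.0
    for length_m, is_curve in links:
        if travelled > lookahead_m:
            break                 # beyond the lookahead horizon
        if is_curve:
            return True           # a curve begins within the horizon
        travelled += length_m
    return False
```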
  • the vehicle speed sensor 8 is configured as a known sensor that detects the vehicle speed based on the rotational speed of the wheel, and outputs the detection result to the ECU 2.
  • the direction indicator 9 is a well-known device for indicating the direction to the surroundings by the driver's turn signal operation when turning right or left or changing the course of the vehicle. The result is output to the ECU 2.
  • The brightness detection unit 15 is configured by an illuminance sensor or the like that detects the brightness outside the vehicle, and outputs the detection result to the ECU 2.
  • the driver camera 18 is disposed in the passenger compartment so as to capture a face area including the driver's eyes.
  • The driver camera 18 uses a well-known gaze detection technique: for example, with the inner corner of the eye or the corneal reflection as a reference point and the iris or pupil as a moving point, it detects the three-dimensional position of each of the driver's eyes in the captured image, and detects the driver's line of sight from the position of the moving point relative to the reference point. Basically, if the iris of the left eye is far from the inner corner of that eye, the driver is looking to the left; if it is close to the inner corner, the driver is looking to the right.
  • the driver's line-of-sight direction can be obtained in a three-dimensional space from the three-dimensional relative position of the moving point with respect to the reference point.
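The reference-point/moving-point scheme above amounts to forming a direction vector between two 3-D points. A minimal sketch, assuming both points are already expressed in one common coordinate frame (the camera's actual firmware is, of course, not specified in the text):

```python
import numpy as np

def gaze_direction(reference_pt, moving_pt):
    """Unit gaze vector from the reference point (inner eye corner or
    corneal reflection) to the moving point (iris or pupil), both given
    as 3-D coordinates in the same frame."""
    v = np.asarray(moving_pt, dtype=float) - np.asarray(reference_pt, dtype=float)
    n = np.linalg.norm(v)
    if n == 0.0:
        raise ValueError("reference and moving points coincide")
    return v / n
```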
  • the driver camera 18 outputs information indicating the driver's eye position and line-of-sight direction thus detected (hereinafter referred to as “driver camera information”) to the ECU 2.
  • the body shape DB 19 stores body shape data indicating a three-dimensional position relating to each part of the frame such as a front pillar constituting the vehicle body.
  • the body shape DB 19 may be built in the memory 11.
  • the ECU 2 functionally includes an image acquisition unit 21, an image generation unit 22, an image display unit 23, a situation determination unit 24, and a display area change unit 25.
  • The processing for realizing each of the functions of the image acquisition unit 21, the image generation unit 22, the image display unit 23, the situation determination unit 24, and the display area change unit 25 is executed by the CPU 10 based on a program stored in the memory 11.
  • The image acquisition unit 21 acquires the first peripheral images and the second peripheral images in time series from the right rear camera 3A, the right front camera 4A, the left rear camera 3B, and the left front camera 4B, supplies the first peripheral images to the image display unit 23, and supplies the second peripheral images to the image generation unit 22.
  • Using the second peripheral image supplied from the image acquisition unit 21 as an original image, the image generation unit 22 executes processing (hereinafter referred to as “image generation processing”) for generating an image in which the blind spot region boundary between the blind spot region and the adjacent region in the front side region of the vehicle is visualized (hereinafter referred to as a “boundary visualization image”), and supplies each generated boundary visualization image to the image display unit 23 along the time series.
  • The image display unit 23 displays the first peripheral images supplied in time series from the image acquisition unit 21 for the right rear camera 3A and the left rear camera 3B, and the boundary visualization images supplied in time series from the image generation unit 22 for the right front camera 4A and the left front camera 4B, as video on the right display 6 and the left display 7. Specifically, the first peripheral video and the boundary visualization video corresponding to the right cameras 3A and 4A are displayed on the right display 6, and the first peripheral video and the boundary visualization video corresponding to the left cameras 3B and 4B are displayed on the left display 7.
  • Next, the image generation processing executed by the image generation unit 22 will be described with reference to the flowchart of FIG. 5. Note that this processing is repeatedly activated at a predetermined cycle while, for example, a switch (not shown) for inputting start and stop operations of the vehicle periphery display function is on.
  • In the following, the processing target is described as the second peripheral image supplied from the image acquisition unit 21, without particularly distinguishing between the right front camera 4A and the left front camera 4B.
  • the image generation unit 22 first acquires driver camera information from the driver camera 18 in step (hereinafter simply referred to as “S”) 110.
  • body shape data is read from the body shape DB 19.
  • a blind spot area when the driver looks at the front side area of the vehicle is set on the second peripheral image.
  • the driver's eye position is read from the driver camera information, and the front pillar position is extracted from the body shape data.
  • the blind spot area corresponding to the vector from the driver's eye position to the front pillar position is converted from world coordinates into a camera image using the camera parameters of the front side camera 4.
  • the vector can also be corrected based on the driver's line-of-sight direction.
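The conversion from world coordinates into the camera image mentioned above is, in the usual formulation, a pinhole projection using the camera's extrinsic and intrinsic parameters. The sketch below is generic: `R`, `t`, and `K` stand in for the (unspecified) calibration of the front side camera 4, and each world-coordinate point along the eye-to-pillar vector would be projected this way to outline the blind spot on the second peripheral image.

```python
import numpy as np

def world_to_image(p_world, R, t, K):
    """Project a 3-D world point to pixel coordinates (u, v).

    R (3x3) and t (3,) are the camera extrinsics, K (3x3) the intrinsic
    matrix: p_cam = R @ p_world + t, then perspective division by z.
    """
    p_cam = R @ np.asarray(p_world, dtype=float) + t
    x, y, z = p_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return u, v
```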
  • a contour line superimposed image shown in FIG. 6A is generated by superimposing a contour line image corresponding to the blind spot region boundary on the second peripheral image.
  • As the contour line image, an image through which the blind spot region boundary portion of the second peripheral image shows through is preferable, and an image whose image attributes such as hue, brightness, saturation, and luminance are set to default values in advance is used.
  • a transparent superimposed image shown in FIG. 6B is generated by superimposing a transparent image corresponding to the blind spot area on the second peripheral image.
  • a transparent superimposed image is generated by performing known filter processing on the blind spot area portion of the second peripheral image.
  • The transparent image preferably has the vehicle body color or a color imitating translucent acrylic, so as to let the blind spot area portion of the peripheral image show through while reminding the driver of the frame portion including the front pillar; an image whose image attributes are set to default values in advance is used.
  • Next, an attribute change process for changing the image attributes of the contour superimposed image and the transparent superimposed image is performed, the second peripheral image including the contour superimposed image and the transparent superimposed image subjected to the attribute change process (that is, the boundary visualization image) is supplied to the image display unit 23, and this process is terminated.
  • In the attribute change process, the brightness and/or luminance of each of the contour superimposed image and the transparent superimposed image is changed according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 15, when the illuminance of the vehicle peripheral region is low, the brightness and/or luminance of each of the contour superimposed image and the transparent superimposed image is raised, and when the illuminance is high, it is lowered.
  • the image attributes relating to each of the contour superimposed image and the transparent superimposed image are changed so that the contrast is higher with respect to the second peripheral image.
  • That is, when the hue, brightness, saturation, and/or luminance of the second peripheral image supplied from the image acquisition unit 21 are low, the hue, brightness, saturation, and/or luminance of each of the contour superimposed image and the transparent superimposed image are raised; conversely, when they are high, they are lowered.
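The direction of these attribute changes can be sketched as a simple mapping from ambient illuminance to overlay luminance. Only the direction (raise when dark, lower when bright) comes from the text; the lux thresholds and the 20 % step below are assumed placeholders.

```python
def adjust_luminance(base, ambient_lux, low_lux=50.0, high_lux=5000.0):
    """Raise the overlay luminance when the surroundings are dark and
    lower it when they are bright, clamped to [0, 1]. The thresholds and
    the 20 % step are illustrative assumptions."""
    if ambient_lux < low_lux:          # dark scene: brighten the overlay
        return min(1.0, base * 1.2)
    if ambient_lux > high_lux:         # bright scene: dim the overlay
        return max(0.0, base * 0.8)
    return base                        # otherwise keep the default value
```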
  • Next, the display control processing executed by the situation determination unit 24 and the display area changing unit 25 will be described with reference to the flowchart of FIG. 7. Note that this processing is repeatedly activated at a predetermined cycle while, for example, a switch (not shown) for inputting start and stop operations of the vehicle periphery display function is on. In this process, to avoid complexity, the control objects are described as the display areas of the first peripheral video and the boundary visualization video displayed by the image display unit 23, without particularly distinguishing between the right display 6 and the left display 7.
  • the situation determination unit 24 first determines whether or not the host vehicle speed is equal to or lower than a predetermined threshold value based on the detection result of the vehicle speed sensor 8 in S210.
  • the threshold value is set in advance with reference to an upper limit speed at which the vehicle slows down when the vehicle turns right or left at an intersection or the like. If it is determined that the host vehicle speed is less than or equal to the threshold value, the process proceeds to S220. If it is determined that the host vehicle speed exceeds the threshold value, the process proceeds to S230.
  • the situation determination unit 24 determines whether one of the left and right turn signal operations has been performed, that is, whether the direction indicator 9 is in an on state. If it is determined that the direction indicator 9 is on, the process proceeds to S240, and if it is determined that the direction indicator 9 is off, the process proceeds to S230.
  • In S230, the situation determination unit 24 determines whether or not the vehicle is about to travel a curve; if so, the process proceeds to S240, and if not, to S250. Specifically, the situation determination unit 24 can determine that the vehicle is about to travel a curve when it determines that the road shape ahead of the vehicle is a curve, based on, for example, the map information and the vehicle position information of the ADAS locator 5.
  • The display area changing unit 25 changes the display areas of the first peripheral video and the boundary visualization video displayed by the image display unit 23 according to the determination result of the situation determination unit 24. Specifically, if either of the traveling situation conditions (the vehicle is about to turn right or left, or the vehicle is about to travel a curve) is satisfied, the process of S240 is performed; otherwise, the process of S250 is performed.
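The branch structure of S210 to S250 condenses into one predicate: enlargement is triggered either by a slow vehicle whose turn signal is on, or by a curve ahead. A sketch, with an assumed 20 km/h default for the preset slow-down threshold:

```python
def should_enlarge_sub_area(speed_kmh, turn_signal_on, curve_coming,
                            speed_threshold=20.0):
    """Mirror the flow of FIG. 7: S210 checks the speed, S220 the turn
    signal, S230 the curve; S240 enlarges and S250 reduces the sub
    display area."""
    if speed_kmh <= speed_threshold and turn_signal_on:
        return True            # S210 -> S220 -> S240 (enlarge)
    return curve_coming        # S230 -> S240, otherwise S250 (reduce)
```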
  • In S240, in the display screen displayed by the image display unit 23, the display area changing unit 25 sets the area in which the first peripheral video is displayed as the main display area and the area in which the boundary visualization video (that is, the second peripheral video) is displayed as the sub display area, and sends a command for enlarging the sub display area to the image display unit 23.
  • For example, the sub display area arranged at the upper right end of the left display 7 shown in FIG. 8A is enlarged and displayed so that it extends diagonally toward the lower left, as shown in FIG. 8B. Similarly, the sub display area arranged at the upper right end of the right display 6 is enlarged and displayed so that it extends diagonally toward the lower left.
  • In S250, the display area changing unit 25 likewise sets the area in which the first peripheral video is displayed as the main display area and the area in which the boundary visualization video (that is, the second peripheral video) is displayed as the sub display area, and sends a command for reducing the sub display area to the image display unit 23. For example, if none of the traveling situation conditions regarding left turn and left curve traveling is satisfied, the sub display area arranged on the left display 7 shown in FIG. 8B is reduced and displayed at the upper right end, as shown in FIG. 8A. Similarly, when none of the traveling situation conditions regarding right turn and right curve traveling is satisfied, the sub display area arranged on the right display 6 is reduced and displayed at the upper right end.
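Geometrically, S240 and S250 just toggle the rectangle of the sub display area anchored at the screen's upper-right corner, growing it diagonally toward the lower left. The one-third and two-thirds scale factors below are assumed for illustration; the figures do not state exact proportions.

```python
def sub_area_rect(screen_w, screen_h, enlarged):
    """Rectangle (x, y, w, h) of the sub display area, anchored at the
    upper-right corner; enlarging extends it diagonally toward the
    lower left (FIG. 8A -> FIG. 8B)."""
    if enlarged:
        w, h = screen_w * 2 // 3, screen_h * 2 // 3   # enlarged (FIG. 8B)
    else:
        w, h = screen_w // 3, screen_h // 3           # reduced (FIG. 8A)
    return (screen_w - w, 0, w, h)   # x moves left as w grows; y stays at top
```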
  • In this vehicle, a side mirror (that is, a door mirror) is not installed; instead, a so-called electronic mirror is configured in which the first peripheral image is displayed in the main display area of the displays 6 and 7.
  • Since the image display unit 23 displays the first peripheral image in a main display area that simulates the shape of a door mirror, the first peripheral image can be presented to the driver as if the rear side region outside the vehicle were being viewed in a door mirror.
  • For example, in the right display 6 shown in FIG. 9A, where the sub display area is arranged above the main display area, the display area changing unit 25 enlarges the sub display area by shifting the boundary between the sub display area and the main display area (hereinafter referred to as the “main/sub area boundary”) downward, as shown in FIG. 9B. Similarly, the sub display area on the left display 7 is enlarged and displayed by shifting the main/sub area boundary downward.
  • For example, when none of the traveling situation conditions regarding right turn or right curve traveling is satisfied, the sub display area on the right display 6 shown in FIG. 9B is reduced and displayed by shifting the main/sub area boundary upward, as shown in FIG. 9A. Similarly, when none of the traveling situation conditions regarding left turn or left curve traveling is satisfied, the sub display area on the left display 7 is reduced by shifting the main/sub area boundary upward.
  • Thus, with the main display area arranged below the sub display area and the main/sub area boundary shifted according to whether the traveling situation condition is satisfied, the second peripheral image can be presented to the driver as if the driver were directly viewing the corresponding front side region.
  • In this way, the driver can intuitively grasp which image portion corresponds to the blind spot area, and can instantly understand the positional relationship between the blind spot area and the vehicle peripheral area (or the adjacent area) regardless of the shape and arrangement of the displays 6 and 7. Therefore, the blind spot area can be presented to the driver in an easy-to-understand manner as an image, without complicating the vehicle design.
  • In addition, since the blind spot area as seen when the driver views the vehicle peripheral area is set on the peripheral image, the contour line representing the blind spot region boundary can be moved in accordance with the position of the driver's eyes, and the blind spot area can be presented to the driver more intuitively and understandably.
  • In the first embodiment, a transparent image corresponding to the blind spot area is superimposed on the second peripheral image. The second embodiment differs from the first embodiment in that, when generating the boundary visualization image, a mask image corresponding to the adjacent region is superimposed on the second peripheral image.
  • the image generation unit 22 first acquires driver camera information from the driver camera 18 in S310.
  • the body shape data is read from the body shape DB 19.
  • a blind spot area when the driver looks at the front side area of the vehicle is set on the second peripheral image based on the positional relationship between the driver's eye position and the front pillar position.
  • a contour line superimposed image shown in FIG. 6A is generated by superimposing a contour line image corresponding to the blind spot region boundary on the second peripheral image.
  • a mask superimposed image shown in FIG. 11 is generated by superimposing a mask image corresponding to the adjacent region on the second peripheral image.
  • a mask superimposed image is generated by performing a known filter process on the adjacent region portion of the second peripheral image.
  • As the mask image, an image whose image attributes are set in advance to default values that make the adjacent region portion of the second peripheral image inconspicuous is used, so as to remind the driver that this portion lies outside the blind spot region of the vehicle peripheral area, that is, that it is a region the driver can recognize directly.
  • Next, an attribute change process for changing the image attributes of the contour superimposed image and the mask superimposed image is performed, the second peripheral image including the contour superimposed image and the mask superimposed image subjected to the attribute change process (that is, the boundary visualization image) is supplied to the image display unit 23, and this process is terminated.
  • In the attribute change process, the brightness and/or luminance of each of the contour superimposed image and the mask superimposed image is changed according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 15, when the illuminance of the vehicle peripheral region is low, the brightness and/or luminance of each of the contour superimposed image and the mask superimposed image is raised, and when the illuminance is high, it is lowered.
  • the image attributes relating to each of the contour line superimposed image and the mask superimposed image are changed so that the contrast is higher with respect to the second peripheral image.
  • the hue, brightness, saturation, and / or luminance related to the second peripheral image supplied from the image acquisition unit 21 are low, the hue, brightness, saturation related to each of the contour superimposed image and the mask superimposed image.
  • when the hue, lightness, saturation, and/or luminance of the second peripheral image supplied from the image acquisition unit 21 are high, the hue, lightness, saturation, and/or luminance of each of the contour superimposed image and the mask superimposed image are lowered.
  • the image attributes of the contour superimposed image and the transparent superimposed image, or the contour superimposed image and the mask superimposed image are changed, but the present invention is not limited to this.
  • at least one image attribute of the contour superimposed image, the transparent superimposed image, and the mask superimposed image may be changed.
  • the boundary visualized image is generated by visualizing the region boundary between the blind spot region of the left and right front pillars 31 and 32 and the adjacent region.
  • the present invention is not limited to this.
  • the boundary visualized image may be generated in the same manner for the blind area of other pillars such as a center pillar and a rear pillar.
  • two peripheral images obtained by imaging two different areas in the external peripheral area of the vehicle are displayed on one screen of each of the displays 6 and 7, respectively.
  • the present invention is not limited to this.
  • the number of regions may be further increased so that three or more regions are imaged, and three or more peripheral images may be displayed on one screen of each of the displays 6 and 7, respectively.
  • two or more main display areas may be arranged on the display screen, or two or more sub display areas may be arranged.
  • the functions of one component in the above embodiment may be distributed among a plurality of components, or the functions of a plurality of components may be integrated into one component. Further, at least a part of the configuration of the above embodiment may be replaced with a known configuration having the same function. Moreover, a part of the configuration of the above embodiment may be omitted.
  • besides the above, the present disclosure can be realized in various forms, such as a system including the vehicle periphery display device 1 as a constituent element, one or more programs for causing a computer to function as the vehicle periphery display device 1, one or more media on which at least a part of the programs is recorded, and a vehicle periphery display method.

Abstract

A vehicle periphery display device equipped with an image acquisition unit (21), an image display unit (23), a state determination unit (24), and a display region change unit (25). The image acquisition unit (21) acquires a first peripheral image and a second peripheral image, which respectively capture a first region and a second region that differ from one another in an exterior peripheral region of a vehicle. The image display unit (23) displays the first and second peripheral images acquired by the image acquisition unit (21) on one screen of a display device provided inside the vehicle compartment. The state determination unit (24) determines the state of travel of the vehicle. The display region change unit (25) sets the region of the display screen of the display device in which the first peripheral image is displayed as a principal display region and the region in which the second peripheral image is displayed as a secondary display region, and, according to the determination results from the state determination unit (24), causes the image display unit (23) to enlarge the secondary display region when prescribed travel state conditions are met.

Description

Vehicle periphery display device and vehicle periphery display method

Cross-reference of related applications
This application is based on Japanese Patent Application No. 2015-100162 filed on May 15, 2015, the contents of which are incorporated herein by reference.
The present disclosure relates to a vehicle periphery display device and a vehicle periphery display method for displaying images and the like of the external peripheral region of a vehicle.
Conventionally, a technique has been proposed in which a display is provided on the inner side of each of a pair of left and right front pillars of a vehicle, and an image of the blind spot region outside the vehicle, created where the front pillars block the driver's view, is shown on the display (see Patent Literature 1).
Patent Literature 1: International Publication No. 2009/157446
In recent years, it has been desired to reduce the number of displays as much as possible from the viewpoints of cost, vehicle interior design, and the like. In addition, in a configuration that displays images of the external peripheral region of a vehicle, it is desirable to present the driver with images of a plurality of regions, not only the blind spot region.
With the conventional technology, however, if one tries to present the image of the blind spot region to the driver in an easily understandable manner, for example so that the pillar appears transparent, there are constraints such as having to arrange a dedicated display along the pillar. It has therefore been difficult to present images of a plurality of regions to the driver on a single display.
The present disclosure has been made in view of the above points, and aims to provide a vehicle periphery display device and a vehicle periphery display method capable of presenting the driver with easily understandable images of a plurality of regions in the external peripheral region of a vehicle without unnecessarily increasing the number of display devices.
A vehicle periphery display device according to one aspect of the present disclosure includes an image acquisition unit, an image display unit, a situation determination unit, and a display area change unit. The image acquisition unit acquires a first peripheral image and a second peripheral image obtained by imaging a first region and a second region, respectively, that differ from each other in the external peripheral region of the vehicle.
The image display unit displays the first and second peripheral images acquired by the image acquisition unit on one screen of a display device provided in the vehicle interior. The situation determination unit determines the traveling situation of the vehicle.
The display area change unit treats the area of the display screen of the display device in which the first peripheral image is displayed as a main display area and the area in which the second peripheral image is displayed as a sub display area, and, in accordance with the determination result of the situation determination unit, causes the image display unit to enlarge the sub display area when a predetermined traveling situation condition is satisfied.
With such a configuration, displaying the plurality of peripheral images on the display screen in a main and subordinate relationship allows the driver to grasp intuitively which image portion corresponds to which region of the external peripheral region of the vehicle. Furthermore, by enlarging the sub display area in accordance with the traveling situation of the vehicle, the driver can instantly understand which region has become more important to check in the current driving scene, and the image of that region can be presented to the driver with good visibility. Therefore, according to the present disclosure, images of a plurality of regions in the external peripheral region of a vehicle can be presented to the driver in an easily understandable manner without unnecessarily increasing the number of display devices.
A vehicle periphery display method according to another aspect of the present disclosure includes: acquiring a first peripheral image and a second peripheral image obtained by imaging a first region and a second region, respectively, that differ from each other in the external peripheral region of a vehicle; displaying the first and second peripheral images on one screen of a display device provided in the vehicle interior; determining the traveling situation of the vehicle; and, treating the area of the display screen in which the first peripheral image is displayed as a main display area and the area in which the second peripheral image is displayed as a sub display area, enlarging the sub display area when a predetermined traveling situation condition is satisfied in accordance with the traveling situation of the vehicle.
According to this vehicle periphery display method, effects similar to those already described for the vehicle periphery display device according to the above aspect of the present disclosure can be obtained.
The above and other objects, features, and advantages of the present disclosure will become clearer from the following detailed description with reference to the accompanying drawings. In the drawings:

FIG. 1 is a block diagram showing the overall configuration of a vehicle periphery display device according to an embodiment of the present disclosure;
FIG. 2 is a top view of a vehicle equipped with the vehicle periphery display device;
FIG. 3 is a diagram showing the interior of a vehicle equipped with the vehicle periphery display device;
FIG. 4 is a block diagram showing the functional configuration of the ECU;
FIG. 5 is a flowchart of the image generation process according to the first embodiment of the present disclosure;
FIG. 6A is a diagram showing a boundary visualization image including a contour line superimposed image;
FIG. 6B is a diagram showing a boundary visualization image including a transparent superimposed image;
FIG. 7 is a flowchart of the display control process;
FIG. 8A is a diagram showing a display example of the left display 7 in which the sub display area is displayed at reduced size;
FIG. 8B is a diagram showing a display example of the left display 7 in which the sub display area is displayed at enlarged size;
FIG. 9A is a diagram showing a display example of the right display 6 in which the sub display area is displayed at reduced size;
FIG. 9B is a diagram showing a display example of the right display 6 in which the sub display area is displayed at enlarged size;
FIG. 10 is a flowchart of the image generation process according to the second embodiment of the present disclosure; and
FIG. 11 is a diagram showing a boundary visualization image including a mask superimposed image.
Hereinafter, embodiments to which the present disclosure is applied will be described with reference to the drawings.
(First Embodiment)

The vehicle periphery display device 1 shown in FIG. 1 includes an ECU (Electronic Control Unit) 2, a rear side camera 3, a front side camera 4, an ADAS locator 5, a right display 6, a left display 7, a vehicle speed sensor 8, a direction indicator 9, a brightness detection unit 15, a driver camera 18, and a body shape DB (Data Base) 19.
The rear side camera 3 is mounted on the right and left sides of the vehicle shown in FIG. 2, near the positions where side mirrors are normally installed, and is composed of a right rear camera 3A and a left rear camera 3B that respectively image the rear side regions of the vehicle (corresponding to first regions). The right rear camera 3A and the left rear camera 3B are each configured as a CMOS camera or the like; each captures the region within its imaging angle of view G (that is, the rear side region) and outputs the captured image (hereinafter referred to as a "first peripheral image") to the ECU 2. Note that no side mirrors are installed on the vehicle; by displaying the first peripheral images, the displays 6 and 7 constitute a so-called electronic mirror together with the rear side camera 3.
The front side camera 4 is composed of a right front camera 4A and a left front camera 4B that respectively image the front side regions of the vehicle (corresponding to second regions) including the blind spot regions of the right front pillar 31 and the left front pillar 32. A blind spot region is a peripheral region outside the vehicle created where the driver's view is blocked by the vehicle frame, including the left and right front pillars 31 and 32. The right front camera 4A and the left front camera 4B are each configured as a CMOS camera or the like; each captures the region within its imaging angle of view F (that is, the front side region) and outputs the captured image (hereinafter referred to as a "second peripheral image") to the ECU 2. In the following, within the front side region, the region other than the blind spot region is referred to as the adjacent region, and the boundary between the blind spot region and the adjacent region is referred to as the blind spot region boundary.
Returning to FIG. 1, the ECU 2 is an electronic control unit that controls the entire device. The ECU 2 is configured mainly of a CPU 10 and includes a memory 11 such as ROM, RAM, and flash memory, an input signal circuit, an output signal circuit, a power supply circuit, and the like. In the ECU 2, based on a program stored in the memory 11, the CPU 10 acquires the first and second peripheral images from the rear side camera 3 and the front side camera 4, respectively, and performs various processes such as setting the display areas of the left and right displays 6 and 7 for the acquired left and right peripheral images.
The right display 6 and the left display 7 have the function of displaying the first and second peripheral images in the display areas set by the ECU 2. For example, as shown in FIG. 3, the right display 6 is configured as a liquid crystal display or the like provided near the right front pillar 31 in the vehicle interior, and displays the first peripheral image of the right rear camera 3A and the second peripheral image of the right front camera 4A. Similarly, the left display 7 is configured as a liquid crystal display or the like provided near the left front pillar 32 in the vehicle interior, and displays the first peripheral image of the left rear camera 3B and the second peripheral image of the left front camera 4B.
The ADAS locator 5 is a well-known device used in advanced driver assistance systems, and detects the current position of the vehicle using the Global Positioning System (so-called GPS) or the like. The ADAS locator 5 also has a map database (DB) containing road map information associated with position information such as latitude and longitude. The road map information is a table-like DB that associates link information of the links constituting a road with node information of the nodes connecting the links. Since the link information includes link length, width, connection nodes, curve information, and the like, the road shape can be detected using the road map information.
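As an illustration, the table-like road map structure described above (links carrying length, width, connection nodes, and curve information, associated with the nodes connecting them) can be sketched as follows. This is a hypothetical sketch, not the ADAS locator's actual schema; all field and method names are assumptions.

```python
# Hypothetical sketch of a table-like road map DB: links are associated
# with the nodes that connect them, and curve information on the links
# ahead can be used to detect the road shape.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Link:
    link_id: int
    length_m: float       # link length
    width_m: float        # road width
    node_ids: List[int]   # connection nodes (start, end)
    is_curve: bool        # curve information

@dataclass
class RoadMapDB:
    links: Dict[int, Link] = field(default_factory=dict)
    node_links: Dict[int, List[int]] = field(default_factory=dict)  # node -> link ids

    def add_link(self, link: Link) -> None:
        self.links[link.link_id] = link
        for n in link.node_ids:
            self.node_links.setdefault(n, []).append(link.link_id)

    def road_ahead_is_curve(self, current_link_id: int) -> bool:
        """Check whether any link reachable from the current link is a curve."""
        current = self.links[current_link_id]
        ahead = {lid for n in current.node_ids for lid in self.node_links[n]
                 if lid != current_link_id}
        return any(self.links[lid].is_curve for lid in ahead)
```

A query such as `road_ahead_is_curve` corresponds to the curve determination used later in the display control process, where the road shape ahead of the vehicle position is looked up in the map DB.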
The vehicle speed sensor 8 is configured as a well-known sensor that detects the speed of the host vehicle based on the rotational speed of the wheels, and outputs the detection result to the ECU 2. The direction indicator 9 is a well-known device for indicating the direction of the vehicle to its surroundings through the driver's turn signal operation when turning right or left or changing course; when it detects the driver's turn signal operation, it outputs the detection result to the ECU 2. The brightness detection unit 15 is configured as an illuminance sensor or the like that detects the brightness outside the vehicle, and outputs the detection result to the ECU 2.
The driver camera 18 is arranged in the vehicle cabin so as to image the face region including the driver's eyes. Using a well-known gaze detection technique, the driver camera 18 detects the three-dimensional positions of, for example, the inner eye corner or the corneal reflection as a reference point and the iris or pupil as a moving point for each of the driver's eyes in the captured image, and detects the driver's gaze based on the position of the moving point relative to the reference point. Basically, for example, if the iris of the left eye is far from the inner corner of the eye, the driver is looking to the left, and if the iris of the left eye is close to the inner corner, the driver is looking to the right. By applying this basic principle, the driver's gaze direction can be obtained in three-dimensional space from the three-dimensional relative position of the moving point with respect to the reference point. The driver camera 18 outputs information indicating the driver's eye positions and gaze direction detected in this way (hereinafter referred to as "driver camera information") to the ECU 2.
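The left/right rule described above (iris far from, or close to, the inner corner of the left eye) can be sketched minimally as follows; the pixel distances, the neutral distance, and the margin are hypothetical assumptions, not values from the disclosure.

```python
# Hypothetical sketch of the gaze rule for the LEFT eye: a moving point
# (iris center) far from the reference point (inner eye corner) means the
# driver looks left; close to it means the driver looks right.
def gaze_side_left_eye(corner_to_iris_dist: float,
                       neutral_dist: float,
                       margin: float = 1.0) -> str:
    """Distances are in pixels of the driver camera image (assumed)."""
    if corner_to_iris_dist > neutral_dist + margin:
        return "left"    # iris moved toward the temple, away from the corner
    if corner_to_iris_dist < neutral_dist - margin:
        return "right"   # iris moved toward the nose, close to the corner
    return "center"
```

The actual device extends this principle to three dimensions, using the 3D relative position of the moving point with respect to the reference point.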
The body shape DB 19 stores body shape data indicating the three-dimensional positions of the parts of the frame, such as the front pillars, constituting the vehicle body. The body shape DB 19 may be built in the memory 11.
Next, the functional configuration of the ECU 2 will be described with reference to the block diagram of FIG. 4.
The ECU 2 functionally includes an image acquisition unit 21, an image generation unit 22, an image display unit 23, a situation determination unit 24, and a display area change unit 25. The processing that realizes the functions of the image acquisition unit 21, the image generation unit 22, the image display unit 23, the situation determination unit 24, and the display area change unit 25 is executed by the CPU 10 based on a program stored in the memory 11.
The image acquisition unit 21 acquires the first and second peripheral images in time series from each of the right rear camera 3A, the right front camera 4A, the left rear camera 3B, and the left front camera 4B, supplies the acquired first peripheral images to the image display unit 23, and supplies the acquired second peripheral images to the image generation unit 22.
For each of the right front camera 4A and the left front camera 4B, the image generation unit 22 executes a process (hereinafter referred to as the "image generation process") that uses the second peripheral image supplied from the image acquisition unit 21 as a source image and generates an image in which the blind spot region boundary between the blind spot region and the adjacent region in the front side region of the vehicle is visualized (hereinafter referred to as a "boundary visualization image"), and supplies each of the generated boundary visualization images to the image display unit 23 in time series.
The image display unit 23 displays, as video on the right display 6 and the left display 7, the first peripheral images supplied in time series from the image acquisition unit 21 for the right rear camera 3A and the left rear camera 3B, and the boundary visualization images supplied in time series from the image generation unit 22 for the right front camera 4A and the left front camera 4B. Specifically, the first peripheral video and the boundary visualization video corresponding to the right cameras 3A and 4A are displayed on the right display 6, and the first peripheral video and the boundary visualization video corresponding to the left cameras 3B and 4B are displayed on the left display 7.
Next, the image generation process executed by the image generation unit 22 will be described with reference to the flowchart of FIG. 5. This process is started repeatedly at predetermined cycles while, for example, a switch (not shown) for inputting start and stop operations of the vehicle periphery display function is on. In this process, to avoid complexity, the processing target is described as the second peripheral image supplied from the image acquisition unit 21, without particularly distinguishing between the right front camera 4A and the left front camera 4B.
When this process starts, the image generation unit 22 first acquires driver camera information from the driver camera 18 in step (hereinafter simply denoted as "S") 110.
Subsequently, in S120, body shape data is read from the body shape DB 19.
Next, in S130, based on the positional relationship between the driver's eye positions and the front pillar position, the blind spot region when the driver looks at the front side region of the vehicle is set on the second peripheral image. The driver's eye positions are read from the driver camera information, and the front pillar position is extracted from the body shape data. Specifically, using the camera parameters of the front side camera 4, the blind spot region corresponding to the vectors from the driver's eye positions to the front pillar position is converted from world coordinates into the camera image. At this time, the vectors can also be corrected based on the driver's gaze direction.
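A rough sketch of the coordinate conversion in S130, assuming a simple pinhole camera model: rays from the driver's eye position through the pillar points (taken from the body shape data) are extended outward, and the resulting outline is converted from world coordinates into the front side camera image. The camera parameters, the fixed ray scale, and all geometry here are illustrative assumptions, not the disclosure's actual calibration.

```python
# Hedged sketch of world-to-image conversion under a pinhole camera model.
import numpy as np

def project_to_image(points_world: np.ndarray, R: np.ndarray,
                     t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """points_world: (N, 3); R, t: camera extrinsics; K: 3x3 intrinsics."""
    cam = R @ points_world.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ cam                                 # camera frame -> homogeneous pixels
    return (uv[:2] / uv[2]).T                    # perspective divide -> (N, 2)

def blind_spot_outline(eye: np.ndarray, pillar_pts: np.ndarray,
                       scale: float, R, t, K) -> np.ndarray:
    """Extend rays from the eye through the pillar points by a fixed scale
    (hypothetical) to obtain the occluded region's outline in world
    coordinates, then project it into the camera image."""
    far = eye + (pillar_pts - eye) * scale
    return project_to_image(far, R, t, K)
```

Correcting the vectors by the gaze direction would amount to adjusting `eye` or the ray directions before projection.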
Subsequently, in S140, the contour line superimposed image shown in FIG. 6A is generated by superimposing a contour line image corresponding to the blind spot region boundary on the second peripheral image. As the contour line image, one that lets the blind spot region boundary of the second peripheral image show through is preferable, and one whose image attributes such as hue, lightness, saturation, and luminance are set in advance to default values is used.
Next, in S150, the transparent superimposed image shown in FIG. 6B is generated by superimposing a transparent image corresponding to the blind spot region on the second peripheral image. Specifically, the transparent superimposed image is generated by applying a well-known filter process to the blind spot region portion of the second peripheral image. As the transparent image, one having the color of the vehicle body or a color simulating translucent acrylic is preferable, so as to remind the driver of the frame portion including the front pillar, and one whose image attributes are set in advance to default values for letting the blind spot region portion of the peripheral image show through is used.
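One possible form of the "well-known filter process" in S150, sketched under the assumption that it is a simple alpha blend of a body-colored translucent overlay onto the blind spot region; the mask, the color, and the blend ratio are hypothetical.

```python
# Illustrative sketch (not the patent's actual filter): alpha-blend a
# body-colored translucent overlay onto the blind spot region pixels.
import numpy as np

def superimpose_transparent(image: np.ndarray, blind_mask: np.ndarray,
                            body_color=(40, 60, 120),
                            alpha: float = 0.5) -> np.ndarray:
    """image: HxWx3 uint8; blind_mask: HxW bool. Returns a blended copy."""
    out = image.astype(np.float32).copy()
    color = np.array(body_color, dtype=np.float32)
    # Only pixels inside the blind spot region are blended with the overlay.
    out[blind_mask] = (1.0 - alpha) * out[blind_mask] + alpha * color
    return out.astype(np.uint8)
```

Raising or lowering `alpha` corresponds to making the simulated pillar more or less transparent.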
Subsequently, in S160, an attribute change process for changing the image attributes of the contour line superimposed image and the transparent superimposed image is performed, the second peripheral image including the contour line superimposed image and the transparent superimposed image subjected to the attribute change process (that is, the boundary visualization image) is supplied to the image display unit 23, and this process ends.
For example, in the attribute change process, the lightness and/or luminance of each of the contour line superimposed image and the transparent superimposed image is changed according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 15, when the illuminance of the vehicle peripheral region is low, the lightness and/or luminance of each of the contour line superimposed image and the transparent superimposed image is raised, and when the illuminance of the vehicle peripheral region is high, the lightness and/or luminance of each of the contour line superimposed image and the transparent superimposed image is lowered.
As another example, in the attribute change process, the image attributes of each of the contour line superimposed image and the transparent superimposed image are changed so that their contrast with the second peripheral image becomes higher. Specifically, when the hue, lightness, saturation, and/or luminance of the second peripheral image supplied from the image acquisition unit 21 are low, the hue, lightness, saturation, and/or luminance of each of the contour line superimposed image and the transparent superimposed image are raised. Conversely, when the hue, lightness, saturation, and/or luminance of the second peripheral image are high, the hue, lightness, saturation, and/or luminance of each of the contour line superimposed image and the transparent superimposed image are lowered.
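The two attribute change examples above can be sketched together for a single normalized luminance attribute; the illuminance thresholds, the step size, and the 0-1 luminance scale are all assumptions for illustration, not values from the disclosure.

```python
# Hedged sketch of the S160 attribute change: adjust the overlay luminance
# by ambient illuminance (first example) and push it away from the scene's
# mean luminance to keep contrast high (second example).
def adjust_overlay_luminance(overlay_lum: float, ambient_lux: float,
                             scene_mean_lum: float,
                             dark_lux: float = 50.0,
                             bright_lux: float = 10_000.0,
                             step: float = 0.1) -> float:
    lum = overlay_lum
    # Example 1: low illuminance -> raise; high illuminance -> lower.
    if ambient_lux < dark_lux:
        lum += step
    elif ambient_lux > bright_lux:
        lum -= step
    # Example 2: dark scene -> brighter overlay; bright scene -> darker overlay.
    if scene_mean_lum < 0.5:
        lum += step
    else:
        lum -= step
    return min(1.0, max(0.0, lum))  # clamp to the normalized range
```

The same shape of rule would apply to the other attributes (hue, lightness, saturation) mentioned in the text.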
Next, the display control process executed by the situation determination unit 24 and the display area change unit 25 will be described with reference to the flowchart of FIG. 7. This process is started repeatedly at predetermined cycles while, for example, a switch (not shown) for inputting start and stop operations of the vehicle periphery display function is on. In this process, to avoid complexity, the control targets are described as the display areas of the first peripheral video and the boundary visualization video displayed by the image display unit 23, without particularly distinguishing between the right display 6 and the left display 7.
When this process starts, the situation determination unit 24 first determines, in S210, whether the host vehicle speed is equal to or lower than a predetermined threshold based on the detection result of the vehicle speed sensor 8. The threshold is set in advance with reference to the upper limit speed at which the vehicle slows down when turning right or left at an intersection or the like. If it is determined that the host vehicle speed is equal to or lower than the threshold, the process proceeds to S220; if it is determined that the host vehicle speed exceeds the threshold, the process proceeds to S230.
 In S220, the situation determination unit 24 determines, based on the detection result of the direction indicator 9, whether the left or right turn signal has been operated, that is, whether the direction indicator 9 is in the on state. If the direction indicator 9 is determined to be on, the process proceeds to S240; if it is determined to be off, the process proceeds to S230.
 Thus, in S210 to S220, it is determined whether the vehicle is about to turn right or left. If the vehicle is determined to be about to turn right or left, the process proceeds to S240; otherwise, the process proceeds to S230.
 Next, in S230, the situation determination unit 24 determines whether the vehicle is about to travel around a curve. If so, the process proceeds to S240; otherwise, the process proceeds to S250. Specifically, the situation determination unit 24 can determine that the vehicle is about to travel around a curve when, for example, it judges from the map information and vehicle position information of the ADAS locator 5 that the shape of the road ahead of the vehicle is curved.
 Meanwhile, the display area changing unit 25 changes the display areas of the first peripheral video and the boundary-visualized video displayed by the image display unit 23 according to the determination results of the situation determination unit 24. Specifically, if either driving-situation condition is satisfied (the vehicle is about to turn right or left, or the vehicle is about to travel around a curve), the processing of S240 is performed; if neither condition is satisfied, the processing of S250 is performed.
 In S240, the display area changing unit 25 designates, within the screen displayed by the image display unit 23, the area where the first peripheral video is displayed as the main display area and the area where the boundary-visualized video (that is, the second peripheral video) is displayed as the sub display area, and sends the image display unit 23 a command to enlarge the sub display area. For example, when the vehicle is about to turn left or travel around a left curve, the sub display area arranged at the upper right corner of the left display 7 shown in FIG. 8A is enlarged so that its diagonal extends toward the lower left, as shown in FIG. 8B. Likewise, when the vehicle is about to turn right or travel around a right curve, the sub display area arranged at the upper right corner of the right display 6 is enlarged so that its diagonal extends toward the lower left.
 In S250, the display area changing unit 25 designates, within the screen displayed by the image display unit 23, the area where the first peripheral video is displayed as the main display area and the area where the boundary-visualized video (that is, the second peripheral video) is displayed as the sub display area, and sends the image display unit 23 a command to shrink the sub display area. For example, when neither of the driving-situation conditions concerning a left turn or left-curve travel is satisfied, the sub display area arranged on the left display 7 shown in FIG. 8B is shrunk back to the upper right corner as shown in FIG. 8A. Likewise, when neither of the driving-situation conditions concerning a right turn or right-curve travel is satisfied, the sub display area arranged on the right display 6 is shrunk back to the upper right corner.
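As a reading aid, the branching of S210 through S250 can be condensed into a small decision function. The sensor-reading interface and the numeric threshold are hypothetical stand-ins for the vehicle speed sensor 8, the direction indicator 9, and the ADAS locator 5; only the branch structure reflects FIG. 7:

```python
def decide_sub_area(vehicle_speed_kmh, turn_signal_on, road_ahead_is_curve,
                    slow_speed_threshold_kmh=10.0):
    """Return 'enlarge' (S240) or 'shrink' (S250) for the sub display area.

    Mirrors the flow of FIG. 7: S210 speed check, S220 turn-signal check,
    S230 curve check. The 10 km/h threshold is an illustrative placeholder
    for the preset slow-down upper-limit speed.
    """
    # S210/S220: the vehicle is about to turn right or left
    turning = vehicle_speed_kmh <= slow_speed_threshold_kmh and turn_signal_on
    # S230: the vehicle is about to travel around a curve
    if turning or road_ahead_is_curve:
        return "enlarge"   # S240
    return "shrink"        # S250
```

Note that the curve check of S230 is reached both when the speed is above the threshold and when the turn signal is off, which the single boolean expression above preserves.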
 In this embodiment, the vehicle has no side mirrors (that is, door mirrors); instead, a so-called electronic mirror is configured in which the first peripheral image is displayed in the main display areas of the displays 6 and 7. By displaying the first peripheral image in a main display area that simulates the shape of a door mirror, the image display unit 23 can therefore present the first peripheral image to the driver as if the driver were looking at the rear side area outside the vehicle reflected in a door mirror.
 In this case, in S240, when for example the vehicle is about to turn right or travel around a right curve, the display area changing unit 25 enlarges the sub display area on the right display 6 shown in FIG. 9A by shifting the boundary between the sub display area arranged at the top and the main display area arranged at the bottom (hereinafter the "main-sub area boundary") downward, as shown in FIG. 9B. Likewise, when the vehicle is about to turn left or travel around a left curve, the sub display area is enlarged by shifting the main-sub area boundary downward on the left display 7.
 Also, in S250, when for example neither of the driving-situation conditions concerning a right turn or right-curve travel is satisfied, the display area changing unit 25 shrinks the sub display area on the right display 6 shown in FIG. 9B by shifting the main-sub area boundary upward, as shown in FIG. 9A. Likewise, when neither of the driving-situation conditions concerning a left turn or left-curve travel is satisfied, the sub display area is shrunk by shifting the main-sub area boundary upward on the left display 7.
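One way to picture the main-sub area boundary shift of S240/S250 is as clamped arithmetic on the boundary's vertical pixel position. The coordinate convention (y grows downward, sub area on top) matches FIGS. 9A and 9B, but the pixel values are assumptions for illustration:

```python
def shift_boundary(boundary_y, direction, step_px=120, min_y=80, max_y=400):
    """Move the main-sub area boundary on a display whose sub area sits on top.

    direction: 'down' enlarges the sub display area (S240),
               'up' shrinks it back (S250). y grows downward.
    The step and clamp values are illustrative placeholders.
    """
    if direction == "down":
        boundary_y += step_px
    elif direction == "up":
        boundary_y -= step_px
    else:
        raise ValueError("direction must be 'up' or 'down'")
    # clamp so that both the main and sub areas remain visible
    return max(min_y, min(boundary_y, max_y))
```

The clamping reflects the fact that even in the enlarged state the main display area keeps showing the electronic-mirror image below the boundary.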
 According to the first embodiment described above in detail, the following effects are obtained.
 (1a) By displaying the two peripheral images on the display screen with a main-sub relationship, the driver can intuitively grasp which image portion corresponds to which part of the external peripheral region of the vehicle. Furthermore, since the sub display area is enlarged according to the driving situation of the vehicle, the driver can instantly recognize the region that has become more important to check in the current driving scene, and the image of that region is presented to the driver with good visibility. Therefore, video of two regions in the external peripheral region of the vehicle can be presented to the driver in an easy-to-understand manner without needlessly increasing the number of displays.
 (2a) When the vehicle is about to turn right or left, the second peripheral video showing the front side region of the vehicle is enlarged, so that video of the blind spot region and other areas requiring attention during a right or left turn can be presented to the driver on demand.
 (3a) When the vehicle is about to travel around a curve, the second peripheral video showing the front side region of the vehicle is enlarged, so that video of the blind spot region and other areas requiring attention during curve travel can be presented to the driver on demand.
 (4a) On the display screen, an image of the rear side region of the vehicle is displayed as the first peripheral image in the main display area, and an image of the front side region including the blind spot region of the vehicle is displayed as the second peripheral image in the sub display area, so the so-called electronic mirror function and the blind spot display function can share a single display.
 (5a) Since the first peripheral image is displayed in a main display area that simulates the shape of a door mirror, the driver can intuitively understand that the displayed image is an image of the rear side region of the vehicle.
 (6a) On the display screen, the main display area is arranged below the sub display area, and the main-sub area boundary is shifted downward when the driving-situation condition is satisfied, so the second peripheral image can be presented to the driver as if the driver were peering over the top of a door mirror to look into the blind spot region.
 (7a) In addition, since the blind spot region boundary is visualized in the displayed image, the driver can intuitively grasp which image portion corresponds to the blind spot region, and can instantly understand the correspondence between the blind spot region and the vehicle peripheral region (or the adjacent region) regardless of the shape and arrangement of the displays 6 and 7. Therefore, the blind spot region can be presented to the driver in easy-to-understand video form without complicating the vehicle design.
 (8a) Since the blind spot region that arises when the driver views the vehicle peripheral region is set on the peripheral image based on the positional relationship between the positions of the driver's eyes and the front pillar, the contour line representing, for example, the blind spot region boundary can be moved in the displayed image to match the position of the driver's eyes, and the blind spot region can be presented to the driver in video form more intuitively and clearly.
 (9a) By superimposing a contour-line image corresponding to the blind spot region boundary on the second peripheral image, the blind spot region boundary can be clearly conveyed to the driver in the displayed image.
 (10a) By superimposing a transparent image corresponding to the blind spot region on the second peripheral image, the blind spot region can be presented to the driver more intuitively and clearly in the displayed image.
 (11a) By changing the image attributes of the contour-line superimposed image and the transparent superimposed image, the image attributes can be reset so that, for example, the contrast is increased with respect to the brightness of the vehicle peripheral region or the second peripheral image, improving visibility in the displayed image.
 (Second Embodiment)
 The second embodiment has the same basic configuration as the first embodiment, so the description of the shared configuration is omitted and the differences are described below.
 In the first embodiment described above, a transparent image corresponding to the blind spot region was superimposed on the second peripheral image when generating the boundary-visualized image. In contrast, the second embodiment differs from the first embodiment in that a mask image corresponding to the adjacent region is superimposed on the second peripheral image when generating the boundary-visualized image.
 Next, the image generation processing that the image generation unit 22 of the second embodiment executes instead of the image generation processing of the first embodiment (FIG. 5) will be described with reference to the flowchart of FIG. 10. Since the processing of S310 to S340 in FIG. 10 is the same as the processing of S110 to S140 in FIG. 5, its description is partly simplified.
 When this processing starts, the image generation unit 22 first acquires driver camera information from the driver camera 18 in S310.
 Subsequently, in S320, body shape data is read from the body shape DB 19.
 Next, in S330, the blind spot region that arises when the driver views the front side region of the vehicle is set on the second peripheral image, based on the positional relationship between the positions of the driver's eyes and the front pillar.
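The geometry behind S330 can be sketched as projecting the pillar's edges from the eye point into the camera image. The flattening to a 2-D top view, the linear angle-to-pixel mapping, and all coordinate values are simplifying assumptions for illustration only:

```python
import math

def blind_spot_bounds(eye_xy, pillar_near_xy, pillar_far_xy, cam_fov_deg, img_width_px):
    """Return the (left, right) pixel columns bounded by the pillar blind spot.

    eye_xy: driver eye position; pillar_near_xy / pillar_far_xy: the two
    horizontal edges of the front pillar, all in a top-view frame whose
    x axis points along the camera's optical axis (a simplification).
    Assumes an idealized camera with a linear angle-to-column mapping.
    """
    def bearing_deg(p):
        return math.degrees(math.atan2(p[1] - eye_xy[1], p[0] - eye_xy[0]))

    a, b = sorted((bearing_deg(pillar_near_xy), bearing_deg(pillar_far_xy)))
    half_fov = cam_fov_deg / 2.0
    px_per_deg = img_width_px / cam_fov_deg
    left = int((a + half_fov) * px_per_deg)
    right = int((b + half_fov) * px_per_deg)
    # clamp to the image so the region stays drawable
    return max(0, left), min(img_width_px - 1, right)
```

A real implementation would use the camera's calibrated projection model and the 3-D pillar contour from the body shape DB 19; this sketch only shows why the blind spot region moves when the eye position reported by the driver camera 18 changes.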
 Subsequently, in S340, the contour-line superimposed image shown in FIG. 6A is generated by superimposing a contour-line image corresponding to the blind spot region boundary on the second peripheral image.
 Next, in S350, the mask superimposed image shown in FIG. 11 is generated by superimposing a mask image corresponding to the adjacent region on the second peripheral image. Specifically, the mask superimposed image is generated by applying well-known filter processing to the adjacent region portion of the second peripheral image. The mask image used here has its image attributes preset to default values that make the adjacent region portion of the second peripheral image inconspicuous, so as to remind the driver that this portion is the part of the vehicle peripheral region other than the blind spot region, that is, a region the driver can see directly.
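The S350 mask superimposition can be sketched as a simple dimming filter applied outside the blind spot region. The choice of darkening (rather than, say, blurring or desaturation) and the attenuation factor are assumptions about the "well-known filter processing", introduced only for illustration:

```python
import numpy as np

def superimpose_mask(peripheral_img, blind_spot_mask, attenuation=0.35):
    """Dim the adjacent region so the blind spot region stands out (cf. S350).

    peripheral_img: HxWx3 uint8 second peripheral image.
    blind_spot_mask: HxW bool array, True inside the blind spot region.
    Pixels outside the blind spot region are scaled toward black; the
    attenuation factor is an illustrative default value.
    """
    out = peripheral_img.astype(np.float32)
    out[~blind_spot_mask] *= attenuation  # mask (dim) the adjacent region
    return out.astype(np.uint8)
```

Because only the adjacent region is modified, the blind spot region keeps the original image data, which is consistent with the goal of letting the driver read that region directly.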
 Subsequently, in S360, attribute change processing that changes the image attributes of the contour-line superimposed image and the mask superimposed image is performed, the second peripheral image including the processed contour-line superimposed image and mask superimposed image (that is, the boundary-visualized image) is supplied to the image display unit 23, and this processing ends.
 For example, in the attribute change processing, the lightness and/or luminance of the contour-line superimposed image and the mask superimposed image are changed according to the brightness of the vehicle peripheral region. Specifically, based on the detection result of the brightness detection unit 15, when the illuminance of the vehicle peripheral region is low, the lightness and/or luminance of the contour-line superimposed image and the mask superimposed image are raised; when the illuminance is high, they are lowered.
 Also, for example, in the attribute change processing, the image attributes of the contour-line superimposed image and the mask superimposed image are changed so that their contrast against the second peripheral image is increased. Specifically, when the hue, lightness, saturation, and/or luminance of the second peripheral image supplied from the image acquisition unit 21 are low, the corresponding attributes of the contour-line superimposed image and the mask superimposed image are raised; when they are high, the corresponding attributes are lowered.
 According to the second embodiment described above in detail, the following effects are obtained in addition to the effects (1a) to (9a) of the first embodiment.
 (1b) By superimposing a mask image corresponding to the adjacent region on the second peripheral image, the blind spot region can be presented to the driver more intuitively and clearly in the displayed image.
 (2b) By changing the image attributes of the contour-line superimposed image and the mask superimposed image, the image attributes can be reset so that, for example, the contrast is increased with respect to the brightness of the vehicle peripheral region or the second peripheral image, improving visibility in the displayed image.
 (Other Embodiments)
 Although embodiments of the present disclosure have been described above, the present disclosure is not limited to those embodiments and can take various forms.
 In the above embodiments, the image attributes of the contour-line superimposed image and the transparent superimposed image, or of the contour-line superimposed image and the mask superimposed image, were changed, but the present disclosure is not limited to this. For example, the image attributes of at least one of the contour-line superimposed image, the transparent superimposed image, and the mask superimposed image may be changed.
 In the above embodiments, a boundary-visualized image was generated in which the region boundaries between the blind spot regions of the left and right front pillars 31 and 32 and their adjacent regions were visualized, but the present disclosure is not limited to this. For example, a boundary-visualized image may likewise be generated for the blind spot regions of other pillars, such as a center pillar or a rear pillar.
 In the above embodiments, two peripheral images obtained by imaging two mutually different regions in the external peripheral region of the vehicle were displayed on a single screen of each of the displays 6 and 7, but the present disclosure is not limited to this. For example, the number of regions may be increased so that three or more regions are imaged, and three or more peripheral images may be displayed on a single screen of each of the displays 6 and 7. In this case, two or more main display areas, or two or more sub display areas, may be arranged on the display screen.
 The functions of one component in the above embodiments may be distributed among a plurality of components, and the functions of a plurality of components may be integrated into one component. At least part of the configuration of the above embodiments may be replaced with a known configuration having the same functions. Part of the configuration of the above embodiments may be omitted. At least part of the configuration of one of the above embodiments may be added to or substituted for the configuration of another of the above embodiments. All aspects included in the technical idea specified solely by the wording of the claims are embodiments of the present disclosure.
 Besides the vehicle periphery display device 1 described above, the present disclosure can also be realized in various forms, such as a system including the vehicle periphery display device 1 as a component, one or more programs for causing a computer to function as the vehicle periphery display device 1, one or more media on which at least part of such a program is recorded, and a vehicle periphery display method.
 Although the present disclosure has been described with reference to embodiments, it is to be understood that the present disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms including only one element, more elements, or fewer elements, also fall within the scope and spirit of the present disclosure.

Claims (8)

  1.  A vehicle periphery display device comprising:
     an image acquisition unit (21) that acquires a first peripheral image and a second peripheral image obtained by imaging a first region and a second region, respectively, which differ from each other in the external peripheral region of a vehicle;
     an image display unit (23) that displays the first peripheral image and the second peripheral image acquired by the image acquisition unit (21) on a single screen of a display device provided in the cabin of the vehicle;
     a situation determination unit (24) that determines a driving situation of the vehicle; and
     a display area changing unit (25) that, taking the area of the display screen of the display device where the first peripheral image is displayed as a main display area and the area where the second peripheral image is displayed as a sub display area, causes the image display unit (23) to enlarge the sub display area when a predetermined driving-situation condition is satisfied according to the determination result of the situation determination unit (24).
  2.  The vehicle periphery display device according to claim 1, wherein
     the situation determination unit (24) takes the vehicle being about to turn right or left as a requirement for satisfying the driving-situation condition.
  3.  The vehicle periphery display device according to claim 1 or 2, wherein
     the situation determination unit (24) takes the vehicle being about to travel around a curve as a requirement for satisfying the driving-situation condition.
  4.  The vehicle periphery display device according to any one of claims 1 to 3, wherein
     the first peripheral image is an image obtained by imaging a rear side region of the vehicle as the first region, and
     the second peripheral image is an image obtained by imaging a front side region of the vehicle as the second region, the front side region including a blind spot region outside the vehicle that arises where the driver's view is blocked by a frame including a pillar of the vehicle.
  5.  The vehicle periphery display device according to claim 4, wherein
     the image display unit (23) displays the first peripheral image in the main display area, which simulates the shape of a door mirror.
  6.  The vehicle periphery display device according to claim 5, wherein
     the image display unit (23) arranges the sub display area above the main display area on the display screen, and
     the display area changing unit (25) shifts the main-sub area boundary, which is the boundary between the main display area and the sub display area, downward when the driving-situation condition is satisfied.
  7.  The vehicle periphery display device according to any one of claims 4 to 6, further comprising
     an image generation unit (22) that, using the second peripheral image acquired by the image acquisition unit (21) as an original image, generates a boundary-visualized image in which a blind spot region boundary, which is the boundary between the blind spot region and the other region of the front side region of the vehicle, is visualized,
     wherein the image display unit (23) displays the boundary-visualized image generated by the image generation unit (22) in the sub display area.
  8.  A vehicle periphery display method comprising:
     acquiring a first peripheral image and a second peripheral image obtained by imaging a first region and a second region, respectively, which differ from each other in the external peripheral region of a vehicle;
     displaying the first peripheral image and the second peripheral image on a single screen of a display device provided in the cabin of the vehicle;
     determining a driving situation of the vehicle; and
     taking the area of the display screen of the display device where the first peripheral image is displayed as a main display area and the area where the second peripheral image is displayed as a sub display area, enlarging the sub display area when a predetermined driving-situation condition is satisfied according to the driving situation of the vehicle.

PCT/JP2016/002214 2015-05-15 2016-04-27 Vehicle periphery display device and vehicle periphery display method WO2016185677A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015100162A JP2016215726A (en) 2015-05-15 2015-05-15 Vehicle periphery display apparatus and vehicle periphery display method
JP2015-100162 2015-05-15

Publications (1)

Publication Number Publication Date
WO2016185677A1 true WO2016185677A1 (en) 2016-11-24

Family

ID=57319788


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3064225A1 (en) * 2017-03-23 2018-09-28 Faurecia Interieur Industrie VEHICLE HABITACLE COMPRISING AN IMAGE MIRROR MODULE PROJECTED ON A SIDE AMOUNT

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7160301B2 (en) * 2018-01-17 2022-10-25 株式会社ジャパンディスプレイ MONITOR DISPLAY SYSTEM AND ITS DISPLAY METHOD

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005125828A (en) * 2003-10-21 2005-05-19 Fujitsu Ten Ltd Vehicle surrounding visually confirming system provided with vehicle surrounding visually confirming device
JP2009119915A (en) * 2007-11-12 2009-06-04 Kanto Auto Works Ltd Automobile
JP2010245701A (en) * 2009-04-02 2010-10-28 Denso Corp Display
JP2012187971A (en) * 2011-03-09 2012-10-04 Isuzu Motors Ltd Vehicle view assisting device
JP2012237725A (en) * 2011-05-13 2012-12-06 Nippon Seiki Co Ltd Display device for vehicle


Also Published As

Publication number Publication date
JP2016215726A (en) 2016-12-22

Similar Documents

Publication Publication Date Title
JP6493361B2 (en) Vehicle device, vehicle program, filter design program
CN105308620B (en) Information processing apparatus, proximity object notification method, and program
JP6413207B2 (en) Vehicle display device
KR102490272B1 (en) A method for displaying the surrounding area of a vehicle
CN113467600A (en) Information display method, system and device based on augmented reality and projection equipment
JP4867512B2 (en) Image display apparatus and program
KR102071155B1 (en) Vehicle display device and vehicle display method
JP6548900B2 (en) Image generation apparatus, image generation method and program
US20190202356A1 (en) Method for providing a rear mirror view of a surroundings of a vehicle
JPWO2014129026A1 (en) Driving support device and image processing program
JP4367212B2 (en) Virtual image display device and program
JP2015055999A (en) Information processing device, gesture detection method, and gesture detection program
WO2018047400A1 (en) Vehicle display control device, vehicle display system, vehicle display control method, and program
JP2013168063A (en) Image processing device, image display system, and image processing method
JP2010130647A (en) Vehicle periphery checking system
JP6890288B2 (en) Image processing equipment, image display system and image processing method
JP2016078498A (en) Display control device and display system
JP2018142884A (en) Bird's eye video creation device, bird's eye video creation system, bird's eye video creation method, and program
JP2018121287A (en) Display control apparatus for vehicle, display system for vehicle, display control method for vehicle, and program
WO2016185677A1 (en) Vehicle periphery display device and vehicle periphery display method
JP6375633B2 (en) Vehicle periphery image display device and vehicle periphery image display method
CN109415020B (en) Luminance control device, luminance control system and luminance control method
US11828947B2 (en) Vehicle and control method thereof
JP6234701B2 (en) Ambient monitoring device for vehicles
WO2016185678A1 (en) Blind-spot display device and blind-spot display method

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16796074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 16796074

Country of ref document: EP

Kind code of ref document: A1