WO2022091194A1 - Display control device, display system, display control method, and display control program - Google Patents


Info

Publication number
WO2022091194A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
display
blind spot
information
Application number
PCT/JP2020/040178
Other languages
French (fr)
Japanese (ja)
Inventor
修平 鈴木
新作 福髙
宗貴 西平
晶子 今石
美里 湯浅
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to JP2022558626A (patent JP7439949B2)
Priority to PCT/JP2020/040178
Publication of WO2022091194A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02: Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof

Definitions

  • The present disclosure relates to a display control device, a display system, a display control method, and a display control program that control the display, on a display device, of a blind spot image showing the situation within a blind spot caused by the own vehicle.
  • For a target person such as a pedestrian in the vicinity of the own vehicle, the own vehicle itself may create a blind spot.
  • Patent Document 1 discloses a device that captures images of the periphery of the own vehicle with a camera and displays, on the periphery of the own vehicle, information on the field of view that is hidden from a nearby target person such as a pedestrian by the presence of the own vehicle.
  • The device disclosed in Patent Document 1 includes an image pickup means provided in the own vehicle for capturing images of the periphery of the own vehicle, an image display means provided in at least a part of the own vehicle, and a display control means for displaying, at the position where the image display means is provided, an image of the field of view that becomes a blind spot.
  • However, the conventional display control device described above has a problem: even when the blind spot image is displayed to the pedestrian, it is difficult for the pedestrian to recognize which range the image shows.
  • The present disclosure has been made to solve the above problem, and its object is to provide a display control device, a display system, a display control method, and a display control program that, when displaying a blind spot image, allow the target person to grasp intuitively and easily which blind spot the image shows.
  • The display control device includes: a blind spot image acquisition unit that acquires a blind spot image showing the blind spot created by the own vehicle as seen from a target person outside the own vehicle; a display image generation unit that generates a display image in which an image of the own vehicle is superimposed on the blind spot image; and a control unit that displays the display image on a display device provided in the own vehicle.
  • The display system includes: a display device provided in the own vehicle; a blind spot image acquisition unit that acquires a blind spot image showing the blind spot created by the own vehicle as seen from a target person outside the own vehicle; a display image generation unit that generates a display image in which an image of the own vehicle is superimposed on the blind spot image; and a control unit that displays the display image on the display device.
  • The display control method includes the steps of: acquiring a blind spot image showing the blind spot created by the own vehicle as seen from a target person outside the own vehicle; generating a display image in which an image of the own vehicle is superimposed on the blind spot image; and displaying the display image on a display device provided in the own vehicle.
  • The display control program causes a computer to execute: a process of acquiring a blind spot image showing the blind spot created by the own vehicle as seen from a target person outside the own vehicle; a process of generating a display image in which an image of the own vehicle is superimposed on the blind spot image; and a process of displaying the display image on a display device provided in the own vehicle.
  • The display image generation unit generates a display image in which the image of the own vehicle is superimposed on the blind spot image, and the control unit displays this display image on the display device provided in the own vehicle, thereby displaying the blind spot image.
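As an illustrative sketch (not part of the disclosure), the superimposition performed by the display image generation unit can be modelled in Python. The image representation as 2-D lists of pixel values, with `None` marking transparent pixels of the own-vehicle overlay, is an assumption made for the example:

```python
def superimpose(blind_spot_image, vehicle_image, offset=(0, 0)):
    """Overlay a partially transparent own-vehicle image onto the blind
    spot image. Images are 2-D lists of pixel values; overlay pixels
    that are None are treated as transparent (hypothetical format)."""
    result = [row[:] for row in blind_spot_image]  # copy the background
    oy, ox = offset
    for y, row in enumerate(vehicle_image):
        for x, pixel in enumerate(row):
            if pixel is None:
                continue  # keep the blind spot image visible here
            ty, tx = y + oy, x + ox
            if 0 <= ty < len(result) and 0 <= tx < len(result[0]):
                result[ty][tx] = pixel
    return result
```

Keeping the overlay partially transparent is what lets the target person see both the own vehicle and the hidden scene in one display image.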
  • FIG. 1 is a diagram showing an example of a situation in which a pedestrian crossing is located on a road with two lanes on each side.
  • FIG. 2 is a block diagram of a display system including a display control device according to the first embodiment.
  • FIG. 3 is a front view of the own vehicle according to the first embodiment.
  • FIG. 4 is a rear view of the own vehicle according to the first embodiment.
  • FIG. 5 is a left side view of the own vehicle according to the first embodiment.
  • FIG. 6 is a right side view of the own vehicle according to the first embodiment.
  • FIG. 7 is a hardware configuration diagram of the in-vehicle system of the own vehicle according to the first embodiment.
  • FIG. 8 is a flowchart showing the operation of the display system according to the first embodiment.
  • FIG. 1 is a diagram showing an example of a situation where a pedestrian crossing 4 is located on a road having two lanes on each side.
  • As shown in FIG. 1, the own vehicle 1, which is an automobile, is traveling on a road having two lanes on each side.
  • The own vehicle 1 is stopped in front of the pedestrian crossing 4, and the pedestrian 3, as the target person, is about to cross in front of the stopped own vehicle 1.
  • Meanwhile, the other vehicle 2 is approaching from behind in the lane next to the own vehicle 1, attempting to overtake the own vehicle 1; seen from the pedestrian 3, the other vehicle 2 is behind the own vehicle 1.
  • The present disclosure is described below for this situation, in which the range where the other vehicle 2 exists becomes a blind spot for the pedestrian 3 due to the own vehicle 1, so that the pedestrian 3 cannot see the other vehicle 2.
  • FIG. 2 is a block diagram of a display system 30 including a display control device 20 according to the first embodiment.
  • The display system 30 includes the notification device 31 provided in the own vehicle 1, and a display control device 20 that, using the vehicle interior/external information acquired from the overall control ECU (Electronic Control Unit) 10 controlling the entire own vehicle 1, determines the notification information to be notified and controls the notification device 31 to notify it.
  • the vehicle interior / external information includes blind spot information or blind spot images. The blind spot information and the blind spot image will be described later.
  • the notification device 31 is a device that notifies notification information from the own vehicle 1 to a target person existing outside the own vehicle 1.
  • The notification device 31 notifies, for example, the state of the own vehicle 1, a notice of or intention behind the current or future operation of the own vehicle 1, or a warning, to the target person existing outside and around the own vehicle 1.
  • The notification device 31 includes display devices 32 to 35 and a speaker 36; however, it suffices for the notification device 31 to include at least one display device.
  • FIG. 3 is a front view of the own vehicle 1 according to the first embodiment.
  • The display device 32 is provided on the front surface of the own vehicle 1.
  • The speaker 36 is provided above the display device 32 on the front surface of the own vehicle 1.
  • FIG. 4 is a rear view of the own vehicle 1 according to the first embodiment.
  • the display device 33 is provided on the rear surface of the own vehicle 1.
  • FIG. 5 is a left side view of the own vehicle 1 according to the first embodiment.
  • the display device 34 is provided on the left side surface of the own vehicle 1.
  • FIG. 6 is a right side view of the own vehicle 1 according to the first embodiment.
  • the display device 35 is provided on the right side surface of the own vehicle 1.
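The four display devices 32 to 35 face the front, rear, left, and right of the own vehicle 1, as shown in FIGS. 3 to 6. As a hedged illustration (the bearing thresholds, counterclockwise coordinate convention, and face-to-device mapping are assumptions, not taken from the disclosure), selecting the display device that faces a target person could look like:

```python
import math

# Hypothetical mapping from vehicle face to display device number,
# following the placements of FIGS. 3 to 6.
FACE_TO_DISPLAY = {"front": 32, "rear": 33, "left": 34, "right": 35}

def facing_display(vehicle_heading_deg, vehicle_pos, target_pos):
    """Pick the display device on the face of the own vehicle that the
    target person sees, from the target's bearing relative to the
    vehicle heading (angles counterclockwise from the +x axis)."""
    dx = target_pos[0] - vehicle_pos[0]
    dy = target_pos[1] - vehicle_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))      # world-frame bearing
    rel = (bearing - vehicle_heading_deg) % 360     # relative to heading
    if rel < 45 or rel >= 315:
        face = "front"
    elif rel < 135:
        face = "left"
    elif rel < 225:
        face = "rear"
    else:
        face = "right"
    return FACE_TO_DISPLAY[face]
```

In the scenario of FIG. 1, the pedestrian stands ahead of the stopped vehicle, so the front display device 32 would be selected.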
  • The overall control ECU 10 acquires vehicle interior/external information, including blind spot information or blind spot images, from various sensors inside and outside the own vehicle 1, and based on the acquired information sends instructions to the hardware of the own vehicle 1 and controls that hardware.
  • The display control device 20 includes a vehicle interior/external information acquisition unit 27 that acquires the vehicle interior/external information, including blind spot information or blind spot images, from the overall control ECU 10. The display control device 20 further includes a blind spot image DB (Database) 24b, which stores image parts for generating a blind spot image from blind spot information, and a blind spot image generation unit 28 that uses the blind spot image DB 24b to generate a blind spot image from the blind spot information.
  • The display control device 20 includes a blind spot image acquisition unit 21 that acquires the blind spot image, and a determination unit 22 that determines whether to notify the blind spot image, vehicle interior/external information, and the like from the notification device 31. It further includes an own vehicle image DB (Database) 24a that stores images of the own vehicle 1, and a display image generation unit 23 that, when the determination unit 22 determines that notification is to be performed, generates notification information including a display image in which the image of the own vehicle 1 acquired from the own vehicle image DB 24a is superimposed on the blind spot image.
  • The display control device 20 includes a condition information DB (Database) 25 that stores condition information such as notification device information associating the vehicle interior/external information with the notification device 31, notification pattern information such as processing information for the display image, and notification timing information. The display control device 20 further includes a condition giving unit 29 that adds the condition information acquired from the condition information DB 25 to the display image, and a control unit 26 that controls the notification device 31 associated in the condition information DB 25 to notify the notification information. Each component is described below.
  • the vehicle interior / external information acquisition unit 27 acquires the vehicle interior / external information including the blind spot information or the blind spot image from the overall control ECU 10.
  • the vehicle interior / external information includes, for example, the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, the blind spot information of the target person 3, the blind spot image of the target person 3, and the position information of the other vehicle 2.
  • the vehicle interior / external information acquisition unit 27 corresponds to the line-of-sight information acquisition unit that acquires the line-of-sight information of the target person 3.
  • the blind spot information is information indicating the situation in the blind spot caused by the own vehicle 1 as seen from the target person 3 existing outside the own vehicle 1.
  • The blind spot information is, for example, detection information from an ultrasonic sensor, a radar sensor, a millimeter-wave radar sensor, a LiDAR, an infrared laser sensor, or the like.
  • the blind spot image is an image of the blind spot generated by the own vehicle 1 as seen from the target person 3 existing outside the own vehicle 1.
  • the blind spot image is an image of an outside camera or the like.
  • the blind spot image DB 24b stores in advance images of various objects such as other vehicles viewed from various angles. Further, the blind spot image DB 24b stores images of various backgrounds viewed from various angles in advance.
  • The images of objects and backgrounds are actual images such as photographs, or illustrations, and serve as image parts when a blind spot image is generated from blind spot information.
  • The blind spot image generation unit 28 generates a blind spot image from the blind spot information using the image parts stored in the blind spot image DB 24b. When the range of the blind spot image acquired by the vehicle interior/external information acquisition unit 27 is too wide, the blind spot image generation unit 28 extracts an appropriate range and uses it as the subsequent blind spot image. Specifically, when the blind spot information is radar information or the like, the blind spot image generation unit 28 acquires image parts from the blind spot image DB 24b using the blind spot information and generates a blind spot image from those parts. When the image is from a camera outside the vehicle and covers a range wider than the blind spot, the blind spot image generation unit 28 extracts the blind spot image corresponding to the blind spot.
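The two paths described for the blind spot image generation unit 28 (composing from parts when only sensor information is available, and cropping when a wide camera image is available) can be sketched as follows. The part keys, file names, and detection format are hypothetical, introduced only for illustration:

```python
# Hypothetical contents of the blind spot image DB 24b: image parts
# for objects and backgrounds seen from various angles.
PARTS_DB = {
    ("car", "front"): "car_front.png",
    ("car", "rear"): "car_rear.png",
    ("background", "road"): "road.png",
}

def generate_from_radar(detections):
    """Compose a blind spot image (as an ordered list of image-part
    layers) from radar-style detections of (object type, viewing
    angle), starting from a background layer."""
    layers = [PARTS_DB[("background", "road")]]
    for obj_type, angle in detections:
        layers.append(PARTS_DB[(obj_type, angle)])
    return layers

def crop_to_blind_spot(wide_image, blind_spot_cols):
    """When the outside-camera image covers more than the blind spot,
    keep only the pixel columns inside the blind spot range."""
    lo, hi = blind_spot_cols
    return [row[lo:hi] for row in wide_image]
```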
  • the blind spot image acquisition unit 21 acquires the blind spot image acquired by the vehicle interior / external information acquisition unit 27 or the blind spot image generated by the blind spot image generation unit 28.
  • the blind spot image shows the blind spot caused by the own vehicle 1 when viewed from the subject 3 existing outside the own vehicle 1.
  • The determination unit 22 determines the vehicle state of the own vehicle 1, the traffic environment outside the own vehicle 1, and the like based on the vehicle interior/external information. For example, based on gear change information, the determination unit 22 determines whether the own vehicle 1 remains stopped, is about to move forward, or is about to move backward. Further, based on detection and recognition information of surrounding objects such as people and cars, the determination unit 22 determines whether the target person is in a dangerous state.
  • The determination unit 22 determines whether to notify the target person of the notification information based on the determined vehicle state of the own vehicle 1, the traffic environment outside the own vehicle 1, and the like. Specifically, the determination unit 22 determines whether to display the display image to the target person 3. For example, in the case of FIG. 1, the determination unit 22 determines to notify the pedestrian 3 of the presence of the other vehicle 2.
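A minimal sketch of the determination unit's decision, with assumed gear names, inputs, and conditions (the disclosure does not specify concrete thresholds):

```python
def should_notify(gear, vehicle_speed, objects_in_blind_spot, target_present):
    """Illustrative decision: notify only while a target person is
    present, something is hidden in the blind spot, and the own
    vehicle is stopped, in neutral/park, or reversing. All conditions
    here are assumptions standing in for the determination unit 22."""
    vehicle_calm = gear in ("P", "N", "R") or vehicle_speed == 0
    return bool(target_present and objects_in_blind_spot and vehicle_calm)
```

In the FIG. 1 scenario (vehicle stopped before the crossing, other vehicle 2 hidden, pedestrian 3 present), such a rule would decide to notify.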
  • the own vehicle image DB 24a stores images of the own vehicle 1 viewed from various angles in advance.
  • the image of the own vehicle 1 is an actual image such as a photograph, an illustration, or the like.
  • When the determination unit 22 determines that notification is to be performed, the display image generation unit 23 generates notification information from the vehicle interior/external information. When generating the notification information, the display image generation unit 23 generates notification information including a display image in which at least the image of the own vehicle 1 acquired from the own vehicle image DB 24a is superimposed on the blind spot image.
  • The condition information DB 25 stores the condition information. Specifically, for example, the condition information DB 25 stores device information for the display devices 32 to 35 and the speaker 36, and notification device information in which identifiers of the vehicle interior/external information are associated with identifiers of the notification device 31. Further, the condition information DB 25 stores notification pattern information associated with the vehicle interior/external information; this includes, for example, the shape, color, size, brightness, and position of the image displayed on the display devices 32 to 35, and the loudness, pitch, and type of the sound output from the speaker 36. The condition information DB 25 also stores notification timing information associated with the vehicle interior/external information, such as the timing and duration of notification by the notification device 31.
  • The condition giving unit 29 determines, from the condition information DB 25, the notification device 31 to be used, the notification pattern, and the timing and duration of notification, and adds this condition information to the notification information including the display image. Specifically, for example, when the condition giving unit 29 acquires, as notification pattern information, processing information for reducing the image of the own vehicle in the display image, it reduces the image of the own vehicle and processes the display image accordingly.
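The "reduce the image of the own vehicle" processing mentioned above can be illustrated as follows; the pattern name `shrink_vehicle`, the dictionary format, and the fixed factor of 2 are assumptions made for the example:

```python
def shrink(image, factor):
    """Reduce an image (2-D list of pixels) by keeping every
    `factor`-th pixel: a crude nearest-neighbour downscale standing
    in for the 'reduce the own vehicle image' processing."""
    return [row[::factor] for row in image[::factor]]

def apply_conditions(notification, condition_info):
    """Attach condition information to the notification and apply any
    processing it specifies (here only the hypothetical
    'shrink_vehicle' notification pattern)."""
    result = dict(notification, **condition_info)
    if condition_info.get("pattern") == "shrink_vehicle":
        result["vehicle_image"] = shrink(notification["vehicle_image"], 2)
    return result
```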
  • When the determination unit 22 determines that notification is to be performed, the control unit 26 controls the notification device 31 associated in the condition information DB 25 to notify the notification information including the display image. Specifically, when the determination unit 22 determines that the display image is to be displayed, the control unit 26 controls the associated display device to display the display image.
  • FIG. 7 is a hardware configuration diagram of the in-vehicle system 100 of the own vehicle 1 according to the first embodiment.
  • the configuration of the display system 30 including the display control device 20 according to the first embodiment will be described with reference to FIG. 7.
  • the in-vehicle system 100 includes a communication line 65, a sensor 40, a communication device 60, an overall control ECU 10, a notification control ECU (Electronic Control Unit) 15, a driving device 70, and a notification device 80.
  • The communication line 65 is a signal path that electrically connects the devices of the in-vehicle system 100 and carries data between them.
  • Data is exchanged via a CAN (Controller Area Network), for example.
  • each device of the in-vehicle system 100 may be connected by a bus.
  • The devices of the in-vehicle system 100 are connected via communication, which may be wired or wireless.
  • the sensors 40 include a vehicle speed sensor 41, a steering angle sensor 42, an accelerator sensor 43, a brake sensor 44, a shift sensor 45, a winker sensor 46, a hazard sensor 47, a wiper sensor 48, a light sensor 49, a door open / close sensor 50, a driver camera 51, and the like.
  • the vehicle speed sensor 41 is a sensor that detects the speed of the own vehicle 1, and outputs an electric signal (vehicle speed pulse) according to the wheel speed to the overall control ECU 10.
  • This electric signal corresponding to the wheel speed serves as the vehicle speed information.
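How a wheel-speed pulse count translates into a vehicle speed can be shown with a small worked example; the pulses-per-revolution and tire circumference values are vehicle-specific assumptions, not figures from the disclosure:

```python
def vehicle_speed_kmh(pulse_count, interval_s, pulses_per_rev,
                      tire_circumference_m):
    """Convert wheel-speed pulses counted over `interval_s` seconds
    into a vehicle speed in km/h: pulses -> wheel revolutions ->
    distance travelled -> speed."""
    revolutions = pulse_count / pulses_per_rev
    distance_m = revolutions * tire_circumference_m
    return distance_m / interval_s * 3.6  # m/s -> km/h
```

For instance, 100 pulses in one second at 10 pulses per revolution and a 2 m tire circumference is 20 m/s, i.e. 72 km/h.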
  • the steering angle sensor 42 is a sensor that detects the steering angle of the own vehicle 1, and outputs an electric signal corresponding to the steering angle to the overall control ECU 10.
  • the electric signal corresponding to the steering angle becomes the steering angle information.
  • the accelerator sensor 43 is a sensor that detects the opening degree of the accelerator of the own vehicle 1, that is, the operation amount of the accelerator pedal, and outputs the operation amount information of the accelerator pedal to the overall control ECU 10.
  • the brake sensor 44 is a sensor that detects the operation amount of the brake pedal of the own vehicle 1, and outputs the operation amount information of the brake pedal to the overall control ECU 10.
  • the shift sensor 45 is a sensor that detects the current state or change of the shift lever of the own vehicle 1, and outputs the operation information of the shift lever such as the shift change by the user of the own vehicle 1 to the overall control ECU 10.
  • the winker sensor 46 is a sensor that detects the operation of the winker (direction indicator) of the own vehicle 1, and when the user operates the winker, the information of the winker operation instruction is output to the overall control ECU 10.
  • the hazard sensor 47 is a sensor that detects the operation of the hazard switch of the own vehicle 1, detects the operation of the user's hazard switch, and outputs the operation information to the overall control ECU 10.
  • the wiper sensor 48 is a sensor that detects the wiper operation of the own vehicle 1, and when the user operates the wiper, this operation instruction information is output to the overall control ECU 10.
  • the light sensor 49 is a sensor that detects the operation of the light lever of the user's own vehicle 1, and outputs the operation information of the user's light to the overall control ECU 10.
  • the door open / close sensor 50 is a sensor that detects the open / close of the door of the own vehicle 1, and outputs the door open / close information to the overall control ECU 10.
  • the driver camera 51 is a camera (imaging device) provided facing the driver seat of the own vehicle 1 and has a function of photographing a user sitting in the driver seat.
  • the driver camera 51 captures an image of the user's face and upper body, and outputs the captured image to the overall control ECU 10.
  • The seating sensor 52 is provided on a seat of the own vehicle 1 and detects the seating state of the user; it is realized by, for example, a pressure sensor.
  • the seating sensor 52 outputs this information to the overall control ECU 10 when the user sits down or leaves the seat.
  • A plurality of seating sensors 52 may be provided on the seat, and the overall control ECU 10 may estimate the posture of the user or the like based on the information from the plurality of pressure sensors.
  • the acceleration sensor 53 is a sensor that detects the acceleration of the own vehicle 1, and is composed of, for example, a 3-axis acceleration sensor.
  • the acceleration sensor 53 outputs the acceleration information of the own vehicle 1 to the overall control ECU 10.
  • the angular velocity sensor 54 is a sensor that detects the angular velocity (gyro) of the own vehicle 1, and the overall control ECU 10 detects the turning speed and the like of the own vehicle 1 based on this angular velocity information.
  • the angular velocity sensor 54 outputs the detected angular velocity information to the overall control ECU 10.
  • The GPS device 55 detects the position information of the own vehicle 1 using radio waves transmitted by satellites of the Global Positioning System, and outputs coordinate information, which is the position information of the own vehicle 1, to the overall control ECU 10 and the navigation system 56.
  • The navigation system 56 has map information and has a function of calculating a recommended route to the destination of the own vehicle 1 based on the position information of the own vehicle 1 and the map information. Further, the navigation system 56 has a communication function and may acquire external information such as traffic congestion information and road closure information from a server (not shown) and calculate a recommended route based on these. Alternatively, the navigation system 56 may be configured as a system that sends the position information and destination information of the own vehicle 1 to the server, has the recommended route calculated on the server side, and receives the information of the recommended route. The navigation system 56 outputs the calculated route information to the overall control ECU 10.
  • The outside camera 57 is a camera (imaging device) provided to capture images of the outside of the own vehicle 1.
  • the outside camera 57 is provided, for example, in front of, behind, and to the left and right of the own vehicle 1, and outputs each captured image to the overall control ECU 10.
  • the overall control ECU 10 can detect and recognize the target person and detect and recognize objects such as other vehicles and obstacles based on the input captured image.
  • The vehicle exterior sensor 58 is a sensor capable of detecting objects around the exterior of the own vehicle 1, and is composed of, for example, an ultrasonic sensor, a radar sensor, a millimeter-wave radar sensor, a LiDAR, an infrared laser sensor, or the like.
  • the vehicle exterior sensor 58 outputs this detection information to the overall control ECU 10.
  • the overall control ECU 10 detects the distance information to the object and the position information of the object based on the detection information of the object outside the vehicle input from the vehicle outside sensor 58.
  • The distance information and position information may be detected by the overall control ECU 10 as in the first embodiment, or the vehicle exterior sensor 58 may calculate the distance information and position information based on its detection information and output them to the overall control ECU 10.
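For an ultrasonic sensor, the distance and position computation performed by either the vehicle exterior sensor 58 or the overall control ECU 10 can be sketched from the echo round-trip time; the speed-of-sound constant and the sensor-angle convention are assumptions for the example:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 degrees C; an assumption

def object_position(echo_time_s, sensor_angle_deg):
    """Estimate the distance and (x, y) position of a detected object
    from an ultrasonic echo: the sound travels to the object and back,
    so the one-way distance is half the round trip. The angle is the
    sensor's pointing direction in the vehicle frame."""
    distance = SPEED_OF_SOUND_M_S * echo_time_s / 2.0
    a = math.radians(sensor_angle_deg)
    return distance, (distance * math.cos(a), distance * math.sin(a))
```

A 20 ms round trip thus corresponds to an object about 3.43 m away in the sensor's pointing direction.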
  • the illuminance sensor 59 is provided toward the outside of the own vehicle 1 and is a sensor that detects the illuminance (brightness) outside the vehicle.
  • the illuminance sensor 59 outputs the detected illuminance information to the overall control ECU 10.
  • the communication device 60 performs wireless communication with another communication device existing outside the vehicle of the own vehicle 1.
  • the communication device 60 uses a specific frequency band to perform vehicle-to-vehicle communication with another vehicle, road-to-vehicle communication with a roadside unit, or communication with a communicable electronic device such as a smartphone owned by a person.
  • The communication performed by the communication device 60 may use a proprietary scheme with a specific fixed frequency band, or an existing communication standard standardized for communication between an in-vehicle communication device and an external communication device, such as wireless LAN (Local Area Network), Bluetooth (registered trademark), or Zigbee (registered trademark).
  • the communication device 60 includes an antenna 61, a transmitting unit 62, and a receiving unit 63.
  • The communication device 60 transmits wireless signals from the transmitting unit 62 to other communication devices via the antenna 61, and receives wireless signals, such as surrounding congestion status information, at the receiving unit 63 via the antenna 61.
  • The overall control ECU 10 acquires, from the outside camera 57, the vehicle exterior sensor 58, and the like, information on whether another vehicle 2 exists around the own vehicle 1 and feature information such as the color, shape, and model of surrounding other vehicles 2; acquires, from the GPS device 55, the navigation system 56, the communication device 60, and the like, the position information of the own vehicle 1 and road information such as speed limits, straight roads, the number of lanes, and curved roads; and acquires, from the outside camera 57, the situation around the own vehicle 1 including the blind spot information or the blind spot image.
  • the display system 30 acquires such information from the overall control ECU 10.
  • At least one of the sensor 40 and the communication device 60 acquires the blind spot information or the blind spot image, and transmits the blind spot information or the blind spot image to the overall control ECU 10.
  • The overall control ECU 10 is an ECU having the function of controlling the entire own vehicle 1. It executes control of the entire vehicle by acquiring the information detected by the sensors 40 and, based on this information, sending instructions and information to operate each part of the own vehicle 1 properly.
  • the overall control ECU 10 includes a processor 11 and a memory 12.
  • the processor 11 is a CPU (Central Processing Unit) that executes calculation processing in the overall control ECU 10 by reading and executing a program stored in the memory 12. Specifically, the processor 11 loads at least a part of the OS (Operating System) stored in the memory 12 and executes the program while executing the OS.
  • The processor 11 connects to each device via the communication line 65 and controls each device. The processor 11 may be any IC (Integrated Circuit) that performs processing: a calculation processing circuit, an electric circuit, a controller, a central processing unit, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like, or a combination of these.
  • the memory 12 stores software, firmware, a program in which a combination of software and firmware is described, an OS, various information, and the like.
  • The memory 12 is, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), or an SSD (Solid State Drive), or a magnetic disk, a flexible disk, an optical disk, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
  • the memory 12 may realize the functions of each part in separate memories, or may collectively realize the functions of each part in one memory.
  • the overall control ECU 10 has, for example, a ROM as a memory 12 which is a non-volatile storage device for storing one or more programs, and a RAM which is a volatile storage device used by the processor 11 as an expansion area for programs and various information.
  • the processor 11 reads a program from the ROM, expands the read program on the RAM, and executes the calculation process.
  • the function of the overall control ECU 10 can be realized by hardware, software, firmware, or a combination of hardware, software, and firmware.
  • the functions of each part of the overall control ECU 10 may be partially realized by dedicated hardware and partially realized by software or firmware. For example, a part of the overall control ECU 10 may realize its function by a processing circuit as dedicated hardware, while the rest realizes its function by the CPU reading and executing a program stored in the memory.
  • the driving device 70 is a device for driving the own vehicle 1.
  • the driving device 70 includes a headlight driver 71, a headlight 72, an engine 73, a transmission 74, a brake actuator 75, a steering actuator 76, a winker 77, and a HUD (Head-Up Display) 78.
  • the headlight driver 71 is a drive device that drives the headlight 72.
  • the headlight driver 71 drives the headlight 72 based on instructions from the overall control ECU 10, and controls operations such as turning the headlight 72 on and off and switching between high beam and low beam.
  • the headlight 72 is an irradiation device provided on the front side of the vehicle body of the own vehicle 1 and irradiates the front of the vehicle body with light.
  • the headlights 72 are provided on the left and right of the front side of the vehicle body of the own vehicle 1. By the structure of a light guide portion that guides the emitted light, or by switching among a plurality of lights, each headlight 72 can switch between a high beam that illuminates a farther distance and a low beam that illuminates a shorter distance than the high beam.
  • the engine 73 is an internal combustion engine that generates power to drive the own vehicle 1.
  • the engine 73 generates power for rotating the wheels by burning fuel such as gasoline.
  • the engine 73 operates based on an instruction from the overall control ECU 10.
  • the transmission 74 is composed of gears, shafts, etc., and has a function of transmitting power to the wheels.
  • the transmission 74 changes the torque transmitted to the wheels of the own vehicle 1 by changing the gear based on the instruction from the overall control ECU 10.
  • the brake actuator 75 is a mechanism for operating a brake (decelerating device) for decelerating the own vehicle 1.
  • the brake actuator 75 operates the brake based on the instruction from the overall control ECU 10 to decelerate the own vehicle 1.
  • the steering actuator 76 is a mechanism that changes the direction of the wheels of the own vehicle 1 and operates the steering (steering device) that controls the traveling direction of the own vehicle 1.
  • the steering actuator 76 controls the steering based on the instruction from the overall control ECU 10, and controls the traveling direction of the own vehicle 1.
  • the winker 77 (turn signal) is a direction indicator for indicating the traveling direction of the own vehicle 1 to the outside of the vehicle by light emission.
  • the winker 77 blinks based on an instruction from the overall control ECU 10, and indicates the traveling direction of the own vehicle 1 to the outside of the vehicle.
  • the HUD 78 is a transmissive image display device provided so as to be superimposed on the windshield of the own vehicle 1.
  • the HUD 78 displays various images based on instructions from the overall control ECU 10.
  • the HUD 78 presents various information to the user inside the own vehicle 1 by displaying images.
  • the HUD 78 may be used as a display for the navigation system 56.
  • the notification control ECU 15 is an ECU having a function of controlling the notification device 80 of the own vehicle 1.
  • the notification control ECU 15 is a device having a function of acquiring the vehicle inside/outside information of the own vehicle 1 from the overall control ECU 10, determining the status and state of the own vehicle 1 based on the acquired vehicle inside/outside information, and causing the notification device 80 to notify the target person of notification information including the blind spot image.
  • the notification control ECU 15 includes a processor 16 and a memory 17.
  • the processor 16 is a CPU (Central Processing Unit) that executes a calculation process in the notification control ECU 15 by reading and executing a program stored in the memory 17. Specifically, the processor 16 loads at least a part of the OS (Operating System) stored in the memory 17, and executes the program while executing the OS.
  • the processor 16 is connected to each device via the communication line 65 and controls each device. The processor 16 may be any IC (Integrated Circuit) that performs processing, and may be a calculation processing circuit, an electric circuit, a central processing unit, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or a combination thereof.
  • the control unit 26 is realized by the processor 16 reading and executing the program loaded in the memory 17. The functions of the notification control ECU 15 may each be realized by a separate processor, or the various functions may be collectively realized by one processor.
  • the memory 17 stores software, firmware, a program in which a combination of software and firmware is described, an OS, various information, and the like.
  • the memory 17 is, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), an SSD (Solid State Drive), a magnetic disk, a flexible disk, an optical disk, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
  • the control unit 26 is realized by a program stored in the memory 17. Further, the blind spot image DB 24b, the own vehicle image DB 24a, and the condition information DB 25 are realized by the memory 17.
  • the memory 17 may realize the functions of each part in separate memories, or may collectively realize the functions of each part in one memory.
  • the notification control ECU 15 has, for example, a ROM as a memory 17 which is a non-volatile storage device for storing one or more programs, and a RAM which is a volatile storage device used by the processor 16 as an expansion area for programs and various information.
  • the processor 16 reads a program from the ROM, expands the read program on the RAM, and executes the calculation process.
  • the function of the notification control ECU 15 can be realized by hardware, software, firmware, or a combination thereof. Some of the functions of each part of the notification control ECU 15 may be realized by dedicated hardware, and some may be realized by software or firmware. For example, a part of the notification control ECU 15 may realize its function by a processing circuit as dedicated hardware, while the rest realizes its function by the CPU reading and executing a program stored in the memory.
  • the notification device 80 receives notification information including at least a display image from the notification control ECU 15, and notifies the target person of the notification information, which includes at least displaying the display image.
  • Examples of the notification device 80 include an external light device 81, a projector device 86, a body light device 89, and a sound device 92.
  • the notification device 80 includes at least one device capable of displaying an image, such as an external light device 81, a projector device 86, and a body light device 89.
  • the notification device 80 corresponds to the notification device 31 in FIG.
  • the external light device 81 is an irradiation device attached to the outside from the vehicle body of the own vehicle 1.
  • the external light device 81 irradiates the road surface or a nearby wall surface with light to notify a target person existing around the outside of the own vehicle 1 of the state of the own vehicle 1, the intention of a current or future operation of the own vehicle 1, an advance notice of a future operation of the own vehicle 1, or a warning.
  • the external light device 81 irradiates the road surface or the like with light having a shape, color, position, size, brightness, timing, and duration suitable for the notification information.
  • the external light device 81 includes an external light driver 82 and an external light set 83.
  • the external light driver 82 is a drive device for driving the external light set 83.
  • the external light driver 82 controls the external light set 83 to irradiate a predetermined light.
  • the external light driver 82 controls the combination of the irradiation timing, the irradiation time, and the like between the external light 84 and the external light 85 in the external light set 83.
  • Further, the external light driver 82 operates a color filter, a shade, a light guide mechanism, and the like provided on the external light 84 and the external light 85, and irradiates light of a predetermined shape, a predetermined color, a predetermined size, and a predetermined brightness at a predetermined position relative to the vehicle body of the own vehicle 1, at a predetermined timing, and for a predetermined time.
  • the external light set 83 includes a plurality of external lights (irradiation devices). Specifically, in the first embodiment, the external light set 83 includes an external light 84 and an external light 85. The external light 84 and the external light 85 are turned on under the control of the external light driver 82.
  • the projector device 86 is an image projection device attached to the outside from the vehicle body of the own vehicle 1.
  • the projector device 86 irradiates the road surface or a nearby wall surface with light to notify a target person existing around the outside of the own vehicle 1 of the state of the own vehicle 1, the intention of a current or future operation of the own vehicle 1, an advance notice of a future operation of the own vehicle 1, a warning, or the like.
  • the projector device 86 irradiates (projects) light onto the road surface or the like with a shape, color, position, size, brightness, timing, and duration suitable for the notification information.
  • the projector device 86 includes a projector driver 87 and a projector 88.
  • the projector driver 87 is a driving device that drives the projector 88, and causes the projector 88 to irradiate a predetermined light.
  • the projector driver 87 has a function of controlling the shape, color, position, size, brightness, timing, time, and the like of the light emitted by the projector 88.
  • the projector 88 is an irradiation (projection) device that irradiates (projects) light (image) to the outside of the own vehicle 1.
  • the projector 88 irradiates a road surface or a wall surface outside the vehicle with light (image) based on the operation of the projector driver 87.
  • the body light device 89 is a light emitting device provided on the vehicle body of the own vehicle 1, such as a display or a device that shines in a line along the contour of the vehicle body.
  • the body light device 89 notifies a target person existing around the outside of the own vehicle 1 of the state of the own vehicle 1, the intention of a current or future operation of the own vehicle 1, an advance notice of a future operation of the own vehicle 1, or a warning.
  • Further, the body light device 89 emits light of a predetermined shape, a predetermined color, a predetermined size, and a predetermined brightness at a predetermined position on the vehicle body surface of the own vehicle 1, at a predetermined timing, and for a predetermined time.
  • the body light device 89 includes a body light driver 90 and a body light 91.
  • the body light driver 90 is a drive device that drives the body light 91.
  • the body light driver 90 causes the body light 91 to emit predetermined light.
  • the body light driver 90 controls the shape, color, position, size, brightness, timing, time, etc. of the light emitted by the body light 91.
  • the body light 91 is a light emitting device provided so that its emitted light is exposed on the outer surface of the vehicle body of the own vehicle 1.
  • the body light 91 in the first embodiment is composed of an LCD (Liquid Crystal Display) and an LED (Light Emitting Diode); the light emitted by the LED is transmitted through the LCD, and light of a predetermined shape, color, position, size, and brightness is radiated to the outside of the own vehicle 1.
  • Here, an example in which the body light 91 is composed of an LCD and an LED is shown, but another light emitting device such as an organic EL (Electroluminescence) monitor, a dot LED, a liquid crystal monitor, an EL panel, or a rear projection type display may be used.
  • In the first embodiment, the display device 32 is provided on the front surface of the own vehicle 1, the display device 33 on the rear surface, the display device 34 on the left side surface, and the display device 35 on the right side surface; the display may also be performed on a window glass or the like.
  • the display devices 32 to 35 included in the notification device 31 in FIG. 2 are realized by an external light device 81, a projector device 86, a body light device 89, or a combination of the external light device 81, the projector device 86, and the body light device 89.
  • In the first embodiment, the display devices 32 to 35 are each realized by a body light device 89.
  • the sound device 92 is an acoustic device provided on the vehicle body of the own vehicle 1.
  • the sound device 92 notifies a target person existing around the outside of the own vehicle 1 of the state of the own vehicle 1, the intention of a current or future operation of the own vehicle 1, an advance notice of a future operation of the own vehicle 1, a warning, or the like. Further, the sound device 92 outputs a predetermined sound of a predetermined volume at a predetermined position on the surface of the vehicle body of the own vehicle 1, at a predetermined timing, and for a predetermined time.
  • the sound device 92 includes a sound driver 93 and a sound device 94.
  • the sound driver 93 is a drive device that drives the sound device 94.
  • the sound driver 93 causes the sound device 94 to output a predetermined sound of a predetermined volume at a predetermined position, at a predetermined timing, and for a predetermined time.
  • the sound device 94 is a device that generates sound from the vehicle body of the own vehicle 1 toward a target person existing outside the own vehicle 1.
  • the speaker 36 included in the notification device 31 in FIG. 2 is realized by the sound device 92.
  • FIG. 8 is a flowchart showing the operation of the display system 30 according to the first embodiment. The operation of the display system 30 will be described below with reference to FIG.
  • In step S1, the vehicle inside/outside information acquisition unit 27 acquires the vehicle inside/outside information including the blind spot information or the blind spot image from the overall control ECU 10.
  • Specifically, the vehicle inside/outside information acquisition unit 27 acquires, as the vehicle inside/outside information, the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot information or blind spot image of the target person 3.
  • In step S2, the vehicle inside/outside information acquisition unit 27 determines whether the blind spot information or the blind spot image included in the vehicle inside/outside information can be used as the blind spot image as it is. If the vehicle inside/outside information acquisition unit 27 determines that it can, the result is step S2: Yes. Specifically, for example, when the blind spot information or blind spot image included in the vehicle inside/outside information is a blind spot image acquired from the vehicle exterior camera that matches the line of sight of the target person 3, the result is step S2: Yes.
  • In this case, the vehicle inside/outside information acquisition unit 27 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image to the blind spot image acquisition unit 21, and the process proceeds to step S4.
  • If the blind spot information or the blind spot image included in the vehicle inside/outside information cannot be used as the blind spot image as it is, the result is step S2: No. This is the case, for example, when the blind spot information or blind spot image is radar information of the blind spot, when it is an image from the vehicle exterior camera covering a range wider than the blind spot, or when it does not match the line of sight of the target person 3.
  • In that case, the vehicle inside/outside information acquisition unit 27 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot information or blind spot image of the target person 3 to the blind spot image generation unit 28, and the process proceeds to step S3.
  • In step S3, the blind spot image generation unit 28 receives, from the vehicle inside/outside information acquisition unit 27, the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot information or blind spot image of the target person 3.
  • the blind spot image generation unit 28 generates or extracts a blind spot image from the blind spot information or the blind spot image of the subject 3.
  • Specifically, for example, when the blind spot information is radar information of the blind spot, the blind spot image generation unit 28 generates a blind spot image from the radar information by using the image parts in the blind spot image DB 24b. Further, for example, when the blind spot information or blind spot image included in the vehicle inside/outside information is an image from the vehicle exterior camera covering a range wider than the blind spot, the blind spot image generation unit 28 extracts the blind spot image corresponding to the blind spot. For example, when the blind spot information or blind spot image included in the vehicle inside/outside information does not match the line of sight of the target person 3, the blind spot image generation unit 28 generates a blind spot image that matches the line of sight of the target person 3.
  • the blind spot image generation unit 28 transmits the position information of the own vehicle 1, the position information of the target person 3, the line of sight information of the target person 3, and the blind spot image to the blind spot image acquisition unit 21, and proceeds to step S4.
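The branching of steps S2 and S3 can be sketched as follows. This is a minimal illustration in Python; the data container and its field names are assumptions for the sketch, since the embodiment does not define a data structure:

```python
from dataclasses import dataclass

@dataclass
class BlindSpotData:
    """Hypothetical container for the blind spot data acquired in step S1."""
    kind: str                          # "image" (exterior camera) or "radar"
    covers_exact_blind_spot: bool      # True if not wider than the blind spot
    matches_target_line_of_sight: bool # True if it matches the target person 3

def usable_as_is(data: BlindSpotData) -> bool:
    """Step S2: usable directly only if it is already an exterior-camera
    image of exactly the blind spot, matching the target's line of sight."""
    return (data.kind == "image"
            and data.covers_exact_blind_spot
            and data.matches_target_line_of_sight)

def route(data: BlindSpotData) -> str:
    """Step S2/S3 branching: unusable data (radar, too-wide image, or a
    line-of-sight mismatch) is routed to the generation unit 28 first."""
    return "to_acquisition_unit_21" if usable_as_is(data) else "to_generation_unit_28"
```

In this sketch, radar information or a mismatched image always takes the step S3 path, mirroring the three "step S2: No" cases listed above.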
  • In step S4, the blind spot image acquisition unit 21 acquires, from the vehicle inside/outside information acquisition unit 27 or the blind spot image generation unit 28, the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image.
  • the blind spot image acquisition unit 21 transmits the position information of the own vehicle 1, the position information of the target person 3, the line of sight information of the target person 3, and the blind spot image to the determination unit 22, and proceeds to step S5.
  • In step S5, the determination unit 22 receives the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image from the blind spot image acquisition unit 21.
  • the determination unit 22 stores predetermined range information.
  • For example, the range information is "within 5 meters from the own vehicle 1". The determination unit 22 determines whether the target person 3 exists within this predetermined range of 5 meters from the own vehicle 1. Specifically, for example, the determination unit 22 makes this determination based on the position information of the own vehicle 1 and the position information of the target person 3. Alternatively, the determination unit 22 may determine whether the target person 3 exists within the predetermined range from an image of the area within 5 meters of the own vehicle 1. A conventional technique may be used for this determination.
  • If the determination unit 22 determines that the target person 3 exists within the predetermined range of 5 meters from the own vehicle 1, the result is step S5: Yes, and the process proceeds to step S6. If the determination unit 22 determines that the target person 3 does not exist within the predetermined range, the result is step S5: No, and the operation ends.
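The position-based range check of step S5 can be sketched as follows. This is a simplified illustration assuming planar (x, y) positions in meters; the embodiment does not specify a coordinate system:

```python
import math

# The "within 5 meters" range information held by the determination unit 22.
RANGE_M = 5.0

def target_in_range(own_pos, target_pos, limit=RANGE_M):
    """Step S5: planar Euclidean distance between the own vehicle 1 and the
    target person 3, compared against the predetermined range."""
    dx = target_pos[0] - own_pos[0]
    dy = target_pos[1] - own_pos[1]
    return math.hypot(dx, dy) <= limit
```

A True result corresponds to step S5: Yes (proceed to step S6); False ends the operation.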
  • In step S6, the determination unit 22 determines whether an object such as another vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3. Specifically, for example, the determination unit 22 determines, by a conventional technique, whether an object such as another vehicle 2 appears in the blind spot image. Alternatively, for example, when the display system 30 acquires the position information of the other vehicle 2 and the map information, and the position indicated by the position information is included in the blind spot image, the determination unit 22 determines from the position information of the other vehicle 2, the map information, and the blind spot image whether an object such as another vehicle 2 exists in the blind spot.
  • Alternatively, when the display system 30 acquires the position information of the other vehicle 2 and the map information, the determination unit 22 may determine whether an object such as another vehicle 2 exists in the blind spot from the position information and map information of the other vehicle 2, the position information of the own vehicle 1, and the position information of the target person 3. A conventional technique may be used as the method by which the determination unit 22 makes this determination.
  • If the determination unit 22 determines that an object such as the other vehicle 2 exists in the blind spot, the result is step S6: Yes.
  • the determination unit 22 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image to the display image generation unit 23, and proceeds to step S7.
  • If no such object exists in the blind spot, the result is step S6: No, and the operation ends.
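The position-information variant of the step S6 determination can be sketched as a geometric containment test. Representing the blind spot as a polygon of (x, y) vertices is an illustrative assumption; the embodiment does not prescribe how the blind spot region is modeled:

```python
def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test. `poly` is a list of (x, y)
    vertices describing the blind spot region as seen from the target
    person 3; deriving that region is left to the embodiment."""
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Toggle on each edge the horizontal ray from `pt` crosses.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def object_in_blind_spot(other_vehicle_pos, blind_spot_region):
    """Step S6 (position/map variant): the other vehicle 2 is relevant
    only if its reported position lies inside the blind spot region."""
    return point_in_polygon(other_vehicle_pos, blind_spot_region)
```

A True result corresponds to step S6: Yes; False ends the operation.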
  • In step S7, the display image generation unit 23 receives the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image from the determination unit 22.
  • the display image generation unit 23 uses the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image to acquire, from the own vehicle image DB 24a, an image of the own vehicle 1 that matches the line of sight of the target person 3, and the process proceeds to step S8.
  • In step S8, the display image generation unit 23 generates a display image in which the image of the own vehicle 1 is superimposed on the blind spot image.
  • the display image generation unit 23 generates the display image so that it appears as if the target person 3 were actually viewing the own vehicle 1 from his or her own position and line of sight. Further, the display image generation unit 23 changes the display image to an easy-to-see magnification according to the distance between the target person 3 and the own vehicle 1.
  • For these processes, a conventional technique such as machine learning may be used.
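The distance-dependent magnification of step S8 can be sketched as follows. The breakpoints and scale factors are illustrative assumptions; the embodiment only states that the magnification changes with the distance between the target person 3 and the own vehicle 1 (and that it may be learned):

```python
def display_magnification(distance_m, near=1.0, far=10.0,
                          max_scale=1.5, min_scale=0.75):
    """Step S8: choose an easy-to-see magnification for the display image
    from the distance (meters) between the target person 3 and the own
    vehicle 1. Nearby targets get an enlarged image; distant targets a
    reduced one, interpolated linearly in between."""
    if distance_m <= near:
        return max_scale
    if distance_m >= far:
        return min_scale
    t = (distance_m - near) / (far - near)   # 0.0 at `near`, 1.0 at `far`
    return max_scale + t * (min_scale - max_scale)
```

The chosen factor would then scale the composited display image before it is handed to the condition giving unit 29.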
  • FIG. 9 is an example of the display image 7 before processing the image 8 of the own vehicle 1 according to the first embodiment.
  • the display image generation unit 23 generates the display image 7 by superimposing the image 8 of the own vehicle 1 on the blind spot image which is the image of the other vehicle 2.
  • the blind spot image may include not only the image of the other vehicle 2 but also another vehicle, an object such as a person, a background image, and the like.
  • In FIG. 9, the blind spot image, which is the image of the other vehicle 2, is hidden by the image 8 of the own vehicle 1 and cannot be seen.
  • After generating the display image, the display image generation unit 23 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the display image to the condition giving unit 29, and the process proceeds to step S9.
  • In step S9, the condition giving unit 29 receives the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the display image from the display image generation unit 23.
  • the condition giving unit 29 uses the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the display image to acquire, from the condition information DB 25, the notification device information of the notification device 31 (including the display device) to be used for notification, the notification pattern information, and the notification timing information.
  • Specifically, the condition giving unit 29 acquires, from the condition information DB 25, the notification device information, notification pattern information, and notification timing information corresponding to the identifier of the position information of the own vehicle 1, the identifier of the position information of the target person 3, the identifier of the line-of-sight information of the target person 3, and the identifier of the display image, which are the vehicle inside/outside information.
  • Alternatively, the condition giving unit 29 may use machine learning to acquire, from the condition information DB 25, the notification device information, notification pattern information, and notification timing information corresponding to the identifier of the position information of the own vehicle 1, the identifier of the position information of the target person 3, the identifier of the line-of-sight information of the target person 3, and the identifier of the blind spot image.
  • As a method for the condition giving unit 29 to acquire the notification device information, the notification pattern information, and the notification timing information from the condition information DB 25, a conventional technique may be used.
  • The notification device information, the notification pattern information, and the notification timing information need not be limited to one set; one set or a plurality of sets may be acquired.
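The condition information DB 25 lookup of step S9 can be sketched as a keyed table returning zero or more sets. The key structure, identifier strings, and entries below are assumptions for illustration only:

```python
# Hypothetical condition information DB 25: identifiers of the vehicle
# inside/outside information map to one or more notification sets.
CONDITION_DB = {
    ("own_pos_near_crosswalk", "target_front", "gaze_at_vehicle", "img_001"): [
        {"device": "display_32", "pattern": "shrink_own_vehicle", "timing": "immediate"},
        {"device": "speaker_36", "pattern": "alert_tone", "timing": "immediate"},
    ],
}

def lookup_conditions(own_id, target_id, gaze_id, image_id):
    """Step S9: return the (device, pattern, timing) sets for the given
    identifiers. Zero, one, or several sets may come back, matching the
    text's allowance for one set or a plurality of sets."""
    return CONDITION_DB.get((own_id, target_id, gaze_id, image_id), [])
```

An unknown identifier combination simply yields an empty list, i.e. no notification is conditioned.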
  • In the first embodiment, it is assumed that the condition giving unit 29 acquires, from the condition information DB 25, the notification device information corresponding to the identifier of the display device 32 provided on the front of the own vehicle 1, the notification pattern information that reduces the image of the own vehicle 1 to make the blind spot image easier to see, and the notification timing information indicating that the display device 32 immediately starts the notification; the process then proceeds to step S10.
  • In step S10, the condition giving unit 29 determines whether the notification pattern information is information for processing the display image. Specifically, for example, if the notification pattern information reduces the image of the own vehicle 1 to make the blind spot image easier to see, applies transmission (see-through) processing to the image of the own vehicle 1, renders the image of the own vehicle 1 as seen from a slightly shifted viewpoint, or renders only the outline of the own vehicle 1 as a solid line or a dotted line, the condition giving unit 29 determines that the notification pattern information processes the display image. In that case, the result is step S10: Yes, and the process proceeds to step S11.
  • In the first embodiment, since the acquired notification pattern information reduces the image of the own vehicle 1 to make the blind spot image easier to see, the condition giving unit 29 determines that the notification pattern information processes the display image; the result is step S10: Yes, and the process proceeds to step S11.
  • On the other hand, if the notification pattern information causes the control unit 26 to display in a manner that emphasizes the display image, for example notification pattern information for blinking the frame of the display device 32 in red or notification pattern information for emitting an alarm sound together with the display, the condition giving unit 29 determines that the notification pattern information does not process the display image. In that case, the result is step S10: No. The condition giving unit 29 transmits the notification device information, the notification pattern information, the notification timing information, and the display image to the control unit 26, and the process proceeds to step S12.
  • In step S11, the condition giving unit 29 processes the display image. Specifically, in the first embodiment, since the condition giving unit 29 has acquired from the condition information DB 25 the notification pattern information that reduces the image of the own vehicle 1 to make the blind spot image easier to see, it processes the image of the own vehicle 1.
  • Specifically, in accordance with the notification pattern information that reduces the image of the own vehicle 1 to make the blind spot image easier to see, the condition giving unit 29 reduces the image of the own vehicle 1 that was acquired from the own vehicle image DB 24a to match the line of sight of the target person 3.
  • For example, the condition giving unit 29 reduces the image of the own vehicle 1 based on a degree of reduction given in advance in the notification pattern information that reduces the image of the own vehicle 1 to make the blind spot image easier to see.
  • Alternatively, the condition giving unit 29 may determine the degree of reduction by machine learning and reduce the image of the own vehicle 1 based on the determined degree.
  • A conventional technique may be used for these reduction processes.
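The steps S10 and S11 above, i.e. deciding whether a notification pattern processes the display image and then applying the reduction, can be sketched as follows. The pattern identifiers and the default reduction degree are assumptions; the embodiment names the patterns only in prose:

```python
# Hypothetical identifiers for the patterns that alter the display image
# itself (step S10: Yes -> step S11).
PATTERNS_THAT_PROCESS_IMAGE = {
    "shrink_own_vehicle",           # reduce image 8 so blind spot image 9 shows
    "make_own_vehicle_transparent", # transmission (see-through) processing
    "shift_viewpoint",              # image from a slightly shifted viewpoint
    "outline_only",                 # solid or dotted outline of the own vehicle
}

def needs_image_processing(pattern: str) -> bool:
    """Step S10: emphasis-only patterns (red blinking frame, alarm sound)
    skip step S11 and go straight to step S12."""
    return pattern in PATTERNS_THAT_PROCESS_IMAGE

def shrink(width, height, degree=0.6):
    """Step S11 for the 'shrink' pattern: scale the own-vehicle image by a
    reduction degree given in advance (or determined by learning)."""
    return (round(width * degree), round(height * degree))
```

In the first embodiment's flow, `needs_image_processing("shrink_own_vehicle")` would be True, so the own-vehicle image is shrunk before the display image is sent to the control unit 26.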
  • FIG. 10 is an example of a display image 7 after processing the image 8 of the own vehicle 1 according to the first embodiment. Since the condition-imparting unit 29 reduces the image 8 of the own vehicle 1, the blind spot image, which is the image 9 of the other vehicle 2, becomes easy to see.
  • After processing the display image, the condition giving unit 29 transmits the notification device information, the notification timing information, and the display image to the control unit 26.
  • Specifically, in the first embodiment, the condition giving unit 29 transmits the notification device information corresponding to the identifier of the display device 32, the notification timing information indicating that the display device 32 immediately starts notification, and the display image to the control unit 26, and the process proceeds to step S12.
  • in the above example, the condition giving unit 29 acquires only the notification pattern information that reduces the image of the own vehicle 1 to make the blind spot image easier to see; however, in addition to that information, it may also acquire other notification pattern information, for example notification pattern information that emits an alarm sound together with the display. In that case, the condition giving unit 29 transmits that notification pattern information to the control unit 26 together with the notification device information, the notification timing information, and the display image.
  • further, regardless of whether the notification pattern information processes the display image, the condition giving unit 29 may acquire from the condition information DB 25 the notification device information that is the identifier of the speaker 36 provided at the front of the own vehicle 1, the notification pattern information for the sound of the speaker 36, and the notification timing information indicating that the speaker 36 starts notification immediately. In such a case, the condition giving unit 29 generates sound information, which is notification information to be output from the speaker 36, and transmits it to the control unit 26. The notification information may thus consist of one or more items, such as a display image and sound information.
  • in step S12, the control unit 26 receives the notification device information, the notification pattern information, the notification timing information, and the display image from the condition giving unit 29. If necessary, the control unit 26 also receives notification information other than the display image, such as sound information. In the first embodiment, the control unit 26 receives the notification device information that is the identifier of the display device 32, the notification timing information indicating that the display device 32 starts notification immediately, and the display image processed in step S11, and controls the display device 32 to immediately start displaying the display image.
  • FIG. 11 is an example of a view of the own vehicle 1 as seen from the target person 3 according to the first embodiment. Specifically, FIG. 11 is the view along the dotted arrow 5 in FIG. 1, that is, the own vehicle 1 as seen from the target person 3.
  • the display device 32 of the own vehicle 1 immediately starts displaying the display image 7 under the control of the control unit 26.
  • FIG. 12 is an example of a view of the own vehicle 1 according to the first embodiment as seen from directly in front. Specifically, FIG. 12 is the view along the dotted arrow 6 in FIG. 1, that is, the own vehicle 1 viewed from directly in front. As in FIG. 11, the display device 32 of the own vehicle 1 immediately starts displaying the display image 7 under the control of the control unit 26. However, since the display image 7 is an image of the scene as viewed from the target person 3, it differs from the scenery actually seen from directly in front of the own vehicle 1.
  • when the control unit 26 receives sound information for the speaker 36 as notification information, it may output the sound together with the display. For example, when the other vehicle 2 approaches the target person 3, the control unit 26 controls the speaker 36 so that the running sound of the approaching other vehicle 2 is emphasized and output. Further, when the control unit 26 receives notification pattern information that blinks the frame of the display device 32 in red, it may control the display device 32 to blink its frame together with the display so as to emphasize the display image.
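The routing in step S12, where the control unit 26 sends each piece of notification information to the device named by its notification device information at the indicated timing, could be sketched as follows. All names here (`Notification`, `dispatch`, the device identifiers) are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Notification:
    device_id: str    # notification device information, e.g. "display_32" or "speaker_36"
    timing: str       # notification timing information, e.g. "immediate"
    payload: object   # display image, sound information, etc.
    pattern: dict = field(default_factory=dict)  # e.g. {"blink_frame": "red"}

def dispatch(notifications, devices):
    """Route each notification to the device callback registered under
    its device identifier; 'immediate' notifications start at once."""
    started = []
    for n in notifications:
        dev = devices[n.device_id]
        if n.timing == "immediate":
            dev(n.payload, n.pattern)
            started.append(n.device_id)
    return started
```

A display image with a red blinking frame and an emphasized running sound would then simply be two notifications routed to two devices in the same pass.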
  • the display system 30 ends the operation.
  • after step S5: No, step S6: No, or step S12, the process returns to step S1, and the above processing is repeated until there is a trigger for termination, such as the power being turned off or an end operation being performed.
  • each step may also be executed repeatedly as an independent process.
  • as described above, in the first embodiment, the display image generation unit 23 generates a display image in which the blind spot image is superimposed on the image of the own vehicle 1, and the control unit 26 displays the display image on the display device provided on the own vehicle 1. Therefore, when the display system 30 of the first embodiment displays the blind spot image, it can present the positional relationship of the blind spot image to the target person 3 intuitively and in an easy-to-understand manner.
  • further, since the condition giving unit 29 reduces the image 8 of the own vehicle 1, the blind spot image, which is the image 9 of the other vehicle 2, becomes easier to see, and the visibility of objects such as the other vehicle 2 existing in the blind spot can be increased.
  • in the first embodiment, the notification pattern information reduces the image of the own vehicle 1 to make the blind spot image easier to see, and the condition giving unit 29 performs processing that reduces the image of the own vehicle 1. However, the condition giving unit 29 may instead apply transparency processing to the image of the own vehicle 1 using notification pattern information for transmission processing, process the image so that the own vehicle 1 appears to be viewed from a slightly shifted viewpoint, or process the image so that only the outline of the own vehicle 1 is drawn as a solid or dotted line. Even in these cases, the same effect as the first embodiment can be obtained.
  • in the first embodiment, after the display system 30 generates the display image, the condition giving unit 29 acquires the notification device information, the notification pattern information, and the notification timing information of the notification device 31 including the display device from the condition information DB 25, and then processes the image of the own vehicle 1. However, the condition giving unit 29 may first acquire this information from the condition information DB 25 and process the image of the own vehicle 1, and the display image may be generated afterwards. Even in this case, the same effect as the first embodiment can be obtained.
  • in the first embodiment, the notification pattern information reduces the image of the own vehicle 1 in the display image to make the blind spot image easier to see, and the condition giving unit 29 reduces the image of the own vehicle 1. In the second embodiment, the notification pattern information instead enlarges the image of the other vehicle 2 to make the blind spot image easier to see, and the condition giving unit 29 enlarges the image of the other vehicle 2 in the blind spot image.
  • the block diagram of the display system 30 including the display control device 20 according to the second embodiment of the present disclosure is the same as that of FIG. 2 of the first embodiment. Further, the hardware configuration diagram of the in-vehicle system 100 of the own vehicle 1 according to the second embodiment is the same as that of FIG. 7.
  • FIG. 13 is a flowchart showing the operation of the display system 30 according to the second embodiment of the present disclosure. The operation of the display system 30 will be described below with reference to FIG.
  • Steps S1 to S8 are the same as steps S1 to S8 of the first embodiment.
  • in step S19 of the second embodiment, the condition giving unit 29 acquires from the condition information DB 25 the notification pattern information that enlarges the image of the other vehicle 2 to make the blind spot image easier to see, instead of the notification pattern information that reduces the image of the own vehicle 1. Otherwise, step S19 is the same as step S9 of the first embodiment.
  • in step S20 of the second embodiment, since the notification pattern information enlarges the image of the other vehicle 2 to make the blind spot image easier to see, the condition giving unit 29 determines that the notification pattern information processes the display image (step S20: Yes) and proceeds to step S21. Otherwise, step S20 is the same as step S10 of the first embodiment.
  • in step S21, the condition giving unit 29 acquires from the condition information DB 25 the notification pattern information that enlarges the image of the other vehicle 2 to make the blind spot image easier to see, and processes the image of the other vehicle 2 accordingly.
  • for example, the condition giving unit 29 enlarges the image of the other vehicle 2, which is the blind spot image, using the notification pattern information that enlarges the image of the other vehicle 2 to make the blind spot image easier to see. The condition giving unit 29 enlarges the image of the other vehicle 2 based on a degree of enlargement given in advance in the notification pattern information. Alternatively, the condition giving unit 29 may determine the degree of enlargement by machine learning and enlarge the image of the other vehicle 2 accordingly. A known technique may be used as the method for determining the degree of enlargement.
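The enlargement in step S21 can likewise be illustrated by nearest-neighbour upscaling of the other-vehicle image. This is a hypothetical sketch with a fixed integer factor; as noted above, the actual degree of enlargement may come from the notification pattern information or be determined by machine learning.

```python
def enlarge_image(pixels, factor):
    """Upscale a 2D pixel grid by an integer factor using
    nearest-neighbour replication: each pixel becomes a
    factor x factor block."""
    if factor < 1:
        raise ValueError("factor must be >= 1")
    out = []
    for row in pixels:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out
```

The enlarged other-vehicle image would then be composited into the display image in place of the original, making the object in the blind spot occupy more of the display.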
  • in the above example, the condition giving unit 29 acquires only the notification pattern information that enlarges the image of the other vehicle 2 to make the blind spot image easier to see; however, in addition to that information, it may also acquire other notification pattern information, for example notification pattern information that emits an alarm sound together with the display. In that case, the condition giving unit 29 transmits that notification pattern information to the control unit 26 together with the notification device information, the notification timing information, and the display image.
  • step S21 is the same as step S11 of the first embodiment.
  • Step S12 is the same as step S12 of the first embodiment.
  • after step S5: No, step S6: No, or step S12, the process returns to step S1, and the above processing is repeated until there is a trigger for termination, such as the power being turned off or an end operation being performed.
  • each step may also be executed repeatedly as an independent process.
  • as described above, in the second embodiment, the display image generation unit 23 generates a display image in which the blind spot image is superimposed on the image of the own vehicle 1, and the control unit 26 displays the display image on the display device provided on the own vehicle 1. Therefore, when the display system 30 of the second embodiment displays the blind spot image, it can present the positional relationship of the blind spot image to the target person 3 intuitively and in an easy-to-understand manner.
  • further, since the condition giving unit 29 enlarges the image of the other vehicle 2, the blind spot image, which is the image of the other vehicle 2, becomes easier to see, and the visibility of objects such as the other vehicle 2 existing in the blind spot can be improved.
  • in the second embodiment, the notification pattern information enlarges the image of the other vehicle 2 to make the blind spot image easier to see, and the condition giving unit 29 performs processing that enlarges the image of the other vehicle 2, which is the blind spot image. However, the condition giving unit 29 may instead enlarge at least one part of the blind spot image using notification pattern information that enlarges the size of at least one part of the blind spot image, or shift the position of at least one part of the blind spot image so as to reduce its overlap with the image of the own vehicle 1, using notification pattern information for shifting the position. Further, the condition giving unit 29 may perform processing that emphasizes the image of the other vehicle 2, which is the blind spot image, for example by attaching an arrow to the image of an object such as the other vehicle 2, emphasizing the outline of the object with a conspicuous color such as red or yellow, or changing the color of the object.
  • in the second embodiment, after the display system 30 generates the display image, the condition giving unit 29 acquires the notification device information, the notification pattern information, and the notification timing information of the notification device 31 including the display device from the condition information DB 25, and then processes the image of the other vehicle 2, which is the blind spot image. However, the condition giving unit 29 may first acquire this information from the condition information DB 25 and process the image of the other vehicle 2, and the display image may be generated afterwards. Even in this case, the same effect as the second embodiment can be obtained.
  • in the first and second embodiments, the control unit 26 receives the notification pattern information that blinks the frame of the display device 32 in red and, to make the display device emphasize the display image, controls the frame of the display device 32 to blink in red together with the display. However, the control unit 26 may instead blink at least one part of the display image to emphasize it; blinking the frame of the display device 32 is one example of blinking at least one part of the display. By controlling the display device 32 to blink its frame or to blink at least one part of the display image, the control unit 26 causes the display image to be displayed in an emphasized manner, which can further attract the attention of the target person 3.
  • in step S5, the determination unit 22 determines whether or not the target person 3 exists within the predetermined range of the own vehicle 1. By not performing the display when the target person 3 does not exist within the predetermined range, the display system 30 can contribute to energy saving.
  • when the display image is not displayed, the display system 30 may instead display characters for communicating with the target person 3 such as "please go first", "thank you", and "pause", display an advertisement, or, when the own vehicle 1 is a taxi, display its status such as reserved or vacant.
  • alternatively, in order to eliminate the blind spot caused by the own vehicle 1, the display system 30 may display on each display device the image of the scene on the opposite side of that display device.
  • in step S6, the determination unit 22 determines whether or not an object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3, and determines that the display image is to be displayed to the target person 3 when such an object exists. Since the display image is displayed on the display device, the target person 3 can predict some change in the traffic environment in the direction of the own vehicle 1. Specifically, for example, since the target person 3 can recognize that the display has started as a warning about the other vehicle 2, the situation can be grasped earlier.
  • further, if the display system 30 uses the notification pattern information to apply additional processing, such as adjusting the shape, color, size, brightness, or position of the image displayed on the display devices 32 to 35, or the volume, pitch, and type of sound output from the speaker 36, it can appeal to the visual and auditory senses of the target person 3, so that the situation can be grasped even more quickly.
  • in the first and second embodiments, when the notification pattern information includes information that blinks the frame of the display device 32 in red, the control unit 26 blinks the frame of the display device 32 in red after the display image generation unit 23 generates the display image. However, the notification pattern information may instead blink the image of an object such as the other vehicle 2 in the blind spot image, in which case, after the display image generation unit 23 generates the display image, the control unit 26 blinks the image of the object such as the other vehicle 2 in the blind spot image.
  • in the first and second embodiments, the display image generation unit 23 acquires an image of the own vehicle 1 that matches the line of sight of the target person 3 and generates a display image by superimposing the blind spot image, also matched to the line of sight of the target person 3, on the image of the own vehicle 1. Therefore, the display system 30 can display the range of the blind spot image in a way the target person 3 can grasp intuitively and easily.
  • the display image generation unit 23 may also use the position information of the own vehicle 1 and the position information of the target person 3, which are vehicle interior/exterior information, to acquire from the own vehicle image DB 24a an image of the own vehicle 1 as seen from the eye height of the target person 3 and matched to the line of sight of the target person 3. In that case, the display image generation unit 23 may superimpose the image of the own vehicle 1 on the blind spot image as seen from the eye height of the target person 3, matched to the line of sight of the target person 3, and generate a display image of the own vehicle 1 as seen from the line of sight and eye height of the target person 3. Here, the vehicle interior/exterior information acquisition unit 27 corresponds to an eye height information acquisition unit.
  • in this case, the own vehicle image DB 24a stores an image of the own vehicle 1 for each line of sight and each eye height of the target person 3, and the display image generation unit 23 uses the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, the eye height information of the target person 3, and the blind spot image to acquire from the own vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight and eye height of the target person 3.
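One plausible way to organize an own-vehicle image DB keyed by line of sight and eye height is to store pre-rendered images in discrete buckets and snap the measured gaze direction and eye height to the nearest stored key. The bucket sizes and key layout below are illustrative assumptions only; the patent does not specify how DB 24a is indexed.

```python
def quantize(value, step):
    """Snap a continuous measurement to the nearest stored bucket."""
    return round(value / step) * step

def lookup_vehicle_image(db, gaze_deg, eye_height_m,
                         gaze_step=15, height_step=0.2):
    """Fetch the pre-rendered own-vehicle image whose stored viewing
    angle and eye height are closest to the target person's gaze
    direction (degrees) and eye height (meters)."""
    key = (quantize(gaze_deg, gaze_step) % 360,
           round(quantize(eye_height_m, height_step), 1))
    return db[key]
```

A child and an adult looking from the same direction would then resolve to different stored renderings, which is the effect the eye-height variant of the DB is after.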
  • alternatively, the own vehicle image DB 24a may store images of the own vehicle 1 for each line of sight of the target person 3 and for each attribute such as man, woman, or child. The display image generation unit 23 then determines the attribute from the eye height information of the target person 3 and uses the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, the blind spot image, and the attribute information to acquire from the own vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight and eye height of the target person 3.
  • as a further alternative, the display image generation unit 23 may acquire from the own vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight of the target person 3, using the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image, and then process the acquired image of the own vehicle 1 using the eye height information of the target person 3.
  • by displaying a display image of the own vehicle 1 that matches not only the line of sight but also the eye height of the target person 3, the target person 3 can recognize the notification information more easily. For example, if a display image rendered from an adult male's eye height is displayed on the display device, a child may find it difficult to recognize because it is from a height the child has never seen; taking the eye height into consideration makes it easier to recognize.
  • in the first and second embodiments, the line-of-sight information of the target person 3 is acquired from the overall control ECU 10 by the vehicle interior/exterior information acquisition unit 27. However, assuming that the target person 3 is looking at the own vehicle 1, the determination unit 22 may instead calculate the line-of-sight information of the target person 3 from the position information of the own vehicle 1 and the position information of the target person 3.
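That fallback calculation, deriving the line of sight from the two positions under the assumption that the target person 3 is looking at the own vehicle 1, amounts to a bearing computation. A minimal sketch, with an assumed planar coordinate convention (degrees, 0 = +x axis, counter-clockwise):

```python
import math

def line_of_sight(subject_pos, vehicle_pos):
    """Assuming the target person is looking at the own vehicle,
    derive a gaze bearing in degrees from the two 2D positions
    instead of reading it from a gaze sensor."""
    dx = vehicle_pos[0] - subject_pos[0]
    dy = vehicle_pos[1] - subject_pos[1]
    if dx == 0 and dy == 0:
        raise ValueError("positions coincide; bearing undefined")
    return math.degrees(math.atan2(dy, dx)) % 360
```

The resulting bearing can then feed the same image-selection path as sensed line-of-sight information.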
  • in the first and second embodiments, the determination unit 22 determines, from the position information of the own vehicle 1 and the position information of the target person 3, whether or not the target person 3 is within a predetermined range of 5 meters from the own vehicle 1; however, the range may be any distance, such as 3 meters or 10 meters. Further, although the predetermined range is described as within 5 meters of the own vehicle 1, it may have any shape, such as a predetermined rectangle surrounding the own vehicle 1 or a radius of 5 meters from the display device 32.
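The predetermined-range check of step S5, with either a circular or a rectangular range, might look like the following sketch. The 5-meter default and the rectangle half-widths are the example values from the text, not fixed requirements.

```python
import math

def within_circle(vehicle_pos, subject_pos, radius_m=5.0):
    """Predetermined range as a circle of radius_m around the vehicle."""
    return math.dist(vehicle_pos, subject_pos) <= radius_m

def within_rect(vehicle_pos, subject_pos, half_w=5.0, half_l=5.0):
    """Alternative predetermined range: an axis-aligned rectangle
    centred on the vehicle."""
    dx = abs(subject_pos[0] - vehicle_pos[0])
    dy = abs(subject_pos[1] - vehicle_pos[1])
    return dx <= half_w and dy <= half_l
```

Swapping the range shape is then a one-line change in the determination logic, which matches the text's remark that any shape may be used.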
  • in step S5, the determination unit 22 determines from the position information of the own vehicle 1 and the position information of the target person 3 whether or not the target person 3 exists within the predetermined range of 5 meters from the own vehicle 1, and in step S6, the determination unit 22 determines whether or not an object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3, thereby deciding whether or not to display the display image to the target person 3. In addition, the determination unit 22 may predict the motion of the own vehicle 1 from gear change information, steering angle information, and the like, and the motion of the other vehicle 2 from the speed information of the other vehicle 2, the position information of the other vehicle 2, turn-signal operation information, and the like, and use these predictions to decide whether or not to display the display image to the target person 3. The determination unit 22 may also decide whether or not to display the display image using the speed information and traveling direction information of the other vehicle 2.
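Combining the determinations of steps S5 and S6 with an optional behaviour prediction yields a small decision function. This is an illustrative simplification of the determination unit 22's logic, not the patent's implementation:

```python
def should_display(subject_in_range, objects_in_blind_spot,
                   predicted_danger=None):
    """Decide whether to show the display image: the target person
    must be in the predetermined range (step S5) and some object
    must occupy the blind spot (step S6); an optional behaviour
    prediction can veto the display when it foresees no danger."""
    if not subject_in_range:
        return False
    if not objects_in_blind_spot:
        return False
    if predicted_danger is False:  # prediction explicitly says "safe"
        return False
    return True
```

Leaving `predicted_danger` as `None` reproduces the basic two-condition behaviour; passing a prediction result adds the motion-prediction refinement described above.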
  • further, the condition giving unit 29 may change or process the generated display image in accordance with the vehicle interior/exterior information. In this way, the target person 3 can easily recognize the situation of an object such as the other vehicle 2 in the blind spot and its changes.
  • in step S5, the determination unit 22 determines from the position information of the own vehicle 1 and the position information of the target person 3 whether or not the target person 3 exists within the predetermined range of 5 meters from the own vehicle 1, and determines that the display image is to be displayed when the target person 3 exists. However, when the target person 3 leaves the predetermined range of 5 meters from the own vehicle 1 during the display, or when the target person 3 stops without crossing the pedestrian crossing 4, the determination unit 22 may end the display and determine that the display image is not to be displayed. When the determination unit 22 determines that the display is not to be performed, the control unit 26 controls the display device not to display the display image.
  • although the determination unit 22 determines in step S5 that the display image is to be displayed when the target person 3 exists within the predetermined range, it may display a display image matched to a predetermined line of sight even when no target person 3 is present. In this way, even when the target person 3 is in a position where the determination unit 22 cannot detect the target person 3, the target person 3 can grasp the situation of the blind spot early.
  • in step S6, the determination unit 22 determines whether or not an object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3, and decides whether or not to display the display image. However, even when such an object exists in the blind spot, the determination unit 22 may determine that the display image is not to be displayed if it predicts from the behavior of the object that there is no danger to the target person 3. Further, while the display image is being displayed, the determination unit 22 may end the display and determine that the display image is no longer to be displayed to the target person 3. When the determination unit 22 determines that the display is not to be performed, the control unit 26 controls the display device not to display the display image.
  • conversely, the determination unit 22 may display the display image even when no object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3. In this way, the target person 3 can easily confirm that there is no danger in the surroundings, which gives the target person 3 peace of mind.
  • in the first and second embodiments, the determination unit 22 determines the existence of the target person 3 in step S5 and then determines in step S6 whether an object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3, thereby deciding whether to display the display image. However, the display system 30 may perform step S5 and step S6 in either order, and the determination unit 22 may make its determinations at any point before the display image is displayed on the display device in step S12.
  • in the first and second embodiments, the blind spot image acquisition unit 21 acquires the blind spot image in step S4; however, it may acquire the blind spot image at another point, such as after step S5, after step S6, or after step S7. For example, the blind spot image acquisition unit 21 may acquire the blind spot image after the condition giving unit 29 acquires the notification device information, the notification pattern information, and the notification timing information of the notification device 31 including the display device from the condition information DB 25.
  • the present disclosure can also be applied while the own vehicle 1 is parked or moving. For example, the display system 30 may start the display while the own vehicle 1 is still traveling, several tens of meters before the pedestrian crossing 4.
  • in the first and second embodiments, the case where the target person 3 is a pedestrian trying to cross in front of the stopped own vehicle 1 has been described; however, the present disclosure is applicable when the target person 3 is moving and also when the target person 3 is stopped. Even when the target person 3 is stopped, the other vehicle 2 may rush toward the target person 3; in that case, the target person 3 can be alerted by displaying the display image of the present disclosure.
  • further, if the vehicle interior/exterior information acquisition unit 27 acquires other vehicle information indicating information on the other vehicle 2 existing in the blind spot, and the condition giving unit 29 changes the processing according to the other vehicle information, the target person 3 can be alerted more effectively. For example, the vehicle interior/exterior information acquisition unit 27 acquires, as other vehicle information, the speed information, traveling direction information, and the like of the other vehicle 2 existing in the blind spot; the determination unit 22 predicts the future motion of the other vehicle 2 from the speed information, the traveling direction information, and the like; and the condition giving unit 29 changes the processing according to the other vehicle information. Here, the vehicle interior/exterior information acquisition unit 27 corresponds to an other vehicle information acquisition unit.
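A simple form of the motion prediction from the other vehicle's speed and traveling-direction information is linear extrapolation, here combined with a crude time-to-reach search toward the target person's position. The threshold, horizon, and step are assumed example values; the patent leaves the prediction method open.

```python
import math

def predict_position(pos, speed_mps, heading_deg, dt_s):
    """Linearly extrapolate the other vehicle's 2D position from its
    speed (m/s) and traveling direction (degrees, 0 = +x axis)."""
    rad = math.radians(heading_deg)
    return (pos[0] + speed_mps * math.cos(rad) * dt_s,
            pos[1] + speed_mps * math.sin(rad) * dt_s)

def time_to_reach(pos, speed_mps, heading_deg, target,
                  threshold_m=2.5, horizon_s=10.0, step_s=0.1):
    """Return the earliest time (s) at which the extrapolated path
    passes within threshold_m of the target, or None within the
    prediction horizon."""
    steps = int(horizon_s / step_s)
    for i in range(steps + 1):
        t = i * step_s
        p = predict_position(pos, speed_mps, heading_deg, t)
        if math.dist(p, target) <= threshold_m:
            return round(t, 1)
    return None
```

A short predicted time to reach the target person could then drive a stronger notification pattern (louder sound, blinking emphasis) than a long or absent one.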
  • the present disclosure can also be applied when there are a plurality of target persons 3. For example, suppose the own vehicle 1 is provided with the display devices 32 to 35 on the front, rear, left, and right of the vehicle body, one pedestrian is in front of the own vehicle 1, and another pedestrian is on the right side of the own vehicle 1. In this case, the determination unit 22 determines whether or not to display a display image for each pedestrian, and the display image generation unit 23 generates a display image for each pedestrian. The control unit 26 causes the display device 32 provided on the front of the own vehicle 1 to display the display image generated for the pedestrian in front of the own vehicle 1, and causes the display device 35 provided on the right side of the own vehicle 1 to display the display image generated for the pedestrian on the right side of the own vehicle 1.
  • the determination unit 22 may be used. , Determine whether to display the display image for each pedestrian.
  • the determination unit 2 determines that the displayed image is to be displayed, and determines one pedestrian who is likely to take the most dangerous action when trying to cross the pedestrian crossing 4 even though there is another vehicle 2 in the blind spot.
  • when the determination unit 22 determines that the display image is to be displayed, the display image generation unit 23 generates a display image for the pedestrian who is most likely to take the dangerous action.
  • the control unit 26 displays a display image on a display device 32 provided on the front surface of the own vehicle 1.
  • the determination unit 22 decides to display the display image to the pedestrian who will be in danger the earliest. In that case, the display image matched to the line of sight of that pedestrian is displayed on the display device; since the display itself can also be confirmed by the other pedestrians, it alerts them as well. Further, if the speaker 36 is used to announce the display together with it, more attention can be drawn.
  • the determination unit 22 may determine a priority for each pedestrian, for example by machine learning, and determine the pedestrian with the highest priority as the pedestrian to whom the display image is displayed.
  • the display system 30 may display a display image according to the line of sight from the average position of the plurality of pedestrians, or may display a display image as viewed from directly in front of, directly behind, the center of the left side surface of, the center of the right side surface of, or directly above the own vehicle 1, or the like.
  • the display system 30 may determine, from the line-of-sight information of the target person 3, whether the target person 3 has noticed the display image displayed on the display device. If the target person 3 has not noticed the image, the condition giving unit 29 may process at least a part of the display image, and the control unit 26 may display the processed display image on the display device. By doing so, the probability that the target person 3 confirms the display image increases, and the visibility for the target person 3 can be improved.
  • the case where the display system 30 displays the display image has been described, but the display system 30 may combine the display with one or more of outputting a horn (not shown), outputting light from the headlight 72, and outputting sound from the speaker 36. Specifically, for example, when the display system 30 displays the display image and the target person 3 does not notice it, the determination unit 22 determines whether or not to output the horn provided in the own vehicle, whether or not to output light from the headlight 72 provided in the own vehicle, whether or not to output sound from the speaker 36 provided in the own vehicle, and the like.
  • when the determination unit 22 determines to output the horn, to output light from the headlight 72, to output sound from the speaker 36, or the like, the control unit 26 displays the display image on the display device and causes the horn to be output, light to be output from the headlight 72, sound to be output from the speaker 36, and the like.
  • the display system 30 continues to output the horn, output light from the headlight 72, output sound from the speaker 36, and the like until the target person 3 notices the display image, so that the probability that the target person 3 confirms the display image increases and the visibility for the target person 3 can be improved.
  • when the display system 30 displays the display image, it may output the horn, output light from the headlight 72, output sound from the speaker 36, and the like together with the display.
  • the display system 30 may link the turning on and off of the headlight 72 with the display, for example by flashing the headlight (passing) to make the target person 3 notice the display and to call attention. Further, for example, when the display image is difficult to see from the target person 3 because light is being output from the headlight 72, the display system 30 may dim or turn off the headlight 72.
  • the present disclosure can be applied to a crowded parking lot, parking on the shoulder, and the like.
  • the vehicle interior/exterior information acquisition unit 27 acquires the vehicle interior/exterior information from the overall control ECU 11, but the vehicle interior/exterior information acquisition unit 27 may acquire the vehicle interior/exterior information directly from each sensor.
  • the case where the own vehicle image DB 24a, the blind spot image DB 24b, and the condition information DB 25 are provided in the display control device 20 has been described, but they may instead be stored in another server or the like, and the display control device 20 may acquire them from the other server or the like by using the communication device 60 or the like.
  • the own vehicle 1 has been described as an automobile, but it may be an autonomous vehicle or a vehicle driven by a person.
  • in an autonomous vehicle, the driver does not always grasp the driving situation, so it becomes possible to support the target person 3 on behalf of the driver so that the target person 3 can act safely.
  • the own vehicle 1 is not limited to an automobile, but may be a forklift, a ship, an airplane, a bicycle, a motorcycle, or the like.
  • the target person 3 has been described as a pedestrian, but may instead be a driver of a vehicle such as a bicycle, a motorcycle, an automobile, or a forklift.
  • the display control device, the display system, the display control method, and the display control program shown in the above-described embodiments are merely examples; they can be appropriately combined with other devices and are not limited to the configuration of a single embodiment.
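The per-pedestrian priority scheme in the remarks above can be illustrated with a minimal sketch. This is a hedged illustration, not part of the disclosure: the priority scores and the side-to-device mapping are hypothetical stand-ins (the embodiment leaves the actual scoring to, e.g., machine learning by the determination unit 22).

```python
# Hypothetical sketch: pick the highest-priority pedestrian and the display
# device (32: front, 33: rear, 34: left, 35: right) facing that pedestrian.

def select_target(pedestrians):
    """pedestrians: list of dicts with 'side' and 'priority' (higher = more at risk)."""
    return max(pedestrians, key=lambda p: p["priority"])

SIDE_TO_DEVICE = {"front": 32, "rear": 33, "left": 34, "right": 35}

def device_for(pedestrian):
    """Map the side of the vehicle the pedestrian faces to a display device id."""
    return SIDE_TO_DEVICE[pedestrian["side"]]

peds = [
    {"id": "A", "side": "front", "priority": 0.9},  # about to cross in front
    {"id": "B", "side": "right", "priority": 0.4},
]
target = select_target(peds)
print(target["id"], device_for(target))  # A 32 (the front display is used)
```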
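The "escalate until noticed" behaviour described in the remarks above can be sketched as follows. Again a hedged illustration: the gaze check and actuator calls are placeholders for the line-of-sight judgement by the determination unit 22 and for the horn, headlight 72, and speaker 36.

```python
# Hypothetical escalation loop: keep adding notification means (re-processed
# display, horn, headlight, speaker) until the target person notices the image.

ESCALATION_STEPS = ["process_image", "horn", "headlight", "speaker"]

def alert_until_noticed(has_noticed, act, max_rounds=4):
    """has_noticed(): gaze-based check; act(step): trigger one notification means.
    Returns the list of steps actually taken."""
    taken = []
    for step in ESCALATION_STEPS[:max_rounds]:
        if has_noticed():
            break
        act(step)
        taken.append(step)
    return taken

# Example: the pedestrian notices the display only after the horn sounds.
noticed = {"flag": False}

def gaze_check():
    return noticed["flag"]

def actuator(step):
    if step == "horn":
        noticed["flag"] = True  # the horn gets the pedestrian's attention

steps = alert_until_noticed(gaze_check, actuator)
print(steps)  # ['process_image', 'horn']
```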

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)

Abstract

Prior-art display control devices have had a problem in that if a blind spot image is displayed to a pedestrian, it is difficult for the pedestrian to recognize what range is being displayed. This display control device (20) is provided with a blind spot image acquisition unit (21) that acquires a blind spot image of a blind spot caused by a host vehicle when viewed from a subject present outside the host vehicle, a display image generation unit (23) that generates a display image in which the image of the host vehicle is superimposed over the blind spot image, and a control unit (26) that displays a display image on display devices (32–35) provided in the host vehicle.

Description

Display control device, display system, display control method, and display control program
 The present disclosure relates to a display control device, a display system, a display control method, and a display control program that perform control when displaying, on a display device, a blind spot image showing the situation in a blind spot caused by the own vehicle.
 When driving a vehicle such as an automobile, the own vehicle may create a blind spot for a target person, such as a pedestrian, present in the vicinity of the own vehicle.
 Patent Document 1 discloses a device that captures images of the surroundings of the own vehicle with a camera and displays, toward the surroundings of the own vehicle on a display device, information on the field of view that becomes a blind spot for a target person such as a nearby pedestrian due to the presence of the own vehicle. The device disclosed in Patent Document 1 includes an image pickup means that is provided in the own vehicle and captures images of the surroundings of the own vehicle, and a display control means that causes an image display means, provided on at least a part of the own vehicle, to display an image of the field of view that becomes a blind spot at the position where the image display means is provided.
Japanese Unexamined Patent Publication No. 2002-87188
 However, the conventional display control device described above has a problem in that, even if a blind spot image is displayed to a pedestrian, it is difficult for the pedestrian to recognize what range is being displayed.
 The present disclosure has been made to solve the above problem, and an object thereof is to provide a display control device, a display system, a display control method, and a display control program that, when displaying a blind spot image, can show the target person, intuitively and clearly, which blind spot range is being displayed.
 The display control device according to the present disclosure includes a blind spot image acquisition unit that acquires a blind spot image showing a blind spot caused by the own vehicle as seen from a target person present outside the own vehicle, a display image generation unit that generates a display image in which an image of the own vehicle is superimposed on the blind spot image, and a control unit that displays the display image on a display device provided in the own vehicle.
 The display system according to the present disclosure includes a display device provided in the own vehicle, a blind spot image acquisition unit that acquires a blind spot image showing a blind spot caused by the own vehicle as seen from a target person present outside the own vehicle, a display image generation unit that generates a display image in which an image of the own vehicle is superimposed on the blind spot image, and a control unit that displays the display image on the display device.
 The display control method according to the present disclosure includes a step of acquiring a blind spot image showing a blind spot caused by the own vehicle as seen from a target person present outside the own vehicle, a step of generating a display image in which an image of the own vehicle is superimposed on the blind spot image, and a step of displaying the display image on a display device provided in the own vehicle.
 The display control program according to the present disclosure causes a computer to execute a process of acquiring a blind spot image showing a blind spot caused by the own vehicle as seen from a target person present outside the own vehicle, a process of generating a display image in which an image of the own vehicle is superimposed on the blind spot image, and a process of displaying the display image on a display device provided in the own vehicle.
 According to the present disclosure, the display image generation unit generates a display image in which the image of the own vehicle is superimposed on the blind spot image, and the control unit displays the display image on the display device provided in the own vehicle, so that when the blind spot image is displayed, it is possible to show the target person, intuitively and clearly, which blind spot range is being displayed.
FIG. 1 is a diagram showing an example of a situation where there is a pedestrian crossing on a road with two lanes on each side.
FIG. 2 is a block diagram of a display system including the display control device according to Embodiment 1.
FIG. 3 is a front view of the own vehicle according to Embodiment 1.
FIG. 4 is a rear view of the own vehicle according to Embodiment 1.
FIG. 5 is a left side view of the own vehicle according to Embodiment 1.
FIG. 6 is a right side view of the own vehicle according to Embodiment 1.
FIG. 7 is a hardware configuration diagram of the in-vehicle system of the own vehicle according to Embodiment 1.
FIG. 8 is a flowchart showing the operation of the display system according to Embodiment 1.
FIG. 9 is an example of a display image before processing the image of the own vehicle according to Embodiment 1.
FIG. 10 is an example of a display image after processing the image of the own vehicle according to Embodiment 1.
FIG. 11 is an example of a view of the own vehicle as seen from the target person according to Embodiment 1.
FIG. 12 is an example of a view of the own vehicle according to Embodiment 1 as seen from directly in front.
FIG. 13 is a flowchart showing the operation of the display system according to Embodiment 2.
Embodiment 1.
FIG. 1 is a diagram showing an example of a situation where a pedestrian crossing 4 is located on a road with two lanes on each side. In Embodiment 1, as an example, consider the case shown in FIG. 1, in which the own vehicle 1, an automobile, is traveling on a road with two lanes on each side. The own vehicle 1 has stopped in front of the pedestrian crossing 4, and a pedestrian 3, as the target person, is about to cross in front of the stopped own vehicle 1. Further, another vehicle 2, also an automobile, is about to overtake the own vehicle 1 from behind in the lane next to the own vehicle 1, on the far side of the own vehicle 1 as seen from the pedestrian 3. The application of the present disclosure to this situation, in which the range where the other vehicle 2 is present becomes a blind spot for the pedestrian 3 because of the own vehicle 1 and the other vehicle 2 cannot be seen, is described below.
 FIG. 2 is a block diagram of the display system 30 including the display control device 20 according to Embodiment 1. The display system 30 includes a notification device 31 provided in the own vehicle 1, and a display control device 20 that determines notification information, which is information to be issued, using vehicle interior/exterior information acquired from an overall control ECU (Electronic Control Unit) 10 that controls the entire own vehicle 1, and that controls the notification device 31 to issue the notification information. Here, the vehicle interior/exterior information includes blind spot information or a blind spot image; these will be described later.
 The notification device 31 is a device that notifies a target person present outside the own vehicle 1 of notification information from the own vehicle 1. For example, the notification device 31 notifies a target person present around the own vehicle 1 of the state of the own vehicle 1, advance notice and intention of the current or future operation of the own vehicle 1, a warning, or the like. The notification device 31 includes display devices 32 to 35 and a speaker 36; however, the notification device 31 only needs to include at least one display device.
 FIG. 3 is a front view of the own vehicle 1 according to Embodiment 1. In Embodiment 1, the display device 32 is provided on the front surface of the own vehicle 1, and the speaker 36 is provided above the display device 32 on the front surface of the own vehicle 1.
 FIG. 4 is a rear view of the own vehicle 1 according to Embodiment 1. In Embodiment 1, the display device 33 is provided on the rear surface of the own vehicle 1.
 FIG. 5 is a left side view of the own vehicle 1 according to Embodiment 1. In Embodiment 1, the display device 34 is provided on the left side surface of the own vehicle 1.
 FIG. 6 is a right side view of the own vehicle 1 according to Embodiment 1. In Embodiment 1, the display device 35 is provided on the right side surface of the own vehicle 1.
 Returning to FIG. 2, the overall control ECU 10 acquires vehicle interior/exterior information, including blind spot information or blind spot images, from various sensors inside and outside the own vehicle 1, and based on the acquired information, sends instructions to the hardware in the own vehicle 1 to control that hardware.
 The display control device 20 includes a vehicle interior/exterior information acquisition unit 27 that acquires vehicle interior/exterior information, including blind spot information or blind spot images, from the overall control ECU 10. The display control device 20 also includes a blind spot image DB (Database) 24b that stores image parts used when generating a blind spot image from blind spot information, and a blind spot image generation unit 28 that generates a blind spot image from the blind spot information using the blind spot image DB 24b.
 The display control device 20 further includes a blind spot image acquisition unit 21 that acquires a blind spot image, and a determination unit 22 that determines whether or not to issue the blind spot image, the vehicle interior/exterior information, and the like from the notification device 31. The display control device 20 also includes an own vehicle image DB (Database) 24a that stores images of the own vehicle 1, and a display image generation unit 23 that, when the determination unit 22 determines to issue a notification, generates notification information including a display image in which an image of the own vehicle 1 acquired from the own vehicle image DB 24a is superimposed on the blind spot image.
 The display control device 20 also includes a condition information DB (Database) 25 that stores condition information such as notification device information in which the vehicle interior/exterior information and the notification devices 31 are associated, notification pattern information such as processing information for the display image, and notification timing information. The display control device 20 further includes a condition giving unit 29 that gives the display image the condition information acquired from the condition information DB 25, and a control unit 26 that controls the notification device 31 associated in the condition information DB 25 to issue the notification information. Each component is described below.
 The vehicle interior/exterior information acquisition unit 27 acquires the vehicle interior/exterior information, including blind spot information or blind spot images, from the overall control ECU 10. The vehicle interior/exterior information is, for example, position information of the own vehicle 1, position information of the target person 3, line-of-sight information of the target person 3, blind spot information of the target person 3, a blind spot image of the target person 3, position information of the other vehicle 2, engine rotation speed information, engine ON/OFF switching information, gear change information from the shift lever, door opening/closing information, operation information of the own vehicle 1, steering angle information of the steering wheel, winker operation information, information on surrounding congestion, detection and recognition information of objects such as surrounding people and other vehicles, information about users inside the own vehicle 1, and the like. The vehicle interior/exterior information acquisition unit 27 corresponds to a line-of-sight information acquisition unit that acquires the line-of-sight information of the target person 3.
 Here, the blind spot information is information indicating the situation in the blind spot caused by the own vehicle 1 as seen from the target person 3 present outside the own vehicle 1. Specifically, the blind spot information is, for example, information from an ultrasonic sensor, a radar sensor, a millimeter-wave radar sensor, a LiDAR, an infrared laser sensor, or the like.
 The blind spot image is an image of the blind spot caused by the own vehicle 1 as seen from the target person 3 present outside the own vehicle 1. Specifically, the blind spot image is, for example, an image from an exterior camera or the like.
 The blind spot image DB 24b stores in advance images of objects, such as various other vehicles, viewed from various angles, as well as images of various backgrounds viewed from various angles. The object and background images are actual images such as photographs, illustrations, and the like, and serve as the image parts when generating a blind spot image from blind spot information.
 The blind spot image generation unit 28 generates a blind spot image from the blind spot information using the image parts stored in the blind spot image DB 24b. When the range of the blind spot image acquired by the vehicle interior/exterior information acquisition unit 27 is too wide, the blind spot image generation unit 28 extracts an appropriate range and uses it as the subsequent blind spot image. Specifically, for example, when the blind spot information is radar information or the like, the blind spot image generation unit 28 acquires image parts from the blind spot image DB 24b using the blind spot information and generates a blind spot image from those parts. Further, for example, when the image from the exterior camera covers a range wider than the blind spot, the blind spot image generation unit 28 extracts the blind spot image to match the blind spot.
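The range-extraction step above can be sketched in plain Python. This is a hedged illustration: the image is represented as a nested list of pixel values and the blind spot rectangle is a hypothetical example; a real implementation would operate on camera frames and a geometrically computed blind spot region.

```python
# Hypothetical sketch: the exterior camera covers more than the blind spot,
# so extract (crop) only the sub-region corresponding to the blind spot.

def crop_blind_spot(frame, region):
    """frame: 2D list of pixels (rows of values); region: (top, left, height, width)."""
    top, left, h, w = region
    return [row[left:left + w] for row in frame[top:top + h]]

# 4x6 camera frame; the blind spot is the 2x3 block starting at row 1, column 2.
frame = [[10 * r + c for c in range(6)] for r in range(4)]
blind = crop_blind_spot(frame, (1, 2, 2, 3))
print(blind)  # [[12, 13, 14], [22, 23, 24]]
```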
 The blind spot image acquisition unit 21 acquires the blind spot image acquired by the vehicle interior/exterior information acquisition unit 27 or the blind spot image generated by the blind spot image generation unit 28. The blind spot image shows the blind spot caused by the own vehicle 1 as seen from the target person 3 present outside the own vehicle 1.
 The determination unit 22 determines the state of the own vehicle 1, the traffic environment outside the own vehicle 1, and the like based on the vehicle interior/exterior information. For example, based on the gear change information, the determination unit 22 determines whether the own vehicle 1 remains stopped, is about to move forward, or is about to move backward. Further, for example, based on the detection and recognition information of objects such as surrounding people and vehicles, the determination unit 22 determines whether the target person is in a dangerous situation.
 The determination unit 22 determines whether to notify the target person of the notification information based on the determined state of the own vehicle 1, the traffic environment outside the own vehicle 1, and the like. Specifically, the determination unit 22 determines whether or not to display the display image to the target person 3. For example, in the case of FIG. 1, the determination unit 22 determines to notify the pedestrian 3 of the presence of the other vehicle 2.
 The own vehicle image DB 24a stores in advance images of the own vehicle 1 viewed from various angles. The images of the own vehicle 1 are actual images such as photographs, illustrations, and the like.
 When the determination unit 22 determines to issue a notification, the display image generation unit 23 generates notification information from the vehicle interior/exterior information. When generating the notification information, the display image generation unit 23 generates notification information including at least a display image in which the image of the own vehicle 1 acquired from the own vehicle image DB 24a is superimposed on the blind spot image.
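The superposition itself can be illustrated with a minimal sketch: a partially transparent overlay of the own-vehicle image onto the blind spot image, here on small grayscale pixel lists. The alpha-blending formula is an assumption for illustration; the patent does not prescribe a particular blending method.

```python
# Hypothetical sketch: overlay the own-vehicle image onto the blind spot image
# with partial transparency, so what is hidden in the blind spot stays visible
# through the own-vehicle silhouette.

def superimpose(blind_img, vehicle_img, alpha=0.5):
    """Blend two same-sized grayscale images: result = alpha*vehicle + (1-alpha)*blind."""
    return [
        [round(alpha * v + (1 - alpha) * b) for v, b in zip(vrow, brow)]
        for vrow, brow in zip(vehicle_img, blind_img)
    ]

blind = [[100, 100], [100, 100]]   # blind spot image (other vehicle visible here)
vehicle = [[200, 0], [0, 200]]     # own-vehicle image
print(superimpose(blind, vehicle))  # [[150, 50], [50, 150]]
```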
 The condition information DB 25 stores the condition information. Specifically, for example, the condition information DB 25 stores information on the display devices 32 to 35 and the speaker 36, and stores notification device information in which identifiers of the vehicle interior/exterior information are associated with identifiers of the notification devices 31. The condition information DB 25 also stores notification pattern information associated with the vehicle interior/exterior information. The notification pattern information is, for example, information such as the shape, color, size, brightness, and position of the image displayed on the display devices 32 to 35, and the volume, pitch, and type of the sound emitted from the speaker 36. Further, the condition information DB 25 stores notification timing information associated with the vehicle interior/exterior information. The notification timing information is, for example, information such as the timing and duration of notification by the notification device 31.
 The condition giving unit 29 determines, from the condition information DB 25, the notification device 31 to use, the notification pattern, the notification timing, the duration, and the like, and gives the condition information to the notification information including the display image. Specifically, for example, when the condition giving unit 29 acquires, as notification pattern information, processing information for reducing the image of the own vehicle in the display image, it processes the display image by reducing the image of the own vehicle.
 When the determination unit 22 determines to issue a notification, the control unit 26 controls the notification device 31 associated in the condition information DB 25 to issue the notification information including the display image. Specifically, when the determination unit 22 determines to display, the control unit 26 controls the display device associated in the condition information DB 25 to display the display image.
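Taken together, the determine / generate / give-conditions / display flow of the units described above can be sketched schematically as follows. The unit numbers in the comments refer back to the description; the stub functions, field names, and the scale condition are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical end-to-end sketch of the display control flow:
# determination unit 22 -> display image generation unit 23 ->
# condition giving unit 29 -> control unit 26.

def should_notify(info):
    """Determination unit 22 (stub): notify when another vehicle is in the blind spot."""
    return info.get("other_vehicle_in_blind_spot", False)

def generate_display_image(info):
    """Display image generation unit 23 (stub): own-vehicle image over blind spot image."""
    return {"blind_spot": info["blind_spot_image"], "overlay": "own_vehicle"}

def apply_conditions(image, info):
    """Condition giving unit 29 (stub): e.g. shrink the own-vehicle image."""
    image["scale"] = info.get("vehicle_image_scale", 1.0)
    return image

def run_display_control(info, displays):
    """info: vehicle interior/exterior information; displays: device_id -> image sink."""
    if not should_notify(info):
        return None
    image = generate_display_image(info)
    image = apply_conditions(image, info)
    displays[info["device_id"]](image)  # control unit 26 drives the display device
    return image

shown = []
result = run_display_control(
    {"other_vehicle_in_blind_spot": True, "blind_spot_image": "IMG",
     "vehicle_image_scale": 0.8, "device_id": 32},
    {32: shown.append},
)
print(result["scale"], len(shown))  # 0.8 1
```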
 Next, a part of the hardware configuration of the own vehicle 1 in Embodiment 1 will be described.
 FIG. 7 is a hardware configuration diagram of the in-vehicle system 100 of the own vehicle 1 according to Embodiment 1. The configuration of the display system 30 including the display control device 20 according to Embodiment 1 will be described with reference to FIG. 7.
 As an example, the in-vehicle system 100 includes a communication line 65, sensors 40, a communication device 60, the overall control ECU 10, a notification control ECU (Electronic Control Unit) 15, a driving device 70, and a notification device 80.
 通信線65は、各装置間及び機器間を電気的に接続し、データのやり取りを行う信号経路である。本実施の形態1では、CAN(Controller Area Network)を介してデータのやり取りを行う。なお、車載システム100の各装置をバスでつなげてもよい。車載システム100の装置間及び機器間は通信を介して接続されており、用いる通信については、有線通信でも無線通信でもよい。 The communication line 65 is a signal path that electrically connects the individual devices and units and over which data are exchanged. In the first embodiment, data are exchanged via a CAN (Controller Area Network). Alternatively, the devices of the in-vehicle system 100 may be connected by a bus. The devices and units of the in-vehicle system 100 are connected via communication, and the communication used may be either wired or wireless.
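As a rough illustration of how a sensor value might be carried over such a CAN-style link, the sketch below packs a vehicle speed into an 8-byte frame payload and parses it back. The frame ID 0x123 and the 0.01 km/h scaling are invented for illustration; real CAN signal layouts are defined per vehicle and are not given in the patent.

```python
import struct

SPEED_FRAME_ID = 0x123  # hypothetical ID; real IDs are vehicle-specific

def encode_speed_frame(speed_kmh):
    """Pack a speed (km/h) into an 8-byte CAN-style payload:
    a big-endian 16-bit value in units of 0.01 km/h plus 6 padding bytes."""
    return SPEED_FRAME_ID, struct.pack(">H6x", round(speed_kmh * 100))

def decode_speed_frame(frame_id, payload):
    """Recover the speed in km/h from a frame produced above."""
    if frame_id != SPEED_FRAME_ID:
        raise ValueError("unexpected frame ID")
    (raw,) = struct.unpack(">H6x", payload)
    return raw / 100.0
```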
 センサ40は、車速センサ41、舵角センサ42、アクセルセンサ43、ブレーキセンサ44、シフトセンサ45、ウィンカーセンサ46、ハザードセンサ47、ワイパーセンサ48、ライトセンサ49、ドア開閉センサ50、ドライバカメラ51、着座センサ52、加速度センサ53、角速度センサ54、GPS(Global Positioning System)デバイス55、ナビゲーションシステム56、車外カメラ57、車外センサ58、および照度センサ59等の自車両1の車内外情報を検出するセンサである。 The sensors 40 are sensors that detect information about the inside and outside of the own vehicle 1, such as a vehicle speed sensor 41, a steering angle sensor 42, an accelerator sensor 43, a brake sensor 44, a shift sensor 45, a winker sensor 46, a hazard sensor 47, a wiper sensor 48, a light sensor 49, a door open/close sensor 50, a driver camera 51, a seating sensor 52, an acceleration sensor 53, an angular velocity sensor 54, a GPS (Global Positioning System) device 55, a navigation system 56, a vehicle exterior camera 57, a vehicle exterior sensor 58, and an illuminance sensor 59.
 車速センサ41は、自車両1の速度を検出するセンサであり、車輪速に応じた電気信号(車速パルス)を全体制御ECU10に出力する。車輪速に応じた電気信号が車速情報となる。 The vehicle speed sensor 41 is a sensor that detects the speed of the own vehicle 1, and outputs an electric signal (vehicle speed pulse) according to the wheel speed to the overall control ECU 10. The electric signal according to the wheel speed becomes the vehicle speed information.
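The vehicle speed pulses mentioned here are typically converted to a speed value by counting pulses over a fixed interval; a minimal sketch follows. The pulses-per-revolution and tire-circumference values are illustrative assumptions, not values from the patent.

```python
def speed_from_pulses(pulse_count, interval_s,
                      pulses_per_rev=4, tire_circumference_m=1.9):
    """Estimate vehicle speed in km/h from wheel-speed pulses counted
    over `interval_s` seconds. Parameter defaults are assumptions for
    illustration; real values depend on the sensor and the tire."""
    revolutions = pulse_count / pulses_per_rev
    distance_m = revolutions * tire_circumference_m
    return distance_m / interval_s * 3.6  # m/s -> km/h
```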
 舵角センサ42は、自車両1の操舵角を検出するセンサであり、操舵角に応じた電気信号を全体制御ECU10に出力する。操舵角に応じた電気信号が、舵角情報となる。 The steering angle sensor 42 is a sensor that detects the steering angle of the own vehicle 1, and outputs an electric signal corresponding to the steering angle to the overall control ECU 10. The electric signal corresponding to the steering angle becomes the steering angle information.
 アクセルセンサ43は、自車両1のアクセルの開度すなわちアクセルペダルの操作量を検出するセンサであり、アクセルペダルの操作量情報を全体制御ECU10に出力する。 The accelerator sensor 43 is a sensor that detects the opening degree of the accelerator of the own vehicle 1, that is, the operation amount of the accelerator pedal, and outputs the operation amount information of the accelerator pedal to the overall control ECU 10.
 ブレーキセンサ44は、自車両1のブレーキペダルの操作量を検出するセンサであり、ブレーキペダルの操作量情報を全体制御ECU10に出力する。 The brake sensor 44 is a sensor that detects the operation amount of the brake pedal of the own vehicle 1, and outputs the operation amount information of the brake pedal to the overall control ECU 10.
 シフトセンサ45は、自車両1のシフトレバーの現在の状態または変化を検出するセンサであり、自車両1のユーザによるシフト変更等、シフトレバーの操作情報を全体制御ECU10に出力する。 The shift sensor 45 is a sensor that detects the current state or change of the shift lever of the own vehicle 1, and outputs the operation information of the shift lever such as the shift change by the user of the own vehicle 1 to the overall control ECU 10.
 ウィンカーセンサ46は、自車両1のウィンカー(方向指示器)の操作を検出するセンサであり、ユーザがウィンカーを操作した場合、ウィンカー操作指示の情報を全体制御ECU10に出力する。 The winker sensor 46 is a sensor that detects the operation of the winker (direction indicator) of the own vehicle 1, and when the user operates the winker, the information of the winker operation instruction is output to the overall control ECU 10.
 ハザードセンサ47は、自車両1のハザードスイッチの操作を検出するセンサであり、ユーザのハザードスイッチの操作を検出し、操作情報を全体制御ECU10に出力する。 The hazard sensor 47 is a sensor that detects the operation of the hazard switch of the own vehicle 1, detects the operation of the user's hazard switch, and outputs the operation information to the overall control ECU 10.
 ワイパーセンサ48は、自車両1のワイパー操作を検出するセンサであり、ユーザがワイパーを操作した場合、この操作指示情報を全体制御ECU10に出力する。 The wiper sensor 48 is a sensor that detects the wiper operation of the own vehicle 1, and when the user operates the wiper, this operation instruction information is output to the overall control ECU 10.
 ライトセンサ49は、ユーザの自車両1のライトレバーの操作を検出するセンサであり、ユーザのライトの操作情報を全体制御ECU10に出力する。 The light sensor 49 is a sensor that detects the user's operation of the light lever of the own vehicle 1, and outputs the user's light operation information to the overall control ECU 10.
 ドア開閉センサ50は、自車両1のドアの開閉を検出するセンサであり、ドアの開閉情報を全体制御ECU10に出力する。 The door open / close sensor 50 is a sensor that detects the open / close of the door of the own vehicle 1, and outputs the door open / close information to the overall control ECU 10.
 ドライバカメラ51は、自車両1のドライバ席に対向して設けられたカメラ(撮像装置)であり、ドライバ席に座るユーザを撮像する機能を有している。ドライバカメラ51は、ユーザの顔や上半身を撮像し、撮像した画像を全体制御ECU10に出力する。 The driver camera 51 is a camera (imaging device) provided facing the driver seat of the own vehicle 1 and has a function of photographing a user sitting in the driver seat. The driver camera 51 captures an image of the user's face and upper body, and outputs the captured image to the overall control ECU 10.
 着座センサ52は、自車両1のシートに設けられ、ユーザの着座状況を検出するセンサであり、例えば押圧センサ等によって実現される。着座センサ52は、ユーザが着座または離席すると、この情報を全体制御ECU10に出力する。着座センサ52は、シートに複数設けられていてもよく、この複数の押圧センサの情報に基づいて、全体制御ECU10はユーザの姿勢等を推定する。 The seating sensor 52 is provided in a seat of the own vehicle 1 and is a sensor that detects the seating state of the user; it is realized by, for example, a pressure sensor or the like. When the user sits down or leaves the seat, the seating sensor 52 outputs this information to the overall control ECU 10. A plurality of seating sensors 52 may be provided in the seat, and the overall control ECU 10 estimates the posture of the user and the like based on the information from the plurality of pressure sensors.
 加速度センサ53は、自車両1の加速度を検出するセンサであり、例えば3軸加速度センサにより構成される。加速度センサ53は、自車両1の加速度情報を全体制御ECU10に出力する。 The acceleration sensor 53 is a sensor that detects the acceleration of the own vehicle 1, and is composed of, for example, a 3-axis acceleration sensor. The acceleration sensor 53 outputs the acceleration information of the own vehicle 1 to the overall control ECU 10.
 角速度センサ54は、自車両1の角速度(ジャイロ)を検出するセンサであり、この角速度情報に基づいて全体制御ECU10は自車両1の旋回速度等を検出する。角速度センサ54は、検出した角速度情報を全体制御ECU10に出力する。 The angular velocity sensor 54 is a sensor that detects the angular velocity (gyro) of the own vehicle 1, and the overall control ECU 10 detects the turning speed and the like of the own vehicle 1 based on this angular velocity information. The angular velocity sensor 54 outputs the detected angular velocity information to the overall control ECU 10.
 GPSデバイス55は、グローバルポジショニングシステム(Global Positioning System)を用いて人工衛星が発信する電波を利用し、自車両1の位置情報を検出する装置であり、自車両1の位置情報である座標の情報を全体制御ECU10およびナビゲーションシステム56に出力する。 The GPS device 55 is a device that detects the position information of the own vehicle 1 using radio waves transmitted from artificial satellites of the Global Positioning System, and outputs coordinate information, which is the position information of the own vehicle 1, to the overall control ECU 10 and the navigation system 56.
 ナビゲーションシステム56は、地図情報を有しており、自車両1の位置情報と地図情報に基づいて、自車両1の目的地への推奨経路を算出する機能を有している。また、ナビゲーションシステム56は、通信機能を有しており、渋滞情報や通行止め情報等の外部情報をサーバ(不図示)より取得し、これらに基づいた推奨経路を算出しても良い。または、ナビゲーションシステム56は、自車両1の位置情報および目的地情報等をサーバに送る機能を有しており、サーバ側で推奨経路を算出し、この推奨経路の情報を受信するシステムとして構成されていてもよい。ナビゲーションシステム56は、算出された経路情報を全体制御ECU10に出力する。 The navigation system 56 holds map information and has a function of calculating a recommended route to the destination of the own vehicle 1 based on the position information of the own vehicle 1 and the map information. The navigation system 56 also has a communication function, and may acquire external information such as traffic congestion information and road closure information from a server (not shown) and calculate a recommended route based on this information. Alternatively, the navigation system 56 may be configured as a system that has a function of sending the position information, destination information, and the like of the own vehicle 1 to a server, with the recommended route calculated on the server side and the navigation system 56 receiving the information on this recommended route. The navigation system 56 outputs the calculated route information to the overall control ECU 10.
 車外カメラ57は、自車両1の車外を撮像するために設けられたカメラ(撮像装置)である。車外カメラ57は、例えば自車両1の前方、後方および左右にそれぞれ設けられており、それぞれの撮像画像を全体制御ECU10に出力する。全体制御ECU10は、この入力された撮像画像に基づいて、対象者の検出や認識、また他車両と障害物等のオブジェクトの検出や認識を実行することができる。 The vehicle exterior camera 57 is a camera (imaging device) provided to capture images of the outside of the own vehicle 1. Vehicle exterior cameras 57 are provided, for example, at the front, rear, left, and right of the own vehicle 1, and each outputs its captured image to the overall control ECU 10. Based on these input captured images, the overall control ECU 10 can detect and recognize target persons, as well as detect and recognize objects such as other vehicles and obstacles.
 車外センサ58は、自車両1の車外の周囲の物体を検出することができるセンサであり、例えば超音波センサ、レーダセンサ、ミリ波レーダセンサ、ライダー、赤外線レーザーセンサ等により構成される。車外センサ58は、この検出情報を全体制御ECU10に出力する。全体制御ECU10は、車外センサ58から入力される車外の物体の検出情報に基づいて、その物体との距離情報や、物体の位置情報を検出する。距離情報及び位置情報の検出は、本実施の形態1のように全体制御ECU10が検出するものとしてもよいし、車外センサ58が検出した情報に基づいて距離情報及び位置情報を算出し、全体制御ECU10にこの情報を出力するとしてもよい。 The vehicle exterior sensor 58 is a sensor capable of detecting objects around the outside of the own vehicle 1, and is composed of, for example, an ultrasonic sensor, a radar sensor, a millimeter-wave radar sensor, a LiDAR, an infrared laser sensor, or the like. The vehicle exterior sensor 58 outputs this detection information to the overall control ECU 10. Based on the detection information about an object outside the vehicle input from the vehicle exterior sensor 58, the overall control ECU 10 detects information on the distance to the object and the position of the object. The distance information and position information may be detected by the overall control ECU 10 as in the first embodiment, or the vehicle exterior sensor 58 may calculate the distance information and position information from the information it detects and output them to the overall control ECU 10.
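For instance, an ultrasonic sensor of the kind listed above derives distance from the round-trip time of an echo; the calculation reduces to the following sketch. The speed-of-sound value (roughly 343 m/s at 20 °C) is an assumption for illustration.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate value at 20 degrees C; an assumption

def distance_from_echo(round_trip_s):
    """Distance to an object from an ultrasonic echo's round-trip time.
    The pulse travels to the object and back, hence the division by 2."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0
```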
 照度センサ59は、自車両1の車外に向けて設けられており、車外の照度(明るさ)を検出するセンサである。照度センサ59は、検出した照度の情報を全体制御ECU10に出力する。 The illuminance sensor 59 is provided toward the outside of the own vehicle 1 and is a sensor that detects the illuminance (brightness) outside the vehicle. The illuminance sensor 59 outputs the detected illuminance information to the overall control ECU 10.
 通信装置60は、自車両1の車外に存在する別の通信機器と無線通信を行う。通信装置60は、特定の周波数帯域を用いて、他車両との車車間通信、路側機との路車間通信、または人が持つスマートフォン等の通信可能な電子機器との通信を行う。通信装置60が行う通信は、特定の決められた周波数帯を用いた独自の通信、車内通信機器と外部通信機器との間の通信を実行するために規格化された通信規格を用いての通信であってもよいし、例えば無線LAN(Local Area Network)、Bluetooth(登録商標)、Zigbee(登録商標)等の既存の通信規格を用いての通信であってもよい。 The communication device 60 performs wireless communication with other communication equipment present outside the own vehicle 1. Using a specific frequency band, the communication device 60 performs vehicle-to-vehicle communication with other vehicles, road-to-vehicle communication with roadside units, or communication with communicable electronic devices such as a smartphone carried by a person. The communication performed by the communication device 60 may be proprietary communication using a specific predetermined frequency band, communication using a communication standard defined for communication between in-vehicle communication equipment and external communication equipment, or communication using an existing communication standard such as wireless LAN (Local Area Network), Bluetooth (registered trademark), or Zigbee (registered trademark).
 通信装置60は、アンテナ61と、送信部62と、受信部63とを備えている。通信装置60は、アンテナ61を介して送信部62より別の通信機器に無線信号を送信し、アンテナ61を介して受信部63より別の通信機器からの周囲の混雑状況情報等の無線信号を受信する。 The communication device 60 includes an antenna 61, a transmitting unit 62, and a receiving unit 63. The communication device 60 transmits wireless signals from the transmitting unit 62 to other communication equipment via the antenna 61, and receives, at the receiving unit 63 via the antenna 61, wireless signals from other communication equipment, such as information on surrounding congestion conditions.
 例えば、全体制御ECU10は、車外カメラ57、車外センサ58等から自車両1の周囲に他車両2が存在しているかどうかの情報、及び周囲の他車両2の色、形状、型番等の特徴の情報を取得し、GPSデバイス55、ナビゲーションシステム56、通信装置60等から自車両1の位置情報、速度規定、直線道路と車線数と曲線道路等の道路の形状情報とを取得し、車外カメラ57から死角情報または死角画像を含む自車両1の周囲の状況を取得する。表示システム30は、それらの情報を全体制御ECU10から取得する。なお、センサ40と通信装置60とのうち少なくとも一方は、死角情報または死角画像を取得し、全体制御ECU10に死角情報または死角画像を送信する。 For example, the overall control ECU 10 acquires, from the vehicle exterior camera 57, the vehicle exterior sensor 58, and the like, information on whether another vehicle 2 exists around the own vehicle 1 and feature information such as the color, shape, and model number of surrounding other vehicles 2; acquires, from the GPS device 55, the navigation system 56, the communication device 60, and the like, the position information of the own vehicle 1, speed limits, and road shape information such as straight roads, the number of lanes, and curved roads; and acquires, from the vehicle exterior camera 57, the situation around the own vehicle 1 including blind spot information or a blind spot image. The display system 30 acquires this information from the overall control ECU 10. At least one of the sensor 40 and the communication device 60 acquires the blind spot information or the blind spot image and transmits it to the overall control ECU 10.
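The aggregated information the overall control ECU 10 is described as collecting and handing to the display system 30 could be modeled as a single record, along the following lines. The field names and types are our own assumptions, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleContext:
    """Hypothetical aggregate of the information described in the text;
    field names and types are illustrative assumptions."""
    position: tuple                       # (lat, lon) from GPS device 55
    speed_limit_kmh: int                  # from navigation system 56 / communication device 60
    lane_count: int                       # road shape information
    nearby_vehicles: list = field(default_factory=list)  # features of other vehicles 2
    blind_spot_image: bytes = b""         # from vehicle exterior camera 57

ctx = VehicleContext(position=(35.68, 139.77), speed_limit_kmh=60, lane_count=2)
ctx.nearby_vehicles.append({"color": "red", "shape": "sedan", "model": "XYZ-1"})
```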
 全体制御ECU10は、自車両1の車両全体を制御する機能を有するECUである。センサ40から検出した情報を取得し、この情報に基づいて自車両1の各部を適切に動作させるように指示や情報を送ることで自車両1の車両全体の制御を実行する。全体制御ECU10は、プロセッサ11と、メモリ12とを備えている。 The overall control ECU 10 is an ECU having a function of controlling the entire vehicle of the own vehicle 1. The overall control ECU 10 acquires the information detected by the sensor 40 and, based on this information, controls the entire own vehicle 1 by sending instructions and information so that each part of the own vehicle 1 operates appropriately. The overall control ECU 10 includes a processor 11 and a memory 12.
 プロセッサ11は、メモリ12に格納されるプログラムを読み込み実行することで、全体制御ECU10において計算処理を実行するCPU(Central Processing Unit)である。具体的には、プロセッサ11は、メモリ12に記憶したOS(Operating System)の少なくとも一部をロードし、OSを実行しながら、プログラムを実行する。プロセッサ11は、通信線65を介して各装置及び機器と接続し、各装置及び機器を制御する。プロセッサ11は、プロセッシングを行うIC(Integrated Circuit)であればいいので、計算処理回路、電気回路、コントローラ、中央処理装置、演算装置、マイクロプロセッサ、マイクロコンピュータ、DSP(Digital Signal Processor)等、またはこれらを組み合わせたものであってもよい。 The processor 11 is a CPU (Central Processing Unit) that executes calculation processing in the overall control ECU 10 by reading and executing a program stored in the memory 12. Specifically, the processor 11 loads at least a part of the OS (Operating System) stored in the memory 12 and executes the program while running the OS. The processor 11 is connected to each device and unit via the communication line 65 and controls each device and unit. The processor 11 only needs to be an IC (Integrated Circuit) that performs processing, and may therefore be a calculation processing circuit, an electric circuit, a controller, a central processing unit, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like, or a combination thereof.
 なお、全体制御ECU10の様々な機能を別のプロセッサで実現してもよいし、様々な機能をまとめて1つのプロセッサで実現しても良い。 The various functions of the overall control ECU 10 may be realized by separate processors, or the various functions may be collectively realized by a single processor.
 メモリ12は、ソフトウェア、ファームウェア、またはソフトウェアとファームウェアの組み合わせが記述されたプログラム、OS及び各種情報等を記憶する。メモリ12は、例えば、RAM(Random Access Memory)、ROM(Read Only Memory)、フラッシュメモリ、EPROM(Erasable Programmable Read Only Memory)、EEPROM(Electrically Erasable Programmable Read Only Memory)、HDD(Hard Disk Drive)、SSD(Solid State Drive)等の不揮発性または揮発性の半導体メモリ、磁気ディスク、フレキシブルディスク、光ディスク、コンパクトディスク、ミニディスク、DVD(Digital Versatile Disc)等である。 The memory 12 stores software, firmware, or a program in which a combination of software and firmware is described, an OS, various information, and the like. The memory 12 is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), an HDD (Hard Disk Drive), or an SSD (Solid State Drive), or a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
 なお、メモリ12は、各部の機能をそれぞれ別々のメモリで実現してもよいし、各部の機能をまとめて1つのメモリで実現してもよい。 The memory 12 may realize the functions of each part in separate memories, or may collectively realize the functions of each part in one memory.
 全体制御ECU10は、例えば、メモリ12として、1以上のプログラムを格納する不揮発性記憶装置であるROMと、プロセッサ11がプログラムと各種情報との展開領域として用いる揮発性記憶装置であるRAMを有しており、プロセッサ11はROMからプログラムを読み出し、読み出したプログラムをRAM上に展開して計算処理を実行する。 The overall control ECU 10 has, for example, a ROM as a memory 12 which is a non-volatile storage device for storing one or more programs, and a RAM which is a volatile storage device used by the processor 11 as an expansion area for programs and various information. The processor 11 reads a program from the ROM, expands the read program on the RAM, and executes the calculation process.
 全体制御ECU10の機能は、ハードウェア、ソフトウェア、ファームウェア、またはハードウェアとソフトウェアとファームウェアとの組み合わせにより実現できる。全体制御ECU10の各部の機能は、一部を専用のハードウェアで実現し、一部をソフトウェアまたはファームウェアで実現するようにしても良い。例えば、全体制御ECU10の一部は専用のハードウェアとしての処理回路でその機能を実現し、残りの部分はCPUがメモリに格納されたプログラムを読み出して実行することによってその機能を実現してもよい。 The functions of the overall control ECU 10 can be realized by hardware, software, firmware, or a combination of hardware, software, and firmware. The functions of the respective parts of the overall control ECU 10 may be partly realized by dedicated hardware and partly realized by software or firmware. For example, a part of the overall control ECU 10 may realize its functions by a processing circuit as dedicated hardware, while the remaining part realizes its functions by a CPU reading and executing a program stored in a memory.
 運転装置70は、自車両1を運転するための装置である。運転装置70は、ヘッドライトドライバ71と、ヘッドライト72と、エンジン73と、変速機74と、ブレーキアクチュエータ75と、ステアリングアクチュエータ76と、ウィンカー77と、HUD(Head-Up Display)78とを備えている。 The driving device 70 is a device for driving the own vehicle 1. The driving device 70 includes a headlight driver 71, a headlight 72, an engine 73, a transmission 74, a brake actuator 75, a steering actuator 76, a winker 77, and a HUD (Head-Up Display) 78.
 ヘッドライトドライバ71は、ヘッドライト72を駆動させる駆動装置である。ヘッドライトドライバ71は、全体制御ECU10からの指示に基づいて、ヘッドライト72を駆動させ、ヘッドライト72の点灯、消灯、ハイビームとロービームの切り替え等の動作を制御する。 The headlight driver 71 is a drive device that drives the headlight 72. Based on instructions from the overall control ECU 10, the headlight driver 71 drives the headlight 72 and controls operations such as turning the headlight 72 on and off and switching between high beam and low beam.
 ヘッドライト72は、自車両1の車体の前側に設けられ、車体の前方に光を照射する照射装置である。ヘッドライト72は、自車両1の車体の前側の左右にそれぞれ設けられており、ライトから照射された光を導く導光部の構造、または、複数のライトの切り替え等により、より遠くを照らすハイビームと、ハイビームより近距離を照らすロービームとを切り替えることができる。 The headlights 72 are irradiation devices provided at the front of the vehicle body of the own vehicle 1 that irradiate light ahead of the vehicle body. The headlights 72 are provided on the left and right of the front of the vehicle body of the own vehicle 1, and can be switched between a high beam that illuminates farther and a low beam that illuminates a shorter distance than the high beam, by means of the structure of a light guide that directs the light emitted from the lamp, by switching among a plurality of lamps, or the like.
 エンジン73は、自車両1を駆動させる動力を発生させる内燃機関である。エンジン73は、ガソリン等の燃料を燃焼させることで、車輪を回転させるための動力を発生させる。エンジン73は、全体制御ECU10からの指示に基づいて動作する。 The engine 73 is an internal combustion engine that generates power to drive the own vehicle 1. The engine 73 generates power for rotating the wheels by burning fuel such as gasoline. The engine 73 operates based on an instruction from the overall control ECU 10.
 変速機74は、歯車や軸等によって構成され、動力を車輪へと伝達する機能を有している。変速機74は、全体制御ECU10からの指示に基づいてギアを変更することで、自車両1の車輪に伝達するトルクを変更する。 The transmission 74 is composed of gears, shafts, etc., and has a function of transmitting power to the wheels. The transmission 74 changes the torque transmitted to the wheels of the own vehicle 1 by changing the gear based on the instruction from the overall control ECU 10.
 ブレーキアクチュエータ75は、自車両1を減速させるためのブレーキ(減速機)を動作させる機構である。ブレーキアクチュエータ75は、全体制御ECU10からの指示に基づいてブレーキを動作させ、自車両1を減速させる。 The brake actuator 75 is a mechanism for operating a brake (reducer) for decelerating the own vehicle 1. The brake actuator 75 operates the brake based on the instruction from the overall control ECU 10 to decelerate the own vehicle 1.
 ステアリングアクチュエータ76は、自車両1の車輪の方向を変更し、自車両1の進行方向を制御するステアリング(操舵装置)を動作させる機構である。ステアリングアクチュエータ76は、全体制御ECU10からの指示に基づいてステアリングを制御し、自車両1の進行方向を制御する。 The steering actuator 76 is a mechanism that changes the direction of the wheels of the own vehicle 1 and operates the steering (steering device) that controls the traveling direction of the own vehicle 1. The steering actuator 76 controls the steering based on the instruction from the overall control ECU 10, and controls the traveling direction of the own vehicle 1.
 ウィンカー77は、自車両1の進行方向を発光によって車外に示すための方向指示器である。ウィンカー77は、全体制御ECU10からの指示に基づいて点滅し、車外に自車両1の進行方向を示す。 The winker 77 is a direction indicator for indicating the traveling direction of the own vehicle 1 to the outside of the vehicle by light emission. The winker 77 blinks based on an instruction from the overall control ECU 10, and indicates the traveling direction of the own vehicle 1 to the outside of the vehicle.
 HUD78は、自車両1のフロントガラスに重畳して設けられた透過型の映像表示装置である。HUD78は、全体制御ECU10からの指示に基づいて種々の映像を表示する。HUD78は、映像を表示することで自車両1の車内のユーザに様々な情報を提示する。なお、HUD78は、ナビゲーションシステム56のディスプレイとしてもよい。 The HUD 78 is a transmissive image display device provided so as to be superimposed on the windshield of the own vehicle 1. The HUD 78 displays various images based on instructions from the overall control ECU 10. By displaying images, the HUD 78 presents various information to the user inside the own vehicle 1. The HUD 78 may also serve as the display of the navigation system 56.
 報知制御ECU15は、自車両1の報知装置80を制御する機能を有するECUである。報知制御ECU15は、全体制御ECU10から自車両1の車内外情報を取得し、取得した車内外情報に基づいて自車両1の状況及び状態を判定し、対象者に対して報知装置80によって死角画像を含む報知情報を報知させる機能を有する装置である。報知制御ECU15は、プロセッサ16と、メモリ17とを備えている。 The notification control ECU 15 is an ECU having a function of controlling the notification device 80 of the own vehicle 1. The notification control ECU 15 is a device that acquires the vehicle interior/exterior information of the own vehicle 1 from the overall control ECU 10, determines the situation and state of the own vehicle 1 based on the acquired information, and has a function of causing the notification device 80 to notify the target person of notification information including the blind spot image. The notification control ECU 15 includes a processor 16 and a memory 17.
 プロセッサ16は、メモリ17に格納されるプログラムを読み込み実行することで、報知制御ECU15において計算処理を実行するCPU(Central Processing Unit)である。具体的には、プロセッサ16は、メモリ17に記憶したOS(Operating System)の少なくとも一部をロードし、OSを実行しながら、プログラムを実行する。プロセッサ16は、通信線65を介して各装置及び機器と接続し、各装置及び機器を制御する。プロセッサ16は、プロセッシングを行うIC(Integrated Circuit)であればいいので、計算処理回路、電気回路、中央処理装置、演算装置、マイクロプロセッサ、マイクロコンピュータ、DSP(Digital Signal Processor)等、またはこれらを組み合わせたものであってもよい。 The processor 16 is a CPU (Central Processing Unit) that executes calculation processing in the notification control ECU 15 by reading and executing a program stored in the memory 17. Specifically, the processor 16 loads at least a part of the OS (Operating System) stored in the memory 17 and executes the program while running the OS. The processor 16 is connected to each device and unit via the communication line 65 and controls each device and unit. The processor 16 only needs to be an IC (Integrated Circuit) that performs processing, and may therefore be a calculation processing circuit, an electric circuit, a central processing unit, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like, or a combination thereof.
 図2における表示制御装置20が備える車内外情報取得部27と、死角画像生成部28と、死角画像取得部21と、判定部22と、表示画像生成部23と、条件付与部29と、制御部26とは、プロセッサ16がメモリ17にロードしたプログラムを読み込み、実行することにより実現する。 The vehicle interior/exterior information acquisition unit 27, the blind spot image generation unit 28, the blind spot image acquisition unit 21, the determination unit 22, the display image generation unit 23, the condition giving unit 29, and the control unit 26 included in the display control device 20 in FIG. 2 are realized by the processor 16 reading and executing a program loaded into the memory 17.
 なお、報知制御ECU15の様々な機能を別のプロセッサで実現してもよいし、様々な機能をまとめて1つのプロセッサで実現しても良い。 The various functions of the notification control ECU 15 may be realized by separate processors, or the various functions may be collectively realized by a single processor.
 メモリ17は、ソフトウェア、ファームウェア、またはソフトウェアとファームウェアの組み合わせが記述されたプログラム、OS及び各種情報等を記憶する。メモリ17は、例えば、RAM(Random Access Memory)、ROM(Read Only Memory)、フラッシュメモリ、EPROM(Erasable Programmable Read Only Memory)、EEPROM(Electrically Erasable Programmable Read Only Memory)、HDD(Hard Disk Drive)、SSD(Solid State Drive)等の不揮発性または揮発性の半導体メモリ、磁気ディスク、フレキシブルディスク、光ディスク、コンパクトディスク、ミニディスク、DVD(Digital Versatile Disc)等である。 The memory 17 stores software, firmware, or a program in which a combination of software and firmware is described, an OS, various information, and the like. The memory 17 is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), an HDD (Hard Disk Drive), or an SSD (Solid State Drive), or a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
 図2における表示制御装置20が備える車内外情報取得部27と、死角画像生成部28と、死角画像取得部21と、判定部22と、表示画像生成部23と、条件付与部29と、制御部26とは、メモリ17に記憶するプログラムによって実現する。また、死角画像DB24bと、自車両画像DB24aと、条件情報DB25とは、メモリ17によって実現する。 The vehicle interior/exterior information acquisition unit 27, the blind spot image generation unit 28, the blind spot image acquisition unit 21, the determination unit 22, the display image generation unit 23, the condition giving unit 29, and the control unit 26 included in the display control device 20 in FIG. 2 are realized by a program stored in the memory 17. Further, the blind spot image DB 24b, the own vehicle image DB 24a, and the condition information DB 25 are realized by the memory 17.
 なお、メモリ17は、各部の機能をそれぞれ別々のメモリで実現してもよいし、各部の機能をまとめて1つのメモリで実現してもよい。 The memory 17 may realize the functions of each part in separate memories, or may collectively realize the functions of each part in one memory.
 報知制御ECU15は、例えば、メモリ17として、1以上のプログラムを格納する不揮発性記憶装置であるROMと、プロセッサ16がプログラムと各種情報との展開領域として用いる揮発性記憶装置であるRAMを有しており、プロセッサ16はROMからプログラムを読み出し、読み出したプログラムをRAM上に展開して計算処理を実行する。 The notification control ECU 15 has, for example, a ROM as a memory 17 which is a non-volatile storage device for storing one or more programs, and a RAM which is a volatile storage device used by the processor 16 as an expansion area for programs and various information. The processor 16 reads a program from the ROM, expands the read program on the RAM, and executes the calculation process.
 報知制御ECU15の機能は、ハードウェア、ソフトウェア、ファームウェア、またはハードウェアとソフトウェアとファームウェアとの組み合わせにより実現できる。報知制御ECU15の各部の機能は、一部を専用のハードウェアで実現し、一部をソフトウェアまたはファームウェアで実現するようにしても良い。例えば、報知制御ECU15の一部は専用のハードウェアとしての処理回路でその機能を実現し、残りの部分はCPUがメモリに格納されたプログラムを読み出して実行することによってその機能を実現してもよい。 The functions of the notification control ECU 15 can be realized by hardware, software, firmware, or a combination of hardware, software, and firmware. The functions of the respective parts of the notification control ECU 15 may be partly realized by dedicated hardware and partly realized by software or firmware. For example, a part of the notification control ECU 15 may realize its functions by a processing circuit as dedicated hardware, while the remaining part realizes its functions by a CPU reading and executing a program stored in a memory.
 報知装置80は、報知制御ECU15から少なくとも表示画像を含む報知情報を受信し、対象者に少なくとも表示画像を表示することを含む報知情報を報知する。報知装置80は、外部ライトデバイス81、プロジェクタデバイス86、ボディライトデバイス89、サウンドデバイス92等である。ただし、報知装置80は、外部ライトデバイス81、プロジェクタデバイス86、ボディライトデバイス89等の画像が表示できるデバイスを少なくとも1つ備える。なお、報知装置80は、図2の報知装置31に対応する。 The notification device 80 receives notification information including at least a display image from the notification control ECU 15, and notifies the target person of the notification information including displaying at least the display image. The notification device 80 is an external light device 81, a projector device 86, a body light device 89, a sound device 92, and the like. However, the notification device 80 includes at least one device capable of displaying an image, such as an external light device 81, a projector device 86, and a body light device 89. The notification device 80 corresponds to the notification device 31 in FIG.
 外部ライトデバイス81は、自車両1の車体から外部に向けて取り付けられた照射装置である。外部ライトデバイス81は、路面や近くの壁面に光を照射し、自車両1の周囲の自車両1の外に存在する対象者に自車両1の状態、現在または今後の自車両1の動作の意図、自車両1の今後の動作の予告、または警告等を報知する。外部ライトデバイス81は、これらの報知情報を自車両1の外に存在する対象者に報知するために、これに適した形状、色、位置、大きさ、輝度、タイミング、時間等で光を路面等に照射する。外部ライトデバイス81は、外部ライトドライバ82と、外部ライトセット83とを備えている。 The external light device 81 is an irradiation device mounted on the vehicle body of the own vehicle 1 so as to face outward. The external light device 81 irradiates light onto the road surface or a nearby wall surface to notify target persons outside the own vehicle 1, in its surroundings, of the state of the own vehicle 1, the intention of the current or future operation of the own vehicle 1, advance notice of future operations of the own vehicle 1, warnings, or the like. To convey this notification information to target persons outside the own vehicle 1, the external light device 81 irradiates the road surface or the like with light of a shape, color, position, size, brightness, timing, duration, and the like suited to the information. The external light device 81 includes an external light driver 82 and an external light set 83.
 外部ライトドライバ82は、外部ライトセット83を駆動させる駆動装置である。外部ライトドライバ82は、外部ライトセット83に所定の光を照射させるように制御する。外部ライトドライバ82は、外部ライトセット83の中の外部ライト84と外部ライト85との照射タイミングや照射時間等の組み合わせを制御する。また、外部ライトドライバ82は、外部ライト84と外部ライト85とに設けられたカラーフィルタ、シェード、導光機構等を動作させ、所定の自車両1の車体からの相対位置に、所定形状の、所定の色の、所定の大きさの、所定の輝度の光を所定のタイミング、所定の時間に照射する。 The external light driver 82 is a drive device that drives the external light set 83. The external light driver 82 controls the external light set 83 so that it emits predetermined light. The external light driver 82 controls combinations such as the irradiation timing and irradiation duration of the external light 84 and the external light 85 in the external light set 83. Further, the external light driver 82 operates color filters, shades, light guide mechanisms, and the like provided on the external light 84 and the external light 85, and causes light of a predetermined shape, predetermined color, predetermined size, and predetermined brightness to be irradiated at a predetermined position relative to the vehicle body of the own vehicle 1, at a predetermined timing and for a predetermined duration.
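The per-irradiation parameters the external light driver 82 is described as controlling (shape, color, position, size, brightness, timing, duration) could be grouped into a single command object, for example as below. All names, units, and the sample values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IrradiationCommand:
    """Hypothetical parameter set for one external-light irradiation,
    mirroring the attributes listed in the text; names and units are
    illustrative assumptions."""
    shape: str           # e.g. "arrow", "line"
    color: str
    offset_m: tuple      # position relative to the vehicle body (x, y) in meters
    size_m: tuple        # projected width and length in meters
    brightness: float    # normalized 0.0 - 1.0
    start_s: float       # timing: delay before irradiation starts
    duration_s: float    # irradiation duration

cmd = IrradiationCommand("arrow", "white", (1.5, 0.0), (0.5, 1.0), 0.8, 0.0, 2.0)
```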
 外部ライトセット83は、複数の外部ライト(照射装置)を備えており、具体的には、本実施の形態1では、外部ライト84と、外部ライト85とを備えている。外部ライト84と、外部ライト85は、外部ライトドライバ82に基づいて点灯する。 The external light set 83 includes a plurality of external lights (irradiation devices); specifically, in the first embodiment, it includes an external light 84 and an external light 85. The external light 84 and the external light 85 are lit under the control of the external light driver 82.
 プロジェクタデバイス86は、自車両1の車体から外部向けて取り付けられた映像投射装置である。プロジェクタデバイス86は、路面や近くの壁面に光を照射し、自車両1の周囲の自車両1の外に存在する対象者に自車両1の状態、現在または今後の自車両1の動作の意図、自車両1の今後の動作の予告、または警告等を報知する。プロジェクタデバイス86は、これらの報知情報を自車両1の外に存在する対象者に伝えるために、これに適した形状、色、位置、大きさ、輝度、タイミング、時間等で光を路面等に照射(投影)する。プロジェクタデバイス86は、プロジェクタドライバ87と、プロジェクタ88とを備えている。 The projector device 86 is an image projection device mounted on the vehicle body of the own vehicle 1 so as to face outward. The projector device 86 irradiates light onto the road surface or a nearby wall surface to notify target persons outside the own vehicle 1, in its surroundings, of the state of the own vehicle 1, the intention of the current or future operation of the own vehicle 1, advance notice of future operations of the own vehicle 1, warnings, or the like. To convey this notification information to target persons outside the own vehicle 1, the projector device 86 irradiates (projects) light onto the road surface or the like with a shape, color, position, size, brightness, timing, duration, and the like suited to the information. The projector device 86 includes a projector driver 87 and a projector 88.
 The projector driver 87 is a drive device that drives the projector 88 and causes the projector 88 to emit predetermined light. The projector driver 87 has a function of controlling the shape, color, position, size, brightness, timing, duration, and the like of the light emitted by the projector 88.
 The projector 88 is an irradiation (projection) device that irradiates (projects) light (an image) to the outside of the own vehicle 1. Based on the operation of the projector driver 87, the projector 88 projects the light (image) onto a road surface, a wall surface, or the like outside the vehicle.
 The body light device 89 is a light emitting device such as a display provided on the body of the own vehicle 1, or a device that glows in a line along the contour of the vehicle body. The body light device 89 notifies persons outside the own vehicle 1, in its surroundings, of the state of the own vehicle 1, the intention of its current or upcoming operation, an advance notice of its upcoming operation, a warning, or the like. The body light device 89 emits light of a predetermined shape, color, size, and brightness at a predetermined position on the body surface of the own vehicle 1, at a predetermined timing and for a predetermined duration. The body light device 89 includes a body light driver 90 and a body light 91.
 The body light driver 90 is a drive device that drives the body light 91 and causes the body light 91 to emit predetermined light. The body light driver 90 controls the shape, color, position, size, brightness, timing, duration, and the like of the light emitted by the body light 91.
 The body light 91 is a light emitting device arranged so that the light it emits appears on the outer surface of the body of the own vehicle 1. In the first embodiment, the body light 91 consists of an LCD (Liquid Crystal Display) and LEDs (Light Emitting Diodes); the light emitted by the LEDs passes through the LCD, so that light of a predetermined shape, color, position, size, and brightness is emitted to the outside of the own vehicle 1. Although the first embodiment shows an example in which the body light 91 consists of an LCD and LEDs, other light emitting or display devices may be used, such as an organic EL (Electroluminescence) monitor, dot LEDs, a liquid crystal monitor, an EL panel, or a rear-projection display. As for the mounting locations, in the first embodiment the display device 32 is provided on the front of the own vehicle 1, the display device 33 on the rear, the display device 34 on the left side, and the display device 35 on the right side; however, the display may instead be provided on a window glass or the like.
 The display devices 32 to 35 of the notification device 31 in FIG. 2 are realized by the external light device 81, the projector device 86, the body light device 89, or a combination of the external light device 81, the projector device 86, and the body light device 89. In the first embodiment, each of the display devices 32 to 35 is assumed to be a body light device 89.
 The sound device 92 is an acoustic device provided on the body of the own vehicle 1. The sound device 92 notifies persons outside the own vehicle 1, in its surroundings, of the state of the own vehicle 1, the intention of its current or upcoming operation, an advance notice of its upcoming operation, a warning, or the like. The sound device 92 outputs a predetermined sound of a predetermined volume at a predetermined position on the body surface of the own vehicle 1, at a predetermined timing and for a predetermined duration. The sound device 92 includes a sound driver 93 and sound equipment 94.
 The sound driver 93 is a drive device that drives the sound equipment 94, causing the sound equipment 94 to output a predetermined sound of a predetermined volume at a predetermined position, at a predetermined timing and for a predetermined duration. The sound equipment 94 is a device that generates sound from the body of the own vehicle 1 toward persons outside the own vehicle 1. The speaker 36 of the notification device 31 in FIG. 2 is realized by the sound device 92.
 Next, the operation of the display system 30 will be described, using the situation shown in FIG. 1 as a specific example.
 FIG. 8 is a flowchart showing the operation of the display system 30 according to the first embodiment. The operation of the display system 30 is described below with reference to FIG. 8.
 In step S1, the vehicle interior/exterior information acquisition unit 27 acquires, from the overall control ECU 10, vehicle interior/exterior information including blind spot information or a blind spot image. Specifically, in the first embodiment, the vehicle interior/exterior information acquisition unit 27 acquires, as the vehicle interior/exterior information, the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot information or blind spot image of the target person 3.
 In step S2, the vehicle interior/exterior information acquisition unit 27 determines whether the blind spot information or blind spot image included in the vehicle interior/exterior information can be used as a blind spot image as it is. If the vehicle interior/exterior information acquisition unit 27 determines that it can, the result of step S2 is Yes. This is the case, for example, when the information is a blind spot image acquired from an exterior camera that requires no extraction and already matches the line of sight of the target person 3. Note that if the blind spot information or blind spot image can be used as a blind spot image as it is, the information included in the vehicle interior/exterior information is itself a blind spot image. The vehicle interior/exterior information acquisition unit 27 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image to the blind spot image acquisition unit 21, and the process proceeds to step S4.
 On the other hand, if the vehicle interior/exterior information acquisition unit 27 determines that the blind spot information or blind spot image included in the vehicle interior/exterior information cannot be used as a blind spot image as it is, the result of step S2 is No. This is the case, for example, when the blind spot information is radar information of the blind spot, when the blind spot image is an exterior-camera image covering a range wider than the blind spot, or when the image does not match the line of sight of the target person 3. The vehicle interior/exterior information acquisition unit 27 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot information or blind spot image of the target person 3 to the blind spot image generation unit 28, and the process proceeds to step S3.
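The branch in step S2 can be sketched as a small routing function. This is an illustrative sketch only, not the patented implementation; the `BlindSpotInfo` record and its field names (`kind`, `matches_gaze`, `covers_only_blind_spot`) are hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class BlindSpotInfo:
    # Hypothetical record for the blind spot information contained in the
    # vehicle interior/exterior information. All field names are assumptions.
    kind: str                     # "camera_image" or "radar"
    matches_gaze: bool            # already matched to the target person's line of sight?
    covers_only_blind_spot: bool  # no wider than the blind spot itself?

def step_s2_usable_as_is(info: BlindSpotInfo) -> bool:
    """Step S2: the information can be used directly as a blind spot image
    only if it is a camera image that matches the target person's line of
    sight and covers exactly the blind spot (no extraction needed)."""
    return (info.kind == "camera_image"
            and info.matches_gaze
            and info.covers_only_blind_spot)

def route(info: BlindSpotInfo) -> str:
    # Yes -> step S4 (blind spot image acquisition unit 21)
    # No  -> step S3 (blind spot image generation unit 28)
    return "S4" if step_s2_usable_as_is(info) else "S3"
```

Radar information, an over-wide camera image, or a gaze mismatch all take the No branch, which is why step S3 must handle generation, extraction, and viewpoint correction separately.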
 In step S3, the blind spot image generation unit 28 receives, from the vehicle interior/exterior information acquisition unit 27, the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot information or blind spot image of the target person 3. The blind spot image generation unit 28 generates or extracts a blind spot image from the blind spot information or blind spot image of the target person 3.
 Specifically, for example, when the blind spot information included in the vehicle interior/exterior information is radar information of the blind spot, the blind spot image generation unit 28 generates a blind spot image from the radar information using image parts stored in the blind spot image DB 24b. When the blind spot image is an exterior-camera image covering a range wider than the blind spot, the blind spot image generation unit 28 extracts the portion corresponding to the blind spot. When the blind spot information or blind spot image does not match the line of sight of the target person 3, the blind spot image generation unit 28 generates a blind spot image matched to the line of sight of the target person 3. Conventional techniques may be used for this generation or extraction. The blind spot image generation unit 28 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image to the blind spot image acquisition unit 21, and the process proceeds to step S4.
 In step S4, the blind spot image acquisition unit 21 acquires the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image from the vehicle interior/exterior information acquisition unit 27 or the blind spot image generation unit 28. The blind spot image acquisition unit 21 transmits these to the determination unit 22, and the process proceeds to step S5.
 In step S5, the determination unit 22 receives the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image from the blind spot image acquisition unit 21. The determination unit 22 stores predetermined range information; in the first embodiment, the range is within 5 meters of the own vehicle 1. The determination unit 22 determines whether the target person 3 is within this predetermined range of 5 meters from the own vehicle 1. Specifically, for example, the determination unit 22 makes this determination from the position information of the own vehicle 1 and the position information of the target person 3, or from an image covering the predetermined range within 5 meters of the own vehicle 1. Conventional techniques may be used for this determination.
 If the determination unit 22 determines that the target person 3 is within the predetermined range of 5 meters from the own vehicle 1, the result of step S5 is Yes and the process proceeds to step S6. If the determination unit 22 determines that the target person 3 is not within this range, the result of step S5 is No and the operation ends.
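When position information is available for both the own vehicle 1 and the target person 3, the range judgment of step S5 reduces to a distance comparison. A minimal sketch, assuming planar coordinates in meters (the patent leaves the concrete method to conventional techniques):

```python
import math

# Predetermined range information held by the determination unit 22
# (5 meters in the first embodiment).
RANGE_M = 5.0

def step_s5_in_range(vehicle_pos, person_pos, range_m=RANGE_M):
    """Step S5: is the target person within the predetermined range of
    the own vehicle? Positions are (x, y) coordinates in meters."""
    dx = person_pos[0] - vehicle_pos[0]
    dy = person_pos[1] - vehicle_pos[1]
    return math.hypot(dx, dy) <= range_m
```

A Yes result hands control to step S6; a No result ends the operation without any notification.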
 In step S6, the determination unit 22 determines whether an object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3. Specifically, for example, the determination unit 22 applies conventional image recognition to judge whether an object such as the other vehicle 2 appears in the blind spot image. Alternatively, when the display system 30 acquires the position information of the other vehicle 2 and map information, and position information is associated with the blind spot image, the determination unit 22 determines from the position information of the other vehicle 2, the map information, and the blind spot image whether an object such as the other vehicle 2 exists in the blind spot. As another alternative, when the display system 30 acquires the position information of the other vehicle 2 and map information, the determination unit 22 makes the determination from the position information of the other vehicle 2, the map information, the position information of the own vehicle 1, and the position information of the target person 3. Conventional techniques may be used for this determination.
 If the determination unit 22 determines that an object such as the other vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3, the result of step S6 is Yes. The determination unit 22 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image to the display image generation unit 23, and the process proceeds to step S7. If the determination unit 22 determines that no such object exists in the blind spot, the result of step S6 is No and the operation ends.
 In step S7, the display image generation unit 23 receives the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image from the determination unit 22. Using these, the display image generation unit 23 acquires from the own vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight of the target person 3, and the process proceeds to step S8.
 In step S8, the display image generation unit 23 generates a display image in which the image of the own vehicle 1 is superimposed on the blind spot image. The display image generation unit 23 generates the display image so that it appears from the same position and with the same line of sight as when the target person 3 actually looks at the own vehicle 1. The display image generation unit 23 also changes the magnification of the display image to one that is easy to see according to the distance between the target person 3 and the own vehicle 1. Conventional techniques such as machine learning may be used to generate the display image.
 FIG. 9 shows an example of the display image 7 according to the first embodiment before the image 8 of the own vehicle 1 is processed. In the first embodiment, the display image generation unit 23 generates the display image 7 by superimposing the image 8 of the own vehicle 1 on the blind spot image, which is an image of the other vehicle 2. The blind spot image may include not only the other vehicle 2 but also further vehicles, objects such as people, a background image, and so on. At the stage of step S8, the blind spot image, which is the image of the other vehicle 2, is hidden behind the image 8 of the own vehicle 1 and is not visible.
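The superimposition of step S8 can be sketched as a straightforward overlay: opaque pixels of the own-vehicle image replace the corresponding pixels of the blind spot image, which is why the blind spot content is invisible at this stage. The images here are plain 2-D lists with `None` marking transparent pixels; this is an illustration of the compositing idea only, not the machine-learning-based generation the patent allows.

```python
def superimpose(blind_spot_img, vehicle_img, top=0, left=0):
    """Step S8 (sketch): overlay vehicle_img on blind_spot_img at (top, left).
    Pixels equal to None in vehicle_img are transparent, so the blind spot
    image remains visible through them; opaque pixels hide it."""
    out = [row[:] for row in blind_spot_img]   # copy the background
    for r, row in enumerate(vehicle_img):
        for c, px in enumerate(row):
            if px is not None:                 # opaque vehicle pixel
                out[top + r][left + c] = px
    return out
```

With an entirely opaque vehicle image, the overlay hides the blind spot image completely, matching the FIG. 9 state; the processing of step S11 is what later makes the blind spot content visible.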
 Returning to FIG. 8, the display image generation unit 23 transmits the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the display image to the condition giving unit 29, and the process proceeds to step S9.
 In step S9, the condition giving unit 29 receives the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the display image from the display image generation unit 23. Using these, the condition giving unit 29 acquires from the condition information DB 25 the notification device information identifying the notification device 31 (including the display device) that should perform the notification, together with the notification pattern information and the notification timing information.
 The condition giving unit 29 acquires from the condition information DB 25 the notification device information, notification pattern information, and notification timing information corresponding to the identifiers of the vehicle interior/exterior information, namely the identifier of the position information of the own vehicle 1, the identifier of the position information of the target person 3, the identifier of the line-of-sight information of the target person 3, and the identifier of the display image. Alternatively, the condition giving unit 29 acquires, by machine learning, the notification device information, notification pattern information, and notification timing information corresponding to the identifiers of the position information of the own vehicle 1, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image. Conventional techniques may be used for this acquisition. Note that the notification device information, notification pattern information, and notification timing information need not form a single set; there may be one set or a plurality of sets.
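In the identifier-based (non-machine-learning) variant, the lookup of step S9 amounts to keying the condition information DB by the identifier tuple and returning zero or more notification sets. A minimal table-driven sketch; the identifier strings and the stored (device, pattern, timing) triples are hypothetical values invented here:

```python
# Hypothetical contents of the condition information DB 25: identifier
# tuples of the vehicle interior/exterior information map to one or more
# sets of (notification device, notification pattern, notification timing).
CONDITION_DB = {
    ("veh_pos_A", "person_pos_B", "gaze_C", "img_D"): [
        ("display_32", "shrink_vehicle_image", "immediate"),
    ],
}

def step_s9_lookup(veh_pos_id, person_pos_id, gaze_id, image_id):
    """Step S9 (sketch): return the notification sets for the given
    identifiers; there may be one set, several, or none."""
    return CONDITION_DB.get((veh_pos_id, person_pos_id, gaze_id, image_id), [])
```

Returning a list reflects the note above that one identifier combination may map to a single set or to a plurality of sets.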
 In the first embodiment, assume that the condition giving unit 29 acquires from the condition information DB 25 notification device information corresponding to the identifier of the display device 32 provided on the front of the own vehicle 1, notification pattern information specifying that the image of the own vehicle 1 is to be reduced to make the blind spot image easier to see, and notification timing information specifying that the display device 32 starts the notification immediately; the process then proceeds to step S10.
 In step S10, the condition giving unit 29 determines whether the notification pattern information requires processing of the display image. The display image is processed when, for example, the notification pattern information specifies reducing the image of the own vehicle 1 to make the blind spot image easier to see, applying transparency to the image of the own vehicle 1, rendering the image as if seen from a slightly shifted viewpoint, or rendering only the outline of the own vehicle 1 as a solid or dotted line. If the notification pattern information requires processing of the display image, the result of step S10 is Yes and the process proceeds to step S11. In the first embodiment, the notification pattern information specifies reducing the image of the own vehicle 1 to make the blind spot image easier to see, so the condition giving unit 29 determines that the display image is to be processed; the result of step S10 is Yes and the process proceeds to step S11.
 On the other hand, when the notification pattern information does not require processing of the display image, for example when it causes the control unit 26 to emphasize the display image by blinking the frame of the display device 32 in red, or to sound an alarm together with the display, the condition giving unit 29 determines that the display image is not to be processed, and the result of step S10 is No. The condition giving unit 29 transmits the notification device information, the notification pattern information, the notification timing information, and the display image to the control unit 26, and the process proceeds to step S12.
 In step S11, the condition giving unit 29 processes the display image. Specifically, in the first embodiment, since the condition giving unit 29 has acquired from the condition information DB 25 notification pattern information specifying that the image of the own vehicle 1 is to be reduced to make the blind spot image easier to see, it processes the image of the own vehicle 1. Using this notification pattern information, the condition giving unit 29 reduces the image of the own vehicle 1, matched to the line of sight of the target person 3, that was acquired from the own vehicle image DB 24a. For example, the condition giving unit 29 reduces the image of the own vehicle 1 based on a reduction ratio assigned in advance to the notification pattern information, or determines the reduction ratio by machine learning and reduces the image accordingly. Conventional techniques may be used to determine the reduction ratio.
 FIG. 10 shows an example of the display image 7 according to the first embodiment after the image 8 of the own vehicle 1 has been processed. Because the condition giving unit 29 has reduced the image 8 of the own vehicle 1, the blind spot image, which is the image 9 of the other vehicle 2, becomes easy to see.
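The reduction of step S11 can be sketched as nearest-neighbour downscaling of the own-vehicle image before it is re-superimposed; shrinking the foreground leaves more of the blind spot image visible around it, as in FIG. 10. The fixed reduction ratio here stands in for the value that, per the text above, may be pre-assigned to the notification pattern information or chosen by machine learning.

```python
def shrink(image, ratio):
    """Step S11 (sketch): nearest-neighbour downscale of a 2-D pixel list
    by the given reduction ratio (0 < ratio <= 1). Shrinking the own
    vehicle image exposes the blind spot image around it."""
    h = max(1, int(len(image) * ratio))
    w = max(1, int(len(image[0]) * ratio))
    return [[image[int(r / ratio)][int(c / ratio)] for c in range(w)]
            for r in range(h)]
```

A ratio of 1.0 leaves the image unchanged; a ratio of 0.5 halves each dimension, so three quarters of the previously covered blind spot pixels become visible after re-superimposition.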
 Returning to FIG. 8, the condition giving unit 29 transmits the notification device information, the notification timing information, and the display image to the control unit 26. In the first embodiment, the condition giving unit 29 transmits to the control unit 26 the notification device information corresponding to the identifier of the display device 32, the notification timing information specifying that the display device 32 starts the notification immediately, and the display image; the process then proceeds to step S12.
 In the first embodiment, the condition giving unit 29 acquires only the notification pattern information specifying that the image of the own vehicle 1 is to be reduced to make the blind spot image easier to see; however, it may also acquire, together with that information, notification pattern information specifying, for example, that an alarm sound is to be issued along with the display. In such a case, the condition giving unit 29 transmits the notification pattern information to the control unit 26 together with the notification device information, the notification timing information, and the display image.
 Further, regardless of whether the notification pattern information requires processing of the display image, the condition giving unit 29 may, for example, acquire from the condition information DB 25 notification device information that is the identifier of the speaker 36 provided on the front of the own vehicle 1, notification pattern information for the sound of the speaker 36, and notification timing information specifying that the speaker 36 starts the notification immediately. In such a case, the condition giving unit 29 generates sound information, which is notification information to be output from the speaker 36, and transmits it to the control unit 26. The notification information may be a display image, sound information, or the like, and may be singular or plural.
In step S12, the control unit 26 receives the notification device information, the notification pattern information, the notification timing information, and the display information from the condition giving unit 29. If necessary, the control unit 26 also receives from the condition giving unit 29 notification pattern information and notification information other than the display image, such as sound information. In the first embodiment, the control unit 26 receives the notification device information that is the identifier of the display device 32, notification timing information indicating that the display device 32 is to start notification immediately, and the display image. Using the notification device information that is the identifier of the display device 32, the notification timing information indicating that the display device 32 is to start notification immediately, and the display image processed in step S11, the control unit 26 performs control so that the display image starts to be displayed on the display device 32 as soon as the information is received.
FIG. 11 is an example of a view of the own vehicle 1 as seen from the target person 3 according to the first embodiment. Specifically, FIG. 11 is a view from the direction of the dotted arrow 5 in FIG. 1, that is, the view when the target person 3 looks at the own vehicle 1. The display device 32 of the own vehicle 1 immediately starts displaying the display image 7 under the control of the control unit 26.
FIG. 12 is an example of a view of the own vehicle 1 according to the first embodiment as seen from directly in front. Specifically, FIG. 12 is a view from the direction of the dotted arrow 6 in FIG. 1, that is, the view when the own vehicle 1 is seen from directly in front. As in FIG. 11, the display device 32 of the own vehicle 1 immediately starts displaying the display image 7 under the control of the control unit 26. However, since the display image 7 is an image of the scene as seen by the target person 3 looking at the own vehicle 1, the display image 7 differs from the scenery seen when the own vehicle 1 is viewed from directly in front.
Returning to FIG. 8, when the control unit 26 receives sound information for the speaker 36 as notification information, the sound notification may be issued together with the display. For example, when the other vehicle 2 is approaching the target person 3, the control unit 26 controls the speaker 36 so that the running sound of the approaching other vehicle 2 is emphasized and output. Further, when the control unit 26 receives notification pattern information that causes the frame of the display device 32 to blink in red, the control unit 26 may control the display device so that the frame of the display device 32 blinks in red together with the display, causing the display device to present the display image in an emphasized manner.
After the display device 32 displays the display image under the control of the control unit 26, the display system 30 ends its operation.
When step S5 results in No, when step S6 results in No, or after step S12 has been executed, the process returns to step S1, and the above processing is repeated until there is a trigger to end the processing, such as the power being turned off or an end operation being performed. Each step may also be repeated individually as an independent process.
As described above, in the display system 30 of the first embodiment, the display image generation unit 23 generates a display image in which the blind spot image is superimposed on the image of the own vehicle 1, and the control unit 26 displays the display image on the display device provided on the own vehicle 1. Therefore, when displaying the blind spot image, the display system 30 of the first embodiment can show the target person 3 intuitively and clearly which range the blind spot image covers and what its positional relationship is.
Further, in the first embodiment, because the condition giving unit 29 reduces the image 8 of the own vehicle 1, the blind spot image, which is the image 9 of the other vehicle 2, becomes easier to see, and the visibility of objects such as the other vehicle 2 present in the blind spot can be increased.
In the first embodiment, the notification pattern information is notification pattern information for reducing the image of the own vehicle 1 to make the blind spot image easier to see, and the condition giving unit 29 performs processing to reduce the image of the own vehicle 1. However, the condition giving unit 29 may instead perform other processing, for example: applying transparency to the image of the own vehicle 1 according to notification pattern information specifying such transparency; rendering the image of the own vehicle 1 as if viewed from a slightly shifted viewpoint according to notification pattern information specifying such a shift; or rendering only the outline of the own vehicle 1 as a solid or dotted line according to notification pattern information specifying such an outline. Even in these cases, the same effect as that of the first embodiment can be obtained.
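The own-vehicle-image processing variants named above can be sketched as alternative transformations of a simple layer description. This is a minimal sketch under stated assumptions: the specification does not define a layer structure, so the dictionary keys ("scale", "alpha", "fill", "outline_style") and the numeric values are illustrative, not values from the patent.

```python
def process_own_vehicle_layer(layer: dict, pattern: str) -> dict:
    """Return a copy of the own-vehicle layer modified per the notification pattern."""
    out = dict(layer)  # leave the original layer untouched
    if pattern == "shrink":          # Embodiment 1: reduce the own-vehicle image
        out["scale"] = 0.7
    elif pattern == "transparent":   # alternative: apply transparency
        out["alpha"] = 0.4
    elif pattern == "outline_only":  # alternative: solid or dotted outline only
        out["fill"] = False
        out["outline_style"] = "dotted"
    return out

# e.g. the shrink variant used in Embodiment 1
shrunk = process_own_vehicle_layer({"scale": 1.0, "alpha": 1.0, "fill": True}, "shrink")
```

Keeping each variant as a named pattern mirrors the text's design, in which the condition information DB 25 selects which processing the condition giving unit 29 applies.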
In the first embodiment, the display system 30 first generates the display image, after which the condition giving unit 29 acquires from the condition information DB 25 the notification device information of the notification device 31 to be used for notification, including the display device, together with the notification pattern information and the notification timing information, and then processes the image of the own vehicle 1. However, the display system 30 may instead generate the display image after the condition giving unit 29 has acquired these items from the condition information DB 25 and processed the image of the own vehicle 1. Even in this case, the same effect as that of the first embodiment can be obtained.
Embodiment 2.
In the first embodiment, the notification pattern information is notification pattern information for reducing the image of the own vehicle 1 to make the blind spot image easier to see, and the condition giving unit 29 reduces the image of the own vehicle 1 in the display image. In the second embodiment, the notification pattern information is notification pattern information for enlarging the image of the other vehicle 2 to make the blind spot image easier to see, and the condition giving unit 29 enlarges the image of the other vehicle 2 in the blind spot image. By performing this processing, when the blind spot image is displayed, the system can show the target person intuitively and clearly which range the blind spot image covers, and can increase the visibility of objects such as the other vehicle 2 present in the blind spot. The second embodiment is otherwise the same as the first embodiment. In the following description, configurations and operations already described in the first embodiment are given the same reference numerals, and duplicate descriptions are omitted.
In the second embodiment, as in the first embodiment, the case where the present disclosure is applied to the situation shown in FIG. 1 will be described below as an example.
The block diagram of the display system 30 including the display control device 20 according to the second embodiment of the present disclosure is the same as FIG. 2 of the first embodiment. The hardware configuration diagram of the in-vehicle system 100 of the own vehicle 1 according to the second embodiment is also the same as FIG. 7.
Next, the operation of the display system 30 according to the second embodiment will be described.
FIG. 13 is a flowchart showing the operation of the display system 30 according to the second embodiment of the present disclosure. The operation of the display system 30 will be described below with reference to FIG. 13.
Steps S1 to S8 are the same as steps S1 to S8 of the first embodiment.
In step S19, in the second embodiment, the condition giving unit 29 acquires from the condition information DB 25 not the notification pattern information for reducing the image of the own vehicle 1 to make the blind spot image easier to see, but notification pattern information for enlarging the image of the other vehicle 2 to make the blind spot image easier to see. Otherwise, step S19 is the same as step S9 of the first embodiment.
In step S20, in the second embodiment, since the notification pattern information is notification pattern information for enlarging the image of the other vehicle 2 to make the blind spot image easier to see, the condition giving unit 29 determines that the notification pattern information is for processing the display image; the result of step S20 is Yes, and the process proceeds to step S21. Otherwise, step S20 is the same as step S10 of the first embodiment.
In step S21, in the second embodiment, since the condition giving unit 29 has acquired from the condition information DB 25 the notification pattern information for enlarging the image of the other vehicle 2 to make the blind spot image easier to see, it processes the image of the other vehicle 2. Using this notification pattern information, the condition giving unit 29 enlarges the image of the other vehicle 2, which is the blind spot image. For example, the condition giving unit 29 enlarges the image of the other vehicle 2 based on a magnification assigned in advance to the notification pattern information. Alternatively, the condition giving unit 29 determines the magnification by machine learning and enlarges the image of the other vehicle 2 based on that magnification. A conventional technique may be used as the method by which the condition giving unit 29 determines the magnification.
In the second embodiment, the condition giving unit 29 acquires only the notification pattern information for enlarging the image of the other vehicle 2 to make the blind spot image easier to see, but in some cases it may acquire, together with that information, notification pattern information for, for example, sounding an alarm along with the display. In such a case, the condition giving unit 29 transmits the notification pattern information to the control unit 26 together with the notification device information, the notification timing information, and the display information.
In this way, because the condition giving unit 29 has enlarged the image of the other vehicle 2, the blind spot image, which is the image of the other vehicle 2, becomes easier to see. Otherwise, step S21 is the same as step S11 of the first embodiment.
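The enlargement in step S21 can be sketched geometrically as scaling the other vehicle's region about its own center and clamping the result to the display bounds. This is an illustrative sketch only: the specification does not fix a coordinate convention, so the (x0, y0, x1, y1) bounding-box representation, the display dimensions, and the magnification value are assumptions; a real magnification would come from the notification pattern information or from a learned model.

```python
def enlarge_box(box, magnification, display_w, display_h):
    """Scale a bounding box (x0, y0, x1, y1) about its center by the given magnification."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w = (x1 - x0) / 2 * magnification
    half_h = (y1 - y0) / 2 * magnification
    # clamp to the display so the enlarged vehicle region stays fully on screen
    return (max(0.0, cx - half_w), max(0.0, cy - half_h),
            min(float(display_w), cx + half_w), min(float(display_h), cy + half_h))

# e.g. a 100x50 vehicle region enlarged 1.5x on a hypothetical 1920x540 display
print(enlarge_box((200, 100, 300, 150), 1.5, 1920, 540))  # (175.0, 87.5, 325.0, 162.5)
```

Scaling about the center keeps the enlarged other vehicle anchored at the same position in the blind spot image, so the positional relationship the display is meant to convey is preserved.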
Step S12 is the same as step S12 of the first embodiment.
When step S5 results in No, when step S6 results in No, or after step S12 has been executed, the process returns to step S1, and the above processing is repeated until there is a trigger to end the processing, such as the power being turned off or an end operation being performed. Each step may also be repeated individually as an independent process.
As described above, in the display system 30 of the second embodiment, the display image generation unit 23 generates a display image in which the blind spot image is superimposed on the image of the own vehicle 1, and the control unit 26 displays the display image on the display device provided on the own vehicle 1. Therefore, when displaying the blind spot image, the display system 30 of the second embodiment can show the target person 3 intuitively and clearly which range the blind spot image covers and what its positional relationship is.
Further, in the second embodiment, because the condition giving unit 29 enlarges the image of the other vehicle 2, the blind spot image, which is the image of the other vehicle 2, becomes easier to see, and the visibility of objects such as the other vehicle 2 present in the blind spot can be increased.
In the second embodiment, the notification pattern information is notification pattern information for enlarging the image of the other vehicle 2 to make the blind spot image easier to see, and the condition giving unit 29 performs processing to enlarge the image of the other vehicle 2, which is the blind spot image. However, the condition giving unit 29 may instead perform other processing, for example: enlarging at least one portion of the blind spot image according to notification pattern information specifying such enlargement; or shifting the position of at least one portion of the blind spot image so as to reduce its overlap with the image of the own vehicle 1, according to notification pattern information specifying such a shift. Even in these cases, the same effect as that of the second embodiment can be obtained. The condition giving unit 29 may also perform processing that emphasizes the image of the other vehicle 2, which is the blind spot image, such as attaching an arrow to the image of an object such as the other vehicle 2, emphasizing the outline of such an object with a conspicuous color such as red or yellow, or changing the color of the object.
In the second embodiment, the display system 30 first generates the display image, after which the condition giving unit 29 acquires from the condition information DB 25 the notification device information of the notification device 31 to be used for notification, including the display device, together with the notification pattern information and the notification timing information, and then processes the image of the other vehicle 2, which is the blind spot image. However, the display system 30 may instead generate the display image after the condition giving unit 29 has acquired these items from the condition information DB 25 and processed the image of the other vehicle 2. Even in this case, the same effect as that of the second embodiment can be obtained.
In the first and second embodiments, as an example in which the control unit 26 causes the display device to present the display image in an emphasized manner, an example was shown in which, when the control unit 26 receives notification pattern information that causes the frame of the display device 32 to blink in red, the control unit 26 controls the frame of the display device 32 to blink in red together with the display. However, in order to cause the display device to emphasize the display image, the control unit 26 may blink at least one portion of the display image; blinking the frame of the display device 32 in red together with the display is one example of blinking at least one portion of the display image. By causing the display device to emphasize the display image in this way, such as by blinking the frame of the display device 32 or blinking at least one portion of the display image, the attention of the target person 3 can be attracted more strongly.
In the first and second embodiments, in step S5, the determination unit 22 determines whether the target person 3 is present within the predetermined range of the own vehicle 1, and determines that the display image is to be displayed to the target person 3 when the target person 3 is present within that range. In this way, the display system 30 does not display when there is no target person 3, which contributes to energy saving. When there is no target person 3, instead of displaying the display image, the display system 30 may display text for communicating with a target person 3, such as "Please go ahead", "Thank you", or "Stopping temporarily", may display advertisements, or, when the own vehicle 1 is a taxi, may display reserved or vacant status. Further, when the own vehicle 1 is parked or stopped, the display system 30 may display on each display device the video of the side opposite that display device, in order to eliminate the blind spot caused by the own vehicle 1.
In the first and second embodiments, in step S6, the determination unit 22 determines whether an object such as the other vehicle 2 is present in the blind spot of the own vehicle 1 as seen from the target person 3, and determines that the display image is to be displayed to the target person 3 when such an object is present. In this way, the target person 3 can, from the fact that the display image appears on the display device, anticipate some change in the traffic environment in the direction of the own vehicle 1. Specifically, for example, since the target person 3 can directly recognize the start of the display as a warning about the other vehicle 2, the target person 3 can grasp the situation more quickly.
In the first and second embodiments, the display system 30 performs additional processing based on the notification pattern information, such as adjusting the shape, color, size, brightness, and position of the images displayed on the display devices 32 to 35, and the volume, pitch, and type of the sound output by the speaker 36. This can make an impact on both the sight and hearing of the target person 3, so the target person 3 can grasp the situation more quickly.
In the first and second embodiments, when the notification pattern information includes notification pattern information that causes the frame of the display device 32 to blink in red, the control unit 26 causes the frame of the display device 32 to blink in red after the display image generation unit 23 generates the display image. However, the notification pattern information may instead specify, for example, blinking the image of an object such as the other vehicle 2 in the blind spot image, and in that case the control unit 26 may blink the image of the object such as the other vehicle 2 in the blind spot image after the display image generation unit 23 generates the display image.
In the first and second embodiments, the display image generation unit 23 acquires an image of the own vehicle 1 matched to the line of sight of the target person 3, superimposes the image of the own vehicle 1 on the blind spot image matched to the line of sight of the target person 3, and generates the display image. In this way, the display system 30 can show the target person 3 more intuitively which range the blind spot image covers. However, when the vehicle interior/exterior information acquisition unit 27 can acquire the eye height information of the target person 3 from the overall control ECU 10, the display image generation unit 23 may use the position information of the own vehicle 1, which is vehicle interior/exterior information, the position information of the target person 3, the line-of-sight information of the target person 3, the eye height information of the target person 3, and the blind spot image to acquire from the own-vehicle image DB 24a an image of the own vehicle 1 that is matched to the line of sight of the target person 3 and viewed from the eye height of the target person 3. The display image generation unit 23 may then superimpose the image of the own vehicle 1 on a blind spot image matched to the line of sight of the target person 3 and viewed from the eye height of the target person 3, and generate a display image matched to the line of sight of the target person 3 and viewed from the eye height of the target person 3. Here, the vehicle interior/exterior information acquisition unit 27 corresponds to the eye height information acquisition unit.
Specifically, for example, suppose the own-vehicle image DB 24a stores images of the own vehicle 1 for each line of sight of the target person 3 and each eye height of the target person 3. The display image generation unit 23 uses the position information of the own vehicle 1, which is vehicle interior/exterior information, the position information of the target person 3, the line-of-sight information of the target person 3, the eye height information of the target person 3, and the blind spot image to acquire from the own-vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight of the target person 3 and to the eye height of the target person 3.
Alternatively, suppose the own-vehicle image DB 24a stores images of the own vehicle 1 for each line of sight of the target person 3 and for each attribute of the target person 3, such as male, female, or child. The display image generation unit 23 determines the attribute from the eye height information of the target person 3, and uses the position information of the own vehicle 1, which is vehicle interior/exterior information, the position information of the target person 3, the line-of-sight information of the target person 3, the blind spot image, and the attribute information to acquire from the own-vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight of the target person 3 and to the eye height of the target person 3.
Alternatively, the display image generation unit 23 may acquire from the own-vehicle image DB 24a an image of the own vehicle 1 matched to the line of sight of the target person 3, using the position information of the own vehicle 1, which is vehicle interior/exterior information, the position information of the target person 3, the line-of-sight information of the target person 3, and the blind spot image, and may then process the acquired image of the own vehicle 1 using the eye height information of the target person 3. By using an image of the own vehicle 1 matched not only to the line of sight but also to the eye height of the target person 3 in this way, the target person 3 can recognize the notification information more easily. For example, when a display image from an adult male's viewpoint is displayed on the display device, it may be hard for a child to recognize, because it is a view from a height the child has never experienced; taking the eye height into account makes it easier to recognize.
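The attribute-based variant described above can be sketched as a lookup keyed by gaze direction and an eye-height bucket. This is a hedged sketch under stated assumptions: the specification does not give a height threshold or key format, so the 1.3 m child/adult boundary, the string keys, and the file names are all hypothetical.

```python
def height_to_attribute(eye_height_m: float) -> str:
    """Classify the target person from eye height (illustrative threshold)."""
    if eye_height_m < 1.3:
        return "child"
    return "adult"

def lookup_vehicle_image(db: dict, gaze: str, eye_height_m: float):
    """Fetch the own-vehicle image for a (gaze, attribute) pair from a DB-24a-like store."""
    key = (gaze, height_to_attribute(eye_height_m))
    # fall back to the adult image for that gaze when no height-specific one exists
    return db.get(key, db.get((gaze, "adult")))

db = {("front_left", "adult"): "img_fl_adult.png",
      ("front_left", "child"): "img_fl_child.png"}
print(lookup_vehicle_image(db, "front_left", 1.1))  # img_fl_child.png
```

Bucketing by attribute keeps the DB small compared with storing one image per exact eye height, which matches the male/female/child grouping suggested in the text.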
In the first and second embodiments, the line-of-sight information of the target person 3 is acquired by the vehicle interior/exterior information acquisition unit 27 from the overall control ECU 10, but the determination unit 22 may instead calculate the line-of-sight information of the target person 3 from the position information of the own vehicle 1 and the position information of the target person 3, on the assumption that the target person 3 is looking at the own vehicle 1.
In the first and second embodiments, the determination unit 22 determines from the position information of the own vehicle 1 and the position information of the target person 3 whether the target person 3 is present within the predetermined range of 5 meters from the own vehicle 1, but the range need not be 5 meters; any range, such as 3 meters or 10 meters, may be used. Further, although the range was predetermined as within 5 meters of the own vehicle 1, the shape of the range may be anything, such as a 5-meter radius around the display device 32 or a predetermined rectangle surrounding the own vehicle 1.
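The two range shapes mentioned above can be sketched as simple geometric membership tests. A minimal sketch, assuming a vehicle-centered coordinate frame in meters; the 5 m radius mirrors the embodiments, and the rectangle half-dimensions are illustrative values, since the text deliberately leaves both the size and the shape configurable.

```python
import math

def in_circle(target_xy, center_xy=(0.0, 0.0), radius_m=5.0) -> bool:
    """Range as a radius around a reference point (the vehicle or display device 32)."""
    return math.dist(target_xy, center_xy) <= radius_m

def in_rectangle(target_xy, half_w_m=3.0, half_l_m=6.0) -> bool:
    """Range as a predetermined rectangle surrounding the own vehicle."""
    x, y = target_xy
    return abs(x) <= half_w_m and abs(y) <= half_l_m
```

Either predicate could serve as the step-S5 check; the determination unit 22 would simply be configured with the shape and size that suit the installation.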
 実施の形態1~2において、ステップS5では、判定部22は、自車両1の位置情報と、対象者3の位置情報とから、予め決められた自車両1から5メートル以内の範囲に対象者3が存在するか否か判定した。また、ステップS6では、判定部22は、対象者3から見た自車両1の死角に他車両2等のオブジェクトが存在するか否か判定し、判定部22は対象者3に表示画像を表示するか否か判定していた。しかし、判定部22は、ギアの変更情報、舵角情報等から自車両1の動作の予測をしたり、他車両2の速度情報、他車両2の位置情報、ウィンカーの操作情報等から他車両2の動作の予測をしたりし、判定部22が対象者3に表示画像を表示するか否か判定してもよい。例えば、判定部22は、他車両2の速度情報と進行方向の情報とを用いて、判定部22は対象者3に表示画像を表示するか否か判定してもよい。判定部22が対象者3に表示画像を表示するか否か判定する方法は従来技術を用いればよい。また、条件付与部29は、車内外情報によって生成する表示画像を変化、または表示画像を加工してもよい。このようにすることで、対象者3は、死角の他車両2等のオブジェクトの状況とその変化とを認識しやすくなる。 In the first and second embodiments, in step S5, the determination unit 22 uses the position information of the own vehicle 1 and the position information of the target person 3 within a predetermined range of the target person within 5 meters from the own vehicle 1. It was determined whether or not 3 was present. Further, in step S6, the determination unit 22 determines whether or not an object such as another vehicle 2 exists in the blind spot of the own vehicle 1 as seen from the target person 3, and the determination unit 22 displays a display image on the target person 3. It was judged whether or not to do it. However, the determination unit 22 predicts the operation of the own vehicle 1 from the gear change information, the steering angle information, etc., and the other vehicle from the speed information of the other vehicle 2, the position information of the other vehicle 2, the operation information of the winker, and the like. The operation of 2 may be predicted, and it may be determined whether or not the determination unit 22 displays the display image on the target person 3. For example, the determination unit 22 may determine whether or not to display the display image on the target person 3 by using the speed information of the other vehicle 2 and the information on the traveling direction. As a method of determining whether or not the determination unit 22 displays the display image on the target person 3, the prior art may be used. 
 Further, the condition giving unit 29 may change the display image that is generated, or process the display image, according to the information inside and outside the vehicle. In this way, the target person 3 can more easily recognize the situation of an object such as the other vehicle 2 in the blind spot and changes in that situation.
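A prediction-based display decision of the kind described above could be sketched as follows. This is an assumed illustration only; the embodiments leave the prediction method to conventional techniques, and every name and threshold here is hypothetical.

```python
def predict_crossing_danger(other_speed_mps, heading_toward_crossing,
                            distance_to_crossing_m=10.0, time_horizon_s=3.0):
    """Hypothetical sketch: decide whether to display, based on whether the
    other vehicle's speed and traveling direction suggest it will reach the
    pedestrian crossing within the time horizon."""
    if not heading_toward_crossing or other_speed_mps <= 0.0:
        return False  # not approaching: no display triggered by this rule
    time_to_crossing_s = distance_to_crossing_m / other_speed_mps
    return time_to_crossing_s <= time_horizon_s
```

In practice the determination unit 22 would combine such a prediction with own-vehicle information (gear, steering angle) rather than use it alone.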
 In the first and second embodiments, in step S5, the determination unit 22 determines, from the position information of the own vehicle 1 and the position information of the target person 3, whether the target person 3 is present within a predetermined range of 5 meters from the own vehicle 1, and determines to display the display image to the target person 3 when the target person 3 is present. However, the determination unit 22 may determine to end the display and not display the display image when, during display, the target person 3 leaves the predetermined range within 5 meters of the own vehicle 1 and is no longer present, or when the target person 3 has stopped without crossing the pedestrian crossing 4. When the determination unit 22 determines not to display, the control unit 26 performs control so that the display image is not displayed on the display device.
 In the first and second embodiments, in step S5, the determination unit 22 determines, from the position information of the own vehicle 1 and the position information of the target person 3, whether the target person 3 is present within a predetermined range of 5 meters from the own vehicle 1, and determines to display the display image to the target person 3 when the target person 3 is present. However, the determination unit 22 may display a display image matched to a predetermined line of sight even when no target person 3 is present. In this way, even when the target person 3 is in a position where the determination unit 22 cannot determine his or her presence, the target person 3 can grasp the blind spot situation early.
 In the first and second embodiments, in step S6, the determination unit 22 determines whether an object such as the other vehicle 2 is present in the blind spot of the own vehicle 1 as seen from the target person 3, and thereby determines whether to display the display image to the target person 3. However, even when an object such as the other vehicle 2 is present in the blind spot of the own vehicle 1 as seen from the target person 3, the determination unit 22 may determine not to display the display image if it determines, from a prediction of the object's behavior, that the object poses no danger to the target person 3. Also, if the display image is already being displayed, the determination unit 22 may determine to end the display and not display it. For example, when the other vehicle 2 stops behind or directly beside the own vehicle 1, or when the other vehicle 2 leaves the blind spot and passes the target person 3, the determination unit 22 ends the display and determines not to display the display image to the target person 3. When the determination unit 22 determines not to display, the control unit 26 performs control so that the display image is not displayed on the display device. In this way, the target person 3 can easily confirm that there is no danger nearby, which gives the target person 3 peace of mind.
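The conditions for ending an ongoing display, as described above, can be sketched in one function. This is a hypothetical illustration; the field names and the rule set are assumptions, not the embodiments' specification.

```python
def should_stop_display(person_in_range, person_crossing, other_vehicle):
    """Hypothetical sketch of when determination unit 22 ends an ongoing
    display: the target person has left the predetermined range or stopped
    crossing, or the object in the blind spot has stopped or already
    passed the target person."""
    if not person_in_range or not person_crossing:
        return True
    if other_vehicle["stopped"] or other_vehicle["passed_person"]:
        return True
    return False
```

When this returns True, the control unit 26 would stop sending the display image to the display device.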
 In the first and second embodiments, in step S6, the determination unit 22 determines whether an object such as the other vehicle 2 is present in the blind spot of the own vehicle 1 as seen from the target person 3, and determines to display the display image to the target person 3 when such an object is present. However, the determination unit 22 may display the display image even when no object such as the other vehicle 2 is present in the blind spot of the own vehicle 1 as seen from the target person 3. In this way, the target person 3 can easily confirm that there is no danger nearby, which gives the target person 3 peace of mind.
 In the first and second embodiments, the determination unit 22 determines the presence of the target person 3 in step S5 and then determines in step S6 whether an object such as the other vehicle 2 is present in the blind spot of the own vehicle 1 as seen from the target person 3, thereby determining whether to display the display image to the target person 3. However, in the display system 30, either step S5 or step S6 may be performed first. Furthermore, the determination unit 22 may make these determinations at any point before the display image is displayed on the display device in step S12.
 In the first and second embodiments, the blind spot image acquisition unit 21 acquires the blind spot image in step S4, but it may instead acquire the blind spot image after step S5, after step S6, after step S7, or the like. Note that when the display system 30 generates the display image after the condition giving unit 29 has acquired, from the condition information DB 25, the notification device information of the notification device 31 (including the display device) to be used for notification, the notification pattern information, and the notification timing information, and has processed the image of the other vehicle 2, the blind spot image acquisition unit 21 may acquire the blind spot image after the condition giving unit 29 has acquired that information from the condition information DB 25.
 In the first and second embodiments, the display system 30 is described for the case where the own vehicle 1 is stopped, but the present disclosure is also applicable when the own vehicle 1 is parked or moving. For example, the display system 30 may start the display while the own vehicle 1 is traveling several tens of meters before the pedestrian crossing 4. Also, although the case where the pedestrian 3 as the target person is about to cross in front of the stopped own vehicle 1 has been described, the present disclosure is also applicable when the target person 3 is moving or standing still. For example, even when the pedestrian 3 has stopped with no intention of crossing the pedestrian crossing 4, the other vehicle 2 may rush through. In such a case, displaying the display image of the present disclosure can alert the target person 3.
 In this case, the vehicle interior/exterior information acquisition unit 27 acquires other-vehicle information indicating information about the other vehicle 2 present in the blind spot, and the condition giving unit 29 can alert the target person 3 more effectively by changing the processing according to the other-vehicle information. Specifically, the vehicle interior/exterior information acquisition unit 27 acquires, as the other-vehicle information, the speed information, traveling direction information, and the like of the other vehicle 2 present in the blind spot. The determination unit 22 predicts the future motion of the other vehicle 2 from the speed information, traveling direction information, and the like, and the condition giving unit 29 changes the processing according to the other-vehicle information. Here, the vehicle interior/exterior information acquisition unit 27 corresponds to the other-vehicle information acquisition unit.
 In the first and second embodiments, the case where the target person 3 is a single pedestrian has been described, but the present disclosure is also applicable when there are multiple target persons 3. For example, when, as in the first and second embodiments, the own vehicle 1 has display devices 32 to 35 on the front, rear, left, and right of the vehicle body, and there is one pedestrian in front of the own vehicle 1 and another pedestrian on its right side, the determination unit 22 determines, for each pedestrian, whether to display a display image. When the determination unit 22 determines to display for each pedestrian, the display image generation unit 23 generates a display image for each pedestrian. The control unit 26 causes the display device 32 on the front of the own vehicle 1 to display the display image generated for the pedestrian in front of the own vehicle 1, and causes the display device 35 on the right side of the own vehicle 1 to display the display image generated for the pedestrian on the right side of the own vehicle 1.
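Routing one image per pedestrian to the display device on the matching side could be sketched as follows. This is an assumed illustration: the embodiments state only that display devices 32 to 35 are on the front, rear, left, and right, with 32 on the front and 35 on the right; the assignment of 33 and 34 to rear and left here is a hypothetical choice, as are all the names.

```python
def route_images(pedestrians_by_side, generate_image):
    """Hypothetical sketch: generate a per-pedestrian display image and map it
    to the display device on the corresponding side of the vehicle body.
    Device numbering (33 = rear, 34 = left) is assumed for illustration."""
    side_to_device = {"front": 32, "rear": 33, "left": 34, "right": 35}
    return {side_to_device[side]: generate_image(pedestrian)
            for side, pedestrian in pedestrians_by_side.items()}
```

The control unit 26 would then send each generated image to its device.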
 Further, for example, when the own vehicle 1 has display devices 32 to 35 on the front, rear, left, and right of the vehicle body as in the first and second embodiments, and there are multiple pedestrians in front of the own vehicle 1, the determination unit 22 determines, for each pedestrian, whether to display a display image. The determination unit 22 then determines to display the display image and selects the one pedestrian most likely to take the most dangerous action, namely attempting to cross the pedestrian crossing 4 even though the other vehicle 2 is in the blind spot. The display image generation unit 23 generates a display image for the pedestrian for whom the determination unit 22 has determined to display and who is most likely to take the most dangerous action. The control unit 26 causes the display device 32 on the front of the own vehicle 1 to display the display image.
 Note that when, for example, there are multiple pedestrians for whom it is determined that the display image should be displayed and who are likely to take the most dangerous action of attempting to cross the pedestrian crossing 4 even though the other vehicle 2 is in the blind spot, the determination unit 22 decides to display the display image for the pedestrian who will be in danger soonest. In that case, a display image matched to that pedestrian's line of sight is displayed on the display device, but since the display itself can also be seen by the other pedestrians, they are alerted as well. Moreover, issuing a notification from the speaker 36 together with the display provides an even stronger alert.
 Further, when there are multiple pedestrians in front of the own vehicle 1, the determination unit 22 may, for example, decide by machine learning which pedestrian the display image should be displayed for, or determine a priority for each pedestrian and select the pedestrian with the highest priority as the display target. Furthermore, when there are multiple pedestrians in front of the own vehicle 1, the display system 30 may display a display image matched to the line of sight at the average position of the pedestrians, or display an image as seen from directly in front of, directly behind, the middle of the left side, the middle of the right side, or directly above the own vehicle 1.
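One simple realization of "select the pedestrian who will be in danger soonest" is a minimum over a predicted time-to-danger. This is a sketch under stated assumptions, not the embodiments' method (which may instead use machine learning or another priority scheme); all field names are hypothetical.

```python
def pick_display_target(pedestrians):
    """Hypothetical sketch: among pedestrians flagged for display, pick the
    one whose predicted time to danger is smallest. Each pedestrian is a
    dict with assumed keys 'display' and 'time_to_danger_s'."""
    candidates = [p for p in pedestrians if p["display"]]
    if not candidates:
        return None  # nobody to display for
    return min(candidates, key=lambda p: p["time_to_danger_s"])
```

A machine-learned priority score could replace `time_to_danger_s` without changing the selection structure.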
 In the first and second embodiments, in the display system 30, the determination unit 22 may determine, from the line-of-sight information of the target person 3, whether the target person 3 has noticed the display image shown on the display device; when the target person 3 has not noticed the display image, the condition giving unit 29 may process at least one part of the display image. The control unit 26 may then display the processed display image on the display device. In this way, the probability that the target person 3 checks the display image increases, and the visibility for the target person 3 can be improved.
 In the first and second embodiments, the case where the display system 30 displays the display image has been described, but together with the display it may also sound the horn (not shown), output light from the headlight 72, output sound from the speaker 36, or combine these with one or more display devices. Specifically, for example, when the display system 30 displays the display image and the target person 3 has not noticed it, the determination unit 22 determines whether to sound the horn provided on the own vehicle, whether to output light from the headlight 72 provided on the own vehicle, whether to output sound from the speaker 36 provided on the own vehicle, and so on. When the determination unit 22 determines to sound the horn, to output light from the headlight 72, or to output sound from the speaker 36, the control unit 26 displays the display image on the display device and at the same time sounds the horn, outputs light from the headlight 72, outputs sound from the speaker 36, or the like. Furthermore, by continuing to sound the horn, output light from the headlight 72, output sound from the speaker 36, and so on until the target person 3 notices the display image, the display system 30 increases the probability that the target person 3 checks the display image and can improve the visibility for the target person 3.
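The alert-escalation decision described above can be sketched as a pure function that, given whether the target person has noticed the display, returns which extra alerts to fire. This is a hypothetical illustration; the names and the fixed alert set are assumptions.

```python
def escalate_alerts(noticed, use_horn, use_headlight, use_speaker):
    """Hypothetical sketch: while the target person has not noticed the
    display image, return the extra alerts (horn, headlight, speaker)
    that the determination unit has approved; once noticed, stop all."""
    if noticed:
        return []
    actions = []
    if use_horn:
        actions.append("horn")
    if use_headlight:
        actions.append("headlight")
    if use_speaker:
        actions.append("speaker")
    return actions
```

A real system would call this each control cycle and drive the corresponding actuators until `noticed` becomes true.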
 Also, for example, when the display system 30 displays the display image and the system is to alert the target person 3 that the display has been made, the display system 30 may, together with the display, sound the horn, output light from the headlight 72, output sound from the speaker 36, or the like. In this way, the probability that the target person 3 checks the display image increases, and the visibility for the target person 3 can be improved.
 In the case of the headlight 72, the display system 30 may link the turning on and off of the headlight 72 with the display, for example flashing the headlights to make the target person 3 notice the display or to alert the target person 3. Also, for example, when the light output from the headlight 72 makes the display image hard for the pedestrian 3 to see, the display system 30 may dim or turn off the headlight 72.
 In the first and second embodiments, the case where the own vehicle 1 is near a pedestrian crossing has been described, but the present disclosure is also applicable to a crowded parking lot, parking on a road shoulder, and the like.
 In the first and second embodiments, the vehicle interior/exterior information acquisition unit 27 acquires the vehicle interior/exterior information from the overall control ECU 11, but it may instead acquire the vehicle interior/exterior information directly from each sensor.
 In the first and second embodiments, the own-vehicle image DB 24a, the blind spot image DB 24b, and the condition information DB 25 are provided in the display control device 20, but they may instead be stored on a separate server or the like, and the display control device 20 may acquire them from that separate server or the like using the communication device 60 or the like.
 In the first and second embodiments, the own vehicle 1 is an automobile, but it may be an autonomous vehicle or a vehicle driven by a person. In the case of an autonomous vehicle, the driver does not necessarily grasp the driving situation, so the system can support the target person 3 in acting safely in place of the driver. The own vehicle 1 is also not limited to an automobile and may be a forklift, a ship, an airplane, a bicycle, a motorcycle, or the like.
 In the first and second embodiments, the target person 3 is a pedestrian, but the target person may also be the driver of a vehicle such as a bicycle, a motorcycle, an automobile, or a forklift.
 The display control device, display system, display control method, and display control program described in the above embodiments are merely examples; they can be combined with other devices as appropriate, and the present disclosure is not limited to the configuration of a single embodiment.
 1 own vehicle, 2 other vehicle, 3 target person, 4 pedestrian crossing,
 5 dotted arrow when the target person 3 views the own vehicle 1,
 6 dotted arrow when the own vehicle 1 is viewed from directly in front,
 7 display image, 8 image of the own vehicle 1,
 9 blind spot image, which is an image of the other vehicle 2,
 10 overall control ECU, 11 processor, 12 memory,
 15 notification control ECU, 16 processor, 17 memory,
 20 display control device, 21 blind spot image acquisition unit, 22 determination unit,
 23 display image generation unit, 24a own-vehicle image DB,
 24b blind spot image DB, 25 condition information DB,
 26 control unit, 27 vehicle interior/exterior information acquisition unit, 28 blind spot image generation unit,
 29 condition giving unit, 30 display system, 31 notification device,
 32-35 display devices, 36 speaker, 40 sensor,
 41 vehicle speed sensor, 42 steering angle sensor, 43 accelerator sensor,
 44 brake sensor, 45 shift sensor,
 46 turn signal sensor, 47 hazard sensor,
 48 wiper sensor, 49 light sensor,
 50 door open/close sensor, 51 driver camera,
 52 seat occupancy sensor, 53 acceleration sensor, 54 angular velocity sensor,
 55 GPS device, 56 navigation system,
 57 exterior camera, 58 exterior sensor, 59 illuminance sensor,
 60 communication device, 61 antenna, 62 transmission unit,
 63 reception unit, 65 communication line, 70 driving device,
 71 headlight driver, 72 headlight,
 73 engine, 74 transmission, 75 brake actuator,
 76 steering actuator,
 77 turn signal, 78 HUD, 80 notification device,
 81 external light device, 82 external light driver,
 83 external light set, 84-85 external lights,
 86 projector device, 87 projector driver,
 88 projector, 89 body light device,
 90 body light driver, 91 body light,
 92 sound device, 93 sound driver,
 94 sound equipment, 95 frame of display device 32,
 100 in-vehicle system.

Claims (19)

  1.  A display control device comprising:
     a blind spot image acquisition unit that acquires a blind spot image showing a blind spot caused by an own vehicle as seen from a target person present outside the own vehicle;
     a display image generation unit that generates a display image in which an image of the own vehicle is superimposed on the blind spot image; and
     a control unit that displays the display image on a display device provided on the own vehicle.
  2.  The display control device according to claim 1, further comprising a condition giving unit that processes the display image,
     wherein the control unit displays the processed display image on the display device.
  3.  The display control device according to claim 2, wherein the condition giving unit processes the image of the own vehicle.
  4.  The display control device according to claim 3, wherein the condition giving unit performs at least one of: processing that reduces the image of the own vehicle, processing that applies a transparency effect to the image of the own vehicle, processing that renders the image of the own vehicle as seen from a shifted viewpoint, and processing that renders only the outline of the own vehicle as a solid or dotted line.
  5.  The display control device according to any one of claims 2 to 4, wherein the condition giving unit processes the blind spot image.
  6.  The display control device according to claim 5, wherein the condition giving unit performs at least one of: processing that enlarges at least one portion of the blind spot image, and processing that shifts the position of at least one portion of the blind spot image so as to reduce its overlap with the image of the own vehicle.
  7.  The display control device according to any one of claims 1 to 6, wherein the control unit causes the display device to display the display image in an emphasized manner.
  8.  The display control device according to claim 7, wherein displaying the display image in an emphasized manner means blinking the frame of the display device or blinking at least one portion of the display image.
  9.  The display control device according to any one of claims 2 to 8, further comprising an other-vehicle information acquisition unit that acquires other-vehicle information indicating information about another vehicle present in the blind spot,
     wherein the condition giving unit changes the processing according to the other-vehicle information.
  10.  The display control device according to any one of claims 1 to 9, further comprising a line-of-sight information acquisition unit that acquires line-of-sight information of the target person,
     wherein the display image generation unit generates the display image matched to the line of sight of the target person based on the line-of-sight information.
  11.  The display control device according to claim 10, further comprising an eye height information acquisition unit that acquires eye height information of the target person,
     wherein the display image generation unit generates the display image of the own vehicle as seen from the eye height of the target person based on the eye height information.
  12.  The display control device according to any one of claims 1 to 11, further comprising a determination unit that determines whether to display the display image to the target person,
     wherein the control unit displays the display image on the display device when the determination unit determines to display it.
  13.  The display control device according to claim 12, wherein the determination unit determines to display the display image to the target person when it determines that the target person is present within a predetermined range, and determines not to display the display image to the target person when it determines from the position information that the target person is not present within the predetermined range.
  14.  The display control device according to claim 12 or 13, wherein the determination unit determines to display the display image to the target person when an object is present in the blind spot, and determines not to display the display image to the target person when no object is present in the blind spot or when the object is stopped in the blind spot.
  15.  The display control device according to any one of claims 12 to 14, wherein
     the determination unit determines whether the target person has noticed the display image displayed on the display device,
     the condition giving unit processes at least one part of the display image when the determination unit determines that the target person has not noticed the display image, and
     the control unit displays the processed display image on the display device.
  16.  The display control device according to any one of claims 12 to 15, wherein
     the determination unit determines whether to output sound from a speaker provided on the own vehicle, and
     the control unit, when the determination unit determines to output the sound from the speaker, displays the display image on the display device and outputs the sound from the speaker.
  17.  A display system comprising:
     a display device provided on an own vehicle;
     a blind spot image acquisition unit that acquires a blind spot image showing a blind spot caused by the own vehicle as seen from a target person present outside the own vehicle;
     a display image generation unit that generates a display image in which an image of the own vehicle is superimposed on the blind spot image; and
     a control unit that displays the display image on the display device.
  18.  A display control method comprising:
     a step of acquiring a blind spot image showing a blind spot caused by an own vehicle as seen from a target person present outside the own vehicle;
     a step of generating a display image in which an image of the own vehicle is superimposed on the blind spot image; and
     a step of displaying the display image on a display device provided in the own vehicle.
  19.  A display control program that causes execution of:
     a process of acquiring a blind spot image showing a blind spot caused by an own vehicle as seen from a target person present outside the own vehicle;
     a process of generating a display image in which an image of the own vehicle is superimposed on the blind spot image; and
     a process of displaying the display image on a display device provided in the own vehicle.
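The superposition step shared by the system, method, and program claims can be illustrated with a minimal alpha-blending sketch. Images are plain nested lists of grayscale values; the function names and the alpha value are assumptions for illustration, not part of the application:

```python
# Minimal sketch of the superposition step: the own-vehicle image is
# drawn semi-transparently over the blind spot image, so the occluded
# scene remains visible "through" the vehicle. alpha weights the
# blind-spot view; images are nested lists of grayscale pixel values.
def superimpose(vehicle_image, blind_spot_image, alpha=0.6):
    return [
        [round(alpha * b + (1 - alpha) * v) for v, b in zip(vrow, brow)]
        for vrow, brow in zip(vehicle_image, blind_spot_image)
    ]
```

A real implementation would blend full-color camera frames (e.g. per-channel) and register the two images geometrically before blending, but the per-pixel weighting is the core of "superimposing the image of the own vehicle on the blind spot image."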
PCT/JP2020/040178 2020-10-27 2020-10-27 Display control device, display system, display control method, and display control program WO2022091194A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022558626A JP7439949B2 (en) 2020-10-27 2020-10-27 Display control device, display system, display control method, and display control program
PCT/JP2020/040178 WO2022091194A1 (en) 2020-10-27 2020-10-27 Display control device, display system, display control method, and display control program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/040178 WO2022091194A1 (en) 2020-10-27 2020-10-27 Display control device, display system, display control method, and display control program

Publications (1)

Publication Number Publication Date
WO2022091194A1 true WO2022091194A1 (en) 2022-05-05

Family

ID=81382155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/040178 WO2022091194A1 (en) 2020-10-27 2020-10-27 Display control device, display system, display control method, and display control program

Country Status (2)

Country Link
JP (1) JP7439949B2 (en)
WO (1) WO2022091194A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003244688A (en) * 2001-12-12 2003-08-29 Equos Research Co Ltd Image processing system for vehicle
JP2011253486A (en) * 2010-06-04 2011-12-15 Denso Corp Information display for vehicle
JP2018148284A (en) * 2017-03-01 2018-09-20 ソフトバンク株式会社 Base station device and communication system
KR20190115318A (en) * 2018-04-02 2019-10-11 주식회사 알트에이 System for pre-recognizing pedestrian
JP2019202589A (en) * 2018-05-22 2019-11-28 日本精機株式会社 Display device

Also Published As

Publication number Publication date
JP7439949B2 (en) 2024-02-28
JPWO2022091194A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
JP7249914B2 (en) Driving control device and in-vehicle system
US11267480B2 (en) Travel control apparatus and travel control method
JP6292218B2 (en) Information presenting apparatus and information presenting method
KR101895485B1 (en) Drive assistance appratus and method for controlling the same
JP4807263B2 (en) Vehicle display device
US10748425B2 (en) Image generation apparatus
JP6523561B1 (en) Irradiation apparatus and irradiation method
EP3626567A1 (en) Driving assistance apparatus for vehicle
CN107408349B (en) Information presentation device and information presentation method
JP4517393B2 (en) Driving assistance device
KR20190007286A (en) Driving system for vehicle and Vehicle
JP6489084B2 (en) Automated driving system
JP2008195375A (en) In-vehicle information display device and light irradiation device used therefor
JP2005165422A (en) Collision probability determination device
JP2020033013A (en) Drive assist system
US20230399004A1 (en) Ar display device for vehicle and method for operating same
JP2018024351A (en) Automatic operation system
JP6773382B2 (en) Irradiation device and irradiation method
JP6289776B1 (en) Rear wheel position indicator
US20210197863A1 (en) Vehicle control device, method, and program
JP2015149006A (en) Communication equipment for vehicle, communication equipment program for vehicle, and communication method for vehicle
WO2021019767A1 (en) Driving assistance apparatus, driving assistance system, and driving assistance method
WO2022091194A1 (en) Display control device, display system, display control method, and display control program
KR20180085585A (en) Vehicle control device mounted at vehicle and method for controlling the vehicle
WO2023157721A1 (en) Vehicle control device and vehicle control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20959712

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022558626

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20959712

Country of ref document: EP

Kind code of ref document: A1