WO2020105685A1 - Display control device, method, and computer program - Google Patents

Display control device, method, and computer program

Info

Publication number
WO2020105685A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
information image
displayed
visibility
Application number
PCT/JP2019/045494
Other languages
French (fr)
Japanese (ja)
Inventor
誠 秦 (Makoto Hata)
Original Assignee
日本精機株式会社 (Nippon Seiki Co., Ltd.)
Application filed by 日本精機株式会社 (Nippon Seiki Co., Ltd.)
Priority to JP2020557598A (JP7255608B2)
Priority to DE112019005849.5T (DE112019005849T5)
Priority to CN201980076258.0A (CN113165510B)
Publication of WO2020105685A1

Classifications

    • G08G 1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B60R 1/24: Real-time viewing arrangements for drivers or passengers using optical image capturing systems (e.g. cameras), for viewing an area in front of the vehicle with a predetermined field of view
    • B60K 35/00: Instruments specially adapted for vehicles; arrangement of instruments in or on vehicles
    • B60K 35/10: Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • B60K 35/23: Head-up displays [HUD]
    • B60K 35/28: Output arrangements characterised by the type or purpose of the output information, e.g. for attracting the attention of the driver
    • B60K 35/29: Instruments characterised by the way in which information is handled, e.g. showing information on plural displays or prioritising information according to driving conditions
    • G02B 27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B 27/0101: Head-up displays characterised by optical features
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/147: Digital output to display device using display panels
    • G06V 40/161: Human faces: detection; localisation; normalisation
    • G08G 1/0962: Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • B60K 2360/149: Instrument input by detecting viewing direction not otherwise provided for
    • B60K 2360/177: Augmented reality
    • B60K 2360/191: Highlight information
    • B60K 2360/21: Optical features of instruments using cameras
    • B60K 2360/33: Illumination features
    • B60K 2360/347: Optical elements for superposition of display information
    • B60K 2360/48: Sensors
    • B60R 2300/301: Image processing combining image information with other obstacle sensor information, e.g. RADAR/LIDAR/SONAR sensors, for estimating risk of collision
    • B60R 2300/302: Image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
    • B60R 2300/307: Image processing virtually distinguishing relevant parts of a scene from the background of the scene
    • B60R 2300/308: Image processing overlaying the real scene, e.g. through a head-up display on the windscreen
    • B60R 2300/70: Event-triggered choice to display a specific image among a selection of captured images
    • G02B 2027/0138: Head-up displays comprising image capture systems, e.g. camera
    • G02B 2027/014: Head-up displays comprising information/image processing systems
    • G09G 2340/045: Zooming at least part of an image, i.e. enlarging it or shrinking it
    • G09G 2380/10: Automotive applications
    • G09G 5/10: Intensity circuits

Definitions

  • The present disclosure relates to a display control device, a method, and a computer program that are used in a vehicle and superimpose an image on the foreground of the vehicle for visual recognition.
  • Patent Document 1 discloses a vehicle image display system that relatively enhances the perceptibility of images the driver has not yet perceived by reducing the conspicuousness of the image that coincides with the line of sight of the driver of the vehicle.
  • In outline, the present disclosure relates to improving the visibility of the real scene ahead of the driver and of images displayed superimposed on that scene. More specifically, it relates to making information easier to convey to the driver while suppressing the visual stimulus of images displayed superimposed on the real scene.
  • The display control device described in this specification displays a first information image, whose visibility is reduced when the driver's line of sight is directed at it, and a second information image, whose visibility is not reduced when the line of sight is directed at it, or is reduced by a degree smaller than the degree by which the visibility of the first information image is reduced.
  • The degree of change in the visibility of an information image when the line of sight is directed at it may be determined according to the magnitude of the risk potential of the information indicated by the information image.
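To make the idea concrete, here is a minimal sketch of such a mapping in Python. The function name, the 0-to-1 risk scale, and the thresholds are illustrative assumptions, not values taken from this publication.

```python
def visibility_scale_on_gaze(risk_potential: float) -> float:
    """Factor by which an information image's visibility (e.g. its
    brightness) is multiplied once the driver's gaze lands on it.

    risk_potential: hypothetical score in [0.0, 1.0]; higher means the
    displayed information is more safety-critical.
    """
    if risk_potential >= 0.7:   # high risk: keep fully visible
        return 1.0
    if risk_potential >= 0.3:   # medium risk: dim only slightly
        return 0.7
    return 0.0                  # low risk: hide the image entirely


# A route-guidance arrow (low risk) disappears once seen, while a
# forward-collision warning (high risk) keeps full visibility.
assert visibility_scale_on_gaze(0.1) == 0.0
assert visibility_scale_on_gaze(0.9) == 1.0
```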
  • FIG. 2 is a block diagram of a vehicular display system according to some embodiments.
  • FIG. 3 is a flow diagram of a process for reducing image visibility according to some embodiments.
  • FIG. 4 is a flow diagram of a process for increasing image visibility according to some embodiments. FIGS. 5A and 5B are diagrams showing examples of images displayed by the vehicle display system according to some embodiments.
  • FIG. 6 is a flow diagram of a process for reducing image visibility according to some embodiments; the remaining figures are diagrams showing examples of images displayed by the vehicle display system according to some embodiments.
  • The image display unit 11 in the vehicle display system 10 is a head-up display (HUD: Head-Up Display) device provided in the dashboard 5 of the host vehicle 1.
  • The HUD device emits display light 11a toward the front windshield 2 (an example of a projection target member) and displays the image 200 in the virtual display area 100, so that the image 200 is visually recognized superimposed on the foreground 300, the real space visually recognized through the front windshield 2.
  • Alternatively, the image display unit 11 may be a head-mounted display (hereinafter, HMD) device.
  • The driver 4 wears the HMD device on his or her head and sits in the seat of the host vehicle 1, whereby the displayed image 200 is visually recognized superimposed on the foreground 300 through the front windshield 2 of the host vehicle 1.
  • The display area 100 in which the vehicle display system 10 displays the predetermined image 200 is fixed at a specific position referenced to the coordinate system of the host vehicle 1; when the driver 4 turns toward that direction, the image 200 displayed in the display area 100 fixed at the specific position can be visually recognized.
  • Under the control of the display control device 13, the image display unit 11 can form visual augmented reality (AR) by displaying the image 200 in the vicinity of a real object 310 existing in the foreground 300 (the real space, or real scene, visually recognized through the front windshield 2 of the host vehicle 1), at a position overlapping the real object 310, or at a position set with the real object 310 as a reference (each an example of a positional relationship between the image and the real object). The real object 310 may be, for example, an obstacle (a pedestrian, a bicycle, a motorcycle, another vehicle, etc.), a road surface, a road sign, or a feature (a building, a bridge, etc.).
  • The image display unit 11 displays a first information image 210 (AR image) and a second information image 220 (AR image), which differ according to the type of information to be provided (described in detail later).
  • FIG. 2 is a block diagram of a vehicle display system 10 according to some embodiments.
  • The vehicle display system 10 includes an image display unit 11 and a display control device 13 that controls the image display unit 11.
  • The display control device 13 includes one or more I/O interfaces 14, one or more processors 16, one or more storage units 18, and one or more image processing circuits 20 (e.g., graphics processing units).
  • The various functional blocks depicted in FIG. 2 may be implemented in hardware, software, or a combination of both.
  • FIG. 2 shows only one example of an implementation; the illustrated components may be combined into fewer components, or there may be additional components.
  • The processor 16 and the image processing circuit 20 are operably connected to the storage unit 18. More specifically, by executing a program stored in the storage unit 18, the processor 16 and the image processing circuit 20 can perform operations of the vehicle display system 10, such as generating and/or transmitting image data.
  • The processor 16 and/or the image processing circuit 20 may be at least one general-purpose microprocessor (e.g., a central processing unit (CPU)), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array (FPGA), or any combination thereof.
  • The storage unit 18 may include any type of magnetic medium such as a hard disk, any type of optical medium such as a CD or DVD, and any type of semiconductor memory, both volatile and non-volatile. Volatile memory may include DRAM and SRAM, and non-volatile memory may include ROM and NVROM.
  • The processor 16 is operably connected to the I/O interface 14. The I/O interface 14 can include a wireless communication interface for connecting the vehicle display system 10 to a personal area network (PAN) such as a Bluetooth (registered trademark) network, a local area network (LAN) such as an 802.11x Wi-Fi (registered trademark) network, and/or a wide area network (WAN) such as a 4G or LTE cellular network.
  • The I/O interface 14 may also include a wired communication interface such as a USB port, a serial port, a parallel port, OBDII, and/or any other suitable wired communication port.
  • The processor 16 is operably connected to the I/O interface 14 so that it can exchange information with various other electronic devices connected to the vehicle display system 10 (the I/O interface 14).
  • To the I/O interface 14 are operably connected, for example, a vehicle ECU 401 provided in the host vehicle 1, a road information database 403, a host vehicle position detection unit 405, a vehicle exterior sensor 407, a line-of-sight direction detection unit 409, an eye position detection unit 411, a portable information terminal 413, a vehicle exterior communication connection device 420, and the like.
  • The image display unit 11 is operably connected to the processor 16 and the image processing circuit 20.
  • The image displayed by the image display unit 11 may be based on image data received from the processor 16 and/or the image processing circuit 20.
  • The processor 16 and the image processing circuit 20 control the image displayed by the image display unit 11 based on information obtained from the I/O interface 14.
  • The I/O interface 14 may include a function of processing (converting, calculating, analyzing) information received from other electronic devices connected to the vehicle display system 10.
  • The host vehicle 1 includes the vehicle ECU 401, which acquires the state of the host vehicle 1 (for example, mileage, vehicle speed, accelerator pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, and various warning states).
  • The vehicle ECU 401 controls each unit of the host vehicle 1 and can transmit, for example, vehicle speed information indicating the current vehicle speed of the host vehicle 1 to the processor 16. In addition to, or instead of, simply transmitting data detected by sensors, the vehicle ECU 401 can transmit determination results and/or analysis results based on that data to the processor 16; for example, it may transmit information indicating whether the host vehicle 1 is traveling at low speed or is stopped.
  • The vehicle ECU 401 may transmit to the I/O interface 14 an instruction signal specifying the image 200 to be displayed by the vehicle display system 10; at this time, the coordinates of the image 200, the notification necessity degree of the image 200, and/or the necessity-degree-related information on which the notification necessity degree is based may be added to the instruction signal.
  • The host vehicle 1 may include a road information database 403, such as one included in a navigation system.
  • Based on the position of the host vehicle 1 acquired from the host vehicle position detection unit 405 described later, the road information database 403 may read road information around the host vehicle (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, branch roads, traffic regulations, etc.) and feature information (the presence or absence of buildings, bridges, rivers, etc., together with their positions (including the distance to the host vehicle 1), directions, shapes, types, detailed information, and so on) and transmit it to the processor 16. The road information database 403 may also calculate an appropriate route from the starting point to the destination and transmit it to the processor 16 as navigation information.
  • The host vehicle 1 may include a host vehicle position detection unit 405, such as a GNSS (Global Navigation Satellite System) receiver.
  • The road information database 403, the portable information terminal 413 described later, and/or the vehicle exterior communication connection device 420 can acquire the position information of the host vehicle 1 from the host vehicle position detection unit 405 continuously, intermittently, or at each predetermined event, and can select or generate information around the host vehicle 1 and transmit it to the processor 16.
  • The host vehicle 1 may include one or more vehicle exterior sensors 407 that detect real objects existing in the vicinity of the host vehicle 1 (particularly in the foreground 300 in this embodiment).
  • The real objects detected by the vehicle exterior sensor 407 include, for example, pedestrians, bicycles, motorcycles, other vehicles (a preceding vehicle, etc.), road surfaces, marking lines, roadside objects, and/or features (buildings, etc.).
  • The vehicle exterior sensor 407 may be, for example, a radar sensor such as a millimeter-wave radar, an ultrasonic radar, or a laser radar, or a camera sensor consisting of a camera and an image processing device; it may be configured as a combination of a radar sensor and a camera sensor, or with only one of them.
  • The one or more vehicle exterior sensors 407 detect real objects in front of the host vehicle 1 in each detection cycle and transmit real-object-related information (an example being the presence or absence of a real object and, if a real object exists, information such as the position, size, and/or type of each real object) to the processor 16.
  • The real-object-related information may also be transmitted to the processor 16 via another device (for example, the vehicle ECU 401).
  • When a camera is used as the sensor, an infrared or near-infrared camera is desirable so that real objects can be detected even when the surroundings are dark, such as at night.
  • In addition, a stereo camera, which can acquire distance and the like from parallax, is desirable.
  • The host vehicle 1 may include a line-of-sight direction detection unit 409 including an infrared camera that captures the face of the driver 4 in order to detect the gaze direction of the driver 4 (hereinafter also referred to as the "line-of-sight direction").
  • The processor 16 can identify the line-of-sight direction of the driver 4 by acquiring an image captured by the infrared camera (an example of information from which the line-of-sight direction can be estimated) and analyzing the captured image. Alternatively, the processor 16 may acquire, via the I/O interface 14, the line-of-sight direction of the driver 4 identified by the line-of-sight direction detection unit 409 (or another analysis unit) from the image captured by the infrared camera.
  • The method of acquiring the line-of-sight direction of the driver 4 of the host vehicle 1, or information from which it can be estimated, is not limited to these; it may be acquired using other known gaze-direction detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflex method, the scleral reflection method, the Purkinje image detection method, the search coil method, or the infrared fundus camera method.
  • The host vehicle 1 may include an eye position detection unit 411 including an infrared camera that detects the position of the eyes of the driver 4.
  • The processor 16 can identify the eye position of the driver 4 by acquiring an image captured by the infrared camera (an example of information from which the eye position can be estimated) and analyzing the captured image.
  • Alternatively, the processor 16 may acquire, via the I/O interface 14, information on the eye position of the driver 4 identified from the image captured by the infrared camera.
  • The method of acquiring the eye position of the driver 4 of the host vehicle 1, or information from which it can be estimated, is not limited to these; it may be acquired using known eye-position detection (estimation) techniques.
  • The processor 16 may adjust at least the position of the image 200 based on the detected eye position of the driver 4, so that the viewer (the driver 4) visually recognizes the image 200 superimposed at the desired position in the foreground 300.
  • The portable information terminal 413 is a smartphone, a laptop computer, a smartwatch, or another information device that can be carried by the driver 4 (or another occupant of the host vehicle 1).
  • The I/O interface 14 can communicate with the portable information terminal 413 by pairing with it, and acquires data recorded in the portable information terminal 413 (or in a server accessed via the portable information terminal).
  • The portable information terminal 413 may have, for example, the same functions as the road information database 403 and the host vehicle position detection unit 405 described above, acquiring the road information (an example of real-object-related information) and transmitting it to the processor 16.
  • The portable information terminal 413 may also acquire commercial information (an example of real-object-related information) related to commercial facilities in the vicinity of the host vehicle 1 and transmit it to the processor 16.
  • The portable information terminal 413 may transmit schedule information of its owner (for example, the driver 4), incoming-call information, mail reception information, and the like to the processor 16, and the processor 16 and/or the image processing circuit 20 may generate and/or transmit image data relating to them.
  • The vehicle exterior communication connection device 420 is a communication device that exchanges information with the host vehicle 1; it includes, for example, other vehicles connected to the host vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle To Vehicle), pedestrians (portable information terminals carried by pedestrians) connected by pedestrian-to-vehicle communication (V2P: Vehicle To Pedestrian), network communication devices connected by road-to-vehicle communication (V2I: Vehicle To road Infrastructure), and, more generally, everything connected by V2X (Vehicle To Everything).
  • The vehicle exterior communication connection device 420 may acquire the positions of, for example, pedestrians, bicycles, motorcycles, other vehicles (such as a preceding vehicle), road surfaces, marking lines, roadside objects, and/or features (such as buildings), and transmit them to the processor 16.
  • The vehicle exterior communication connection device 420 may have the same function as the host vehicle position detection unit 405 described above, acquiring the position information of the host vehicle 1 and transmitting it to the processor 16; it may further have the function of the road information database 403 described above, acquiring the road information (an example of real-object-related information) and transmitting it to the processor 16.
  • The information acquired from the vehicle exterior communication connection device 420 is not limited to the above.
  • The software components stored in the storage unit 18 include a real-object-related information detection module 502, a notification necessity degree detection module 504, an image type determination module 506, an image position determination module 508, an image size determination module 510, an eye position detection module 512, a visual recognition detection module 514, a behavior determination module 516, and a graphics module 518, each described below.
  • The real-object-related information detection module 502 detects the position and size of real objects existing in the foreground 300 of the host vehicle 1, which serve as a basis for determining the coordinates and size of the image 200 described later.
  • The real-object-related information detection module 502 may, for example, use the road information database 403, the vehicle exterior sensor 407, or the vehicle exterior communication connection device 420 to acquire the position of a real object 310 existing in the foreground 300 of the host vehicle 1 (for example, the road surface 311, the preceding vehicle 312, the pedestrian 313, and the building 314 shown in FIG. 5A), that is, its position in the height (vertical) direction and the horizontal (left-right) direction as seen when the driver 4 views the traveling direction (forward) of the host vehicle 1 from the driver's seat, to which the position in the depth (forward) direction may be added; it may also acquire the size of the real object 310 (its size in the height and horizontal directions).
  • The real-object-related information detection module 502 may also determine whether the environment is one in which detection accuracy decreases when the position or size of a real object is detected by the vehicle exterior sensor 407 (bad weather such as rain, fog, or snow). For example, it may calculate the position detection accuracy of the real object and transmit to the processor 16 the determination result of the degree of decrease in detection accuracy, the position where the detection accuracy has decreased, and the environment (bad weather) causing the decrease.
  • Bad weather may be determined by acquiring weather information for the position where the host vehicle 1 is traveling from the portable information terminal 413, the vehicle exterior communication connection device 420, or the like.
  • The real-object-related information detection module 502 also detects real-object-related information, that is, information about real objects existing in the foreground 300 of the host vehicle 1, which serves as a basis for determining the content of the image 200 described below (hereinafter also referred to as the "image type" where appropriate).
  • The real-object-related information is, for example, type information indicating the type of a real object (such as a pedestrian or another vehicle), moving-direction information indicating the moving direction of the real object, distance/time information indicating the distance to the real object or the arrival time, or individual detailed information about the real object, such as the fee of a parking lot (a real object); it is not limited to these.
  • The real-object-related information detection module 502 may detect the real-object-related information by acquiring the type information, distance/time information, and/or individual detailed information from the road information database 403 or the portable information terminal 413; the type information, moving-direction information, and/or distance/time information from the vehicle exterior sensor 407; and the type information, moving-direction information, distance/time information, and/or individual detailed information from the vehicle exterior communication connection device 420.
  • The notification necessity degree detection module 504 detects the degree to which the driver 4 needs to be notified (the notification necessity degree) of the real-object position information and the real-object-related information detected by the real-object-related information detection module 502.
  • The notification necessity degree detection module 504 may detect the notification necessity degree from various other electronic devices connected to the I/O interface 14.
  • Alternatively, an electronic device connected to the I/O interface 14 in FIG. 2 may transmit information to the vehicle ECU 401, and the notification necessity degree detection module 504 may detect (acquire) the notification necessity degree determined by the vehicle ECU 401 based on the received information.
  • The "notification necessity degree" may be determined based on, for example, a danger degree derived from the seriousness of what could occur, an urgency degree derived from the length of the reaction time required to take responsive action, an effectiveness degree derived from the situation of the host vehicle 1 or of the driver 4 (or of other occupants of the host vehicle 1), or a combination of these (the indicators of the notification necessity degree are not limited to these).
  • The notification necessity degree detection module 504 may detect necessity-degree-related information that serves as a basis for estimating the notification necessity degree of the image 200, and estimate the notification necessity degree from it.
  • The necessity-degree-related information serving as a basis for estimating the notification necessity degree of the image 200 may be, for example, the position and type of a real object or a traffic regulation (an example of road information), and the notification necessity degree may be estimated based on, or in consideration of, other information input from the various other electronic devices connected to the I/O interface 14 described above.
  • The vehicle display system 10 need not itself have the function of estimating the notification necessity degree; part or all of that function may be provided separately from the display control device 13 of the vehicle display system 10.
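As a rough illustration of how the indicators named above (danger, urgency, effectiveness) could be combined into one notification necessity degree, consider the sketch below; the weighted-sum form and the weights are assumptions for illustration only.

```python
def notification_necessity(danger: float, urgency: float,
                           effectiveness: float,
                           weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Combine three example indicators into a single necessity degree.

    All inputs are assumed normalized to [0.0, 1.0]; a weighted sum is
    just one possible combination rule among many.
    """
    w_danger, w_urgency, w_effect = weights
    return (w_danger * danger + w_urgency * urgency
            + w_effect * effectiveness)


# A nearby pedestrian: serious outcome, little time left to react.
print(notification_necessity(danger=0.9, urgency=0.8, effectiveness=0.6))
```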
  • The image type determination module 506 can determine the type of the image 200 to be displayed for a real object based on, for example, the type and position of the real object detected by the real-object-related information detection module 502, the type and number of items of real-object-related information detected by that module, and/or the notification necessity degree detected by the notification necessity degree detection module 504. The image type determination module 506 may also increase or decrease the types of the image 200 to be displayed according to the determination result of the visual recognition detection module 514 described later; specifically, when the real object 310 is in a state in which it is difficult for the driver 4 to visually recognize it, the types of the image 200 visually recognized near the real object may be increased.
  • Based on the position of the real object detected by the real-object-related information detection module 502, the image position determination module 508 determines the coordinates of the image 200 (its positions in the horizontal direction (X-axis direction) and the vertical direction (Y-axis direction) as seen when the driver 4 views the direction of the display area 100 from the driver's seat of the host vehicle 1).
  • When displaying the image 200 in association with a specific real object, the image position determination module 508 determines the coordinates of the image 200 so as to have a predetermined positional relationship with that real object; for example, the left-right and vertical positions of the image 200 are determined so that the center of the image 200 is visually recognized as overlapping the center of the real object.
  • The image position determination module 508 can also determine the coordinates of the image 200 so that the image 200 has a predetermined positional relationship with reference to a real object to which it is not directly related. For example, as shown in FIG. 5A, the coordinates of the first FCW image 221 described later, which relates to the preceding vehicle 312 (an example of a specific real object), may be determined (or corrected) with reference to the left lane marking 311a (an example of a real object) and the right lane marking 311b (an example of a real object) of the lane (road surface 311) in which the host vehicle 1 is traveling, even though the image is not directly related to those markings.
  • The "predetermined positional relationship" can be adjusted depending on the situation of the real object or the host vehicle 1, the type of the real object, the type of the displayed image, and the like (one possible computation is sketched below).
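The following sketch shows one way such a positional relationship could be computed: the image is centered on the real object and then shifted by a per-image-type offset. The Box type and the coordinate convention are hypothetical and only for illustration.

```python
from dataclasses import dataclass


@dataclass
class Box:
    x: float  # left edge in display coordinates (X axis, left-right)
    y: float  # top edge in display coordinates (Y axis, up-down)
    w: float  # width
    h: float  # height

    @property
    def center(self) -> tuple:
        return (self.x + self.w / 2.0, self.y + self.h / 2.0)


def place_image_on(real_object: Box, image_size: tuple,
                   offset: tuple = (0.0, 0.0)) -> Box:
    """Position an image so its center overlaps the real object's
    center, shifted by an offset expressing the 'predetermined
    positional relationship' for this image type."""
    cx, cy = real_object.center
    w, h = image_size
    return Box(cx - w / 2.0 + offset[0], cy - h / 2.0 + offset[1], w, h)


# Example: an FCW mark centered on a preceding vehicle, nudged below it.
vehicle = Box(x=400.0, y=200.0, w=120.0, h=90.0)
fcw = place_image_on(vehicle, image_size=(140.0, 20.0), offset=(0.0, 60.0))
```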
  • The image size determination module 510 can determine the size of the image 200 displayed in association with a real object based on the type and position of the real object detected by the real-object-related information detection module 502, the type and number of items of real-object-related information detected by that module, and/or the notification necessity degree detected by the notification necessity degree detection module 504. The image size determination module 510 can also change the image size according to the number of types of the image 200 being displayed; for example, the image size may be reduced as the number of image types increases.
  • The eye position detection module 512 detects the position of the eyes of the driver 4 of the host vehicle 1.
  • The eye position detection module 512 includes various software components for performing operations such as determining in which of a plurality of staged height regions the eye height of the driver 4 lies, detecting the eye height of the driver 4 (its position in the Y-axis direction), detecting the eye height and depth position of the driver 4 (its positions in the Y-axis and Z-axis directions), and/or detecting the eye position of the driver 4 (its positions in the X-, Y-, and Z-axis directions), together with the various operations associated with these.
  • The eye position detection module 512 can acquire the eye position of the driver 4 from the eye position detection unit 411, or can receive from the eye position detection unit 411 information from which the eye position can be estimated and estimate the eye position, including the eye height, of the driver 4.
  • The information from which the eye position can be estimated may include, for example, the position of the driver's seat of the host vehicle 1, the position of the face of the driver 4, the sitting height, and values input by the driver 4 on an operation unit (not shown).
  • The visual recognition detection module 514 detects whether the driver 4 of the host vehicle 1 visually recognizes the predetermined image 200.
  • The visual recognition detection module 514 includes various software components for executing operations for determining whether the driver 4 visually recognizes the predetermined image 200 and whether the driver 4 visually recognizes the periphery (vicinity) of the predetermined image 200.
  • The visual recognition detection module 514 may compare the gaze position GZ of the driver 4 (described later), acquired from the line-of-sight direction detection unit 409, with the position of the image 200 acquired from the graphics module 518, determine whether the driver 4 has visually recognized the image 200, and transmit to the processor 16 the determination result and information identifying the visually recognized image 200.
  • To determine whether the driver 4 visually recognizes the periphery (vicinity) of the predetermined image 200, the visual recognition detection module 514 may set as the periphery of the image 200 an area of a preset width extending outward from the outer edge of the image 200, and may determine that the driver 4 of the host vehicle 1 visually recognizes the predetermined image 200 when the gaze position GZ described later enters this periphery. The visual recognition determination is not limited to these means.
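A minimal sketch of that judgment: the gaze position is tested against the image rectangle expanded outward by a preset margin representing the image's periphery. The rectangle layout and the margin value are assumptions.

```python
def is_gazing_at(gaze: tuple, rect: tuple, margin: float = 10.0) -> bool:
    """True if the gaze position GZ lies inside the image rectangle
    (x, y, w, h) expanded outward by `margin`, i.e. inside the image
    itself or its preset periphery."""
    gx, gy = gaze
    x, y, w, h = rect
    return (x - margin <= gx <= x + w + margin and
            y - margin <= gy <= y + h + margin)


# A gaze point just outside the image edge still counts as viewing it.
print(is_gazing_at((95.0, 50.0), rect=(100.0, 40.0, 80.0, 30.0)))  # True
```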
  • The visual recognition detection module 514 may also detect what the driver 4 is visually recognizing other than the image 200. For example, it may compare the position of a real object 310 existing in the foreground 300 of the host vehicle 1, detected by the real-object-related information detection module 502, with the gaze position GZ of the driver 4 (described later) acquired from the line-of-sight direction detection unit 409, and transmit to the processor 16 information identifying the real object 310 that has been visually recognized.
  • The behavior determination module 516 detects behavior of the driver 4 that is not appropriate for the information shown in the first information image 210 described later. For some first information images 210, the corresponding inappropriate behavior of the driver 4 is stored in the storage unit 18 in association with the image. The behavior determination module 516 determines, in particular, whether inappropriate behavior of the driver 4 associated with a first information image 210 whose visibility has been reduced is detected. For example, when the first information image 210 includes route guidance information, it may be determined that inappropriate behavior of the driver 4 has been detected if the driver 4 is gazing at a branch road different from the direction indicated by the route guidance.
  • A similar determination may be made, for example, when the first information image 210 includes traffic regulation information.
  • The information collected by the behavior determination module 516 to determine behavior includes the state of the host vehicle 1 input from the vehicle ECU 401 (for example, mileage, vehicle speed, accelerator pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, and various warning states), the line-of-sight direction input from the line-of-sight direction detection unit 409, and the like, but is not limited to these.
  • The graphics module 518 includes various known software components for changing the visual effect (e.g., brightness, transparency, saturation, contrast, or other visual characteristics), size, display position, and display distance (the distance from the driver 4 to the image 200) of the displayed image 200.
  • The graphics module 518 displays the image 200 so that the driver 4 can visually recognize it, with the type set by the image type determination module 506, at the coordinates set by the image position determination module 508 (including at least the left-right direction (X-axis direction) and the vertical direction (Y-axis direction) as seen when the driver 4 views the display area 100 from the driver's seat of the host vehicle 1), and in the image size set by the image size determination module 510.
  • The graphics module 518 displays at least a first information image 210 and a second information image 220, which are augmented reality images (AR images) arranged so as to have a predetermined positional relationship with a real object 310 in the foreground 300 of the host vehicle 1.
  • When the driver's line of sight is directed at it, the first information image 210 has its visibility reduced (including being hidden).
  • The second information image 220 does not have its visibility lowered in the way the first information image 210 does (this includes having its visibility lowered to a degree smaller than the degree of reduction of the first information image 210, having its visibility left unchanged, or having its visibility increased).
  • "Reducing visibility" may include reducing brightness, increasing transparency, decreasing saturation, reducing contrast, reducing size, reducing the number of image types, combinations of these, or combinations of these with other elements.
  • "Increasing visibility" may include increasing brightness, decreasing transparency, increasing saturation, increasing contrast, increasing size, increasing the number of image types, combinations of these, or combinations of these with other elements.
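One way to realize these changes is to scale several visual-effect parameters together, as in the sketch below; the ImageStyle record, the parameter set, and the factors are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ImageStyle:
    brightness: float  # 0.0 - 1.0
    opacity: float     # 0.0 (transparent) - 1.0 (opaque)
    saturation: float  # 0.0 - 1.0
    scale: float       # relative size, 1.0 = nominal


def reduce_visibility(style: ImageStyle, factor: float = 0.5) -> ImageStyle:
    """Dim, fade, desaturate, and slightly shrink the image together."""
    return ImageStyle(style.brightness * factor,
                      style.opacity * factor,
                      style.saturation * factor,
                      style.scale * max(factor, 0.8))  # shrink gently


def increase_visibility(style: ImageStyle, factor: float = 1.5) -> ImageStyle:
    """Inverse adjustment, clamped so no parameter exceeds 1.0."""
    def clamp(v: float) -> float:
        return min(v * factor, 1.0)
    return ImageStyle(clamp(style.brightness), clamp(style.opacity),
                      clamp(style.saturation), clamp(style.scale))
```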
  • The first information image 210 shows information with a relatively low danger degree derived from the seriousness of what could occur: for example, an arrow image displayed on the road surface to indicate the route (an example of navigation information), a text image indicating the destination (an example of navigation information), an image showing the distance to the next turning point (an example of navigation information), a POI (Point of Interest) image indicating feature information such as a store or facility existing in the foreground 300, images related to road signs (guidance signs, warning signs, regulation signs, instruction signs, auxiliary signs), and an ACC (Adaptive Cruise Control) image that displays on the road surface the inter-vehicle distance set when following a preceding vehicle.
  • The second information image 220 shows information with a relatively high danger degree derived from the seriousness of what could occur; it is, for example, a forward collision warning (FCW: Forward Collision Warning) image visually recognized near an obstacle existing in the foreground 300 of the host vehicle 1.
  • FIG. 3 is a flow diagram of a process for reducing the visibility of an image according to some embodiments.
  • First, the line-of-sight direction (the gaze position GZ described later) of the driver 4 of the host vehicle 1 is acquired (step S11), and the position of the displayed image 200 is acquired (step S12).
  • The processor 16 identifies the target visually recognized by the driver 4 by comparing the line-of-sight direction acquired in step S11 with the position of the image 200 acquired in step S12 (step S13). Specifically, the processor 16 determines whether the first information image 210 is visually recognized, the second information image 220 is visually recognized, or neither the first information image 210 nor the second information image 220 is visually recognized. If a first information image 210 is visually recognized, which first information image 210 is visually recognized is also identified.
  • When it is determined in step S13 that the driver 4 visually recognizes the first information image 210, the processor 16 reduces the visibility of the visually recognized first information image 210 (step S14); at this time, the processor 16 may hide the first information image 210. When it is determined in step S13 that the driver 4 visually recognizes the second information image 220, the processor 16 does not reduce the visibility of the second information image 220, or reduces it by a degree smaller than the degree by which the visibility of the first information image 210 is reduced.
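Putting steps S11 to S14 together, the sketch below walks through one pass of this flow. It is a hedged illustration, not the patent's implementation; images are plain dictionaries, and the gaze test repeats the helper from the earlier sketch so this block runs standalone.

```python
def is_gazing_at(gaze: tuple, rect: tuple, margin: float = 10.0) -> bool:
    gx, gy = gaze
    x, y, w, h = rect
    return (x - margin <= gx <= x + w + margin and
            y - margin <= gy <= y + h + margin)


def update_display(gaze: tuple, first_images: list, second_images: list):
    """One pass of the FIG. 3 flow: the gaze position (step S11) and the
    image positions (step S12) are the inputs; each image is a dict with
    'rect' = (x, y, w, h) and 'opacity'."""
    for img in first_images:            # step S13: what is being viewed?
        if is_gazing_at(gaze, img["rect"]):
            img["opacity"] = 0.0        # step S14: hide the gazed first
                                        # information image (one form of
                                        # reduced visibility)
    for img in second_images:
        if is_gazing_at(gaze, img["rect"]):
            pass                        # second information images keep
                                        # their visibility (or lose less)


# Example: the driver looks at a navigation arrow; it is hidden, while
# a gazed FCW image would keep its visibility.
nav = {"rect": (100.0, 40.0, 80.0, 30.0), "opacity": 1.0}
fcw = {"rect": (400.0, 260.0, 140.0, 20.0), "opacity": 1.0}
update_display((120.0, 55.0), [nav], [fcw])
print(nav["opacity"], fcw["opacity"])  # 0.0 1.0
```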
  • FIG. 4 is a flow diagram of a process for increasing the visibility of the first information image according to some embodiments.
  • The processor 16 acquires the behavior of the driver 4 (step S21) and detects behavior of the driver 4 that is not appropriate for the information indicated by the first information image 210 whose visibility was reduced by the processing of step S14 (step S22); when it determines that such inappropriate behavior of the driver 4 has occurred, it increases the visibility of the first information image 210 whose visibility had been reduced (step S23).
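And a matching sketch of the FIG. 4 flow, again with hypothetical names and data: each visibility-reduced first information image carries the behavior its information calls for, and a mismatch restores the image.

```python
def restore_on_inappropriate_behavior(observed_behavior: str,
                                      reduced_first_images: list):
    """S21 observes the driver's behavior, S22 compares it with the
    behavior expected from each visibility-reduced first information
    image, and S23 raises the image's visibility again on a mismatch.

    Each image is a dict with 'expected' (behavior) and 'opacity'."""
    for img in reduced_first_images:
        if observed_behavior != img["expected"]:   # step S22: mismatch
            img["opacity"] = 1.0                   # step S23: re-show


# Example: route guidance pointed to the right branch, but the driver
# is gazing at the left branch, so the hidden image is redisplayed.
nav = {"expected": "gaze_right_branch", "opacity": 0.0}
restore_on_inappropriate_behavior("gaze_left_branch", [nav])
print(nav["opacity"])  # 1.0
```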
  • The operations of the processing described above can be performed by executing one or more functional modules of an information processing device such as a general-purpose processor or an application-specific chip. These modules, combinations of these modules, and/or combinations with general hardware capable of substituting for their functions are all included in the scope of protection of the present invention.
  • The functional blocks of the vehicle display system 10 are optionally implemented by hardware, software, or a combination of hardware and software in order to carry out the principles of the various described embodiments.
  • The functional blocks depicted in FIG. 2 may optionally be combined, or one functional block may be separated into two or more sub-blocks, to implement the principles of the described embodiments.
  • The description herein optionally supports any possible combination or division of the functional blocks described herein.
  • The first information image 210 includes a navigation image 211, which is visually recognized as superimposed on the road surface 311 and indicates a guide route, and a POI image 212, which is an illustration including the notation "P" indicating a parking lot and points to a building 314 (an example of the real object 310).
  • The second information image 220 includes a first FCW image 221, visually recognized linearly (in a line) on the road surface 311 behind the preceding vehicle 312 traveling in front of the host vehicle 1, and a second FCW image 222, visually recognized in an arc shape on the road surface 311 around the pedestrian 313 walking on the sidewalk on the opposite-lane side of the traveling lane of the host vehicle 1.
  • In addition, as third information images, which are not augmented reality images (AR images) arranged so as to have a predetermined positional relationship with a real object 310 in the foreground 300 of the vehicle 1, a road information image 231, which is an illustration including the notation "80" indicating the speed limit, and a speed image 232, which indicates the speed of the host vehicle 1 as "35 km/h", are displayed.
  • The display area 100 includes a first display area 110 and a second display area 120 arranged vertically below the first display area 110 (in the Y-axis negative direction) when the front is viewed from the driver's seat of the vehicle 1. The first information image 210 and the second information image 220, which are AR images, are displayed in the first display area 110, and the third information images 231 and 232 are displayed in the second display area 120.
  • When the driver 4 visually recognizes the navigation image 211, the processor 16 executes the instruction of step S14 in FIG. 3, and, as shown in FIG. 5B, the navigation image 211 (first information image 210) is hidden (an example of reduced visibility).
  • In place of the navigation image 211, which is an augmented reality image (AR image) whose displayed position is related to the position of the real object 310, a first navigation image 213 (an example of a related image) and a second navigation image 214 (an example of a related image), which are non-AR images whose displayed positions are not related to the position of the real object 310, are displayed in the second display area 120. The first navigation image 213 is a simplified image showing the approximate direction of the next fork, and the second navigation image 214 is text reading "200 m ahead" that indicates the distance to the next branch road.
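The replacement of the AR navigation image by related images in the second display area could be modeled, for instance, as a state update of this kind; the dictionary keys and contents are assumed for the sketch.

    def show_related_images(state: dict) -> None:
        state["first_display_area"]["navigation_211"]["visible"] = False  # reduce AR image
        state["second_display_area"]["nav_direction_213"] = {             # related image
            "visible": True, "content": "simplified direction of next fork"}
        state["second_display_area"]["nav_distance_214"] = {              # related image
            "visible": True, "content": "200 m ahead"}

    state = {"first_display_area": {"navigation_211": {"visible": True}},
             "second_display_area": {}}
    show_related_images(state)
    print(sorted(state["second_display_area"]))  # -> ['nav_direction_213', 'nav_distance_214']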
  • When the inappropriate behavior is detected, the processor 16 increases the visibility of the navigation image 211, which is the first information image 210 whose visibility had been reduced. That is, the state shown in FIG. 5B is changed back to the state shown in FIG. 5A. This allows the driver 4 to recognize, for example, that the vehicle should not turn at the nearest intersection.
  • FIG. 6 is a flow diagram of a process for reducing the visibility of an image according to some embodiments.
  • The flow chart of FIG. 6 corresponds to the flow chart of FIG. 3, and steps S31, S32, and S33 of FIG. 6 correspond to steps S11, S12, and S13 of FIG. 3, respectively.
  • Only step S34, which is the point of difference from FIG. 3, will be described.
  • When it is determined in step S33 that the driver 4 visually recognizes the first information image 210, the processor 16 reduces the visibility of the visually recognized first information image 210 (step S34). Further, when it is determined in step S33 that the driver 4 visually recognizes the second information image 220, the processor 16 may change the visually recognized second information image 220 from a still image to a moving image, or may change both the visually recognized second information image 220 and other nearby second information images 220 from still images to moving images.
  • The processor 16 may also continuously change the number of the visually recognized second information image 220 and of other nearby second information images 220. Specifically, when the gaze position GZ is in the vicinity of the first FCW image 221, which is a second information image 220, the processor 16 may change the visually recognized first FCW image 221 and the nearby second FCW image 222 continuously and/or intermittently between a state in which the number of images is small and a state in which the number of images is large (FIGS. 7A and 7B).
  • The moving image is not particularly limited; examples include an image whose shape changes repeatedly and/or intermittently, an image whose number changes repeatedly, an image whose position changes repeatedly, an image that blinks repeatedly, and an image whose size changes repeatedly.
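One possible "moving image" treatment, sketched here as blinking plus size pulsing driven by a frame counter; the periods and amplitude are assumed values.

    import math

    def animate(frame: int, base_size: float, blink_period: int = 30,
                pulse_period: int = 60) -> tuple:
        """Return (visible, size) for one frame of a gazed-at second information image."""
        visible = (frame // (blink_period // 2)) % 2 == 0  # repeated blinking
        size = base_size * (1.0 + 0.2 * math.sin(2 * math.pi * frame / pulse_period))
        return visible, size                               # repeated size change

    for frame in (0, 15, 30):
        print(animate(frame, 10.0))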
  • As described above, the display control device 13 of the present embodiment controls the image display unit 11 that displays the image 200 in an area overlapping the foreground 300 when viewed from the driver 4 of the vehicle 1, and comprises one or more I/O interfaces 14, one or more processors 16, a memory 18, and one or more computer programs stored in the memory 18 and configured to be executed by the one or more processors 16. The one or more processors 16 execute instructions to obtain the line-of-sight direction of the driver 4 from the one or more I/O interfaces 14 and, based on the line-of-sight direction, to display a first information image 210 whose visibility is reduced when it is determined to be visually recognized and a second information image 220 whose visibility, when it is determined to be visually recognized, is not reduced as far as that of the first information image 210.
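Structurally, the composition recited above might be wired together as follows; the stub I/O and display classes exist only to make the sketch self-contained and are not part of the disclosed system.

    class DisplayControlDevice:
        def __init__(self, io_interface, image_display_unit):
            self.io = io_interface              # e.g., line-of-sight direction source
            self.display = image_display_unit   # image display unit 11
            self.images = []                    # displayed images 200

        def update(self):
            gaze = self.io.read_gaze_direction()
            for img in self.images:
                if self.display.is_gazed(img, gaze):
                    if img["kind"] == "first":
                        img["opacity"] = 0.0    # reduce visibility (here: hide)
                    # "second" images keep their visibility, or lose less of it
            self.display.render(self.images)

    class StubIO:
        def read_gaze_direction(self): return (0.0, 0.0)

    class StubDisplay:
        def is_gazed(self, img, gaze): return img["name"] == "navigation"
        def render(self, images): print([(i["name"], i["opacity"]) for i in images])

    device = DisplayControlDevice(StubIO(), StubDisplay())
    device.images = [{"name": "navigation", "kind": "first", "opacity": 1.0},
                     {"name": "fcw", "kind": "second", "opacity": 1.0}]
    device.update()   # navigation is hidden, the FCW image is untouched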
  • According to this configuration, the change in visibility after visual recognition differs depending on the type of image. For the first information image, the visibility is reduced, making the real scene in front of the driver's field of view easier to see; for the second information image, the visibility is not lowered as much, so the image remains easy to see even after it has been visually recognized.
  • In some embodiments, the degree of change in the visibility of the second information image 220 when the line of sight is directed at it may be determined according to the magnitude of the risk potential of the information indicated by the second information image 220.
  • Specifically, the processor 16 may decrease the degree of visibility reduction applied when the line of sight is directed as the risk potential of the information indicated by the second information image 220 becomes higher. Conversely, if the risk potential is low, the visibility of the second information image 220 when the line of sight is directed is reduced greatly (the degree of reduction is increased).
  • The processor 16 may change the degree of visibility reduction according to a risk potential predetermined according to the type of the second information image 220, or may change it according to a risk potential calculated, while the second information image 220 is being displayed, from information obtained from the I/O interface 14.
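A minimal sketch of the mapping described above, assuming a risk potential normalized to [0, 1]; the maximum-reduction value is an assumption.

    def reduction_degree(risk_potential: float, max_reduction: float = 0.9) -> float:
        """Map risk potential (0 = low, 1 = high) to a visibility-reduction degree."""
        risk = min(max(risk_potential, 0.0), 1.0)
        return max_reduction * (1.0 - risk)   # higher risk -> smaller reduction

    print(reduction_degree(0.1))  # low-risk information is strongly dimmed (about 0.81)
    print(reduction_degree(0.9))  # high-risk information is barely dimmed (about 0.09)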
  • In some embodiments, the processor 16 monitors the risk potential of the information indicated by the second information image 220 over a predetermined period. If the risk potential does not rise above a predetermined threshold value, does not increase significantly, or decreases, the processor 16 determines that it is not necessary to maintain the visibility as it is and may reduce the visibility of the second information image 220.
  • the second information image 220 may be displayed as a moving image when the line of sight turns toward the second information image 220.
  • Specifically, the processor 16 normally displays the second information image 220 as a still image and, when the line of sight is directed at it, turns part or all of the second information image 220 into a moving image. The driver learns that the first information image 210 loses visibility when visually recognized, whereas the second information image 220 becomes a moving image when visually recognized; the second information image 220 can thus be distinguished from the information indicated by the first information image 210, and attention can be directed to it.
  • The second information image 220 may be changed back to a still image after being displayed as a moving image for a certain period. Further, the processor 16 may reduce the visibility of the visually recognized second information image 220 at the same timing at which it is displayed as a moving image, or may reduce the visibility after the moving image has been displayed.
  • When a predetermined second information image 220 is visually recognized, that second information image 220 and other second information images 220 may be displayed as moving images. For example, when the driver 4 visually recognizes the second FCW image 222 displayed corresponding to the pedestrian 313, the second FCW image 222 and the first FCW image 221 displayed corresponding to the preceding vehicle 312 may both be made moving images. By displaying the other information images in the same manner, visual attention can be directed not only to the visually recognized image but also to similar images.
  • Alternatively, when a predetermined second information image 220 is visually recognized, that second information image 220 and another second information image 220 near it may be displayed as moving images. In that case, the predetermined second information image 220 and the other second information images 220 may be displayed as moving images at the same cycle. This makes it easy to identify where images similar to the visually recognized image are displayed.
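Animating the gazed-at image and its group at one shared cycle could look like the following sketch; the period and grouping keys are assumptions.

    def blink_phase(frame: int, period: int = 40) -> bool:
        return (frame % period) < period // 2

    def apply_group_animation(images, gazed_name, frame):
        gazed_group = next(i["group"] for i in images if i["name"] == gazed_name)
        on = blink_phase(frame)               # one shared cycle for the whole group
        for img in images:
            if img["group"] == gazed_group:
                img["visible"] = on           # in-phase blinking -> easy to associate

    imgs = [{"name": "fcw_pedestrian", "group": "fcw", "visible": True},
            {"name": "fcw_lead_vehicle", "group": "fcw", "visible": True}]
    apply_group_animation(imgs, "fcw_pedestrian", frame=25)
    print([(i["name"], i["visible"]) for i in imgs])  # both toggle together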
  • After the first information image 210 is displayed and its visibility is reduced, its visibility may be increased again (for example, by the process of FIG. 4). Also, related images (213, 214) related to the first information image 210 may be displayed in a second display area 102 different from the first display area 101 in which the first information image 210 or the second information image 220 is displayed. That is, the area of the foreground 300 overlapping the first information image 210 becomes easy to see, while the information indicated by the first information image 210 can still be confirmed in the separate second display area 102.
  • At this time, the processor 16 may reduce the visibility of one first information image 210 and newly display two or more related images. As a result, more information can be conveyed by the related images, and the loss of information recognition caused by the reduced visibility of the first information image 210, which is an AR image, can be suppressed.
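Trading one reduced AR image for several related images might be sketched as follows; the related-image names and contents are illustrative.

    def expand_to_related_images(ar_image: dict) -> list:
        ar_image["opacity"] = 0.0   # the single AR image loses visibility
        return [                    # two or more newly displayed related images
            {"name": ar_image["name"] + "_direction", "area": "second_display_area"},
            {"name": ar_image["name"] + "_distance", "area": "second_display_area"},
        ]

    related = expand_to_related_images({"name": "navigation", "opacity": 1.0})
    print([r["name"] for r in related])  # -> ['navigation_direction', 'navigation_distance']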
  • 231 ... Road information image (third information image), 232 ... Speed image (third information image), 300 ... Foreground, 310 ... Real object, 311 ... Road surface, 311a ... Marking line, 311b ... Marking line, 312 ... Preceding vehicle, 313 ... Pedestrian, 314 ... Building, 320 ... Lane, 401 ... Vehicle ECU, 403 ... Road information database, 405 ... Own-vehicle position detection unit, 407 ... Vehicle exterior sensor, 409 ... Line-of-sight direction detection unit, 411 ... Eye position detection unit, 413 ... Mobile information terminal, 420 ... External communication connection device, 502 ... Real object related information detection module, 504 ... Notification necessity detection module, 506 ... Image type determination module, 508 ... Image position determination module, 510 ... Image size determination module, 512 ... Eye position detection module, 514 ... Visual recognition detection module, 516 ... Behavior determination module, 518 ... Graphic module, GZ ... Gaze position


Abstract

The present invention makes the actual view in front of a driver's field of view, and an image displayed overlapping that view, more easily visible. A display control device that controls an image display unit for displaying an image in a region overlapping the foreground when viewed from the driver of a vehicle displays a first information image 210 and a second information image 220 on the image display unit, detects the line-of-sight direction of the driver of the vehicle, reduces the visibility of the first information image 210 when it is determined to be visually recognized, and, when the second information image 220 is determined to be visually recognized, keeps its visibility from being reduced as far as that of the first information image 210.

Description

Display control device, method, and computer program
The present disclosure relates to a display control device, a method, and a computer program that are used in a vehicle and superimpose an image on the foreground of the vehicle for visual recognition.
Patent Document 1 discloses a vehicle image display system that relatively enhances the perceptibility of images not perceived by the driver by reducing the conspicuousness of an image that coincides with the line of sight of the driver of the vehicle.
JP 2017-39373 A
However, while the vehicle image display system of Patent Document 1 can, by reducing the conspicuousness of a perceived image, make it relatively easier to direct visual attention to other images whose conspicuousness has not been reduced, there are also situations in which attention should be directed to an image even after it has been perceived, as well as a latent desire of the driver to pay attention to it; a display control device with further improved convenience is therefore desired.
A summary of specific embodiments disclosed herein is provided below. It should be understood that these aspects are presented merely to provide the reader with an overview of these particular embodiments and are not intended to limit the scope of this disclosure. Indeed, the present disclosure may encompass various aspects not described below.
The outline of the present disclosure relates to improving the visibility of the real scene in front of the driver's field of view and of the images displayed overlapping that scene. More specifically, it also relates to making information easier to convey to the driver while suppressing the visual stimulus of the images displayed overlapping the real scene.
Accordingly, the display control device described herein displays a first information image whose visibility is reduced when the line of sight is directed at it, and a second information image whose visibility is not reduced to the degree by which the visibility of the first information image is reduced when the line of sight is directed at it. In some embodiments, the degree of change in the visibility of an information image when the line of sight is directed at it may be determined according to the magnitude of the risk potential of the information indicated by that information image.
FIG. 1 is a diagram showing an application example of a vehicle display system according to some embodiments.
FIG. 2 is a block diagram of a vehicle display system according to some embodiments.
FIG. 3 is a flow diagram of a process for reducing the visibility of an image according to some embodiments.
FIG. 4 is a flow diagram of a process for increasing the visibility of an image according to some embodiments.
FIGS. 5A and 5B are diagrams showing examples of images displayed by the vehicle display system according to some embodiments.
FIG. 6 is a flow diagram of a process for reducing the visibility of an image according to some embodiments.
FIGS. 7A and 7B are diagrams showing examples of images displayed by the vehicle display system according to some embodiments.
In the following, FIGS. 1 and 2 provide a description of the configuration of an exemplary head-up display device. FIGS. 3, 4, and 6 describe exemplary display control process flows. FIGS. 5A, 5B, 7A, and 7B provide display examples. The present invention is not limited to the following embodiments (including the contents of the drawings). Of course, modifications (including deletion of components) may be made to the following embodiments. In the following description, descriptions of known technical matters are omitted as appropriate in order to facilitate understanding of the present invention.
Referring to FIG. 1, the image display unit 11 in the vehicle display system 10 is a head-up display (HUD: Head-Up Display) device provided in the dashboard 5 of the host vehicle 1. The HUD device emits display light 11a toward the front windshield 2 (an example of a projection target member) and displays the image 200 within a virtual display area 100, thereby causing the image 200 to be visually recognized superimposed on the foreground 300, which is the real space visually recognized through the front windshield 2.
The image display unit 11 may also be a head-mounted display (hereinafter, HMD) device. The driver 4 wears the HMD device on the head and, seated in the seat of the host vehicle 1, visually recognizes the displayed image 200 superimposed on the foreground 300 through the front windshield 2 of the host vehicle 1. The display area 100 in which the vehicle display system 10 displays the predetermined image 200 is fixed at a specific position relative to the coordinate system of the host vehicle 1; when the driver 4 faces in that direction, the driver 4 can visually recognize the image 200 displayed within the display area 100 fixed at that specific position.
Under the control of the display control device 13, the image display unit 11 displays the image 200 in the vicinity of a real object 310 existing in the foreground 300 (an example of the positional relationship between the image and the real object), at a position overlapping the real object 310 (another example of this positional relationship), or at a position set with reference to the real object 310 (a further example of this positional relationship); the foreground 300 is the real space (real scene) visually recognized through the front windshield 2 of the host vehicle 1, and the real objects 310 include obstacles (pedestrians, bicycles, motorcycles, other vehicles, etc.), road surfaces, road signs, and features (buildings, bridges, etc.). Visual augmented reality (AR: Augmented Reality) can thereby be formed. The image display unit 11 displays a first information image 210 (AR image) and a second information image 220 (AR image), which differ according to the type of information to be provided (described in detail later).
FIG. 2 is a block diagram of a vehicle display system 10 according to some embodiments. The vehicle display system 10 includes an image display unit 11 and a display control device 13 that controls the image display unit 11. The display control device 13 includes one or more I/O interfaces 14, one or more processors 16, one or more storage units 18, and one or more image processing circuits 20. The various functional blocks depicted in FIG. 2 may be implemented in hardware, software, or a combination of both. FIG. 2 is only one embodiment of an implementation, and the illustrated components may be combined into fewer components or there may be additional components. For example, the image processing circuit 20 (e.g., a graphics processing unit) may be included in one or more of the processors 16.
As illustrated, the processor 16 and the image processing circuit 20 are operably connected to the storage unit 18. More specifically, by executing programs stored in the storage unit 18, the processor 16 and the image processing circuit 20 can perform operations of the vehicle display system 10, such as generating and/or transmitting image data. The processor 16 and/or the image processing circuit 20 may include at least one general-purpose microprocessor (e.g., a central processing unit (CPU)), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array (FPGA), or any combination thereof. The storage unit 18 includes any type of magnetic medium such as a hard disk, any type of optical medium such as a CD or DVD, any type of semiconductor memory such as volatile memory, and non-volatile memory. Volatile memory may include DRAM and SRAM, and non-volatile memory may include ROM and NVROM.
As illustrated, the processor 16 is operably connected to the I/O interface 14. For example, the I/O interface 14 can include a wireless communication interface that connects the vehicle display system 10 to a personal area network (PAN) such as a Bluetooth (registered trademark) network, a local area network (LAN) such as an 802.11x Wi-Fi (registered trademark) network, or a wide area network (WAN) such as a 4G or LTE (registered trademark) cellular network. The I/O interface 14 may also include a wired communication interface such as a USB port, a serial port, a parallel port, OBDII, and/or any other suitable wired communication port.
As illustrated, the processor 16 is interoperably connected to the I/O interface 14 and can thereby exchange information with various other electronic devices connected to the vehicle display system 10 (the I/O interface 14). The I/O interface 14 is operably connected to, for example, a vehicle ECU 401 provided in the host vehicle 1, a road information database 403, an own-vehicle position detection unit 405, a vehicle exterior sensor 407, a line-of-sight direction detection unit 409, an eye position detection unit 411, a mobile information terminal 413, and an external communication connection device 420. The image display unit 11 is operably connected to the processor 16 and the image processing circuit 20. Therefore, the image displayed by the image display unit 11 may be based on image data received from the processor 16 and/or the image processing circuit 20. The processor 16 and the image processing circuit 20 control the image displayed by the image display unit 11 based on information obtained from the I/O interface 14. The I/O interface 14 may include a function of processing (converting, computing, analyzing) information received from the other electronic devices connected to the vehicle display system 10.
The host vehicle 1 includes a vehicle ECU 401 that detects the state of the host vehicle 1 (for example, mileage, vehicle speed, accelerator pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, and various warning states). The vehicle ECU 401 controls each unit of the host vehicle 1 and can, for example, transmit vehicle speed information indicating the current vehicle speed of the host vehicle 1 to the processor 16. In addition to, or instead of, simply transmitting data detected by sensors to the processor 16, the vehicle ECU 401 can transmit determination results and/or analysis results based on the sensor data to the processor 16; for example, it may transmit information indicating whether the host vehicle 1 is traveling at low speed or is stopped. The vehicle ECU 401 may also transmit to the I/O interface 14 an instruction signal specifying the image 200 to be displayed by the vehicle display system 10, and at that time may add to the instruction signal the coordinates of the image 200, the notification necessity degree of the image 200, and/or necessity-degree-related information from which the notification necessity degree is determined.
The host vehicle 1 may include a road information database 403 comprising a navigation system or the like. Based on the position of the host vehicle 1 acquired from the own-vehicle position detection unit 405 described later, the road information database 403 may read out and transmit to the processor 16 the presence or absence, position (including the distance to the host vehicle 1), direction, shape, type, detailed information, and so on of road information for the road on which the host vehicle 1 travels (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, branch roads, traffic regulations, etc.) and of feature information (buildings, bridges, rivers, etc.), each an example of real object related information. The road information database 403 may also calculate an appropriate route from the departure point to the destination and transmit it to the processor 16 as navigation information.
The host vehicle 1 may include an own-vehicle position detection unit 405 comprising a GNSS (Global Navigation Satellite System) receiver or the like. The road information database 403, the mobile information terminal 413 described later, and/or the external communication connection device 420 can acquire the position information of the host vehicle 1 from the own-vehicle position detection unit 405 continuously, intermittently, or at every predetermined event, and can thereby select and generate information about the surroundings of the host vehicle 1 and transmit it to the processor 16.
The host vehicle 1 may include one or more vehicle exterior sensors 407 that detect real objects existing around the host vehicle 1 (particularly in the foreground 300 in this embodiment). The real objects detected by the vehicle exterior sensor 407 may include, for example, pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles, etc.), road surfaces, marking lines, roadside objects, and/or features (buildings, etc.). Examples of vehicle exterior sensors include radar sensors such as millimeter-wave radar, ultrasonic radar, and laser radar, and camera sensors comprising a camera and an image processing device; the sensors may be configured as a combination of both radar and camera sensors or as only one of them. For object detection by these radar and camera sensors, conventional well-known methods are applied. Object detection by these sensors may determine the presence or absence of a real object in three-dimensional space and, when a real object exists, its position (the relative distance from the host vehicle 1 and, with the traveling direction of the host vehicle 1 taken as the front-rear direction, the left-right position, the up-down position, etc.), size (in the lateral (left-right) and height (up-down) directions, etc.), movement direction (lateral (left-right) and depth (front-rear)), movement speed (lateral (left-right) and depth (front-rear)), and/or type. The one or more vehicle exterior sensors 407 can detect real objects in front of the host vehicle 1 in each detection cycle and transmit real object related information (the presence or absence of a real object and, when one exists, information such as the position, size, and/or type of each real object) to the processor 16. This real object related information may also be transmitted to the processor 16 via another device (for example, the vehicle ECU 401). When a camera is used as a sensor, an infrared or near-infrared camera is desirable so that real objects can be detected even when the surroundings are dark, such as at night, and a stereo camera, which can also acquire distance and the like by parallax, is desirable.
The host vehicle 1 may include a line-of-sight direction detection unit 409, comprising an infrared camera or the like that images the face of the driver 4, for detecting the gaze direction of the driver 4 (hereinafter also referred to as the "line-of-sight direction"). The processor 16 can acquire an image captured by the infrared camera (an example of information from which the line-of-sight direction can be estimated) and identify the line-of-sight direction of the driver 4 by analyzing the captured image. Alternatively, the processor 16 may acquire from the I/O interface 14 the line-of-sight direction of the driver 4 identified from the infrared camera image by the line-of-sight direction detection unit 409 (or another analysis unit). The method of acquiring the line-of-sight direction of the driver 4 of the host vehicle 1, or information from which it can be estimated, is not limited to these; it may be acquired using other known line-of-sight direction detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflection method, the scleral reflection method, the Purkinje image detection method, the search coil method, or the infrared fundus camera method.
The host vehicle 1 may include an eye position detection unit 411 comprising an infrared camera or the like that detects the position of the eyes of the driver 4. The processor 16 can acquire an image captured by the infrared camera (an example of information from which the eye positions can be estimated) and identify the eye positions of the driver 4 by analyzing the captured image. Alternatively, the processor 16 may acquire from the I/O interface 14 the eye position information of the driver 4 identified from the infrared camera image. The method of acquiring the eye positions of the driver 4 of the host vehicle 1, or information from which they can be estimated, is not limited to these and may use known eye position detection (estimation) techniques. By adjusting at least the position of the image 200 based on the eye positions of the driver 4, the processor 16 may cause the viewer whose eye positions were detected (the driver 4) to visually recognize the image 200 superimposed at a desired position in the foreground 300.
The mobile information terminal 413 is a smartphone, a laptop computer, a smartwatch, or another information device that can be carried by the driver 4 (or another occupant of the host vehicle 1). By pairing with the mobile information terminal 413, the I/O interface 14 can communicate with it and acquire data recorded in the mobile information terminal 413 (or in a server accessed through it). The mobile information terminal 413 may, for example, have the same functions as the road information database 403 and the own-vehicle position detection unit 405 described above, acquire the road information (an example of real object related information), and transmit it to the processor 16. The mobile information terminal 413 may also acquire commercial information (an example of real object related information) related to commercial facilities in the vicinity of the host vehicle 1 and transmit it to the processor 16. The mobile information terminal 413 may further transmit schedule information of its owner (for example, the driver 4), incoming call information, mail reception information, and the like to the processor 16, and the processor 16 and the image processing circuit 20 may generate and/or transmit image data related to these.
The external communication connection device 420 is a communication device that exchanges information with the host vehicle 1: for example, other vehicles connected to the host vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle To Vehicle), pedestrians (mobile information terminals carried by pedestrians) connected by pedestrian-to-vehicle communication (V2P: Vehicle To Pedestrian), and network communication devices connected by road-to-vehicle communication (V2I: Vehicle To roadside Infrastructure); in a broad sense, it includes everything connected by communication with the host vehicle 1 (V2X: Vehicle To Everything). The external communication connection device 420 may acquire the positions of, for example, pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles, etc.), road surfaces, marking lines, roadside objects, and/or features (buildings, etc.) and transmit them to the processor 16. The external communication connection device 420 may also have the same function as the own-vehicle position detection unit 405 described above, acquiring the position information of the host vehicle 1 and transmitting it to the processor 16, and may further have the function of the road information database 403 described above, acquiring the road information (an example of real object related information) and transmitting it to the processor 16. The information acquired from the external communication connection device 420 is not limited to the above.
The software components stored in the storage unit 18 include a real object related information detection module 502, a notification necessity detection module 504, an image type determination module 506, an image position determination module 508, an image size determination module 510, an eye position detection module 512, a visual recognition detection module 514, a behavior determination module 516, and a graphic module 518.
The real object related information detection module 502 detects the position and size of real objects existing in the foreground 300 of the host vehicle 1, which serve as the basis for determining the coordinates and size of the image 200 described later. The real object related information detection module 502 may acquire, for example from the road information database 403, the vehicle exterior sensor 407, or the external communication connection device 420, the positions of real objects 310 existing in the foreground 300 of the host vehicle 1 (for example, the road surface 311, the preceding vehicle 312, the pedestrian 313, and the building 314 shown in FIG. 5A), namely the position in the height direction (up-down direction) and lateral direction (left-right direction) when the driver 4 of the host vehicle 1 views the traveling direction (forward) of the host vehicle 1 from the driver's seat, to which the position in the depth direction (forward direction) may be added, as well as the sizes of the real objects 310 (sizes in the height and lateral directions).
When detecting the position and size of a real object with the vehicle exterior sensor 407, the real object related information detection module 502 may also determine whether the environment is one in which detection accuracy decreases (bad weather such as rain, fog, or snow). For example, it may calculate the position detection accuracy of the real object and transmit to the processor 16 the degree of decrease in detection accuracy, the position where detection accuracy is decreased, and the determination result as to whether the environment is one (bad weather) in which detection accuracy decreases. Bad weather may also be determined by acquiring weather information for the position where the host vehicle 1 is traveling from the mobile information terminal 413, the external communication connection device 420, or the like.
The real object related information detection module 502 may also detect information about real objects existing in the foreground 300 of the host vehicle 1 (real object related information), which serves as the basis for determining the content of the image 200 described later (hereinafter also referred to as the "image type" where appropriate). The real object related information is, for example, type information indicating the type of the real object (such as a pedestrian or another vehicle), movement direction information indicating the direction in which the real object is moving, distance/time information indicating the distance or arrival time to the real object, or individual detailed information about the real object such as the fee of a parking lot (a real object), though it is not limited to these. For example, the real object related information detection module 502 may acquire type information, distance/time information, and/or individual detailed information from the road information database 403 or the mobile information terminal 413, acquire type information, movement direction information, and/or distance/time information from the vehicle exterior sensor 407, and detect type information, movement direction information, distance/time information, and/or individual detailed information from the external communication connection device 420.
The notification necessity detection module 504 detects the degree of necessity (notification necessity degree) of notifying the driver 4 of the real object related information detected by the real object related information detection module 502, together with the position information of the real object detected by that module. The notification necessity detection module 504 may detect the notification necessity degree from various other electronic devices connected to the I/O interface 14. Alternatively, an electronic device connected to the I/O interface 14 in FIG. 2 may transmit information to the vehicle ECU 401, and the notification necessity detection module 504 may detect (acquire) the notification necessity degree determined by the vehicle ECU 401 based on the received information. The "notification necessity degree" can be determined by, for example, a danger degree derived from the seriousness of what can occur, an urgency degree derived from the length of the reaction time required before a reactive action must be taken, an effectiveness degree derived from the situation of the host vehicle 1 or the driver 4 (or other occupants of the host vehicle 1), or a combination of these (the indicators of the notification necessity degree are not limited to these).
The notification necessity detection module 504 may also detect necessity-degree-related information from which the notification necessity degree of the image 200 is estimated, and may estimate the notification necessity degree from it. The necessity-degree-related information from which the notification necessity degree of the image 200 is estimated may be estimated from, for example, the position and type of a real object or of a traffic regulation (an example of road information), and may be estimated based on, or taking into account, other information input from the various other electronic devices connected to the I/O interface 14. The vehicle display system 10 need not have the function of estimating the notification necessity degree, and part or all of that function may be provided separately from the display control device 13 of the vehicle display system 10.
The image type determination module 506 can determine the type of the image 200 to be displayed for a real object based on, for example, the type and position of the real object detected by the real object related information detection module 502, the type and number of pieces of real object related information detected by that module, and/or the magnitude of the notification necessity degree detected (estimated) by the notification necessity detection module 504. The image type determination module 506 may also increase or decrease the types of images 200 to be displayed according to the determination result of the visual recognition detection module 514 described later. Specifically, when the real object 310 is in a state where it is difficult for the driver 4 to visually recognize it, the types of images 200 visually recognized by the driver 4 in the vicinity of the real object may be increased.
The image position determination module 508 determines the coordinates of the image 200 (including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) when the driver 4 views the direction of the display area 100 from the driver's seat of the host vehicle 1) based on the position of the real object detected by the real object related information detection module 502. When displaying an image 200 related to a specific real object, the image position determination module 508 determines the coordinates of the image 200 so as to have a predetermined positional relationship with that specific real object; for example, the left-right and up-down positions of the image 200 are determined so that the center of the image 200 is visually recognized as overlapping the center of the real object. The image position determination module 508 can also determine the coordinates of the image 200 so as to have a predetermined positional relationship with reference to a real object to which the image 200 is not directly related. For example, as shown in FIG. 5A, the coordinates of the first FCW image 221 described later, which is related to the preceding vehicle 312 (an example of a specific real object), may be determined (or corrected) with reference to the left marking line 311a (an example of a real object) and the right marking line 311b (an example of a real object) of the lane (road surface 311) in which the host vehicle 1 travels, even though the marking lines are not directly related to that image. The "predetermined positional relationship" can be adjusted according to the situation of the real object or the host vehicle 1, the type of the real object, the type of the displayed image, and so on.
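As a rough illustration of the placement rule above (center on the object, optionally corrected against the marking lines), with all coordinate values assumed:

    def place_image(obj_x, obj_y, lane_left_x=None, lane_right_x=None,
                    img_half_width=1.0):
        x, y = obj_x, obj_y                    # center the image on the real object
        if lane_left_x is not None and lane_right_x is not None:
            x = min(max(x, lane_left_x + img_half_width),   # clamp between the
                    lane_right_x - img_half_width)          # marking lines
        return x, y

    print(place_image(obj_x=2.4, obj_y=-1.0, lane_left_x=-1.8, lane_right_x=1.8))
    # -> (0.8, -1.0): the image is pulled back between the marking lines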
The image size determination module 510 can determine the size of the image 200 based on the type and position of the real object with which the image 200 detected by the real object related information detection module 502 is displayed in association, the type and number of pieces of real object related information detected by that module, and/or the magnitude of the notification necessity degree detected (estimated) by the notification necessity detection module 504. The image size determination module 510 can also change the image size according to the number of types of images 200; for example, the image size may be made smaller as the number of types of images 200 increases.
The eye position detection module 512 detects the positions of the eyes of the driver 4 of the host vehicle 1. The eye position detection module 512 includes various software components for performing various operations related to determining in which of several height regions the eye height of the driver 4 lies, detecting the eye height of the driver 4 (position in the Y-axis direction), detecting the eye height (position in the Y-axis direction) and depth-direction position (position in the Z-axis direction) of the driver 4, and/or detecting the eye positions of the driver 4 (positions in the X, Y, and Z-axis directions). The eye position detection module 512 may, for example, acquire the eye positions of the driver 4 from the eye position detection unit 411, or may receive from the eye position detection unit 411 information from which the eye positions, including the eye height, of the driver 4 can be estimated and estimate them. The information from which the eye positions can be estimated may be, for example, the position of the driver's seat of the host vehicle 1, the position of the face of the driver 4, the sitting height, or values input by the driver 4 on an operation unit (not shown).
The visual recognition detection module 514 detects whether the driver 4 of the host vehicle 1 has visually recognized a predetermined image 200. The visual recognition detection module 514 includes various software components for performing various operations related to determining whether the driver 4 has visually recognized the predetermined image 200 and whether the driver 4 has visually recognized the periphery (vicinity) of the predetermined image 200. The visual recognition detection module 514 may compare the gaze position GZ of the driver 4 (described later), acquired from the line-of-sight direction detection unit 409, with the position of the image 200 acquired from the graphic module 518 to determine whether the driver 4 has visually recognized the image 200, and may transmit to the processor 16 the determination result as to whether the image is being visually recognized and information identifying the visually recognized image 200.
 In addition, in order to determine whether the driver 4 has visually recognized the periphery (vicinity) of a given image 200, the visual recognition detection module 514 may set a region of a predetermined width extending outward from the outer edge of the image 200 as the periphery of the image 200, and may determine that the driver 4 of the host vehicle 1 has visually recognized the given image 200 when the gaze position GZ (described later) enters this periphery. The visual recognition determination is not limited to these means.
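 A minimal Python sketch of the periphery test described above, assuming axis-aligned image bounds in display coordinates; the Rect type and its field names are hypothetical. With margin = 0 this is the direct on-image test, while a positive margin implements the predetermined periphery width.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        # Axis-aligned bounds of an image 200 in display coordinates
        # (X: left-right, Y: up-down).
        x_min: float
        y_min: float
        x_max: float
        y_max: float

    def gazed_at(gaze_x: float, gaze_y: float, image: Rect, margin: float = 0.0) -> bool:
        # True when the gaze position GZ falls inside the image bounds
        # expanded outward by `margin` (the periphery of the image 200).
        return (image.x_min - margin <= gaze_x <= image.x_max + margin
                and image.y_min - margin <= gaze_y <= image.y_max + margin)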
 The visual recognition detection module 514 may also detect what the driver 4 is visually recognizing other than the image 200. For example, the visual recognition detection module 514 may compare the position of a real object 310 present in the foreground 300 of the host vehicle 1, as detected by the real object related information detection module 502, with the gaze position GZ of the driver 4 (described later) acquired from the line-of-sight direction detection unit 409, thereby identifying the real object 310 being gazed at, and may transmit information identifying the visually recognized real object 310 to the processor 16.
 The behavior determination module 516 detects behavior of the driver 4 that is inappropriate with respect to the information indicated by a first information image 210 (described later). For some first information images 210, the corresponding inappropriate behavior of the driver 4 is stored in the storage unit 18 in association with each image. In particular, the behavior determination module 516 determines whether inappropriate behavior of the driver 4 associated with a first information image 210 whose visibility has been reduced has been detected. For example, when the first information image 210 includes route guidance information and the driver 4 is gazing at a branch road different from the direction indicated by the route guidance, it may be determined that inappropriate behavior of the driver 4 has been detected. Likewise, when the first information image 210 includes traffic regulation information and behavior that is likely to violate that regulation is performed, it may be determined that inappropriate behavior of the driver 4 has been detected. The information collected by the behavior determination module 516 in order to determine behavior includes, but is not limited to, the state of the host vehicle 1 input from the vehicle ECU 401 (for example, mileage, vehicle speed, accelerator pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, and various warning states) and the line-of-sight direction input from the line-of-sight direction detection unit 409.
 The graphic module 518 includes various known software components for changing the visual effect (for example, luminance, transparency, saturation, contrast, or other visual characteristics), size, display position, and display distance (distance from the driver 4 to the image 200) of the displayed image 200. The graphic module 518 displays the image 200 so that it is visually recognized by the driver 4 with the type set by the image type determination module 506, at the coordinates set by the image position determination module 508 (including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) as seen when the driver 4 looks toward the display area 100 from the driver's seat of the host vehicle 1), and with the image size set by the image size determination module 510.
 The graphic module 518 displays at least a first information image 210 and a second information image 220, which are augmented reality (AR) images arranged so as to have a predetermined positional relationship with a real object 310 in the foreground 300 of the host vehicle 1. When the visual recognition detection module 514 determines that the first information image 210 has been visually recognized, its visibility is reduced (including being hidden). On the other hand, when the visual recognition detection module 514 determines that the second information image 220 has been visually recognized, its visibility is not reduced as much as that of the first information image 210 (this includes reducing the visibility to a lesser degree than the reduction applied to the first information image 210, leaving the visibility unchanged, or even increasing the visibility).
 "Reducing visibility" may include lowering luminance, increasing transparency, lowering saturation, lowering contrast, reducing size, reducing the number of image types, combinations of these, or combinations of these with other factors. Conversely, "increasing visibility" may include raising luminance, decreasing transparency, raising saturation, raising contrast, increasing size, increasing the number of image types, combinations of these, or combinations of these with other factors.
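 As one possible reading of these definitions, reducing or increasing visibility can be modeled as scaling a set of visual properties by a single factor, as in the Python sketch below; the property names are hypothetical. A factor of 0.0 corresponds to hiding the image, and a factor above 1.0 (clamped to the display's limits) would model an increase in visibility.

    def scale_visibility(props: dict, factor: float) -> dict:
        # factor < 1.0 reduces visibility; factor > 1.0 increases it.
        out = dict(props)
        out["luminance"] = props["luminance"] * factor    # brightness
        out["saturation"] = props["saturation"] * factor  # color saturation
        out["contrast"] = props["contrast"] * factor      # contrast
        out["opacity"] = props["opacity"] * factor        # lower opacity = higher transparency
        return out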
 The first information image 210 conveys information whose degree of danger, derived from the seriousness of the situation that could occur, is relatively low. Examples include an arrow image superimposed on the road surface to indicate a route (an example of navigation information), a text image of the destination (an example of navigation information), an image indicating the distance to the next turning point (an example of navigation information), a POI (Point of Interest) image pointing to a store or facility present in the foreground 300 (an example of feature information), images relating to road signs (guide signs, warning signs, regulatory signs, instruction signs, and auxiliary signs), and an ACC (Adaptive Cruise Control) image that displays, superimposed on the road surface, the inter-vehicle distance set when following a preceding vehicle.
 The second information image 220 conveys information whose degree of danger, derived from the seriousness of the situation that could occur, is relatively high; an example is a Forward Collision Warning (FCW) image visually recognized in the vicinity of an obstacle present in the foreground 300 of the host vehicle 1.
 FIG. 3 is a flow diagram of a process for reducing the visibility of an image according to some embodiments. First, the line-of-sight direction (the gaze position GZ described later) of the driver 4 of the host vehicle 1 is acquired (step S11), and the position of the displayed image 200 is acquired (step S12).
 Next, the processor 16 compares the line-of-sight direction acquired in step S11 with the position of the image 200 acquired in step S12 to identify the target visually recognized by the driver 4 (step S13). Specifically, the processor 16 determines whether the first information image 210 is being visually recognized, whether the second information image 220 is being visually recognized, or whether neither the first information image 210 nor the second information image 220 is being visually recognized. If the first information image 210 is being visually recognized, the processor 16 identifies which of the first information images 210 is being visually recognized.
 When the processor 16 determines in step S13 that the driver 4 has visually recognized the first information image 210, it reduces the visibility of the visually recognized first information image 210 (step S14). At this time, the processor 16 may hide the first information image 210. When the processor 16 determines in step S13 that the driver 4 has visually recognized the second information image 220, it does not reduce the visibility of the second information image 220, or reduces it to a lesser degree than the reduction applied to the first information image 210.
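 One pass through the FIG. 3 flow might look like the Python sketch below, reusing the gazed_at helper sketched earlier; the image objects, their fields, and the two reduction constants are assumptions, and hiding the first information image entirely is only one of the permitted options.

    FIRST_FACTOR = 0.0    # hide the gazed-at first information image (one option)
    SECOND_FACTOR = 0.9   # reduce the second information image far less, if at all

    def on_gaze_sample(gaze, first_images, second_images):
        # Steps S11/S12: gaze position and image positions already acquired.
        for img in first_images:                    # step S13
            if gazed_at(gaze.x, gaze.y, img.bounds):
                img.visibility = FIRST_FACTOR       # step S14
                return img
        for img in second_images:
            if gazed_at(gaze.x, gaze.y, img.bounds):
                img.visibility *= SECOND_FACTOR     # lesser (or no) reduction
                return img
        return None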
 FIG. 4 is a flow diagram of a process for increasing the visibility of the first information image according to some embodiments. The process starts when the visibility of at least one first information image 210 has been reduced by the processing of step S14.
 First, the processor 16 acquires the behavior of the driver 4 (step S21) and determines whether behavior of the driver 4 that is inappropriate with respect to the information indicated by the first information image 210 whose visibility was reduced in step S14 has been detected (step S22). When it is determined that inappropriate behavior of the driver 4 has occurred, the processor 16 increases the visibility of the first information image 210 whose visibility had been reduced (step S23).
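 A minimal Python sketch of the FIG. 4 flow follows; the is_inappropriate_for predicate stands in for the stored image-to-behavior associations described for the behavior determination module 516 and is hypothetical.

    def on_behavior_sample(behavior, dimmed_first_images, is_inappropriate_for):
        # Step S21: `behavior` has been acquired from the vehicle and gaze inputs.
        for img in dimmed_first_images:                  # images dimmed in step S14
            if is_inappropriate_for(behavior, img.info): # step S22
                img.visibility = 1.0                     # step S23: restore visibility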
 The operations of the processing described above can be implemented by executing one or more functional modules of an information processing device such as a general-purpose processor or an application-specific chip. These modules, combinations of these modules, and/or combinations with general hardware capable of substituting for their functions are all included within the scope of protection of the present invention.
 The functional blocks of the vehicular display system 10 are optionally implemented by hardware, software, or a combination of hardware and software in order to carry out the principles of the various described embodiments. Those skilled in the art will understand that the functional blocks described in FIG. 2 may optionally be combined, or that one functional block may be separated into two or more sub-blocks, in order to implement the principles of the described embodiments. Accordingly, the description herein optionally supports any possible combination or division of the functional blocks described herein.
 FIGS. 5A and 5B are diagrams showing changes in the image 200 displayed by the vehicular display system 10 when it is determined that the first information image 210 has been visually recognized. In FIG. 5A, the first information image 210 includes a navigation image 211, visually recognized as superimposed on the road surface 311, that indicates the guide route, and a POI image 212, an illustration containing the letter "P" indicating a parking lot, that points to a building 314 (a real object 310). The second information image 220 includes a first FCW image 221 visually recognized as a line on the road surface 311 behind a preceding vehicle 312 traveling ahead of the host vehicle 1, and a second FCW image 222 visually recognized as an arc on the road surface 311 around a pedestrian 313 walking on the sidewalk on the opposite-lane side of the traveling lane of the host vehicle 1. In addition, as third information images that are not augmented reality (AR) images arranged in a predetermined positional relationship with a real object 310 in the foreground 300 of the host vehicle 1, a road information image 231, an illustration containing the notation "80" indicating the speed limit, and a speed image 232, reading "35 km/h" to indicate the speed of the host vehicle 1, are displayed. The display area 100 includes a first display area 110 and a second display area 120 arranged below the first display area 110 in the vertical direction (negative Y-axis direction) as seen looking forward from the driver's seat of the host vehicle 1; the first information image 210 and the second information image 220, which are AR images, are displayed in the first display area 110, and the third information images 231 and 232 are displayed in the second display area 120.
 As shown in FIG. 5A, when the gaze position GZ is on the navigation image 211, which is a first information image 210, the processor 16 executes the instruction of step S14 in FIG. 3 and, as shown in FIG. 5B, hides the visually recognized navigation image 211 (first information image 210) (an example of reducing visibility). At this time, in place of the navigation image 211, which is an augmented reality (AR) image whose display position is related to the position of the real object 310, a first navigation image 213 (an example of a related image) and a second navigation image 214 (an example of a related image), which are non-AR images whose display positions are unrelated to the position of the real object 310, are displayed in the second display area 120. The first navigation image 213 is a simplified image showing the approximate direction of the next fork. The second navigation image 214 is text reading "200 m ahead," indicating the distance to the next fork.
 In the situation shown in FIG. 5B, where the visibility of the navigation image 211 (the first information image 210) has been reduced, if the gaze position GZ is on the branch road of the nearest intersection, it can be inferred that the driver 4 intends to turn at the nearest intersection. If the branch road presented by the navigation image 211 is not the branch road of the nearest intersection, the behavior of the driver 4 from which an intention to turn at the nearest intersection can be inferred (gazing at the branch road of the nearest intersection) is inappropriate behavior. Therefore, when such inappropriate behavior is detected, the processor 16 increases the visibility of the navigation image 211, the first information image 210 whose visibility had been reduced; that is, the state shown in FIG. 5B is changed back to the state shown in FIG. 5A. This allows the driver 4 to recognize that the vehicle should not turn at the nearest intersection.
 FIG. 6 is a flow diagram of a process for reducing the visibility of an image according to some embodiments. The flow diagram of FIG. 6 corresponds to that of FIG. 3, and steps S31, S32, and S33 of FIG. 6 correspond to steps S11, S12, and S13 of FIG. 3, respectively. Here, step S34, the point of difference from FIG. 3, is described.
 When the processor 16 determines in step S33 that the driver 4 has visually recognized the first information image 210, it reduces the visibility of the visually recognized first information image 210. When the processor 16 determines in step S33 that the driver 4 has visually recognized the second information image 220, it may change the visually recognized second information image 220 from a still image to a moving image, or may change both the visually recognized second information image 220 and other nearby second information images 220 from still images to moving images (step S34).
 In some embodiments, the processor 16 may continuously vary the number of elements of the visually recognized second information image 220 and of other nearby second information images 220. Specifically, when the gaze position GZ is in the vicinity of the first FCW image 221, which is a second information image 220, the processor 16 may vary the visually recognized first FCW image 221 and the nearby second FCW image 222 continuously and/or intermittently between a state with a small number of image elements, shown in FIG. 7A, and a state with a large number of image elements, shown in FIG. 7B. A moving image is not specifically defined here, but may include, continuously and/or intermittently, the shape of the image changing repeatedly, the number of image elements changing repeatedly, the position of the image changing repeatedly, the image blinking repeatedly, or the size changing repeatedly.
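 One way to realize the continuous variation between the FIG. 7A and FIG. 7B states is to drive the element count with a periodic phase, as in the Python sketch below; the bounds and period are illustrative values, not disclosed ones.

    import math

    def fcw_element_count(t: float, n_min: int = 2, n_max: int = 6,
                          period_s: float = 1.0) -> int:
        # Oscillates between the sparse state (FIG. 7A) and the dense state
        # (FIG. 7B) with a period of `period_s` seconds.
        phase = 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period_s))
        return n_min + round(phase * (n_max - n_min))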
 As described above, the display control device of the present embodiment is a display control device 13 that controls an image display unit 11 (12) that displays an image 200 in a region overlapping the foreground 300 as seen from the driver 4 of the host vehicle 1, and comprises one or more I/O interfaces 14, one or more processors 16, a memory 18, and one or more computer programs stored in the memory 18 and configured to be executed by the one or more processors 16. The one or more processors 16 execute instructions to acquire the line-of-sight direction of the driver 4 from the one or more I/O interfaces 14 and, based on the line-of-sight direction, display a first information image 210 whose visibility is reduced when it is determined to have been visually recognized, and a second information image 220 whose visibility is not reduced as much as that of the first information image 210 when it is determined to have been visually recognized. As a result, the change in visibility after visual recognition differs depending on the type of image: for the first information image, the reduced visibility makes the real scene ahead of the driver 4's field of view easier to see, while for the second information image, the visibility is not reduced as much, so the image remains easy to see even after it has been visually recognized.
 In some embodiments, the degree of change in the visibility of the second information image 220 when the line of sight is directed at it may be determined according to the magnitude of the risk potential of the information indicated by the second information image 220. The processor 16 may make the degree of reduction in visibility when the line of sight is directed smaller as the risk potential of the information indicated by the second information image 220 is higher; conversely, if the risk potential is low, the visibility of the second information image 220 when the line of sight is directed is reduced substantially (the degree of reduction is increased). The processor 16 may vary the degree of reduction in visibility according to a risk potential predetermined for each type of second information image 220, or according to a risk potential calculated from the situation in which the second information image 220 is displayed (information obtained from the I/O interface 14).
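 Assuming a risk potential normalized to 0.0-1.0, the relation "higher risk, smaller reduction" could be as simple as the linear Python sketch below; the normalization and the floor value are assumptions.

    def visibility_when_gazed(risk: float, floor: float = 0.3) -> float:
        # Visibility factor to keep when the second information image is
        # gazed at: risk 1.0 keeps full visibility, risk 0.0 dims to `floor`
        # (a large reduction).
        risk = max(0.0, min(1.0, risk))
        return floor + (1.0 - floor) * risk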
 In some embodiments, when the risk potential of the information indicated by the second information image 220 does not increase after the line of sight has been directed at the second information image 220, or after the line of sight has been directed at it and its visibility reduced, the visibility of the second information image 220 may be reduced. The processor 16 may monitor the risk potential of the information indicated by the second information image 220 for a predetermined period after the line of sight has been directed at it, or after its visibility was reduced, and when the risk potential does not exceed a defined threshold, does not rise significantly, or falls, may determine that there is little need to maintain the visibility as is and reduce the visibility of the second information image 220.
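 A minimal Python sketch of this monitoring step, assuming the risk potential is sampled over the predetermined period and compared against a fixed threshold; the sampling mechanism and the dimming factor are assumptions.

    def reduce_if_risk_stays_low(img, risk_samples, threshold: float,
                                 dim_factor: float = 0.5) -> None:
        # If no sample over the monitored window exceeded the threshold, the
        # need to keep the image prominent is judged low, so dim it.
        if risk_samples and all(r <= threshold for r in risk_samples):
            img.visibility *= dim_factor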
 In some embodiments, when the line of sight is directed at the second information image 220, the second information image 220 may be displayed as a moving image. The processor 16 normally displays the second information image 220 as a still image and, when the line of sight is directed at it, turns part or all of the second information image 220 into a moving image. The first information image 210 loses visibility when visually recognized, whereas the second information image 220 becomes a moving image when visually recognized; this allows the driver to recognize that it conveys information different from that of the first information image 210 and directs attention to the second information image 220. The second information image 220 may be changed back to a still image after being displayed as a moving image for a certain period. The processor 16 may also reduce the visibility of the visually recognized second information image 220 at the same timing as displaying it as a moving image, or may reduce the visibility after it has been displayed as a moving image.
 In some embodiments, when a plurality of second information images 220 are displayed and the line of sight is directed at a given second information image 220, the given second information image 220 and the other second information images 220 may be displayed as moving images. For example, in FIG. 5A, when the driver 4 visually recognizes the second FCW image 222 displayed for the pedestrian 313, the second FCW image 222 and the first FCW image 221 displayed for the preceding vehicle 312 may both be turned into moving images. By animating the other information images in the same way, visual attention can be directed not only to the visually recognized image but also to similar images.
 In some embodiments, when a plurality of second information images 220 are displayed and the line of sight is directed at a given second information image 220, the given second information image 220 and other second information images 220 near it may be displayed as moving images.
 In some embodiments, when a plurality of second information images 220 are displayed and the line of sight is directed at a given second information image 220, the given second information image 220 and the other second information images 220 may be displayed as moving images with the same period. This makes it easier to identify where images similar to the one the driver looked at are displayed.
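 Driving every animated second information image from one shared phase, as in the Python sketch below, is one simple way to obtain the same-period animation; the sinusoidal blink and its use as an opacity multiplier are assumptions.

    import math

    def shared_blink_phase(t: float, period_s: float = 0.8) -> float:
        # A single phase in [0, 1] applied to every animated second
        # information image, so similar images pulse in unison.
        return 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period_s))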
 In some embodiments, information on behavior that is inappropriate with respect to the information indicated by the first information image 210 may be acquired, and after the visibility of the first information image 210 has been reduced, the visibility of the first information image 210 may be increased when it can be determined that the driver 4 is about to behave inappropriately with respect to the information indicated by the first information image 210.
 In some embodiments, after the visibility of the first information image 210 has been reduced, related images (213, 214) related to the first information image 210 may be displayed in a second display area 102 different from the first display area 101 in which the first information image 210 or the second information image 220 is displayed. In this way, the region of the foreground 300 that overlaps the first information image 210 can be made easier to see, while the information indicated by the first information image 210 can still be confirmed in the separate second display area 102. The processor 16 may reduce the visibility of one first information image 210 and newly display two or more related images. This allows more information to be conveyed by the related images and suppresses the loss of information recognizability caused by the reduced visibility of the first information image 210, an AR image.
DESCRIPTION OF SYMBOLS: 1 ... host vehicle, 2 ... front windshield, 4 ... driver, 5 ... dashboard, 10 ... vehicular display system, 11 ... image display unit, 11a ... display light, 13 ... display control device, 14 ... I/O interface, 16 ... processor, 18 ... storage unit, 20 ... image processing circuit, 22 ... memory, 100 ... display area, 110 ... first display area, 120 ... second display area, 200 ... image, 210 ... first information image, 211 ... navigation image, 212 ... POI image, 213 ... first navigation image, 214 ... second navigation image, 220 ... second information image, 221 ... first FCW image, 222 ... second FCW image, 231 ... road information image (third information image), 232 ... speed image (third information image), 300 ... foreground, 310 ... real object, 311 ... road surface, 311a ... lane marking, 311b ... lane marking, 312 ... preceding vehicle, 313 ... pedestrian, 314 ... building, 320 ... lane, 401 ... vehicle ECU, 403 ... road information database, 405 ... host vehicle position detection unit, 407 ... vehicle exterior sensor, 409 ... line-of-sight direction detection unit, 411 ... eye position detection unit, 413 ... mobile information terminal, 420 ... external communication connection device, 502 ... real object related information detection module, 504 ... notification necessity detection module, 506 ... image type determination module, 508 ... image position determination module, 510 ... image size determination module, 512 ... eye position detection module, 514 ... visual recognition detection module, 516 ... behavior determination module, 518 ... graphic module, GZ ... gaze position

Claims (18)

  1.  A display control device (13) for controlling an image display unit (11, 12) that displays an image (200) in a region overlapping the foreground (300) as seen from the driver of a vehicle (1), the display control device comprising:
     one or more I/O interfaces (14);
     one or more processors (16);
     a memory (18); and
     one or more computer programs stored in the memory (18) and configured to be executed by the one or more processors (16),
     wherein the one or more processors (16) execute instructions to:
      acquire the line-of-sight direction of the driver from the one or more I/O interfaces (14); and
      display, based on the line-of-sight direction, a first information image (210) whose visibility is reduced when it is determined to have been visually recognized, and a second information image (220) whose visibility is not reduced as much as that of the first information image (210) when it is determined to have been visually recognized.
  2.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to determine the degree of change in the visibility of the second information image (220) when the line of sight is directed at it, according to the magnitude of the risk potential of the information indicated by the second information image (220).
  3.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to reduce the visibility of the second information image (220) when the risk potential of the information indicated by the second information image (220) does not increase after the line of sight has been directed at the second information image (220), or after the line of sight has been directed at the second information image (220) and its visibility has been reduced.
  4.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to display the second information image (220) as a moving image when the line of sight is directed at the second information image (220).
  5.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to display, when a plurality of second information images (220) are displayed and the line of sight is directed at a given second information image (221), the given second information image (221) and other second information images (222 ...) as moving images.
  6.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to display, when a plurality of second information images (220) are displayed and the line of sight is directed at a given second information image (221), the given second information image (221) and other second information images (222 ...) near the given second information image (221) as moving images.
  7.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to display, when a plurality of second information images (220) are displayed and the line of sight is directed at a given second information image (221), the given second information image (221) and other second information images (222 ...) as moving images with the same period.
  8.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to:
     acquire inference information from which it can be inferred that the driver will behave in a manner inappropriate with respect to the information indicated by the first information image (210); and
     increase the visibility of the first information image (210) corresponding to the inference information when, after the visibility of the first information image (210) has been reduced, it can be determined that the driver will behave in a manner inappropriate with respect to the information indicated by the first information image (210).
  9.  The display control device according to claim 1, wherein the one or more processors (16) execute instructions to display, after reducing the visibility of the first information image (210), related images (213, 214) related to the first information image (210) in a second display area (102) different from the first display area (101) in which the first information image (210) or the second information image (220) is displayed.
  10.  The display control device according to claim 1, wherein the second information image (220) is an image displayed in the vicinity of at least one of an obstacle, a pedestrian, and another vehicle present in the foreground (300) so as to emphasize it.
  11.  A method for controlling an image display unit (11, 12) that displays an image (200) in a region overlapping the foreground (300) as seen from the driver of a vehicle (1), the method comprising:
     acquiring the line-of-sight direction of the driver from one or more I/O interfaces (14); and
     displaying a first information image (210) whose visibility is reduced when it is determined to have been visually recognized, and a second information image (220) whose visibility is not reduced as much as that of the first information image (210) even when the line of sight is directed at it.
  12.  The method according to claim 11, comprising displaying the second information image (220) as a moving image when the line of sight is directed at the second information image (220).
  13.  The method according to claim 11, comprising displaying, when a plurality of second information images (220) are displayed and the line of sight is directed at a given second information image (221), the given second information image (221) and other second information images (222 ...) as moving images.
  14.  The method according to claim 11, comprising displaying, when a plurality of second information images (220) are displayed and the line of sight is directed at a given second information image (221), the given second information image (221) and other second information images (222 ...) near the given second information image (221) as moving images.
  15.  The method according to claim 11, comprising displaying, when a plurality of second information images (220) are displayed and the line of sight is directed at a given second information image (221), the given second information image (221) and other second information images (222 ...) as moving images with the same period.
  16.  The method according to claim 11, comprising:
     acquiring inference information from which it can be inferred that the driver will behave in a manner inappropriate with respect to the information indicated by the first information image (210); and
     increasing the visibility of the first information image (210) corresponding to the inference information when, after the visibility of the first information image (210) has been reduced, it can be determined that the driver will behave in a manner inappropriate with respect to the information indicated by the first information image (210).
  17.  The method according to claim 11, comprising displaying, after reducing the visibility of the first information image (210), related images (213, 214) related to the first information image (210) in a second display area (102) different from the first display area (101) in which the first information image (210) or the second information image (220) is displayed.
  18.  A computer program comprising instructions for performing the method according to any one of claims 11 to 17.
PCT/JP2019/045494 2018-11-23 2019-11-20 Display control device, method, and computer program WO2020105685A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020557598A JP7255608B2 (en) 2018-11-23 2019-11-20 DISPLAY CONTROLLER, METHOD, AND COMPUTER PROGRAM
DE112019005849.5T DE112019005849T5 (en) 2018-11-23 2019-11-20 Display control device, display control method and computer program for display control
CN201980076258.0A CN113165510B (en) 2018-11-23 2019-11-20 Display control device, method, and computer program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-219802 2018-11-23
JP2018219802 2018-11-23

Publications (1)

Publication Number Publication Date
WO2020105685A1 true WO2020105685A1 (en) 2020-05-28

Family

ID=70773132

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/045494 WO2020105685A1 (en) 2018-11-23 2019-11-20 Display control device, method, and computer program

Country Status (4)

Country Link
JP (1) JP7255608B2 (en)
CN (1) CN113165510B (en)
DE (1) DE112019005849T5 (en)
WO (1) WO2020105685A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022095787A (en) * 2021-06-25 2022-06-28 阿波▲羅▼智▲聯▼(北京)科技有限公司 Display method, apparatus, terminal device, computer readable storage medium, and computer program
EP4265463A1 (en) * 2022-04-19 2023-10-25 Volkswagen Ag Vehicle, head-up display, augmented reality device, apparatuses, methods and computer programs for controlling an augmented reality device and for controlling a visualization device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116572837A (en) * 2023-04-27 2023-08-11 江苏泽景汽车电子股份有限公司 Information display control method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007045169A (en) * 2005-08-05 2007-02-22 Aisin Aw Co Ltd Information processor for vehicle
JP2017039373A (en) * 2015-08-19 2017-02-23 トヨタ自動車株式会社 Vehicle video display system
JP2017097687A (en) * 2015-11-26 2017-06-01 矢崎総業株式会社 Vehicular information presentation device
JP2017226272A (en) * 2016-06-21 2017-12-28 日本精機株式会社 Information providing device for vehicle

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8330812B2 (en) * 1995-05-30 2012-12-11 Simulated Percepts, Llc Method and apparatus for producing and storing, on a resultant non-transitory storage medium, computer generated (CG) video in correspondence with images acquired by an image acquisition device tracked in motion with respect to a 3D reference frame
JP3877127B2 (en) * 2000-06-15 2007-02-07 マツダ株式会社 Vehicle display device
JP4698002B2 (en) * 2000-07-11 2011-06-08 マツダ株式会社 Vehicle display device
JP3870409B2 (en) * 2000-08-03 2007-01-17 マツダ株式会社 Vehicle display device
JP2002293162A (en) * 2001-03-30 2002-10-09 Yazaki Corp Vehicular display device
JP4026144B2 (en) * 2004-01-20 2007-12-26 マツダ株式会社 Image display device for vehicle
JP4715718B2 (en) * 2006-10-24 2011-07-06 株式会社デンソー Vehicle display device
JP2008282168A (en) * 2007-05-09 2008-11-20 Toyota Motor Corp Consciousness detector
JP2009292409A (en) * 2008-06-09 2009-12-17 Yazaki Corp Head-up display
JP5245930B2 (en) * 2009-03-09 2013-07-24 株式会社デンソー In-vehicle display device
JP5842419B2 (en) * 2011-07-06 2016-01-13 日本精機株式会社 Head-up display device
JP5406328B2 (en) * 2012-03-27 2014-02-05 株式会社デンソーアイティーラボラトリ VEHICLE DISPLAY DEVICE, ITS CONTROL METHOD AND PROGRAM
JP6232691B2 (en) * 2012-07-27 2017-11-22 株式会社Jvcケンウッド VEHICLE DISPLAY CONTROL DEVICE, VEHICLE DISPLAY DEVICE, AND VEHICLE DISPLAY CONTROL METHOD
WO2014097404A1 (en) * 2012-12-18 2014-06-26 パイオニア株式会社 Head-up display, control method, program and storage medium
JP6037923B2 (en) * 2013-04-08 2016-12-07 三菱電機株式会社 Display information generating apparatus and display information generating method
JP6413207B2 (en) * 2013-05-20 2018-10-31 日本精機株式会社 Vehicle display device
JP2015041969A (en) * 2013-08-23 2015-03-02 ソニー株式会社 Image acquisition apparatus, image acquisition method, and information distribution system
JP6225379B2 (en) * 2013-12-23 2017-11-08 日本精機株式会社 Vehicle information projection system
JP6253417B2 (en) * 2014-01-16 2017-12-27 三菱電機株式会社 Vehicle information display control device
JP6443716B2 (en) * 2014-05-19 2018-12-26 株式会社リコー Image display device, image display method, and image display control program
JP6348791B2 (en) * 2014-07-16 2018-06-27 クラリオン株式会社 Display control apparatus and display control method
JP6379779B2 (en) * 2014-07-16 2018-08-29 日産自動車株式会社 Vehicle display device
JP2016031603A (en) * 2014-07-28 2016-03-07 日本精機株式会社 Display system for vehicle
JP2016055801A (en) * 2014-09-11 2016-04-21 トヨタ自動車株式会社 On-vehicle display device
JP2016107947A (en) * 2014-12-10 2016-06-20 株式会社リコー Information providing device, information providing method, and control program for providing information
JP6504431B2 (en) * 2014-12-10 2019-04-24 株式会社リコー IMAGE DISPLAY DEVICE, MOBILE OBJECT, IMAGE DISPLAY METHOD, AND PROGRAM
JP6103138B2 (en) * 2015-03-26 2017-03-29 三菱電機株式会社 Driver support system
US20180356641A1 (en) * 2015-12-01 2018-12-13 Nippon Seiki Co., Ltd. Head-up display
JP2017138350A (en) * 2016-02-01 2017-08-10 アルプス電気株式会社 Image display device
JP6272375B2 (en) * 2016-03-18 2018-01-31 株式会社Subaru Display control device for vehicle
JP2017200786A (en) * 2016-05-02 2017-11-09 本田技研工業株式会社 Vehicle control system, vehicle control method and vehicle control program
JP2016193723A (en) * 2016-06-24 2016-11-17 パイオニア株式会社 Display device, program, and storage medium
JP2018022958A (en) * 2016-08-01 2018-02-08 株式会社デンソー Vehicle display controller and vehicle monitor system
JP6643969B2 (en) * 2016-11-01 2020-02-12 矢崎総業株式会社 Display device for vehicles
JP2018120135A (en) * 2017-01-26 2018-08-02 日本精機株式会社 Head-up display

Also Published As

Publication number Publication date
DE112019005849T5 (en) 2021-09-02
CN113165510B (en) 2024-01-30
JP7255608B2 (en) 2023-04-11
JPWO2020105685A1 (en) 2021-11-04
CN113165510A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
JP6107590B2 (en) Head-up display device
US20210104212A1 (en) Display control device, and nontransitory tangible computer-readable medium therefor
JP2018203169A (en) Operation awareness estimation device
US11803053B2 (en) Display control device and non-transitory tangible computer-readable medium therefor
JP6443716B2 (en) Image display device, image display method, and image display control program
JP7255608B2 (en) DISPLAY CONTROLLER, METHOD, AND COMPUTER PROGRAM
JP2020032866A (en) Vehicular virtual reality providing device, method and computer program
US20200298703A1 (en) Method, device and computer-readable storage medium with instructions for controlling a display of an augmented-reality head-up display device for a transportation vehicle
JP2023174676A (en) Vehicle display control device, method and program
JP2024029051A (en) In-vehicle display device, method and program
JP6186905B2 (en) In-vehicle display device and program
JP7459883B2 (en) Display control device, head-up display device, and method
WO2020158601A1 (en) Display control device, method, and computer program
WO2021200914A1 (en) Display control device, head-up display device, and method
JP2020117105A (en) Display control device, method and computer program
JP2020121607A (en) Display control device, method and computer program
JP2020121704A (en) Display control device, head-up display device, method and computer program
JP6939147B2 (en) Driving information guidance device and computer program
JP7434894B2 (en) Vehicle display device
JP2020154468A (en) Driving support device, method, and computer program
WO2023145852A1 (en) Display control device, display system, and display control method
WO2021200913A1 (en) Display control device, image display device, and method
JP2020106911A (en) Display control device, method, and computer program
JP7255596B2 (en) Display control device, head-up display device
JP2020117104A (en) Display control device, display system, method and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19887706

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020557598

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 19887706

Country of ref document: EP

Kind code of ref document: A1