WO2020003750A1 - Vehicle display control device, vehicle display control method, and control program - Google Patents

Vehicle display control device, vehicle display control method, and control program

Info

Publication number
WO2020003750A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
image
images
group
vehicle
Prior art date
Application number
PCT/JP2019/018477
Other languages
French (fr)
Japanese (ja)
Inventor
靖 作間
智 堀畑
Original Assignee
DENSO Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO Corporation
Publication of WO2020003750A1 publication Critical patent/WO2020003750A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement or adaptations of instruments
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37Details of the operation on graphic patterns
    • G09G5/377Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • The present disclosure relates to a display control device for a vehicle, a display control method for a vehicle, and a control program.
  • Patent Literature 1 discloses a technique in which an information communicator, such as a guide arrow serving as travel guide information, is displayed on a head-up display so as to overlap the road at an intersection.
  • Japanese Patent Application Laid-Open No. H11-163873 discloses a technique in which, when an obstacle existing in front of the vehicle overlaps the information communicator, the display position of the information communicator is shifted, or the portion of the information communicator that overlaps the obstacle is erased.
  • Patent Literature 1 assumes that the display position of an information communicator overlaps an obstacle existing in front of the vehicle, but does not assume that the display positions of a plurality of types of information communicators overlap one another. Therefore, with the technique disclosed in Patent Literature 1, when the display positions of a plurality of types of information communicators overlap, the images mix and become difficult to distinguish, and it may become difficult to convey the intended content of each display to the driver.
  • In view of the above, it is an object of the present disclosure to provide a display control device for a vehicle, a display control method for a vehicle, and a control program that, when virtual images are superimposed on the foreground of a vehicle by projecting images onto a projection member, suppress the difficulty of conveying the intended content of each image to the driver even when a plurality of images having different types of display items are projected onto a common projection member so that their respective virtual images are displayed in combination on the foreground of the vehicle.
  • According to one aspect of the present disclosure, a display control device for a vehicle controls a head-up display device that is used in a vehicle and that superimposes a virtual image on the foreground of the vehicle by projecting an image drawn on a display onto a projection member.
  • The display control device for a vehicle includes: a determination information acquisition unit that acquires content determination information, which is information for determining the content of the image to be drawn on the display; and a drawing control unit that, in accordance with the content determination information acquired by the determination information acquisition unit, draws a plurality of groups of images on the single display by dividing the layers into groups, the images being classified in units of display item types into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for road surfaces. The drawing control unit performs drawing so that the visibility of the images differs for each layer on which the images are drawn on the display.
  • According to another aspect, a display control method for a vehicle controls a head-up display device that is used in a vehicle and that superimposes a virtual image on the foreground of the vehicle by projecting an image drawn on a display onto a projection member.
  • The display control method for a vehicle includes: acquiring content determination information, which is information for determining the content of the image to be drawn on the display; and, in accordance with the acquired content determination information, drawing a plurality of groups of images on the single display by dividing the layers into groups, the images being classified in units of display item types into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for road surfaces, the drawing being performed so that the visibility of the images differs for each layer on which the images are drawn on the display.
  • According to still another aspect, a control program controls a head-up display device that is used in a vehicle and that superimposes a virtual image on the foreground of the vehicle by projecting an image drawn on a display onto a projection member.
  • The control program causes a computer to function as: a determination information acquisition unit that acquires content determination information, which is information for determining the content of the image to be drawn on the display; and a drawing control unit that, in accordance with the content determination information acquired by the determination information acquisition unit, draws a plurality of groups of images on the single display by dividing the layers into groups, the images being classified in units of display item types into a plurality of groups including at least a group of images for dynamic targets, a group of images for static targets, and a group of images for road surfaces, and that performs drawing so that the visibility of the images differs for each layer on which the images are drawn on the display.
  • According to the above configurations, the images of the plurality of groups are drawn on one display with the layers divided by group and with the visibility of the images differing for each layer. Therefore, even when a plurality of images having different types of display items are projected onto a common projection member so that their respective virtual images are displayed in combination on the foreground of the vehicle, the driver can distinguish and recognize at least the group of images for dynamic targets, the group of images for static targets, and the group of images for road surfaces.
  • Moreover, the classification into the image group for dynamic targets, the image group for static targets, and the image group for road surfaces is a grouping from the viewpoint of the space in front of the vehicle, which is unique to superimposed display on the foreground of the vehicle. Since the driver can distinguish and recognize at least each of the groups defined from this spatial viewpoint, the driver can easily recognize the intended content of each image in the superimposed display. As a result, when virtual images are superimposed on the foreground of the vehicle by projecting images onto the projection member, it is possible to prevent the intended content of each image from becoming difficult to convey to the driver, even when a plurality of images of different display item types are projected onto the common projection member so that their respective virtual images are displayed in combination on the foreground of the vehicle.
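  • As a rough illustration of the layered-drawing idea described above, the following Python sketch draws each group of images on its own layer and flattens the layers with a different visibility per layer, modeled here as an alpha value. This is not taken from the patent; the group names follow the text, but the class, the alpha values, and the drawing order are assumptions for illustration only.

```python
# Hypothetical sketch of per-group layers with differing visibility.
# The alpha values below are illustrative, not from the patent.
LAYER_ALPHA = {
    "dynamic_target": 1.0,   # most visible
    "static_target": 0.8,
    "road_surface": 0.6,     # least visible
}

class LayeredDisplay:
    """One drawing layer per display-item group."""

    def __init__(self):
        self.layers = {name: [] for name in LAYER_ALPHA}

    def draw(self, group, image):
        # Each image is drawn only on the layer of its own group.
        self.layers[group].append(image)

    def composite(self):
        """Flatten the layers back-to-front into one frame.

        Each frame entry pairs an image with its layer's visibility,
        so images of different groups remain distinguishable.
        """
        order = ["road_surface", "static_target", "dynamic_target"]
        frame = []
        for group in order:
            for image in self.layers[group]:
                frame.append((image, LAYER_ALPHA[group]))
        return frame
```

  • In this toy model, drawing a pedestrian marker into the dynamic-target layer and a route sheet into the road-surface layer yields a composite in which the two images carry different visibility values, mirroring the per-layer visibility difference the text describes.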
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of the vehicle system.
  • FIG. 2 is a diagram illustrating an example of the mounting of the HUD device in a vehicle.
  • FIG. 3 is a diagram illustrating an example of a schematic configuration of the HCU.
  • FIG. 4 is a diagram illustrating an example of the classification of display items.
  • FIG. 5 is a diagram illustrating an example of the grouping of display items.
  • FIGS. 6 and 7 are diagrams illustrating examples of superimposed display.
  • FIGS. 8 to 10 are diagrams describing examples of the combining process of combining the images drawn on the respective layers.
  • FIG. 11 is a flowchart showing an example of the flow of virtual-image display control related processing in the HCU.
  • The vehicle system 1 is used in a vehicle, such as an automobile, that travels on a road.
  • The vehicle system 1 includes an HMI (Human Machine Interface) system 2, an ADAS (Advanced Driver Assistance Systems) locator 3, a periphery monitoring sensor 4, a vehicle state sensor 5, a vehicle control ECU 6, a navigation device 7, and an automatic driving ECU 8.
  • The HMI system 2, the ADAS locator 3, the periphery monitoring sensor 4, the vehicle state sensor 5, the vehicle control ECU 6, the navigation device 7, and the automatic driving ECU 8 are assumed to be connected to, for example, an in-vehicle LAN.
  • The ADAS locator 3 includes a GNSS (Global Navigation Satellite System) receiver and an inertial sensor.
  • The GNSS receiver receives positioning signals from a plurality of satellites.
  • The inertial sensor includes, for example, a gyro sensor and an acceleration sensor.
  • The ADAS locator 3 sequentially measures the vehicle position of the own vehicle by combining the positioning signals received by the GNSS receiver with the measurement results of the inertial sensor. Note that the vehicle position may instead be measured using, for example, a traveling distance obtained from the detection results sequentially output from a vehicle speed sensor mounted on the own vehicle. The measured vehicle position is then output to the in-vehicle LAN.
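  • The combination of GNSS fixes with inertial measurements can be pictured, in greatly simplified form, as dead reckoning corrected by each new fix. The sketch below is a hypothetical illustration only: the flat 2-D motion model, the function names, and the fixed blending gain are assumptions, not the locator's actual algorithm (a real locator would typically weight the sources by estimated error, e.g. with a Kalman filter).

```python
import math

def dead_reckon(pos, heading, speed, yaw_rate, dt):
    """Propagate the last position estimate with inertial data
    (yaw rate from the gyro, speed from the vehicle speed sensor)."""
    heading = heading + yaw_rate * dt
    x, y = pos
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return (x, y), heading

def correct_with_gnss(dr_pos, gnss_pos, gain=0.2):
    """Pull the dead-reckoned estimate toward a fresh GNSS fix.

    'gain' is an illustrative blending factor chosen for the example.
    """
    return tuple(d + gain * (g - d) for d, g in zip(dr_pos, gnss_pos))
```

  • For instance, a vehicle starting at the origin and driving straight at 10 m/s for one second dead-reckons to (10, 0); a subsequent GNSS fix at (11, 0) then nudges the estimate forward by the blending gain.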
  • The ADAS locator 3 may be configured to include a map database (hereinafter, map DB) that stores, as map data, a three-dimensional map including a point group of feature points of road shapes and structures.
  • When the three-dimensional map is used, a configuration may be adopted in which the vehicle position of the own vehicle is specified using the detection result of a periphery monitoring sensor 4 such as a LIDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging).
  • The map data of the three-dimensional map may also be obtained from outside the vehicle via a communication module.
  • The periphery monitoring sensor 4 is an autonomous sensor that monitors the surrounding environment of the own vehicle.
  • The periphery monitoring sensor 4 detects obstacles around the own vehicle, such as moving dynamic targets, for example pedestrians, animals other than humans, and vehicles other than the own vehicle, and stationary static targets, for example falling objects on the road, guardrails, curbs, and trees.
  • It also detects road markings, such as lane markings, around the own vehicle.
  • The periphery monitoring sensor 4 is, for example, a periphery monitoring camera that captures a predetermined area around the own vehicle, or a millimeter-wave radar, sonar, or LIDAR that transmits search waves to a predetermined area around the own vehicle.
  • The periphery monitoring camera sequentially outputs the captured images to the in-vehicle LAN as sensing information.
  • A sensor that transmits search waves, such as a sonar, a millimeter-wave radar, or a LIDAR, sequentially outputs to the in-vehicle LAN, as sensing information, the scanning result based on the reception signal obtained when the reflected wave reflected by a detection target is received.
  • The vehicle state sensor 5 is a group of sensors for detecting various states of the own vehicle.
  • The vehicle state sensor 5 includes a vehicle speed sensor that detects the vehicle speed of the own vehicle, a steering sensor that detects the steering angle of the own vehicle, an accelerator position sensor that detects the opening degree of the accelerator pedal of the own vehicle, and a brake depression force sensor that detects the depression amount of the brake pedal of the own vehicle.
  • The vehicle state sensor 5 outputs the detected sensing information to the in-vehicle LAN. Note that the sensing information detected by the vehicle state sensor 5 may be output to the in-vehicle LAN via an ECU mounted on the own vehicle.
  • The vehicle control ECU 6 is an electronic control device that performs acceleration/deceleration control and/or steering control of the own vehicle.
  • Examples of the vehicle control ECU 6 include a steering ECU that performs steering control, a power unit control ECU that performs acceleration/deceleration control, and a brake ECU.
  • The vehicle control ECU 6 acquires the detection signals output from sensors mounted on the own vehicle, such as the accelerator position sensor, the brake depression force sensor, the steering angle sensor, and a wheel speed sensor, and outputs control signals to drive control devices such as an electronically controlled throttle, a brake actuator, and an EPS (Electric Power Steering) motor.
  • The vehicle control ECU 6 can also output the detection signals of the above-described sensors to the in-vehicle LAN.
  • The navigation device 7 includes a map DB storing map data, searches for a route that satisfies conditions such as time priority or distance priority to a set destination, and performs route guidance according to the searched route.
  • The map DB is a non-volatile memory that stores map data such as link data, segment data, node data, and road shapes. The map data may also be configured to include a three-dimensional map including a point group of feature points of road shapes and structures.
  • The automatic driving ECU 8 controls the vehicle control ECU 6 to execute an automatic driving function that performs driving operations on behalf of the driver.
  • The automatic driving ECU 8 recognizes the traveling environment of the own vehicle based on the vehicle position of the own vehicle and the map data of the three-dimensional map obtained from the ADAS locator 3, the sensing information of the periphery monitoring sensor 4, and the map data obtained from the navigation device 7. As an example, from the sensing information of the periphery monitoring sensor 4, it recognizes the shapes and moving states of objects around the own vehicle, and also recognizes the shapes of road markings around the own vehicle. Then, by combining these with the vehicle position of the own vehicle and the map data, it generates a virtual space that reproduces the actual traveling environment in three dimensions.
  • The automatic driving ECU 8 also generates, based on the recognized traveling environment, a travel plan for automatically driving the own vehicle by the automatic driving function.
  • As the travel plan, a long- and medium-term travel plan and a short-term travel plan are generated.
  • In the long- and medium-term travel plan, a route for directing the own vehicle to the set destination is defined.
  • A configuration in which the route searched by the navigation device 7 is used as this route may be adopted.
  • In the short-term travel plan, a planned traveling locus for realizing traveling according to the long- and medium-term travel plan is defined using the generated virtual space around the own vehicle. Specifically, execution of steering for lane following or lane changing, acceleration/deceleration for speed adjustment, and rapid braking for collision avoidance is determined based on the short-term travel plan.
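  • The short-term decisions listed above can be caricatured as a priority-ordered rule check: collision avoidance first, then speed adjustment, then lane keeping. The following sketch is purely illustrative; the function name, the thresholds, and the rule set are invented for the example, and the actual planner works on the 3-D virtual space rather than scalar inputs.

```python
def short_term_action(gap_m, closing_speed_mps, lane_offset_m):
    """Pick one short-term action, checking the most urgent case first:
    rapid braking for collision avoidance, then speed adjustment,
    then steering for lane following. All thresholds are illustrative."""
    # Time to collision, checked only if the gap is actually closing.
    if closing_speed_mps > 0 and gap_m / closing_speed_mps < 2.0:
        return "rapid_braking"
    if gap_m < 30.0:
        return "decelerate"
    if abs(lane_offset_m) > 0.5:
        return "steer_toward_lane_center"
    return "keep_speed_and_lane"
```

  • The ordering of the checks encodes the precedence described in the text: a collision-imminent situation overrides speed adjustment, which in turn overrides lane-keeping corrections.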
  • The HMI system 2 includes an HCU (Human Machine Interface Control Unit) 20, an operation device 21, and a display device 22.
  • The HMI system 2 receives input operations from the driver, who is a user of the own vehicle, and presents information to the driver of the own vehicle.
  • The operation device 21 is a group of switches operated by the driver of the own vehicle.
  • The operation device 21 is used for making various settings.
  • For example, the operation device 21 includes steering switches provided on the spoke portions of the steering wheel of the own vehicle.
  • As the display device 22, a head-up display (HUD) device 220 is used.
  • The HUD device 220 will be described with reference to FIG.
  • The HUD device 220 is provided in the instrument panel 12 of the own vehicle and has a display 221, a plane mirror 222, and a concave mirror 223.
  • The HUD device 220 projects an image onto the front windshield 10 under the control of the HCU 20.
  • The HUD device 220 projects the display image formed by the display 221, through an optical system including the plane mirror 222 and the concave mirror 223, onto a projection area defined on the front windshield 10 serving as the projection member. More specifically, the image displayed on the display 221 (that is, the display image) is reflected by the plane mirror 222, then enlarged by the concave mirror 223 and projected onto the projection area defined on the front windshield 10.
  • The projection area is located, for example, in front of the driver's seat.
  • The display 221 is, for example, a TFT liquid crystal panel, and outputs the light of the display image drawn on the liquid crystal panel by transmitting light from a backlight.
  • The luminous flux of the display image reflected by the front windshield 10 toward the vehicle interior is perceived by the driver sitting in the driver's seat.
  • The luminous flux from the foreground, that is, the scenery existing in front of the own vehicle, transmitted through the front windshield 10 formed of translucent glass, is also perceived by the driver sitting in the driver's seat. Accordingly, the driver can visually recognize the virtual image 100 of the display image, formed in front of the front windshield 10, overlapping a part of the foreground. That is, the HUD device 220 superimposes the virtual image 100 on the foreground of the vehicle and realizes a so-called AR (Augmented Reality) display.
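  • For an AR overlay to appear superimposed on its foreground target, the drawing position on the virtual-image plane must line up with the line of sight from the driver's eye point to the target. A minimal pinhole-style sketch of that geometry follows; the coordinate convention, the function name, and all numbers are assumptions for illustration, not the HUD device's actual calibration.

```python
def project_to_virtual_image(target, eye, plane_distance):
    """Intersect the eye-to-target line of sight with a vertical
    virtual-image plane 'plane_distance' metres ahead of the eye.

    Coordinates are (x forward, y left, z up) in metres.
    Returns the (y, z) drawing position on that plane.
    """
    tx, ty, tz = target
    ex, ey, ez = eye
    forward = tx - ex
    if forward <= 0:
        raise ValueError("target must lie ahead of the eye point")
    s = plane_distance / forward  # similar-triangles scale factor
    return (ey + s * (ty - ey), ez + s * (tz - ez))
```

  • For example, a road-surface point 20 m ahead, seen from an eye point 1.2 m above the road, maps onto a plane 5 m ahead at a height of 0.9 m, which is where a superimposed road-surface image would be drawn in this toy model.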
  • The display image includes images having a movement attribute, whose positions change in accordance with the objects in the foreground on which they are superimposed, and images having a fixed attribute, whose positions are determined regardless of the foreground.
  • The details of the movement attribute images will be described later.
  • The fixed attribute images include an image showing information from the instruments of the own vehicle, an image showing the setting state of a function of the own vehicle such as the automatic driving function, an image showing the operating state of the automatic driving function, an image showing regulation information of the road on which the vehicle is traveling, and an image showing the monitoring state of the driver. An example of the image showing instrument information of the own vehicle is an image showing the value of the vehicle speed.
  • Examples of the image showing the setting state of a function of the own vehicle, such as the automatic driving function, include icon images showing whether an automatic driving function such as the ACC (Adaptive Cruise Control) function is on or off and whether the route guidance function is on or off. Examples of the image showing the operating state of the automatic driving function include an icon image showing whether the automatic driving function is operating. Examples of the image showing the regulation information of the traveling road include an icon image showing the content of a road sign, such as a speed regulation sign, of the traveling road. Examples of the image showing the monitoring state of the driver include an icon image showing whether the driver is gripping the steering wheel. Note that the fixed attribute display is not limited to the examples given here.
  • In the present embodiment, the description will be given taking as an example a case where the virtual image 100 is displayed tilted in the front-rear direction of the own vehicle so that, as viewed from the driver, the upper side of the virtual image 100 appears farther away (on the distal side) than the lower side.
  • This eliminates the need to use two displays, one for displaying a virtual image 100 on the side proximal to the driver and the other for displaying a virtual image 100 on the side distal to the driver. It also makes it easier to see a virtual image 100 displayed continuously from the proximal side to the distal side.
  • However, the configuration is not limited to this, and the virtual image 100 may be displayed without being tilted in the front-rear direction of the own vehicle.
  • In the present embodiment, the display 221 is a TFT liquid crystal panel as an example, but it is not necessarily limited to this.
  • A display of another type may be used, and the optical system is also not limited to the one described above; for example, a reflective screen may be used instead of the plane mirror 222.
  • The projection member onto which the HUD device 220 projects the display image is not limited to the front windshield 10 and may be a light-transmitting combiner.
  • The display device 22 may also include a device that displays images other than the HUD device 220.
  • The HCU 20 is mainly configured as a microcomputer including a processor, a volatile memory, a non-volatile memory, I/O, and a bus connecting these, and is connected to the HUD device 220.
  • The HCU 20 controls the display on the HUD device 220 by executing a control program stored in the non-volatile memory.
  • The HCU 20 corresponds to the display control device for a vehicle. Execution of the control program by the processor corresponds to execution of the display control method for a vehicle corresponding to the control program.
  • The memory referred to here is a non-transitory tangible storage medium that non-temporarily stores computer-readable programs and data.
  • The non-transitory tangible storage medium is realized by a semiconductor memory, a magnetic disk, or the like.
  • The configuration of the HCU 20 relating to display control by the HUD device 220 will be described in detail below.
  • For display control in the HUD device 220, the HCU 20 includes, as functional blocks, an information processing block 200 and a display control block 210, as shown in FIG. 3.
  • In the following, description of the display control of fixed attribute images is omitted, and the display control of movement attribute images is described.
  • Part or all of the functions executed by the HCU 20 may be configured as hardware using one or more ICs or the like. Some or all of the functional blocks included in the HCU 20 may also be realized by a combination of software executed by a processor and hardware members.
  • The information processing block 200 selectively acquires the information necessary for display control in the display control block 210 from the various information output to the in-vehicle LAN. The information processing block 200 also performs processing for converting the acquired information into a form suitable for display control. As shown in FIG. 3, the information processing block 200 includes a determination information acquisition unit 201 as a sub-functional block.
  • The determination information acquisition unit 201 acquires the information necessary for determining the content to be displayed on the HUD device 220.
  • That is, it acquires information for determining the content of the image to be drawn on the display 221 of the HUD device 220 (hereinafter, content determination information).
  • Examples of the content determination information include the vehicle position output from the ADAS locator 3, the route searched by the navigation device 7, the map data stored in the map DB, the traveling environment recognized by the automatic driving ECU 8, and the like.
  • The display control block 210 controls the display on the HUD device 220 (that is, performs display control) based on the information acquired by the information processing block 200. As shown in FIG. 3, the display control block 210 includes a drawing control unit 211 and an arbitration unit 217 as sub-functional blocks.
  • The drawing control unit 211 generates the image data of the image to be drawn on the display 221 of the HUD device 220 based on the information acquired by the information processing block 200.
  • The drawing control unit 211 draws the movement attribute image on the display 221 by outputting the generated image data to the display 221.
  • The drawing control unit 211 changes the content and arrangement of the display image according to the content determination information acquired by the determination information acquisition unit 201.
  • The drawing control unit 211 draws the images of a plurality of groups, classified into the plurality of groups in units of display item types, on the single display 221 by dividing the layers into groups.
  • The drawing control unit 211 includes an emergency related unit 212, a dynamic target related unit 213, a static target related unit 214, a road surface related unit 215, and an extended information related unit 216.
  • The emergency related unit 212, the dynamic target related unit 213, the static target related unit 214, the road surface related unit 215, and the extended information related unit 216 each draw a different group of images. That is, the emergency related unit 212, the dynamic target related unit 213, the static target related unit 214, the road surface related unit 215, and the extended information related unit 216 draw on different layers.
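  • One way to picture this division of labour is a fixed unit-to-layer mapping, so that no two related units ever draw into the same layer. The sketch below is hypothetical; the layer indices, the dictionary, and the helper function are invented for illustration and are not the HCU's actual implementation.

```python
# Hypothetical unit-to-layer assignment; index 0 is drawn topmost.
LAYER_OF_UNIT = {
    "emergency": 0,
    "dynamic_target": 1,
    "static_target": 2,
    "road_surface": 3,
    "extended_information": 4,
}

def draw_on_layer(layers, unit, image):
    """Each related unit writes only to the layer assigned to it,
    so the image groups stay separated until the layers are combined."""
    layers.setdefault(LAYER_OF_UNIT[unit], []).append(image)
    return layers
```

  • Keeping the mapping static means the separation of groups is guaranteed by construction, rather than depending on runtime coordination between the units.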
  • The types of display items are classified, as a large classification, into a warning system, a response system, and a notification system.
  • A warning-system display item is a display item whose content requires the driver to respond immediately in order to avoid danger.
  • The warning-system display items are further subdivided into “Emergency”, “Warning”, and “Caution” according to the degree of urgency.
  • Among the display items “emergency”, “warning”, and “caution”, “emergency” has the highest urgency, followed by “warning”, and “caution” has the lowest urgency.
  • The display item “emergency” is, for example, a display that warns the driver of an emergency and prompts the driver to perform an operation.
  • Examples include displays requesting the driver to perform a driving operation for avoiding danger, such as a warning from the collision damage mitigation braking function or a warning from the forward collision prediction warning function.
  • The display item “warning” is, for example, a display that gives the driver a warning about information that may lead to an accident.
  • Examples include highlighting of information according to the distance from a nearby dangerous obstacle, such as another vehicle cutting in from a blind spot or an approaching pedestrian.
  • The display item “caution” is, for example, a display that calls the driver's attention.
  • Examples include a display calling attention to information presumed to be outside the driver's awareness, such as an area the driver has failed to check as detected by a driver monitoring system.
  • The warning-system display items may be displayed, for example, in the area where danger avoidance is required or in its vicinity.
  • A response-system display item is a display item that conveys a response to the driver's operation.
  • The response-system display items are less urgent than the warning-system display items.
  • As a response-system display item, there is “Response”.
  • For example, a display notifying a change in the inter-vehicle distance setting may be displayed on the preceding vehicle, on the traveling lane of the own vehicle, or in the area between the own vehicle and the preceding vehicle.
  • A notification-system display item is a display item whose content does not require the driver to respond immediately in order to avoid danger, or is unrelated to danger avoidance.
  • The notification-system display items are lower in urgency than the warning-system and response-system display items.
  • The notification-system display items are further subdivided into “Notification” and “Information” according to the degree of urgency.
  • The display item “information” is less urgent than the display item “notification”.
  • The display item “notification” is a display related to the movement of the own vehicle and to safe driving, but of content to which the driver need not respond immediately in order to avoid danger.
  • the display item “notification” is further subdivided into “Dynamic target”, “Static target”, and “Road” according to the spatial viewpoint ahead of the vehicle. .
  • the display item “dynamic target” is a display with a large time change, and is a display for a dynamic target such as a nearby vehicle or a pedestrian.
  • a display indicating a dynamic target such as a vehicle, a pedestrian, or the like, indicating a dynamic target, a ripple-like display extending along a road surface around the dynamic target, or the like may be used.
  • Another example is a display of information indicating the inter-vehicle time to the preceding vehicle, which is superimposed on the preceding vehicle, between the own vehicle and the preceding vehicle, on the traveling lane of the own vehicle, or the like.
  • a display indicating a pedestrian recognition result in a night vision system that indicates or surrounds a pedestrian.
  • the display item “static target” is a display with a small time change, and is a display for a static target such as a sign, a roadside object, a falling object, and the like.
  • a display indicating a static target, a display surrounding a static target, or the like may be given as a display notifying the presence of the static target.
  • the display item "road surface” is a display with a small time change, and is a display for a road surface such as a route or a lane.
  • An example is a line-shaped or sheet-shaped display along the road surface of the planned traveling route for indicating the planned traveling route of the own vehicle in turn-by-turn (hereinafter, TBT) guidance.
  • the display item "information" is a display that gives information to the driver, and is a display of contents that are not directly related to danger avoidance.
  • information provision to a driver such as information on peripheral facilities irrelevant to a planned traveling route and a proposal for a break.
  • The information on a surrounding facility may be superimposed on that facility, and a break proposal may be superimposed on the facility proposed for the break.
  • These display items are prioritized not only by the urgency of the information itself but also from the viewpoint of the space ahead of the vehicle, and are classified into five groups. As shown in FIG. 5, the five groups are, in descending order of priority, an emergency group, a dynamic target group, a static target group, a road surface group, and an extended information group.
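As a rough illustration (not taken from the patent itself), the five-group priority ordering above could be encoded as a simple enumeration; all names here are assumptions:

```python
from enum import IntEnum

# Hypothetical encoding of the five groups of FIG. 5.
# A lower value means a higher display priority.
class DisplayGroup(IntEnum):
    EMERGENCY = 0        # warning-system and response-system items
    DYNAMIC_TARGET = 1   # nearby vehicles, pedestrians
    STATIC_TARGET = 2    # signs, roadside objects, falling objects
    ROAD_SURFACE = 3     # planned route, lanes
    EXTENDED_INFO = 4    # surrounding facilities, break proposals

def by_priority(groups):
    """Sort groups from highest to lowest display priority."""
    return sorted(groups)
```

Sorting a mixed set of pending groups then yields the order in which their visibility should be favored.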
  • the emergency group is a group of images of display items that are more urgent than other groups.
  • Into the emergency group, the images of the display items "emergency", "warning", and "caution" of the warning system and the image of the display item "response" of the response system are classified.
  • the emergency related unit 212 is responsible for drawing images classified into emergency groups.
  • The emergency-related unit 212 determines the display content to be displayed as an image classified into the emergency group, based on the content determination information acquired by the determination information acquisition unit 201. As an example, when the content determination information includes information indicating that a pedestrian has approached within a distance at which a warning is required, display content for displaying a warning mark image (see Wa in FIG. 6) at the position of the pedestrian is determined.
  • FIG. 6 is a diagram illustrating an example of an image superimposed and displayed on the foreground, where Vi indicates a projection area.
  • the emergency-related unit 212 sends a display request to the arbitration unit 217 to request arbitration of the display of the determined display contents.
  • the display request may be provided with information such as the type of the group and the position of the display content.
  • The dynamic target group is a group of images about dynamic targets to which the driver does not need to respond immediately in order to avoid danger.
  • the image of the display item “dynamic target” of the notification system is classified.
  • the dynamic target related unit 213 draws an image classified into a dynamic target group.
  • the dynamic target related unit 213 determines display content to be displayed as an image classified into a dynamic target group based on the content determination information acquired by the determination information acquisition unit 201.
  • As an example, when the content determination information includes information indicating the presence of a dynamic target, such as a pedestrian that has not approached within a distance requiring a warning, display content for displaying a ripple-like image (see Om in FIG. 6) indicating the presence of the dynamic target at its position, or the like, is determined.
  • the dynamic target related unit 213 also sends a display request to the arbitrating unit 217 in the same manner as the emergency related unit 212.
  • The static target group is a group of images about static targets to which the driver does not need to respond immediately in order to avoid danger.
  • the image of the display item “static target” of the notification system is classified into the static target group.
  • the static target related unit 214 draws an image classified into a static target group.
  • The static target related unit 214 determines the display content to be displayed as an image classified into the static target group, based on the content determination information acquired by the determination information acquisition unit 201. As an example, when the content determination information includes information indicating the presence of a static target, such as a falling object that has not approached within a distance requiring a warning, display content for displaying an image (see So in FIG. 7) indicating the presence of the static target at its position, or the like, is determined.
  • the static target related unit 214 also sends a display request to the arbitrating unit 217 in the same manner as the emergency related unit 212.
  • The road surface group is a group of road-surface-related images that do not require the driver's immediate response to avoid danger.
  • the image of the display item “road surface” of the notification system is classified into the road surface group.
  • the road surface related unit 215 draws images classified into road surface groups.
  • the road surface related unit 215 determines display contents to be displayed as images classified into road surface groups based on the content determination information acquired by the determination information acquisition unit 201. As an example, when the information for content determination includes information on the planned travel route of the own vehicle, the display content for displaying a sheet-like image (see Pa in FIG. 6) along the road surface of the planned travel route is determined. And the like.
  • the road related unit 215 also sends a display request to the arbitrating unit 217 in the same manner as the emergency related unit 212.
  • the priority is set for the dynamic target group, the static target group, and the road surface group from the spatial viewpoint in front of the vehicle.
  • The road surface group has a lower priority than the dynamic target group and the static target group because the road surface is located below the dynamic targets and the static targets.
  • The dynamic target group has a higher priority than the static target group and the road surface group because the position of a dynamic target changes over time more than that of the road surface or a static target.
  • the extended information group is a group of images related to information provision that is not directly related to danger avoidance.
  • the image of the display item “information” of the notification system is classified into the extended information group.
  • the extended information related unit 216 draws an image classified into the extended information group.
  • The extended information related unit 216 determines the display content to be displayed as an image classified into the extended information group, based on the content determination information acquired by the determination information acquisition unit 201. As an example, when the content determination information includes information on a preferred surrounding facility proposed to the driver, display content for displaying an image (see Ei in FIG. 6) indicating the presence of the surrounding facility at the position of that facility is determined.
  • the extended information related unit 216 also sends a display request to the arbitrating unit 217 in the same manner as the emergency related unit 212.
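The classification handled by the related units 212 to 216 above amounts to a lookup from display item type to group; a minimal sketch (the string keys are illustrative):

```python
# Hypothetical mapping from display item type to the group whose
# related unit (212-216) draws it.
ITEM_TO_GROUP = {
    "emergency": "emergency",
    "warning": "emergency",
    "caution": "emergency",
    "response": "emergency",
    "dynamic target": "dynamic target",
    "static target": "static target",
    "road surface": "road surface",
    "information": "extended information",
}

def group_of(display_item):
    """Return the group responsible for drawing the given display item."""
    return ITEM_TO_GROUP[display_item]
```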
  • When display requests are sent from the emergency related unit 212, the dynamic target related unit 213, the static target related unit 214, the road surface related unit 215, and the extended information related unit 216 of the drawing control unit 211, the arbitration unit 217 arbitrates the images of the respective groups and returns a control instruction to the drawing control unit 211. Specifically, based on the group type given to each display request, it returns a control instruction to increase the visibility of the images of the higher-priority group.
  • For example, a control instruction may be returned that places the layer of images of a higher-priority group higher in the layer stacking order, or that makes the transmittance of the image in the layer of a higher-priority group relatively higher than that of the other layers, or these may be combined.
  • The arbitration unit 217 may make the transmittance relatively higher either by raising the transmittance of the image in the layer of the high-priority group or by lowering the transmittance of the image in the layer of the low-priority group. It is more preferable to make the transmittance of the image in the layer of the high-priority group relatively higher by lowering the transmittance of the image in the layer of the low-priority group.
  • When the layering order differs, the images of the respective groups can easily be distinguished from one another, and the visibility of each group's images can be enhanced.
  • The image of the group whose layer is higher in the layering order appears more clearly when superimposed on the foreground, so its visibility is further improved.
  • When the transmittances of the images differ, their apparent brightness differs, so the images of each group can easily be distinguished from one another, enhancing the visibility of each group's images.
  • the higher the transmittance of an image the higher the apparent brightness, and thus the higher the visibility.
  • The apparent brightness of each group's images is changed by changing the transmittance of the images because a common display 221 with a common backlight is used to display the images of all groups, and it is difficult to switch the brightness of the backlight separately for each group of images.
  • When an image for a target farther from the own vehicle (hereinafter, a distal target image) is displayed in a higher layer than an image for a target nearer to the own vehicle (hereinafter, a proximal target image), the display can look unnatural.
  • To prevent such an unnatural display, the arbitration unit 217 performs the following arbitration.
  • When the group of the distal target image has a higher priority than the group of the proximal target image, the arbitration unit 217, based on the position of the display target in addition to the group type given to the display request, increases the visibility of the distal target image by raising its transmittance relative to the proximal target image, instead of placing the layer of the distal target image above the layer of the proximal target image in the layering order.
  • Alternatively, the arbitration unit 217 may increase the visibility by placing the layer of the proximal target image above the layer of the distal target image in the layering order. When there are not a plurality of display requests, the arbitration unit 217 returns a display-permission control instruction without performing arbitration.
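The arbitration just described, which normally favors the higher-priority group's layer but falls back to a transmittance increase when the higher-priority image is the distal one, might look roughly like this (the request format, layer numbers, and transmittance values are assumptions of this sketch):

```python
def arbitrate(high, low):
    """high/low: display requests of a higher- and a lower-priority group,
    each a dict with the 'distance' (m) of its display target ahead of
    the own vehicle. Returns drawing instructions for (high, low); a
    larger 'layer' value means a higher layer, and a larger
    'transmittance' means a higher apparent brightness."""
    if high["distance"] > low["distance"]:
        # The higher-priority image is the distal one: placing it on an
        # upper layer would look unnatural, so keep it on the lower
        # layer and raise its transmittance relative to the other image.
        return ({"layer": 0, "transmittance": 1.0},
                {"layer": 1, "transmittance": 0.6})
    # Normal case: the higher-priority image goes on the upper layer, and
    # the low-priority image's transmittance is lowered (the preferred
    # way of making the high-priority image relatively brighter).
    return ({"layer": 1, "transmittance": 1.0},
            {"layer": 0, "transmittance": 0.6})
```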
  • When the superposition order is instructed by the control instruction, the drawing control unit 211 superimposes the layers of each group in the instructed order and draws the images of each group. When the transmittance is instructed by the control instruction, the drawing control unit 211 draws the image of each group's layer with its transmittance changed according to the instruction. As a result, the drawing control unit 211 performs drawing on the display 221 such that the visibility of the images differs for each layer on which they are drawn.
  • For example, when display requests are sent from the dynamic target related unit 213 and the road surface related unit 215 of the drawing control unit 211 to the arbitration unit 217, and a control instruction to place the layer of the dynamic target group higher than the layer of the road surface group is returned to the drawing control unit 211, the images of the dynamic target group and the road surface group are drawn with the dynamic target group's layer superimposed above the road surface group's layer.
  • the drawing control unit 211 performs a synthesizing process of synthesizing an image drawn on each layer when the layers of each group are overlaid and drawn.
  • In FIG. 8, Hi indicates an image of a high-priority group (hereinafter, a high-priority image), and Lo indicates an image of a low-priority group (hereinafter, a low-priority image).
  • the simplest combination processing is a simple “overwrite” shown in FIG. 8A.
  • the simple "overwrite” the transmittance is not changed between the high-priority image and the low-priority image, and the high-priority image is drawn over the low-priority image. In this case, the low priority image that overlaps with the high priority image is hidden by the high priority image.
  • Modifications of "overwrite" include the "addition blend" in FIG. 8B, the "edge removal" in FIG. 8C, and the "transmittance change" in FIG. 8D.
  • In the "addition blend", the transmittance is not changed between the high-priority image and the low-priority image; the high-priority image is superimposed on the low-priority image, and the overlapping portion is rendered by adding the RGB values of both images. Alternatively, a "multiplication blend" may be used, in which the RGB values of both images are multiplied and divided by 255 for drawing.
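Per RGB channel, the "addition blend" and "multiplication blend" of the overlapping area reduce to the following arithmetic (a sketch, not the patent's implementation):

```python
def addition_blend(hi, lo):
    """Add the RGB values of overlapping pixels, clamped to the 8-bit range."""
    return tuple(min(255, h + l) for h, l in zip(hi, lo))

def multiplication_blend(hi, lo):
    """Multiply the RGB values of overlapping pixels and divide by 255."""
    return tuple(h * l // 255 for h, l in zip(hi, lo))
```

Note that the addition blend brightens the overlap (it can only saturate upward), while the multiplication blend darkens it, since each factor is at most 255/255.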
  • In the "edge removal", the transmittance is not changed between the high-priority image and the low-priority image; the high-priority image is superimposed on the low-priority image, and the portion of the lower-layer image corresponding to the boundary of the overlapping area is removed before drawing. In the "transmittance change", the high-priority image is superimposed on an upper layer of the low-priority image and drawn with a relatively higher transmittance than the low-priority image.
  • This "transmittance change" may be combined with the "addition blend", the "multiplication blend", the "edge removal", or the like.
  • Alternatively, the low-priority image may be superimposed on an upper layer of the high-priority image while the transmittance of the high-priority image is kept relatively higher than that of the low-priority image. Combining this with the "addition blend" or the "multiplication blend" makes the low-priority image easier to perceive.
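Under the backlit-display model used here, where a higher transmittance lets more of the common backlight through and so raises the apparent brightness, the "transmittance change" can be approximated per pixel as scaling by a transmittance factor. The pixel format and the treatment of pure black as "nothing drawn" are assumptions of this sketch:

```python
def with_transmittance(rgb, t):
    """Scale a pixel by transmittance t in [0, 1]; a higher t means a
    brighter-looking pixel on the common backlight."""
    return tuple(round(c * t) for c in rgb)

def transmittance_change(hi, lo, t_hi=1.0, t_lo=0.5):
    """Overwrite variant of FIG. 8D: the high-priority pixel is drawn on
    top with a relatively higher transmittance. Pure black (0, 0, 0) is
    treated as 'nothing drawn' in this sketch."""
    bright_hi = with_transmittance(hi, t_hi)
    dimmed_lo = with_transmittance(lo, t_lo)
    return bright_hi if hi != (0, 0, 0) else dimmed_lo
```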
  • When the layers of each group are overlaid, it is preferable that the drawing control unit 211 superimposes and draws the layers without performing the combining process if the display positions of the images on the respective layers do not overlap one another.
  • Within a layer, the drawing control unit 211 may be configured to draw a newly displayed image on an upper layer, or to draw the image of a target closer to the own vehicle on an upper layer. Further, when 3D graphic drawing processing is performed as the drawing processing, hidden surface processing used in 3D graphics technology may be used to erase the portions of an image that cannot be seen from a specific viewpoint before drawing. When hidden surface processing is used, the eye point is set as the specific viewpoint, the hidden surfaces that cannot be seen from the eye point are obtained from all the pixels of the images to be displayed by, for example, the Z-buffer algorithm, and the range visible from the driver is thereby estimated.
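The Z-buffer algorithm mentioned above keeps, for each pixel, only the fragment nearest to the eye point; a minimal sketch:

```python
def z_buffer(width, height, fragments):
    """fragments: iterable of (x, y, depth, color), with a smaller depth
    meaning nearer to the eye point. Returns the per-pixel visible
    colors (None where nothing is drawn); nearer fragments hide
    farther ones, which is the hidden-surface removal."""
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:   # nearer than what is stored so far
            depth[y][x] = z
            color[y][x] = c
    return color
```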
  • The processing shown in FIG. 9 may be configured to start when the power of the HUD device 220 is turned on and the function of the HUD device 220 is switched on.
  • the function of the HUD device 220 may be switched on and off in accordance with an input operation received by the operation device 21.
  • the power supply of the HUD device 220 may be switched on and off in accordance with the on / off of a switch (hereinafter, a power switch) for starting the internal combustion engine or the motor generator of the own vehicle.
  • First, in S1, the determination information acquisition unit 201 selectively acquires content determination information from the various information output to the in-vehicle LAN.
  • In S2, if, based on the content determination information acquired in S1, at least one of the emergency related unit 212, the dynamic target related unit 213, the static target related unit 214, the road surface related unit 215, and the extended information related unit 216 sends a display request to the arbitration unit 217 (YES in S2), the process proceeds to S3. Otherwise (NO in S2), the process moves to S8.
  • The arbitration unit 217 arbitrates the images of each group based on the plurality of display requests sent to it, and returns a control instruction to the drawing control unit 211 to increase the visibility of the images of the higher-priority group.
  • the drawing control unit 211 performs a drawing process by combining images to be drawn on each layer according to the control instruction returned from the arbitration unit 217, and proceeds to S8.
  • the arbitration unit 217 returns a display permission control instruction for permitting the transmitted display request to the drawing control unit 211.
  • the drawing control unit 211 draws only one group of images requested to be displayed in accordance with the display permission control instruction returned from the arbitration unit 217, and proceeds to S8.
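The branch structure from S2 onward, in which arbitration only runs when more than one display request arrives, can be condensed as follows (the function names and request representation are illustrative assumptions):

```python
def display_cycle(requests, arbitrate, permit, draw):
    """requests: the display requests produced from the content
    determination information in S1/S2."""
    if not requests:
        return None                 # NO in S2: nothing to draw
    if len(requests) > 1:
        # Plural requests: arbitrate, then draw according to the
        # returned control instruction (layer order / transmittance).
        instruction = arbitrate(requests)
    else:
        # A single request is permitted without arbitration.
        instruction = permit(requests[0])
    return draw(requests, instruction)
```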
  • As described above, images classified in units of display item types into a plurality of groups, namely the emergency group, the dynamic target group, the static target group, the road surface group, and the extended information group, are drawn on the single display 221 with the layers separated by group and with the visibility of the images differing for each layer. Therefore, even when a plurality of images of different display item types are projected onto a common projection member to display their respective virtual images in a composite manner on the foreground of the vehicle, the driver can distinguish and recognize at least the images of the emergency group, the dynamic target group, the static target group, the road surface group, and the extended information group.
  • the classification of the dynamic target group, the static target group, and the road surface group is a grouping that adds the viewpoint of the space in front of the vehicle, which is unique to the superimposed display on the foreground of the vehicle. Therefore, since the driver can distinguish and recognize at least each of the groups including the viewpoint of the space in front of the vehicle, the driver can easily recognize the intended content of each image in the superimposed display.
  • When virtual images are superimposed on the foreground of the vehicle by projecting images onto the projection member, this prevents the intended content of each image from becoming difficult to convey to the driver, even when a plurality of images of different display item types are projected onto the common projection member to display their respective virtual images in a composite manner.
  • With the configuration of the first embodiment, the visibility of the images can be differentiated by making the transmittance of the images different for each group. Therefore, although the images of all groups are drawn on the single display 221, the apparent brightness of each group can easily be switched individually.
  • Although the description of display control for images with a fixed attribute has been omitted, images with a fixed attribute may be drawn on a layer separate from images with a moving attribute.
  • For example, a configuration may be adopted in which the layer of images with a fixed attribute is drawn above the layer of images with a moving attribute, or in which the layer of images with a moving attribute is drawn above the layer of images with a fixed attribute.
  • (Embodiment 2) In the first embodiment, the configuration in which the layers are divided into the five groups of the emergency group, the dynamic target group, the static target group, the road surface group, and the extended information group has been described, but the present disclosure is not necessarily limited to this.
  • For example, a configuration may be adopted in which the layers are divided by display item type. In this case, drawing is performed on a separate layer for each display item type, namely "emergency", "warning", "caution", "response", "dynamic target", "static target", "road surface", and "information".
  • the drawing control unit 211 may have a functional block for each type of display item for this purpose.
  • In this case, the priority of the images for each display item may be set, in descending order, to "emergency", "warning", "caution", "response", "dynamic target", "static target", "road surface", and "information".
  • a configuration may be adopted in which the type of the display item is added to the display request.
  • In the above embodiments, the configuration in which the priority for increasing visibility is set in the order of the dynamic target group, the static target group, and the road surface group has been described, but the present disclosure is not necessarily limited to this.
  • For example, the priority order for increasing visibility need not be the order of the dynamic target group, the static target group, and the road surface group.
  • a function related to display control in the HUD device 220 may be performed by a control circuit provided in the HUD device 220, or may be performed by a control circuit provided in the combination meter.
  • The control unit and the method described in the present disclosure may be realized by a dedicated computer constituting a processor programmed to execute one or more functions embodied by computer programs.
  • Alternatively, the control unit and the method described in the present disclosure may be realized by a dedicated computer constituting a processor with dedicated hardware logic circuits.
  • Alternatively, the control unit and the method described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor executing a computer program and one or more hardware logic circuits.
  • the computer program may be stored in a computer-readable non-transitional tangible recording medium as instructions to be executed by a computer.
  • In the flowchart, each step is denoted as, for example, S1. Each step can be divided into a plurality of sub-steps, and a plurality of steps can be combined into one step.
  • Although the embodiments, configurations, and aspects of the vehicle display control device, the vehicle display control method, and the control program according to the present disclosure have been exemplified above, the embodiments, configurations, and aspects of the present disclosure are not limited to the respective embodiments, configurations, and aspects described above.
  • embodiments, configurations, and aspects obtained by appropriately combining technical parts disclosed in different embodiments, configurations, and aspects are also included in the scope of the embodiments, configurations, and aspects according to the present disclosure.

Abstract

This vehicle display control device which controls a head-up display device comprises: a determination information acquisition unit (201) that acquires content determination information, which is information for determining the image content to be drawn on a display; and a drawing control unit (211) that draws a plurality of images classified by type of display item on the single display in separate layers for each group according to the content determination information acquired by the determination information acquisition unit, the plurality of groups into which the images are classified including a group of dynamic-target images, a group of static-target images, and a group of road-surface images. The drawing control unit performs drawing so that the image visibility varies among the layers on which the images are drawn on the display.

Description

VEHICLE DISPLAY CONTROL DEVICE, VEHICLE DISPLAY CONTROL METHOD, AND CONTROL PROGRAM

Cross-Reference of Related Applications
This application is based on Japanese Patent Application No. 2018-124010 filed on June 29, 2018, the contents of which are incorporated herein by reference.
The present disclosure relates to a vehicle display control device, a vehicle display control method, and a control program.
A head-up display (hereinafter, HUD) that superimposes a virtual image on the foreground of a vehicle by projecting an image onto a projection member such as a windshield is known. For example, Patent Literature 1 discloses a technique in which an information communicator, such as a guide arrow serving as travel guidance information, is displayed on a head-up display so as to overlap the road at an intersection. Patent Literature 1 further discloses a technique in which, when an obstacle present in front of the vehicle overlaps the information communicator, the display position of the information communicator is shifted, or the portion of the information communicator's display that overlaps the obstacle is erased.
JP 4085928 B
The technique disclosed in Patent Literature 1 assumes that the display position of an information communicator overlaps an obstacle present in front of the vehicle, but does not assume a case where the display positions of a plurality of types of information communicators overlap one another. Therefore, with the technique of Patent Literature 1, when the display positions of a plurality of types of information communicators overlap, the respective images mix together and become difficult to distinguish, and the intended content of each display may become difficult to convey to the driver.
The present disclosure aims to provide a vehicle display control device, a vehicle display control method, and a control program that, when a virtual image is superimposed on the foreground of a vehicle by projecting an image onto a projection member, suppress the difficulty of conveying the intended content of each image to the driver, even when a plurality of images of different display item types are projected onto a common projection member to display their respective virtual images in a composite manner on the foreground of the vehicle.
According to one aspect of the present disclosure, a vehicle display control device is used in a vehicle and controls a head-up display device that superimposes a virtual image on the foreground of the vehicle by projecting an image drawn on a display onto a projection member. The vehicle display control device includes: a determination information acquisition unit that acquires content determination information, which is information for determining the content of the images to be drawn on the display; and a drawing control unit that, according to the content determination information acquired by the determination information acquisition unit, draws on the single display a plurality of groups of images classified, in units of display item types, into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for road surfaces, with the layers separated by group. The drawing control unit performs the drawing so that the visibility of the images differs for each layer on which the images are drawn on the display.
According to another aspect of the present disclosure, a vehicle display control method is used in a vehicle and controls a head-up display device that superimposes a virtual image on the foreground of the vehicle by projecting an image drawn on a display onto a projection member. The vehicle display control method acquires content determination information, which is information for determining the content of the images to be drawn on the display, and, according to the acquired content determination information, draws on the single display a plurality of groups of images classified, in units of display item types, into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for road surfaces, with the layers separated by group, and performs the drawing so that the visibility of the images differs for each layer on which the images are drawn on the display.
According to another aspect of the present disclosure, a control program causes a computer to function as: a determination information acquisition unit that acquires content determination information, which is information for determining the content of the images to be drawn on the display of a head-up display device that is used in a vehicle and superimposes a virtual image on the foreground of the vehicle by projecting an image drawn on the display onto a projection member; and a drawing control unit that, according to the content determination information acquired by the determination information acquisition unit, draws on the single display a plurality of groups of images classified, in units of display item types, into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for road surfaces, with the layers separated by group, and that performs the drawing so that the visibility of the images differs for each layer on which the images are drawn on the display.
 According to the present disclosure, images belonging to multiple groups classified per type of display item, including at least a group of images for dynamic targets, a group of images for static targets, and a group of images for the road surface, are drawn on a single display with a separate layer for each group and with the visibility of the images differing for each layer. Therefore, even when multiple images of different display item types are projected onto a common projection member so that their respective virtual images are displayed together in the foreground of the vehicle, the driver can distinguish and recognize at least the group of images for dynamic targets, the group of images for static targets, and the group of images for the road surface. This classification into dynamic-target, static-target, and road-surface image groups is a grouping that incorporates the viewpoint of the space ahead of the vehicle, a viewpoint characteristic of superimposed display on the vehicle's foreground. Accordingly, because the driver can distinguish and recognize at least each of these groups reflecting the space ahead of the vehicle, the driver can more easily recognize the intended content of each image in the superimposed display. As a result, when virtual images are superimposed on the foreground of the vehicle by projecting images onto the projection member, it is possible to prevent the intended content of each image from becoming difficult to convey to the driver, even when multiple images of different display item types are projected onto a common projection member and their respective virtual images are displayed together in the foreground.
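The group-per-layer drawing with per-layer visibility described above can be illustrated with a minimal sketch. Everything below (the `Layer` representation as a position-to-color mapping, the `compose` function, and the alpha value chosen for each group) is an illustrative assumption rather than the disclosure's implementation; the sketch only shows each display-item group being drawn on its own layer and the layers being blended, each with a distinct visibility, into one display buffer.

```python
# Minimal sketch of group-per-layer drawing with per-layer visibility.
# Group names follow the disclosure; alpha values are illustrative assumptions.

GROUP_ALPHA = {          # hypothetical visibility (opacity) per group layer
    "dynamic_target": 0.9,
    "static_target": 0.7,
    "road_surface": 0.5,
}

def draw_group(layer, images):
    """Draw each image of one group onto that group's own layer."""
    for pos, color in images:
        layer[pos] = color
    return layer

def compose(layers, order):
    """Blend the group layers, in a fixed order, into one display buffer."""
    display = {}
    for group in order:
        alpha = GROUP_ALPHA[group]
        for pos, color in layers[group].items():
            base = display.get(pos, (0.0, 0.0, 0.0))
            display[pos] = tuple(
                alpha * c + (1.0 - alpha) * b for c, b in zip(color, base)
            )
    return display

# One layer per group: images of different display-item types never share a layer.
layers = {g: {} for g in GROUP_ALPHA}
draw_group(layers["dynamic_target"], [((3, 4), (1.0, 0.0, 0.0))])
draw_group(layers["road_surface"], [((3, 4), (0.0, 0.0, 1.0))])
frame = compose(layers, order=["road_surface", "static_target", "dynamic_target"])
```

Because the dynamic-target layer is composited last with the highest opacity, a dynamic-target image drawn at the same position as a road-surface image remains dominant in the final frame, which is one way the groups stay visually distinguishable.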
 The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings. In the accompanying drawings:
FIG. 1 is a diagram illustrating an example of a schematic configuration of a vehicle system;
FIG. 2 is a diagram illustrating an example of mounting of a HUD device in a vehicle;
FIG. 3 is a diagram illustrating an example of a schematic configuration of an HCU;
FIG. 4 is a diagram illustrating an example of types of display items;
FIG. 5 is a diagram illustrating an example of grouping of display items;
FIG. 6 is a diagram illustrating an example of superimposed display;
FIG. 7 is a diagram illustrating an example of superimposed display;
FIG. 8 is a diagram for describing an example of a combining process that combines images drawn on the respective layers;
FIG. 9 is a diagram for describing an example of a combining process that combines images drawn on the respective layers;
FIG. 10 is a diagram for describing an example of a combining process that combines images drawn on the respective layers;
FIG. 11 is a diagram for describing an example of a combining process that combines images drawn on the respective layers; and
FIG. 12 is a flowchart illustrating an example of the flow of virtual-image display control related processing in the HCU.
 A plurality of embodiments of the present disclosure will be described with reference to the drawings. For convenience of description, among the plurality of embodiments, parts having the same functions as parts shown in the drawings used in the preceding description are denoted by the same reference numerals, and their description may be omitted. For parts denoted by the same reference numerals, the description in the other embodiments can be referred to.
 (Embodiment 1)
 (Schematic Configuration of Vehicle System 1)
 Hereinafter, the present embodiment will be described with reference to the drawings. The vehicle system 1 is used in a vehicle, such as an automobile, that travels on a road. As an example, as shown in FIG. 1, the vehicle system 1 includes an HMI (Human Machine Interface) system 2, an ADAS (Advanced Driver Assistance Systems) locator 3, a periphery monitoring sensor 4, a vehicle state sensor 5, a vehicle control ECU 6, a navigation device 7, and an automatic driving ECU 8. The HMI system 2, the ADAS locator 3, the periphery monitoring sensor 4, the vehicle state sensor 5, the vehicle control ECU 6, the navigation device 7, and the automatic driving ECU 8 are assumed to be connected to, for example, an in-vehicle LAN.
 The ADAS locator 3 includes a GNSS (Global Navigation Satellite System) receiver and an inertial sensor. The GNSS receiver receives positioning signals from a plurality of positioning satellites. The inertial sensor includes, for example, a gyro sensor and an acceleration sensor. The ADAS locator 3 sequentially measures the position of the own vehicle by combining the positioning signals received by the GNSS receiver with the measurement results of the inertial sensor. Note that the vehicle position may also be measured using a traveling distance or the like obtained from detection results sequentially output from a vehicle speed sensor mounted on the own vehicle. The measured vehicle position is then output to the in-vehicle LAN.
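The combination of GNSS fixes with inertial measurements described here can be sketched as simple dead reckoning between fixes. The one-dimensional state and all names below (`Locator`, `on_inertial`, `on_gnss_fix`, the blending `weight`) are illustrative assumptions; a production locator would fuse the sensors in two or three dimensions, typically with a Kalman filter.

```python
# Illustrative 1-D dead-reckoning sketch: inertial updates carry the
# position forward between GNSS fixes. Hypothetical names throughout.

class Locator:
    def __init__(self, position=0.0, velocity=0.0):
        self.position = position
        self.velocity = velocity

    def on_inertial(self, accel, dt):
        # Integrate acceleration to velocity, and velocity to position.
        self.velocity += accel * dt
        self.position += self.velocity * dt

    def on_gnss_fix(self, measured_position, weight=0.8):
        # Pull the dead-reckoned position toward the GNSS measurement.
        self.position += weight * (measured_position - self.position)

loc = Locator()
loc.on_inertial(accel=2.0, dt=1.0)      # v = 2.0, p = 2.0
loc.on_inertial(accel=0.0, dt=1.0)      # p = 4.0
loc.on_gnss_fix(measured_position=5.0)  # p = 4.0 + 0.8 * (5.0 - 4.0) = 4.8
```

The vehicle-speed-sensor variant mentioned in the text would replace `on_inertial` with an odometry update using the traveling distance per cycle.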
 The ADAS locator 3 may be configured to include, as map data, a map database (hereinafter, map DB) that stores a three-dimensional map consisting of point clouds of feature points of road shapes and structures. When this three-dimensional map is used, the ADAS locator 3 may be configured to identify the position of the own vehicle without using the GNSS receiver, by using the three-dimensional map together with the detection results of a periphery monitoring sensor 4, such as LIDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging), that detects the point clouds of feature points of road shapes and structures. The map data of the three-dimensional map may also be acquired from outside the own vehicle via a communication module.
 The periphery monitoring sensor 4 is an autonomous sensor that monitors the surrounding environment of the own vehicle. As an example, the periphery monitoring sensor 4 detects obstacles around the own vehicle: moving dynamic targets such as pedestrians, animals other than humans, and vehicles other than the own vehicle, and stationary static targets such as fallen objects on the road, guardrails, curbs, and trees. It also detects road markings, such as lane markings, around the own vehicle. The periphery monitoring sensor 4 is, for example, a periphery monitoring camera that captures images of a predetermined range around the own vehicle, or a sensor such as a millimeter wave radar, sonar, or LIDAR that transmits search waves to a predetermined range around the own vehicle. The periphery monitoring camera sequentially outputs the captured images to the in-vehicle LAN as sensing information. Sensors that transmit search waves, such as sonar, millimeter wave radar, and LIDAR, sequentially output to the in-vehicle LAN, as sensing information, scanning results based on the reception signals obtained when the waves reflected by detection targets are received.
 The vehicle state sensor 5 is a group of sensors for detecting various states of the own vehicle. The vehicle state sensor 5 includes a vehicle speed sensor that detects the vehicle speed of the own vehicle, a steering sensor that detects the steering angle of the own vehicle, an accelerator position sensor that detects the opening degree of the accelerator pedal of the own vehicle, and a brake pedal force sensor that detects the depression amount of the brake pedal of the own vehicle. The vehicle state sensor 5 outputs the detected sensing information to the in-vehicle LAN. Note that the sensing information detected by the vehicle state sensor 5 may be output to the in-vehicle LAN via an ECU mounted on the own vehicle.
 The vehicle control ECU 6 is an electronic control unit that performs acceleration/deceleration control and/or steering control of the own vehicle. Examples of the vehicle control ECU 6 include a steering ECU that performs steering control, and a power unit control ECU and a brake ECU that perform acceleration/deceleration control. The vehicle control ECU 6 acquires detection signals output from sensors mounted on the own vehicle, such as an accelerator position sensor, a brake pedal force sensor, a steering angle sensor, and a wheel speed sensor, and outputs control signals to travel control devices such as an electronically controlled throttle, a brake actuator, and an EPS (Electric Power Steering) motor. The vehicle control ECU 6 can also output the detection signals of the above-described sensors to the in-vehicle LAN.
 The navigation device 7 includes a map DB storing map data, searches for a route to a set destination that satisfies conditions such as time priority or distance priority, and performs route guidance according to the found route. The map DB is a non-volatile memory and stores map data such as link data, segment data, node data, and road shapes. Note that the map data may also include a three-dimensional map consisting of point clouds of feature points of road shapes and structures.
 The automatic driving ECU 8 executes an automatic driving function that performs driving operations on behalf of the driver by controlling the vehicle control ECU 6. The automatic driving ECU 8 recognizes the traveling environment of the own vehicle based on the vehicle position and the map data of the three-dimensional map acquired from the ADAS locator 3, the sensing information from the periphery monitoring sensor 4, and the map data acquired from the navigation device 7. As an example, from the sensing information of the periphery monitoring sensor 4, it recognizes the shapes and movement states of objects around the own vehicle and the shapes of road markings around the own vehicle. Then, by combining these with the vehicle position of the own vehicle and the map data, it generates a virtual space that reproduces the actual traveling environment in three dimensions.
 The automatic driving ECU 8 generates a travel plan for automatically driving the own vehicle with the automatic driving function, based on the recognized traveling environment. As the travel plan, a medium-to-long-term travel plan and a short-term travel plan are generated. The medium-to-long-term travel plan defines a route for directing the own vehicle to the set destination; the route found by the navigation device 7 may be used for this plan. The short-term travel plan uses the generated virtual space around the own vehicle to define a planned travel trajectory for realizing travel according to the medium-to-long-term travel plan. Specifically, execution of steering for lane following and lane changes, acceleration and deceleration for speed adjustment, and sudden braking for collision avoidance is determined based on the short-term travel plan.
 The HMI system 2 includes an HCU (Human Machine Interface Control Unit) 20, an operation device 21, and a display device 22; it accepts input operations from the driver, who is the user of the own vehicle, and presents information to the driver of the own vehicle. The operation device 21 is a group of switches operated by the driver of the own vehicle and is used for making various settings. For example, the operation device 21 includes steering switches provided on a spoke portion of the steering wheel of the own vehicle. A head-up display (HUD) device 220 is used as the display device 22. The HUD device 220 will now be described with reference to FIG. 2.
 As shown in FIG. 2, the HUD device 220 is provided in the instrument panel 12 of the own vehicle and has a display 221, a plane mirror 222, and a concave mirror 223. The HUD device 220 projects an image onto the front windshield 10 under the control of the HCU 20. The HUD device 220 projects the display image formed by the display 221, through an optical system including the plane mirror 222 and the concave mirror 223, onto a projection region defined on the front windshield 10 serving as a projection member. More specifically, the image displayed on the display 221 (that is, the display image) is reflected by the plane mirror 222, then magnified by the concave mirror 223, and projected onto the projection region defined on the front windshield 10. The projection region is located, for example, in front of the driver's seat. The display 221 is, for example, a TFT liquid crystal panel, and outputs the light of the display image drawn on the liquid crystal panel by transmitting light from a backlight.
 The light flux of the display image reflected toward the vehicle interior by the front windshield 10 is perceived by the driver seated in the driver's seat. The light flux from the foreground, that is, the scenery existing ahead of the own vehicle, which passes through the front windshield 10 formed of translucent glass, is also perceived by the driver seated in the driver's seat. This allows the driver to visually recognize the virtual image 100 of the display image, formed in front of the front windshield 10, superimposed on part of the foreground. In other words, the HUD device 220 superimposes the virtual image 100 on the foreground of the own vehicle, realizing a so-called AR (Augmented Reality) display.
 Display images include images with a movement attribute, whose position changes in accordance with the target in the foreground on which they are superimposed, and images with a fixed attribute, whose position is determined regardless of the foreground. The details of the movement attribute images will be described later. Fixed attribute images include images showing information from the instruments of the own vehicle, images showing the setting states of functions of the own vehicle such as the automatic driving function, images showing the operating state of the automatic driving function, images showing the content of traffic signs on the road being traveled, and images showing the monitoring state of the driver. An example of an image showing instrument information is an image showing the vehicle speed value. Examples of images showing the setting states of the own vehicle's functions include icon images indicating whether an automatic driving function such as the ACC (Adaptive Cruise Control) function is on or off, or whether the route guidance function is on or off. Examples of images showing the operating state of the automatic driving function include icon images indicating whether the automatic driving function is operating. Examples of images showing regulation information for the road being traveled include icon images showing the content of road signs such as speed limit signs. Examples of images showing the monitoring state of the driver include icon images indicating whether the driver is gripping the steering wheel. Note that the fixed attribute display is not limited to the examples given here.
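The distinction between movement attribute and fixed attribute display images can be captured with a small data model. The class and field names below are illustrative assumptions, not part of the disclosure; the only point shown is that a movement attribute image derives its drawing position from the foreground target each frame, while a fixed attribute image keeps a preset position regardless of the foreground.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DisplayImage:
    # Hypothetical model: "moving" images follow a foreground target,
    # "fixed" images keep a preset on-screen position.
    name: str
    attribute: str                         # "moving" or "fixed"
    fixed_pos: Optional[Tuple[int, int]] = None

    def position(self, target_pos=None):
        if self.attribute == "moving":
            return target_pos              # tracks the superimposition target
        return self.fixed_pos              # e.g. a speed readout location

speed = DisplayImage("vehicle_speed", "fixed", fixed_pos=(10, 220))
marker = DisplayImage("pedestrian_marker", "moving")
```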
 In the present embodiment, the description takes as an example a case where the display 221 is tilted with respect to the optical system including the plane mirror 222 and the concave mirror 223, so that the virtual image 100 is displayed tilted in the front-rear direction of the own vehicle, with the upper part of the virtual image 100, as seen from the driver, appearing farther away than the lower part. This makes it easier to make the virtual image 100 appear to extend from the near side to the far side, without using two displays, one for displaying the virtual image 100 so that it appears on the driver's near side and another for displaying it so that it appears on the driver's far side. This configuration is not essential; the virtual image 100 may also be displayed without being tilted in the front-rear direction of the own vehicle.
 In the present embodiment, the following description takes as an example the case where the display 221 is a TFT liquid crystal panel, but this is not necessarily limiting. For example, a display of another type may be used as long as it can draw the display image, and the optical system is not limited to the one described above; for instance, a reflective screen may be used instead of the plane mirror 222. The projection member onto which the HUD device 220 projects the display image is not limited to the front windshield 10 and may be a translucent combiner. In addition to the HUD device 220, another device that displays images may be used as the display device 22. Examples include the display of a combination meter and a CID (Center Information Display).
 The HCU 20 is mainly composed of a microcomputer including a processor, volatile memory, non-volatile memory, I/O, and a bus connecting them, and is connected to the HUD device 220. The HCU 20 controls the display on the HUD device 220 by executing a control program stored in the non-volatile memory. The HCU 20 corresponds to the vehicle display control device. Execution of this control program by the processor corresponds to execution of the vehicle display control method corresponding to the control program. The memory referred to here is a non-transitory tangible storage medium that non-temporarily stores computer-readable programs and data. The non-transitory tangible storage medium is realized by a semiconductor memory, a magnetic disk, or the like. The configuration of the HCU 20 relating to display control by the HUD device 220 will be described in detail below.
 (Schematic Configuration of HCU 20)
 Here, the schematic configuration of the HCU 20 will be described with reference to FIG. 3. Regarding display control in the HUD device 220, the HCU 20 includes an information processing block 200 and a display control block 210 as functional blocks, as shown in FIG. 3. In the following, for convenience, of the movement attribute images and fixed attribute images, the description of display control for fixed attribute images is omitted, and display control for movement attribute images is described. Note that some or all of the functions executed by the HCU 20 may be configured in hardware by one or more ICs or the like. Some or all of the functional blocks included in the HCU 20 may also be realized by a combination of software execution by a processor and hardware members.
 The information processing block 200 selectively acquires, from the various information output to the in-vehicle LAN, the information necessary for display control in the display control block 210. In addition, the information processing block 200 performs processing to convert the acquired information into a state suitable for display control. As shown in FIG. 3, the information processing block 200 includes a determination information acquisition unit 201 as a sub-functional block.
 The determination information acquisition unit 201 acquires the information necessary for determining the content to be displayed by the HUD device 220. In other words, it acquires information for determining the content of the image to be drawn on the display 221 of the HUD device 220 (hereinafter, content determination information). Examples of the content determination information include the vehicle position output from the ADAS locator 3, the route found by the navigation device 7, the map data stored in the map DB, the traveling environment recognized by the automatic driving ECU 8, the travel plan generated by the automatic driving ECU 8, operation input information from the operation device 21, and the setting and operation information of the automatic driving function in the automatic driving ECU 8.
 The display control block 210 performs control relating to the display on the HUD device 220 (that is, display control) based on the information acquired by the information processing block 200. As shown in FIG. 3, the display control block 210 includes a drawing control unit 211 and an arbitration unit 217 as sub-functional blocks.
 The drawing control unit 211 generates image data of the image to be drawn on the display 221 of the HUD device 220 based on the information acquired by the information processing block 200. By outputting the generated image data to the display 221, the drawing control unit 211 draws the movement attribute image on the display 221. The drawing control unit 211 changes the content and arrangement of the display image in accordance with the content determination information acquired by the determination information acquisition unit 201.
 The drawing control unit 211 draws images of multiple groups, classified into a plurality of groups per type of display item, on the single display 221 with a separate layer for each group. As shown in FIG. 3, the drawing control unit 211 has an emergency related unit 212, a dynamic target related unit 213, a static target related unit 214, a road surface related unit 215, and an extended information related unit 216, each of which draws the images of a different group. In other words, the emergency related unit 212, the dynamic target related unit 213, the static target related unit 214, the road surface related unit 215, and the extended information related unit 216 each draw on a different layer.
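A minimal sketch of how a drawing control unit could route each image to the layer owned by its group. The five group names follow the five related units just described, while the dispatch code itself, the layer indices, and the example image names are illustrative assumptions.

```python
# Each related unit owns one layer; an image is routed by its group.
LAYER_OF = {
    "emergency": 0,        # emergency related unit 212
    "dynamic_target": 1,   # dynamic target related unit 213
    "static_target": 2,    # static target related unit 214
    "road_surface": 3,     # road surface related unit 215
    "extended_info": 4,    # extended information related unit 216
}

def route(images):
    """Collect, per layer, the images that the owning unit will draw."""
    layers = {i: [] for i in LAYER_OF.values()}
    for group, image in images:
        layers[LAYER_OF[group]].append(image)
    return layers

layers = route([("dynamic_target", "preceding_car_frame"),
                ("road_surface", "route_arrow"),
                ("dynamic_target", "pedestrian_marker")])
```

Keeping the routing table in one place means an image can never end up on the layer of another group, which mirrors the one-unit-per-layer division of the drawing control unit 211.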
 Here, the plurality of groups classified per type of display item will be described with reference to FIGS. 4 to 7. First, as shown in FIG. 4, the types of display items are divided into the broad categories of warning type, response type, and information type. Warning type display items are display items that display content to which the driver has a high need to respond immediately in order to avoid danger. Warning type display items are further subdivided by degree of urgency into "Emergency", "Warning", and "Caution". Among the display items "Emergency", "Warning", and "Caution", "Emergency" has the highest urgency, followed by "Warning", with "Caution" having the lowest urgency.
 The display item "Emergency" is, for example, a display that gives the driver an emergency warning and prompts an operation. Examples include displays requesting a driving operation from the driver for crisis avoidance, such as a warning from the collision damage mitigation braking function or a warning from the forward collision prediction warning function.
 The display item "Warning" is, for example, a display that warns the driver about information involving a risk of an accident. An example is the highlighted display of information according to the distance to an imminently dangerous obstacle, such as another vehicle cutting in from a blind spot or an approaching pedestrian.
 The display item "Caution" is, for example, a display that calls the driver's attention. An example is an alerting display about information presumed to be outside the driver's awareness, such as an area that a driver monitoring system has detected the driver failed to check. Warning type display items may be displayed, for example, in or near the area targeted for crisis avoidance.
 Response-type display items convey feedback on the driver's operations. They are less urgent than warning-type display items. The response type has one display item, "Response". One example of "Response" is a display conveying a change to the following-distance setting of an ACC (Adaptive Cruise Control) function. In this case, the display conveying the following-distance change may be superimposed on the preceding vehicle, on the host vehicle's lane, or in the area between the host vehicle and the preceding vehicle.
 Notification-type display items present content that the driver has little need to respond to immediately to avoid danger, or content unrelated to danger avoidance. They are less urgent than warning-type and response-type display items. Notification-type display items are further subdivided by degree of urgency into "Notice" and "Information", with "Information" being less urgent than "Notice".
 The display item "Notice" relates to the host vehicle's movement and to safe driving, but presents content that the driver has little need to respond to immediately to avoid danger; for example, a notification that directs the driver's attention to something. "Notice" is further subdivided, from a spatial viewpoint of the area ahead of the vehicle, into "Dynamic" (dynamic targets), "Static" (static targets), and "Road" (road surface).
 The display item "Dynamic" covers displays that change significantly over time and concern dynamic targets such as surrounding vehicles and pedestrians. Examples include displays notifying the driver of the presence of a dynamic target, such as a marker pointing at the target or a ripple-like display spreading along the road surface around it. Another example is a display of the time gap to the vehicle ahead, superimposed on the preceding vehicle, between the host vehicle and the preceding vehicle, or on the host vehicle's lane. A further example is a display showing a pedestrian recognition result from a night vision system, pointing at or framing the pedestrian.
 The display item "Static" covers displays that change little over time and concern static targets such as signs, roadside objects, and fallen objects. Examples include displays notifying the driver of the presence of a static target, such as a marker pointing at the target or a frame surrounding it.
 The display item "Road" covers displays that change little over time and concern the road surface, such as routes and lanes. One example is a line- or sheet-shaped display laid along the road surface of the planned route to indicate the host vehicle's turn-by-turn (hereinafter, TBT) route. Other examples, used for lane guidance indicating the host vehicle's lane, include a display that makes the lane markings of the planned lane appear to glow and a line- or sheet-shaped display indicating the extent of the lane along its shape.
 The display item "Information" gives information to the driver and presents content not directly related to danger avoidance. Examples include providing the driver with information on nearby facilities unrelated to the planned route, or suggesting a rest break. In this case, the facility information may be superimposed on the facility itself, and a rest suggestion may be superimposed on the facility being proposed.
 In the present embodiment, these display items are assigned priorities based not only on the urgency of the information itself but also on a spatial viewpoint of the area ahead of the vehicle, and are classified into five groups. As shown in FIG. 5, the five groups are, in descending order of priority: the emergency group, the dynamic target group, the static target group, the road surface group, and the extended information group.
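As a compact summary of this classification, the mapping from display items to the five priority-ordered groups can be sketched as follows (the identifier names and numeric priority values are illustrative assumptions, not taken from the specification):

```python
from enum import IntEnum

class Group(IntEnum):
    """The five groups of FIG. 5; a higher value means higher priority."""
    EXTENDED_INFO = 1   # extended information group (lowest priority)
    ROAD_SURFACE = 2    # road surface group
    STATIC_TARGET = 3   # static target group
    DYNAMIC_TARGET = 4  # dynamic target group
    EMERGENCY = 5       # emergency group (highest priority)

# Each display item belongs to exactly one group.
ITEM_TO_GROUP = {
    "Emergency": Group.EMERGENCY,
    "Warning": Group.EMERGENCY,
    "Caution": Group.EMERGENCY,
    "Response": Group.EMERGENCY,
    "Dynamic": Group.DYNAMIC_TARGET,
    "Static": Group.STATIC_TARGET,
    "Road": Group.ROAD_SURFACE,
    "Information": Group.EXTENDED_INFO,
}
```

Because `Group` is an `IntEnum`, the priority comparison between any two display items reduces to an integer comparison on their groups.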
 The emergency group is the group of images for display items that are more urgent than those of the other groups. It contains the images of the warning-type display items "Emergency", "Warning", and "Caution" and of the response-type display item "Response". The emergency-related unit 212 is responsible for drawing the images classified into the emergency group. Based on the content-determination information acquired by the determination-information acquisition unit 201, the emergency-related unit 212 decides the display content to be shown as emergency-group images. For example, when the content-determination information indicates that a pedestrian has approached within a distance requiring a warning, it may decide to display a warning-mark image (see Wa in FIG. 6) at the pedestrian's position. FIG. 6 shows an example of images superimposed on the foreground, where Vi indicates the projection area. The emergency-related unit 212 sends the arbitration unit 217 a display request asking for arbitration of the decided display content. Information such as the group type and the position of the display target may be attached to the display request.
 The dynamic target group is the group of images concerning dynamic targets that the driver has little need to respond to immediately to avoid danger. The images of the notification-type display item "Dynamic" are classified into this group. The dynamic-target-related unit 213 draws the images classified into the dynamic target group. Based on the content-determination information acquired by the determination-information acquisition unit 201, it decides the display content to be shown as dynamic-target-group images. For example, when the content-determination information indicates the presence of a dynamic target, such as a pedestrian, that has not yet approached within a distance requiring a warning, it may decide to display a ripple-like image (see Om in FIG. 6) at the target's position to indicate its presence. Like the emergency-related unit 212, the dynamic-target-related unit 213 sends a display request to the arbitration unit 217.
 The static target group is the group of images concerning static targets that the driver has little need to respond to immediately to avoid danger. The images of the notification-type display item "Static" are classified into this group. The static-target-related unit 214 draws the images classified into the static target group. Based on the content-determination information acquired by the determination-information acquisition unit 201, it decides the display content to be shown as static-target-group images. For example, when the content-determination information indicates the presence of a static target, such as a fallen object, that has not yet come within a distance requiring a warning, it may decide to display an image indicating the target's presence (see So in FIG. 7) at the target's position. Like the emergency-related unit 212, the static-target-related unit 214 sends a display request to the arbitration unit 217.
 The road surface group is the group of images concerning the road surface that the driver has little need to respond to immediately to avoid danger. The images of the notification-type display item "Road" are classified into this group. The road-surface-related unit 215 draws the images classified into the road surface group. Based on the content-determination information acquired by the determination-information acquisition unit 201, it decides the display content to be shown as road-surface-group images. For example, when the content-determination information includes the host vehicle's planned route, it may decide to display a sheet-shaped image along the road surface of the planned route (see Pa in FIG. 6). Like the emergency-related unit 212, the road-surface-related unit 215 sends a display request to the arbitration unit 217.
 The dynamic target, static target, and road surface groups are prioritized from a spatial viewpoint of the area ahead of the vehicle. Specifically, the road surface group is given lower priority than the dynamic and static target groups because the road surface lies below dynamic and static targets. The dynamic target group is given higher priority than the static target and road surface groups because the position of a dynamic target changes over time more than the road surface or a static target does.
 The extended information group is the group of images for information provision not directly related to danger avoidance. The images of the notification-type display item "Information" are classified into this group. The extended-information-related unit 216 draws the images classified into the extended information group. Based on the content-determination information acquired by the determination-information acquisition unit 201, it decides the display content to be shown as extended-information-group images. For example, when the content-determination information includes information on a nearby facility worth suggesting to the driver, it may decide to display an image indicating the facility's presence (see Ei in FIG. 6) at the facility's position. Like the emergency-related unit 212, the extended-information-related unit 216 sends a display request to the arbitration unit 217.
 Based on the display requests sent from the emergency-related unit 212, dynamic-target-related unit 213, static-target-related unit 214, road-surface-related unit 215, and extended-information-related unit 216 of the drawing control unit 211, the arbitration unit 217 arbitrates among the groups' images and returns a control instruction to the drawing control unit 211. Specifically, based on the group type attached to each display request, it returns a control instruction that increases the visibility of the images of the higher-priority group.
 For example, the arbitration unit 217 may return a control instruction that places the image layers of higher-priority groups higher in the stacking order, or one that makes the transmittance of a higher-priority group's layer relatively higher than that of the other layers, or a combination of both. The relative difference in transmittance may be produced either by raising the transmittance of the higher-priority group's layer or by lowering that of the lower-priority group's layer; lowering the transmittance of the lower-priority group's layer to make the higher-priority group's layer relatively more transmissive is the more preferable of the two.
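A rough sketch of this arbitration step follows. The 0.2 transmittance step and the tuple-based request format are illustrative assumptions by the editor, not values from the specification; the point is only that the group priorities determine both the bottom-to-top stacking order and the relative transmittance of each layer:

```python
def arbitrate(requests):
    """requests: list of (group_name, priority) taken from display requests.

    Returns (group_name, transmittance) pairs in bottom-to-top drawing
    order: lower-priority groups are drawn first (lower layers) and are
    dimmed by lowering their transmittance relative to the topmost,
    highest-priority layer, which keeps full transmittance.
    """
    ordered = sorted(requests, key=lambda r: r[1])  # low priority first
    n = len(ordered)
    return [(name, round(1.0 - 0.2 * (n - 1 - i), 2))
            for i, (name, _prio) in enumerate(ordered)]
```

Note that the transmittance is reduced on the lower-priority layers, matching the preference stated above for lowering the low-priority side rather than boosting the high-priority side.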
 When the stacking order of the layers differs, the images of the groups become easier to tell apart, which increases their visibility. An image whose layer is higher in the stacking order appears closer to the viewer when superimposed on the foreground, and is thus more visible. Likewise, when image transmittances differ, apparent brightness differs, making the groups' images easier to distinguish; the higher an image's transmittance, the brighter it appears and the more visible it is. The reason apparent brightness is varied by changing image transmittance is that the groups' images share a single display 221 with a common backlight, so it is difficult to switch the backlight brightness separately for each group's images.
 Furthermore, to avoid the unnatural appearance of an image for a target farther from the host vehicle (hereinafter, a distal target image) being displayed above an image for a nearer target (hereinafter, a proximal target image), the arbitration unit 217 preferably arbitrates as follows. Using the position of the display target in addition to the group type attached to each display request, when the distal target image's group has higher priority than the proximal target image's group, the arbitration unit 217 does not place the distal target image's layer above the proximal target image's layer; instead, it increases the visibility of the distal target image by making its transmittance relatively higher than that of the proximal target image. Conversely, when the proximal target image's group has the higher priority, it increases visibility by placing the proximal target image's layer above the distal target image's layer. When only a single display request is received, the arbitration unit 217 returns a display-permission control instruction without arbitrating.
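A minimal sketch of this distance-aware rule follows; the dictionary fields and the 0.2 transmittance boost are the editor's illustrative assumptions. The invariant is that the nearer image always keeps the upper layer, and a farther-but-higher-priority image is favored through transmittance rather than stacking order:

```python
def arbitrate_by_distance(distal, proximal):
    """distal/proximal: dicts with 'priority' and 'transmittance' keys.

    The nearer (proximal) image always stays on the upper layer so that a
    farther object is never drawn over a nearer one. When the farther
    (distal) image's group wins on priority, it is made relatively more
    transmissive - and hence brighter - instead of being lifted above the
    nearer image. Returns (label, attributes) layers in bottom-to-top order.
    """
    if distal["priority"] > proximal["priority"]:
        distal = {**distal,
                  "transmittance": min(1.0, distal["transmittance"] + 0.2)}
    return [("distal", distal), ("proximal", proximal)]
```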
 The drawing control unit 211 then has the emergency-related unit 212, dynamic-target-related unit 213, static-target-related unit 214, road-surface-related unit 215, and extended-information-related unit 216 draw in accordance with the control instruction returned from the arbitration unit 217. When the control instruction specifies a stacking order, the drawing control unit 211 stacks the groups' layers in that order and draws each group's images. When the control instruction specifies transmittances, it draws each group's layer with its image transmittance changed accordingly. In this way, the drawing control unit 211 draws on the display 221 such that image visibility differs for each layer on which images are drawn.
 As an example, when display requests are sent to the arbitration unit 217 from the dynamic-target-related unit 213 and the road-surface-related unit 215, and a control instruction placing the dynamic target group's layer above the road surface group's layer is returned to the drawing control unit 211, the images of the dynamic target group and of the road surface group are drawn with the dynamic target group's layer stacked above the road surface group's layer.
 When stacking and drawing the groups' layers, the drawing control unit 211 performs a compositing process that combines the images drawn on the layers. An example of this compositing process is described with reference to FIGS. 8A to 8D, in which Hi denotes an image of a higher-priority group (hereinafter, a high-priority image) and Lo denotes an image of a lower-priority group (a low-priority image).
 The simplest compositing process is the plain "overwrite" shown in FIG. 8A. In a plain overwrite, the transmittances of the high- and low-priority images are left unchanged, and the high-priority image is drawn over the low-priority image, so any part of the low-priority image overlapped by the high-priority image is hidden by it.
 Variations on the overwrite are the "additive blend" of FIG. 8B, the "edge cut-out" of FIG. 8C, and the "transmittance change" of FIG. 8D. In an additive blend, the transmittances are left unchanged and the high-priority image is drawn over the low-priority image, while in the region where the two overlap the RGB values of both are added together. A "multiplicative blend", in which the two RGB values are multiplied together and divided by 255, may be used instead.
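Both blends operate per RGB channel, and can be sketched as follows. Clamping the additive result to the 8-bit maximum of 255 is an implementation assumption by the editor; the specification only states that the values are added:

```python
def additive_blend(top, bottom):
    """Per-channel sum of two RGB triples, clamped to 255
    (the clamp is an assumption for 8-bit channels)."""
    return tuple(min(255, t + b) for t, b in zip(top, bottom))

def multiplicative_blend(top, bottom):
    """Per-channel product of two RGB triples divided by 255,
    as described for the multiplicative variant."""
    return tuple((t * b) // 255 for t, b in zip(top, bottom))
```

Dividing by 255 in the multiplicative blend keeps the result in the 0-255 range and makes white (255) act as the identity, so multiplying by a fully white layer leaves the other layer unchanged.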
 In the edge cut-out, the transmittances are left unchanged and the high-priority image is drawn over the low-priority image, while the part of the lower-layer image at the boundary of the overlapping region is removed, for example by knocking it out in white. In the transmittance change, the high-priority image is drawn over the low-priority image with its transmittance made relatively higher than the low-priority image's. The transmittance change may be combined with the additive blend, the multiplicative blend, the edge cut-out, and so on.
 The above describes drawing the high-priority image over the low-priority image. When the target of the low-priority image is nearer the host vehicle than the target of the high-priority image, however, the low-priority image may instead be layered over the high-priority image while the high-priority image's transmittance is made relatively higher than the low-priority image's. In this case, the additive or multiplicative blend described above is preferably used to make the lower-layer high-priority image easier to make out. In addition, to avoid wasted compositing, when stacking the groups' layers the drawing control unit 211 preferably skips the compositing process and simply stacks the layers whenever the display positions of the images on the different layers do not overlap.
 When drawing multiple images classified into the same group, the drawing control unit 211 may stack more recently displayed images on higher layers, or stack images of targets nearer the host vehicle on higher layers. When 3D graphics rendering is used as the drawing process, hidden-surface removal, a technique commonly used in 3D graphics, may be used to erase and draw out the parts of an image not visible from a specific viewpoint. In that case, the eye point is taken as the specific viewpoint, and the range visible to the host vehicle's driver may be estimated by finding, among all pixels of the images to be displayed, the surfaces hidden from the eye point, for example by the Z-buffer algorithm.
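The per-pixel depth test at the heart of the Z-buffer algorithm can be sketched as follows; the fragment layout and the smaller-is-nearer depth convention are illustrative choices by the editor:

```python
def zbuffer_visible(fragments):
    """fragments: iterable of (x, y, depth, value), where a smaller depth
    means nearer the eye point. For each pixel, only the nearest fragment
    is kept - the rest lie on hidden surfaces and are discarded."""
    depth = {}    # (x, y) -> nearest depth seen so far
    visible = {}  # (x, y) -> value of the nearest fragment
    for x, y, d, v in fragments:
        if (x, y) not in depth or d < depth[(x, y)]:
            depth[(x, y)] = d
            visible[(x, y)] = v
    return visible
```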
 (Virtual-image display control processing in the HCU 20)
 Next, an example of the flow of the processing in the HCU 20 related to display control of moving-attribute images on the HUD device 220 (hereinafter, virtual-image display control processing) is described with reference to the flowchart of FIG. 9. The flowchart of FIG. 9 may be configured to start when the HUD device 220 is powered on and its function is enabled. The HUD device 220's function may be switched on and off according to input operations received by the operation device 21. The HUD device 220's power may be switched on and off according to the on/off state of the switch for starting the host vehicle's internal combustion engine or motor generator (hereinafter, the power switch).
 First, in S1, the determination-information acquisition unit 201 selectively acquires content-determination information from the various information output to the in-vehicle LAN. In S2, if, based on the content-determination information acquired in S1, at least one of the emergency-related unit 212, dynamic-target-related unit 213, static-target-related unit 214, road-surface-related unit 215, and extended-information-related unit 216 sends a display request to the arbitration unit 217 (YES in S2), the process moves to S3. If none of these units sends a display request to the arbitration unit 217 (NO in S2), the process moves to S8.
 In S3, if more than one of the emergency-related unit 212, dynamic-target-related unit 213, static-target-related unit 214, road-surface-related unit 215, and extended-information-related unit 216 sends a display request to the arbitration unit 217 (YES in S3), the process moves to S4. If only one of these units sends a display request to the arbitration unit 217 (NO in S3), the process moves to S6.
 In S4, the arbitration unit 217 arbitrates among the groups' images based on the multiple display requests received, and returns to the drawing control unit 211 a control instruction that increases the visibility of the images of the higher-priority group. In S5, the drawing control unit 211 draws in accordance with the control instruction returned from the arbitration unit 217, compositing the images drawn on the layers, and the process moves to S8.
 In S6, on the other hand, the arbitration unit 217 returns to the drawing control unit 211 a display-permission control instruction granting the received display request. In S7, the drawing control unit 211 draws only the images of the one group that issued the display request, in accordance with that display-permission instruction, and the process moves to S8.
 In S8, if it is time to end the virtual-image display control processing (YES in S8), the processing ends. Otherwise (NO in S8), the process returns to S1 and repeats. The processing may end, for example, when the host vehicle's power switch is turned off or when the HUD device 220's function is turned off.
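The S1-S8 flow of FIG. 9 can be summarized as the following loop, with each step's behavior injected as a callable; carving the flow into these particular functions is the editor's framing, not the patent's:

```python
def virtual_image_display_loop(get_info, collect_requests, arbitrate,
                               draw, should_end):
    """One possible shape of the FIG. 9 flow: acquire content-determination
    information (S1), gather display requests (S2), arbitrate only when
    several groups request display (S3-S5), permit a single request as-is
    (S6-S7), and repeat until the end condition holds (S8)."""
    while True:
        info = get_info()                  # S1: read in-vehicle LAN info
        requests = collect_requests(info)  # S2: which units request display
        if requests:
            if len(requests) > 1:          # S3: multiple requests -> arbitrate
                draw(arbitrate(requests))  # S4-S5: composited drawing
            else:
                draw(requests)             # S6-S7: single group drawn as-is
        if should_end():                   # S8: e.g. power switch turned off
            return
```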
 According to the configuration of the first embodiment, the images classified by display item type into a warning group, a dynamic target group, a static target group, a road surface group, and an extended information group are drawn on a single display unit 221 on separate layers, one layer per group, with the visibility of the images differing from layer to layer. Therefore, even when images of multiple display item types are projected onto a common projection member so that their virtual images are displayed together in the foreground of the vehicle, the driver can distinguish at least the warning group, the dynamic target group, the static target group, the road surface group, and the extended information group from one another. In particular, the division into dynamic target, static target, and road surface groups reflects the spatial structure in front of the vehicle, a viewpoint unique to superimposed display on the vehicle's foreground. Because the driver can distinguish at least these spatially motivated groups, the intended meaning of each image in the superimposed display becomes easier to grasp. As a result, even when images of multiple display item types are projected onto a common projection member and their virtual images are superimposed together on the foreground of the vehicle, the intended content of each image is less likely to be lost on the driver.
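 The per-group layered drawing described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the group names, the alpha (opacity) values, and the use of a standard "over" blend are assumptions introduced for illustration.

```python
# Illustrative sketch of per-group layer drawing with differing visibility.
# Group names and alpha (opacity) values are assumptions, not from the patent.
from dataclasses import dataclass

# Layers stacked bottom-to-top; higher layers are composited later.
LAYER_ORDER = ["extended_info", "road_surface", "static_target",
               "dynamic_target", "warning"]
# Per-layer opacity: higher opacity -> higher apparent visibility.
LAYER_ALPHA = {"extended_info": 0.4, "road_surface": 0.5,
               "static_target": 0.7, "dynamic_target": 0.85, "warning": 1.0}

@dataclass
class Image:
    group: str
    pixels: float  # stand-in for actual pixel data (a single intensity here)

def composite(images: list[Image]) -> float:
    """Draw each group on its own layer, then alpha-composite bottom-to-top."""
    out = 0.0
    for layer in LAYER_ORDER:
        for img in (i for i in images if i.group == layer):
            a = LAYER_ALPHA[layer]
            out = out * (1 - a) + img.pixels * a  # standard "over" blend
    return out
```

In this sketch, varying `LAYER_ALPHA` per group is what differentiates the apparent brightness of each group, even though everything ends up on one display surface.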
 Also according to the configuration of the first embodiment, the visibility of the images can be differentiated by varying the transmittance per group, so even when the images of all groups are drawn on a single display unit 221, the apparent brightness of each group can easily be switched individually.
 In the first embodiment, display control of fixed-attribute images was not described, but fixed-attribute images may also be drawn on layers separate from those of moving-attribute images. In that case, the layers of fixed-attribute images should in principle be drawn above the layers of moving-attribute images. As an exception, when a moving-attribute image contains emergency information that the driver must respond to immediately, the layer of that moving-attribute image should be drawn above the fixed-attribute layers.
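 The fixed/moving layer ordering rule, including the emergency exception, could be sketched as below. The function name, the dictionary shape, and the `emergency` flag are hypothetical, introduced only to illustrate the rule.

```python
# Hypothetical sketch: decide whether moving-attribute layers are drawn
# above or below the fixed-attribute content. Names are illustrative only.
def draw_order(moving_layers: list[dict]) -> list[str]:
    """Return layer names bottom-to-top: ordinary moving layers below the
    fixed-attribute content, except moving layers carrying emergency
    information, which go on top."""
    normal = [l["name"] for l in moving_layers if not l.get("emergency")]
    urgent = [l["name"] for l in moving_layers if l.get("emergency")]
    # Fixed-attribute content sits above ordinary moving layers,
    # but below any moving layer that contains emergency information.
    return normal + ["fixed"] + urgent
```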
 (Embodiment 2)
 In the first embodiment, the layers were divided among the five groups — the emergency group, the dynamic target group, the static target group, the road surface group, and the extended information group — but the configuration is not necessarily limited to this. For example, the layers may instead be divided by display item type. In that case, the drawing control unit 211 may include one functional block per display item type, such as "emergency", "warning", "caution", "response", "dynamic target", "static target", "road surface", and "information". The priority of the images per display item then runs, from highest to lowest: "emergency", "warning", "caution", "response", "dynamic target", "static target", "road surface", "information". In this configuration, the display item type may be attached to each display request.
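 Under the finer, per-display-item layering of this embodiment, the stacking order could simply follow the stated priority list. A minimal sketch (item names translated from the document; the ranking logic itself is an assumption):

```python
# Priority order from Embodiment 2, highest first. A lower index means
# higher priority; higher-priority items are stacked on higher layers.
PRIORITY = ["emergency", "warning", "caution", "response",
            "dynamic_target", "static_target", "road_surface", "information"]

def stack_bottom_to_top(requested_items: list[str]) -> list[str]:
    """Order the requested display items bottom-to-top so that the
    highest-priority item ends up on the topmost layer."""
    return sorted(requested_items, key=PRIORITY.index, reverse=True)
```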
 (Embodiment 3)
 In the first embodiment, the priority for increasing visibility was, in descending order, the dynamic target group, the static target group, and the road surface group, but this order is not mandatory. As long as the visibility of the images of the dynamic target group, the static target group, and the road surface group differs, any other priority order among them may be used.
 (Embodiment 4)
 The first embodiment was described using the example of a vehicle having an automatic driving function, but this is not required. For example, the own vehicle may lack an automatic driving function. In that case, the vehicle system 1 does not include the automatic driving ECU 8, and recognition of the driving environment may be performed by another ECU.
 (Embodiment 5)
 In the first embodiment, the functions for controlling display on the HUD device 220 were handled by the HCU 20, a unit separate from the HUD device 220, but this is not required. For example, those functions may be handled by a control circuit provided in the HUD device 220, or by a control circuit provided in the combination meter.
 The control and methods described in the present disclosure may be realized by a dedicated computer comprising a processor programmed to execute one or more functions embodied by a computer program. Alternatively, they may be realized by a dedicated computer whose processor is configured from dedicated hardware logic circuits, or by one or more dedicated computers configured as a combination of a processor that executes a computer program and one or more hardware logic circuits. The computer program may be stored, as instructions executed by a computer, on a computer-readable non-transitory tangible recording medium.
 The flowcharts described herein, and the processing they describe, consist of multiple steps (also referred to as sections), each denoted, for example, S1. Each step may be divided into multiple sub-steps, and multiple steps may be combined into a single step.
 The embodiments, configurations, and aspects of the vehicle display control device, vehicle display control method, and control program according to one aspect of the present disclosure have been illustrated above, but the embodiments, configurations, and aspects of the present disclosure are not limited to those described. For example, embodiments, configurations, and aspects obtained by appropriately combining technical elements disclosed in different embodiments, configurations, and aspects also fall within the scope of the embodiments, configurations, and aspects of the present disclosure.

Claims (11)

  1.  A vehicle display control device that is used in a vehicle and controls a head-up display device that superimposes virtual images on the foreground of the vehicle by projecting images drawn on a display unit onto a projection member, the device comprising:
     a determination information acquisition unit (201) that acquires content determination information, which is information for determining the content of images to be drawn on the display unit; and
     a drawing control unit (211) that, in accordance with the content determination information acquired by the determination information acquisition unit, draws on the single display unit, on separate layers per group, images classified by display item type into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for the road surface,
     wherein the drawing control unit draws the images such that their visibility differs for each layer on which they are drawn on the display unit.
  2.  The vehicle display control device according to claim 1, wherein the drawing control unit differentiates the visibility of the images per layer by the order in which the layers are stacked.
  3.  The vehicle display control device according to claim 2, wherein the drawing control unit stacks a layer higher the greater the visibility to be given to the images drawn on it.
  4.  The vehicle display control device according to any one of claims 1 to 3, wherein the drawing control unit differentiates the visibility of the images per layer by varying the transmittance of the layers.
  5.  The vehicle display control device according to claim 4, wherein the drawing control unit makes the transmittance of a layer relatively higher than that of the other layers the greater the visibility to be given to the images drawn on it.
  6.  The vehicle display control device according to claim 1, wherein the drawing control unit draws such that the group of images for static targets and the group of images for dynamic targets have higher visibility than the group of images for the road surface, and such that the group of images for dynamic targets has higher visibility than the group of images for static targets.
  7.  The vehicle display control device according to any one of claims 1 to 6, wherein the drawing control unit performs processing to composite the images drawn on the respective layers when the display positions of the images on different layers overlap, and
     does not perform the processing to composite the images drawn on the respective layers when the display positions of the images on different layers do not overlap.
  8.  The vehicle display control device according to any one of claims 1 to 7, wherein, among the plurality of groups of images classified by display item type, display items concerning notifications, whose information is less urgent than that of display items concerning warnings or alerts, are subdivided into the group of images for dynamic targets, the group of images for static targets, and the group of images for the road surface, whereas display items concerning warnings or alerts are not subdivided into the group of images for dynamic targets, the group of images for static targets, and the group of images for the road surface.
  9.  A vehicle display control method for controlling a head-up display device that is used in a vehicle and superimposes virtual images on the foreground of the vehicle by projecting images drawn on a display unit onto a projection member, the method comprising:
     acquiring content determination information, which is information for determining the content of images to be drawn on the display unit; and
     in accordance with the acquired content determination information, drawing on the single display unit, on separate layers per group, images classified by display item type into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for the road surface, such that the visibility of the images differs for each layer on which they are drawn.
  10.  A control program for causing a computer to function as:
     a determination information acquisition unit (201) that acquires content determination information, which is information for determining the content of images to be drawn on the display unit of a head-up display device that is used in a vehicle and superimposes virtual images on the foreground of the vehicle by projecting the images drawn on the display unit onto a projection member; and
     a drawing control unit (211) that, in accordance with the content determination information acquired by the determination information acquisition unit, draws on the single display unit, on separate layers per group, images classified by display item type into a plurality of groups including a group of images for dynamic targets, a group of images for static targets, and a group of images for the road surface, such that the visibility of the images differs for each layer on which they are drawn.
  11.  A non-transitory, tangible computer-readable storage medium storing the control program according to claim 10.

PCT/JP2019/018477 2018-06-29 2019-05-09 Vehicle display control device, vehicle display control method, and control program WO2020003750A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018124010A JP6969509B2 (en) 2018-06-29 2018-06-29 Vehicle display control device, vehicle display control method, and control program
JP2018-124010 2018-06-29

Publications (1)

Publication Number Publication Date
WO2020003750A1 true WO2020003750A1 (en) 2020-01-02

Family

ID=68985620

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/018477 WO2020003750A1 (en) 2018-06-29 2019-05-09 Vehicle display control device, vehicle display control method, and control program

Country Status (2)

Country Link
JP (2) JP6969509B2 (en)
WO (1) WO2020003750A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114104162A (en) * 2020-08-27 2022-03-01 Tvs电机股份有限公司 Display system for vehicle
TWI778548B (en) * 2020-11-24 2022-09-21 日商東芝三菱電機產業系統股份有限公司 Plant monitoring and controlling system

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
JP6969509B2 (en) * 2018-06-29 2021-11-24 株式会社デンソー Vehicle display control device, vehicle display control method, and control program
DE202020005800U1 (en) * 2019-08-25 2022-09-16 Nippon Seiki Co. Ltd. head-up display device
CN115480726B (en) * 2022-11-15 2023-02-28 泽景(西安)汽车电子有限责任公司 Display method, display device, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
JPH1068906A (en) * 1996-05-30 1998-03-10 Asahi Glass Co Ltd Holographic display device
WO2016051586A1 (en) * 2014-10-03 2016-04-07 三菱電機株式会社 Display control device
JP2016109645A (en) * 2014-12-10 2016-06-20 株式会社リコー Information providing device, information providing method, and control program for providing information

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP3975986B2 (en) * 2003-08-26 2007-09-12 株式会社島津製作所 Image display device
JP6481273B2 (en) * 2014-07-11 2019-03-13 株式会社デンソー Vehicle display device
JP6401990B2 (en) * 2014-09-30 2018-10-10 矢崎総業株式会社 Vehicle display device
JP2017013590A (en) * 2015-06-30 2017-01-19 日本精機株式会社 Head-up display device
JP2018091908A (en) * 2016-11-30 2018-06-14 日本精機株式会社 Head-up display device
JP6658483B2 (en) * 2016-12-07 2020-03-04 株式会社デンソー Display control device for vehicle and display system for vehicle
JP6969509B2 (en) * 2018-06-29 2021-11-24 株式会社デンソー Vehicle display control device, vehicle display control method, and control program


Cited By (3)

Publication number Priority date Publication date Assignee Title
CN114104162A (en) * 2020-08-27 2022-03-01 Tvs电机股份有限公司 Display system for vehicle
CN114104162B (en) * 2020-08-27 2024-02-23 Tvs电机股份有限公司 Display system for vehicle
TWI778548B (en) * 2020-11-24 2022-09-21 日商東芝三菱電機產業系統股份有限公司 Plant monitoring and controlling system

Also Published As

Publication number Publication date
JP2020001589A (en) 2020-01-09
JP2022020688A (en) 2022-02-01
JP6969509B2 (en) 2021-11-24

Similar Documents

Publication Publication Date Title
US11008016B2 (en) Display system, display method, and storage medium
US10197414B2 (en) Vehicle display control device and vehicle display control method
US11325471B2 (en) Method for displaying the course of a safety zone in front of a transportation vehicle or an object by a display unit, device for carrying out the method, and transportation vehicle and computer program
US9827907B2 (en) Drive assist device
WO2020003750A1 (en) Vehicle display control device, vehicle display control method, and control program
US11016497B2 (en) Vehicle control system, vehicle control method, and vehicle control program
US10452930B2 (en) Information display device mounted in vehicle including detector
US20180024354A1 (en) Vehicle display control device and vehicle display unit
JP2019138773A (en) Display device
US11904688B2 (en) Method for calculating an AR-overlay of additional information for a display on a display unit, device for carrying out the method, as well as motor vehicle and computer program
CN111034186B (en) Surrounding vehicle display method and surrounding vehicle display device
JP2023112082A (en) Display device for vehicle
US11274934B2 (en) Information output device, output control method, and storage medium
JP2019174459A (en) Control device, display device, mobile body, control method, and program
JP2020085688A (en) Display system, display control method, and program
JP7024619B2 (en) Display control device for mobile body, display control method for mobile body, and control program
JP7400242B2 (en) Vehicle display control device and vehicle display control method
JP2019172243A (en) Control device, display device, movable body, control method and program
JP7302311B2 (en) Vehicle display control device, vehicle display control method, vehicle display control program
JP7310851B2 (en) vehicle display
WO2022255409A1 (en) Vehicle display system, vehicle display method, vehicle display program
WO2022224753A1 (en) Vehicle display system, vehicle display method, and vehicle display program
WO2022224754A1 (en) Vehicle display system, vehicle display method, and vehicle display program
US20230166754A1 (en) Vehicle congestion determination device and vehicle display control device
WO2019220884A1 (en) Driving assistance device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19825260

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19825260

Country of ref document: EP

Kind code of ref document: A1