WO2021065735A1 - Display control device and display control program - Google Patents

Display control device and display control program

Info

Publication number
WO2021065735A1
WO2021065735A1 (PCT/JP2020/036371)
Authority
WO
WIPO (PCT)
Prior art keywords
display
scene
display control
content
lane
Prior art date
Application number
PCT/JP2020/036371
Other languages
French (fr)
Japanese (ja)
Inventor
清水 泰博
明彦 柳生
大祐 竹森
一輝 小島
しおり 間根山
猛 羽藤
Original Assignee
DENSO CORPORATION
Priority claimed from JP2020145431A (JP7111137B2)
Application filed by DENSO CORPORATION
Publication of WO2021065735A1

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 — Arrangement of adaptations of instruments
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 — Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/10 — Path keeping
    • B60W30/12 — Lane keeping
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 — Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 — Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 — Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 — Interaction between the driver and the control system
    • B60W50/14 — Means for informing the driver, warning the driver or prompting a driver intervention
    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 — Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 — Head-up displays
    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G1/00 — Traffic control systems for road vehicles
    • G08G1/16 — Anti-collision systems

Definitions

  • The disclosure in this specification relates to a technique for controlling the display of content by a head-up display.
  • Patent Document 1 discloses a vehicle display device that superimposes and displays content by a head-up display. This vehicle display device superimposes a guidance display indicating the route from the traveling position of the own vehicle to a guidance point on the driver's forward view.
  • Patent Document 1, however, does not describe a display that reduces the driver's anxiety in a scene where the driver may feel anxious about the vehicle's driving control.
  • The purpose of this disclosure is to provide a display control device and a display control program capable of reducing driver anxiety.
  • One disclosed display control device is a display control device that controls the display of content by a head-up display of a vehicle including a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane.
  • It includes a scene determination unit that determines whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control decreases, and a display control unit that superimposes predicted trajectory content indicating the predicted trajectory under the lane keeping control on the road surface when the scene is determined to be a specific scene, and hides the predicted trajectory content when the scene is determined not to be a specific scene.
  • One disclosed display control program is a display control program that controls the display of content by a head-up display of a vehicle provided with a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane.
  • The program causes at least one processing unit to execute a process including: determining whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control decreases; superimposing predicted trajectory content indicating the predicted trajectory under the lane keeping control on the road surface when the scene is determined to be a specific scene; and hiding the predicted trajectory content when the scene is determined not to be a specific scene.
  • According to these configurations, the predicted trajectory content is displayed when the scene is determined to be a specific scene in which the driver's confidence in the lane keeping control decreases. The driver who views the predicted trajectory content can therefore easily form an image that traveling within the lane will be maintained even in the specific scene. As described above, it is possible to provide a display control device and a display control program capable of reducing driver anxiety.
  • Another disclosed display control device is a display control device that controls the display of content by a head-up display of a vehicle including a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane.
  • It includes a scene determination unit that determines whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control decreases, and a display control unit that superimposes predicted trajectory content indicating the predicted trajectory under the lane keeping control on the road surface and changes the display mode of the predicted trajectory content depending on whether the scene is determined to be a specific scene or not.
  • Another disclosed display control program is a display control program that controls the display of content by a head-up display of a vehicle provided with a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane.
  • The program causes at least one processing unit to execute a process including: determining whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control decreases; superimposing predicted trajectory content indicating the predicted trajectory under the lane keeping control on the road surface; and changing the display mode of the predicted trajectory content depending on whether the scene is determined to be a specific scene or not.
  • According to these configurations, the display mode of the predicted trajectory content changes depending on whether or not the scene is a specific scene in which the driver's confidence in the lane keeping control decreases. The driver who views the predicted trajectory content whose display mode has changed can thus recognize that the lane keeping control is being executed with the specific scene grasped on the vehicle side, and can easily form an image that traveling within the lane will be maintained even in the specific scene. As described above, it is possible to provide a display control device and a display control program capable of reducing driver anxiety.
  • FIG. 1 shows an overall image of the in-vehicle network including the HCU according to the first embodiment of the present disclosure. FIG. 2 shows an example of the head-up display mounted on the vehicle. FIG. 3 shows an example of the schematic configuration of the HCU. FIG. 4 visualizes an example of the display layout simulation performed in the display generation unit. FIG. 5 shows an example of the LTA display on the HUD. FIG. 6 shows an example of the LTA display on the meter display. FIG. 7 is a flowchart showing an example of the display control method executed by the HCU. FIG. 8 shows an example of the LTA display in the second embodiment.
  • The functions of the display control device according to the first embodiment of the present disclosure are realized by the HCU (Human Machine Interface Control Unit) 100 shown in FIGS. 1 to 3.
  • The HCU 100 constitutes the HMI (Human Machine Interface) system 10 of the vehicle A together with a head-up display (hereinafter, "HUD") 20 and the like.
  • the HMI system 10 further includes an operation device 26, a DSM (Driver Status Monitor) 27, and the like.
  • the HMI system 10 has an input interface function for accepting user operations by a driver who is an occupant of the vehicle A, and an output interface function for presenting information to the occupants.
  • the HMI system 10 is communicably connected to the communication bus 99 of the vehicle-mounted network mounted on the vehicle A.
  • the HMI system 10 is one of a plurality of nodes provided in the in-vehicle network.
  • In addition to the HMI system 10, a peripheral monitoring sensor 30, a locator 40, a DCM 49, a driving support ECU 50, an automatic driving ECU 60, and the like are connected as nodes to the communication bus 99 of the in-vehicle network. The nodes connected to the communication bus 99 can communicate with each other.
  • The peripheral monitoring sensor 30 is an autonomous sensor that monitors the surrounding environment of the vehicle A. Within its detection range around the own vehicle, the peripheral monitoring sensor 30 can detect moving objects such as pedestrians, cyclists, non-human animals, and other vehicles, as well as stationary objects such as fallen objects, guardrails, curbs, road markings, traveling lane markings, and roadside structures.
  • the peripheral monitoring sensor 30 provides the detection information of detecting an object around the vehicle A to the driving support ECU 50 and the like through the communication bus 99.
  • the peripheral monitoring sensor 30 has a front camera 31 and a millimeter wave radar 32 as a detection configuration for object detection.
  • the front camera 31 outputs at least one of the imaging data obtained by photographing the front range of the vehicle A and the analysis result of the imaging data as detection information.
  • a plurality of millimeter-wave radars 32 are arranged, for example, on the front and rear bumpers of the vehicle A at intervals from each other.
  • the millimeter wave radar 32 irradiates the millimeter wave or the quasi-millimeter wave toward the front range, the front side range, the rear range, the rear side range, and the like of the vehicle A.
  • the millimeter wave radar 32 generates detection information by a process of receiving reflected waves reflected by a moving object, a stationary object, or the like.
  • Other detection configurations, such as a lidar and a sonar, may be included in the peripheral monitoring sensor 30.
  • the locator 40 generates highly accurate position information of vehicle A and the like by compound positioning that combines a plurality of acquired information.
  • the locator 40 can specify, for example, the lane in which the vehicle A travels among a plurality of lanes.
  • the locator 40 includes a GNSS (Global Navigation Satellite System) receiver 41, an inertial sensor 42, a high-precision map database (hereinafter, “high-precision map DB”) 43, and a locator ECU 44.
  • the GNSS receiver 41 receives positioning signals transmitted from a plurality of artificial satellites (positioning satellites).
  • the GNSS receiver 41 can receive a positioning signal from each positioning satellite of at least one satellite positioning system among satellite positioning systems such as GPS, GLONASS, Galileo, IRNSS, QZSS, and Beidou.
  • the inertial sensor 42 has, for example, a gyro sensor and an acceleration sensor.
  • the high-precision map DB 43 is mainly composed of a non-volatile memory, and stores map data with higher accuracy (hereinafter, "high-precision map data") than that used for normal navigation.
  • The high-precision map data holds detailed information including at least information in the height (z) direction.
  • The high-precision map data includes information usable for advanced driving support and autonomous driving, such as three-dimensional road shape information (road structure information), lane count information, and information indicating the permitted direction of travel for each lane.
  • the locator ECU 44 is a control unit having a configuration mainly including a microcomputer provided with a processor, RAM, a storage unit, an input / output interface, a bus connecting them, and the like.
  • The locator ECU 44 combines the positioning signals received by the GNSS receiver 41, the measurement results of the inertial sensor 42, the vehicle speed information output to the communication bus 99, and the like, and sequentially determines the position, traveling direction, and the like of the vehicle A.
  • the locator ECU 44 provides the position information and direction information of the vehicle A based on the positioning result to the driving support ECU 50, the automatic driving ECU 60, the HCU 100, and the like through the communication bus 99. Further, the locator ECU 44 provides high-precision map data around the position of the own vehicle to the HCU 100, the driving support ECU 50, and the like via the communication bus 99.
  • the vehicle speed information is information indicating the current traveling speed of the vehicle A, and is generated based on the detection signal of the wheel speed sensor provided in the hub portion of each wheel of the vehicle A.
  • the node (ECU) that generates vehicle speed information and outputs it to the communication bus 99 may be appropriately changed.
  • For example, a brake control ECU that controls the distribution of braking force to each wheel, or an in-vehicle ECU such as the HCU 100, may be electrically connected to the wheel speed sensor of each wheel to generate the vehicle speed information and output it to the communication bus 99.
  • DCM (Data Communication Module) 49 is a communication module mounted on vehicle A.
  • the DCM49 transmits and receives radio waves to and from base stations around the vehicle A by wireless communication in accordance with communication standards such as LTE (Long Term Evolution) and 5G.
  • the driving support ECU 50 and the automatic driving ECU 60 are configured to mainly include a computer equipped with a processor, a RAM, a storage unit, an input / output interface, a bus connecting them, and the like, respectively.
  • the driving support ECU 50 has a driving support function that supports the driving operation of the driver.
  • The automatic driving ECU 60 has an automatic driving function capable of performing the driving operation on behalf of the driver.
  • the driving support ECU 50 enables partial automatic driving control (advanced driving support) of level 2 or lower.
  • the automatic driving ECU 60 enables automatic driving control of level 3 or higher.
  • the driving support ECU 50 executes automatic driving in which the driver is required to monitor the surroundings, and the automatic driving ECU 60 executes automatic driving in which the driver is not required to monitor the surroundings.
  • the driving support ECU 50 and the automatic driving ECU 60 recognize the driving environment around the vehicle A for the driving control described later based on the detection information acquired from the peripheral monitoring sensor 30, respectively.
  • Each of the ECUs 50 and 60 provides the HCU 100, as analyzed detection information, with the results of the analysis of the detection information performed for recognizing the traveling environment.
  • Each of the ECUs 50 and 60 can provide the HCU 100 with information indicating the relative positions and shapes of the left and right lane markings LL, LR or the road edges, as boundary information regarding the boundaries of the lane in which the vehicle A is currently traveling (hereinafter, "own lane Lns"; see FIG. 5).
  • the left-right direction is a direction that coincides with the width direction of the vehicle A stationary on the horizontal plane, and is set with reference to the traveling direction of the vehicle A. Further, each of the ECUs 50 and 60 analyzes the information regarding the weather condition in the traveling area and provides it to the HCU 100 as the weather information.
  • the weather information includes at least information on whether or not the weather is poor visibility such as rain, snow, and fog.
  • the weather information is analyzed, for example, by image processing of the captured image of the front camera 31.
  • the driving support ECU 50 has a plurality of functional units that realize advanced driving support by executing a program by a processor. Specifically, the driving support ECU 50 has an ACC (Adaptive Cruise Control) control unit and a lane keeping control unit 51.
  • The ACC control unit is a functional unit that realizes the ACC function of driving the vehicle A at a constant target vehicle speed or following a preceding vehicle while maintaining the inter-vehicle distance.
  • the lane keeping control unit 51 is a functional unit that realizes an LTA (Lane Tracing Assist) function for maintaining the vehicle A running in the lane.
  • The LTA is also referred to as LTC (Lane Trace Control).
  • the LTA function is an example of a lane keeping control function.
  • the lane keeping control unit 51 controls the steering angle of the steering wheel of the vehicle A based on the boundary information extracted from the detection data of the peripheral monitoring sensor 30.
  • the lane keeping control unit 51 generates a planned traveling line having a shape along the own lane Lns so that the vehicle A travels in the center of the own lane Lns during traveling.
  • the lane keeping control unit 51 cooperates with the ACC control unit to perform driving control (hereinafter, “lane keeping control”) for driving the vehicle A in the own lane Lns according to the planned running line.
  • the automatic driving ECU 60 has a plurality of functional units that realize autonomous driving of the vehicle A by executing a program by the processor.
  • the automatic driving ECU 60 generates a scheduled traveling line based on the high-precision map data acquired from the locator 40, the vehicle position information, and the extracted boundary information.
  • the automatic driving ECU 60 executes acceleration / deceleration control, steering control, and the like so that the vehicle A travels along the scheduled traveling line.
  • The functional unit of the automatic driving ECU 60 that keeps the vehicle A traveling in the own lane Lns, that is, that performs substantially the same lane keeping control as the lane keeping control unit 51 of the driving support ECU 50, is referred to as the lane keeping control unit 61 for convenience. The lane keeping control units 51 and 61 are used exclusively of each other.
  • The lane keeping control units 51 and 61 may generate the planned traveling line based on the traveling locus of a preceding vehicle when boundary information cannot be acquired. For example, when visibility is poor, such as in fog, and the peripheral monitoring sensor 30 cannot detect the lane boundaries but can detect the vehicle ahead, the lane keeping control units 51 and 61 generate the planned traveling line from the traveling locus obtained from the detection information of the vehicle ahead and control the traveling of the vehicle A along that line.
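  • The following is a minimal illustrative sketch, not taken from the patent, of how a planned traveling line could be derived as the midline of the detected left and right boundaries, with a fallback to the preceding vehicle's trajectory when boundary information is unavailable; all type and function names are assumptions.

```cpp
// Illustrative sketch (not from the patent text): derive a planned traveling
// line as the midline of the detected left/right lane boundaries, falling back
// to the preceding vehicle's trajectory when boundary information is missing.
#include <algorithm>
#include <cstdio>
#include <optional>
#include <vector>

struct Point3 { double x, y, z; };   // vehicle-frame coordinates [m]

std::vector<Point3> midline(const std::vector<Point3>& left,
                            const std::vector<Point3>& right) {
    std::vector<Point3> line;
    const std::size_t n = std::min(left.size(), right.size());
    for (std::size_t i = 0; i < n; ++i)   // center of the own lane Lns
        line.push_back({(left[i].x + right[i].x) * 0.5,
                        (left[i].y + right[i].y) * 0.5,
                        (left[i].z + right[i].z) * 0.5});
    return line;
}

std::vector<Point3> plannedTravelingLine(
        const std::optional<std::vector<Point3>>& leftBoundary,
        const std::optional<std::vector<Point3>>& rightBoundary,
        const std::optional<std::vector<Point3>>& precedingVehicleTrack) {
    if (leftBoundary && rightBoundary)
        return midline(*leftBoundary, *rightBoundary);
    if (precedingVehicleTrack)            // e.g. fog: follow the trajectory of the car ahead
        return *precedingVehicleTrack;
    return {};                            // no plan available
}

int main() {
    std::vector<Point3> left  = {{0.0, 1.8, 0.0}, {10.0, 1.8, 0.0}, {20.0, 1.7, 0.0}};
    std::vector<Point3> right = {{0.0, -1.8, 0.0}, {10.0, -1.8, 0.0}, {20.0, -1.9, 0.0}};
    for (const Point3& p : plannedTravelingLine(left, right, std::nullopt))
        std::printf("(%.1f, %.2f)\n", p.x, p.y);
}
```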
  • the lane keeping control information includes at least status information indicating the operating state of the lane keeping control and line shape information indicating the shape of the planned traveling line.
  • the status information is information indicating whether the lane keeping control function is in the off state, the standby state, or the execution state.
  • The standby state is a state in which the lane keeping control is activated but motion control is not performed.
  • The execution state is a state in which motion control is performed based on the satisfaction of an execution condition.
  • The execution condition is, for example, that the lane markings on both sides can be recognized.
  • the line shape information includes at least the three-dimensional coordinates of a plurality of specific points that define the shape of the planned traveling line, the length of the virtual line connecting the specific points, the radius of curvature, and the like.
  • the line shape information may include a large amount of coordinate information.
  • Each piece of coordinate information indicates a point arranged on the planned traveling line at a predetermined interval. Even with line shape information in such a data format, the HCU 100 can restore the shape of the planned traveling line from the large amount of coordinate information.
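  • As an illustration of the kind of data the lane keeping control information could carry, the following declarations sketch one possible layout; the structure names and fields are assumptions, not defined by the patent.

```cpp
// Assumed data layout (not defined by the patent) for the lane keeping control
// information exchanged over the communication bus 99: status information plus
// line shape information describing the planned traveling line.
#include <vector>

enum class LtaStatus { Off, Standby, Executing };   // operating state of the lane keeping control

struct Point3 { double x, y, z; };                  // three-dimensional coordinates of a specific point [m]

struct LineShapeInfo {
    std::vector<Point3> specificPoints;    // specific points defining the planned traveling line
    std::vector<double> segmentLengths;    // lengths of the virtual lines connecting the points [m]
    std::vector<double> curvatureRadii;    // radii of curvature along the line [m]
};

struct LaneKeepingControlInfo {
    LtaStatus     status;      // off / standby / execution state
    LineShapeInfo lineShape;   // shape of the planned traveling line
};

int main() { return 0; }   // declarations only; included so the sketch compiles as-is
```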
  • the operation device 26 is an input unit that accepts user operations by a driver or the like.
  • User operations, such as switching each function between activation and stop and setting the inter-vehicle distance, are input to the operation device 26.
  • the operation device 26 includes a steering switch provided on the spoke portion of the steering wheel, an operation lever provided on the steering column portion 8, a voice input device for detecting the driver's utterance, and the like.
  • the DSM27 has a configuration including a near-infrared light source, a near-infrared camera, and a control unit for controlling them.
  • the DSM 27 is installed in a posture in which the near-infrared camera is directed toward the headrest portion of the driver's seat, for example, on the upper surface of the steering column portion 8 or the upper surface of the instrument panel 9.
  • the DSM27 uses a near-infrared camera to photograph the head of the driver irradiated with near-infrared light by a near-infrared light source.
  • the image captured by the near-infrared camera is image-analyzed by the control unit.
  • the control unit extracts information such as the position of the eye point EP and the line-of-sight direction from the captured image, and sequentially outputs the extracted state information toward the HCU 100.
  • the HUD 20 is mounted on the vehicle A as one of a plurality of in-vehicle display devices together with the meter display 23, the center information display, and the like.
  • the HUD 20 is electrically connected to the HCU 100 and sequentially acquires video data generated by the HCU 100. Based on the video data, the HUD 20 presents various information related to the vehicle A, such as route information, sign information, and control information of each in-vehicle function, to the driver using the virtual image Vi.
  • the HUD 20 is housed in the storage space inside the instrument panel 9 below the windshield WS.
  • the HUD 20 projects the light formed as a virtual image Vi toward the projection range PA of the windshield WS.
  • the light projected on the windshield WS is reflected toward the driver's seat side in the projection range PA and is perceived by the driver.
  • the driver visually recognizes the display in which the virtual image Vi is superimposed on the foreground seen through the projection range PA.
  • the HUD 20 includes a projector 21 and a magnifying optical system 22.
  • the projector 21 has an LCD (Liquid Crystal Display) panel and a backlight.
  • the projector 21 is fixed to the housing of the HUD 20 with the display surface of the LCD panel facing the magnifying optical system 22.
  • the projector 21 displays each frame image of the video data on the display surface of the LCD panel, and transmits and illuminates the display surface with a backlight to emit light formed as a virtual image Vi toward the magnifying optical system 22.
  • the magnifying optical system 22 includes at least one concave mirror in which a metal such as aluminum is vapor-deposited on the surface of a base material made of synthetic resin or glass.
  • the magnifying optical system 22 projects the light emitted from the projector 21 onto the upper projection range PA while spreading it by reflection.
  • An angle of view VA is defined for the HUD 20. When the virtual range in space in which the HUD 20 can form the virtual image Vi is taken as the image plane IS, the angle of view VA is the viewing angle defined by the virtual lines connecting the driver's eye point EP and the outer edges of the image plane IS.
  • The angle of view VA is the angular range within which the driver can visually recognize the virtual image Vi when viewed from the eye point EP. In the HUD 20, the horizontal angle of view is larger than the vertical angle of view. When viewed from the eye point EP, the forward range that overlaps the image plane IS is the range within the angle of view VA.
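  • The relationship between the eye point EP, the image plane IS, and the angle of view VA can be illustrated with simple geometry; the distances and dimensions below are purely illustrative assumptions.

```cpp
// Geometry sketch: with the image plane IS assumed planar and at a fixed
// distance ahead of the eye point EP, the horizontal and vertical angles of
// view VA follow from the outer edges of the image plane. All numbers are
// illustrative assumptions.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double distance   = 7.5;    // EP to image plane IS [m] (assumed)
    const double halfWidth  = 0.8;    // half of the image plane width [m] (assumed)
    const double halfHeight = 0.225;  // half of the image plane height [m] (assumed)

    const double horizontalVA = 2.0 * std::atan2(halfWidth,  distance) * 180.0 / kPi;
    const double verticalVA   = 2.0 * std::atan2(halfHeight, distance) * 180.0 / kPi;

    // In the HUD 20 the horizontal angle of view is larger than the vertical one.
    std::printf("horizontal VA = %.1f deg, vertical VA = %.1f deg\n",
                horizontalVA, verticalVA);
}
```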
  • the HUD 20 displays superimposed content CTs (see FIGS. 5 and 6) and non-superimposed content as virtual image Vi.
  • Superimposed content CTs are AR display objects used for augmented reality (hereinafter referred to as “AR”) display.
  • the display position of the superimposed content CTs is associated with a specific superimposed object existing in the foreground, such as a specific position on the road surface, a vehicle in front, a pedestrian, and a road sign.
  • The superimposed content CTs are superimposed and displayed on a specific superimposition target in the foreground, and move in the driver's view so as to follow the superimposition target, appearing relatively fixed to it.
  • the shape of the superimposed content CTs may be continuously updated at a predetermined cycle according to the relative position and shape of the superimposed object.
  • the superimposed content CTs are displayed in a posture closer to horizontal than the non-superimposed content, and have a display shape extended in the depth direction (traveling direction) as seen from the driver, for example.
  • the non-superimposed content is a non-AR display object excluding the superimposed content CTs among the display objects superimposed and displayed in the foreground. Unlike the superimposed content CTs, the non-superimposed content is displayed superimposed on the foreground without specifying the superimposed target.
  • the non-superimposed content is displayed at a fixed position in the projection range PA, so that it is displayed as if it is relatively fixed to the vehicle configuration such as the windshield WS.
  • the meter display 23 is one of a plurality of in-vehicle displays, and is a so-called combination meter display.
  • the meter display 23 is an image display such as a liquid crystal display and an organic EL display.
  • the meter display 23 is installed in front of the driver's seat on the instrument panel 9, and the display screen is directed to the headrest portion of the driver's seat.
  • the meter display 23 is electrically connected to the HCU 100, and sequentially acquires video data generated by the HCU 100.
  • the meter display 23 displays the content corresponding to the acquired video data on the display screen. For example, the meter display 23 displays a status image CTst (described later) showing the status information of the LTA function on the display screen.
  • the HCU 100 is an electronic control device that integrally controls the display by a plurality of in-vehicle display devices including the HUD 20 in the HMI system 10.
  • the HCU 100 mainly includes a computer including a processing unit 11, a RAM 12, a storage unit 13, an input / output interface 14, and a bus connecting them.
  • the processing unit 11 is hardware for arithmetic processing combined with the RAM 12.
  • the processing unit 11 has a configuration including at least one arithmetic core such as a CPU (Central Processing Unit).
  • the RAM 12 may be configured to include a video RAM for video generation.
  • the processing unit 11 executes various processes for realizing the functions of each functional unit, which will be described later, by accessing the RAM 12.
  • the storage unit 13 is configured to include a non-volatile storage medium.
  • Various programs (display control programs, etc.) executed by the processing unit 11 are stored in the storage unit 13.
  • The HCU 100 shown in FIGS. 1 to 3 has a plurality of functional units for functioning as a control unit that controls content display by the HUD 20 through execution of the display control program stored in the storage unit 13 by the processing unit 11. Specifically, functional units such as a driver information acquisition unit 101, a locator information acquisition unit 102, an external world information acquisition unit 103, a control information acquisition unit 104, a scene determination unit 105, and a display generation unit 109 are constructed in the HCU 100.
  • the driver information acquisition unit 101 identifies the position and line-of-sight direction of the eye point EP of the driver seated in the driver's seat based on the state information acquired from the DSM 27, and acquires it as driver information.
  • the driver information acquisition unit 101 generates three-dimensional coordinates (hereinafter, “eye point coordinates”) indicating the position of the eye point EP, and sequentially provides the generated eye point coordinates to the display generation unit 109.
  • the locator information acquisition unit 102 acquires the latest position information and direction information about the vehicle A from the locator ECU 44 as own vehicle position information. In addition, the locator information acquisition unit 102 acquires high-precision map data around the position of the own vehicle from the locator ECU 44. The locator information acquisition unit 102 sequentially provides the acquired vehicle position information and high-precision map data to the scene determination unit 105 and the display generation unit 109.
  • The external world information acquisition unit 103 acquires analyzed detection information for the peripheral range of the vehicle A from the driving support ECU 50 or the automatic driving ECU 60. For example, the external world information acquisition unit 103 acquires, as detection information, boundary information indicating the relative positions of the left and right lane markings LL, LR or the road edges of the own lane Lns. In addition, the external world information acquisition unit 103 acquires weather information for the traveling area as detection information. The external world information acquisition unit 103 sequentially provides the acquired detection information to the scene determination unit 105 and the display generation unit 109. The external world information acquisition unit 103 may acquire the imaging data of the front camera 31 as detection information instead of the analyzed detection information acquired from the driving support ECU 50 or the automatic driving ECU 60.
  • the control information acquisition unit 104 acquires lane maintenance control information from the lane maintenance control units 51 and 61.
  • the lane keeping control information includes status information of the LTA function, line shape information, and the like.
  • the control information acquisition unit 104 sequentially provides the acquired lane keeping control information to the display generation unit 109.
  • the scene determination unit 105 determines whether or not the current driving scene is a specific scene based on the information acquired from the locator information acquisition unit 102 and the outside world information acquisition unit 103.
  • A specific scene is a scene in which the driver's confidence in the lane keeping control decreases.
  • In other words, a specific scene is a scene that can cause the driver to feel anxious that the vehicle A may depart from the own lane Lns.
  • The specific scenes include scenes in which the difficulty of traveling along the own lane Lns is relatively high.
  • For example, a curve traveling scene on a curved road can cause the driver anxiety that the vehicle may fail to complete the turn and depart from the curved road, and is therefore included in the specific scenes.
  • The specific scenes also include scenes in which the driver may suspect that the lane keeping control units 51 and 61 are not correctly recognizing the own lane Lns.
  • For example, a scene with poor visibility, such as bad weather (rain, fog, snow) or nighttime, can raise such a suspicion because the lane markings LL and LR that bound the own lane Lns become difficult to see, and is therefore included in the specific scenes. Further, the specific scenes include scenes in which the concern in the event that the vehicle A departs from the own lane Lns can be relatively large. For example, a cliff traveling scene on a road along a cliff is included in the specific scenes.
  • the scene determination unit 105 determines whether or not the scene is a curve driving scene based on high-precision map data. Specifically, the scene determination unit 105 determines that the scene is a curve traveling scene when it can be determined that the vehicle A is traveling on a curved road based on the curvature of the road or the like. The scene determination unit 105 determines whether or not the scene has poor visibility based on the detection information. Specifically, the scene determination unit 105 determines that the scene has poor visibility when it is determined that the weather is bad based on the analysis result of the captured image of the front camera 31. Further, the scene determination unit 105 determines that the scene has poor visibility when the current time is nighttime, based on the clock function of the HCU 100 or the like.
  • the scene determination unit 105 may determine that the scene has poor visibility when it is determined that there is backlight based on the current time, the traveling direction of the vehicle A, and the like. Further, the scene determination unit 105 determines whether or not the scene is a cliff running scene based on the high-precision map data. Specifically, the scene determination unit 105 determines that the scene is a cliff travel scene when the terrain on the shoulder of the travel path is classified as a cliff.
  • the scene determination unit 105 determines whether or not the current driving scene corresponds to any one of the plurality of specific scenes described above. Alternatively, the scene determination unit 105 may determine only one specific scene. The scene determination unit 105 sequentially provides the determination result to the display generation unit 109.
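  • A minimal sketch of this scene determination logic might look as follows; the thresholds, field names, and input structures are assumptions for illustration only.

```cpp
// Minimal sketch (assumed thresholds, field names, and inputs) of the scene
// determination described above: a specific scene is flagged for curve
// traveling, poor visibility, or cliff traveling.
#include <cstdio>

struct MapInfo   { double roadCurvature; bool shoulderIsCliff; };   // from high-precision map data
struct Detection { bool badWeather; bool backlight; };              // from analyzed detection information
struct Clock     { bool isNighttime; };                             // from the HCU clock function

bool isCurveScene(const MapInfo& m)  { return m.roadCurvature > 1.0 / 300.0; }  // radius < 300 m (assumed)
bool isPoorVisibility(const Detection& d, const Clock& c) { return d.badWeather || d.backlight || c.isNighttime; }
bool isCliffScene(const MapInfo& m)  { return m.shoulderIsCliff; }

bool isSpecificScene(const MapInfo& m, const Detection& d, const Clock& c) {
    return isCurveScene(m) || isPoorVisibility(d, c) || isCliffScene(m);
}

int main() {
    const MapInfo   map{1.0 / 150.0, false};   // traveling a curve of roughly 150 m radius
    const Detection det{false, false};
    const Clock     clk{false};
    std::printf("specific scene: %s\n", isSpecificScene(map, det, clk) ? "yes" : "no");
}
```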
  • The display generation unit 109 includes a virtual layout function that simulates the display layout of the superimposed content CTs (see FIGS. 4 and 5) based on the various acquired information, and a content selection function that selects the content to be used for information presentation.
  • the display generation unit 109 has a generation function for generating video data to be sequentially output to the HUD 20 based on the information provided by the virtual layout function and the content selection function.
  • the display generation unit 109 is an example of a display control unit.
  • By executing the virtual layout function, the display generation unit 109 reproduces the current driving environment of the vehicle A in a virtual space based on the own vehicle position information, the high-precision map data, the detection information, and the like. More specifically, as shown in FIG. 5, the display generation unit 109 sets an own vehicle object AO at a reference position in the virtual three-dimensional space. The display generation unit 109 maps a road model of the shape indicated by the map data into the three-dimensional space in association with the own vehicle object AO, based on the own vehicle position information. The display generation unit 109 sets a virtual left lane marking VLL and a virtual right lane marking VLR, corresponding to the left lane marking LL and the right lane marking LR, respectively, on the virtual road surface based on the boundary information. The display generation unit 109 also sets the planned traveling line generated by the lane keeping control units 51 and 61 on the virtual road surface as the predicted trajectory PT.
  • the display generation unit 109 sets the virtual camera position CP and the superimposition range SA in association with the own vehicle object AO.
  • the virtual camera position CP is a virtual position corresponding to the driver's eye point EP.
  • the display generation unit 109 sequentially corrects the virtual camera position CP with respect to the own vehicle object AO based on the latest eye point coordinates acquired by the driver information acquisition unit 101.
  • The superimposition range SA is the range in which the virtual image Vi can be superimposed and displayed. Based on the virtual camera position CP and the outer edge position (coordinate) information of the projection range PA stored in advance in the storage unit 13 (see FIG. 1) or the like, the display generation unit 109 sets, as the superimposition range SA, the forward range that falls inside the image plane IS when looking forward from the virtual camera position CP.
  • the superimposition range SA corresponds to the angle of view VA of HUD20.
  • the display generation unit 109 arranges the virtual object VO in the virtual space.
  • The virtual object VO is arranged along the predicted trajectory PT on the road surface of the road model in the three-dimensional space.
  • The virtual object VO is set in the virtual space when the start content CTi and the predicted trajectory content CTp, described later, are displayed as virtual images.
  • The virtual object VO defines the position and shape of the start content CTi and the predicted trajectory content CTp. That is, the shape of the virtual object VO as seen from the virtual camera position CP becomes the virtual image shape of the start content CTi and the predicted trajectory content CTp visually recognized from the eye point EP.
  • The virtual object VO includes a left virtual object VOL and a right virtual object VOR.
  • The left virtual object VOL is arranged inside the virtual left lane marking VLL, along the virtual left lane marking VLL.
  • The right virtual object VOR, in contrast to the left virtual object VOL, is arranged inside the virtual right lane marking VLR, along the virtual right lane marking VLR.
  • The left virtual object VOL and the right virtual object VOR are, for example, thin strip-shaped objects each extending in a plane along the virtual lane markings VLL and VLR.
  • When the start content CTi is displayed, the virtual objects VOL and VOR are arranged in a stationary state at predetermined positions inside the virtual lane markings VLL and VLR. On the other hand, when the predicted trajectory content CTp is displayed, the virtual objects VOL and VOR are set as objects that repeatedly move toward the center of the own lane Lns from initial positions corresponding to their arrangement positions when the start content CTi is displayed.
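  • The virtual layout idea, in which a virtual object VO on the virtual road surface is viewed from the virtual camera position CP to obtain the virtual image shape, can be sketched with a simple pinhole projection; the coordinate conventions and numbers below are illustrative assumptions, not the patent's implementation.

```cpp
// Sketch of the virtual-layout idea (illustrative pinhole projection): points
// of a virtual object VO placed on the virtual road surface are projected from
// the virtual camera position CP (the eye point EP) onto the image plane so
// that the content appears superimposed on the road.
#include <cstdio>

struct Vec3 { double x, y, z; };   // x: forward, y: left, z: up (vehicle frame) [m]
struct Vec2 { double u, v; };      // coordinates on the image plane [m]

Vec2 projectToImagePlane(const Vec3& point, const Vec3& cameraPos, double planeDistance) {
    // Translate into the camera frame and apply a simple perspective division.
    const double dx = point.x - cameraPos.x;      // depth ahead of the camera
    const double dy = point.y - cameraPos.y;
    const double dz = point.z - cameraPos.z;
    return {planeDistance * dy / dx, planeDistance * dz / dx};
}

int main() {
    const Vec3 eyePoint{0.0, 0.0, 1.2};           // virtual camera position CP (assumed)
    const double planeDistance = 7.5;             // distance to the image plane IS (assumed)

    // A strip-shaped virtual object lying on the road surface along the lane boundary.
    const Vec3 stripPoints[] = {{15.0, 1.5, 0.0}, {25.0, 1.4, 0.0}, {35.0, 1.2, 0.0}};
    for (const Vec3& p : stripPoints) {
        const Vec2 q = projectToImagePlane(p, eyePoint, planeDistance);
        std::printf("road point (%.0f, %.1f) -> image plane (%.3f, %.3f)\n", p.x, p.y, q.u, q.v);
    }
}
```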
  • By executing the content selection function, the display generation unit 109 selectively uses a plurality of types of superimposed content CTs and non-superimposed content according to the scene to present information to the driver. For example, the display generation unit 109 displays the predicted trajectory content CTp when the control information acquisition unit 104 has acquired execution information of the LTA function and the scene determination unit 105 has determined that the scene is a specific scene. On the other hand, the display generation unit 109 hides the predicted trajectory content CTp when the scene is determined not to be a specific scene, even if the execution information of the LTA function has been acquired.
  • the display generation unit 109 can execute the LTA display for displaying the contents related to the LTA function by the video data generation function.
  • the details of the LTA display will be described below with reference to FIG. Note that FIG. 5 shows an example of LTA display in a curve driving scene.
  • the display generation unit 109 does not execute the LTA display when the scene determination unit 105 determines that the scene is not a specific scene (see A in FIG. 5). When it is determined that the scene is a specific scene, the display generation unit 109 presents the predicted trajectory PT of the vehicle A by the LTA function. Specifically, the display generation unit 109 first displays the start content CTi (see B in FIG. 5).
  • The start content CTi is content indicating the start of display of the predicted trajectory content CTp described later.
  • The start content CTi is, for example, the predicted trajectory content CTp in a stationary mode. More specifically, the start content CTi is superimposed content CTs whose superimposition target is the road surface of the traveling road.
  • The start content CTi is drawn in a shape along the predicted trajectory PT.
  • the start content CTi includes the left start content CTil and the right start content CTir.
  • the left-side start content CTil and the right-side start content CTir are a pair of contents corresponding to the lane markings LL and LR, which are a pair of boundaries in the own lane Lns.
  • Each of the start contents CTil and CTir is, for example, drawn as thin strip-shaped road paint extending continuously in the traveling direction of the vehicle A.
  • The left start content CTil has a shape along the left lane marking LL, and its superimposition position is inside the left lane marking LL.
  • The right start content CTir has a shape along the right lane marking LR, and its superimposition position is inside the right lane marking LR.
  • The start contents CTil and CTir are continuously displayed at the above superimposition positions in a stationary state.
  • The start content CTi is displayed for a predetermined period from the start of the specific scene.
  • For example, when the specific scene is a curve traveling scene, the start content CTi is continuously displayed for a predetermined period after the superimposition range SA reaches the start position of the curved road.
  • The display generation unit 109 then starts displaying the predicted trajectory content CTp.
  • The predicted trajectory content CTp is content indicating the predicted trajectory PT of the vehicle A under the LTA function.
  • The predicted trajectory content CTp includes a left boundary line CTbl and a right boundary line CTbr.
  • The left boundary line CTbl and the right boundary line CTbr have the same display shapes as the left start content CTil and the right start content CTir, respectively.
  • The left boundary line CTbl and the right boundary line CTbr are displayed in a manner in which they move by animation.
  • Specifically, the boundary lines CTbl and CTbr are drawn so as to move from both outer sides toward the center side in the lane width direction.
  • Here, the outer sides are the sides on which the lane markings LL and LR are located with respect to the central portion of the own lane Lns, and the center side is the side on which the central portion of the own lane Lns is located with respect to the lane markings LL and LR.
  • That is, the left boundary line CTbl moves from the left lane marking LL toward the center of the own lane Lns, and the right boundary line CTbr moves from the right lane marking LR toward the center of the own lane Lns.
  • The left boundary line CTbl and the right boundary line CTbr continuously move in the lane width direction from their initial positions toward the center of the own lane Lns.
  • In other words, the boundary lines CTbl and CTbr move continuously so that the width between them narrows.
  • The initial position is the superimposition position of the start content CTi, so the boundary lines are drawn as if the start content CTi had started moving from its superimposition position.
  • the boundary lines CTbl and CTbr move from their respective initial positions by the same amount of movement to reach their respective moving end positions.
  • the boundary lines CTbl and CTbr start moving at substantially the same timing from the initial position.
  • Each boundary line CTbl, CTbr completes the movement from the initial position to the moving end position in substantially the same period.
  • Each boundary line CTbl, CTbr is an example of moving content.
  • Each boundary line CTbl, CTbr is displayed so as to repeat the above-mentioned movement. Specifically, each boundary line CTbl, CTbr disappears when it moves by a predetermined amount of movement from the initial position, and reappears at the initial position. The reappearing boundary lines CTbl and CTbr perform the above-mentioned movement again. Each boundary line CTbl, CTbr continuously repeats movement until the end of a specific scene. When the specific scene ends, the boundary lines CTbl and CTbr are hidden.
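  • The repeating movement of the boundary lines CTbl and CTbr can be sketched as a periodic lateral offset, as below; the cycle time and movement amount are illustrative assumptions.

```cpp
// Sketch (assumed timing values) of the repeating animation of the boundary
// lines CTbl/CTbr: each line moves from its initial position at the lane
// marking toward the lane center, disappears after a predetermined movement,
// and reappears at the initial position while the specific scene continues.
#include <cmath>
#include <cstdio>

// Lateral offset of a boundary line from its initial position at time t.
double boundaryOffset(double timeSec,
                      double cycleSec,        /* one appear-move-disappear cycle */
                      double travelMeters) {  /* movement toward the lane center */
    const double phase = std::fmod(timeSec, cycleSec) / cycleSec;   // 0..1, then repeat
    return phase * travelMeters;
}

int main() {
    const double cycleSec = 1.5, travelMeters = 0.6;   // illustrative values
    for (double t = 0.0; t <= 3.0; t += 0.5) {
        // Left and right lines start at the same timing and move by the same amount.
        const double off = boundaryOffset(t, cycleSec, travelMeters);
        std::printf("t=%.1fs  CTbl: +%.2f m toward center  CTbr: +%.2f m toward center\n",
                    t, off, off);
    }
}
```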
  • the display generation unit 109 displays the status image CTst as execution content indicating the execution of the LTA function in a predetermined display area in the meter display 23 (see FIG. 6).
  • the status image CTst has, for example, a shape imitating the lane markings LL and LR of the own lane Lns. Specifically, the status image CTst is displayed as a pair of thin strips. The status image CTst is fixedly displayed at a predetermined display position. For example, the status image CTst is displayed on both sides of the vehicle icon ICv that imitates the vehicle A.
  • the status image CTst is hidden when the LTA function is off (see A in FIG. 6).
  • the status image CTst is displayed when the LTA function is on.
  • The status image CTst has different display modes depending on whether the scene is determined to be a specific scene or not. Specifically, when it is determined that the scene is not a specific scene, the status image CTst is continuously lit (see B in FIG. 6).
  • The status image CTst is displayed blinking when it is determined that the scene is a specific scene (see C in FIG. 6). Because of the blinking, the status image CTst in the specific scene has a display mode different from that of the predicted trajectory content CTp in the specific scene, which is a moving display mode. In the blinking display, the brightness of the status image CTst may be changed discretely between a lit state and an unlit state, or may be changed continuously.
  • The process shown in FIG. 7 is started by the HCU 100 after it has completed start-up processing and the like, for example when the vehicle power supply is switched on.
  • In the following, "S" denotes the steps of the flow executed according to the instructions included in the display control program.
  • First, the display generation unit 109 determines whether or not the LTA function is on based on the control information acquired by the control information acquisition unit 104. If the LTA function is determined to be off, the process waits until it is turned on. If the LTA function is determined to be on, the process proceeds to S20, and the scene determination unit 105 determines whether or not the current driving scene is a specific scene.
  • If the scene is determined to be a specific scene, the process proceeds to S30, and the display generation unit 109 displays the start content CTi. The process then proceeds to S40, the predicted trajectory content CTp is displayed, and the process proceeds to S50.
  • In S50, the scene determination unit 105 determines whether or not the specific scene has ended. If the specific scene has not ended, the process returns to S40 and the display of the predicted trajectory content CTp is continued. If the specific scene is determined to have ended, the process proceeds to S60, the display of the predicted trajectory content CTp is ended, and the process proceeds to S70.
  • In S70, it is determined whether or not the LTA function is off based on the control information. If the LTA function is determined not to be off, the process returns to S20. If the LTA function is determined to be off, the series of processes is ended.
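  • The flow of FIG. 7 can be summarized in the following compact sketch, driven here by a scripted input sequence; the step mapping in the comments follows the description above, while the data types and the sampling-based structure are assumptions.

```cpp
// Compact sketch of the flow of FIG. 7 (first embodiment), driven by a
// scripted scenario; the inputs stand in for the control information and the
// scene determination described above.
#include <cstdio>
#include <vector>

struct Tick { bool ltaOn; bool specificScene; };     // one sampling step of the inputs

void runDisplayControl(const std::vector<Tick>& ticks) {
    bool showingCTp = false;
    for (const Tick& t : ticks) {
        if (!t.ltaOn) {                              // S70: LTA off -> end of the series of processes
            if (showingCTp) std::puts("S60: hide predicted trajectory content CTp");
            std::puts("LTA off: end");
            showingCTp = false;
            continue;
        }
        if (t.specificScene && !showingCTp) {        // S20 -> S30 -> S40
            std::puts("S30: display start content CTi");
            std::puts("S40: display predicted trajectory content CTp");
            showingCTp = true;
        } else if (!t.specificScene && showingCTp) { // S50 -> S60: specific scene ended
            std::puts("S60: hide predicted trajectory content CTp");
            showingCTp = false;
        }
    }
}

int main() {
    runDisplayControl({{true, false}, {true, true}, {true, true}, {true, false}, {false, false}});
}
```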
  • As described above, the predicted trajectory content CTp is superimposed and displayed on the road surface in a specific scene in which the driver's confidence in the lane keeping control decreases, and is hidden in a scene that is not a specific scene. In other words, the predicted trajectory content CTp is displayed in the specific scene. The driver who views the predicted trajectory content CTp can therefore easily form an image that traveling within the lane will be maintained even in the specific scene. As a result, the driver's anxiety can be reduced.
  • In addition, the start content CTi indicating the start of display of the predicted trajectory content CTp is displayed before the predicted trajectory content CTp. The start of display of the predicted trajectory content CTp is thereby presented to the driver by the start content CTi, so the driver can easily understand that the predicted trajectory content CTp is being displayed. This enables a display that is easy for the driver to understand.
  • the display generation unit 109 can remind the driver of the image of the vehicle A traveling in the center of the own lane Lns. Therefore, the display generation unit 109 can further reduce the driver's anxiety about traveling in the lane due to the LTA function.
  • the display generation unit 109 can further reduce the anxiety of the driver.
  • the second embodiment is different from the first embodiment in the mode of movement of the predicted locus content CTp.
  • The display generation unit 109 of the second embodiment moves the boundary line superimposed on the outer peripheral side of the curve, of the boundary lines CTbl and CTbr, further toward the center than the boundary line superimposed on the inner peripheral side.
  • Here, the outer peripheral side is the side of the pair of boundary lines CTbl and CTbr farther from the center of curvature of the curved road, and the inner peripheral side is the side closer to the center of curvature.
  • For example, when the left boundary line CTbl is on the outer peripheral side, its movement end position is set closer to the center of the own lane Lns than the movement end position of the right boundary line CTbr. As a result, the left boundary line CTbl is displayed so as to move further toward the center than the right boundary line CTbr.
  • Since the boundary line superimposed on the outer peripheral side of the curve is displayed so as to move further toward the center than the boundary line superimposed on the inner peripheral side, an image of the vehicle departing from the curved road is less likely to be evoked.
  • the HCU 100 can further reduce the driver's anxiety.
  • In the third embodiment, the display generation unit 109 displays the predicted trajectory content CTp regardless of the determination result of the scene determination unit 105. The display generation unit 109 then changes the display mode of the predicted trajectory content CTp depending on whether the scene is determined to be a specific scene or not.
  • When it is determined that the scene is not a specific scene, the display generation unit 109 displays, as the predicted trajectory content CTp, a pair of boundary lines CTbl and CTbr that each emphasize one of the pair of boundaries of the own lane Lns, as shown in A of FIG. (normal display).
  • The pair of boundary lines in the third embodiment emphasize the left and right lane markings LL and LR of the own lane Lns, but a road edge, an arbitrarily set virtual boundary line, or the like may instead be emphasized as a boundary of the own lane Lns.
  • the pair of boundary lines CTbl and CTbr include a left boundary line CTbl and a right boundary line CTbr, and is an example of boundary content.
  • the left boundary line CTbl and the right boundary line CTbr in the third embodiment are displayed so as to stay at the superimposition position with the inside of the corresponding division lines LL and LR as the superimposition position, respectively.
  • Each boundary line CTbl, CTbr is displayed as, for example, a thin strip-shaped road paint extending continuously along the lane markings LL, LR.
  • When it is determined that the scene is a specific scene, the display generation unit 109 changes the boundary lines to a display mode in which the central portion in the lane width direction is emphasized compared with when the scene is determined not to be a specific scene (special display). Specifically, the display generation unit 109 moves the superimposition positions of the boundary lines CTbl and CTbr toward the center of the own lane Lns (see B in FIG. 9). As a result, the boundary lines CTbl and CTbr in the specific scene are closer to the center of the own lane Lns than in a non-specific scene. The magnitude of the movement from the superimposition position in the non-specific scene is set to be approximately the same for each of the boundary lines CTbl and CTbr.
  • The display generation unit 109 presents the change in the superimposition positions of the boundary lines CTbl and CTbr by an animated display. That is, when the scene is determined to be a specific scene, the pair of boundary lines CTbl and CTbr are displayed so as to move continuously toward the superimposition positions for the specific scene (see A in FIG. 9). The movement start and movement end timings of the boundary lines CTbl and CTbr are substantially the same. When the specific scene ends, the boundary lines CTbl and CTbr return to the superimposition positions for the non-specific scene by an animated display that moves in the direction opposite to the above (see C in FIG. 9).
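  • The change of superimposition position between the normal display and the special display, together with the continuous (animated) transition, can be sketched as follows; the offsets, animation duration, and linear easing are illustrative assumptions.

```cpp
// Sketch (assumed offsets and timing) of the third embodiment's display mode
// change: in a specific scene the superimposition positions of the boundary
// lines CTbl/CTbr shift toward the lane center, with an animated transition in
// and back out when the scene ends.
#include <algorithm>
#include <cstdio>

// Lateral superimposition offset from the lane marking toward the lane center.
double boundaryLineOffset(bool specificScene, double secondsSinceChange) {
    const double normalOffset  = 0.2;   // normal display: just inside the marking [m] (assumed)
    const double specialOffset = 0.8;   // special display: closer to the center [m] (assumed)
    const double animationSec  = 0.5;   // duration of the continuous movement (assumed)
    const double progress = std::min(secondsSinceChange / animationSec, 1.0);
    const double from = specificScene ? normalOffset  : specialOffset;
    const double to   = specificScene ? specialOffset : normalOffset;
    return from + (to - from) * progress;   // CTbl and CTbr move by about the same amount
}

int main() {
    for (double t = 0.0; t <= 0.5; t += 0.25)   // entering a specific scene
        std::printf("enter: t=%.2fs offset=%.2f m\n", t, boundaryLineOffset(true, t));
    for (double t = 0.0; t <= 0.5; t += 0.25)   // specific scene ends: move back
        std::printf("exit:  t=%.2fs offset=%.2f m\n", t, boundaryLineOffset(false, t));
}
```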
  • In the flow of the third embodiment, if the LTA function is determined to be on, the process proceeds to S15.
  • In S15, the predicted trajectory content CTp is displayed in the normal display mode, and the process proceeds to S20. If the scene is determined to be a specific scene in S20, the process proceeds to S45, and the predicted trajectory content CTp is displayed in the special display mode.
  • When the specific scene ends, the process proceeds to S65, the special display is ended, and the display mode returns to the normal display.
  • When the LTA function is determined to be off, the normal display is ended in S85, the predicted trajectory content CTp is hidden, and the series of processes is ended.
  • the display mode of the expected locus content CTp is changed depending on whether it is determined to be a specific scene or not. Therefore, the driver who visually recognizes the predicted locus content CTp can associate that the lane keeping control is executed after the specific scene is grasped on the vehicle side. Therefore, the driver can easily recall the image that the driving in the lane is maintained even in a specific scene. From the above, the HCU 100 can reduce the anxiety of the driver.
  • when it is determined that the scene is a specific scene, the expected trajectory content CTp is changed to a display mode that emphasizes the central part of the own lane Lns. Because the center side of the own lane Lns is emphasized in a specific scene, the driver can picture the vehicle A traveling in the center of the own lane Lns, which further impresses on the driver that in-lane driving is maintained even in a specific scene.
  • the pair of boundary lines CTbl and CTbr is displayed closer to the center than in the case where it is determined that the scene is not a specific scene. Complexity within the angle of view VA can therefore be suppressed compared with additionally displaying further content.
  • the display generation unit 109 of the fourth embodiment displays whichever of the boundary lines CTbl and CTbr is superimposed on the outer peripheral side of the curve closer to the center than the one superimposed on the inner peripheral side.
  • the left boundary line CTbl is displayed closer to the center than the right boundary line CTbr, and its separation distance from the lane marking is larger.
  • the HCU 100 can further reduce the driver's anxiety.
  • the display generation unit 109 displays the additional contents CTal and CTar in a portion on the center side of the pair of boundary lines CTbl and CTbr.
  • the additional contents CTal and CTar are expected locus content CTp that is displayed in addition to the pair of boundary lines CTbl and CTbr.
  • the additional contents CTal and CTar are formed in a thin band shape extending continuously along the expected locus PT, like the pair of boundary lines CTbl and CTbr, for example.
  • the additional contents CTal and CTar are displayed in a display color different from, for example, the pair of boundary lines CTbl and CTbr.
  • the additional contents CTal and CTar include a left additional content CTal displayed relatively on the left side and a right additional content CTar displayed relatively on the right side.
  • the additional contents CTal and CTar are displayed on the center side of the pair of boundary lines CTbl and CTbr, thereby emphasizing the central part of the own lane Lns to the driver.
  • the additional contents CTal and CTar are additionally displayed while the display of the pair of boundary lines CTbl and CTbr is maintained. Therefore, it is easy for the driver to understand that the content related to the LTA display is continuously displayed even in a specific scene. This enables a more understandable display.
  • the display generation unit 109 displays the central line CTc as the expected locus content CTp (see FIG. 13).
  • the central line CTc is content superimposed on the central portion of the own lane Lns.
  • the central line CTc is formed into, for example, a thin band extending along the expected locus PT.
  • the central line CTc is an example of central content.
  • the display generation unit 109 changes the display mode of the central line CTc to a display mode that emphasizes the boundaries of the own lane Lns. Specifically, when it is determined that the scene is a specific scene, the central line CTc is changed into the pair of boundary lines CTbl and CTbr (see FIG. 14). The boundary lines have superimposition positions on both outer sides of the central line CTc.
  • the display generation unit 109 continuously transforms the central line CTc into the pair of boundary lines CTbl and CTbr by an animation in which the single central line CTc branches into the two boundary lines CTbl and CTbr (see A in FIG. 14). Further, when the specific scene ends, the display generation unit 109 continuously transforms the pair of boundary lines CTbl and CTbr back into the central line CTc by an animation in which the two boundary lines CTbl and CTbr merge into the single central line CTc (see C in FIG. 14).
  • the expected locus content CTp is displayed as the central line CTc when it is determined that the scene is not a specific scene, and the display mode is changed so that the pair of lane markings LL and LR is emphasized when it is determined that the scene is a specific scene. The driver can therefore picture the vehicle traveling while staying inside the lane markings LL, LR in a specific scene.
  • when the display generation unit 109 determines that the scene is a specific scene, it continuously transforms the central line CTc into the pair of boundary lines CTbl and CTbr by an animation that splits the central line CTc into left and right parts.
  • the central line CTc is changed into the pair of boundary lines CTbl and CTbr by an animation in which the entire content is divided into left and right halves that each translate horizontally.
  • the pair of boundary lines CTbl and CTbr returns to the central line CTc by an animation that moves in the opposite direction.
  • when the display generation unit 109 determines that the scene is a specific scene, it additionally displays the pair of boundary lines CTbl and CTbr in addition to the central line CTc. Because content that emphasizes the boundaries is added, the predicted trajectory content CTp as a whole takes on a display mode that emphasizes the boundaries of the own lane Lns, as compared with the case where it is determined that the scene is not a specific scene.
  • the boundary lines CTbl and CTbr are additionally displayed while the display of the center line CTc is maintained. Therefore, it is easy for the driver to understand that the content related to the LTA display is continuously displayed even in a specific scene. This enables a more understandable display.
  • the display generation unit 109 expands the width of the center line CTc when it is determined that the scene is a specific scene.
  • the widthwise end of the center line CTc approaches the boundary of the own lane Lns by widening, so that the boundary is emphasized as compared with the case where it is not a specific scene.
  • the display generation unit 109 displays the wall content CTw when the specific scene is a curve running scene.
  • the wall contents CTw are superimposed contents CTs superimposed near the lane markings on the outer peripheral side of the curve.
  • the wall content CTw exhibits a wall shape erected so as to separate the own lane Lns from the area outside the lane.
  • the wall content CTw is displayed so as to be erected on the outer peripheral side of the expected locus content CTp.
  • the wall content CTw has a wall shape that rises upward from the lane marking.
  • the wall content CTw may have a wall shape rising from the road surface inside or outside the lane marking.
  • the wall content CTw has a shape extending along the own lane Lns.
  • the wall content CTw is displayed so as to extend from the start point to the end point of the curve.
  • the wall content CTw is superimposed and displayed on the outer peripheral side of the curve. Therefore, the driver can more easily recall the image that the vehicle A does not deviate to the outer peripheral side of the curve. Therefore, the display generation unit 109 can further reduce the anxiety of the driver.
  • the display generation unit 109 may display the above-mentioned wall content CTw in the cliff running scene.
  • the wall content CTw is superimposed and displayed near the lane marking on the side where the cliff is located.
  • the reliability is measured by, for example, DSM27.
  • the DSM27 measures driver stress as reliability. In this case, the higher the stress, the lower the reliability.
  • the DSM 27 may detect eye movements such as saccades, pupil opening, and the like by analyzing captured images, and the control unit may calculate a stress evaluation value for evaluating stress based on these. Further, the DSM 27 may use detection information from a biosensor (not shown) for the calculation of stress.
  • the detection information includes, for example, heart rate, sweating amount, body temperature, and the like.
  • the DSM27 sequentially provides the measured stress evaluation values to the HCU 100.
  • the driver information acquisition unit 101 of the HCU 100 acquires the stress evaluation value from the DSM 27 and provides it to the scene determination unit 105.
  • the scene determination unit 105 determines whether or not the current driving scene is a specific scene based on the stress evaluation value. That is, when the stress evaluation value is within the permissible range, the scene determination unit 105 determines that the current driving scene is a specific scene; conversely, when the stress evaluation value is out of the permissible range, it determines that the current driving scene is not a specific scene.
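A minimal sketch of that check is given below, assuming the permissible range is held as a simple interval; the type and function names are illustrative only.

    // Hypothetical sketch: permissible range of the stress evaluation value that
    // the scene determination unit 105 treats as corresponding to a specific scene.
    struct PermissibleRange {
        double lower;
        double upper;
    };

    // Returns true when the current driving scene is judged to be a specific scene,
    // i.e. when the measured stress evaluation value falls inside the permissible range.
    bool isSpecificScene(double stressEvaluationValue, const PermissibleRange& range) {
        return stressEvaluationValue >= range.lower &&
               stressEvaluationValue <= range.upper;
    }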
  • the above processing is executed in S20 in the flowchart of FIG. 7.
  • the scene determination unit 105 determines the permissible range for determining a specific scene by learning. Specifically, the scene determination unit 105 acquires information regarding the detection timing of the steering wheel grip or the steering operation during the execution of the LTA, or the interruption timing of the LTA due to the brake operation. In addition, the scene determination unit 105 acquires the stress evaluation value at the relevant timing. The scene determination unit 105 may learn the permissible range corresponding to a specific scene based on this information. The scene determination unit 105 may set a preset range instead of determining the permissible range by learning.
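One possible learning rule, sketched under the assumption that the stress evaluation values recorded at grip, steering, or brake-interruption timings are simply bracketed; the disclosure leaves the concrete learning method open, so this is illustrative only.

    #include <algorithm>
    #include <vector>

    struct PermissibleRange { double lower; double upper; };   // same struct as in the sketch above

    // Hypothetical sketch: derive the permissible range from stress evaluation values
    // recorded at the moments when the driver gripped the steering wheel, operated the
    // steering, or interrupted the LTA by braking.  Bracketing the recorded samples
    // with a margin is only one conceivable rule.
    PermissibleRange learnPermissibleRange(const std::vector<double>& recordedStress,
                                           double margin) {
        if (recordedStress.empty()) {
            return {0.0, 0.0};   // in practice, fall back to a preset range instead
        }
        const auto [minIt, maxIt] =
            std::minmax_element(recordedStress.begin(), recordedStress.end());
        return {*minIt - margin, *maxIt + margin};
    }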
  • the scene determination unit 105 may use information other than the actually measured reliability for determining a specific scene. For example, the scene determination unit 105 may combine the determination result of whether or not the current travel scene corresponds to any one of the curve travel scene, the poor-visibility scene, and the cliff travel scene shown in the first embodiment with the actually measured reliability to determine whether or not it is a specific scene. Specifically, the scene determination unit 105 may determine that the current driving scene is a specific scene when the current driving scene corresponds to any one of the above scenes and the reliability is within the range corresponding to the specific scene.
  • the configuration of the eleventh embodiment is also applicable to the HCU 100 that changes the display mode of the expected locus content CTp depending on whether it is determined to be a specific scene or not.
  • the display generation unit 109 of the HCU 100 displays the predicted locus content CTp in a more emphasized display mode as the driver's reliability in the lane keeping control in the specific scene is estimated to be lower.
  • the display generation unit 109 estimates that the greater the curvature of the lane of the curve road on which the vehicle travels, the lower the reliability in the curve travel scene.
  • the display generation unit 109 estimates that the more continuous the curve is, the lower the reliability is in the curve traveling scene. Further, the display generation unit 109 estimates that the smaller the width of the road on which the vehicle travels, the lower the reliability.
  • the display generation unit 109 may perform the above reliability estimation based on the high-precision map data.
  • the display generation unit 109 estimates that the lower the visibility of the lane markings due to faintness or the like, the lower the reliability.
  • the display generation unit 109 may estimate the visibility of the lane marking based on the detection information of the lane marking acquired by the outside world information acquisition unit 103.
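As a rough illustration of how such an estimate could be combined from the factors named above (lane curvature, how continuously curves follow one another, road width, and lane-marking visibility), the following sketch maps each factor to a 0-to-1 term and averages them; the weights and normalisation constants are assumptions, not values from the disclosure.

    #include <algorithm>

    // Hypothetical sketch: estimate the driver's reliability in lane keeping control
    // (0 = lowest, 1 = highest) from road factors named in the embodiment.
    struct RoadFactors {
        double curvature;          // [1/m]; larger means a tighter curve
        double curveContinuity;    // 0..1; larger means curves follow one another more
        double roadWidth;          // [m]
        double markingVisibility;  // 0..1; larger means clearly visible lane markings
    };

    double estimateReliability(const RoadFactors& f) {
        // Each term is mapped so that the tendencies described in the text (larger
        // curvature, more continuous curves, narrower road, fainter markings)
        // push the estimated reliability downward.
        const double curvatureTerm  = std::clamp(1.0 - f.curvature / 0.02, 0.0, 1.0);
        const double continuityTerm = 1.0 - std::clamp(f.curveContinuity, 0.0, 1.0);
        const double widthTerm      = std::clamp(f.roadWidth / 3.5, 0.0, 1.0);
        const double visibilityTerm = std::clamp(f.markingVisibility, 0.0, 1.0);
        return 0.25 * (curvatureTerm + continuityTerm + widthTerm + visibilityTerm);
    }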
  • the display generation unit 109 emphasizes the expected locus content CTp by, for example, increasing the repetition speed of the movement of the boundary lines CTbl and CTbr.
  • the display generation unit 109 may emphasize the expected locus content CTp by increasing the amount by which the boundary lines CTbl and CTbr move inward.
  • the display generation unit 109 may emphasize the display by increasing the brightness or the display size of each boundary line CTbl, CTbr.
  • the display generation unit 109 may emphasize the display by changing the display colors of the boundary lines CTbl and CTbr.
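A sketch of one possible mapping from the estimated reliability to the emphasis parameters listed above; the parameter ranges are placeholders chosen for illustration, not values from the disclosure.

    #include <algorithm>

    // Hypothetical sketch: the lower the estimated reliability (0..1), the faster the
    // repeated movement, the larger the inward shift, and the higher the brightness.
    struct EmphasisParams {
        double repetitionHz;   // repetition speed of the boundary-line movement
        double inwardShiftM;   // additional inward movement amount [m]
        double brightness;     // display brightness, 0..1
    };

    EmphasisParams emphasisFromReliability(double reliability) {
        const double r = std::clamp(reliability, 0.0, 1.0);
        const double emphasis = 1.0 - r;        // lower reliability -> stronger emphasis
        return {
            0.5 + 1.5 * emphasis,               // e.g. 0.5 Hz .. 2.0 Hz
            0.1 + 0.3 * emphasis,               // e.g. 0.1 m .. 0.4 m
            0.6 + 0.4 * emphasis                // e.g. 60 % .. 100 % of full brightness
        };
    }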
  • the display generation unit 109 executes the display process according to the above reliability in S40 of the flowchart of FIG. 7.
  • the display generation unit 109 may have a display mode in which the predicted locus content CTp is emphasized as the actually measured reliability is lower.
  • the description of the eleventh embodiment is referred to.
  • the configuration of the twelfth embodiment can also be applied to the HCU 100 that changes the display mode of the predicted locus content CTp depending on whether or not the scene is determined to be a specific scene. In that case, the display generation unit 109 may execute the display process according to the reliability in S45 of the flowchart of FIG. 10.
  • the predicted trajectory content CTp makes it easier for the driver to more reliably recall the image of maintaining the lane driving. Therefore, the driver's anxiety can be further reduced.
  • the predicted locus content CTp is displayed in a more emphasized display mode as the curvature of the traveling lane is larger.
  • as the curvature of the lane increases, a relatively large acceleration can act on the vehicle A, so the driver is more likely to feel anxiety about lane keeping control. Because the predicted locus content CTp is emphasized more strongly as the curvature of the lane becomes larger, the driver can more reliably recall the image that in-lane driving is maintained.
  • the driver's anxiety in the curve driving scene can therefore be further reduced.
  • the driver information acquisition unit 101 acquires the presence / absence of gripping of the steering wheel (hereinafter referred to as gripping information) in addition to the position and line-of-sight direction of the driver's eye point EP.
  • the grip information may be specified by, for example, image analysis by DSM27, or by a grip sensor or steer sensor (not shown).
  • the state in which the driver is gripping the steering wheel may be referred to as a "hands-on state”
  • the state in which the driver is suspending the grip may be referred to as a "hands-off state”.
  • the control information acquisition unit 104 acquires, in addition to the lane maintenance control information, the level information of automatic driving when the LTA function is executed from the lane maintenance control units 51 and 61.
  • the level information may be at least enough information to determine whether the automated driving level is 2 or lower or 3 or higher. In other words, the level information need only make it possible to determine whether or not a peripheral monitoring obligation applies while the LTA function is executed. The control information acquisition unit 104 may determine from which of the lane keeping control units 51 and 61 of the ECUs 50 and 60 the information indicating that the LTA function is turned on is provided, and may generate the level information based on the determination result.
  • the control information acquisition unit 104 acquires track information on the planned travel of the vehicle A when automated driving of level 3 or higher is executed.
  • the track information includes at least information about the route that vehicle A is going to follow.
  • the track information may include information about the speed at which the route travels.
  • the display generation unit 109 determines whether or not to execute the display of the predicted trajectory content CTp based on the gripping information, the level information, and the trajectory information in addition to the determination result of the scene determination unit 105.
  • when the automated driving level is 2 or lower and the hands-on state is determined, the display generation unit 109 cancels the display of the predicted locus content CTp even if the scene is determined to be a specific scene. On the other hand, when the automated driving level is 2 or lower and the hands-off state is determined, the display generation unit 109 displays the expected locus content CTp if the scene is determined to be a specific scene.
  • the display generation unit 109 displays the expected locus content CTp regardless of whether or not the scene is determined to be a specific scene.
  • the display generation unit 109 evaluates the magnitude of the vehicle behavior based on the future acceleration predicted to act on the vehicle A.
  • the magnitude of vehicle behavior may be evaluated based on at least one of lateral acceleration and front-rear acceleration.
  • the display generation unit 109 may predict the future acceleration based on the orbit information.
  • the display generation unit 109 may acquire the magnitude of the vehicle behavior predicted by the automatic driving ECU 60 or the like.
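A minimal sketch of such an evaluation, assuming the predicted accelerations are available as samples along the planned track and that the peak of the lateral and longitudinal components is compared with a single permissible value (both assumptions for illustration):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Hypothetical sketch: evaluate the magnitude of the future vehicle behaviour from
    // accelerations predicted along the planned track and compare it with a permissible value.
    struct PredictedAcceleration {
        double lateral;        // [m/s^2]
        double longitudinal;   // [m/s^2]
    };

    bool behaviourWithinPermissibleRange(const std::vector<PredictedAcceleration>& track,
                                         double permissibleAccel) {
        double peak = 0.0;
        for (const auto& a : track) {
            peak = std::max({peak, std::fabs(a.lateral), std::fabs(a.longitudinal)});
        }
        return peak <= permissibleAccel;
    }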
  • the process proceeds to S15.
  • the display generation unit 109 determines whether or not the automatic operation level is 3 or higher. If it is determined that the automatic operation level is 3 or higher, the process proceeds to S16.
  • the display generation unit 109 estimates the magnitude of the vehicle behavior and determines whether or not the magnitude is within the permissible range. If it is determined that it is out of the permissible range, the process proceeds to S20, and if it is determined that it is within the permissible range, the process proceeds to S30.
  • the process proceeds to S20.
  • when the scene determination unit 105 determines in S20 that the current driving scene corresponds to a specific scene, the process proceeds to S25.
  • the display generation unit 109 determines whether or not the automated driving level is 2 or lower and the hands-on state applies.
  • if that is not the case, the process proceeds to S30. On the other hand, if it is determined that the automated driving level is 2 or lower and the hands-on state applies, the process proceeds to S70.
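Pulling the stated conditions together, the sketch below encodes only what the preceding bullets spell out for the thirteenth embodiment: with automated driving level 2 or lower, the hands-on state cancels the display even in a specific scene and the hands-off state allows it. The level-3 behaviour check of S16 is left out because its outcome mapping is not fully specified in this excerpt; names and framing are assumptions for illustration.

    // Hypothetical sketch of the display decision for the predicted locus content CTp.
    bool shouldDisplayExpectedLocus(int automatedDrivingLevel,
                                    bool handsOn,
                                    bool specificScene) {
        if (automatedDrivingLevel <= 2 && handsOn) {
            return false;        // display cancelled in the hands-on state, even in a specific scene
        }
        return specificScene;    // otherwise, display when a specific scene is determined
    }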
  • the operations that the driver can perform when feeling anxiety about lane keeping control may include gripping the steering wheel.
  • the display of the content CTp can be controlled more appropriately according to the necessity of displaying the expected locus content CTp.
  • the predicted trajectory content CTp can be displayed more reliably in a situation where the driver may feel uneasy at the automatic driving level 3 or higher where the future behavior generated in the vehicle A can be easily predicted from the track information or the like.
  • the configuration of the thirteenth embodiment is naturally applicable to the HCU 100 that changes the display mode of the expected locus content CTp depending on whether or not the scene is determined to be a specific scene.
  • when the HCU 100 to which the configuration of the thirteenth embodiment is applied determines that the driver has a peripheral monitoring obligation during execution of the lane keeping control and is in the hands-on state, the HCU 100 can be configured so that the display mode of the predicted locus content CTp is the same as the display mode used when the current driving scene is determined not to be a specific scene, even when the current driving scene is determined to be a specific scene.
  • when the HCU 100 to which the configuration of the thirteenth embodiment is applied determines that the driver has no peripheral monitoring obligation during execution of the lane keeping control and that the magnitude of the predicted vehicle behavior is out of the permissible range, the HCU 100 can be configured so that the display mode of the predicted locus content CTp is the same as the display mode used when the scene is determined to be a specific scene, even when the current driving scene is determined not to be a specific scene.
  • the scene determination unit 105 determines the scene based on the high-precision map data around the vehicle A or the detection information of the peripheral monitoring sensor 30.
  • the scene determination unit 105 may be configured to perform scene determination based on other information. For example, the scene determination unit 105 may determine whether or not the scene has poor visibility by acquiring weather information from an external server via the DCM49.
  • the scene determination unit 105 executes scene determination as to whether or not it is a specific scene based on various acquired information. Instead of this, the scene determination unit 105 may execute the scene determination by acquiring the scene determination result executed by another ECU such as the driving support ECU 50 or the automatic driving ECU 60.
  • the boundary lines CTbl and CTbr are superimposed and displayed inside the lane markings LL and LR, but they may instead be superimposed and displayed on the lane markings LL and LR, or on the outside of the lane markings LL and LR.
  • the boundary line may be displayed at a place corresponding to the boundary line other than the lane marking line, such as the left and right road edges.
  • the display generation unit 109 hides the predicted locus content CTp when it is determined that the scene is not a specific scene, but LTA-related content other than the predicted locus may be displayed even when the scene is not a specific scene.
  • the display generation unit 109 may display contents such as character information and icons indicating that LTA is being executed separately from the predicted locus content CTp.
  • the display generation unit 109 displays the expected locus content CTp so as to move continuously. Instead, the display generation unit 109 may display the expected locus content CTp so as to move intermittently. Further, the display generation unit 109 may display the predicted locus content CTp in a movement pattern different from the movement from both outer sides to the center side. Further, the display generation unit 109 may use the expected locus content CTp as static content to be displayed while staying in place, as in the third embodiment. Further, the display generation unit 109 may display the expected locus content CTp without displaying the start content CTi.
  • the display generation unit 109 has a display mode in which the predicted locus content CTp emphasizes the central portion in the lane width direction in the case of a specific scene.
  • the display generation unit 109 may change the display mode by increasing the brightness of the predicted locus content CTp in the case of a specific scene.
  • the display generation unit 109 may change the display mode by reducing the transmittance of the predicted locus content CTp, changing the display color, enlarging the display size, and the like.
  • the display generation unit 109 continuously changes the display mode of the predicted locus content CTp by animation.
  • the display generation unit 109 may be configured to display the predicted locus content CTp after the change after the predicted locus content CTp before the display mode change is once hidden.
  • the predicted locus content CTp is a continuous thin band, but the display shape of the predicted locus content CTp is not limited to this.
  • the predicted locus content CTp may be a plurality of figures arranged along the predicted locus PT, or may have an arrow shape.
  • the status image CTst is displayed on the meter display 23, but it may be displayed on another vehicle-mounted display such as a center information display.
  • the status image CTst is displayed in a display mode different from the expected locus content CTp in a specific scene by blinking display.
  • the status image CTst may be displayed in another display mode.
  • the status image CTst does not have to be completely turned off as long as the display mode is such that the maximum brightness state and the minimum brightness state are repeated.
  • the status image CTst may be displayed in a different display mode from the predicted locus content CTp by moving and displaying the status image CTst in a movement pattern different from the predicted locus content CTp.
  • the scene determination unit 105 determines a specific scene by using the stress of the driver as the reliability.
  • the scene determination unit 105 may determine a specific scene by using an index other than stress as the reliability as long as the tension or anxiety about the driver's lane keeping control can be estimated.
  • the scene determination unit 105 may determine a specific scene by using the degree of gaze toward the front of the driver as the reliability. The degree of gaze may be measured by DSM27.
  • the scene determination unit 105 may determine a specific scene, assuming that the higher the gaze degree, the lower the reliability.
  • the processing unit and processor of the above-described embodiment include one or a plurality of CPUs (Central Processing Units).
  • the processing unit and the processor may include a GPU (Graphics Processing Unit), a DFP (Data Flow Processor), and the like in addition to the CPU.
  • the processing unit and the processor may be a processing unit including an FPGA (Field-Programmable Gate Array) and an IP core specialized in specific processing such as learning and inference of AI.
  • Each arithmetic circuit unit of such a processor may be individually mounted on a printed circuit board, or may be mounted on an ASIC (Application Specific Integrated Circuit), an FPGA, or the like.
  • non-transitory tangible storage media such as flash memory and hard disks can be adopted as the memory device for storing the control program.
  • the form of such a storage medium may also be changed as appropriate.
  • the storage medium may be in the form of a memory card or the like, and may be inserted into a slot portion provided in an in-vehicle ECU and electrically connected to a control circuit.
  • the control unit and the method thereof described in the present disclosure may be realized by a dedicated computer constituted by a processor programmed to execute one or more functions embodied by a computer program.
  • the apparatus and method thereof described in the present disclosure may be realized by a dedicated hardware logic circuit.
  • the apparatus and method thereof described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits.
  • the computer program may be stored in a computer-readable non-transitional tangible recording medium as an instruction executed by the computer.

Abstract

A display control device (100) controls display of content by a head-up display (20) of a vehicle provided with a lane keeping control unit (51, 61) that can execute lane keeping control for keeping the vehicle running in its lane. The display control device is provided with a scene determination unit (105) that determines whether the current scene is a specific scene in which the degree of reliance of the driver on lane keeping control decreases. The display control device is provided with a display generation unit (109) that displays predicted track content by superimposing the predicted track content on the road surface in response to a determination that the current scene is a specific scene, the predicted track content indicating a predicted track produced by lane keeping control, and does not display the predicted track content in response to a determination that the current scene is not a specific scene.

Description

Display control device and display control program

Cross-reference of related applications
This application is based on Japanese Patent Application No. 2019-182434 filed on October 2, 2019 and Japanese Patent Application No. 2020-145431 filed on August 31, 2020, the entire contents of which are incorporated herein by reference.
The disclosure in this specification relates to a technique for controlling the display of content by a head-up display.
Patent Document 1 discloses a vehicle display device that superimposes and displays content by a head-up display. This vehicle display device superimposes a guidance display indicating the route from the traveling position of the own vehicle to a guidance point on the driver's forward view.
International Publication No. 2015/118859
In recent years, a lane keeping control function that controls a vehicle so as to maintain driving within its lane has been put into practical use. However, during execution of this lane keeping control function, there are scenes in which the driver tends to feel uneasy as to whether in-lane driving will actually be maintained. Patent Document 1 does not describe a display that reduces the driver's anxiety in such scenes.
An object of the disclosure is to provide a display control device and a display control program capable of reducing the driver's anxiety.
The plurality of aspects disclosed in this specification employs mutually different technical means in order to achieve their respective objects. The claims and the reference numerals in parentheses described in this section are an example showing the correspondence with specific means described in the embodiments described later as one aspect, and do not limit the technical scope.
One of the disclosed display control devices is a display control device that controls the display of content by a head-up display of a vehicle including a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane. The display control device includes a scene determination unit that determines whether or not the current scene is a specific scene in which the driver's reliability in the lane keeping control decreases, and a display control unit that superimposes predicted trajectory content indicating the predicted trajectory produced by the lane keeping control on the road surface when the scene is determined to be a specific scene, and hides the predicted trajectory content when the scene is determined not to be a specific scene.
One of the disclosed display control programs is a display control program that controls the display of content by a head-up display of a vehicle including a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane. The program causes at least one processing unit to execute processing including: determining whether or not the current scene is a specific scene in which the driver's reliability in the lane keeping control decreases; superimposing predicted trajectory content indicating the predicted trajectory produced by the lane keeping control on the road surface when the scene is determined to be a specific scene; and hiding the predicted trajectory content when the scene is determined not to be a specific scene.
According to these disclosures, the predicted trajectory content is displayed when the scene is determined to be a specific scene in which the driver's reliability in the lane keeping control decreases. The driver who visually recognizes the predicted trajectory content can therefore easily recall the image that in-lane driving is maintained even in a specific scene. As described above, a display control device and a display control program capable of reducing the driver's anxiety can be provided.
Another of the disclosed display control devices is a display control device that controls the display of content by a head-up display of a vehicle including a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane. The display control device includes a scene determination unit that determines whether or not the current scene is a specific scene in which the driver's reliability in the lane keeping control decreases, and a display control unit that superimposes predicted trajectory content indicating the predicted trajectory produced by the lane keeping control on the road surface and changes the display mode of the predicted trajectory content depending on whether the scene is determined to be a specific scene or not.
Another of the disclosed display control programs is a display control program that controls the display of content by a head-up display of a vehicle including a lane keeping control unit capable of executing lane keeping control for keeping the vehicle traveling within its lane. The program causes at least one processing unit to execute processing including: determining whether or not the current scene is a specific scene in which the driver's reliability in the lane keeping control decreases; superimposing predicted trajectory content indicating the predicted trajectory produced by the lane keeping control on the road surface; and changing the display mode of the predicted trajectory content depending on whether the scene is determined to be a specific scene or not.
According to these disclosures, the display mode of the predicted trajectory content is changed depending on whether or not the scene is a specific scene in which the driver's reliability in the lane keeping control decreases. The driver who visually recognizes the predicted trajectory content whose display mode has been changed can infer that the lane keeping control is executed with the specific scene already recognized on the vehicle side, and can therefore easily recall the image that in-lane driving is maintained even in a specific scene. As described above, a display control device and a display control program capable of reducing the driver's anxiety can be provided.
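The two disclosed behaviours of the display control unit can be summarised structurally as follows; this is only an illustrative sketch of the summary above, with hypothetical type names rather than anything defined in the disclosure.

    // Hypothetical structural sketch: variant (a) shows or hides the predicted
    // trajectory content depending on the specific-scene determination, while
    // variant (b) always shows it but switches between a normal and an emphasised
    // display mode.
    enum class DisplayMode { Hidden, Normal, Emphasised };

    DisplayMode decideDisplayVariantA(bool specificScene) {
        return specificScene ? DisplayMode::Normal : DisplayMode::Hidden;
    }

    DisplayMode decideDisplayVariantB(bool specificScene) {
        return specificScene ? DisplayMode::Emphasised : DisplayMode::Normal;
    }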
FIG. 1 is a diagram showing an overall image of an in-vehicle network including an HCU according to the first embodiment of the present disclosure.
FIG. 2 is a diagram showing an example of a head-up display mounted on a vehicle.
FIG. 3 is a diagram showing an example of the schematic configuration of the HCU.
FIG. 4 is a diagram visualizing an example of the display layout simulation performed by the display generation unit.
FIG. 5 is a diagram showing an example of the LTA display on the HUD.
FIG. 6 is a diagram showing an example of the LTA display on the meter display.
FIG. 7 is a flowchart showing an example of the display control method executed by the HCU.
FIG. 8 is a diagram showing an example of the LTA display in the second embodiment.
FIG. 9 is a diagram showing an example of the LTA display in the third embodiment.
FIG. 10 is a flowchart showing an example of the display control method executed by the HCU in the third embodiment.
FIG. 11 is a diagram showing an example of the LTA display in the fourth embodiment.
FIG. 12 is a diagram showing an example of the LTA display in the fifth embodiment.
FIG. 13 is a diagram showing an example of the LTA display in the sixth embodiment.
FIG. 14 is a diagram showing an example of the LTA display in the sixth embodiment.
FIG. 15 is a diagram showing an example of the LTA display in the seventh embodiment.
FIG. 16 is a diagram showing an example of the LTA display in the eighth embodiment.
FIG. 17 is a diagram showing an example of the LTA display in the ninth embodiment.
FIG. 18 is a diagram showing an example of the LTA display in the tenth embodiment.
FIG. 19 is a diagram showing an example of the schematic configuration of the HCU in the eleventh embodiment.
FIG. 20 is a flowchart showing an example of the display control method executed by the HCU in the thirteenth embodiment.
(First Embodiment)
The function of the display control device according to the first embodiment of the present disclosure is realized by the HCU (Human Machine Interface Control Unit) 100 shown in FIGS. 1 to 3. The HCU 100 constitutes the HMI (Human Machine Interface) system 10 of the vehicle A together with a head-up display (hereinafter, "HUD") 20 and the like. The HMI system 10 further includes an operation device 26, a DSM (Driver Status Monitor) 27, and the like. The HMI system 10 has an input interface function for accepting user operations by the driver, an occupant of the vehicle A, and an output interface function for presenting information to the occupants.
The HMI system 10 is communicably connected to the communication bus 99 of the vehicle-mounted network mounted on the vehicle A. The HMI system 10 is one of a plurality of nodes provided in the in-vehicle network. For example, a peripheral monitoring sensor 30, a locator 40, a DCM49, a driving support ECU 50, an automatic driving ECU 60, and the like are connected to the communication bus 99 of the vehicle-mounted network as nodes. These nodes connected to the communication bus 99 can communicate with each other.
The peripheral monitoring sensor 30 is an autonomous sensor that monitors the surrounding environment of the vehicle A. From the detection range around the own vehicle, the peripheral monitoring sensor 30 can detect moving objects such as pedestrians, cyclists, animals other than humans, and other vehicles, as well as stationary objects such as fallen objects on the road, guardrails, curbs, road surface markings including traveling lane markings, and roadside structures. The peripheral monitoring sensor 30 provides detection information on objects detected around the vehicle A to the driving support ECU 50 and the like through the communication bus 99.
The peripheral monitoring sensor 30 has a front camera 31 and a millimeter wave radar 32 as detection configurations for object detection. The front camera 31 outputs, as detection information, at least one of imaging data obtained by photographing the front range of the vehicle A and the analysis result of that imaging data. A plurality of millimeter wave radars 32 are arranged, for example, on the front and rear bumpers of the vehicle A at intervals from each other. The millimeter wave radar 32 irradiates millimeter waves or quasi-millimeter waves toward the front range, front side range, rear range, rear side range, and the like of the vehicle A. The millimeter wave radar 32 generates detection information by a process of receiving reflected waves reflected by moving objects, stationary objects, and the like. Other detection configurations such as a lidar and a sonar may also be included in the peripheral monitoring sensor 30.
The locator 40 generates highly accurate position information of the vehicle A and the like by complex positioning that combines multiple pieces of acquired information. The locator 40 can specify, for example, the lane in which the vehicle A travels among a plurality of lanes. The locator 40 includes a GNSS (Global Navigation Satellite System) receiver 41, an inertial sensor 42, a high-precision map database (hereinafter, "high-precision map DB") 43, and a locator ECU 44.
The GNSS receiver 41 receives positioning signals transmitted from a plurality of artificial satellites (positioning satellites). The GNSS receiver 41 can receive a positioning signal from each positioning satellite of at least one satellite positioning system among satellite positioning systems such as GPS, GLONASS, Galileo, IRNSS, QZSS, and Beidou. The inertial sensor 42 has, for example, a gyro sensor and an acceleration sensor.
The high-precision map DB 43 is mainly composed of a non-volatile memory, and stores map data with higher accuracy (hereinafter, "high-precision map data") than that used for normal navigation. The high-precision map data holds detailed information at least for information in the height (z) direction. The high-precision map data includes information that can be used for advanced driving support and automated driving, such as three-dimensional road shape information (road structure information), lane count information, and information indicating the direction of travel permitted for each lane.
The locator ECU 44 is a control unit mainly including a microcomputer provided with a processor, a RAM, a storage unit, an input/output interface, and a bus connecting them. The locator ECU 44 combines the positioning signals received by the GNSS receiver 41, the measurement results of the inertial sensor 42, the vehicle speed information output to the communication bus 99, and the like, and sequentially determines the position and traveling direction of the vehicle A. The locator ECU 44 provides the position information and direction information of the vehicle A based on the positioning results to the driving support ECU 50, the automatic driving ECU 60, the HCU 100, and the like through the communication bus 99. Further, the locator ECU 44 provides high-precision map data around the vehicle position to the HCU 100, the driving support ECU 50, and the like through the communication bus 99.
The vehicle speed information is information indicating the current traveling speed of the vehicle A, and is generated based on the detection signals of wheel speed sensors provided in the hub portions of the wheels of the vehicle A. The node (ECU) that generates the vehicle speed information and outputs it to the communication bus 99 may be changed as appropriate. For example, a brake control ECU that controls the distribution of braking force to each wheel, or an in-vehicle ECU such as the HCU 100, may be electrically connected to the wheel speed sensor of each wheel and continuously generate vehicle speed information and output it to the communication bus 99.
The DCM (Data Communication Module) 49 is a communication module mounted on the vehicle A. The DCM49 transmits and receives radio waves to and from base stations around the vehicle A by wireless communication in accordance with communication standards such as LTE (Long Term Evolution) and 5G. By installing the DCM49, the vehicle A becomes a connected car that can connect to the Internet. The DCM49 can acquire the latest high-precision map data from a probe server provided on the cloud. The DCM49 cooperates with the locator ECU 44 to update the high-precision map data stored in the high-precision map DB 43 to the latest information.
The driving support ECU 50 and the automatic driving ECU 60 each mainly include a computer equipped with a processor, a RAM, a storage unit, an input/output interface, and a bus connecting them. The driving support ECU 50 has a driving support function that supports the driving operation of the driver. The automatic driving ECU 60 has an automated driving function capable of performing driving operations on behalf of the driver. As an example, in terms of the automated driving levels defined by the Society of Automotive Engineers (SAE), the driving support ECU 50 enables partial automated driving control (advanced driving support) of level 2 or lower. On the other hand, the automatic driving ECU 60 enables automated driving control of level 3 or higher. In other words, the driving support ECU 50 executes automated driving in which the driver is required to monitor the surroundings, and the automatic driving ECU 60 executes automated driving in which the driver is not required to monitor the surroundings.
The driving support ECU 50 and the automatic driving ECU 60 each recognize the traveling environment around the vehicle A for the driving control described later, based on the detection information acquired from the peripheral monitoring sensor 30. Each of the ECUs 50 and 60 provides the HCU 100 with the analysis results of the detection information performed for recognizing the traveling environment, as analyzed detection information. As an example, each of the ECUs 50 and 60 can provide the HCU 100 with information indicating the relative positions and shapes of the left and right lane markings LL, LR or the road edges, as boundary information regarding the boundaries of the lane in which the vehicle A is currently traveling (hereinafter, "own lane Lns", see FIG. 5). The left-right direction is a direction that coincides with the width direction of the vehicle A when stationary on a horizontal plane, and is set with reference to the traveling direction of the vehicle A. Further, each of the ECUs 50 and 60 analyzes information regarding the weather conditions in the traveling area and provides it to the HCU 100 as weather information. The weather information includes at least information on whether or not the weather causes poor visibility, such as rain, snow, or fog. The weather information is analyzed, for example, by image processing of the captured images of the front camera 31.
The driving support ECU 50 has a plurality of functional units that realize advanced driving support through execution of programs by its processor. Specifically, the driving support ECU 50 has an ACC (Adaptive Cruise Control) control unit and a lane keeping control unit 51. The ACC control unit is a functional unit that realizes the ACC function of driving the vehicle A at a constant target vehicle speed or causing the vehicle A to follow a preceding vehicle while maintaining the inter-vehicle distance.
The lane keeping control unit 51 is a functional unit that realizes an LTA (Lane Tracing Assist) function for keeping the vehicle A traveling within its lane. LTA is also referred to as LTC (Lane Trace Control). The LTA function is an example of a lane keeping control function. The lane keeping control unit 51 controls the steering angle of the steered wheels of the vehicle A based on the boundary information extracted from the detection data of the peripheral monitoring sensor 30. The lane keeping control unit 51 generates a planned traveling line shaped along the own lane Lns so that the vehicle A travels in the center of the own lane Lns during traveling. The lane keeping control unit 51 cooperates with the ACC control unit to perform driving control (hereinafter, "lane keeping control") that causes the vehicle A to travel within the own lane Lns according to the planned traveling line.
The automatic driving ECU 60 has a plurality of functional units that realize autonomous driving of the vehicle A through execution of programs by its processor. The automatic driving ECU 60 generates a planned traveling line based on the high-precision map data acquired from the locator 40, the vehicle position information, and the extracted boundary information. The automatic driving ECU 60 executes acceleration/deceleration control, steering control, and the like so that the vehicle A travels along the planned traveling line.
In the automatic driving ECU 60 described above, the functional unit that performs lane keeping control substantially identical to that of the lane keeping control unit 51 of the driving support ECU 50, that is, the functional unit that causes the vehicle A to travel within the own lane Lns, is referred to as the lane keeping control unit 61 for convenience. The user can exclusively use one of the lane keeping control units 51 and 61.
Note that the lane keeping control units 51 and 61 may generate the planned traveling line based on the traveling locus of a preceding vehicle when the boundary information cannot be acquired. For example, when the boundaries cannot be detected by the peripheral monitoring sensor 30 due to poor visibility such as fog but a preceding vehicle can be detected, the lane keeping control units 51 and 61 generate the planned traveling line from the traveling locus based on the detection information of the preceding vehicle, and control the traveling of the vehicle A along that planned traveling line.
When the lane keeping control is activated based on, for example, a user operation on the operation device 26, the lane keeping control units 51 and 61 sequentially provide lane keeping control information related to the lane keeping control to the HCU 100 through the communication bus 99. The lane keeping control information includes at least status information indicating the operating state of the lane keeping control and line shape information indicating the shape of the planned traveling line.
The status information is information indicating whether the lane keeping control function is in the off state, the standby state, or the execution state. The standby state is a state in which the lane keeping control has been activated but driving control is not being performed. The execution state, on the other hand, is a state in which the driving control has been made active based on the establishment of an execution condition. The execution condition is, for example, that the lane markings on both sides can be recognized. The line shape information includes at least the three-dimensional coordinates of a plurality of specific points that define the shape of the planned traveling line, as well as the lengths and curvature radii of the virtual lines connecting the specific points.
Note that the line shape information may instead consist of a large number of coordinate entries. Each coordinate entry indicates a point lined up on the planned traveling line at a predetermined interval. Even with line shape information in such a data format, the HCU 100 can restore the shape of the planned traveling line from the large number of coordinate entries.
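One way the lane keeping control information described above could be laid out as data structures is sketched below; the field names are illustrative and not taken from the disclosure.

    #include <vector>

    // Hypothetical sketch of the lane keeping control information: a status value
    // (off / standby / executing) plus line shape information given either as specific
    // points with per-segment data or as a dense series of coordinates.
    enum class LtaStatus { Off, Standby, Executing };

    struct Point3d { double x, y, z; };

    struct LineShapeInfo {
        std::vector<Point3d> specificPoints;   // points defining the planned traveling line
        std::vector<double>  segmentLengths;   // length of each virtual connecting line
        std::vector<double>  curvatureRadii;   // radius of curvature of each segment
        std::vector<Point3d> denseCoordinates; // alternative: points at predetermined intervals
    };

    struct LaneKeepingControlInfo {
        LtaStatus     status;
        LineShapeInfo lineShape;
    };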
 次に、HMIシステム10に含まれる操作デバイス26、DSM27、HUD20およびHCU100の各詳細を、図1および図2に基づき順に説明する。 Next, details of the operating devices 26, DSM27, HUD20, and HCU100 included in the HMI system 10 will be described in order based on FIGS. 1 and 2.
 The operation device 26 is an input unit that accepts user operations by the driver or other occupants. User operations such as switching the LTA function and the ACC function on and off and setting the inter-vehicle distance are input to the operation device 26. Specifically, the operation device 26 includes a steering switch provided on a spoke portion of the steering wheel, an operation lever provided on the steering column portion 8, a voice input device that detects the driver's speech, and the like.
 DSM27は、近赤外光源および近赤外カメラと、これらを制御する制御ユニットとを含む構成である。DSM27は、運転席のヘッドレスト部に近赤外カメラを向けた姿勢にて、例えばステアリングコラム部8の上面またはインスツルメントパネル9の上面等に設置されている。DSM27は、近赤外光源によって近赤外光を照射されたドライバの頭部を、近赤外カメラによって撮影する。近赤外カメラによる撮像画像は、制御ユニットによって画像解析される。制御ユニットは、アイポイントEPの位置および視線方向等の情報を撮像画像から抽出し、抽出した状態情報をHCU100へ向けて逐次出力する。 The DSM27 has a configuration including a near-infrared light source, a near-infrared camera, and a control unit for controlling them. The DSM 27 is installed in a posture in which the near-infrared camera is directed toward the headrest portion of the driver's seat, for example, on the upper surface of the steering column portion 8 or the upper surface of the instrument panel 9. The DSM27 uses a near-infrared camera to photograph the head of the driver irradiated with near-infrared light by a near-infrared light source. The image captured by the near-infrared camera is image-analyzed by the control unit. The control unit extracts information such as the position of the eye point EP and the line-of-sight direction from the captured image, and sequentially outputs the extracted state information toward the HCU 100.
 HUD20は、メータディスプレイ23およびセンターインフォメーションディスプレイ等とともに、複数の車載表示デバイスの1つとして、車両Aに搭載されている。HUD20は、HCU100と電気的に接続されており、HCU100によって生成された映像データを逐次取得する。HUD20は、映像データに基づき、例えばルート情報、標識情報、および各車載機能の制御情報等、車両Aに関連する種々の情報を、虚像Viを用いてドライバに提示する。 The HUD 20 is mounted on the vehicle A as one of a plurality of in-vehicle display devices together with the meter display 23, the center information display, and the like. The HUD 20 is electrically connected to the HCU 100 and sequentially acquires video data generated by the HCU 100. Based on the video data, the HUD 20 presents various information related to the vehicle A, such as route information, sign information, and control information of each in-vehicle function, to the driver using the virtual image Vi.
 HUD20は、ウィンドシールドWSの下方にて、インスツルメントパネル9内の収容空間に収容されている。HUD20は、虚像Viとして結像される光を、ウィンドシールドWSの投影範囲PAへ向けて投影する。ウィンドシールドWSに投影された光は、投影範囲PAにおいて運転席側へ反射され、ドライバによって知覚される。ドライバは、投影範囲PAを通して見える前景に、虚像Viが重畳された表示を視認する。 The HUD 20 is housed in the storage space inside the instrument panel 9 below the windshield WS. The HUD 20 projects the light formed as a virtual image Vi toward the projection range PA of the windshield WS. The light projected on the windshield WS is reflected toward the driver's seat side in the projection range PA and is perceived by the driver. The driver visually recognizes the display in which the virtual image Vi is superimposed on the foreground seen through the projection range PA.
 The HUD 20 includes a projector 21 and a magnifying optical system 22. The projector 21 has an LCD (Liquid Crystal Display) panel and a backlight. The projector 21 is fixed to the housing of the HUD 20 with the display surface of the LCD panel facing the magnifying optical system 22. The projector 21 displays each frame image of the video data on the display surface of the LCD panel and transilluminates the display surface with the backlight, thereby emitting the light to be formed into the virtual image Vi toward the magnifying optical system 22. The magnifying optical system 22 includes at least one concave mirror in which a metal such as aluminum is vapor-deposited on the surface of a base material made of synthetic resin, glass, or the like. The magnifying optical system 22 projects the light emitted from the projector 21 onto the projection range PA above it while spreading the light by reflection.
 An angle of view VA is set for the HUD 20 described above. If the virtual range in space in which the HUD 20 can form the virtual image Vi is taken as the imaging plane IS, the angle of view VA is the viewing angle defined by the virtual lines connecting the driver's eye point EP and the outer edge of the imaging plane IS. The angle of view VA is the angular range within which the driver can visually recognize the virtual image Vi as seen from the eye point EP. In the HUD 20, the horizontal angle of view is larger than the vertical angle of view. Seen from the eye point EP, the forward range overlapping the imaging plane IS is the range within the angle of view VA.
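 The relationship between the eye point EP, the imaging plane IS, and the angle of view VA can be illustrated by a short calculation; the following Python sketch and its example coordinates are illustrative assumptions only.

import math


def horizontal_angle_of_view(eye_point, left_edge, right_edge):
    # Angle (degrees) subtended at the eye point EP by the left and right
    # outer edges of the imaging plane IS, i.e. the horizontal angle of view.
    # Each argument is an (x, y, z) tuple in a common vehicle-fixed frame.
    def direction(frm, to):
        return tuple(t - f for f, t in zip(frm, to))

    def angle_between(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    return angle_between(direction(eye_point, left_edge),
                         direction(eye_point, right_edge))


# Example with made-up geometry: an imaging plane 2.5 m ahead of the eye point
# and 0.5 m wide yields a horizontal angle of view of roughly 11 degrees.
print(horizontal_angle_of_view((0.0, 0.0, 1.2), (2.5, -0.25, 1.0), (2.5, 0.25, 1.0)))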
 The HUD 20 displays superimposed content CTs (see FIGS. 5 and 6) and non-superimposed content as the virtual image Vi. The superimposed content CTs is an AR display object used for augmented reality ("AR") display. The display position of the superimposed content CTs is associated with a specific superimposition target present in the foreground, such as a specific position on the road surface, a preceding vehicle, a pedestrian, or a road sign. The superimposed content CTs is superimposed on a specific superimposition target in the foreground and moves in the driver's view so as to follow the superimposition target, appearing fixed relative to it. That is, the relative positional relationship among the driver's eye point EP, the superimposition target in the foreground, and the superimposed content CTs is continuously maintained. Therefore, the shape of the superimposed content CTs may be updated continuously at a predetermined cycle in accordance with the relative position and shape of the superimposition target. The superimposed content CTs is displayed in a posture closer to horizontal than the non-superimposed content, for example with a display shape extended in the depth direction (traveling direction) as seen from the driver.
 非重畳コンテンツは、前景に重畳表示される表示物のうちで、重畳コンテンツCTsを除いた非AR表示物である。非重畳コンテンツは、重畳コンテンツCTsとは異なり、重畳対象を特定されないで、前景に重畳表示される。非重畳コンテンツは、投影範囲PA内の決まった位置に表示されることで、ウィンドシールドWS等の車両構成に相対固定されているように表示される。 The non-superimposed content is a non-AR display object excluding the superimposed content CTs among the display objects superimposed and displayed in the foreground. Unlike the superimposed content CTs, the non-superimposed content is displayed superimposed on the foreground without specifying the superimposed target. The non-superimposed content is displayed at a fixed position in the projection range PA, so that it is displayed as if it is relatively fixed to the vehicle configuration such as the windshield WS.
 メータディスプレイ23は、複数の車載表示器の1つであり、所謂コンビネーションメータのディスプレイである。メータディスプレイ23は、例えば液晶ディスプレイおよび有機ELディスプレイ等の画像表示器である。メータディスプレイ23は、インスツルメントパネル9における運転席の正面に設置され、運転席のヘッドレスト部分にその表示画面を向けている。メータディスプレイ23は、HCU100と電気的に接続されており、HCU100によって生成された映像データを逐次取得する。メータディスプレイ23は、取得した映像データに応じたコンテンツを表示画面に表示する。例えば、メータディスプレイ23は、LTA機能のステータス情報を示すステータス画像CTst(後述)を、表示画面に表示する。 The meter display 23 is one of a plurality of in-vehicle displays, and is a so-called combination meter display. The meter display 23 is an image display such as a liquid crystal display and an organic EL display. The meter display 23 is installed in front of the driver's seat on the instrument panel 9, and the display screen is directed to the headrest portion of the driver's seat. The meter display 23 is electrically connected to the HCU 100, and sequentially acquires video data generated by the HCU 100. The meter display 23 displays the content corresponding to the acquired video data on the display screen. For example, the meter display 23 displays a status image CTst (described later) showing the status information of the LTA function on the display screen.
 HCU100は、HMIシステム10において、HUD20を含む複数の車載表示デバイスによる表示を統合的に制御する電子制御装置である。HCU100は、処理部11、RAM12、記憶部13、入出力インターフェース14、およびこれらを接続するバス等を備えたコンピュータを主体として含む構成である。処理部11は、RAM12と結合された演算処理のためのハードウェアである。処理部11は、CPU(Central Processing Unit)等の演算コアを少なくとも1つ含む構成である。RAM12は、映像生成のためのビデオRAMを含む構成であってよい。処理部11は、RAM12へのアクセスにより、後述する各機能部の機能を実現するための種々の処理を実行する。記憶部13は、不揮発性の記憶媒体を含む構成である。記憶部13には、処理部11によって実行される種々のプログラム(表示制御プログラム等)が格納されている。 The HCU 100 is an electronic control device that integrally controls the display by a plurality of in-vehicle display devices including the HUD 20 in the HMI system 10. The HCU 100 mainly includes a computer including a processing unit 11, a RAM 12, a storage unit 13, an input / output interface 14, and a bus connecting them. The processing unit 11 is hardware for arithmetic processing combined with the RAM 12. The processing unit 11 has a configuration including at least one arithmetic core such as a CPU (Central Processing Unit). The RAM 12 may be configured to include a video RAM for video generation. The processing unit 11 executes various processes for realizing the functions of each functional unit, which will be described later, by accessing the RAM 12. The storage unit 13 is configured to include a non-volatile storage medium. Various programs (display control programs, etc.) executed by the processing unit 11 are stored in the storage unit 13.
 By executing the display control program stored in the storage unit 13 with the processing unit 11, the HCU 100 shown in FIGS. 1 to 3 has a plurality of functional units for functioning as a control unit that controls content display by the HUD 20. Specifically, functional units such as a driver information acquisition unit 101, a locator information acquisition unit 102, an external world information acquisition unit 103, a control information acquisition unit 104, a scene determination unit 105, and a display generation unit 109 are constructed in the HCU 100.
 ドライバ情報取得部101は、DSM27から取得する状態情報に基づき、運転席に着座しているドライバのアイポイントEPの位置および視線方向を特定し、ドライバ情報として取得する。ドライバ情報取得部101は、アイポイントEPの位置を示す三次元の座標(以下、「アイポイント座標」)を生成し、生成したアイポイント座標を、表示生成部109に逐次提供する。 The driver information acquisition unit 101 identifies the position and line-of-sight direction of the eye point EP of the driver seated in the driver's seat based on the state information acquired from the DSM 27, and acquires it as driver information. The driver information acquisition unit 101 generates three-dimensional coordinates (hereinafter, “eye point coordinates”) indicating the position of the eye point EP, and sequentially provides the generated eye point coordinates to the display generation unit 109.
 ロケータ情報取得部102は、車両Aについての最新の位置情報および方角情報を、自車位置情報としてロケータECU44から取得する。加えて、ロケータ情報取得部102は、自車位置周辺の高精度地図データを、ロケータECU44から取得する。ロケータ情報取得部102は、取得した自車位置情報および高精度地図データを、シーン判定部105および表示生成部109に逐次提供する。 The locator information acquisition unit 102 acquires the latest position information and direction information about the vehicle A from the locator ECU 44 as own vehicle position information. In addition, the locator information acquisition unit 102 acquires high-precision map data around the position of the own vehicle from the locator ECU 44. The locator information acquisition unit 102 sequentially provides the acquired vehicle position information and high-precision map data to the scene determination unit 105 and the display generation unit 109.
 The external world information acquisition unit 103 acquires, from the driving support ECU 50 or the automatic driving ECU 60, detection information that has been analyzed for the peripheral range of the vehicle A. For example, the external world information acquisition unit 103 acquires, as detection information, boundary information indicating the relative positions of the left and right lane markings LL, LR of the own lane Lns or of the road edges. In addition, the external world information acquisition unit 103 acquires weather information for the traveling area as detection information. The external world information acquisition unit 103 sequentially provides the acquired detection information to the scene determination unit 105 and the display generation unit 109. Note that the external world information acquisition unit 103 may acquire the imaging data of the front camera 31 as the detection information, instead of the detection information that is the analysis result acquired from the driving support ECU 50 or the automatic driving ECU 60.
 制御情報取得部104は、各車線維持制御部51,61から車線維持制御情報を取得する。車線維持制御情報には、LTA機能のステータス情報、およびライン形状情報等が含まれている。制御情報取得部104は、取得した車線維持制御情報を表示生成部109へと逐次提供する。 The control information acquisition unit 104 acquires lane maintenance control information from the lane maintenance control units 51 and 61. The lane keeping control information includes status information of the LTA function, line shape information, and the like. The control information acquisition unit 104 sequentially provides the acquired lane keeping control information to the display generation unit 109.
 The scene determination unit 105 determines whether the current driving scene is a specific scene based on the information acquired from the locator information acquisition unit 102 and the external world information acquisition unit 103. A specific scene is a scene in which the driver's confidence in the lane keeping control decreases.
 More specifically, a specific scene is a scene that can make the driver anxious that the vehicle A will stray from the own lane Lns. The specific scenes include scenes in which traveling along the own lane Lns is relatively difficult. For example, a curve driving scene in which the vehicle travels on a curved road is a scene that can make the driver anxious that the vehicle will fail to make the turn and depart from the curved road, and is therefore included in the specific scenes. In addition, the specific scenes include scenes that can give rise to the suspicion that the lane keeping control units 51 and 61 are not correctly recognizing the own lane Lns. For example, poor-visibility scenes such as bad weather (rain, fog, snow, and the like) and nighttime can give rise to this suspicion because the lane markings LL, LR serving as the boundaries of the own lane Lns become difficult to see, and such scenes are therefore included in the specific scenes. Furthermore, the specific scenes include scenes in which the concern about the vehicle A straying from the own lane Lns can be relatively large. For example, a cliff driving scene in which the vehicle travels on a road along a cliff is included in the specific scenes.
 シーン判定部105は、カーブ走行シーンであるか否かを、高精度地図データに基づいて判定する。具体的には、シーン判定部105は、道路の曲率等に基づき車両Aがカーブ路を走行していると判断できる場合に、カーブ走行シーンであると判定する。シーン判定部105は、視界不良シーンであるか否かを、検出情報に基づき判定する。具体的には、シーン判定部105は、フロントカメラ31の撮像画像の解析結果により、悪天候であると判別された場合に、視界不良シーンであると判定する。また、シーン判定部105は、HCU100等の時計機能に基づき、現在時刻が夜間である場合に、視界不良シーンであると判定する。加えて、シーン判定部105は、現在時刻および車両Aの進行方向等に基づき、逆光となると判別される場合に、視界不良シーンであると判定してもよい。また、シーン判定部105は、崖走行シーンであるか否かを、高精度地図データに基づき判定する。具体的には、シーン判定部105は、走行路の路肩の地形が崖に分類される場合に、崖走行シーンであると判定する。 The scene determination unit 105 determines whether or not the scene is a curve driving scene based on high-precision map data. Specifically, the scene determination unit 105 determines that the scene is a curve traveling scene when it can be determined that the vehicle A is traveling on a curved road based on the curvature of the road or the like. The scene determination unit 105 determines whether or not the scene has poor visibility based on the detection information. Specifically, the scene determination unit 105 determines that the scene has poor visibility when it is determined that the weather is bad based on the analysis result of the captured image of the front camera 31. Further, the scene determination unit 105 determines that the scene has poor visibility when the current time is nighttime, based on the clock function of the HCU 100 or the like. In addition, the scene determination unit 105 may determine that the scene has poor visibility when it is determined that there is backlight based on the current time, the traveling direction of the vehicle A, and the like. Further, the scene determination unit 105 determines whether or not the scene is a cliff running scene based on the high-precision map data. Specifically, the scene determination unit 105 determines that the scene is a cliff travel scene when the terrain on the shoulder of the travel path is classified as a cliff.
 シーン判定部105は、上述した複数の特定シーンのうちいずれか1つに現在の走行シーンが該当するか否かを判定する。または、シーン判定部105は、いずれか1つの特定シーンのみについて判定してもよい。シーン判定部105は、判定結果を表示生成部109に逐次提供する。 The scene determination unit 105 determines whether or not the current driving scene corresponds to any one of the plurality of specific scenes described above. Alternatively, the scene determination unit 105 may determine only one specific scene. The scene determination unit 105 sequentially provides the determination result to the display generation unit 109.
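 A simplified sketch of the scene determination described above is shown below in Python; the field names, the curvature threshold, and the way each condition is provided are illustrative assumptions, not values defined in the specification.

from dataclasses import dataclass


@dataclass
class DrivingContext:
    road_curvature: float    # 1/m, from the high-precision map data
    shoulder_is_cliff: bool  # terrain class of the road shoulder (map data)
    bad_weather: bool        # rain / fog / snow judged from camera image analysis
    is_nighttime: bool       # from the clock function
    facing_low_sun: bool     # backlight judged from current time and heading


# Hypothetical threshold: treat the road as a curve at a radius of 500 m or tighter.
CURVE_CURVATURE_THRESHOLD = 1.0 / 500.0


def is_specific_scene(ctx: DrivingContext) -> bool:
    # True when the current scene matches any of the specific scenes:
    # curve driving, poor visibility, or cliff driving.
    curve_scene = abs(ctx.road_curvature) >= CURVE_CURVATURE_THRESHOLD
    poor_visibility_scene = ctx.bad_weather or ctx.is_nighttime or ctx.facing_low_sun
    cliff_scene = ctx.shoulder_is_cliff
    return curve_scene or poor_visibility_scene or cliff_scene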
 The display generation unit 109 has a virtual layout function that simulates the display layout of the superimposed content CTs (see FIGS. 4 and 5) based on the various acquired information, and a content selection function that selects the content to be used for information presentation. In addition, the display generation unit 109 has a generation function that generates the video data to be sequentially output to the HUD 20 based on the information provided by the virtual layout function and the content selection function. The display generation unit 109 is an example of a display control unit.
 表示生成部109は、仮想レイアウト機能の実行により、自車位置情報、高精度地図データおよび検出情報等に基づいて車両Aの現在の走行環境を仮想空間中に再現する。詳記すると、図5に示すように、表示生成部109は、仮想の三次元空間の基準位置に自車オブジェクトAOを設定する。表示生成部109は、地図データの示す形状の道路モデルを、自車位置情報に基づき、自車オブジェクトAOに関連付けて、三次元空間にマッピングする。表示生成部109は、境界情報に基づいて、左側区画線LLおよび右側区画線LRにそれぞれ対応する仮想左側区画線VLLおよび仮想右側区画線VLRを、仮想路面上に設定する。表示生成部109は、車線維持制御部51,61にて生成された走行予定ラインを、予想軌跡PTとして仮想路面上に設定する。 The display generation unit 109 reproduces the current driving environment of the vehicle A in the virtual space based on the own vehicle position information, high-precision map data, detection information, etc. by executing the virtual layout function. More specifically, as shown in FIG. 5, the display generation unit 109 sets the own vehicle object AO at a reference position in the virtual three-dimensional space. The display generation unit 109 maps the road model of the shape indicated by the map data in the three-dimensional space in association with the own vehicle object AO based on the own vehicle position information. The display generation unit 109 sets the virtual left side marking line VLL and the virtual right side marking line VLR corresponding to the left side marking line LL and the right side marking line LR, respectively, on the virtual road surface based on the boundary information. The display generation unit 109 sets the planned travel line generated by the lane keeping control units 51 and 61 on the virtual road surface as the predicted locus PT.
 The display generation unit 109 sets the virtual camera position CP and the superimposition range SA in association with the own vehicle object AO. The virtual camera position CP is a virtual position corresponding to the driver's eye point EP. The display generation unit 109 sequentially corrects the virtual camera position CP relative to the own vehicle object AO based on the latest eye point coordinates acquired by the driver information acquisition unit 101. The superimposition range SA is the range in which superimposed display of the virtual image Vi is possible. Based on the virtual camera position CP and the outer edge position (coordinate) information of the projection range PA stored in advance in the storage unit 13 (see FIG. 1) or the like, the display generation unit 109 sets, as the superimposition range SA, the forward range that falls inside the imaging plane IS when looking forward from the virtual camera position CP. The superimposition range SA corresponds to the angle of view VA of the HUD 20.
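 As a rough illustration, the superimposition range SA can be approximated by an angular field-of-view test in front of the virtual camera position CP; in the sketch below, the coordinate convention and the default field-of-view values are illustrative assumptions, whereas the actual SA is derived from the stored outer edge coordinates of the projection range PA.

import math


def within_superimposition_range(camera_pos, point,
                                 horizontal_fov_deg=12.0, vertical_fov_deg=4.0):
    # Rough test that a point on the road model lies inside the superimposition
    # range SA, modelled here simply as an angular field of view in front of
    # the virtual camera position CP.  Coordinates are (x, y, z) with x forward,
    # y to the left and z up; the default values only reflect that the
    # horizontal angle of view is larger than the vertical one.
    dx = point[0] - camera_pos[0]
    dy = point[1] - camera_pos[1]
    dz = point[2] - camera_pos[2]
    if dx <= 0.0:
        return False  # the point is not in front of the camera
    yaw = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, dx))
    return abs(yaw) <= horizontal_fov_deg / 2.0 and abs(pitch) <= vertical_fov_deg / 2.0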
 表示生成部109は、仮想空間中に仮想オブジェクトVOを配置する。仮想オブジェクトVOは、三次元空間の道路モデルの路面上において、予想軌跡PTに沿うように配置される。仮想オブジェクトVOは、後述する開始コンテンツCTiおよび予想軌跡コンテンツCTpを虚像表示させる場合に、仮想空間中に設定される。仮想オブジェクトVOは、開始コンテンツCTiおよび予想軌跡コンテンツCTpの位置と形状を規定する。すなわち、仮想カメラ位置CPから見た仮想オブジェクトVOの形状が、アイポイントEPから視認される開始コンテンツCTiおよび予想軌跡コンテンツCTpの虚像形状となる。 The display generation unit 109 arranges the virtual object VO in the virtual space. The virtual object VO is arranged along the expected locus PT on the road surface of the road model in the three-dimensional space. The virtual object VO is set in the virtual space when the start content CTi and the expected locus content CTp, which will be described later, are displayed as virtual images. The virtual object VO defines the position and shape of the start content CTi and the expected locus content CTp. That is, the shape of the virtual object VO as seen from the virtual camera position CP becomes the virtual image shape of the start content CTi and the expected locus content CTp that are visually recognized from the eye point EP.
 Specifically, the virtual object VO includes a left virtual object VOl and a right virtual object VOr. The left virtual object VOl is arranged inside the virtual left lane marking VLL, along the virtual left lane marking VLL. The right virtual object VOr, conversely, is arranged inside the virtual right lane marking VLR, along the virtual right lane marking VLR. The left virtual object VOl and the right virtual object VOr are, for example, thin strip-shaped objects extending flat along the virtual lane markings VLL and VLR, respectively.
 When the start content CTi is displayed, the virtual objects VOl and VOr are each arranged in a stationary state at a predetermined position inside the corresponding virtual lane markings VLL and VLR. On the other hand, when the predicted locus content CTp is displayed, the virtual objects VOl and VOr are set as objects that repeatedly move from an initial position toward the center of the own lane Lns, the initial position being the arrangement position used when displaying the start content CTi.
 By executing the content selection function, the display generation unit 109 selectively uses multiple types of superimposed content CTs and non-superimposed content according to the scene to present information to the driver. For example, the display generation unit 109 displays the predicted locus content CTp when the control information acquisition unit 104 has acquired execution information of the LTA function and the scene determination unit 105 has determined that the current scene is a specific scene. On the other hand, even when the execution information of the LTA function has been acquired, the display generation unit 109 keeps the predicted locus content CTp hidden when it is determined that the scene is not a specific scene.
 表示生成部109は、映像データの生成機能により、LTA機能に関連するコンテンツを表示するLTA表示を実行可能である。LTA表示の詳細について、図5を参照しつつ以下説明する。なお、図5は、カーブ走行シーンにおけるLTA表示の例を示している。 The display generation unit 109 can execute the LTA display for displaying the contents related to the LTA function by the video data generation function. The details of the LTA display will be described below with reference to FIG. Note that FIG. 5 shows an example of LTA display in a curve driving scene.
 表示生成部109は、シーン判定部105にて特定シーンではないと判定されている場合、LTA表示を実行しない(図5のA参照)。表示生成部109は、特定シーンであると判定された場合には、LTA機能による車両Aの予想軌跡PTを提示する。具体的には、表示生成部109は、まず開始コンテンツCTiを表示させる(図5のB参照)。 The display generation unit 109 does not execute the LTA display when the scene determination unit 105 determines that the scene is not a specific scene (see A in FIG. 5). When it is determined that the scene is a specific scene, the display generation unit 109 presents the predicted trajectory PT of the vehicle A by the LTA function. Specifically, the display generation unit 109 first displays the start content CTi (see B in FIG. 5).
 開始コンテンツCTiは、後述の予想軌跡コンテンツCTpの表示開始を示すコンテンツである。開始コンテンツCTiは、例えば静止した態様の予想軌跡コンテンツCTpとされる。詳記すると、開始コンテンツCTiは、走行路の路面を重畳対象とする重畳コンテンツCTsとされる。開始コンテンツCTiは、予想軌跡PTに沿った形状に描画される。 The start content CTi is a content indicating the start of display of the expected trajectory content CTp described later. The start content CTi is, for example, the expected locus content CTp in a stationary mode. More specifically, the start content CTi is a superposed content CTs whose superimposing target is the road surface of the traveling road. The start content CTi is drawn in a shape along the expected locus PT.
 The start content CTi includes left start content CTil and right start content CTir. The left start content CTil and the right start content CTir are a pair of contents corresponding to the lane markings LL and LR, which form the pair of boundaries of the own lane Lns. Each of the start contents CTil and CTir is, for example, a thin strip of road paint extending continuously in the traveling direction of the vehicle A. The left start content CTil has a shape along the left lane marking LL, with its superimposition position inside the left lane marking LL. The right start content CTir has a shape along the right lane marking LR, with its superimposition position inside the right lane marking LR. The start contents CTil and CTir are continuously displayed stationary at the superimposition positions described above.
 The start content CTi is displayed for a predetermined period from the start of the specific scene. For example, when the specific scene is a curve driving scene, the start content CTi is displayed continuously for the predetermined period after the superimposition range SA reaches the start position of the curved road.
 表示生成部109は、開始コンテンツCTiの表示期間が終了すると、予想軌跡コンテンツCTpの表示を開始する。予想軌跡コンテンツCTpは、LTA機能による車両Aの予想軌跡PTを示すコンテンツである。予想軌跡コンテンツCTpは、左側境界ラインCTblと、右側境界ラインCTbrとを含んで構成されている。左側境界ラインCTblおよび右側境界ラインCTbrは、それぞれ左側開始コンテンツCTilおよび右側開始コンテンツCTirと同じ表示形状とされる。 When the display period of the start content CTi ends, the display generation unit 109 starts displaying the expected locus content CTp. The predicted locus content CTp is a content indicating the predicted locus PT of the vehicle A by the LTA function. The predicted locus content CTp includes a left boundary line CTbl and a right boundary line CTbr. The left boundary line CTbl and the right boundary line CTbr have the same display shape as the left start content CTil and the right start content CTir, respectively.
 The left boundary line CTbl and the right boundary line CTbr are displayed in a mode in which they move by animation. Each of the boundary lines CTbl and CTbr is drawn so as to move from the outer sides toward the center in the lane width direction. Here, the outer sides are the sides on which the lane markings LL, LR are located relative to the central portion of the own lane Lns, and the center side is the side on which the central portion of the own lane Lns is located relative to the lane markings LL, LR. That is, the left boundary line CTbl moves in the direction from the left lane marking LL toward the central portion of the own lane Lns, and the right boundary line CTbr moves in the direction from the right lane marking LR toward the central portion of the own lane Lns.
 左側境界ラインCTblおよび右側境界ラインCTbrは、初期位置から自車車線Lnsの中央側へと、車線幅方向に連続的に移動する態様とされる。換言すると、各境界ラインCTbl,CTbr間の幅が狭まるように、各境界ラインCTbl,CTbrが連続的に移動する。初期位置は開始コンテンツCTiの重畳位置であり、開始コンテンツCTiがその重畳位置から動き出したように描画される。各境界ラインCTbl,CTbrは、それぞれの初期位置から同程度の移動量だけ移動してそれぞれの移動端位置へと到達する。各境界ラインCTbl,CTbrは、初期位置から実質同じタイミングで移動を開始する。各境界ラインCTbl,CTbrは、初期位置から移動端位置まで、実質同じ期間で移動を完了する。各境界ラインCTbl,CTbrは、移動コンテンツの一例である。 The left boundary line CTbl and the right boundary line CTbr are in a mode of continuously moving in the lane width direction from the initial position to the center side of the own lane Lns. In other words, the boundary lines CTbl and CTbr move continuously so that the width between the boundary lines CTbl and CTbr is narrowed. The initial position is the superposed position of the start content CTi, and the start content CTi is drawn as if it started moving from the superposed position. The boundary lines CTbl and CTbr move from their respective initial positions by the same amount of movement to reach their respective moving end positions. The boundary lines CTbl and CTbr start moving at substantially the same timing from the initial position. Each boundary line CTbl, CTbr completes the movement from the initial position to the moving end position in substantially the same period. Each boundary line CTbl, CTbr is an example of moving content.
 各境界ラインCTbl,CTbrは、上述の移動を繰り返すように表示される。具体的には、各境界ラインCTbl,CTbrは、初期位置から所定の移動量だけ移動した時点で消失し、初期位置に再出現する。再出現した各境界ラインCTbl,CTbrは、再度上述した移動を実行する。各境界ラインCTbl,CTbrは、特定シーンが終了するまで移動を連続的に繰り返す。特定シーンが終了すると、各境界ラインCTbl,CTbrは、非表示とされる。 Each boundary line CTbl, CTbr is displayed so as to repeat the above-mentioned movement. Specifically, each boundary line CTbl, CTbr disappears when it moves by a predetermined amount of movement from the initial position, and reappears at the initial position. The reappearing boundary lines CTbl and CTbr perform the above-mentioned movement again. Each boundary line CTbl, CTbr continuously repeats movement until the end of a specific scene. When the specific scene ends, the boundary lines CTbl and CTbr are hidden.
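 The repeating movement of the boundary lines CTbl, CTbr can be sketched as a periodic offset from the initial position; the period and travel distance in the following Python fragment are illustrative assumptions.

def boundary_line_offset(t, period=1.0, travel=0.6):
    # Lateral distance (m) that one boundary line has moved toward the lane
    # center at time t (s).  The line moves continuously from its initial
    # position, disappears after moving by `travel`, reappears at the initial
    # position and repeats while the specific scene lasts.
    phase = (t % period) / period  # 0.0 .. 1.0 within one cycle
    return travel * phase


def left_and_right_offsets(t):
    # Both lines start, move and finish together; the left line moves to the
    # right (+) and the right line moves to the left (-) by the same amount.
    d = boundary_line_offset(t)
    return +d, -d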
 加えて、表示生成部109は、メータディスプレイ23内の所定の表示領域に、LTA機能の実行を示す実行コンテンツとしてステータス画像CTstを表示させる(図6参照)。ステータス画像CTstは、例えば、自車車線Lnsの区画線LL,LRを模した形状とされる、具体的には、ステータス画像CTstは、一対の細い帯形状として表示される。ステータス画像CTstは、予め規定された表示位置に固定して表示される。例えば、ステータス画像CTstは、車両Aを模った車両アイコンICvの両脇に表示される。 In addition, the display generation unit 109 displays the status image CTst as execution content indicating the execution of the LTA function in a predetermined display area in the meter display 23 (see FIG. 6). The status image CTst has, for example, a shape imitating the lane markings LL and LR of the own lane Lns. Specifically, the status image CTst is displayed as a pair of thin strips. The status image CTst is fixedly displayed at a predetermined display position. For example, the status image CTst is displayed on both sides of the vehicle icon ICv that imitates the vehicle A.
 The status image CTst is hidden when the LTA function is off (see A in FIG. 6). The status image CTst is displayed when the LTA function is on. The status image CTst has different display modes depending on whether the scene is determined to be a specific scene or not. Specifically, when it is determined that the scene is not a specific scene, the status image CTst is displayed continuously lit (see B in FIG. 6). On the other hand, when it is determined that the scene is a specific scene, the status image CTst is displayed blinking (see C in FIG. 6). Owing to the blinking, the display mode of the status image CTst in the specific scene differs from that of the predicted locus content CTp in the specific scene, which is a moving display mode. In the blinking display, the brightness of the status image CTst may be changed either discretely or continuously between the lit state and the unlit state.
 次に、表示制御プログラムに基づきHCU100が実行する各コンテンツの表示制御方法の詳細を、図7に示すフローチャートに基づき、図3~図6を参照しつつ、以下説明する。図7に示す処理は、例えば車両電源のオン状態への切り替えにより、起動処理等を終えたHCU100により開始される。後述するフローにおいて「S」とは、表示制御プログラムに含まれた複数命令によって実行される、フローの複数ステップを意味する。 Next, the details of the display control method of each content executed by the HCU 100 based on the display control program will be described below with reference to FIGS. 3 to 6 based on the flowchart shown in FIG. The process shown in FIG. 7 is started by the HCU 100 that has completed the start-up process or the like, for example, by switching the vehicle power supply to the on state. In the flow described later, “S” means a plurality of steps of the flow executed by a plurality of instructions included in the display control program.
 まずS10で、表示生成部109が、制御情報取得部104にて取得される制御情報に基づき、LTA機能がオンであるか否かを判定する。LTA機能がオフであると判定した場合には、オンとなるまで待機する。LTA機能がオンであると判定すると、S20へと進み、シーン判定部105にて現在の走行シーンが特定シーンであるか否かを判定する。 First, in S10, the display generation unit 109 determines whether or not the LTA function is on based on the control information acquired by the control information acquisition unit 104. If it is determined that the LTA function is off, it waits until it is turned on. If it is determined that the LTA function is on, the process proceeds to S20, and the scene determination unit 105 determines whether or not the current driving scene is a specific scene.
 特定シーンであると判定されると、S30へと進み、表示生成部109にて開始コンテンツCTiを表示させる。その後S40へと進み、予想軌跡コンテンツCTpを表示させ、S50へと進む。S50では、シーン判定部105にて特定シーンが終了したか否かを判定する。特定シーンが終了していない場合には、S40に戻り、予想軌跡コンテンツCTpの表示を継続する。一方で、特定シーンが終了したと判定すると、S60に進み、予想軌跡コンテンツCTpの表示を終了してS70へと進む。 When it is determined that the scene is a specific scene, the process proceeds to S30, and the display generation unit 109 displays the start content CTi. After that, the process proceeds to S40, the expected locus content CTp is displayed, and the process proceeds to S50. In S50, the scene determination unit 105 determines whether or not the specific scene has ended. If the specific scene is not finished, the process returns to S40 and the display of the expected locus content CTp is continued. On the other hand, if it is determined that the specific scene has ended, the process proceeds to S60, the display of the expected locus content CTp ends, and the process proceeds to S70.
 一方で、S20にて特定シーンではないと判定した場合には、上述のコンテンツを表示することなくS70へと進む。S70では、制御情報に基づきLTA機能がオフとなったか否かを判定する。LTA機能がオフではないと判定すると、S20へと戻る。一方で、LTA機能がオフであると判定されると、一連の処理を終了する。 On the other hand, if it is determined in S20 that it is not a specific scene, the process proceeds to S70 without displaying the above-mentioned content. In S70, it is determined whether or not the LTA function is turned off based on the control information. If it is determined that the LTA function is not off, the process returns to S20. On the other hand, when it is determined that the LTA function is off, a series of processes is terminated.
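 The flow of S10 to S70 can be summarized in Python as follows; the hcu object and its method names are hypothetical stand-ins for the functional units of the HCU 100, and the cycle time is an illustrative assumption.

import time


def lta_display_loop(hcu, cycle_s=0.1):
    # S10: wait until the LTA function is turned on.
    while not hcu.lta_is_on():
        time.sleep(cycle_s)

    while True:
        # S20: scene determination.
        if hcu.is_specific_scene():
            hcu.show_start_content()            # S30
            while True:
                hcu.show_predicted_locus()      # S40
                if hcu.specific_scene_ended():  # S50
                    break
                time.sleep(cycle_s)
            hcu.hide_predicted_locus()          # S60
        # S70: finish once the LTA function has been turned off.
        if not hcu.lta_is_on():
            break
        time.sleep(cycle_s)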
 次に第1実施形態のHCU100がもたらす作用効果について説明する。 Next, the action and effect brought about by the HCU100 of the first embodiment will be described.
 In the first embodiment, the predicted locus content CTp is superimposed on the road surface when the scene is a specific scene in which the driver's confidence in the lane keeping control decreases, and is hidden when the scene is not a specific scene. As a result, the predicted locus content CTp is displayed in specific scenes. Therefore, a driver who views the predicted locus content CTp can more easily form the image that in-lane traveling will be maintained even in the specific scene. In this way, the driver's anxiety can be reduced.
 In addition, the start content CTi, which indicates the start of display of the predicted locus content CTp, is displayed before the predicted locus content CTp. The start of display of the predicted locus content CTp is thus presented to the driver by the start content CTi, so the driver can more easily understand that the predicted locus content CTp is about to be displayed. This makes the display easier for the driver to understand.
 さらに、予想軌跡コンテンツCTpとして、車線幅方向における両外側から中央側へと移動する一対の境界ラインCTbl,CTbrが表示される。これによれば、予想軌跡コンテンツCTpは、自車車線Lnsの両外側から中央側へと移動する。故に、表示生成部109は、自車車線Lnsの中央を車両Aが走行するイメージを、ドライバに想起させ得る。したがって、表示生成部109は、LTA機能による車線内走行に対するドライバの不安を、より軽減することができる。 Further, as the expected locus content CTp, a pair of boundary lines CTbl and CTbr moving from both outer sides to the center side in the lane width direction are displayed. According to this, the predicted locus content CTp moves from both outer sides of the own lane Lns to the central side. Therefore, the display generation unit 109 can remind the driver of the image of the vehicle A traveling in the center of the own lane Lns. Therefore, the display generation unit 109 can further reduce the driver's anxiety about traveling in the lane due to the LTA function.
 また、各境界ラインCTbl,CTbrは繰り返し移動して表示されるので、上述のイメージをより強くドライバに想起させ得る。したがって、表示生成部109は、ドライバの不安を一層軽減可能である。 Further, since each boundary line CTbl and CTbr are repeatedly moved and displayed, the above image can be more strongly reminded to the driver. Therefore, the display generation unit 109 can further reduce the anxiety of the driver.
 (第2実施形態)
 第2実施形態では、第1実施形態におけるHCU100の変形例について説明する。図8において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Second Embodiment)
In the second embodiment, a modified example of the HCU 100 in the first embodiment will be described. In FIG. 8, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 The second embodiment differs from the first embodiment in the manner in which the predicted locus content CTp moves. In a curve driving scene, the display generation unit 109 of the second embodiment moves the boundary line superimposed on the outer side of the curve, of the boundary lines CTbl and CTbr, further toward the center than the boundary line superimposed on the inner side. Here, the outer side is the side of the pair of boundary lines CTbl, CTbr farther from the center of curvature of the curved road, and the inner side is the side closer to the center of curvature. Specifically, in a scene of traveling on a right-hand curve as in FIG. 8, the movement end position of the left boundary line CTbl is set closer to the center of the own lane Lns than the movement end position of the right boundary line CTbr. As a result, the left boundary line CTbl is displayed so as to move further toward the center than the right boundary line CTbr.
 According to the above, the boundary line superimposed on the outer side of the curve is displayed so as to move further toward the center than the boundary line superimposed on the inner side, so an image of departing from the curved road is less likely to be evoked. The HCU 100 can thus further reduce the driver's anxiety.
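 One possible way to realize the asymmetric movement of the second embodiment is sketched below in Python; the numeric movement amounts are illustrative assumptions.

def movement_end_offsets(curve_direction, base=0.6, extra=0.3):
    # Movement-end offsets (m toward the lane center) of the left and right
    # boundary lines in a curve driving scene.  The line on the outer side of
    # the curve is moved further toward the center than the inner one.
    # curve_direction is 'left', 'right' or 'straight'.
    if curve_direction == 'right':
        return base + extra, base  # left line is on the outer side
    if curve_direction == 'left':
        return base, base + extra  # right line is on the outer side
    return base, base              # straight road: symmetric movement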
 (第3実施形態)
 第3実施形態では、第1実施形態におけるHCU100の変形例について説明する。図9および図10において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Third Embodiment)
In the third embodiment, a modified example of the HCU 100 in the first embodiment will be described. The components having the same reference numerals as those in the drawings of the first embodiment in FIGS. 9 and 10 are the same components and have the same effects.
 第3実施形態において、表示生成部109は、LTA機能がオンである場合には、シーン判定部105における判定結果に関わらず予想軌跡コンテンツCTpを表示させる。そして、表示生成部109は、特定シーンではないと判定された場合と、特定シーンであると判定された場合とで、予想軌跡コンテンツCTpの表示態様を変更する。 In the third embodiment, when the LTA function is on, the display generation unit 109 displays the expected locus content CTp regardless of the determination result in the scene determination unit 105. Then, the display generation unit 109 changes the display mode of the predicted locus content CTp depending on whether it is determined that it is not a specific scene or that it is a specific scene.
 More specifically, when it is determined that the scene is not a specific scene, the display generation unit 109 displays, as the predicted locus content CTp, a pair of boundary lines CTbl and CTbr that emphasize the respective boundaries of the pair of boundaries of the own lane Lns, as shown in A of FIG. 9 (normal display). The pair of boundary lines in the third embodiment emphasize the left and right lane markings LL, LR of the own lane Lns, but road edges, arbitrarily set virtual boundary lines, or the like may instead be emphasized as the boundaries of the own lane Lns. The pair of boundary lines CTbl and CTbr, consisting of the left boundary line CTbl and the right boundary line CTbr, is an example of boundary content. In the third embodiment, the left boundary line CTbl and the right boundary line CTbr are displayed so as to remain at superimposition positions inside the respectively corresponding lane markings LL and LR. Each of the boundary lines CTbl and CTbr is displayed, for example, as a thin strip of road paint extending continuously along the lane markings LL, LR.
 そして、特定シーンであると判定された場合、表示生成部109は、特定シーンではないと判定された場合よりも車線幅方向の中央側の部分を強調する表示態様へと、境界ラインを変更する(特殊表示)。具体的には、表示生成部109は、各境界ラインCTbl,CTbrの重畳位置を、自車車線Lnsの中央側へと変更する(図9のB参照)。これにより、特定シーンにおける境界ラインCTbl,CTbrは、非特定シーンよりも自車車線Lnsの中央に寄った態様となる。非特定シーンにおける重畳位置からの移動幅の大きさは、各境界ラインCTbl,CTbrで同程度とされる。 Then, when it is determined that the scene is a specific scene, the display generation unit 109 changes the boundary line to a display mode in which the central portion in the lane width direction is emphasized as compared with the case where it is determined that the scene is not the specific scene. (Special display). Specifically, the display generation unit 109 changes the overlapping positions of the boundary lines CTbl and CTbr to the center side of the own lane Lns (see B in FIG. 9). As a result, the boundary lines CTbl and CTbr in the specific scene are closer to the center of the own lane Lns than in the non-specific scene. The magnitude of the movement width from the superposed position in the non-specific scene is set to be about the same for each boundary line CTbl and CTbr.
 また、表示生成部109は、アニメーション表示によって境界ラインCTbl,CTbrの重畳位置の変更を提示する。すなわち、特定シーンであると判定された場合、一対の境界ラインCTbl,CTbrは、特定シーンにおける重畳位置に向かって連続的に移動するように表示される(図9のA参照)。各境界ラインCTbl,CTbrの移動開始および移動終了のタイミングは、実質的に同じである。なお、特定シーンが終了した場合、各境界ラインCTbl,CTbrは、上述のアニメーション表示とは逆方向に移動するアニメーション表示により、非特定シーンにおける重畳位置へと戻る(図9のC参照)。 Further, the display generation unit 109 presents a change in the superposition position of the boundary lines CTbl and CTbr by animation display. That is, when it is determined that the scene is a specific scene, the pair of boundary lines CTbl and CTbr are displayed so as to continuously move toward the superposed position in the specific scene (see A in FIG. 9). The timings of the movement start and movement end of the boundary lines CTbl and CTbr are substantially the same. When the specific scene ends, the boundary lines CTbl and CTbr return to the superposed position in the non-specific scene by the animation display that moves in the direction opposite to the animation display described above (see C in FIG. 9).
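 The animated change of the superimposition positions in the third embodiment can be sketched as an interpolation between a normal inset and a larger, specific-scene inset measured inward from each lane marking; the inset values in the following Python fragment are illustrative assumptions.

def boundary_line_inset(transition_ratio, normal_inset=0.1, special_inset=0.5):
    # Lateral inset (m, measured inward from each lane marking) at which the
    # pair of boundary lines is superimposed.  transition_ratio runs from 0
    # (normal display) to 1 (special display) as the animated change of the
    # superimposition position progresses, and back again when the specific
    # scene ends; both lines use the same inset, so they start and finish
    # moving at substantially the same timing.
    r = max(0.0, min(1.0, transition_ratio))
    return normal_inset + (special_inset - normal_inset) * r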
 次に、表示制御プログラムに基づきHCU100が実行する各コンテンツの表示制御方法の詳細を、図10に示すフローチャートに基づき、図9を参照しつつ、以下説明する。図7のフローチャートと同じ符号を付したステップは、図7と同様の処理であるため、説明を適宜省略する。 Next, the details of the display control method of each content executed by the HCU 100 based on the display control program will be described below with reference to FIG. 9 based on the flowchart shown in FIG. Since the steps with the same reference numerals as those in the flowchart of FIG. 7 are the same processes as those of FIG. 7, the description thereof will be omitted as appropriate.
 S10にてLTA機能がオン状態であると判定されると、S15へと進む。S15では、予想軌跡コンテンツCTpを、通常表示の態様で表示させ、S20へと進む。S20にて、特定シーンであると判定されると、S45へと進み、予想軌跡コンテンツCTpを、特殊表示の態様で表示させる。 If it is determined in S10 that the LTA function is on, the process proceeds to S15. In S15, the expected locus content CTp is displayed in the normal display mode, and the process proceeds to S20. If it is determined in S20 that the scene is a specific scene, the process proceeds to S45, and the expected locus content CTp is displayed in a special display mode.
 特殊表示の実行後、S50にて特定シーンが終了したと判定されると、S65へと進み、特殊表示を終了し、通常表示へと表示態様を戻す。S70にてLTA機能がオフであると判定されると、S85にて通常表示を終了し、予想軌跡コンテンツCTpを非表示として、一連の処理を終了する。 After executing the special display, if it is determined in S50 that the specific scene has ended, the process proceeds to S65, the special display ends, and the display mode returns to the normal display. When it is determined in S70 that the LTA function is off, the normal display is terminated in S85, the expected locus content CTp is hidden, and a series of processes is terminated.
 As described above, in the third embodiment, the display mode of the predicted locus content CTp is changed depending on whether the scene is determined to be a specific scene or not. Therefore, a driver who views the predicted locus content CTp can infer that the vehicle has grasped the specific scene and is executing the lane keeping control accordingly. This makes it easier for the driver to form the image that in-lane traveling will be maintained even in the specific scene. In this way, the HCU 100 can reduce the driver's anxiety.
 また、特定シーンであると判定された場合に、予想軌跡コンテンツCTpが自車車線Lnsの中央側の部分を強調する表示態様に変更される。これによれば、特定シーンである場合には、自車車線Lnsの中央側が強調されるので、車両Aが自車車線Lnsの中央を走行するイメージをドライバが想起し得る。したがって、ドライバに、特定シーンにおいても車線内走行が維持されることを、より印象付けることができる。 Further, when it is determined that the scene is a specific scene, the expected trajectory content CTp is changed to a display mode that emphasizes the central part of the own lane Lns. According to this, in the case of a specific scene, the central side of the own lane Lns is emphasized, so that the driver can recall the image of the vehicle A traveling in the center of the own lane Lns. Therefore, it is possible to further impress the driver that the driving in the lane is maintained even in a specific scene.
 加えて、特定シーンであると判定された場合に、一対の境界ラインCTbl,CTbrが、特定シーンではないと判定された場合よりも中央側に表示される。故に、追加的にコンテンツを表示する場合よりも、画角VA内が煩雑になることを抑制できる。 In addition, when it is determined that the scene is a specific scene, the pair of boundary lines CTbl and CTbr are displayed on the center side of the case where it is determined that the scene is not a specific scene. Therefore, it is possible to suppress the complexity in the angle of view VA as compared with the case of additionally displaying the content.
 (第4実施形態)
 第4実施形態では、第3実施形態におけるHCU100の変形例について説明する。図11において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Fourth Embodiment)
In the fourth embodiment, a modified example of the HCU 100 in the third embodiment will be described. In FIG. 11, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 第4実施形態の表示生成部109は、カーブ走行シーンにおいて、各境界ラインCTbl,CTbrのうちカーブの外周側に重畳するコンテンツを、内周側に重畳するコンテンツよりも中央側に表示させる。例えば、図11に示すような右カーブの場合、左側境界ラインCTblが、右側境界ラインCTbrに比較して中央側に表示され、区画線からの離隔距離が大きくなっている。 In the curve traveling scene, the display generation unit 109 of the fourth embodiment displays the content superimposed on the outer peripheral side of the curve among the boundary lines CTbl and CTbr on the center side of the content superimposed on the inner peripheral side. For example, in the case of a right curve as shown in FIG. 11, the left boundary line CTbl is displayed on the center side as compared with the right boundary line CTbr, and the separation distance from the lane marking is large.
 以上によれば、カーブの外周側に重畳される軌跡コンテンツが、内周側に重畳される軌跡コンテンツよりも中央側に表示されるので、カーブ路からの逸脱のイメージがより惹起され難くなる。これにより、HCU100は、ドライバの不安を一層軽減可能である。 According to the above, since the locus content superimposed on the outer peripheral side of the curve is displayed on the center side of the locus content superimposed on the inner peripheral side, the image of deviation from the curve road is less likely to be evoked. As a result, the HCU 100 can further reduce the driver's anxiety.
 (第5実施形態)
 第5実施形態では、第3実施形態におけるHCU100の変形例について説明する。図12において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Fifth Embodiment)
In the fifth embodiment, a modified example of the HCU 100 in the third embodiment will be described. In FIG. 12, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 In the fifth embodiment, when it is determined that the scene is a specific scene, the display generation unit 109 displays additional content CTal, CTar in the portion closer to the center than the pair of boundary lines CTbl, CTbr. The additional content CTal, CTar is predicted locus content CTp displayed in addition to the pair of boundary lines CTbl, CTbr. The additional content CTal, CTar is, for example, a thin strip extending continuously along the predicted locus PT, like the pair of boundary lines CTbl, CTbr. The additional content CTal, CTar is displayed, for example, in a display color different from that of the pair of boundary lines CTbl, CTbr. The additional content includes left additional content CTal displayed relatively to the left and right additional content CTar displayed relatively to the right. By being displayed closer to the center than the pair of boundary lines CTbl, CTbr, the additional content CTal, CTar emphasizes the central portion of the own lane Lns to the driver.
 以上によれば、特定シーンにおいても、一対の境界ラインCTbl,CTbrの表示が維持された状態で、追加コンテンツCTal,CTarが追加で表示される。故に、特定シーンにおいてもLTA表示に関するコンテンツが継続的に表示されていることをドライバが理解し易い。これにより、より分かり易い表示が可能となる。 According to the above, even in a specific scene, the additional contents CTal and CTar are additionally displayed while the display of the pair of boundary lines CTbl and CTbr is maintained. Therefore, it is easy for the driver to understand that the content related to the LTA display is continuously displayed even in a specific scene. This enables a more understandable display.
 (第6実施形態)
 第6実施形態では、第3実施形態におけるHCU100の変形例について説明する。図13および図14において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Sixth Embodiment)
In the sixth embodiment, a modified example of the HCU 100 in the third embodiment will be described. The components having the same reference numerals as those in the drawings of the first embodiment in FIGS. 13 and 14 are the same components and have the same effects.
 第6実施形態において、特定シーンではないと判定された場合、表示生成部109は、中央ラインCTcを予想軌跡コンテンツCTpとして表示させる(図13参照)。中央ラインCTcは、自車車線Lnsの中央部に重畳されるコンテンツである。中央ラインCTcは、例えば予想軌跡PTに沿って延びる1本の細い帯状とされる。中央ラインCTcは、中央コンテンツの一例である。 In the sixth embodiment, when it is determined that the scene is not a specific scene, the display generation unit 109 displays the central line CTc as the expected locus content CTp (see FIG. 13). The central line CTc is content superimposed on the central portion of the own lane Lns. The central line CTc is formed into, for example, a thin band extending along the expected locus PT. The central line CTc is an example of central content.
 表示生成部109は、特定シーンであると判定された場合には、中央ラインCTcの表示態様を、より自車車線Lnsの境界を強調する表示態様へと変更する。具体的には、特定シーンであると判定されると、中央ラインCTcは、一対の境界ラインCTbl,CTbrに変更される(図14参照)。境界ラインは、中央ラインCTcよりも両外側を重畳位置とされる。 When it is determined that the scene is a specific scene, the display generation unit 109 changes the display mode of the central line CTc to a display mode that emphasizes the boundary of the own lane Lns. Specifically, when it is determined that the scene is a specific scene, the central line CTc is changed to a pair of boundary lines CTbl and CTbr (see FIG. 14). The boundary line has overlapping positions on both outer sides of the central line CTc.
 表示生成部109は、1本の中央ラインCTcが2本の境界ラインCTbl,CTbrへと枝分かれするアニメーションにより、中央ラインCTcを一対の境界ラインCTbl,CTbrへと連続的に変形させる(図14のA参照)。また、特定シーンが終了した場合、表示生成部109は、2本の境界ラインCTbl,CTbrが1本の中央ラインCTcへと結合するアニメーションにより、一対の境界ラインCTbl,CTbrを中央ラインCTcへと連続的に変形させる(図14のC参照)。 The display generation unit 109 continuously transforms the central line CTc into the pair of boundary lines CTbl and CTbr by an animation in which the single central line CTc branches into the two boundary lines CTbl and CTbr (see A in FIG. 14). When the specific scene ends, the display generation unit 109 continuously transforms the pair of boundary lines CTbl and CTbr back into the central line CTc by an animation in which the two boundary lines CTbl and CTbr merge into one central line CTc (see C in FIG. 14).
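One possible parameterization of this branching animation is sketched below in Python; the offset value, frame count, and function name are assumptions chosen only to illustrate the continuous deformation between CTc and CTbl/CTbr.

    # progress = 0.0 -> a single center band (CTc); progress = 1.0 -> two bands
    # at the boundary positions (CTbl, CTbr). Reversing progress plays the
    # merge animation used when the specific scene ends.
    def band_offsets(progress: float, boundary_offset_m: float = 1.4):
        p = max(0.0, min(1.0, progress))
        return (-boundary_offset_m * p, +boundary_offset_m * p)

    branch_frames = [band_offsets(i / 9) for i in range(10)]        # CTc -> CTbl, CTbr
    merge_frames = [band_offsets(1.0 - i / 9) for i in range(10)]   # CTbl, CTbr -> CTc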
 以上によれば、予想軌跡コンテンツCTpは、特定シーンでないと判定された場合には中央ラインCTcとして表示され、特定シーンであると判定された場合には、一対の区画線LL,LRを強調する表示態様へと変更される。故に、ドライバは、特定シーンにおいて区画線LL,LRの内側を維持して走行することを想起し得る。 According to the above, the predicted locus content CTp is displayed as the central line CTc when it is determined that the scene is not a specific scene, and is changed to a display mode that emphasizes the pair of lane markings LL and LR when it is determined that the scene is a specific scene. Therefore, the driver can recall, in a specific scene, traveling while staying inside the lane markings LL and LR.
 (第7実施形態)
 第7実施形態では、第6実施形態におけるHCU100の変形例について説明する。図15において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(7th Embodiment)
In the seventh embodiment, a modified example of the HCU 100 in the sixth embodiment will be described. In FIG. 15, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 表示生成部109は、特定シーンであると判定されると、中央ラインCTcを左右に分割するアニメーションにより、中央ラインCTcを一対の境界ラインCTbl,CTbrへと連続的に変形させる。具体的には、中央ラインCTcは、コンテンツ全体が左右半分に分割され、それぞれが横方向に平行移動するアニメーションにより、一対の境界ラインCTbl,CTbrへと変化する。また、特定シーンが終了すると、上述と逆方向に移動するアニメーションにより、一対の境界ラインCTbl,CTbrから中央ラインCTcへと戻る。 When the display generation unit 109 determines that the scene is a specific scene, the display generation unit 109 continuously transforms the center line CTc into a pair of boundary lines CTbl and CTbr by an animation that divides the center line CTc into left and right. Specifically, the central line CTc is changed into a pair of boundary lines CTbl and CTbr by an animation in which the entire content is divided into left and right halves and each of them moves in parallel in the horizontal direction. Further, when the specific scene ends, the pair of boundary lines CTbl and CTbr return to the center line CTc by the animation moving in the opposite direction to the above.
 (第8実施形態)
 第8実施形態では、第6実施形態におけるHCU100の変形例について説明する。図16において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(8th Embodiment)
In the eighth embodiment, a modification of the HCU 100 in the sixth embodiment will be described. In FIG. 16, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 表示生成部109は、特定シーンであると判定されると、中央ラインCTcに追加して一対の境界ラインCTbl,CTbrを表示させる。これにより、境界を強調するコンテンツが追加されるので、予想軌跡コンテンツCTpは、特定シーンでないと判定された場合よりも、全体としてより自車車線Lnsの境界を強調する表示態様となる。 When the display generation unit 109 determines that the scene is a specific scene, the display generation unit 109 additionally displays a pair of boundary lines CTbl and CTbr in addition to the center line CTc. As a result, the content that emphasizes the boundary is added, so that the predicted trajectory content CTp has a display mode that emphasizes the boundary of the own lane Lns as a whole, as compared with the case where it is determined that the scene is not a specific scene.
 以上によれば、特定シーンにおいても、中央ラインCTcの表示が維持された状態で、境界ラインCTbl,CTbrが追加で表示される。故に、特定シーンにおいてもLTA表示に関するコンテンツが継続的に表示されていることをドライバが理解し易い。これにより、より分かり易い表示が可能となる。 According to the above, even in a specific scene, the boundary lines CTbl and CTbr are additionally displayed while the display of the center line CTc is maintained. Therefore, it is easy for the driver to understand that the content related to the LTA display is continuously displayed even in a specific scene. This enables a more understandable display.
 (第9実施形態)
 第9実施形態では、第6実施形態におけるHCU100の変形例について説明する。図17において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(9th Embodiment)
In the ninth embodiment, a modified example of the HCU 100 in the sixth embodiment will be described. In FIG. 17, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 表示生成部109は、特定シーンであると判定されると、中央ラインCTcの幅を拡大する。中央ラインCTcの幅方向の端部が、拡幅によって自車車線Lnsの境界に接近することで、特定シーンでない場合よりも境界が強調される。 The display generation unit 109 expands the width of the center line CTc when it is determined that the scene is a specific scene. The widthwise end of the center line CTc approaches the boundary of the own lane Lns by widening, so that the boundary is emphasized as compared with the case where it is not a specific scene.
 (第10実施形態)
 第10実施形態では、第1実施形態におけるHCU100の変形例について説明する。図18において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(10th Embodiment)
In the tenth embodiment, a modified example of the HCU 100 in the first embodiment will be described. In FIG. 18, the components having the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 表示生成部109は、特定シーンがカーブ走行シーンである場合、壁コンテンツCTwを表示させる。壁コンテンツCTwは、カーブの外周側の区画線付近に重畳される重畳コンテンツCTsである。壁コンテンツCTwは、自車車線Lnsと車線外領域とを隔てるように立設された壁状を呈する。壁コンテンツCTwは、予想軌跡コンテンツCTpよりも外周側にて立設されるように表示される。例えば壁コンテンツCTwは、区画線から上方に立ち上がる壁状とされる。または、壁コンテンツCTwは、区画線より内側または外側の路面から立ち上がる壁状であってもよい。壁コンテンツCTwは、自車車線Lnsに沿って延びる形状とされる。壁コンテンツCTwは、カーブの開始地点から終了地点まで延びるように表示される。 The display generation unit 109 displays the wall content CTw when the specific scene is a curve running scene. The wall contents CTw are superimposed contents CTs superimposed near the lane markings on the outer peripheral side of the curve. The wall content CTw exhibits a wall shape erected so as to separate the own lane Lns from the out-lane area. The wall content CTw is displayed so as to be erected on the outer peripheral side of the expected locus content CTp. For example, the wall content CTw has a wall shape that rises upward from the lane marking. Alternatively, the wall content CTw may have a wall shape rising from the road surface inside or outside the lane marking. The wall content CTw has a shape extending along the own lane Lns. The wall content CTw is displayed so as to extend from the start point to the end point of the curve.
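A minimal sketch of how such wall geometry could be generated is shown below; the point format, the 0.8 m wall height, and the function name are assumptions for illustration, not details taken from the embodiment.

    # Build a quad-strip standing on the curve's outer lane marking, sampled
    # from the curve start point to the curve end point.
    def wall_vertices(outer_marking_pts, wall_height_m: float = 0.8):
        strip = []
        for x, y, z in outer_marking_pts:            # points along the outer marking
            strip.append((x, y, z))                  # foot of the wall on the road surface
            strip.append((x, y, z + wall_height_m))  # top edge of the wall
        return strip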
 以上によれば、カーブ走行シーンにおいて、カーブの外周側に壁コンテンツCTwが重畳表示される。故に、ドライバは、車両Aがカーブの外周側に逸脱しないイメージを一層想起し易くなる。したがって、表示生成部109は、ドライバの不安をより低減可能となる。 According to the above, in the curve driving scene, the wall content CTw is superimposed and displayed on the outer peripheral side of the curve. Therefore, the driver can more easily recall the image that the vehicle A does not deviate to the outer peripheral side of the curve. Therefore, the display generation unit 109 can further reduce the anxiety of the driver.
 なお、表示生成部109は、崖走行シーンにおいて上述の壁コンテンツCTwを表示させてもよい。この場合、壁コンテンツCTwは、崖が位置する側の区画線付近に重畳表示される。 Note that the display generation unit 109 may display the above-mentioned wall content CTw in the cliff running scene. In this case, the wall content CTw is superimposed and displayed near the lane marking on the side where the cliff is located.
 (第11実施形態)
 第11実施形態では、第1実施形態におけるHCU100の変形例について図19を参照して説明する。第11実施形態のHCU100は、車線維持制御に対するドライバの信頼度を、実際に測定された測定値として取得する。HCU100は、当該信頼度に基づいて、予想軌跡コンテンツCTpの表示を制御する。
(11th Embodiment)
In the eleventh embodiment, a modified example of the HCU 100 of the first embodiment will be described with reference to FIG. 19. The HCU 100 of the eleventh embodiment acquires the driver's reliability in the lane keeping control as an actually measured value. The HCU 100 controls the display of the predicted locus content CTp based on that reliability.
 信頼度は、例えば、DSM27によって測定される。具体的には、DSM27は、ドライバのストレスを、信頼度として測定する。この場合、ストレスが高いほど、信頼度が低いとされる。DSM27は、サッカード等の眼球運動、瞳孔の開度等を撮像画像の解析により検出し、これらに基づいて制御ユニットにてストレスを評価するストレス評価値を算出すればよい。また、DSM27は、図示しない生体センサの検出情報をストレスの算出に用いてもよい。検出情報には、例えば、心拍数、発汗量、体温等が含まれる。DSM27は、測定したストレス評価値を、HCU100へと逐次提供する。 The reliability is measured by, for example, DSM27. Specifically, the DSM27 measures driver stress as reliability. In this case, the higher the stress, the lower the reliability. The DSM 27 may detect eye movements such as saccades, pupil opening, and the like by analyzing captured images, and calculate a stress evaluation value for evaluating stress by the control unit based on these. Further, the DSM 27 may use the detection information of the biosensor (not shown) for the calculation of stress. The detection information includes, for example, heart rate, sweating amount, body temperature, and the like. The DSM27 sequentially provides the measured stress evaluation values to the HCU 100.
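A stress evaluation value of this general kind could be combined from normalized indicators as in the following sketch; the chosen features, weights, and normalization constants are invented for illustration and are not values from the DSM 27.

    # Hypothetical 0..1 stress evaluation value from eye metrics and a biosensor.
    def stress_score(saccade_rate_hz, pupil_dilation_ratio, heart_rate_bpm,
                     weights=(0.4, 0.3, 0.3)):
        saccade_n = min(saccade_rate_hz / 5.0, 1.0)                 # ~5 Hz treated as high
        pupil_n = min(max(pupil_dilation_ratio - 1.0, 0.0), 1.0)    # dilation above baseline
        hr_n = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)   # 60..120 bpm mapped to 0..1
        w1, w2, w3 = weights
        return w1 * saccade_n + w2 * pupil_n + w3 * hr_n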
 第11実施形態において、HCU100のドライバ情報取得部101は、DSM27からのストレス評価値を取得し、シーン判定部105へと提供する。 In the eleventh embodiment, the driver information acquisition unit 101 of the HCU 100 acquires the stress evaluation value from the DSM 27 and provides it to the scene determination unit 105.
 シーン判定部105は、ストレス評価値に基づいて、現在の走行シーンが特定シーンであるか否かを判定する。すなわち、シーン判定部105は、ストレス評価値が許容範囲内である場合には、現在の走行シーンが特定シーンであると判定する。そして、シーン判定部105は、ストレス評価値が許容範囲外である場合には、現在の走行シーンが特定シーンではないと判定する。以上の処理は、図7のフローチャートにおけるS20にて実行される。 The scene determination unit 105 determines whether or not the current driving scene is a specific scene based on the stress evaluation value. That is, when the stress evaluation value is within the permissible range, the scene determination unit 105 determines that the current driving scene is a specific scene. Then, when the stress evaluation value is out of the permissible range, the scene determination unit 105 determines that the current driving scene is not a specific scene. The above processing is executed in S20 in the flowchart of FIG. 7.
 シーン判定部105は、特定シーンであると判定する許容範囲を、学習により決定する。具体的には、シーン判定部105は、LTAの実行中におけるステアリングホイールの把持またはステアリング操作の検出タイミング、またはブレーキ操作によるLTAの中断タイミングに関する情報を取得する。また、シーン判定部105は、当該タイミングにおけるストレス評価値を取得する。シーン判定部105は、これらの情報に基づいて、特定シーンに対応する許容範囲を学習すればよい。なお、シーン判定部105は、許容範囲を学習により決定するのではなく、予め設定された範囲としてもよい。 The scene determination unit 105 determines the permissible range for determining a specific scene by learning. Specifically, the scene determination unit 105 acquires information regarding the detection timing of the steering wheel grip or the steering operation during the execution of the LTA, or the interruption timing of the LTA due to the brake operation. In addition, the scene determination unit 105 acquires the stress evaluation value at the relevant timing. The scene determination unit 105 may learn the permissible range corresponding to a specific scene based on this information. The scene determination unit 105 may set a preset range instead of determining the permissible range by learning.
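The learning of this permissible range might look like the following sketch, assuming the stress values logged at grip or interruption timings mark the level at which drivers stop trusting the control; the class name, sample count, and margin are illustrative assumptions.

    class SpecificSceneRangeLearner:
        """Learns the stress range treated as a specific scene (used at S20)."""
        def __init__(self, preset_range=(0.6, 1.0)):
            self.samples = []              # stress values at intervention timings
            self.range = preset_range      # preset range used until samples accumulate

        def add_intervention_sample(self, stress_value: float):
            self.samples.append(stress_value)
            if len(self.samples) >= 10:    # enough observations to learn from
                lower = max(min(self.samples) - 0.05, 0.0)
                self.range = (lower, 1.0)

        def is_specific_scene(self, current_stress: float) -> bool:
            lower, upper = self.range
            return lower <= current_stress <= upper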
 また、シーン判定部105は、実際に測定された信頼度以外の情報も、特定シーンの判定に利用してもよい。例えば、シーン判定部105は、第1実施形態に示したカーブ走行シーン、視界不良シーン、崖走行シーンのいずれか1つに現在の走行シーンが該当するか否かの判定結果と、実際に測定された信頼度とを組み合わせて、特定シーンであるか否かを判定してもよい。具体的には、シーン判定部105は、上述のシーンのいずれか1つに現在の走行シーンが該当し、且つ信頼度が特定シーンに該当する範囲内である場合に、現在の走行シーンが特定シーンであると判定してもよい。なお、第11実施形態の構成は、特定シーンであると判定された場合と特定シーンでないと判定された場合とで予想軌跡コンテンツCTpの表示態様を変更するHCU100に対しても適用可能である。 The scene determination unit 105 may also use information other than the actually measured reliability for determining a specific scene. For example, the scene determination unit 105 may determine whether or not the scene is a specific scene by combining the actually measured reliability with the determination result of whether or not the current travel scene corresponds to any one of the curve travel scene, the poor visibility scene, and the cliff travel scene described in the first embodiment. Specifically, the scene determination unit 105 may determine that the current driving scene is a specific scene when the current driving scene corresponds to any one of the above scenes and the reliability is within the range corresponding to a specific scene. The configuration of the eleventh embodiment is also applicable to an HCU 100 that changes the display mode of the predicted locus content CTp depending on whether or not the scene is determined to be a specific scene.
 (第12実施形態)
 第12実施形態では、第1実施形態におけるHCU100の変形例について説明する。第12実施形態において、HCU100の表示生成部109は、特定シーンにおいて、車線維持制御に対するドライバの信頼度がより低くなると推定されるほど、予想軌跡コンテンツCTpをより強調した表示態様にて表示する。
(12th Embodiment)
In the twelfth embodiment, a modified example of the HCU 100 of the first embodiment will be described. In the twelfth embodiment, in a specific scene, the display generation unit 109 of the HCU 100 displays the predicted locus content CTp in a more emphasized display mode the lower the driver's reliability in the lane keeping control is estimated to be.
 例えば、表示生成部109は、カーブ走行シーンにおいて、走行するカーブ路の車線の曲率が大きいほど、信頼度がより低くなると推定する。または、表示生成部109は、カーブ走行シーンにおいて、カーブの連続が多いほど、信頼度がより低くなると推定する。さらに、表示生成部109は、走行する道路の幅が小さいほど、信頼度がより低いと推定する。表示生成部109は、以上の信頼度推定を、高精度地図データに基づいて実行すればよい。 For example, the display generation unit 109 estimates that the greater the curvature of the lane of the curve road on which the vehicle travels, the lower the reliability in the curve travel scene. Alternatively, the display generation unit 109 estimates that the more continuous the curve is, the lower the reliability is in the curve traveling scene. Further, the display generation unit 109 estimates that the smaller the width of the road on which the vehicle travels, the lower the reliability. The display generation unit 109 may perform the above reliability estimation based on the high-precision map data.
 さらに、表示生成部109は、かすれ等により区画線の視認度が低下しているほど、信頼度がより低いと推定する。表示生成部109は、外界情報取得部103にて取得された区画線の検出情報に基づいて、区画線の視認度を推定すればよい。 Further, the display generation unit 109 estimates that the lower the visibility of the lane markings due to faintness or the like, the lower the reliability. The display generation unit 109 may estimate the visibility of the lane marking based on the detection information of the lane marking acquired by the outside world information acquisition unit 103.
 表示生成部109は、例えば、各境界ラインCTbl,CTbrの移動の繰り返し速度を早くすることで、予想軌跡コンテンツCTpを強調した表示態様とする。または、表示生成部109は、各境界ラインCTbl,CTbrの内側への移動量を大きくすることで、予想軌跡コンテンツCTpを強調した表示態様としてもよい。また、表示生成部109は、各境界ラインCTbl,CTbrの輝度または表示サイズを大きくすることで、強調した表示態様としてもよい。また、表示生成部109は、各境界ラインCTbl,CTbrの表示色を変更することで、強調した表示態様としてもよい。表示生成部109は、以上の信頼度に応じた表示処理を、図7のフローチャートのS40にて実行する。 The display generation unit 109 has a display mode in which the expected locus content CTp is emphasized by, for example, increasing the repeating speed of movement of each boundary line CTbl and CTbr. Alternatively, the display generation unit 109 may have a display mode in which the expected locus content CTp is emphasized by increasing the amount of movement of the boundary lines CTbl and CTbr inward. Further, the display generation unit 109 may make the display mode emphasized by increasing the brightness or the display size of each boundary line CTbl, CTbr. Further, the display generation unit 109 may make the display mode emphasized by changing the display colors of the boundary lines CTbl and CTbr. The display generation unit 109 executes the display process according to the above reliability in S40 of the flowchart of FIG.
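One way to map an estimated reliability onto the emphasis parameters named above is sketched here; the value ranges and the linear mapping are assumptions made for the example.

    # reliability in 0..1 (lower = less trust in the lane keeping control)
    def emphasis_params(reliability: float):
        emphasis = 1.0 - max(0.0, min(1.0, reliability))
        return {
            "repeat_speed_hz": 0.5 + 1.5 * emphasis,   # faster repetition of the inward movement
            "inward_shift_m": 0.2 + 0.6 * emphasis,    # larger movement toward the lane center
            "luminance_scale": 1.0 + 0.8 * emphasis,   # brighter boundary lines
            "size_scale": 1.0 + 0.5 * emphasis,        # larger display size
        }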
 なお、表示生成部109は、実際に測定された信頼度が低いほど、予想軌跡コンテンツCTpをより強調した表示態様としてもよい。実際の信頼度の測定については、第11実施形態の説明を援用する。また、第12実施形態の構成は、特定シーンであると判定された場合と特定シーンでないと判定された場合とで予想軌跡コンテンツCTpの表示態様を変更するHCU100に対しても適用可能である。その場合、表示生成部109は、信頼度に応じた表示処理を、図10のフローチャートのS45にて実行すればよい。 Note that the display generation unit 109 may have a display mode in which the predicted locus content CTp is emphasized as the actually measured reliability is lower. For the actual measurement of reliability, the description of the eleventh embodiment is referred to. Further, the configuration of the twelfth embodiment can be applied to the HCU 100 which changes the display mode of the predicted locus content CTp depending on whether it is determined to be a specific scene or not. In that case, the display generation unit 109 may execute the display process according to the reliability in S45 of the flowchart of FIG.
 以上の第12実施形態によれば、現在の走行シーンが特定シーンであると判定された場合において、推定される信頼度が低いほど、予想軌跡コンテンツCTpがより強調された表示態様で表示される。故に、ドライバが車線維持制御に対してより不安を覚え得る状況において、車線内走行が維持されるイメージを、予想軌跡コンテンツCTpによってより確実にドライバに想起させ易くなる。したがって、ドライバの不安が一層低減され得る。 According to the twelfth embodiment described above, when it is determined that the current driving scene is a specific scene, the predicted locus content CTp is displayed in a more emphasized display mode the lower the estimated reliability is. Therefore, in a situation where the driver may feel more anxious about the lane keeping control, the predicted locus content CTp makes it easier for the driver to reliably recall the image that in-lane traveling is maintained. Consequently, the driver's anxiety can be further reduced.
 また、第12実施形態によれば、現在の走行シーンが特定シーンであると判定され、且つ走行する車線がカーブしている場合、すなわちカーブ走行シーンである場合において、走行する車線の曲率が大きいほど、予想軌跡コンテンツCTpがより強調される。車線の曲率が大きくなるほど比較的大きな加速度が車両Aに作用し得るため、ドライバは車線維持制御に対する不安をより覚え易くなる。故に、車線の曲率が大きくなるほど、予想軌跡コンテンツCTpが強調されることで、車線内走行が維持されるイメージを、予想軌跡コンテンツCTpによってより確実にドライバに想起させ易くなる。以上により、カーブ走行シーンにおけるドライバの不安が一層低減され得る。 Further, according to the twelfth embodiment, when it is determined that the current driving scene is a specific scene and the traveling lane is curved, that is, in a curve traveling scene, the predicted locus content CTp is emphasized more as the curvature of the traveling lane increases. Since a relatively large acceleration can act on the vehicle A as the curvature of the lane increases, the driver is more likely to feel anxiety about the lane keeping control. Therefore, by emphasizing the predicted locus content CTp more as the curvature of the lane increases, the predicted locus content CTp makes it easier for the driver to reliably recall the image that in-lane traveling is maintained. As a result, the driver's anxiety in the curve traveling scene can be further reduced.
 (第13実施形態)
 第13実施形態では、第1実施形態におけるHCU100の変形例について説明する。
(13th Embodiment)
In the thirteenth embodiment, a modification of the HCU 100 in the first embodiment will be described.
 第13実施形態において、ドライバ情報取得部101は、ドライバのアイポイントEPの位置および視線方向に加えて、ステアリングホイールの把持の有無(以下、把持情報)を取得する。把持情報は、例えば、DSM27による画像解析によって特定されてもよいし、図示しない把持センサまたはステアセンサによって特定されてもよい。なお、以下において、ドライバがステアリングホイールを把持している状態を「ハンズオン状態」、把持を中断している状態を「ハンズオフ状態」と表記する場合が有る。 In the thirteenth embodiment, the driver information acquisition unit 101 acquires the presence / absence of gripping of the steering wheel (hereinafter referred to as gripping information) in addition to the position and line-of-sight direction of the driver's eye point EP. The grip information may be specified by, for example, image analysis by DSM27, or by a grip sensor or steer sensor (not shown). In the following, the state in which the driver is gripping the steering wheel may be referred to as a "hands-on state", and the state in which the driver is suspending the grip may be referred to as a "hands-off state".
 制御情報取得部104は、車線維持制御情報に加えて、LTA機能実行時の自動運転のレベル情報を、各車線維持制御部51,61から取得する。レベル情報は、少なくとも、自動運転レベル2以下であるか、または自動運転レベル3以上であるかを判断可能な程度の情報であればよい。換言すれば、レベル情報は、LTA機能実行時に周辺監視義務が必要であるか、または不要であるかを判断可能であればよい。制御情報取得部104は、LTA機能がオンである旨の情報がいずれのECU50,60の車線維持制御部51,61から提供されたかを判断し、判断結果に基づいてレベル情報を生成してもよい。 In addition to the lane keeping control information, the control information acquisition unit 104 acquires the level information of automated driving at the time of executing the LTA function from each of the lane keeping control units 51 and 61. The level information only needs to be sufficient to determine at least whether the automated driving level is 2 or lower, or 3 or higher. In other words, the level information only needs to allow determining whether or not the periphery monitoring obligation applies when the LTA function is executed. The control information acquisition unit 104 may determine from which of the lane keeping control units 51 and 61 of the ECUs 50 and 60 the information indicating that the LTA function is on was provided, and may generate the level information based on the determination result.
 さらに、制御情報取得部104は、レベル3以上の自動運転が実行される場合において、車両Aが走行予定の軌道情報を取得する。軌道情報には、車両Aが辿る予定の経路に関する情報が少なくとも含まれている。軌道情報には、経路の走行する際の速度に関する情報が含まれていてもよい。 Further, the control information acquisition unit 104 acquires the track information of the vehicle A scheduled to travel when the automatic driving of level 3 or higher is executed. The track information includes at least information about the route that vehicle A is going to follow. The track information may include information about the speed at which the route travels.
 表示生成部109は、予想軌跡コンテンツCTpの表示実行の有無を、シーン判定部105の判定結果に加え、把持情報、レベル情報、および軌道情報に基づいて決定する。 The display generation unit 109 determines whether or not to execute the display of the predicted trajectory content CTp based on the gripping information, the level information, and the trajectory information in addition to the determination result of the scene determination unit 105.
 具体的には、表示生成部109は、自動運転レベルが2以下であり、且つハンズオン状態である場合には、特定シーンであると判定されている場合であっても、予想軌跡コンテンツCTpの表示を中止する。一方で、表示生成部109は、自動運転レベルが2以下であり、且つハンズオフ状態である場合において、特定シーンであると判定されれば、予想軌跡コンテンツCTpを表示させる。 Specifically, when the automated driving level is 2 or lower and the driver is in the hands-on state, the display generation unit 109 cancels the display of the predicted locus content CTp even if the scene is determined to be a specific scene. On the other hand, when the automated driving level is 2 or lower and the driver is in the hands-off state, the display generation unit 109 displays the predicted locus content CTp if the scene is determined to be a specific scene.
 また、表示生成部109は、自動運転レベルが3以上であり、且つ予測される車両挙動の大きさが許容範囲外となると判定される場合であれば、特定シーンであるか否かに関わらず、予想軌跡コンテンツCTpを表示させる。例えば、表示生成部109は、車両挙動の大きさを、車両Aに作用すると予測される将来の加速度に基づいて評価する。車両挙動の大きさは、横方向の加速度および前後方向の加速度の少なくとも一方に基づいて評価されればよい。表示生成部109は、将来の加速度を、軌道情報に基づいて予測すればよい。なお、表示生成部109は、自動運転ECU60等で予測された車両挙動の大きさを取得してもよい。 Further, when the automated driving level is 3 or higher and the predicted magnitude of the vehicle behavior is determined to be out of the permissible range, the display generation unit 109 displays the predicted locus content CTp regardless of whether or not the scene is a specific scene. For example, the display generation unit 109 evaluates the magnitude of the vehicle behavior based on the future acceleration predicted to act on the vehicle A. The magnitude of the vehicle behavior may be evaluated based on at least one of the lateral acceleration and the longitudinal acceleration. The display generation unit 109 may predict the future acceleration based on the track information. Note that the display generation unit 109 may acquire the magnitude of the vehicle behavior predicted by the automated driving ECU 60 or the like.
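The evaluation of the predicted behavior magnitude from the track information could, for example, use lateral acceleration v^2 * curvature as in the sketch below; the trajectory format and the thresholds are assumptions for illustration.

    # trajectory: iterable of (speed_mps, curvature_1pm, longitudinal_accel_mps2)
    def behavior_exceeds_range(trajectory, lat_limit=2.0, lon_limit=2.5):
        for v, kappa, a_lon in trajectory:
            a_lat = v * v * abs(kappa)      # lateral acceleration along the planned path
            if a_lat > lat_limit or abs(a_lon) > lon_limit:
                return True                 # predicted behavior out of the permissible range
        return False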
 次に、表示制御プログラムに基づきHCU100が実行する各コンテンツの表示制御方法の詳細を、図20に示すフローチャートに基づき、以下説明する。図20のフローにおいて、図7と同じ符号を付した処理については、第1実施形態の説明を援用する。 Next, the details of the display control method of each content executed by the HCU 100 based on the display control program will be described below based on the flowchart shown in FIG. In the flow of FIG. 20, the description of the first embodiment is referred to for the processing with the same reference numerals as those of FIG. 7.
 S10にて、LTA機能がオンであると判定すると、S15へと進む。S15では、表示生成部109にて、自動運転レベル3以上であるか否かを判定する。自動運転レベル3以上であると判定すると、S16へと進む。 If it is determined in S10 that the LTA function is on, the process proceeds to S15. In S15, the display generation unit 109 determines whether or not the automatic operation level is 3 or higher. If it is determined that the automatic operation level is 3 or higher, the process proceeds to S16.
 S16では、表示生成部109にて、車両挙動の大きさを推定し、当該大きさが許容範囲内であるか否かを判定する。許容範囲外であると判定すると、S20へと進み、許容範囲内であると判定すると、S30へと進む。 In S16, the display generation unit 109 estimates the magnitude of the vehicle behavior and determines whether or not the magnitude is within the permissible range. If it is determined that it is out of the permissible range, the process proceeds to S20, and if it is determined that it is within the permissible range, the process proceeds to S30.
 一方、S15にて自動運転レベル3以上ではない、すなわちレベル2以下であると判定すると、S20へと進む。S20にて、シーン判定部105が、現在の走行シーンが特定シーンに該当すると判定すると、S25へと進む。S25では、表示生成部109が、自動運転レベル2以下であり、且つハンズオン状態であるか否かを判定する。 On the other hand, if it is determined in S15 that the automatic operation level is not 3 or higher, that is, level 2 or lower, the process proceeds to S20. When the scene determination unit 105 determines in S20 that the current driving scene corresponds to a specific scene, the process proceeds to S25. In S25, the display generation unit 109 determines whether or not the automatic operation level 2 or lower and the hands-on state.
 自動運転レベル2以下ではない、または自動運転レベル2以下であり且つハンズオフ状態であると判定すると、S30へと進む。一方で、自動運転レベル2以下であり、且つハンズオン状態であると判定すると、S70へと進む。 If it is determined that the automatic operation level is not 2 or less, or the automatic operation level is 2 or less and the hands-off state is reached, the process proceeds to S30. On the other hand, if it is determined that the automatic operation level is 2 or less and the hands-on state is reached, the process proceeds to S70.
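Read together, the rules of the thirteenth embodiment condense to roughly the following decision sketch; the function and argument names are placeholders, and the exact branching of FIG. 20 (S10, S15, S16, S20, S25) may differ in detail from this simplification.

    def should_show_trajectory_content(lta_on, level, behavior_out_of_range,
                                       specific_scene, hands_on):
        if not lta_on:                                 # S10: LTA function off
            return False
        if level >= 3 and behavior_out_of_range:       # level 3+, behavior out of range
            return True                                # shown regardless of the scene judgment
        if not specific_scene:                         # not a specific scene
            return False
        if level <= 2 and hands_on:                    # level 2 or below with hands on
            return False                               # display is cancelled
        return True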
 以上の第13実施形態によれば、車線維持制御の実行においてドライバの周辺監視義務が有り且つハンズオン状態であると判断された場合には、特定シーンであると判定された場合であっても予想軌跡コンテンツCTpが非表示とされる。特定シーンであっても、既にステアリングホイールを把持したハンズオン状態であれば、ドライバが車線維持制御に不安に感じて行い得る動作は、ハンズオフ状態と比較して少なくなる。すなわち、予想軌跡コンテンツCTpを表示させる必要性が、ハンズオフ状態と比較して低くなる。一方で、特定シーンにおいてハンズオフ状態であれば、ドライバが車線維持制御に不安に感じて行い得る動作に、ステアリングホイールの把持が含まれ得る。したがって、本来不要である当該動作を新たに実行させないという観点において、予想軌跡コンテンツCTpを表示させる必要性が、ハンズオフ状態と比較して高くなる。したがって、第13実施形態によれば、予想軌跡コンテンツCTpを表示させる必要性に応じて、当該コンテンツCTpの表示がより適切に制御され得る。 According to the thirteenth embodiment described above, when it is determined that the driver has a periphery monitoring obligation in the execution of the lane keeping control and is in the hands-on state, the predicted locus content CTp is hidden even if the scene is determined to be a specific scene. Even in a specific scene, if the driver is already in the hands-on state gripping the steering wheel, the actions the driver may take out of anxiety about the lane keeping control are fewer than in the hands-off state. That is, the necessity of displaying the predicted locus content CTp is lower than in the hands-off state. On the other hand, in the hands-off state in a specific scene, the actions the driver may take out of anxiety about the lane keeping control can include gripping the steering wheel. Therefore, from the viewpoint of not causing the driver to newly perform this originally unnecessary action, the necessity of displaying the predicted locus content CTp is higher than in the hands-on state. Accordingly, according to the thirteenth embodiment, the display of the predicted locus content CTp can be controlled more appropriately in accordance with the necessity of displaying it.
 また、第13実施形態によれば、車線維持制御の実行においてドライバの周辺監視義務が無く且つ予測される車両挙動の大きさが許容範囲外であると判断された場合には、特定シーンではないと判定された場合であっても、予想軌跡コンテンツCTpが表示される。故に、車両Aに発生する将来の挙動が軌道情報等により予測し易い自動運転レベル3以上において、ドライバが不安に感じ得る状況下でより確実に予想軌跡コンテンツCTpが表示され得る。 Further, according to the thirteenth embodiment, when it is determined that the driver has no periphery monitoring obligation in the execution of the lane keeping control and the predicted magnitude of the vehicle behavior is out of the permissible range, the predicted locus content CTp is displayed even if the scene is determined not to be a specific scene. Therefore, at automated driving level 3 or higher, where the future behavior occurring in the vehicle A is easy to predict from the track information or the like, the predicted locus content CTp can be displayed more reliably in situations where the driver may feel uneasy.
 なお、第13実施形態の構成は、特定シーンであると判定された場合と特定シーンでないと判定された場合とで予想軌跡コンテンツCTpの表示態様を変更するHCU100に対しても当然適用可能である。例えば、第13実施形態の構成が適用されたHCU100が、車線維持制御の実行においてドライバの周辺監視義務が有り且つハンズオン状態であると判断したとする。この場合、HCU100は、現在の走行シーンが特定シーンであると判定された場合であっても、予想軌跡コンテンツCTpの表示態様を、特定シーンでないと判定された場合の表示態様と同等とする構成となり得る。また、第13実施形態の構成が適用されたHCU100が、車線維持制御の実行においてドライバの周辺監視義務が無く且つ予測される車両挙動の大きさが許容範囲外であると判断したとする。この場合には、HCU100は、現在の走行シーンが特定シーンではないと判定された場合であっても、予想軌跡コンテンツCTpの表示態様を、特定シーンであると判定された場合の表示態様と同等とする構成となり得る。 The configuration of the thirteenth embodiment is, of course, also applicable to an HCU 100 that changes the display mode of the predicted locus content CTp depending on whether the scene is determined to be a specific scene or not. For example, suppose that an HCU 100 to which the configuration of the thirteenth embodiment is applied determines that the driver has a periphery monitoring obligation in the execution of the lane keeping control and is in the hands-on state. In this case, the HCU 100 may be configured such that, even when the current driving scene is determined to be a specific scene, the display mode of the predicted locus content CTp is made equivalent to the display mode used when the scene is determined not to be a specific scene. Also, suppose that an HCU 100 to which the configuration of the thirteenth embodiment is applied determines that the driver has no periphery monitoring obligation in the execution of the lane keeping control and the predicted magnitude of the vehicle behavior is out of the permissible range. In this case, the HCU 100 may be configured such that, even when the current driving scene is determined not to be a specific scene, the display mode of the predicted locus content CTp is made equivalent to the display mode used when the scene is determined to be a specific scene.
 (他の実施形態)
 この明細書における開示は、例示された実施形態に制限されない。開示は、例示された実施形態と、それらに基づく当業者による変形態様を包含する。例えば、開示は、実施形態において示された部品および/または要素の組み合わせに限定されない。開示は、多様な組み合わせによって実施可能である。開示は、実施形態に追加可能な追加的な部分をもつことができる。開示は、実施形態の部品および/または要素が省略されたものを包含する。開示は、ひとつの実施形態と他の実施形態との間における部品および/または要素の置き換え、または組み合わせを包含する。開示される技術的範囲は、実施形態の記載に限定されない。開示されるいくつかの技術的範囲は、請求の範囲の記載によって示され、さらに請求の範囲の記載と均等の意味および範囲内での全ての変更を含むものと解されるべきである。
(Other embodiments)
The disclosure herein is not limited to the illustrated embodiments. The disclosure includes exemplary embodiments and modifications by those skilled in the art based on them. For example, disclosure is not limited to the parts and / or element combinations shown in the embodiments. Disclosure can be carried out in various combinations. The disclosure can have additional parts that can be added to the embodiments. Disclosures include those in which the parts and / or elements of the embodiment have been omitted. Disclosures include replacement or combination of parts and / or elements between one embodiment and another. The technical scope disclosed is not limited to the description of the embodiments. Some technical scopes disclosed are indicated by the claims description and should be understood to include all modifications within the meaning and scope equivalent to the claims statement.
 上述の実施形態において、シーン判定部105は、車両A周辺の高精度地図データまたは周辺監視センサ30の検出情報に基づいてシーン判定を行うとした。これに代えて、シーン判定部105は、他の情報に基づいてシーン判定を行う構成であってもよい。例えば、シーン判定部105は、DCM49を介して外部サーバから気象情報を取得することで、視界不良シーンか否かを判定してもよい。 In the above-described embodiment, the scene determination unit 105 determines the scene based on the high-precision map data around the vehicle A or the detection information of the peripheral monitoring sensor 30. Instead of this, the scene determination unit 105 may be configured to perform scene determination based on other information. For example, the scene determination unit 105 may determine whether or not the scene has poor visibility by acquiring weather information from an external server via the DCM49.
 上述の実施形態において、シーン判定部105は、取得した各種情報に基づいて特定シーンか否かのシーン判定を実行するとした。これに代えて、シーン判定部105は、運転支援ECU50や自動運転ECU60等、他のECUにて実行されたシーン判定結果の取得を以ってシーン判定の実行としてもよい。 In the above-described embodiment, the scene determination unit 105 executes scene determination as to whether or not it is a specific scene based on various acquired information. Instead of this, the scene determination unit 105 may execute the scene determination by acquiring the scene determination result executed by another ECU such as the driving support ECU 50 or the automatic driving ECU 60.
 上述の実施形態において、境界ラインCTbl,CTbrは、区画線LL,LRの内側に重畳表示されるとしたが、区画線LL,LR上に重畳表示されてもよい。または、区画線LL,LRの外側に重畳表示されてもよい。なお、境界ラインは、左右の道路端等、区画線以外の境界線に対応した場所に表示されてもよい。 In the above-described embodiment, the boundary lines CTbl and CTbr are superimposed and displayed inside the lane markings LL and LR, but they may be superimposed and displayed on the lane markings LL and LR. Alternatively, it may be superimposed and displayed on the outside of the lane markings LL and LR. The boundary line may be displayed at a place corresponding to the boundary line other than the lane marking line, such as the left and right road edges.
 上述の第1実施形態において、表示生成部109は、特定シーンではないと判定された場合には予想軌跡コンテンツCTpを非表示とするが、予想軌跡以外を示すLTA関連のコンテンツであれば、特定シーンではない場合でも表示してよい。例えば、表示生成部109は、LTA実行中を示す文字情報、アイコン等のコンテンツを、予想軌跡コンテンツCTpとは別に表示してよい。 In the first embodiment described above, the display generation unit 109 hides the predicted locus content CTp when it is determined that the scene is not a specific scene, but LTA-related content indicating something other than the predicted locus may be displayed even when the scene is not a specific scene. For example, the display generation unit 109 may display content such as character information and icons indicating that LTA is being executed, separately from the predicted locus content CTp.
 上述の第1実施形態において、表示生成部109は、予想軌跡コンテンツCTpを、連続的に移動するように表示させるとした。これに代えて、表示生成部109は、予想軌跡コンテンツCTpを断続的に移動するように表示させてもよい。また、表示生成部109は、両外側から中央側への移動とは異なる移動パターンにて予想軌跡コンテンツCTpを表示させてもよい。また、表示生成部109は、第3実施形態のように、予想軌跡コンテンツCTpをその場に留まって表示される静止コンテンツとしてもよい。また、表示生成部109は、開始コンテンツCTiを表示させずに予想軌跡コンテンツCTpを表示させてもよい。 In the first embodiment described above, the display generation unit 109 displays the expected locus content CTp so as to move continuously. Instead, the display generation unit 109 may display the expected locus content CTp so as to move intermittently. Further, the display generation unit 109 may display the predicted locus content CTp in a movement pattern different from the movement from both outer sides to the center side. Further, the display generation unit 109 may use the expected locus content CTp as static content to be displayed while staying in place, as in the third embodiment. Further, the display generation unit 109 may display the expected locus content CTp without displaying the start content CTi.
 上述の第3実施形態において、表示生成部109は、特定シーンである場合に、予想軌跡コンテンツCTpを車線幅方向の中央側の部分を強調する表示態様とする。これに代えて、表示生成部109は、特定シーンである場合に、予想軌跡コンテンツCTpの輝度を高めることで表示態様を変更してもよい。または、表示生成部109は、予想軌跡コンテンツCTpの透過率の低減、表示色の変更、表示サイズの拡大等により、表示態様を変更してもよい。 In the third embodiment described above, the display generation unit 109 has a display mode in which the predicted locus content CTp emphasizes the central portion in the lane width direction in the case of a specific scene. Instead of this, the display generation unit 109 may change the display mode by increasing the brightness of the predicted locus content CTp in the case of a specific scene. Alternatively, the display generation unit 109 may change the display mode by reducing the transmittance of the predicted locus content CTp, changing the display color, enlarging the display size, and the like.
 上述の第5実施形態において、特定シーンである場合には、予想軌跡コンテンツCTpとして追加コンテンツCTal,CTarを2本追加するとしたが、中央ラインを1本のみ追加する構成であってもよく、3本以上追加する構成であってもよい。 In the fifth embodiment described above, two additional contents CTal and CTar are added as the predicted locus content CTp in the case of a specific scene, but the configuration may be such that only one central line is added, or such that three or more lines are added.
 上述の実施形態において、表示生成部109は、予想軌跡コンテンツCTpの表示態様をアニメーションによって連続的に変更するとした。これに代えて、表示生成部109は、表示態様変更前の予想軌跡コンテンツCTpが一旦非表示となってから、変更後の予想軌跡コンテンツCTpを表示させる構成であってもよい。 In the above-described embodiment, the display generation unit 109 continuously changes the display mode of the predicted locus content CTp by animation. Instead of this, the display generation unit 109 may be configured to display the predicted locus content CTp after the change after the predicted locus content CTp before the display mode change is once hidden.
 上述の実施形態において、予想軌跡コンテンツCTpは、一続きの細い帯状であるとしたが、予想軌跡コンテンツCTpの表示形状はこれに限定されない。例えば、予想軌跡コンテンツCTpは、予想軌跡PTに沿って並べられた複数の図形であってもよく、矢印形状であってもよい。 In the above-described embodiment, the predicted locus content CTp is a continuous thin band, but the display shape of the predicted locus content CTp is not limited to this. For example, the predicted locus content CTp may be a plurality of figures arranged along the predicted locus PT, or may have an arrow shape.
 上述の実施形態において、ステータス画像CTstは、メータディスプレイ23に表示されるとしたが、センターインフォメーションディスプレイ等の他の車載表示器に表示されてもよい。 In the above-described embodiment, the status image CTst is displayed on the meter display 23, but it may be displayed on another vehicle-mounted display such as a center information display.
 上述の実施形態において、ステータス画像CTstは、点滅表示によって、特定シーンでの予想軌跡コンテンツCTpとは異なる表示態様にて表示されるとした。これに代えて、ステータス画像CTstは、別の表示態様にて表示されてもよい。例えば、ステータス画像CTstは、最高輝度状態と最低輝度状態とが繰り返される表示態様であれば、完全に消灯状態とならなくてもよい。また、ステータス画像CTstは、予想軌跡コンテンツCTpとは異なる移動パターンにて移動表示されることで、予想軌跡コンテンツCTpと異なる表示態様とされてもよい。 In the above-described embodiment, the status image CTst is displayed in a display mode different from the expected locus content CTp in a specific scene by blinking display. Instead of this, the status image CTst may be displayed in another display mode. For example, the status image CTst does not have to be completely turned off as long as the display mode is such that the maximum brightness state and the minimum brightness state are repeated. Further, the status image CTst may be displayed in a different display mode from the predicted locus content CTp by moving and displaying the status image CTst in a movement pattern different from the predicted locus content CTp.
 上述の第11実施形態において、シーン判定部105は、ドライバのストレスを信頼度として、特定シーンの判定を行うとした。しかし、シーン判定部105は、ドライバの車線維持制御に対する緊張または不安を推定可能であれば、ストレス以外の指標を信頼度として、特定シーンの判定を行ってもよい。例えば、シーン判定部105は、ドライバの前方に対する注視度合を信頼度として、特定シーンの判定を行ってもよい。注視度合は、DSM27によって測定されればよい。シーン判定部105は、例えば、注視度合が高いほど信頼度が低くなるとして、特定シーンの判定を行えばよい。 In the eleventh embodiment described above, the scene determination unit 105 determines a specific scene by using the stress of the driver as the reliability. However, the scene determination unit 105 may determine a specific scene by using an index other than stress as the reliability as long as the tension or anxiety about the driver's lane keeping control can be estimated. For example, the scene determination unit 105 may determine a specific scene by using the degree of gaze toward the front of the driver as the reliability. The degree of gaze may be measured by DSM27. For example, the scene determination unit 105 may determine a specific scene, assuming that the higher the gaze degree, the lower the reliability.
 上述の実施形態の処理部およびプロセッサは、1つまたは複数のCPU(Central Processing Unit)を含む。こうした処理部およびプロセッサは、CPUに加えて、GPU(Graphics Processing Unit)およびDFP(Data Flow Processor)等を含む処理部であってよい。さらに処理部およびプロセッサは、FPGA(Field-Programmable Gate Array)、並びにAIの学習および推論等の特定処理に特化したIPコア等を含む処理部であってもよい。こうしたプロセッサの各演算回路部は、プリント基板に個別に実装された構成であってもよく、またはASIC(Application Specific Integrated Circuit)およびFPGA等に実装された構成であってもよい。 The processing unit and processor of the above-described embodiment include one or a plurality of CPUs (Central Processing Units). Such a processing unit and a processor may be a processing unit including a GPU (Graphics Processing Unit), a DFP (Data Flow Processor), and the like in addition to the CPU. Further, the processing unit and the processor may be a processing unit including an FPGA (Field-Programmable Gate Array) and an IP core specialized in specific processing such as learning and inference of AI. Each arithmetic circuit unit of such a processor may be individually mounted on a printed circuit board, or may be mounted on an ASIC (Application Specific Integrated Circuit), an FPGA, or the like.
 制御プログラムを記憶するメモリ装置には、フラッシュメモリおよびハードディスク等の種々の非遷移的実体的記憶媒体(non-transitory tangible storage medium)が採用可能である。こうした記憶媒体の形態も、適宜変更されてよい。例えば記憶媒体は、メモリカード等の形態であり、車載ECUに設けられたスロット部に挿入されて、制御回路に電気的に接続される構成であってよい。 Various non-transitory tangible storage mediums such as flash memory and hard disk can be adopted as the memory device for storing the control program. The form of such a storage medium may also be changed as appropriate. For example, the storage medium may be in the form of a memory card or the like, and may be inserted into a slot portion provided in an in-vehicle ECU and electrically connected to a control circuit.
 本開示に記載の制御部およびその手法は、コンピュータプログラムにより具体化された1つ乃至は複数の機能を実行するようにプログラムされたプロセッサを構成する専用コンピュータにより、実現されてもよい。あるいは、本開示に記載の装置およびその手法は、専用ハードウェア論理回路により、実現されてもよい。もしくは、本開示に記載の装置およびその手法は、コンピュータプログラムを実行するプロセッサと1つ以上のハードウェア論理回路との組み合わせにより構成された1つ以上の専用コンピュータにより、実現されてもよい。また、コンピュータプログラムは、コンピュータにより実行されるインストラクションとして、コンピュータ読み取り可能な非遷移有形記録媒体に記憶されていてもよい。 The control unit and its method described in the present disclosure may be realized by a dedicated computer constituting a processor programmed to execute one or a plurality of functions embodied by a computer program. Alternatively, the apparatus and method thereof described in the present disclosure may be realized by a dedicated hardware logic circuit. Alternatively, the apparatus and method thereof described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits. Further, the computer program may be stored in a computer-readable non-transitional tangible recording medium as an instruction executed by the computer.

Claims (23)

  1.  車両(A)の車線内走行を維持させる車線維持制御を実行可能な車線維持制御部(51,61)を備える前記車両のヘッドアップディスプレイ(20)によるコンテンツの表示を制御する表示制御装置であって、
     前記車線維持制御に対するドライバの信頼度が低下する特定シーンであるか否かを判定するシーン判定部(105)と、
     前記特定シーンであると判定された場合には、前記車線維持制御による予想軌跡(PT)を示す予想軌跡コンテンツ(CTp)を路面に重畳表示させ、前記特定シーンではないと判定された場合には、前記予想軌跡コンテンツを非表示とする表示制御部(109)と、
     を備える表示制御装置。
    A display control device for controlling display of content by a head-up display (20) of a vehicle (A) equipped with a lane keeping control unit (51, 61) capable of executing lane keeping control for keeping the vehicle traveling within a lane, the display control device comprising:
     a scene determination unit (105) that determines whether or not a current scene is a specific scene in which the driver's reliability in the lane keeping control decreases; and
     a display control unit (109) that superimposes and displays on the road surface a predicted locus content (CTp) indicating a predicted locus (PT) of the lane keeping control when it is determined that the scene is the specific scene, and hides the predicted locus content when it is determined that the scene is not the specific scene.
  2.  前記表示制御部は、前記予想軌跡コンテンツの表示開始を示す開始コンテンツ(CTi)を、前記予想軌跡コンテンツの表示前に表示させる請求項1に記載の表示制御装置。 The display control device according to claim 1, wherein the display control unit displays start content (CTi) indicating the start of display of the predicted locus content before displaying the predicted locus content.
  3.  前記表示制御部は、車線幅方向における両外側から中央側へと移動する一対の移動コンテンツ(CTbl,CTbr)を、前記予想軌跡コンテンツに含む請求項1または請求項2に記載の表示制御装置。 The display control device according to claim 1 or 2, wherein the display control unit includes a pair of moving contents (CTbl, CTbr) moving from both outer sides to the center side in the lane width direction in the predicted locus content.
  4.  前記表示制御部は、前記移動コンテンツの移動を繰り返し表示させる請求項3に記載の表示制御装置。 The display control device according to claim 3, wherein the display control unit repeatedly displays the movement of the moving content.
  5.  前記表示制御部は、前記車線がカーブしている場合には、一対の前記移動コンテンツのうちカーブの外周側に重畳する一方を、カーブの内周側に重畳する他方よりも前記中央側に移動させる請求項3または請求項4に記載の表示制御装置。 The display control device according to claim 3 or 4, wherein, when the lane is curved, the display control unit moves the one of the pair of moving contents superimposed on the outer peripheral side of the curve farther toward the center side than the other superimposed on the inner peripheral side of the curve.
  6.  前記表示制御部は、前記車線維持制御の実行において前記ドライバの周辺監視義務が有り且つ前記ドライバがステアリングホイールを把持していると判断した場合には、前記特定シーンであると判定された場合であっても前記予想軌跡コンテンツを非表示とする請求項1から請求項5のいずれか1項に記載の表示制御装置。 The display control device according to any one of claims 1 to 5, wherein, when the display control unit determines that the driver has a periphery monitoring obligation in the execution of the lane keeping control and the driver is gripping the steering wheel, the display control unit hides the predicted locus content even when it is determined that the scene is the specific scene.
  7.  前記表示制御部は、前記車線維持制御の実行において前記ドライバの周辺監視義務が無く且つ予測される前記車両の挙動の大きさが許容範囲外であると判断した場合には、前記特定シーンではないと判定された場合であっても、前記予想軌跡コンテンツを表示させる請求項1から請求項6のいずれか1項に記載の表示制御装置。 The display control device according to any one of claims 1 to 6, wherein, when the display control unit determines that the driver has no periphery monitoring obligation in the execution of the lane keeping control and the predicted magnitude of the behavior of the vehicle is out of the permissible range, the display control unit displays the predicted locus content even when it is determined that the scene is not the specific scene.
  8.  車両(A)の車線内走行を維持させる車線維持制御を実行可能な車線維持制御部(51,61)を備える前記車両のヘッドアップディスプレイ(20)によるコンテンツの表示を制御する表示制御装置であって、
     前記車線維持制御に対するドライバの信頼度が低下する特定シーンであるか否かを判定するシーン判定部(105)と、
     前記車線維持制御による予想軌跡(PT)を示す予想軌跡コンテンツ(CTp)を路面に重畳表示させ、前記特定シーンであると判定された場合と、前記特定シーンでないと判定された場合とで、前記予想軌跡コンテンツの表示態様を変更する表示制御部(109)と、
     を備える表示制御装置。
    A display control device for controlling display of content by a head-up display (20) of a vehicle (A) equipped with a lane keeping control unit (51, 61) capable of executing lane keeping control for keeping the vehicle traveling within a lane, the display control device comprising:
     a scene determination unit (105) that determines whether or not a current scene is a specific scene in which the driver's reliability in the lane keeping control decreases; and
     a display control unit (109) that superimposes and displays on the road surface a predicted locus content (CTp) indicating a predicted locus (PT) of the lane keeping control, and changes the display mode of the predicted locus content between a case where it is determined that the scene is the specific scene and a case where it is determined that the scene is not the specific scene.
  9.  前記表示制御部は、
     前記特定シーンではないと判定された場合には、前記車線における一対の境界をそれぞれ強調する一対の境界コンテンツ(CTbl,CTbr)を前記予想軌跡コンテンツとして重畳表示させ、
     前記特定シーンであると判定された場合には、前記特定シーンではないと判定された場合よりも前記車線の中央側の部分を強調する表示態様に、前記予想軌跡コンテンツを変更する請求項8に記載の表示制御装置。
    The display control unit
    When it is determined that the scene is not the specific scene, a pair of boundary contents (CTbl, CTbr) that emphasize each of the pair of boundaries in the lane are superimposed and displayed as the expected locus content.
    When it is determined that the scene is the specific scene, the expected locus content is changed to a display mode in which the central portion of the lane is emphasized more than when it is determined that the scene is not the specific scene; the display control device according to claim 8.
  10.  前記表示制御部は、
     前記特定シーンであると判定された場合には、前記特定シーンではないと判定された場合よりも一対の前記境界コンテンツを前記中央側の部分に表示させる請求項9に記載の表示制御装置。
    The display control unit
    The display control device according to claim 9, wherein, when it is determined that the scene is the specific scene, the pair of boundary contents is displayed closer to the central side than when it is determined that the scene is not the specific scene.
  11.  前記表示制御部は、
     前記特定シーンであると判定された場合には、一対の前記境界コンテンツよりも前記中央側の部分に、新たな前記予想軌跡コンテンツを追加して表示させる請求項9に記載の表示制御装置。
    The display control unit
    The display control device according to claim 9, wherein when it is determined to be the specific scene, a new expected locus content is added and displayed on a portion on the center side of the pair of boundary contents.
  12.  前記表示制御部は、
     前記車線がカーブしている場合、一対の前記境界コンテンツのうちカーブの外周側に重畳する一方を、内周側に重畳する他方よりも前記中央側に表示させる請求項9から請求項11のいずれか1項に記載の表示制御装置。
    The display control unit
    The display control device according to any one of claims 9 to 11, wherein, when the lane is curved, the one of the pair of boundary contents superimposed on the outer peripheral side of the curve is displayed closer to the central side than the other superimposed on the inner peripheral side.
  13.  前記表示制御部は、
     前記特定シーンではないと判定された場合には、前記車線の中央側の部分を強調する中央コンテンツ(CTc)を前記予想軌跡コンテンツとして重畳表示させ、
     前記特定シーンであると判定された場合には、前記特定シーンではないと判定された場合よりも、前記車線における一対の境界を強調する表示態様に、前記予想軌跡コンテンツを変更する請求項8に記載の表示制御装置。
    The display control unit
    When it is determined that the scene is not the specific scene, the central content (CTc) that emphasizes the central portion of the lane is superimposed and displayed as the expected locus content.
    When it is determined that the scene is the specific scene, the expected locus content is changed to a display mode that emphasizes the pair of boundaries of the lane more than when it is determined that the scene is not the specific scene; the display control device according to claim 8.
  14.  前記表示制御部は、
     前記特定シーンであると判定された場合には、前記中央コンテンツを、一対の前記境界をそれぞれ強調する一対の境界コンテンツ(CTbl,CTbr)に変更する請求項13に記載の表示制御装置。
    The display control unit
    The display control device according to claim 13, wherein when it is determined that the scene is the specific scene, the central content is changed to a pair of boundary contents (CTbl, CTbr) that emphasize each of the pair of the boundaries.
  15.  前記表示制御部は、
     前記特定シーンであると判定された場合には、前記中央コンテンツよりも一対の前記境界側に、前記予想軌跡コンテンツを追加して表示させる請求項13に記載の表示制御装置。
    The display control unit
    The display control device according to claim 13, wherein, when it is determined that the scene is the specific scene, the expected locus content is additionally displayed closer to the pair of boundaries than the central content.
  16.  前記表示制御部は、
     前記車線がカーブしている場合、カーブの外周側に立設される壁コンテンツ(CTw)を表示させる請求項1から請求項15のいずれか1項に記載の表示制御装置。
    The display control unit
    The display control device according to any one of claims 1 to 15, wherein when the lane is curved, the wall content (CTw) erected on the outer peripheral side of the curve is displayed.
  17.  前記表示制御部は、
     前記ヘッドアップディスプレイとは異なる車載表示器(23)に、前記車線維持制御の実行を示す実行コンテンツ(CTst)を、前記予想軌跡コンテンツとは異なる表示態様で表示させる請求項1から請求項16のいずれか1項に記載の表示制御装置。
    The display control unit
    The display control device according to any one of claims 1 to 16, wherein execution content (CTst) indicating execution of the lane keeping control is displayed on an in-vehicle display (23) different from the head-up display, in a display mode different from that of the predicted trajectory content.
  18.  前記表示制御部は、前記車線維持制御の実行において前記ドライバの周辺監視義務が有り且つ前記ドライバがステアリングホイールを把持していると判断した場合には、前記特定シーンであると判定された場合であっても、前記予想軌跡コンテンツの表示態様を、前記特定シーンでないと判定された場合の表示態様と同等とする請求項1から請求項17のいずれか1項に記載の表示制御装置。 The display control device according to any one of claims 1 to 17, wherein, when the display control unit determines that the driver has a periphery monitoring obligation in the execution of the lane keeping control and the driver is gripping the steering wheel, the display control unit makes the display mode of the predicted locus content equivalent to the display mode used when it is determined that the scene is not the specific scene, even when it is determined that the scene is the specific scene.
  19.  前記表示制御部は、前記車線維持制御の実行において前記ドライバの周辺監視義務が無く且つ予測される前記車両の挙動の大きさが許容範囲外であると判断した場合には、前記特定シーンではないと判定された場合であっても、前記予想軌跡コンテンツの表示態様を、前記特定シーンであると判定された場合の表示態様と同等とする請求項1から請求項18のいずれか1項に記載の表示制御装置。 The display control device according to any one of claims 1 to 18, wherein, when the display control unit determines that the driver has no periphery monitoring obligation in the execution of the lane keeping control and the predicted magnitude of the behavior of the vehicle is out of the permissible range, the display control unit makes the display mode of the predicted locus content equivalent to the display mode used when it is determined that the scene is the specific scene, even when it is determined that the scene is not the specific scene.
  20.  前記表示制御部は、前記特定シーンであると判定された場合において、推定される前記信頼度が低いほど、前記予想軌跡コンテンツをより強調した表示態様で表示させる請求項1から請求項19のいずれか1項に記載の表示制御装置。 The display control device according to any one of claims 1 to 19, wherein, when it is determined that the scene is the specific scene, the display control unit displays the expected locus content in a more emphasized display mode as the estimated reliability is lower.
  21.  前記表示制御部は、前記特定シーンであると判定され、且つ前記車線がカーブしている場合、前記車線の曲率が大きいほど、前記予想軌跡コンテンツをより強調した表示態様に表示させる請求項20に記載の表示制御装置。 The display control device according to claim 20, wherein, when it is determined that the scene is the specific scene and the lane is curved, the display control unit displays the expected locus content in a more emphasized display mode as the curvature of the lane is larger.
  22.  車両(A)の車線内走行を維持させる車線維持制御を実行可能な車線維持制御部(51,61)を備える前記車両のヘッドアップディスプレイ(20)によるコンテンツの表示を制御する表示制御プログラムであって、
     少なくとも1つの処理部(11)に、
     前記車線維持制御に対するドライバの信頼度が低下する特定シーンであるか否かを判定し(S20)、
     前記特定シーンであると判定された場合には、前記車線維持制御による予想軌跡(PT)を示す予想軌跡コンテンツ(CTp)を路面に重畳表示させ(S40)、
     前記特定シーンではないと判定された場合には、前記予想軌跡コンテンツを非表示とする(S60)、
     ことを含む処理を実行させる表示制御プログラム。
    A display control program for controlling display of content by a head-up display (20) of a vehicle (A) equipped with a lane keeping control unit (51, 61) capable of executing lane keeping control for keeping the vehicle traveling within a lane, the program causing at least one processing unit (11) to execute processing including:
     determining whether or not a current scene is a specific scene in which the driver's reliability in the lane keeping control decreases (S20);
     when it is determined that the scene is the specific scene, superimposing and displaying on the road surface a predicted locus content (CTp) indicating a predicted locus (PT) of the lane keeping control (S40); and
     when it is determined that the scene is not the specific scene, hiding the predicted locus content (S60).
  23.  車両(A)の車線内走行を維持させる車線維持制御を実行可能な車線維持制御部(51,61)を備える前記車両のヘッドアップディスプレイ(20)によるコンテンツの表示を制御する表示制御プログラムであって、
     少なくとも1つの処理部(11)に、
     前記車線維持制御に対するドライバの信頼度が低下する特定シーンであるか否かを判定し(S20)、
     前記車線維持制御による予想軌跡(PT)を示す予想軌跡コンテンツ(CTp)を路面に重畳表示させ、前記特定シーンであると判定された場合と、前記特定シーンでないと判定された場合とで、前記予想軌跡コンテンツの表示態様を変更する(S15,S45)、
     ことを含む処理を実行させる表示制御プログラム。
    A display control program for controlling display of content by a head-up display (20) of a vehicle (A) equipped with a lane keeping control unit (51, 61) capable of executing lane keeping control for keeping the vehicle traveling within a lane, the program causing at least one processing unit (11) to execute processing including:
     determining whether or not a current scene is a specific scene in which the driver's reliability in the lane keeping control decreases (S20); and
     superimposing and displaying on the road surface a predicted locus content (CTp) indicating a predicted locus (PT) of the lane keeping control, and changing the display mode of the predicted locus content between a case where it is determined that the scene is the specific scene and a case where it is determined that the scene is not the specific scene (S15, S45).
PCT/JP2020/036371 2019-10-02 2020-09-25 Display control device and display control program WO2021065735A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019-182434 2019-10-02
JP2019182434 2019-10-02
JP2020-145431 2020-08-31
JP2020145431A JP7111137B2 (en) 2019-10-02 2020-08-31 Display controller and display control program

Publications (1)

Publication Number Publication Date
WO2021065735A1 true WO2021065735A1 (en) 2021-04-08

Family

ID=75338221

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/036371 WO2021065735A1 (en) 2019-10-02 2020-09-25 Display control device and display control program

Country Status (1)

Country Link
WO (1) WO2021065735A1 (en)

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN113788015A (en) * 2021-08-04 2021-12-14 杭州飞步科技有限公司 Method, device and equipment for determining vehicle track and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2005170323A (en) * 2003-12-15 2005-06-30 Denso Corp Runway profile displaying device
JP2016145783A (en) * 2015-02-09 2016-08-12 株式会社デンソー Vehicle display control device and vehicle display control method
JP2017094922A (en) * 2015-11-24 2017-06-01 アイシン精機株式会社 Periphery monitoring device
JP2018127204A (en) * 2017-02-08 2018-08-16 株式会社デンソー Display control unit for vehicle
JP2018140714A (en) * 2017-02-28 2018-09-13 株式会社デンソー Display control device and display control method
JP2019500658A (en) * 2015-09-17 2019-01-10 ソニー株式会社 System and method for assisting driving to safely catch up with a vehicle
JP2019163037A (en) * 2014-12-01 2019-09-26 株式会社デンソー Image processing apparatus



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20870803; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20870803; Country of ref document: EP; Kind code of ref document: A1)