WO2021065735A1 - Display control device and display control program - Google Patents

Display control device and display control program

Info

Publication number
WO2021065735A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
scene
display control
content
lane
Prior art date
Application number
PCT/JP2020/036371
Other languages
English (en)
Japanese (ja)
Inventor
清水 泰博
明彦 柳生
大祐 竹森
一輝 小島
しおり 間根山
猛 羽藤
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2020145431A (externally claimed priority; published as JP7111137B2)
Application filed by 株式会社デンソー
Publication of WO2021065735A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10Path keeping
    • B60W30/12Lane keeping
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • The disclosure in this specification relates to a technique for controlling the display of content by a head-up display.
  • Patent Document 1 discloses a vehicle display device that superimposes content on the scene ahead by means of a head-up display. This vehicle display device superimposes a guidance display, indicating the route from the traveling position of the own vehicle to a guidance point, on the driver's forward view.
  • Patent Document 1, however, does not describe a display that reduces the driver's anxiety in such a scene.
  • The purpose of this disclosure is to provide a display control device and a display control program capable of reducing driver anxiety.
  • One of the disclosed display control devices controls the display of content by a head-up display of a vehicle provided with a lane keeping control unit capable of executing lane keeping control that keeps the vehicle traveling in its lane.
  • The device includes a scene determination unit that determines whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control is lowered, and a display control unit that superimposes predicted trajectory content, indicating the trajectory predicted by the lane keeping control, on the road surface when a specific scene is determined, and hides the predicted trajectory content when a specific scene is not determined.
  • One of the disclosed display control programs controls the display of content by the head-up display of a vehicle provided with a lane keeping control unit capable of executing lane keeping control that keeps the vehicle traveling in its lane.
  • The program causes at least one processing unit to execute a process including: determining whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control is lowered;
  • superimposing predicted trajectory content, indicating the trajectory predicted by the lane keeping control, on the road surface when a specific scene is determined; and hiding the predicted trajectory content when a specific scene is not determined.
  • According to these aspects, the predicted trajectory content is displayed when the scene is determined to be a specific scene in which the driver's confidence in the lane keeping control is lowered. The driver who sees the predicted trajectory content can therefore easily form the image that travel within the lane will be maintained even in the specific scene. It is thus possible to provide a display control device and a display control program capable of reducing driver anxiety.
  • Another disclosed display control device likewise controls the display of content by a head-up display of a vehicle provided with a lane keeping control unit capable of executing lane keeping control that keeps the vehicle traveling in its lane.
  • The device includes a scene determination unit that determines whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control is lowered, and a display control unit that superimposes the predicted trajectory content, indicating the trajectory predicted by the lane keeping control, on the road surface and changes the display mode of the predicted trajectory content depending on whether a specific scene is determined or not.
  • Another disclosed display control program controls the display of content by the head-up display of a vehicle provided with a lane keeping control unit capable of executing lane keeping control that keeps the vehicle traveling in its lane.
  • The program causes at least one processing unit to execute a process including: determining whether or not the current scene is a specific scene in which the driver's confidence in the lane keeping control is lowered; superimposing the predicted trajectory content, indicating the trajectory predicted by the lane keeping control, on the road surface;
  • and changing the display mode of the predicted trajectory content depending on whether a specific scene is determined or not.
  • According to these aspects, the display mode of the predicted trajectory content is changed depending on whether or not the scene is a specific scene in which the driver's confidence in the lane keeping control is lowered. The driver who sees the predicted trajectory content with its changed display mode can infer that the lane keeping control is being executed with the specific scene already grasped on the vehicle side, and can therefore easily form the image that travel within the lane will be maintained even in the specific scene. It is thus possible to provide a display control device and a display control program capable of reducing driver anxiety.
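The two disclosed aspects above (show/hide in the first, always-show with a mode change in the second) can be sketched as a single decision function. This is an illustrative sketch only; the names `trajectory_display_mode`, `DisplayMode`, and the flag `always_display` are assumptions, not identifiers from the patent.

```python
from enum import Enum

class DisplayMode(Enum):
    HIDDEN = "hidden"          # predicted trajectory content not displayed
    NORMAL = "normal"          # superimposed on the road surface, normal mode
    EMPHASIZED = "emphasized"  # superimposed with a changed (emphasized) mode

def trajectory_display_mode(is_specific_scene: bool, always_display: bool) -> DisplayMode:
    """Select how the predicted trajectory content is shown."""
    if always_display:
        # second aspect: content stays superimposed, only its display mode changes
        return DisplayMode.EMPHASIZED if is_specific_scene else DisplayMode.NORMAL
    # first aspect: content is shown only while a specific scene is determined
    return DisplayMode.NORMAL if is_specific_scene else DisplayMode.HIDDEN
```

Either behavior reduces to the same inputs: the scene determination result and a configuration choice between the two aspects.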
  • FIG. 1 is an overall view of the in-vehicle network, including the HCU, according to the first embodiment of the present disclosure. FIG. 2 shows an example of the head-up display mounted on the vehicle. FIG. 3 shows an example of the schematic configuration of the HCU. FIG. 4 visualizes an example of the display-layout simulation carried out in the display generation unit. FIG. 5 shows an example of the LTA display on the HUD. FIG. 6 shows an example of the LTA display on the meter display. FIG. 7 is a flowchart showing an example of the display control method executed by the HCU. FIG. 8 shows an example of the LTA display in the second embodiment.
  • The functions of the display control device according to the first embodiment of the present disclosure are realized by the HCU (Human Machine Interface Control Unit) 100 shown in FIGS. 1 to 3.
  • The HCU 100, together with a head-up display (hereinafter, "HUD") 20 and the like, constitutes the HMI (Human Machine Interface) system 10 of the vehicle A.
  • The HMI system 10 further includes an operation device 26, a DSM (Driver Status Monitor) 27, and the like.
  • The HMI system 10 has an input interface function that accepts user operations by the driver, an occupant of the vehicle A, and an output interface function that presents information to the occupants.
  • The HMI system 10 is communicably connected to the communication bus 99 of the in-vehicle network mounted on the vehicle A.
  • The HMI system 10 is one of a plurality of nodes provided in the in-vehicle network.
  • A peripheral monitoring sensor 30, a locator 40, a DCM 49, a driving support ECU 50, an automatic driving ECU 60, and the like are also connected as nodes to the communication bus 99. The nodes connected to the communication bus 99 can communicate with each other.
  • The peripheral monitoring sensor 30 is an autonomous sensor that monitors the surrounding environment of the vehicle A. Within its detection range around the own vehicle, the peripheral monitoring sensor 30 can detect moving objects such as pedestrians, cyclists, non-human animals, and other vehicles, as well as stationary objects such as fallen objects, guardrails, curbs, road markings, traveling lane markings, and roadside structures.
  • The peripheral monitoring sensor 30 provides detection information on the objects detected around the vehicle A to the driving support ECU 50 and the like through the communication bus 99.
  • The peripheral monitoring sensor 30 has a front camera 31 and a millimeter wave radar 32 as detection configurations for object detection.
  • The front camera 31 outputs, as detection information, at least one of the imaging data obtained by photographing the front range of the vehicle A and the analysis result of that imaging data.
  • A plurality of millimeter wave radars 32 are arranged, for example, on the front and rear bumpers of the vehicle A at intervals from each other.
  • The millimeter wave radar 32 emits millimeter waves or quasi-millimeter waves toward the front range, front side range, rear range, rear side range, and the like of the vehicle A.
  • The millimeter wave radar 32 generates detection information by receiving the reflected waves returned from moving objects, stationary objects, and the like.
  • Other detection configurations, such as LiDAR and sonar, may also be included in the peripheral monitoring sensor 30.
  • The locator 40 generates highly accurate position information on the vehicle A and the like by compound positioning that combines a plurality of acquired pieces of information.
  • The locator 40 can identify, for example, the lane in which the vehicle A travels among a plurality of lanes.
  • The locator 40 includes a GNSS (Global Navigation Satellite System) receiver 41, an inertial sensor 42, a high-precision map database (hereinafter, "high-precision map DB") 43, and a locator ECU 44.
  • The GNSS receiver 41 receives positioning signals transmitted from a plurality of artificial satellites (positioning satellites).
  • The GNSS receiver 41 can receive positioning signals from the positioning satellites of at least one satellite positioning system among GPS, GLONASS, Galileo, IRNSS, QZSS, BeiDou, and the like.
  • The inertial sensor 42 includes, for example, a gyro sensor and an acceleration sensor.
  • The high-precision map DB 43 consists mainly of non-volatile memory and stores map data with higher accuracy (hereinafter, "high-precision map data") than the data used for normal navigation.
  • The high-precision map data holds detailed information at least for the height (z) direction.
  • The high-precision map data contains information usable for advanced driving support and autonomous driving, such as three-dimensional road shape information (road structure information), lane number information, and information indicating the direction of travel permitted for each lane.
  • The locator ECU 44 is a control unit mainly composed of a microcomputer provided with a processor, RAM, a storage unit, an input/output interface, and a bus connecting them.
  • The locator ECU 44 combines the positioning signals received by the GNSS receiver 41, the measurement results of the inertial sensor 42, the vehicle speed information output to the communication bus 99, and the like, and sequentially determines the own-vehicle position, the traveling direction, and the like of the vehicle A.
  • The locator ECU 44 provides the position information and direction information of the vehicle A based on the positioning results to the driving support ECU 50, the automatic driving ECU 60, the HCU 100, and the like through the communication bus 99. Further, the locator ECU 44 provides high-precision map data around the own-vehicle position to the HCU 100, the driving support ECU 50, and the like via the communication bus 99.
  • The vehicle speed information indicates the current traveling speed of the vehicle A and is generated based on the detection signals of the wheel speed sensors provided in the hub portion of each wheel of the vehicle A.
  • The node (ECU) that generates the vehicle speed information and outputs it to the communication bus 99 may be changed as appropriate.
  • For example, a brake control ECU that controls the distribution of braking force to each wheel, or an in-vehicle ECU such as the HCU 100, may be electrically connected to the wheel speed sensor of each wheel to generate the vehicle speed information and output it to the communication bus 99.
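As a rough illustration of how vehicle speed information could be derived from per-wheel sensor signals, the sketch below averages four wheel-speed pulse frequencies into one speed. The wheel radius and pulses-per-revolution constants are assumptions for illustration; the patent does not specify them.

```python
import math

WHEEL_RADIUS_M = 0.32   # assumed tire rolling radius (not given in the text)
PULSES_PER_REV = 48     # assumed sensor pulses per wheel revolution

def vehicle_speed_kmh(wheel_pulse_hz: list) -> float:
    """Average the four wheel-speed sensor pulse frequencies into one speed in km/h."""
    rev_per_s = [f / PULSES_PER_REV for f in wheel_pulse_hz]
    # circumference (2*pi*r) times revolutions per second gives m/s per wheel
    speeds_mps = [2.0 * math.pi * WHEEL_RADIUS_M * r for r in rev_per_s]
    return sum(speeds_mps) / len(speeds_mps) * 3.6  # m/s -> km/h
```

In practice the generating ECU would also filter the signals and handle wheel slip, which this sketch omits.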
  • The DCM (Data Communication Module) 49 is a communication module mounted on the vehicle A.
  • The DCM 49 transmits and receives radio waves to and from base stations around the vehicle A by wireless communication in accordance with communication standards such as LTE (Long Term Evolution) and 5G.
  • The driving support ECU 50 and the automatic driving ECU 60 each mainly comprise a computer equipped with a processor, RAM, a storage unit, an input/output interface, and a bus connecting them.
  • The driving support ECU 50 has a driving support function that assists the driving operation of the driver.
  • The automatic driving ECU 60 has an automatic driving function capable of acting in place of the driver's driving operation.
  • The driving support ECU 50 enables partial automatic driving control (advanced driving support) of level 2 or lower.
  • The automatic driving ECU 60 enables automatic driving control of level 3 or higher.
  • In other words, the driving support ECU 50 executes automatic driving in which the driver is required to monitor the surroundings, while the automatic driving ECU 60 executes automatic driving in which the driver is not required to monitor the surroundings.
  • The driving support ECU 50 and the automatic driving ECU 60 each recognize the traveling environment around the vehicle A for the driving control described later, based on the detection information acquired from the peripheral monitoring sensor 30.
  • Each of the ECUs 50 and 60 provides the HCU 100 with the analysis results of the detection information obtained in recognizing the traveling environment, as analyzed detection information.
  • As boundary information regarding the boundaries of the lane in which the vehicle A is currently traveling (hereinafter, "own lane Lns", see FIG. 5), each of the ECUs 50 and 60 can provide the HCU 100 with information indicating the relative positions and shapes of the left and right lane markings LL, LR or the road edges.
  • Here, the left-right direction coincides with the width direction of the vehicle A at rest on a horizontal plane and is set with reference to the traveling direction of the vehicle A. Further, each of the ECUs 50 and 60 analyzes information on the weather conditions in the traveling area and provides it to the HCU 100 as weather information.
  • The weather information includes at least information on whether or not the weather causes poor visibility, such as rain, snow, or fog.
  • The weather information is obtained, for example, by image processing of the captured images of the front camera 31.
  • The driving support ECU 50 has a plurality of functional units that realize advanced driving support through the execution of programs by its processor. Specifically, the driving support ECU 50 has an ACC (Adaptive Cruise Control) control unit and a lane keeping control unit 51.
  • The ACC control unit is a functional unit that realizes the ACC function of driving the vehicle A at a constant target vehicle speed or following the vehicle ahead while maintaining the inter-vehicle distance.
  • The lane keeping control unit 51 is a functional unit that realizes the LTA (Lane Tracing Assist) function of keeping the vehicle A traveling within the lane.
  • LTA is also referred to as LTC (Lane Trace Control).
  • The LTA function is an example of a lane keeping control function.
  • The lane keeping control unit 51 controls the steering angle of the steering wheel of the vehicle A based on the boundary information extracted from the detection data of the peripheral monitoring sensor 30.
  • The lane keeping control unit 51 generates a planned traveling line with a shape along the own lane Lns so that the vehicle A travels in the center of the own lane Lns.
  • The lane keeping control unit 51 cooperates with the ACC control unit to perform driving control (hereinafter, "lane keeping control") that drives the vehicle A within the own lane Lns according to the planned traveling line.
  • The automatic driving ECU 60 has a plurality of functional units that realize autonomous traveling of the vehicle A through the execution of programs by its processor.
  • The automatic driving ECU 60 generates a planned traveling line based on the high-precision map data acquired from the locator 40, the own-vehicle position information, and the extracted boundary information.
  • The automatic driving ECU 60 executes acceleration/deceleration control, steering control, and the like so that the vehicle A travels along the planned traveling line.
  • The functional unit of the automatic driving ECU 60 that performs substantially the same lane keeping control as the lane keeping control unit 51 of the driving support ECU 50, that is, the functional unit that causes the vehicle A to travel within the own lane Lns, will be referred to as the lane keeping control unit 61 for convenience. The user can use only one of the lane keeping control units 51 and 61 at a time.
  • The lane keeping control units 51 and 61 may generate the planned traveling line based on the traveling locus of the preceding vehicle when the boundary information cannot be acquired. For example, when visibility is poor due to fog or the like and the peripheral monitoring sensor 30 cannot detect the lane boundaries but can detect the preceding vehicle, the lane keeping control units 51 and 61 generate the planned traveling line from the traveling locus based on the detection information of the preceding vehicle, and control the traveling of the vehicle A along that line.
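The fallback described above can be sketched as a source-selection step: take the lane center when both markings are detected, otherwise fall back to the preceding vehicle's locus. This is a minimal 2-D sketch; the function and type names are assumptions, not from the patent.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]  # (lateral x, longitudinal y) in a vehicle-fixed frame

def planned_traveling_line(left_line: Optional[List[Point]],
                           right_line: Optional[List[Point]],
                           preceding_locus: Optional[List[Point]]) -> Optional[List[Point]]:
    """Normal case: the center of the detected left/right lane markings.
    Fallback: the preceding vehicle's traveling locus when boundaries are lost."""
    if left_line and right_line:
        # midpoint of corresponding left/right marking samples
        return [((xl + xr) / 2.0, (yl + yr) / 2.0)
                for (xl, yl), (xr, yr) in zip(left_line, right_line)]
    if preceding_locus:
        return preceding_locus
    return None  # neither boundaries nor a preceding vehicle: control cannot run
```

A real implementation would additionally smooth the line and check the locus for plausibility before using it.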
  • The lane keeping control information includes at least status information indicating the operating state of the lane keeping control and line shape information indicating the shape of the planned traveling line.
  • The status information indicates whether the lane keeping control function is in the off state, the standby state, or the execution state.
  • The standby state is a state in which the lane keeping control is activated but motion control is not being performed.
  • The execution state is a state in which motion control is being performed based on the establishment of an execution condition.
  • The execution condition is, for example, that the lane markings on both sides can be recognized.
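The three status values and the example execution condition above can be modeled as a small state selection. The names `LtaStatus` and `lta_status` are illustrative assumptions.

```python
from enum import Enum, auto

class LtaStatus(Enum):
    OFF = auto()        # function switched off
    STANDBY = auto()    # activated, but the execution condition does not hold
    EXECUTING = auto()  # motion control running

def lta_status(activated: bool, both_markings_recognized: bool) -> LtaStatus:
    """Derive the status information from activation and the example
    execution condition (both lane markings recognized)."""
    if not activated:
        return LtaStatus.OFF
    return LtaStatus.EXECUTING if both_markings_recognized else LtaStatus.STANDBY
```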
  • The line shape information includes at least the three-dimensional coordinates of a plurality of specific points that define the shape of the planned traveling line, the lengths of the virtual lines connecting the specific points, the radii of curvature, and the like.
  • Alternatively, the line shape information may consist of a large amount of coordinate information.
  • In that case, each piece of coordinate information indicates a point lined up on the planned traveling line at predetermined intervals. Even with line shape information in such a data format, the HCU 100 can restore the shape of the planned traveling line from the large amount of coordinate information.
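One plausible way for the HCU to restore a line shape from such sampled coordinates is linear interpolation over arc length, as in the 2-D sketch below (the patent's points are three-dimensional; the function name and resampling scheme are assumptions).

```python
import math

def restore_line(coords, step):
    """Resample a polyline of waypoints every `step` meters of arc length
    by linear interpolation between neighboring samples (2-D sketch)."""
    if len(coords) < 2:
        return list(coords)
    # cumulative arc length along the sampled points
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(coords, coords[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total, out, s, i = dists[-1], [], 0.0, 0
    while s <= total:
        while dists[i + 1] < s:   # advance to the segment containing arc length s
            i += 1
        seg = dists[i + 1] - dists[i]
        t = 0.0 if seg == 0.0 else (s - dists[i]) / seg
        (x0, y0), (x1, y1) = coords[i], coords[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        s += step
    return out
```

A production version might fit a spline instead of interpolating linearly, but the inputs and outputs would be the same kind of point list.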
  • The operation device 26 is an input unit that accepts user operations by the driver and others.
  • User operations for switching each function between activation and stop, for setting the inter-vehicle distance, and the like are input to the operation device 26.
  • The operation device 26 includes a steering switch provided on the spoke portion of the steering wheel, an operation lever provided on the steering column portion 8, a voice input device that detects the driver's utterances, and the like.
  • The DSM 27 includes a near-infrared light source, a near-infrared camera, and a control unit that controls them.
  • The DSM 27 is installed, for example, on the upper surface of the steering column portion 8 or the upper surface of the instrument panel 9, in a posture in which the near-infrared camera faces the headrest portion of the driver's seat.
  • The DSM 27 uses the near-infrared camera to photograph the driver's head as it is irradiated with near-infrared light from the near-infrared light source.
  • The images captured by the near-infrared camera are analyzed by the control unit.
  • The control unit extracts information such as the position of the eye point EP and the line-of-sight direction from the captured images, and sequentially outputs the extracted state information to the HCU 100.
  • The HUD 20 is mounted on the vehicle A as one of a plurality of in-vehicle display devices, together with the meter display 23, a center information display, and the like.
  • The HUD 20 is electrically connected to the HCU 100 and sequentially acquires the video data generated by the HCU 100. Based on the video data, the HUD 20 uses the virtual image Vi to present the driver with various information related to the vehicle A, such as route information, sign information, and control information of each in-vehicle function.
  • The HUD 20 is housed in a storage space inside the instrument panel 9 below the windshield WS.
  • The HUD 20 projects the light formed into the virtual image Vi toward the projection range PA of the windshield WS.
  • The light projected onto the windshield WS is reflected toward the driver's seat within the projection range PA and perceived by the driver.
  • The driver visually recognizes a display in which the virtual image Vi is superimposed on the foreground seen through the projection range PA.
  • The HUD 20 includes a projector 21 and a magnifying optical system 22.
  • The projector 21 has an LCD (Liquid Crystal Display) panel and a backlight.
  • The projector 21 is fixed to the housing of the HUD 20 with the display surface of the LCD panel facing the magnifying optical system 22.
  • The projector 21 displays each frame image of the video data on the display surface of the LCD panel and transilluminates the display surface with the backlight, thereby emitting the light to be formed into the virtual image Vi toward the magnifying optical system 22.
  • The magnifying optical system 22 includes at least one concave mirror in which a metal such as aluminum is vapor-deposited on the surface of a base material made of synthetic resin or glass.
  • The magnifying optical system 22 reflects the light emitted from the projector 21, spreading it as it projects it onto the projection range PA above.
  • An angle of view VA is set for the HUD 20. Where the virtual range in space in which the HUD 20 can form the virtual image Vi is taken as the image plane IS, the angle of view VA is the viewing angle defined by the virtual lines connecting the driver's eye point EP and the outer edges of the image plane IS.
  • The angle of view VA is the angular range within which the driver can visually recognize the virtual image Vi when viewed from the eye point EP. In the HUD 20, the horizontal angle of view is larger than the vertical angle of view. When viewed from the eye point EP, the front range that overlaps the image plane IS is the range within the angle of view VA.
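The angle of view VA defined above is just the angle subtended at the eye point EP by two opposite outer edges of the image plane IS. A 2-D section of that geometry can be computed as follows; the coordinates and function name are illustrative assumptions.

```python
import math

def viewing_angle_deg(eye_point, edge_a, edge_b):
    """Angle subtended at the eye point by two opposite outer edges of the
    image plane (2-D vertical- or horizontal-section sketch, meters)."""
    ax, ay = edge_a[0] - eye_point[0], edge_a[1] - eye_point[1]
    bx, by = edge_b[0] - eye_point[0], edge_b[1] - eye_point[1]
    dot = ax * bx + ay * by          # cosine term between the two sight lines
    cross = ax * by - ay * bx        # sine term between the two sight lines
    return abs(math.degrees(math.atan2(cross, dot)))
```

Evaluating this once for the horizontal edge pair and once for the vertical pair gives the horizontal and vertical angles of view, respectively.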
  • The HUD 20 displays superimposed content CTs (see FIGS. 5 and 6) and non-superimposed content as the virtual image Vi.
  • The superimposed content CTs are AR display objects used for augmented reality (hereinafter, "AR") display.
  • The display position of the superimposed content CTs is associated with a specific superimposition target existing in the foreground, such as a specific position on the road surface, a preceding vehicle, a pedestrian, or a road sign.
  • The superimposed content CTs are displayed superimposed on their specific superimposition target in the foreground, and can move in the driver's view following the target so as to appear relatively fixed to it.
  • The shape of the superimposed content CTs may be continuously updated at a predetermined cycle according to the relative position and shape of the superimposition target.
  • The superimposed content CTs are displayed in a posture closer to horizontal than the non-superimposed content, and have, for example, a display shape extended in the depth direction (traveling direction) as seen from the driver.
  • The non-superimposed content comprises the non-AR display objects, excluding the superimposed content CTs, among the display objects superimposed on the foreground. Unlike the superimposed content CTs, the non-superimposed content is displayed superimposed on the foreground without a specified superimposition target.
  • The non-superimposed content is displayed at a fixed position in the projection range PA, so that it appears relatively fixed to vehicle structures such as the windshield WS.
  • The meter display 23 is one of the plurality of in-vehicle displays, a so-called combination meter display.
  • The meter display 23 is an image display such as a liquid crystal display or an organic EL display.
  • The meter display 23 is installed in front of the driver's seat on the instrument panel 9, with its display screen directed toward the headrest portion of the driver's seat.
  • The meter display 23 is electrically connected to the HCU 100 and sequentially acquires the video data generated by the HCU 100.
  • The meter display 23 displays the content corresponding to the acquired video data on its display screen. For example, the meter display 23 displays a status image CTst (described later) showing the status information of the LTA function.
  • The HCU 100 is an electronic control device that integrally controls the display by the plurality of in-vehicle display devices, including the HUD 20, in the HMI system 10.
  • The HCU 100 mainly comprises a computer including a processing unit 11, a RAM 12, a storage unit 13, an input/output interface 14, and a bus connecting them.
  • The processing unit 11 is hardware for arithmetic processing, combined with the RAM 12.
  • The processing unit 11 includes at least one arithmetic core such as a CPU (Central Processing Unit).
  • The RAM 12 may include a video RAM for video generation.
  • By accessing the RAM 12, the processing unit 11 executes various processes that realize the functions of the functional units described later.
  • The storage unit 13 includes a non-volatile storage medium.
  • Various programs (a display control program and the like) executed by the processing unit 11 are stored in the storage unit 13.
  • The HCU 100 shown in FIGS. 1 to 3 has a plurality of functional units for functioning as a control unit that controls the content display by the HUD 20, through the execution of the display control program stored in the storage unit 13 by the processing unit 11. Specifically, functional units such as a driver information acquisition unit 101, a locator information acquisition unit 102, an external world information acquisition unit 103, a control information acquisition unit 104, a scene determination unit 105, and a display generation unit 109 are constructed in the HCU 100.
  • the driver information acquisition unit 101 identifies the position and line-of-sight direction of the eye point EP of the driver seated in the driver's seat based on the state information acquired from the DSM 27, and acquires it as driver information.
  • the driver information acquisition unit 101 generates three-dimensional coordinates (hereinafter, “eye point coordinates”) indicating the position of the eye point EP, and sequentially provides the generated eye point coordinates to the display generation unit 109.
  • the locator information acquisition unit 102 acquires the latest position information and direction information about the vehicle A from the locator ECU 44 as own vehicle position information. In addition, the locator information acquisition unit 102 acquires high-precision map data around the position of the own vehicle from the locator ECU 44. The locator information acquisition unit 102 sequentially provides the acquired vehicle position information and high-precision map data to the scene determination unit 105 and the display generation unit 109.
  • the external world information acquisition unit 103 acquires, from the driving support ECU 50 or the automatic driving ECU 60, detection information resulting from analysis of the peripheral range of the vehicle A. For example, the outside world information acquisition unit 103 acquires, as detection information, boundary information indicating the relative positions of the left and right lane markings LL, LR or the road edge of the own lane Lns. In addition, the outside world information acquisition unit 103 acquires weather information in the traveling area as detection information. The external world information acquisition unit 103 sequentially provides the acquired detection information to the scene determination unit 105 and the display generation unit 109. The external world information acquisition unit 103 may acquire the imaging data of the front camera 31 as the detection information, instead of the analysis results acquired from the driving support ECU 50 or the automatic driving ECU 60.
  • the control information acquisition unit 104 acquires lane maintenance control information from the lane maintenance control units 51 and 61.
  • the lane keeping control information includes status information of the LTA function, line shape information, and the like.
  • the control information acquisition unit 104 sequentially provides the acquired lane keeping control information to the display generation unit 109.
  • the scene determination unit 105 determines whether or not the current driving scene is a specific scene based on the information acquired from the locator information acquisition unit 102 and the outside world information acquisition unit 103.
  • the specific scene is a scene in which the driver's trust in the lane keeping control is lowered.
  • the specific scene is a scene that can cause the driver to feel anxious about the vehicle A coming off the own lane Lns.
  • the specific scene includes a scene in which the difficulty of traveling along the own lane Lns is relatively high.
  • a curve driving scene, in which the vehicle travels on a curved road, can cause the driver anxiety that the vehicle will fail to turn completely and deviate from the curved road, and is therefore included in the specific scenes.
  • the specific scenes include a scene that may raise a suspicion that the lane keeping control units 51 and 61 are not correctly recognizing the own lane Lns.
  • a scene with poor visibility, such as bad weather (rain, fog, snow) or nighttime, can raise the above-mentioned suspicion because the lane markings LL and LR serving as the boundaries of the own lane Lns become difficult to see, and is therefore included in the specific scenes. Further, the specific scenes include a scene in which the concern when the vehicle A deviates from the own lane Lns can be relatively large. For example, a cliff driving scene, in which the vehicle travels on a road along a cliff, is included in the specific scenes.
  • the scene determination unit 105 determines whether or not the scene is a curve driving scene based on high-precision map data. Specifically, the scene determination unit 105 determines that the scene is a curve traveling scene when it can be determined that the vehicle A is traveling on a curved road based on the curvature of the road or the like. The scene determination unit 105 determines whether or not the scene has poor visibility based on the detection information. Specifically, the scene determination unit 105 determines that the scene has poor visibility when it is determined that the weather is bad based on the analysis result of the captured image of the front camera 31. Further, the scene determination unit 105 determines that the scene has poor visibility when the current time is nighttime, based on the clock function of the HCU 100 or the like.
  • the scene determination unit 105 may determine that the scene has poor visibility when it is determined that there is backlight based on the current time, the traveling direction of the vehicle A, and the like. Further, the scene determination unit 105 determines whether or not the scene is a cliff running scene based on the high-precision map data. Specifically, the scene determination unit 105 determines that the scene is a cliff travel scene when the terrain on the shoulder of the travel path is classified as a cliff.
  • the scene determination unit 105 determines whether or not the current driving scene corresponds to any one of the plurality of specific scenes described above. Alternatively, the scene determination unit 105 may determine only one specific scene. The scene determination unit 105 sequentially provides the determination result to the display generation unit 109.
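The scene determination described above can be summarized as a simple predicate. The following sketch is illustrative only: the curvature threshold, field names, and the "cliff" terrain class string are assumptions, not values taken from this disclosure.

```python
from dataclasses import dataclass

# Assumed threshold: treat a curve radius below 500 m as a curved road.
CURVE_CURVATURE_MIN = 1.0 / 500.0

@dataclass
class DrivingContext:
    road_curvature: float   # 1/m, from the high-precision map data
    weather_is_bad: bool    # from analysis of the front camera 31 image
    is_nighttime: bool      # from the clock function of the HCU 100
    shoulder_terrain: str   # terrain class of the road shoulder, from map data

def is_specific_scene(ctx: DrivingContext) -> bool:
    """True when the current driving scene corresponds to any of the
    specific scenes: curve driving, poor visibility, or cliff driving."""
    curve_scene = abs(ctx.road_curvature) >= CURVE_CURVATURE_MIN
    poor_visibility = ctx.weather_is_bad or ctx.is_nighttime
    cliff_scene = ctx.shoulder_terrain == "cliff"
    return curve_scene or poor_visibility or cliff_scene
```

As in the text, determining against several scene types at once or against only a single scene type are both possible by restricting which conditions are evaluated.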
  • the display generation unit 109 includes a virtual layout function that simulates the display layout of superimposed content CTs (see FIGS. 4 and 5) based on various acquired information, and a content selection function that selects content to be used for information presentation.
  • the display generation unit 109 has a generation function for generating video data to be sequentially output to the HUD 20 based on the information provided by the virtual layout function and the content selection function.
  • the display generation unit 109 is an example of a display control unit.
  • the display generation unit 109 reproduces the current driving environment of the vehicle A in the virtual space based on the own vehicle position information, high-precision map data, detection information, etc. by executing the virtual layout function. More specifically, as shown in FIG. 5, the display generation unit 109 sets the own vehicle object AO at a reference position in the virtual three-dimensional space. The display generation unit 109 maps the road model of the shape indicated by the map data in the three-dimensional space in association with the own vehicle object AO based on the own vehicle position information. The display generation unit 109 sets the virtual left side marking line VLL and the virtual right side marking line VLR corresponding to the left side marking line LL and the right side marking line LR, respectively, on the virtual road surface based on the boundary information. The display generation unit 109 sets the planned travel line generated by the lane keeping control units 51 and 61 on the virtual road surface as the predicted locus PT.
  • the display generation unit 109 sets the virtual camera position CP and the superimposition range SA in association with the own vehicle object AO.
  • the virtual camera position CP is a virtual position corresponding to the driver's eye point EP.
  • the display generation unit 109 sequentially corrects the virtual camera position CP with respect to the own vehicle object AO based on the latest eye point coordinates acquired by the driver information acquisition unit 101.
  • the superimposition range SA is a range in which the virtual image Vi can be superposed and displayed. Based on the virtual camera position CP and the outer edge position (coordinates) information of the projection range PA stored in advance in the storage unit 13 (see FIG. 1) or the like, the display generation unit 109 sets, as the superimposition range SA, the front range inside the imaging plane IS when looking forward from the virtual camera position CP.
  • the superimposition range SA corresponds to the angle of view VA of HUD20.
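As a rough illustration of the relationship between the virtual camera position CP, the imaging plane IS, and the superimposition range SA, a pinhole-projection check can be sketched as follows. The focal length and the half-extents of the plane are assumed stand-ins for the stored outer-edge coordinates of the projection range PA, not values from this disclosure.

```python
def project_point(point, camera, focal=1.0):
    """Project a 3D point (x: right, y: up, z: forward), as seen from the
    virtual camera position CP, onto the imaging plane IS."""
    dx, dy, dz = (p - c for p, c in zip(point, camera))
    if dz <= 0.0:            # behind the camera: cannot be superimposed
        return None
    return (focal * dx / dz, focal * dy / dz)

def in_superimposition_range(point, camera, half_w=0.4, half_h=0.15):
    """True when the projection of the point lies inside the assumed outer
    edge of the projection range PA, i.e. within the angle of view VA."""
    uv = project_point(point, camera)
    return uv is not None and abs(uv[0]) <= half_w and abs(uv[1]) <= half_h
```

Because the camera argument tracks the sequentially corrected virtual camera position CP, a point on the road model can be re-tested each frame as the eye point coordinates change.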
  • the display generation unit 109 arranges the virtual object VO in the virtual space.
  • the virtual object VO is arranged along the expected locus PT on the road surface of the road model in the three-dimensional space.
  • the virtual object VO is set in the virtual space when the start content CTi and the expected locus content CTp, which will be described later, are displayed as virtual images.
  • the virtual object VO defines the position and shape of the start content CTi and the expected locus content CTp. That is, the shape of the virtual object VO as seen from the virtual camera position CP becomes the virtual image shape of the start content CTi and the expected locus content CTp that are visually recognized from the eye point EP.
  • the virtual object VO includes the left virtual object VOL and the right virtual object VOR.
  • the left virtual object VOL is arranged inside the virtual left lane marking VLL, along the virtual left lane marking VLL.
  • the right virtual object VOR is arranged inside the virtual right lane marking VLR, along the virtual right lane marking VLR, opposite the left virtual object VOL.
  • the left virtual object VOL and the right virtual object VOR are, for example, thin strip-shaped objects extending in a plane along the virtual lane markings VLL and VLR, respectively.
  • when displaying the start content CTi, the virtual objects VOL and VOR are arranged in a stationary state at predetermined positions inside the virtual lane markings VLL and VLR. On the other hand, when displaying the predicted locus content CTp, the virtual objects VOL and VOR are set as objects that repeat a movement from an initial position toward the center side of the own lane Lns, with the arrangement position at the time of displaying the start content CTi as the initial position.
  • the display generation unit 109 selectively uses a plurality of types of superposed content CTs and non-superimposed content according to the scene by executing the content selection function, and presents information to the driver. For example, the display generation unit 109 displays the predicted locus content CTp when the control information acquisition unit 104 has acquired the execution information of the LTA function and the scene determination unit 105 has determined that the scene is a specific scene. On the other hand, the display generation unit 109 hides the expected locus content CTp when it is determined that the scene is not a specific scene, even if the execution information of the LTA function is acquired.
  • the display generation unit 109 can execute the LTA display for displaying the contents related to the LTA function by the video data generation function.
  • the details of the LTA display will be described below with reference to FIG. 5, which shows an example of the LTA display in a curve driving scene.
  • the display generation unit 109 does not execute the LTA display when the scene determination unit 105 determines that the scene is not a specific scene (see A in FIG. 5). When it is determined that the scene is a specific scene, the display generation unit 109 presents the predicted trajectory PT of the vehicle A by the LTA function. Specifically, the display generation unit 109 first displays the start content CTi (see B in FIG. 5).
  • the start content CTi is a content indicating the start of display of the expected trajectory content CTp described later.
  • the start content CTi is, for example, the expected locus content CTp in a stationary mode. More specifically, the start content CTi is a superposed content CTs whose superimposing target is the road surface of the traveling road.
  • the start content CTi is drawn in a shape along the expected locus PT.
  • the start content CTi includes the left start content CTil and the right start content CTir.
  • the left-side start content CTil and the right-side start content CTir are a pair of contents corresponding to the lane markings LL and LR, which are a pair of boundaries in the own lane Lns.
  • each of the start contents CTil and CTir is, for example, a thin strip-shaped road paint extending continuously in the traveling direction of the vehicle A.
  • the left side start content CTil has a shape along the left side division line LL, with the inside of the left side division line LL as its superposed position.
  • the right side start content CTir has a shape along the right side division line LR, and the inside of the right side division line LR is a superposed position.
  • the start contents CTil and CTir are continuously displayed in a stationary state at the above-mentioned superposed positions.
  • the start content CTi is displayed for a predetermined period from the start time of the specific scene.
  • when the specific scene is a curve traveling scene, for example, the start content CTi is continuously displayed for a predetermined period after the superimposition range SA reaches the start position of the curved road.
  • the display generation unit 109 starts displaying the expected locus content CTp.
  • the predicted locus content CTp is a content indicating the predicted locus PT of the vehicle A by the LTA function.
  • the predicted locus content CTp includes a left boundary line CTbl and a right boundary line CTbr.
  • the left boundary line CTbl and the right boundary line CTbr have the same display shape as the left start content CTil and the right start content CTir, respectively.
  • the left boundary line CTbl and the right boundary line CTbr are displayed in a manner that moves by animation.
  • the boundary lines CTbl and CTbr are drawn so as to move from both outer sides to the central side in the lane width direction.
  • both outer sides are the sides where the lane markings LL and LR are located with respect to the central portion of the own lane Lns, and the central side is the side where the central portion of the own lane Lns is located with respect to the lane markings LL and LR.
  • the left boundary line CTbl moves from the left lane marking LL toward the center of the own lane Lns, and the right boundary line CTbr moves from the right lane marking LR toward the center of the own lane Lns.
  • the left boundary line CTbl and the right boundary line CTbr are in a mode of continuously moving in the lane width direction from the initial position to the center side of the own lane Lns.
  • the boundary lines CTbl and CTbr move continuously so that the width between the boundary lines CTbl and CTbr is narrowed.
  • the initial position is the superposed position of the start content CTi, and the start content CTi is drawn as if it started moving from the superposed position.
  • the boundary lines CTbl and CTbr move from their respective initial positions by the same amount of movement to reach their respective moving end positions.
  • the boundary lines CTbl and CTbr start moving at substantially the same timing from the initial position.
  • Each boundary line CTbl, CTbr completes the movement from the initial position to the moving end position in substantially the same period.
  • Each boundary line CTbl, CTbr is an example of moving content.
  • Each boundary line CTbl, CTbr is displayed so as to repeat the above-mentioned movement. Specifically, each boundary line CTbl, CTbr disappears when it moves by a predetermined amount of movement from the initial position, and reappears at the initial position. The reappearing boundary lines CTbl and CTbr perform the above-mentioned movement again. Each boundary line CTbl, CTbr continuously repeats movement until the end of a specific scene. When the specific scene ends, the boundary lines CTbl and CTbr are hidden.
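The repeating inward movement of the boundary lines CTbl and CTbr can be modeled as a periodic lateral offset; the cycle length and travel distance below are illustrative values, not figures from this disclosure.

```python
def boundary_offset(t, period=1.0, travel=0.5):
    """Lateral offset (m) of a boundary line from its initial position
    toward the lane center at time t (s). At full travel the line
    disappears and reappears at the initial position, so the motion
    simply repeats every cycle until the specific scene ends."""
    progress = (t % period) / period   # 0.0 at the initial position
    return travel * progress
```

Applying the same function to both lines matches the description that they start moving at substantially the same timing and complete the movement in substantially the same period.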
  • the display generation unit 109 displays the status image CTst as execution content indicating the execution of the LTA function in a predetermined display area in the meter display 23 (see FIG. 6).
  • the status image CTst has, for example, a shape imitating the lane markings LL and LR of the own lane Lns. Specifically, the status image CTst is displayed as a pair of thin strips. The status image CTst is fixedly displayed at a predetermined display position. For example, the status image CTst is displayed on both sides of the vehicle icon ICv that imitates the vehicle A.
  • the status image CTst is hidden when the LTA function is off (see A in FIG. 6).
  • the status image CTst is displayed when the LTA function is on.
  • the status image CTst has different display modes depending on whether or not it is determined that the scene is a specific scene. Specifically, when it is determined that the scene is not a specific scene, the status image CTst is continuously displayed in a lit state (see B in FIG. 6).
  • the status image CTst is displayed blinking when it is determined that the scene is a specific scene (see C in FIG. 6). Owing to the blinking, the status image CTst in the specific scene has a display mode different from that of the expected locus content CTp in the specific scene, which is a moving display mode. In the blinking display, the brightness of the status image CTst may be changed discretely between the lit state and the extinguished state, or changed continuously.
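The two blinking options mentioned above, a discrete change between lit and extinguished states and a continuous brightness change, can be sketched as follows; the 1 Hz rate and the 50 % duty cycle are assumptions for illustration.

```python
import math

def blink_discrete(t, rate_hz=1.0):
    """Brightness changed discretely: fully lit for the first half of each
    cycle, extinguished for the second half."""
    return 1.0 if (t * rate_hz) % 1.0 < 0.5 else 0.0

def blink_continuous(t, rate_hz=1.0):
    """Brightness changed continuously between 0 (extinguished) and 1 (lit)."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))
```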
  • the process shown in FIG. 7 is started, for example, by switching the vehicle power supply to the on state, by the HCU 100 that has completed the start-up process and the like.
  • each "S" denotes a step of the flow, executed by instructions included in the display control program.
  • the display generation unit 109 determines whether or not the LTA function is on based on the control information acquired by the control information acquisition unit 104. If it is determined that the LTA function is off, the process waits until it is turned on. If it is determined that the LTA function is on, the process proceeds to S20, and the scene determination unit 105 determines whether or not the current driving scene is a specific scene.
  • if it is determined that the current driving scene is a specific scene, the process proceeds to S30, and the display generation unit 109 displays the start content CTi. After that, the process proceeds to S40, the expected locus content CTp is displayed, and the process proceeds to S50.
  • in S50, the scene determination unit 105 determines whether or not the specific scene has ended. If the specific scene has not ended, the process returns to S40 and the display of the expected locus content CTp is continued. On the other hand, if it is determined that the specific scene has ended, the process proceeds to S60, the display of the expected locus content CTp ends, and the process proceeds to S70.
  • in S70, it is determined whether or not the LTA function is turned off based on the control information. If it is determined that the LTA function is not off, the process returns to S20. On the other hand, when it is determined that the LTA function is off, the series of processes is terminated.
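The flow of FIG. 7 can be restated as a small state machine over (LTA-on, specific-scene) observations. This is only an illustrative summary of steps S10 to S70; the action labels are hypothetical names, not identifiers from this disclosure.

```python
def run_display_flow(observations):
    """observations: iterable of (lta_on, specific_scene) booleans sampled
    over time. Returns the sequence of display actions the flow would take."""
    actions = []
    showing = False
    for lta_on, specific in observations:
        if not lta_on:                      # wait while LTA is off / S70: LTA off
            if showing:
                actions.append("end_CTp")   # S60: end expected locus display
                showing = False
            continue
        if specific and not showing:        # S20 -> S30 -> S40
            actions.append("show_CTi")      # S30: display start content
            actions.append("show_CTp")      # S40: display expected locus
            showing = True
        elif not specific and showing:      # S50 -> S60: specific scene ended
            actions.append("end_CTp")
            showing = False
    return actions
```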
  • the predicted locus content CTp is superimposed and displayed on the road surface in the case of a specific scene, in which the driver's trust in the lane keeping control is lowered, and is hidden in the case of a non-specific scene. According to this, the expected locus content CTp is displayed in a specific scene. Therefore, the driver who visually recognizes the predicted locus content CTp can easily recall the image that the driving in the lane is maintained even in a specific scene. As a result, the driver's anxiety can be reduced.
  • the start content CTi indicating the start of display of the expected locus content CTp is displayed before the display of the expected locus content CTp. According to this, the display start of the predicted locus content CTp is presented to the driver by the start content CTi. Therefore, the driver can easily understand that the expected locus content CTp is displayed, which enables an easy-to-understand display.
  • the display generation unit 109 can remind the driver of the image of the vehicle A traveling in the center of the own lane Lns. Therefore, the display generation unit 109 can further reduce the driver's anxiety about traveling in the lane due to the LTA function.
  • the display generation unit 109 can further reduce the anxiety of the driver.
  • the second embodiment is different from the first embodiment in the mode of movement of the predicted locus content CTp.
  • the display generation unit 109 of the second embodiment moves the boundary line superimposed on the outer peripheral side of the curve, among the boundary lines CTbl and CTbr, farther toward the center side than the boundary line superimposed on the inner peripheral side.
  • the outer peripheral side is the side of the pair of boundary lines CTbl and CTbr far from the center of curvature of the curved road
  • the inner peripheral side is the side closer to the center of curvature.
  • the moving end position of the left boundary line CTbl is set closer to the center of the own lane Lns than that of the right boundary line CTbr. As a result, the left boundary line CTbl is displayed so as to move farther toward the center side than the right boundary line CTbr.
  • since the boundary line superimposed on the outer peripheral side of the curve is displayed so as to move farther toward the center side than the boundary line superimposed on the inner peripheral side, an image of the vehicle deviating from the curved road becomes less likely to be evoked.
  • the HCU 100 can further reduce the driver's anxiety.
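The asymmetric movement of the second embodiment can be expressed by giving the curve's outer boundary line a larger movement-end offset. The base offset and the extra amount below are illustrative values, not figures from this disclosure.

```python
def movement_end_offsets(curve_direction, base=0.3, outer_extra=0.15):
    """Return (left, right) movement-end offsets (m) toward the lane center.
    curve_direction is the side on which the center of curvature lies
    ('left' or 'right'); the line on the opposite, outer peripheral side
    is given the larger offset, i.e. it moves more toward the center."""
    left = base + (outer_extra if curve_direction == "right" else 0.0)
    right = base + (outer_extra if curve_direction == "left" else 0.0)
    return left, right
```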
  • the display generation unit 109 displays the expected locus content CTp regardless of the determination result in the scene determination unit 105. Then, the display generation unit 109 changes the display mode of the predicted locus content CTp depending on whether it is determined that it is not a specific scene or that it is a specific scene.
  • when it is determined that the scene is not a specific scene, the display generation unit 109 displays a pair of boundary lines CTbl and CTbr, which emphasize each of the pair of boundaries of the own lane Lns, as the expected trajectory content CTp (normal display), as shown in A of FIG. 9.
  • the pair of boundary lines in the third embodiment emphasize the left and right lane markings LL and LR of the own lane Lns, but the road edge, an arbitrarily set virtual boundary line, or the like may instead be emphasized as the boundary of the own lane Lns.
  • the pair of boundary lines CTbl and CTbr includes a left boundary line CTbl and a right boundary line CTbr, and is an example of boundary content.
  • the left boundary line CTbl and the right boundary line CTbr in the third embodiment are displayed so as to stay at the superimposition position with the inside of the corresponding division lines LL and LR as the superimposition position, respectively.
  • Each boundary line CTbl, CTbr is displayed as, for example, a thin strip-shaped road paint extending continuously along the lane markings LL, LR.
  • when it is determined that the scene is a specific scene, the display generation unit 109 changes the boundary lines to a display mode in which the central portion in the lane width direction is emphasized, as compared with the case where it is determined that the scene is not a specific scene (special display). Specifically, the display generation unit 109 changes the superposed positions of the boundary lines CTbl and CTbr to the center side of the own lane Lns (see B in FIG. 9). As a result, the boundary lines CTbl and CTbr in the specific scene are closer to the center of the own lane Lns than in the non-specific scene. The magnitude of the movement width from the superposed position in the non-specific scene is set to be about the same for each of the boundary lines CTbl and CTbr.
  • the display generation unit 109 presents a change in the superposition position of the boundary lines CTbl and CTbr by animation display. That is, when it is determined that the scene is a specific scene, the pair of boundary lines CTbl and CTbr are displayed so as to continuously move toward the superposed position in the specific scene (see A in FIG. 9). The timings of the movement start and movement end of the boundary lines CTbl and CTbr are substantially the same. When the specific scene ends, the boundary lines CTbl and CTbr return to the superposed position in the non-specific scene by the animation display that moves in the direction opposite to the animation display described above (see C in FIG. 9).
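The animated change of the superposed positions amounts to a linear interpolation between the normal and special positions; both lines share the same start time and duration, as the text states. The duration and example positions below are assumed values.

```python
def superposed_position(t, start, normal_pos, special_pos, duration=0.5):
    """Lateral position of a boundary line at time t while it moves
    continuously from its normal superposed position to the special
    (center-side) one; clamped outside the animation interval."""
    progress = min(max((t - start) / duration, 0.0), 1.0)
    return normal_pos + (special_pos - normal_pos) * progress
```

Swapping normal_pos and special_pos yields the reverse animation that returns the lines to the non-specific-scene positions when the specific scene ends.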
  • when it is determined that the LTA function is on, the process proceeds to S15.
  • in S15, the expected locus content CTp is displayed in the normal display mode, and the process proceeds to S20. If it is determined in S20 that the scene is a specific scene, the process proceeds to S45, and the expected locus content CTp is displayed in the special display mode.
  • when it is determined that the specific scene has ended, the process proceeds to S65, the special display ends, and the display mode returns to the normal display.
  • when it is determined that the LTA function is off, the normal display is terminated in S85, the expected locus content CTp is hidden, and the series of processes is terminated.
  • the display mode of the expected locus content CTp is changed depending on whether or not it is determined to be a specific scene. Therefore, the driver who visually recognizes the predicted locus content CTp can understand that the lane keeping control is being executed with the specific scene grasped on the vehicle side. Therefore, the driver can easily recall the image that the driving in the lane is maintained even in a specific scene. From the above, the HCU 100 can reduce the anxiety of the driver.
  • the expected trajectory content CTp is changed to a display mode that emphasizes the central part of the own lane Lns. According to this, in the case of a specific scene, the central side of the own lane Lns is emphasized, so that the driver can recall the image of the vehicle A traveling in the center of the own lane Lns. Therefore, it is possible to further impress the driver that the driving in the lane is maintained even in a specific scene.
  • the pair of boundary lines CTbl and CTbr are displayed closer to the center side than in the case where it is determined that the scene is not a specific scene. Therefore, the complexity within the angle of view VA can be suppressed as compared with the case of additionally displaying content.
  • the display generation unit 109 of the fourth embodiment displays, of the boundary lines CTbl and CTbr, the content superimposed on the outer peripheral side of the curve closer to the center side than the content superimposed on the inner peripheral side.
  • the left boundary line CTbl is displayed closer to the center side than the right boundary line CTbr, and its separation distance from the lane marking is larger.
  • the HCU 100 can further reduce the driver's anxiety.
  • the display generation unit 109 displays the additional contents CTal and CTar in a portion on the center side of the pair of boundary lines CTbl and CTbr.
  • the additional contents CTal and CTar are expected locus contents CTp that are additionally displayed on the pair of boundary lines CTbl and CTbr.
  • the additional contents CTal and CTar are formed in a thin band shape extending continuously along the expected locus PT, like the pair of boundary lines CTbl and CTbr, for example.
  • the additional contents CTal and CTar are displayed in a display color different from, for example, the pair of boundary lines CTbl and CTbr.
  • the additional contents CTal and CTar include a left side additional content CTal displayed relatively on the left side and a right side additional content CTar displayed relatively on the right side.
  • the additional contents CTal and CTar are displayed on the center side of the pair of boundary lines CTbl and CTbr, thereby emphasizing the central part of the own lane Lns to the driver.
  • the additional contents CTal and CTar are additionally displayed while the display of the pair of boundary lines CTbl and CTbr is maintained. Therefore, it is easy for the driver to understand that the content related to the LTA display is continuously displayed even in a specific scene. This enables a more understandable display.
  • the display generation unit 109 displays the central line CTc as the expected locus content CTp (see FIG. 13).
  • the central line CTc is content superimposed on the central portion of the own lane Lns.
  • the central line CTc is formed into, for example, a thin band extending along the expected locus PT.
  • the central line CTc is an example of central content.
  • the display generation unit 109 changes the display mode of the central line CTc to a display mode that emphasizes the boundaries of the own lane Lns. Specifically, when it is determined that the scene is a specific scene, the central line CTc is changed to a pair of boundary lines CTbl and CTbr (see FIG. 14). The boundary lines have superposed positions on both outer sides of the central line CTc.
  • the display generation unit 109 continuously transforms the central line CTc into the pair of boundary lines CTbl and CTbr by an animation in which the one central line CTc branches into the two boundary lines CTbl and CTbr (see A in FIG. 14). Further, when the specific scene ends, the display generation unit 109 continuously transforms the pair of boundary lines CTbl and CTbr back into the central line CTc by an animation in which the two boundary lines CTbl and CTbr merge into the one central line CTc (see C in FIG. 14).
  • the expected locus content CTp is displayed as the central line CTc when it is determined that the scene is not a specific scene, and the display mode is changed to one that emphasizes the pair of lane markings LL and LR when it is determined that the scene is a specific scene. Therefore, the driver can recall traveling while staying inside the lane markings LL and LR in a specific scene.
  • when it is determined that the scene is a specific scene, the display generation unit 109 continuously transforms the central line CTc into the pair of boundary lines CTbl and CTbr by an animation that divides the central line CTc into left and right.
  • the central line CTc is changed into a pair of boundary lines CTbl and CTbr by an animation in which the entire content is divided into left and right halves and each of them moves in parallel in the horizontal direction.
  • the pair of boundary lines CTbl and CTbr return to the center line CTc by the animation moving in the opposite direction to the above.
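Both branching animations above boil down to moving two halves of the central line CTc laterally outward as a function of animation progress. The lane half-width below is an assumed illustrative value.

```python
def split_positions(progress, lane_half_width=1.5):
    """Lateral center positions (m) of the two halves of the central line
    CTc as it divides and translates outward; progress runs from 0.0 (a
    single central line) to 1.0 (a pair of boundary lines CTbl, CTbr)."""
    offset = lane_half_width * min(max(progress, 0.0), 1.0)
    return (-offset, +offset)
```

Running progress from 1.0 back to 0.0 gives the reverse animation in which the pair of boundary lines returns to the central line when the specific scene ends.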
  • when it is determined that the scene is a specific scene, the display generation unit 109 additionally displays a pair of boundary lines CTbl and CTbr in addition to the center line CTc. As a result, the content that emphasizes the boundaries is added, so that the predicted trajectory content CTp as a whole has a display mode that emphasizes the boundaries of the own lane Lns, as compared with the case where it is determined that the scene is not a specific scene.
  • the boundary lines CTbl and CTbr are additionally displayed while the display of the center line CTc is maintained. Therefore, it is easy for the driver to understand that the content related to the LTA display is continuously displayed even in a specific scene. This enables a more understandable display.
  • the display generation unit 109 expands the width of the center line CTc when it is determined that the scene is a specific scene.
  • the widthwise end of the center line CTc approaches the boundary of the own lane Lns by widening, so that the boundary is emphasized as compared with the case where it is not a specific scene.
  • the display generation unit 109 displays the wall content CTw when the specific scene is a curve running scene.
  • the wall contents CTw are superimposed contents CTs superimposed near the lane markings on the outer peripheral side of the curve.
  • the wall content CTw exhibits a wall shape erected so as to separate the own lane Lns from the area outside the lane.
  • the wall content CTw is displayed so as to be erected on the outer peripheral side of the expected locus content CTp.
  • the wall content CTw has a wall shape that rises upward from the lane marking.
  • the wall content CTw may have a wall shape rising from the road surface inside or outside the lane marking.
  • the wall content CTw has a shape extending along the own lane Lns.
  • the wall content CTw is displayed so as to extend from the start point to the end point of the curve.
  • the wall content CTw is superimposed and displayed on the outer peripheral side of the curve. Therefore, the driver can more easily recall the image that the vehicle A does not deviate to the outer peripheral side of the curve. Therefore, the display generation unit 109 can further reduce the anxiety of the driver.
  • the display generation unit 109 may display the above-mentioned wall content CTw in the cliff running scene.
  • the wall content CTw is superimposed and displayed near the lane marking on the side where the cliff is located.
  • the reliability is measured by, for example, DSM27.
  • the DSM27 measures driver stress as reliability. In this case, the higher the stress, the lower the reliability.
  • the DSM 27 may detect eye movements such as saccades, pupil dilation, and the like by analyzing captured images, and its control unit may calculate a stress evaluation value for evaluating stress based on these. Further, the DSM 27 may use detection information from a biosensor (not shown) for the calculation of stress.
  • the detection information includes, for example, heart rate, sweating amount, body temperature, and the like.
  • the DSM27 sequentially provides the measured stress evaluation values to the HCU 100.
  • the driver information acquisition unit 101 of the HCU 100 acquires the stress evaluation value from the DSM 27 and provides it to the scene determination unit 105.
  • the scene determination unit 105 determines whether or not the current driving scene is a specific scene based on the stress evaluation value. That is, when the stress evaluation value is within the permissible range, the scene determination unit 105 determines that the current driving scene is a specific scene, and when the stress evaluation value is out of the permissible range, it determines that the current driving scene is not a specific scene.
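The threshold comparison described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the function name and the default bounds of the permissible range are assumptions.

```python
def is_specific_scene(stress_value, permissible_range=(0.4, 1.0)):
    """Treat the current driving scene as a 'specific scene' when the
    measured stress evaluation value falls within the permissible range
    associated with lowered driver reliability (hypothetical bounds)."""
    low, high = permissible_range
    return low <= stress_value <= high
```

A stress evaluation value inside the range marks the scene as specific; a value outside it does not.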
  • the above processing is executed in S20 in the flowchart of FIG. 7.
  • the scene determination unit 105 determines the permissible range used for determining a specific scene by learning. Specifically, the scene determination unit 105 acquires information on the detection timing of steering-wheel gripping or a steering operation during execution of the LTA, or on the timing at which the LTA is interrupted by a brake operation. In addition, the scene determination unit 105 acquires the stress evaluation value at that timing. The scene determination unit 105 may learn the permissible range corresponding to a specific scene based on this information, or may use a preset range instead of determining the permissible range by learning.
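One way the learning step could work is to bound the permissible range by the stress values observed at driver-intervention timings (grip, steering, or brake-induced LTA interruption). The function below is a hedged sketch of that idea; the margin parameter is an illustrative smoothing assumption, not a value from the disclosure.

```python
def learn_permissible_range(stress_at_interventions, margin=0.05):
    """Derive a stress range associated with driver interventions.
    Stress values observed when the driver gripped the wheel, steered,
    or braked during LTA bound the 'specific scene' range; a small
    margin widens it slightly (illustrative assumption)."""
    lo = min(stress_at_interventions) - margin
    hi = max(stress_at_interventions) + margin
    return (lo, hi)
```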
  • the scene determination unit 105 may use information other than the actually measured reliability for determining a specific scene. For example, the scene determination unit 105 may combine the actually measured reliability with the determination result of whether the current travel scene corresponds to any one of the curve travel scene, the poor-visibility scene, and the cliff travel scene shown in the first embodiment. Specifically, the scene determination unit 105 may determine that the current driving scene is a specific scene when it corresponds to any one of the above scenes and the reliability is within the range corresponding to a specific scene.
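The combined determination (scene category AND measured reliability) can be sketched as a conjunction; the scene-type labels below are illustrative names for the scenes of the first embodiment, not identifiers from the disclosure.

```python
def is_specific_scene_combined(scene_type, stress_value, permissible_range):
    """Specific scene only when the scene category is one of the
    candidate scenes AND the measured stress evaluation value lies in
    the range corresponding to a specific scene."""
    candidate = scene_type in {"curve", "poor_visibility", "cliff"}
    low, high = permissible_range
    return candidate and low <= stress_value <= high
```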
  • the configuration of the eleventh embodiment is also applicable to the HCU 100 that changes the display mode of the expected locus content CTp depending on whether it is determined to be a specific scene or not.
  • the display generation unit 109 of the HCU 100 displays the predicted trajectory content CTp in a display mode that emphasizes it more strongly as the driver's reliability in the lane keeping control is estimated to be lower in the specific scene.
  • the display generation unit 109 estimates that the greater the curvature of the lane of the curve road on which the vehicle travels, the lower the reliability in the curve travel scene.
  • the display generation unit 109 estimates that the more continuous the curve is, the lower the reliability is in the curve traveling scene. Further, the display generation unit 109 estimates that the smaller the width of the road on which the vehicle travels, the lower the reliability.
  • the display generation unit 109 may perform the above reliability estimation based on the high-precision map data.
  • the display generation unit 109 estimates that the lower the visibility of the lane markings due to faintness or the like, the lower the reliability.
  • the display generation unit 109 may estimate the visibility of the lane marking based on the detection information of the lane marking acquired by the outside world information acquisition unit 103.
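The monotonic estimates above (larger curvature, more consecutive curves, narrower road, fainter lane markings, each lowering the estimated reliability) can be combined into a simple heuristic score. The weights and normalization constants below are illustrative assumptions, not values from the disclosure.

```python
def estimate_reliability(curvature, consecutive_curves, road_width_m, marking_visibility):
    """Heuristic reliability in [0, 1]: lower for sharper curvature (1/m),
    more consecutive curves, narrower lanes, and fainter lane markings
    (visibility given in [0, 1]). All weights are illustrative."""
    score = 1.0
    score -= 0.5 * min(curvature / 0.02, 1.0)          # sharper curve -> lower
    score -= 0.1 * min(consecutive_curves, 3)          # continuous curves -> lower
    score -= 0.2 * max(0.0, (3.5 - road_width_m) / 3.5)  # narrower road -> lower
    score -= 0.2 * (1.0 - marking_visibility)          # fainter markings -> lower
    return max(0.0, score)
```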
  • the display generation unit 109 emphasizes the predicted trajectory content CTp by, for example, increasing the repetition speed of the movement of the boundary lines CTbl and CTbr.
  • the display generation unit 109 may emphasize the predicted trajectory content CTp by increasing the amount of inward movement of the boundary lines CTbl and CTbr.
  • the display generation unit 109 may emphasize the display by increasing the brightness or the display size of the boundary lines CTbl and CTbr.
  • the display generation unit 109 may emphasize the display by changing the display colors of the boundary lines CTbl and CTbr.
  • the display generation unit 109 executes the display process according to the above reliability in S40 of the flowchart of FIG.
  • the display generation unit 109 may have a display mode in which the predicted locus content CTp is emphasized as the actually measured reliability is lower.
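The mapping from estimated or measured reliability to display emphasis (animation repetition speed, inward movement amount, brightness) can be sketched as a monotone function. The parameter names and numeric ranges are illustrative assumptions only.

```python
def emphasis_params(reliability):
    """Map reliability in [0, 1] to display-emphasis parameters:
    lower reliability -> faster boundary-line animation, larger inward
    movement, and higher brightness. Ranges are illustrative."""
    k = 1.0 - max(0.0, min(1.0, reliability))   # 0 = no emphasis, 1 = maximal
    return {
        "repeat_speed_hz": 0.5 + 1.5 * k,   # repetition speed of movement
        "inward_shift_px": 4 + int(12 * k), # movement amount toward center
        "brightness": 0.5 + 0.5 * k,
    }
```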
  • the description of the eleventh embodiment is referred to.
  • the configuration of the twelfth embodiment can be applied to the HCU 100 which changes the display mode of the predicted locus content CTp depending on whether it is determined to be a specific scene or not. In that case, the display generation unit 109 may execute the display process according to the reliability in S45 of the flowchart of FIG.
  • the predicted trajectory content CTp makes it easier for the driver to more reliably recall the image of maintaining the lane driving. Therefore, the driver's anxiety can be further reduced.
  • when the curvature of the traveling lane is large, a relatively large acceleration can act on the vehicle A, so the driver is more likely to feel anxiety about the lane keeping control.
  • since the predicted trajectory content CTp is emphasized more as the curvature of the lane becomes larger, the driver can more reliably recall, from the predicted trajectory content CTp, the image that traveling in the lane is maintained.
  • the driver's anxiety in the curve driving scene can thereby be further reduced.
  • the driver information acquisition unit 101 acquires whether or not the driver is gripping the steering wheel (hereinafter, gripping information), in addition to the position and line-of-sight direction of the driver's eye point EP.
  • the grip information may be specified by, for example, image analysis by DSM27, or by a grip sensor or steer sensor (not shown).
  • the state in which the driver is gripping the steering wheel may be referred to as the "hands-on state".
  • the state in which the driver has released the grip may be referred to as the "hands-off state".
  • the control information acquisition unit 104 acquires, in addition to the lane maintenance control information, the level information of automatic driving when the LTA function is executed from the lane maintenance control units 51 and 61.
  • the level information may be at least information sufficient to determine whether the automated driving level is 2 or lower, or 3 or higher. In other words, the level information need only make it possible to determine whether or not a periphery monitoring obligation applies while the LTA function is executed. The control information acquisition unit 104 may determine which of the lane keeping control units 51 and 61 of the ECUs 50 and 60 provided the information indicating that the LTA function is turned on, and may generate the level information based on the determination result.
  • the control information acquisition unit 104 acquires the trajectory information that the vehicle A is scheduled to travel along when automated driving of level 3 or higher is executed.
  • the track information includes at least information about the route that vehicle A is going to follow.
  • the track information may include information about the speed at which the route travels.
  • the display generation unit 109 determines whether or not to execute the display of the predicted trajectory content CTp based on the gripping information, the level information, and the trajectory information in addition to the determination result of the scene determination unit 105.
  • when the automated driving level is 2 or lower and the hands-on state is determined, the display generation unit 109 cancels the display of the predicted trajectory content CTp even if the scene is determined to be a specific scene. On the other hand, when the automated driving level is 2 or lower and the hands-off state is determined, the display generation unit 109 displays the predicted trajectory content CTp if the scene is determined to be a specific scene.
  • the display generation unit 109 displays the predicted trajectory content CTp regardless of whether or not the scene is determined to be a specific scene.
  • the display generation unit 109 evaluates the magnitude of the vehicle behavior based on the future acceleration predicted to act on the vehicle A.
  • the magnitude of vehicle behavior may be evaluated based on at least one of lateral acceleration and front-rear acceleration.
  • the display generation unit 109 may predict the future acceleration based on the orbit information.
  • the display generation unit 109 may acquire the magnitude of the vehicle behavior predicted by the automatic driving ECU 60 or the like.
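The evaluation of vehicle-behavior magnitude from lateral and longitudinal (front-rear) acceleration can be sketched as a vector norm; the permissible limit value is an illustrative assumption, not a value from the disclosure.

```python
import math

def behavior_magnitude(lateral_acc, longitudinal_acc):
    """Magnitude of future vehicle behavior from the accelerations
    predicted along the planned trajectory, in m/s^2."""
    return math.hypot(lateral_acc, longitudinal_acc)

def behavior_within_range(lateral_acc, longitudinal_acc, limit=2.0):
    # limit is an illustrative permissible threshold (m/s^2)
    return behavior_magnitude(lateral_acc, longitudinal_acc) <= limit
```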
  • the process proceeds to S15.
  • the display generation unit 109 determines whether or not the automatic operation level is 3 or higher. If it is determined that the automatic operation level is 3 or higher, the process proceeds to S16.
  • the display generation unit 109 estimates the magnitude of the vehicle behavior and determines whether or not the magnitude is within the permissible range. If it is determined that it is out of the permissible range, the process proceeds to S20, and if it is determined that it is within the permissible range, the process proceeds to S30.
  • the process proceeds to S20.
  • the scene determination unit 105 determines in S20 that the current driving scene corresponds to a specific scene
  • the process proceeds to S25.
  • the display generation unit 109 determines whether or not the automated driving level is 2 or lower and the driver is in the hands-on state.
  • if not, the process proceeds to S30. On the other hand, if it is determined that the automated driving level is 2 or lower and the hands-on state holds, the process proceeds to S70.
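One possible reading of the display decision around S15 to S70 can be sketched as a single predicate, under the assumptions that S70 corresponds to cancelling the display, that large predicted vehicle behavior at level 3 or higher forces the display, and that the hands-on check suppresses it at level 2 or lower. These assumptions, and the function name, are illustrative.

```python
def should_display_predicted_trajectory(is_specific, level, hands_on, behavior_out_of_range):
    """Hedged reading of the S15-S70 flow: at automated driving level 3+,
    out-of-range predicted behavior forces the display; otherwise the
    display follows the specific-scene result, except that level <= 2
    with the wheel gripped (hands-on) cancels it."""
    if level >= 3 and behavior_out_of_range:
        return True
    if not is_specific:
        return False
    if level <= 2 and hands_on:
        return False
    return True
```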
  • operations that the driver may perform when feeling anxious about the lane keeping control can include gripping the steering wheel.
  • the display of the predicted trajectory content CTp can be controlled more appropriately according to the necessity of its display.
  • at automated driving level 3 or higher, where the future behavior of the vehicle A can easily be predicted from the trajectory information or the like, the predicted trajectory content CTp can be displayed more reliably in situations where the driver may feel uneasy.
  • the configuration of the thirteenth embodiment is naturally also applicable to the HCU 100 that changes the display mode of the predicted trajectory content CTp depending on whether or not the scene is determined to be a specific scene.
  • when the HCU 100 to which the configuration of the thirteenth embodiment is applied determines that the driver has a periphery monitoring obligation during execution of the lane keeping control and is in the hands-on state, it can be configured so that the display mode of the predicted trajectory content CTp is the same as the display mode used when the current driving scene is determined not to be a specific scene, even if the scene is determined to be a specific scene.
  • when the HCU 100 to which the configuration of the thirteenth embodiment is applied determines that the driver has no periphery monitoring obligation during execution of the lane keeping control and that the magnitude of the predicted vehicle behavior is out of the permissible range, it can be configured so that the display mode of the predicted trajectory content CTp is the same as the display mode used when the scene is determined to be a specific scene, even if the current driving scene is determined not to be a specific scene.
  • the scene determination unit 105 determines the scene based on the high-precision map data around the vehicle A or the detection information of the peripheral monitoring sensor 30.
  • the scene determination unit 105 may be configured to perform scene determination based on other information. For example, the scene determination unit 105 may determine whether or not the scene has poor visibility by acquiring weather information from an external server via the DCM49.
  • the scene determination unit 105 executes scene determination as to whether or not it is a specific scene based on various acquired information. Instead of this, the scene determination unit 105 may execute the scene determination by acquiring the scene determination result executed by another ECU such as the driving support ECU 50 or the automatic driving ECU 60.
  • the boundary lines CTbl and CTbr are superimposed and displayed inside the lane markings LL and LR, but they may instead be superimposed on the lane markings LL and LR, or outside them.
  • the boundary lines may be displayed at places corresponding to boundaries other than the lane markings, such as the left and right road edges.
  • the display generation unit 109 hides the predicted trajectory content CTp when the scene is determined not to be a specific scene; however, LTA-related content indicating something other than the predicted trajectory may be displayed even when the scene is not a specific scene.
  • the display generation unit 109 may display contents such as character information and icons indicating that LTA is being executed separately from the predicted locus content CTp.
  • the display generation unit 109 displays the expected locus content CTp so as to move continuously. Instead, the display generation unit 109 may display the expected locus content CTp so as to move intermittently. Further, the display generation unit 109 may display the predicted locus content CTp in a movement pattern different from the movement from both outer sides to the center side. Further, the display generation unit 109 may use the expected locus content CTp as static content to be displayed while staying in place, as in the third embodiment. Further, the display generation unit 109 may display the expected locus content CTp without displaying the start content CTi.
  • the display generation unit 109 has a display mode in which the predicted locus content CTp emphasizes the central portion in the lane width direction in the case of a specific scene.
  • the display generation unit 109 may change the display mode by increasing the brightness of the predicted locus content CTp in the case of a specific scene.
  • the display generation unit 109 may change the display mode by reducing the transmittance of the predicted locus content CTp, changing the display color, enlarging the display size, and the like.
  • the display generation unit 109 continuously changes the display mode of the predicted locus content CTp by animation.
  • the display generation unit 109 may be configured to once hide the predicted trajectory content CTp before the display-mode change and then display the changed predicted trajectory content CTp.
  • the predicted locus content CTp is a continuous thin band, but the display shape of the predicted locus content CTp is not limited to this.
  • the predicted locus content CTp may be a plurality of figures arranged along the predicted locus PT, or may have an arrow shape.
  • the status image CTst is displayed on the meter display 23, but it may be displayed on another vehicle-mounted display such as a center information display.
  • the status image CTst is displayed in a display mode different from the expected locus content CTp in a specific scene by blinking display.
  • the status image CTst may be displayed in another display mode.
  • the status image CTst does not have to be completely turned off, as long as the display mode repeats a maximum-brightness state and a minimum-brightness state.
  • the status image CTst may be displayed in a different display mode from the predicted locus content CTp by moving and displaying the status image CTst in a movement pattern different from the predicted locus content CTp.
  • the scene determination unit 105 determines a specific scene by using the stress of the driver as the reliability.
  • the scene determination unit 105 may determine a specific scene by using an index other than stress as the reliability as long as the tension or anxiety about the driver's lane keeping control can be estimated.
  • the scene determination unit 105 may determine a specific scene by using the driver's degree of gaze toward the front as the reliability. The degree of gaze may be measured by the DSM 27.
  • the scene determination unit 105 may determine a specific scene, assuming that the higher the gaze degree, the lower the reliability.
  • the processing unit and processor of the above-described embodiment include one or a plurality of CPUs (Central Processing Units).
  • the processing unit and the processor may include a GPU (Graphics Processing Unit), a DFP (Data Flow Processor), and the like in addition to the CPU.
  • the processing unit and the processor may be a processing unit including an FPGA (Field-Programmable Gate Array) and an IP core specialized in specific processing such as learning and inference of AI.
  • Each arithmetic circuit unit of such a processor may be individually mounted on a printed circuit board, or may be mounted on an ASIC (Application Specific Integrated Circuit), an FPGA, or the like.
  • non-transitory tangible storage mediums such as flash memory and hard disk can be adopted as the memory device for storing the control program.
  • the form of such a storage medium may also be changed as appropriate.
  • the storage medium may be in the form of a memory card or the like, and may be inserted into a slot portion provided in an in-vehicle ECU and electrically connected to a control circuit.
  • the control unit and the method thereof described in the present disclosure may be realized by a dedicated computer comprising a processor programmed to execute one or more functions embodied by computer programs.
  • the apparatus and method thereof described in the present disclosure may be realized by a dedicated hardware logic circuit.
  • the apparatus and method thereof described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits.
  • the computer program may be stored in a computer-readable non-transitional tangible recording medium as an instruction executed by the computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Optics & Photonics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a display control device (100) that controls the display of content by a head-up display (20) of a vehicle provided with a lane keeping control unit (51, 61) capable of executing lane keeping control for keeping the vehicle traveling in its lane. The display control device is provided with a scene determination unit (105) that determines whether the current scene is a specific scene in which the driver's degree of confidence in the lane keeping control decreases. The display control device is provided with a display generation unit (109) that displays predicted trajectory content by superimposing it on the road surface in response to a determination that the current scene is a specific scene, the predicted trajectory content indicating a predicted trajectory produced by the lane keeping control, and that does not display the predicted trajectory content in response to a determination that the current scene is not a specific scene.
PCT/JP2020/036371 2019-10-02 2020-09-25 Dispositif de commande d'affichage et programme de commande d'affichage WO2021065735A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019182434 2019-10-02
JP2019-182434 2019-10-02
JP2020-145431 2020-08-31
JP2020145431A JP7111137B2 (ja) 2019-10-02 2020-08-31 表示制御装置、および表示制御プログラム

Publications (1)

Publication Number Publication Date
WO2021065735A1 true WO2021065735A1 (fr) 2021-04-08

Family

ID=75338221

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/036371 WO2021065735A1 (fr) 2019-10-02 2020-09-25 Dispositif de commande d'affichage et programme de commande d'affichage

Country Status (1)

Country Link
WO (1) WO2021065735A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113788015A (zh) * 2021-08-04 2021-12-14 杭州飞步科技有限公司 车辆轨迹的确定方法、装置、设备以及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005170323A (ja) * 2003-12-15 2005-06-30 Denso Corp 走路形状表示装置
JP2016145783A (ja) * 2015-02-09 2016-08-12 株式会社デンソー 車両用表示制御装置及び車両用表示制御方法
JP2017094922A (ja) * 2015-11-24 2017-06-01 アイシン精機株式会社 周辺監視装置
JP2018127204A (ja) * 2017-02-08 2018-08-16 株式会社デンソー 車両用表示制御装置
JP2018140714A (ja) * 2017-02-28 2018-09-13 株式会社デンソー 表示制御装置及び表示制御方法
JP2019500658A (ja) * 2015-09-17 2019-01-10 ソニー株式会社 車両に安全に追い付けるように運転を支援するシステムおよび方法
JP2019163037A (ja) * 2014-12-01 2019-09-26 株式会社デンソー 画像処理装置


Similar Documents

Publication Publication Date Title
US11996018B2 (en) Display control device and display control program product
US20220118983A1 (en) Display control device and display control program product
JP7014205B2 (ja) 表示制御装置および表示制御プログラム
WO2020208989A1 (fr) Dispositif et programme de commande d'affichage
US20220058998A1 (en) Display control device and non-transitory computer-readable storage medium for display control on head-up display
US11850940B2 (en) Display control device and non-transitory computer-readable storage medium for display control on head-up display
JP2021075219A (ja) 表示制御装置及び表示制御プログラム
WO2020203065A1 (fr) Appareil et programme de commande d'affichage
JP7283448B2 (ja) 表示制御装置および表示制御プログラム
WO2021065735A1 (fr) Dispositif de commande d'affichage et programme de commande d'affichage
JP7243660B2 (ja) 表示制御装置及び表示制御プログラム
JP7111137B2 (ja) 表示制御装置、および表示制御プログラム
JP7255429B2 (ja) 表示制御装置および表示制御プログラム
JP7111121B2 (ja) 表示制御装置及び表示制御プログラム
JP7173078B2 (ja) 表示制御装置及び表示制御プログラム
JP7092158B2 (ja) 表示制御装置及び表示制御プログラム
JP7188271B2 (ja) 表示制御装置及び表示制御プログラム
JP2021094965A (ja) 表示制御装置、および表示制御プログラム
JP2021037895A (ja) 表示制御システム、表示制御装置、および表示制御プログラム
JP2021060808A (ja) 表示制御システム及び表示制御プログラム
JP7014206B2 (ja) 表示制御装置および表示制御プログラム
JP7088152B2 (ja) 表示制御装置、および表示制御プログラム
JP2021037916A (ja) 表示制御装置及び表示制御プログラム
JP7302702B2 (ja) 表示制御装置及び表示制御プログラム
JP7255443B2 (ja) 表示制御装置及び表示制御プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20870803

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20870803

Country of ref document: EP

Kind code of ref document: A1