WO2017134861A1 - Head-up display device

Head-up display device

Info

Publication number: WO2017134861A1
Authority: WIPO (PCT)
Prior art keywords: display, vehicle, virtual image, head, display device
Application number: PCT/JP2016/080181
Other languages: English (en), Japanese (ja)
Inventors: 真希 花田, 昭央 三沢, 望 下田, 裕司 藤田, 卓見 中田, 滝澤 和之
Original Assignee: 日立マクセル株式会社 (Hitachi Maxell, Ltd.)
Priority claimed from JP2016061245A (external priority)
Priority claimed from JP2016062925A (external priority)
Application filed by 日立マクセル株式会社
Publication of WO2017134861A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/74: Projection arrangements for image reproduction, e.g. using eidophor

Definitions

  • the present invention relates to a technology for a head-up display device, and more particularly, to a technology effective when applied to a head-up display device using AR (Augmented Reality).
  • AR: Augmented Reality
  • HUD: head-up display
  • For in-vehicle display devices including HUDs, the vehicle may vibrate or tilt depending on the driving situation, which can impair the visibility of the displayed video or the appropriateness of the displayed content.
  • Patent Document 1 describes acquiring a rotation component generated in the vehicle body as the inclination of the vehicle body, applying a three-dimensional rotation correction to the image based on it, and determining the position and inclination at which the rotation-corrected image is displayed and projected.
  • Patent Document 2 describes that, when displaying a distance scale on the HUD, information on the point where the host vehicle is currently traveling and on the point where it is planned to travel is obtained from the map data of the navigation device; the inclination angle of the road on which the vehicle travels is obtained from this information, and the displayed distance scale is corrected from the ground using a correction coefficient according to the inclination angle.
  • Patent Document 3 describes controlling the display position of the generated image according to the detected driving situation, such as a right/left turn or acceleration/deceleration, for example shifting the display leftward when a left turn is detected and rightward when a right turn is detected.
  • Patent Document 4 describes that the display position of video information is moved in a direction in which the driver's field of view is secured according to the vehicle state.
  • Patent Documents: JP 2013-237320 A; JP 2007-55365 A; JP 2006-7867 A; JP 2015-202842 A
  • A HUD projects a video onto the windshield and makes the driver perceive the video as a virtual image outside the vehicle.
  • There is also a HUD that realizes a so-called AR function, which makes it possible to show the driver information about objects in the scenery by superimposing a virtual image on them.
  • Even in such an AR-HUD, adjustments are necessary to maintain the visibility and appropriateness of the displayed image in accordance with the traveling state of the vehicle.
  • According to Patent Document 4, it is possible to move the display area itself of the HUD according to the vehicle state.
  • However, the technique described in Patent Document 4 is intended to secure the driver's field of view even when the vehicle state changes, so the display area of the HUD is moved to a position where it does not obstruct the driver.
  • Consequently, the virtual image may not be superimposed on the actual landscape within the driver's field of view, and the AR function does not work effectively.
  • Therefore, an object of the present invention is to provide a head-up display device that can display a virtual image appropriately superimposed on the actual landscape in accordance with the traveling state of the vehicle.
  • A representative head-up display device of the present invention projects a video onto the windshield of a vehicle and shows the driver a virtual image superimposed on the landscape in front of the vehicle.
  • It includes a vehicle information acquisition unit that acquires various types of vehicle information detectable by the vehicle, and a control unit that controls the display of the video based on the vehicle information acquired by the vehicle information acquisition unit.
  • It further includes an image display device that forms the video based on an instruction from the control unit, a mirror that reflects the formed video and projects it onto the windshield, a mirror driving unit that changes the angle of the mirror based on an instruction from the control unit, and a display distance adjustment mechanism that adjusts the display distance of the virtual image from the driver.
  • The control unit adjusts the angle of the mirror via the mirror driving unit, based on the vehicle information, so that the virtual image is displayed superimposed on the landscape as seen by the driver.
  • Another representative aspect is a head-up display device that changes the display position of the virtual image when the virtual image overlaps a predetermined object in the landscape.
  • According to a representative embodiment of the present invention, an AR-HUD can display a virtual image appropriately superimposed on the actual landscape according to the traveling state of the vehicle.
  • [Brief description of the drawings: the figures show outlines of configuration and operation examples of the head-up display device according to Embodiments 1 and 2 of the present invention.]
  • FIG. 2 is a diagram showing an outline of an example of an operation concept of the head-up display device according to the first embodiment of the present invention.
  • An image displayed on the image display device 30, such as a projector or an LCD (Liquid Crystal Display), is reflected by the mirror 51 and the mirror 52 (for example, a free-form surface mirror or a mirror whose shape is asymmetric with respect to the optical axis) and is projected onto the windshield 3 of the vehicle 2.
  • By viewing the image projected onto the windshield 3, the driver 5 perceives it as a virtual image in front of the transparent windshield 3.
  • The display position of the virtual image seen by the driver 5 is adjusted in the vertical direction by adjusting the angle of the mirror 52 and thereby the position at which the image is projected onto the windshield 3.
  • The display distance can also be adjusted, for example displaying the virtual image near (for example, 2 to 3 m away) or far (for example, 30 to 40 m away).
  • the AR function is realized by adjusting the display position and the display distance so that the virtual image is superimposed on the scenery outside the vehicle (roads, buildings, people, etc.).
  • FIG. 1 is a functional block diagram showing an outline of an overall configuration example of the head-up display device according to the first embodiment of the present invention.
  • the AR-HUD 1 mounted on the vehicle 2 includes, for example, a vehicle information acquisition unit 10, a control unit 20, a video display device 30, a display distance adjustment mechanism 40, a mirror driving unit 50, a mirror 52, and a speaker 60.
  • In FIG. 2, the vehicle 2 is drawn in the shape of a passenger car, but the shape is not particularly limited, and the device can be applied as appropriate to vehicles in general.
  • The vehicle information acquisition unit 10 includes information acquisition devices, such as the various sensors described later, installed in each part of the vehicle 2. It detects events occurring in the vehicle 2 and, at predetermined intervals, detects and acquires the values of various parameters relating to the driving situation, thereby acquiring and outputting the vehicle information 4.
  • As illustrated, the vehicle information 4 may include, for example, speed information, gear information, steering wheel steering angle information, lamp lighting information, external light information, distance information, infrared information, engine ON/OFF information, camera video information (inside/outside the vehicle), acceleration gyro information, GPS (Global Positioning System) information, navigation information, vehicle-to-vehicle communication information, road-to-vehicle communication information, and the like of the vehicle 2 (a data-structure sketch follows below).
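  • The kinds of information listed above can be thought of as one record handed from the vehicle information acquisition unit 10 to the control unit 20. The following is a minimal Python sketch of such a record; the field names and types are hypothetical choices for illustration, since the patent does not define a concrete data layout.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VehicleInfo:
    """One snapshot of 'vehicle information 4' (hypothetical field names)."""
    speed_kmh: float = 0.0               # speed information
    gear: str = "P"                      # gear (shift position) information
    steering_angle_deg: float = 0.0      # steering wheel steering angle information
    headlights_on: bool = False          # lamp lighting information
    ambient_lux: float = 0.0             # external light (illuminance) information
    distance_to_object_m: Optional[float] = None   # distance information
    engine_on: bool = False              # engine ON/OFF information
    accel_gyro: tuple = (0.0, 0.0, 0.0)  # acceleration/angular velocity information
    gps: Optional[tuple] = None          # GPS information (latitude, longitude)
    nav_route: list = field(default_factory=list)  # navigation information
```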
  • The control unit 20 has a function of controlling the operation of the AR-HUD 1 and is implemented by, for example, a CPU (Central Processing Unit) and software executed by it, or by hardware such as a microcomputer or an FPGA (Field Programmable Gate Array). As shown in FIG. 2, the control unit 20 drives the video display device 30 to form the video to be displayed as a virtual image based on the vehicle information 4 acquired from the vehicle information acquisition unit 10, and this video is appropriately reflected by the mirror 52 and the like and projected onto the windshield 3. Then, control such as adjusting the display position of the virtual image display area and adjusting the display distance of the virtual image is performed by the methods described later.
  • the video display device 30 is a device configured by, for example, a projector or an LCD, and forms a video for displaying a virtual image based on an instruction from the control unit 20 and projects or displays the video.
  • The display distance adjustment mechanism 40 is a mechanism for adjusting the distance of the displayed virtual image from the driver 5 based on an instruction from the control unit 20; one or more of the various display distance adjustment methods described later are implemented in it.
  • the mirror driving unit 50 adjusts the angle of the mirror 52 based on an instruction from the control unit 20 and adjusts the position of the virtual image display area in the vertical direction. The adjustment of the position of the virtual image display area will be described later.
  • The speaker 60 performs audio output related to the AR-HUD 1, for example, voice guidance from the navigation system and audio output when notifying the driver 5 via the AR function.
  • FIG. 3 is a diagram showing an outline of an example of a hardware configuration related to acquisition of the vehicle information 4 in the head-up display device of the present embodiment.
  • The vehicle information 4 is acquired, under the control of an ECU (Electronic Control Unit) 21, by information acquisition devices such as the various sensors connected to the ECU 21.
  • These devices include, for example, a vehicle speed sensor 101, a shift position sensor 102, a steering wheel steering angle sensor 103, a headlight sensor 104, an illuminance sensor 105, a chromaticity sensor 106, a distance measuring sensor 107, an infrared sensor 108, an engine start sensor 109, an acceleration sensor 110, a gyro sensor 111, a temperature sensor 112, a road-to-vehicle communication wireless receiver 113, a vehicle-to-vehicle communication wireless receiver 114, a camera (inside the vehicle) 115, a camera (outside the vehicle) 116, a GPS receiver 117, and a VICS (Vehicle Information and Communication System, a road traffic information communication system; registered trademark, hereinafter the same) receiver 118. It is not always necessary to include all of these devices, and other types of devices may be included; the vehicle information 4 acquirable by the provided devices can be used as appropriate.
  • the vehicle speed sensor 101 acquires speed information of the vehicle 2.
  • the shift position sensor 102 acquires current gear information of the vehicle 2.
  • the steering wheel angle sensor 103 acquires steering wheel angle information.
  • the headlight sensor 104 acquires lamp lighting information related to ON / OFF of the headlight.
  • the illuminance sensor 105 and the chromaticity sensor 106 acquire external light information.
  • the distance measuring sensor 107 acquires distance information between the vehicle 2 and an external object.
  • the infrared sensor 108 acquires infrared information related to the presence / absence and distance of an object at a short distance of the vehicle 2.
  • the engine start sensor 109 detects engine ON / OFF information.
  • the acceleration sensor 110 and the gyro sensor 111 acquire acceleration gyro information including acceleration and angular velocity as information on the posture and behavior of the vehicle 2.
  • the temperature sensor 112 acquires temperature information inside and outside the vehicle.
  • The road-to-vehicle communication wireless receiver 113 and the vehicle-to-vehicle communication wireless receiver 114 respectively acquire road-to-vehicle communication information received by communication between the vehicle 2 and roads, signs, traffic signals, and the like, and vehicle-to-vehicle communication information received by communication between the vehicle 2 and other nearby vehicles.
  • the camera (inside the vehicle) 115 and the camera (outside the vehicle) 116 respectively capture the moving image of the situation inside and outside the vehicle and acquire camera video information (inside / outside the vehicle).
  • the camera (inside the vehicle) 115 captures, for example, the posture of the driver 5, the position of the eyes, the movement, and the like. By analyzing the obtained moving image, for example, it is possible to grasp the fatigue status of the driver 5, the position of the line of sight, and the like.
  • The camera (outside the vehicle) 116 captures the situation around the vehicle 2, such as in front or behind. By analyzing the obtained video, it is possible to grasp, for example, surrounding buildings and topography, road surface conditions (rain, snow, ice, unevenness, and the like), and the presence or absence of moving objects such as other vehicles and people.
  • The GPS receiver 117 and the VICS receiver 118 acquire GPS information obtained by receiving GPS signals and VICS information obtained by receiving VICS signals, respectively. They may be implemented as part of a car navigation system that acquires and uses these pieces of information.
  • FIG. 4 is a functional block diagram showing details of a configuration example of the head-up display device of the present embodiment.
  • the example of FIG. 4 shows a case where the video display device 30 is a projector, and the video display device 30 includes, for example, each unit such as a light source 31, an illumination optical system 32, and a display element 33.
  • the light source 31 is a member that generates illumination light for projection.
  • a high-pressure mercury lamp, a xenon lamp, an LED (Light-Emitting-Diode) light source, a laser light source, or the like can be used.
  • the illumination optical system 32 is an optical system that collects the illumination light generated by the light source 31 and irradiates the display element 33 with more uniform illumination light.
  • the display element 33 is an element that generates an image to be projected.
  • a transmissive liquid crystal panel, a reflective liquid crystal panel, a DMD (Digital Micromirror Device) (registered trademark) panel, or the like can be used.
  • The control unit 20 includes units such as an ECU 21, an audio output unit 22, a nonvolatile memory 23, a memory 24, a light source adjustment unit 25, a distortion correction unit 26, a display element driving unit 27, a display distance adjustment unit 28, and a mirror adjustment unit 29.
  • The ECU 21 acquires the vehicle information 4 via the vehicle information acquisition unit 10 and, as necessary, records, stores, and reads the acquired information in the nonvolatile memory 23 and the memory 24. The nonvolatile memory 23 may also store setting information such as setting values and parameters for the various controls. In addition, the ECU 21 generates the video data of the virtual image displayed by the AR-HUD 1 by executing a dedicated program.
  • the audio output unit 22 outputs audio information via the speaker 60 as necessary.
  • the light source adjustment unit 25 adjusts the light emission amount of the light source 31 of the video display device 30. When there are a plurality of light sources 31, they may be controlled individually.
  • the distortion correction unit 26 corrects the video distortion caused by the curvature of the windshield 3 by image processing.
  • the display element drive unit 27 sends a drive signal corresponding to the video data corrected by the distortion correction unit 26 to the display element 33 to generate an image to be projected.
  • the display distance adjustment unit 28 drives the display distance adjustment mechanism 40 to adjust the display distance of the image projected from the image display device 30.
  • the mirror adjustment unit 29 changes the angle of the mirror 52 via the mirror driving unit 50 to move the virtual image display area up and down. The position adjustment of the virtual image display area will also be described later.
  • FIG. 5 is a diagram showing details of an example of a configuration related to display distance adjustment in the head-up display device of the present embodiment.
  • The display distance adjustment unit 28 of the control unit 20 includes, as individual units controlled by the ECU 21, for example, a functional liquid crystal film ON/OFF control unit 281, a lens movable unit 282, a dimming mirror ON/OFF control unit 283, a diffusion plate movable unit 284, and an optical filter movable unit 285. Correspondingly, the display distance adjustment mechanism 40 includes a functional liquid crystal film 401, a lens movable mechanism 402, a light control mirror 403, a diffusion plate movable mechanism 404, and an optical filter movable mechanism 405. The methods by which these units adjust the display distance of the virtual image will be described later.
  • The AR-HUD 1 does not need to include all of these units and devices; it suffices to include the units necessary for implementing whichever of the virtual image display distance adjustment methods described later is adopted.
  • FIG. 6 is a flowchart showing an outline of an example of an initial operation of the head-up display device of the present embodiment.
  • When started, the AR-HUD 1 first acquires vehicle information by the vehicle information acquisition unit 10 based on an instruction from the control unit 20 (S02).
  • Then, the control unit 20 calculates a suitable brightness level based on the external light information acquired by the illuminance sensor 105, the chromaticity sensor 106, and the like in the vehicle information 4 (S03), and the light source adjustment unit 25 sets the light emission amount of the light source 31 so as to obtain the calculated brightness level (S04). For example, when the outside light is bright, the brightness level is set high; when the outside light is dark, the brightness level is set low.
  • Next, the ECU 21 determines and generates the video (for example, an initial image) to be displayed as a virtual image (S05), and the distortion correction unit 26 performs a process of correcting distortion on the generated video (S06). Thereafter, the display element driving unit 27 drives and controls the display element 33 to form the projected video (S07). The video is thereby projected onto the windshield 3, and the driver 5 can see it as a virtual image.
  • In addition, the ECU 21 or the display distance adjustment unit 28 calculates and determines the display distance of the virtual image (S08), and the display distance adjustment unit 28 drives the display distance adjustment mechanism 40 to control the display distance of the video projected from the video display device 30 (S09).
  • When the start of the HUD is instructed, a HUD-ON signal is output, and the control unit 20 determines whether this signal has been received (S11). If it has not been received, the control unit waits a predetermined time for the HUD-ON signal (S12) and repeats this waiting process (S12) until it determines in step S11 that the signal has been received. If it is determined in step S11 that the HUD-ON signal has been received, the normal operation of the AR-HUD 1 described later is started (S13), and the series of initial operations ends (a code sketch of this start-up sequence follows).
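  • Read as pseudocode, steps S02 to S13 form a straight-line start-up sequence followed by a wait loop for the HUD-ON signal. The following is a minimal Python sketch of that control flow only; the hud object and all of its members (vehicle_info_unit, light_source, and so on) are hypothetical stubs assumed here for illustration, not interfaces defined in the patent.

```python
import time

def initial_operation(hud):
    """Sketch of the initial operation (S02-S13); all hud members are assumed stubs."""
    info = hud.vehicle_info_unit.acquire()               # S02: acquire vehicle information
    level = hud.calc_brightness_level(info.ambient_lux)  # S03: suitable brightness level
    hud.light_source.set_level(level)                    # S04: set light emission amount
    video = hud.generate_initial_video(info)             # S05: determine/generate the video
    video = hud.distortion_corrector.correct(video)      # S06: correct windshield distortion
    hud.display_element.drive(video)                     # S07: form the projected video
    distance = hud.calc_display_distance(info)           # S08: determine virtual image distance
    hud.distance_mechanism.set_distance(distance)        # S09: control the display distance
    while not hud.received_hud_on_signal():              # S11: HUD-ON signal received?
        time.sleep(0.1)                                  # S12: wait a predetermined time
    hud.start_normal_operation()                         # S13: begin normal operation
```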
  • FIG. 7 is a flowchart showing an outline of an example of the normal operation of the head-up display device of the present embodiment. In the normal operation as well, the basic processing flow is substantially the same as the initial operation shown in FIG. 6. First, the AR-HUD 1 acquires vehicle information by the vehicle information acquisition unit 10 based on an instruction from the control unit 20 (S21). Then, the control unit 20 performs a brightness level adjustment process based on the external light information acquired by the illuminance sensor 105, the chromaticity sensor 106, and the like in the vehicle information 4 (S22).
  • FIG. 8 is a flowchart showing an outline of an example of brightness level adjustment processing of the head-up display device of the present embodiment.
  • First, a suitable brightness level is calculated based on the acquired external light information (S221). Then, by comparing it with the currently set brightness level, it is determined whether the brightness level needs to be changed (S222). If no change is necessary, the brightness level adjustment process ends. If a change is necessary, the light source adjustment unit 25 controls the light emission amount of the light source 31 to set the new brightness level (S223), and the process ends.
  • In step S222, even when there is a difference between the preferred brightness level calculated in step S221 and the currently set brightness level, it may be determined that a change is necessary only when the difference is equal to or greater than a predetermined threshold (see the sketch below).
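  • A minimal sketch of S221 to S223 under stated assumptions: the logarithmic mapping from illuminance to a brightness level and the threshold value are illustrative choices of this sketch, not values given in the patent, and light_source is a hypothetical object with a set_level method.

```python
import math

def adjust_brightness(light_source, ambient_lux, current_level, threshold=0.1):
    """S221-S223: change the light source level only when the difference is large enough."""
    # S221: calculate a suitable brightness level (0.0-1.0) from external light
    # information; the log mapping below is an illustrative assumption.
    target = min(1.0, math.log10(max(ambient_lux, 1.0)) / 5.0)
    # S222: compare with the currently set level; ignore small differences
    if abs(target - current_level) < threshold:
        return current_level                 # no change necessary
    # S223: control the light emission amount to realize the new level
    light_source.set_level(target)
    return target
```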
  • Next, based on the latest vehicle information 4 acquired in step S21, the video displayed as a virtual image is changed from the current one as necessary, and the changed video is determined and generated (S23).
  • There can be many patterns for changing the display content based on the vehicle information 4, depending on the content of the acquired information, combinations thereof, and so on. Examples include changing the value of the constantly displayed speed indication when the speed information changes, and displaying or erasing a guidance arrow graphic, or changing its shape and display position, based on the navigation information.
  • adjustment / correction processing is performed to maintain visibility, appropriateness of display contents, and the like according to the traveling state of the vehicle 2.
  • the mirror adjustment process is performed to change the angle of the mirror 52 via the mirror driving unit 50 and move the virtual image display area up and down (S24).
  • a vibration correction process for correcting the display position of the image in the display area with respect to the vibration of the vehicle 2 is performed (S25). Detailed contents of the adjustment / correction processing in steps S24 and S25 will be described later.
  • the distortion correction unit 26 performs distortion correction processing on the adjusted / corrected image (S26), and then the display element driving unit 27 drives and controls the display element 33 to form a projected image ( S27). Then, the display distance adjustment unit 28 calculates and determines the display distance of the virtual image (S28), and the display distance adjustment unit 28 drives the display distance adjustment mechanism 40 to display the image projected from the image display device 30. The distance is controlled (S29).
  • When the HUD is to be stopped, a HUD-OFF signal is output to the AR-HUD 1, and it is determined whether this signal has been received (S30). If the HUD-OFF signal has not been received, the process returns to step S21, and the series of normal operations is repeated until the signal is received. If it is determined that the HUD-OFF signal has been received, the series of normal operations ends.
  • FIG. 9 is a diagram showing an outline of an example in which the position of the virtual image display area is adjusted up and down in the head-up display device of the present embodiment.
  • the upper stage schematically shows the state of the slope of the road on which the vehicle 2 is traveling and the state of the visual field of the driver 5 as viewed from the side.
  • The lower stage schematically shows the situation in front of the vehicle as viewed by the driver 5 and the position of the virtual image display area 6 (rectangular broken-line frame) displayed superimposed on it.
  • The center diagram shows a case where the road gradient (in the upward direction) at the current location of the vehicle 2 is greater than the road gradient of the road ahead, that is, a case where the road ahead descends relative to the current position.
  • In this case, relative to the height of the driver 5's field of view determined by the gradient at the position of the vehicle 2 (solid-line frame in the figure), the field of view for seeing the road ahead must move downward (dotted-line frame in the figure). If the display position of the virtual image display area 6 remained at the basic display position (solid rectangle), the virtual image could not be superimposed on the scenery in front of the vehicle by the AR function; therefore, the display area 6 itself must be moved downward for superimposed display.
  • The right-hand diagram shows a case where the road gradient (in the upward direction) at the current location of the vehicle 2 is smaller than the road gradient of the road ahead, that is, a case where the road ahead is uphill.
  • In this case, relative to the height of the driver 5's field of view determined by the gradient at the position of the vehicle 2 (solid-line frame in the figure), the field of view for seeing the road ahead must move upward (dotted-line frame in the figure). If the display position of the virtual image display area 6 remained at the basic display position (solid rectangle), the virtual image could not be superimposed on the scenery in front of the vehicle by the AR function; therefore, the display area 6 itself must be moved upward for superimposed display.
  • The situations in which the vertical position of the virtual image display area 6 must be moved according to the traveling situation are not limited to cases where, as in the example of FIG. 9, there is a certain difference between the gradient at the current position and the gradient of the road ahead.
  • For example, during high-speed traveling, the driver 5 generally looks farther ahead than usual, so the height of the field of view moves upward. Accordingly, in order to superimpose a virtual image on the scenery outside the vehicle, including other vehicles and the like that are farther ahead than in normal traveling, it may be necessary to move the display area 6 upward.
  • In addition, the height of the driver 5's eyes itself changes with changes in the driver's posture while the vehicle 2 is traveling, so the height of the field of view can also move vertically for this reason.
  • Therefore, in the present embodiment, the mirror driving unit 50 controls the angle of the mirror 52 in accordance with the traveling state of the vehicle 2, and the vertical position of the virtual image display area is adjusted as in the example of FIG. 9.
  • FIG. 10 is a flowchart showing an outline of an example of the mirror adjustment process in step S24 of FIG.
  • When the mirror adjustment process starts, first the current angle of the mirror 52 is acquired (S241), and then, based on the vehicle information 4, the current values of the parameters related to adjusting the angle of the mirror 52 (that is, adjusting the display position of the virtual image display area) are acquired (S242).
  • the types of parameters required may vary depending on the conditions under which the display position of the display area is adjusted. For example, in the example illustrated in FIG. 9, a value indicating a difference (relative gradient) between the gradient of the current position of the vehicle 2 and the gradient of the road ahead is acquired as the related parameter value.
  • the gradient of the current position can be grasped from the tilt information of the vehicle 2 obtained from the acceleration gyro information. It is also possible to grasp the slope of the road ahead by analyzing camera video information outside the vehicle. It is also possible to obtain the current position and the gradient of the road ahead based on three-dimensional road / terrain information obtained from the navigation information.
  • Thereafter, the target angle of the mirror 52 is calculated on the basis of predetermined criteria and conditions (S243). Which logic is used to calculate the target angle from which parameters may differ depending on the conditions under which the display position of the display area is adjusted. For example, in the example shown in FIG. 9, when the absolute value of the relative gradient between the current location and the road ahead is greater than or equal to a predetermined threshold, the target angle of the mirror 52 is determined according to the sign of the relative gradient.
  • the predetermined threshold may be, for example, 1 / x (x is a predetermined value) of the vertical FOV (Field Of View: viewing angle) of the virtual image display area.
  • In the present embodiment, the target angle of the mirror 52 is calculated based on the current parameter values acquired in step S242, but the target angle may instead be calculated based on a prediction of the near future made from the current parameter values and the history of past values.
  • the tendency of the value transition may be analyzed based on the past history of the parameter value, and the near future parameter value may be predicted based on the tendency. It is also possible to predict the surrounding situation of the vehicle 2 in the near future by analyzing the camera image information ahead of the vehicle, or to grasp the road situation ahead of the vehicle 2 based on the navigation information.
  • Next, it is determined whether there is a difference between the current angle of the mirror 52 acquired in step S241 and the target angle calculated in step S243 (S244).
  • In this determination, for example, it may be judged that there is a difference only when the difference is equal to or greater than a predetermined threshold, and that there is no difference when it is below the threshold. It may further be judged that there is a difference only when the state with a difference continues for a certain time or more. This allows events in which the inclination of the vehicle 2 changes only temporarily or instantaneously, such as riding over a step like a curb, to be excluded from the adjustment targets of the mirror 52.
  • If it is determined in step S244 that there is no angle difference, the mirror adjustment process ends as it is; that is, the angle of the mirror 52 is not adjusted, and the current angle is maintained.
  • If there is a difference, the mirror 52 is rotated in the indicated direction toward the target angle (S245). Specifically, a mirror adjustment signal for rotating the mirror 52 is output to the mirror driving unit 50. It is then determined whether the mirror 52 has reached the target angle (S246); if not, the process returns to step S245 and the rotation of the mirror 52 continues, that is, the output of the mirror adjustment signal to the mirror driving unit 50 continues.
  • When the target angle is reached, the rotation of the mirror 52 is stopped (S247), that is, the output of the mirror adjustment signal to the mirror driving unit 50 is stopped, and the series of mirror adjustment processes ends (a sketch of this angle-seeking loop follows).
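  • Steps S241 to S247 amount to a closed-loop seek of the mirror toward a target angle derived from the relative gradient. In the Python sketch below, the mirror and drive objects, the gradient-to-angle gain, the value of x for the FOV threshold, and the tolerance are all assumptions made for illustration; the duration condition of S244 (ignoring short-lived differences) is omitted for brevity.

```python
def mirror_adjustment(mirror, drive, relative_gradient_deg,
                      fov_deg=5.0, x=4, gain=0.5, tol_deg=0.05):
    """Sketch of S241-S247; objects and numeric values are illustrative assumptions."""
    current = mirror.get_angle()                       # S241: current mirror angle
    # S242/S243: derive a target angle from the relative gradient, acting only
    # when it is at least 1/x of the vertical FOV of the display area.
    if abs(relative_gradient_deg) >= fov_deg / x:
        target = current + gain * relative_gradient_deg
    else:
        target = current
    if abs(target - current) <= tol_deg:               # S244: no meaningful difference
        return                                         # keep the current angle
    while abs(mirror.get_angle() - target) > tol_deg:  # S246: target reached?
        direction = 1 if target > mirror.get_angle() else -1
        drive.rotate(direction)                        # S245: keep the adjust signal on
    drive.stop()                                       # S247: stop the adjust signal
```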
  • FIG. 11 is a flowchart showing an outline of an example of the vibration correction process in step S25 of FIG.
  • When the vibration correction process starts, first, information on the vibration amount of the vehicle 2 is acquired based on the vehicle information 4 (S251). For example, the vibration amount (the short-period vertical displacement of the vehicle 2) can be grasped from the acceleration gyro information, the camera video information outside the vehicle, and the like.
  • In the present embodiment, the vibration information is acquired from the current vehicle information 4; however, for example, the road surface condition ahead of the vehicle 2 may be grasped by analyzing the camera video information of the area in front of the vehicle, and the vibration amount of the vehicle 2 in the near future may be predicted from it.
  • If the vibration amount is smaller than a predetermined threshold, the vibration correction process ends as it is because the vibration is minute; that is, the display position of the video is not corrected for the vibration.
  • If the vibration amount is equal to or greater than the threshold, the display shift amount of the video within the display area is calculated (S253). For example, the display shift amount is calculated from the vibration amount of the vehicle 2 based on the ratio between the actual height of the vehicle 2 and the height of the virtual image display area. Then, based on the calculated display shift amount, the display position of the video within the display area is offset up or down (S254), and the series of vibration correction processes ends (a sketch of this computation follows).
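  • The following Python sketch makes the S251 to S254 computation explicit: the vertical vibration of the vehicle is scaled by the ratio of the display area height to the vehicle height and converted into a pixel offset. The threshold, the heights, the pixel resolution, and the display object are assumptions of this sketch, as is the sign convention (shifting opposite to the vibration).

```python
def vibration_correction(display, vibration_m, threshold_m=0.005,
                         vehicle_height_m=1.5, area_height_m=0.3,
                         area_height_px=480):
    """S251-S254 sketch: offset the video vertically in proportion to the vibration."""
    if abs(vibration_m) < threshold_m:        # threshold check: minute vibration,
        return 0                              # no correction of the display position
    # S253: display shift from the ratio of display area height to vehicle height
    shift_m = vibration_m * (area_height_m / vehicle_height_m)
    shift_px = round(shift_m / area_height_m * area_height_px)
    display.offset_vertical(-shift_px)        # S254: offset against the vibration
    return shift_px
```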
  • the display distance adjustment unit 28 of the control unit 20 drives the display distance adjustment mechanism 40 to adjust the display distance of the video projected from the video display device 30.
  • Hereinafter, the methods of adjusting the display distance of the virtual image by the units of the display distance adjustment unit 28 and the display distance adjustment mechanism 40 shown in FIG. 5 will be described.
  • FIG. 12 is a diagram showing an outline of an example of display distance adjustment using the functional liquid crystal film 401 in the head-up display device of the present embodiment.
  • In this example, a plurality of functional liquid crystal films 401 are used as the diffusion plate (diffuser) 41a. As shown in FIGS. 12(a) and (b), by changing which part is put into the white state for each area of each functional liquid crystal film 401, the focal distance is changed area by area, and the display distance of the virtual image (the distance between the eye position of the driver 5 and the virtual image display position) is changed.
  • FIG. 13 is a diagram showing an outline of a configuration example of the diffusion plate 41a made of the functional liquid crystal film 401.
  • the functional liquid crystal film 401 is a film that can control a transmission state and a white state by electricity.
  • the white state portion of the functional liquid crystal film 401 functions as a diffusion plate, and the image projected by the projector 30a forms an image in this white state portion.
  • control is performed so that the plurality of functional liquid crystal films 401 are individually in a white state for each of a plurality of areas.
  • the display position of the virtual image based on the image projected from the projector 30a is determined in accordance with the distance between the white portion of each functional liquid crystal film 401 and the lens 42a.
  • The plurality of functional liquid crystal films 401 are arranged at different distances from the lens 42a, and for each area, the functional liquid crystal film ON/OFF control unit 281 shown in FIG. 5 puts one of the functional liquid crystal films 401 into the white state, so that the display distance of the virtual image can be changed for each area of the video projected from the projector 30a.
  • For example, for a target area (for example, the uppermost part), by setting only the functional liquid crystal film 401 arranged at the position closest to the lens 42a to the white state and the others to the transmissive state, the display distance of the corresponding virtual image can be made the shortest. Conversely, by setting only the functional liquid crystal film 401 arranged at the position farthest from the lens 42a to the white state and the others to the transmissive state, the display distance of the corresponding virtual image can be made the longest (a selection sketch follows).
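  • In software terms, this amounts to choosing, per display area, which one of N films (ordered by their distance from the lens 42a) is switched to its white, diffusing state. A minimal sketch, assuming a hypothetical film object with a set_white(area, on) method; the ordering convention is likewise an assumption of this sketch.

```python
def set_area_distance(films, area, nearest_to_lens=True):
    """Put exactly one functional liquid crystal film into the white state for `area`.

    `films` is assumed ordered from nearest to farthest from the lens 42a; the
    white film acts as the diffuser there, fixing the virtual image distance.
    """
    chosen = 0 if nearest_to_lens else len(films) - 1  # shortest vs longest distance
    for i, film in enumerate(films):
        film.set_white(area, i == chosen)              # all others stay transmissive

# Usage sketch: show the uppermost area at the shortest virtual image distance.
# set_area_distance(film_stack, area="top", nearest_to_lens=True)
```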
  • FIG. 14 is a diagram showing an outline of an example of display distance adjustment using a plurality of mirrors in the head-up display device of the present embodiment.
  • In this example, a plurality of mirrors 51a are arranged between the LCD 30b and the lens 42a as illustrated, and the image from the LCD 30b is reflected by a different mirror 51a for each area before entering the lens 42a.
  • the distance from the LCD 30b to the lens 42a is different for each area, and the display distance of the virtual image can be changed accordingly.
  • For example, for the area whose image is reflected by the mirror 51a arranged at the position farthest from the LCD 30b (and hence farthest from the lens 42a), the optical path is the longest, so the display distance of the virtual image can be made the longest; conversely, for the area reflected by the nearest mirror 51a, the display distance of the corresponding virtual image can be made the shortest.
  • the number of mirrors 51a is not limited to three as illustrated, and can be appropriately configured according to the number of areas.
  • FIG. 15 is a diagram showing an outline of an example of display distance adjustment using a movable lens in the head-up display device of the present embodiment.
  • the image projected from the projector 30a is imaged by a diffusion plate (diffuser) 41b and then incident on the mirror 52 via a movable lens 42b provided in a plurality of areas.
  • each movable lens 42b can be individually moved along the optical axis direction by the lens movable portion 282 and the lens movable mechanism 402 shown in FIG.
  • the display position of the virtual image based on the image projected from the projector 30a is determined according to the distance between the diffusion plate 41b and each movable lens 42b. Therefore, by moving the movable lens 42b, the focal distance can be changed for each area, and the display distance of the virtual image can be changed.
  • the display distance of the corresponding virtual image can be reduced by moving the movable lens 42b to a position close to the diffusion plate 41b as in the uppermost area.
  • the display distance of the corresponding virtual image can be increased by moving the movable lens 42b to a position far from the diffusion plate 41b as in the lowermost area.
  • the number of movable lenses 42b is not limited to three as illustrated, and can be appropriately configured according to the number of areas.
  • FIG. 16 is a diagram schematically showing an example of display distance adjustment using the light control mirror 51b in the head-up display device of the present embodiment.
  • The dimming mirror 51b is configured by arranging a plurality of light control mirrors 403 between the LCD 30b and the lens 42a so as to form a matrix when viewed in cross-section, as illustrated. As shown in FIGS. 16(a) and (b), by changing which light control mirror 403 is in the mirror state, the distance from the LCD 30b to the lens 42a varies from area to area, so the display distance of the virtual image can be changed.
  • FIG. 17 is a diagram showing an outline of a configuration example of the light control mirror 403.
  • the light control mirror 403 is a member such as a film, a sheet, or glass that can control a transmission state and a mirror state by electricity.
  • the dimming mirror 403 in the transmissive state transmits the image from the LCD 30b, and only the dimming mirror 403 in the mirror state reflects the image in the direction of the lens 42a.
  • The plurality of light control mirrors 403 arranged in a matrix as viewed in cross-section are controlled by the dimming mirror ON/OFF control unit 283 so that only one light control mirror 403 is in the mirror state for each row and each column (that is, for each area).
  • For example, for the area corresponding to the column of light control mirrors 403 closest to the lens 42a, by setting only the mirror 403 in the lowermost row to the mirror state and the others to the transmissive state, the optical path length from the LCD 30b to the lens 42a can be made the shortest, and the display distance of the corresponding virtual image can be made the shortest. Conversely, for the area corresponding to the column farthest from the lens 42a, by setting only the mirror 403 in the uppermost row to the mirror state and the others to the transmissive state, the optical path length can be made the longest, relatively longer than in the other areas, and the display distance of the corresponding virtual image can be made the longest.
  • the number of the light control mirrors 403 is not limited to 3 rows and 3 columns as shown, and can be appropriately configured according to the number of areas.
  • FIG. 18 is a diagram showing an outline of an example of display distance adjustment using a movable diffusion plate in the head-up display device of the present embodiment.
  • the image projected from the projector 30a is imaged by a movable diffuser plate (movable diffuser) 41c and then incident on the mirror 52 through the lens 42a.
  • the movable diffusion plate 41c can be moved and / or rotated along the optical axis direction by the diffusion plate movable portion 284 and the diffusion plate movable mechanism 404 shown in FIG.
  • the display position of the virtual image based on the image projected from the projector 30a is determined according to the distance and / or inclination between the movable diffusion plate 41c and the lens 42a. Therefore, the display distance of the virtual image can be changed by changing the focal length by moving and / or rotating the movable diffusion plate 41c.
  • the display distance of the virtual image can be reduced by moving and / or rotating the movable diffusion plate 41c to a position close to the lens 42a. Conversely, the display distance of the virtual image can be increased by moving and / or rotating the movable diffusion plate 41c to a position far from the lens 42a.
  • FIG. 19 is a diagram showing an outline of an example of display distance adjustment using the movable optical filter in the head-up display device of the present embodiment.
  • In this example, a movable optical filter 43a is installed between the lens 42a and the diffusion plate (diffuser) 41b, and as shown in FIGS. 19(a) and (b), by inserting the movable optical filter 43a into, and removing it from, the optical path, the focal distance is changed for each area and the display distance of the virtual image is changed.
  • the optical filter is a member having a characteristic of changing a focal length by a single optical component such as a lens or a combination thereof.
  • A plurality of optical filters having different refractive indexes are combined into a single optical filter whose refractive index differs for each region, forming a movable optical filter 43a that can be inserted into and removed from the optical path.
  • The movable optical filter 43a is inserted into and removed from the optical path by the optical filter movable unit 285 and the optical filter movable mechanism 405 shown in FIG. 5, whereby the display distance of the virtual image can be changed for each area.
  • For example, by making the focal length of the optical filter region corresponding to the lowermost area the smallest, the display distance of the corresponding virtual image can be made short, and by making the focal length of the region corresponding to the uppermost area the largest, the display distance of the corresponding virtual image can be made long. For an area whose light does not pass through the optical filter, the display distance of the virtual image is determined by the distance between the diffusion plate 41b and the lens 42a, and can therefore be made farther than in the areas passing through the optical filter.
  • the number of regions having different focal lengths in the movable optical filter 43a is not limited to three as shown in the figure, and can be appropriately configured according to the number of areas.
  • FIG. 20 is a diagram schematically showing an example of display distance adjustment using a comb-like optical filter in the head-up display device of the present embodiment.
  • the image projected from the projector 30a is imaged by a diffusion plate (diffuser) 41b and then incident on the mirror 52 via the comb-like optical filter 43b and the lens 42a.
  • the comb-like optical filter 43b is a member provided with a comb-like optical filter portion that has the same function as a lens and can change the display distance of a virtual image according to the focal length.
  • By associating the optical filter portions and the portions without an optical filter with the lines of the image projected from the projector 30a (not necessarily alternating lines; any lines can be set), the display distance of the virtual image can be changed in units of lines.
  • the display distance of the virtual image based on the image of the line corresponding to the optical filter portion can be reduced, and the display distance of the virtual image based on the image of the line corresponding to the portion without the optical filter can be increased.
  • As described above, according to the head-up display device of the first embodiment of the present invention, even when the traveling state of the vehicle 2 is such that the virtual image could not otherwise be superimposed on the scenery in front of the vehicle 2, the AR function can be realized by appropriately superimposing the virtual image on the front landscape. Furthermore, the display distance of the virtual image can be appropriately adjusted according to the traveling state and the like.
  • In addition to the adjustment of the virtual image display area and the display distance in the first embodiment, the head-up display device according to the second embodiment of the present invention further adjusts the display contents and the display method of the virtual image itself according to the situation, such as the scenery in front of the vehicle 2. This makes it possible to superimpose a more appropriate virtual image on the front landscape at a more appropriate position and in a more appropriate manner.
  • the apparatus configuration and basic operation contents of the AR-HUD 1 are the same as those in the first embodiment, and a description thereof will be omitted.
  • FIGS. 21 and 22 are diagrams showing an outline of an example of avoiding a virtual image overlapping an object in the front landscape or the like.
  • Each figure schematically shows an example of the front landscape that the driver 5 of the vehicle 2 sees from the driver's seat through the windshield 3, together with the state of the virtual image (content) in the display area 6 projected onto the windshield 3.
  • In the example in the upper part of FIG. 21, an arrow graphic for indicating and navigating the traveling direction and the like, and an instrument icon with characters displaying the vehicle speed and the like ("25 km/h" in the figure; hereinafter referred to as the "vehicle speed display"), are displayed as virtual images in the display area 6.
  • Here, the curve mirror in the front landscape is hidden by the arrow graphic, indicating that it is difficult for the driver 5 to visually recognize the contents of the curve mirror.
  • the display position of the arrow graphic is moved to a position avoiding the curve mirror.
  • the display position of the arrow graphic is moved to the left side until it does not overlap the curve mirror.
  • The movement is not limited to units of pixels; for example, the display area 6 may be divided into a predetermined number of blocks, and the arrow graphic may be moved in units of blocks from the block to which it belongs to another block (for example, an adjacent block). Whether an object such as the curve mirror overlaps the virtual image can be determined, for example, by image processing on the camera video information outside the vehicle in the vehicle information 4.
  • the content shown in the curve mirror may be enlarged and displayed as a virtual image.
  • the contents reflected in the curve mirror can be acquired from the camera video information outside the vehicle in the vehicle information 4 by image processing, and can be further enlarged by image processing.
  • the virtual image obtained by enlarging the contents of the curve mirror is displayed at a position that does not overlap the arrow graphic.
  • The virtual image obtained by enlarging the contents of the curve mirror may be displayed at a position that does not obstruct the driver's field of view, or, conversely, it may be displayed within the driver's field of view so that the driver can see it without moving the viewpoint.
  • FIG. 23 is a diagram showing an outline of another example for avoiding the overlap of virtual images with objects in a frontal landscape or the like.
  • FIG. 23 shows a state in which an alert display (in the figure, a graphic combining a warning mark and an exclamation mark symbol) is displayed as a virtual image, in addition to the arrow graphic and the vehicle speed display, in order to call attention to an object such as a road sign.
  • In the upper diagram, the road sign in the front scenery is hidden by the alert display, indicating that it is difficult for the driver 5 to visually recognize the content of the road sign.
  • the display position of the alert display is moved to a position where the road sign is avoided.
  • Thereby, the visibility of objects that the driver needs to pay attention to, such as road signs, is improved.
  • In the lower diagram, the alert display shown at the upper right of the display area 6 is moved to the lower left, the diagonally farthest position; in this case as well, the method of determining the destination is not particularly limited, and, as in the example of FIG. 21, a minimal movement to a nearby position that does not overlap the road sign may be used instead.
  • Note that moving the display position does not mean that the alert display is first shown at the position where it hides the road sign and then moved to a position avoiding the sign; rather, before anything is displayed, the display position is adjusted so that the object will not be hidden, and the alert is then displayed there.
  • FIG. 24 is a flowchart showing an outline of an example of the flow of processing for adjusting the display content and display method of a virtual image.
  • The generation and display of the display contents (contents) of the virtual image in the AR-HUD 1 are performed in step S05 of the initial operation of FIG. 6 and in step S23 of the normal operation of FIG. 7 in the first embodiment.
  • the processing content is shown by taking the display video change / determination processing in step S23 in the normal operation of FIG. 7 as an example.
  • the ECU 21 performs standard content generation processing (S231) and event content generation processing (S232).
  • the standard content basically refers to content such as vehicle speed display that is always displayed in the display area 6 while the vehicle 2 is traveling.
  • the event content refers to content such as an alert display that is displayed as necessary based on the driving situation of the vehicle 2 (including the situation of the scenery in front). In any case, a plurality of contents may be generated.
  • Thereafter, the ECU 21 performs display position adjustment processing (S233) and display color adjustment processing (S234) to adjust the display position, display color, and the like of the generated contents in relation to the front landscape grasped from the camera video information. Then, display video data for each adjusted content is generated (S235), and the process ends. The video data generated here is projected via the display element driving unit 27 in the subsequent step S27 of FIG. 7 in the first embodiment.
  • FIG. 25 is a flowchart showing an outline of an example of the flow of the standard content generation process (step S231) in the display video determination / change process of FIG.
  • First, the necessary standard contents are generated (S1100). Then, a default display position is set for each generated standard content (S1200); for example, as shown in FIGS. 21 to 23, a vehicle speed display is generated and its display position is set at the lower right of the display area 6. Then, a display color setting process (S1300) that sets an initial value for the display color is performed for each standard content, and the process ends. Processing related to setting and adjusting the display color will be described later.
  • FIG. 26 is a flowchart showing an outline of an example of the flow of event content generation processing (step S232) in the display video determination / change processing of FIG.
  • First, the vehicle information 4 acquired in step S21 of FIG. 7 in the first embodiment is analyzed (S2100), and it is determined whether an event that requires the display of event content is occurring (S2200).
  • In the examples of FIGS. 21 to 23, approaching a corner or curve that requires the display of an arrow graphic, or the presence ahead of a road sign or the like that requires an alert display to call attention, corresponds to such events.
• In addition, various other events, such as the presence of a pedestrian or the approach of a preceding vehicle, are also targeted.
• If it is determined in step S2200 that an event is occurring, the necessary event content is generated (S2300). Then, a default display position is set for each generated event content (S2400). For example, as shown in the upper diagram of FIG. 21, an arrow graphic is generated and its display position is set at the center of the display area 6; as shown in the upper diagram of FIG. 23, an alert display is generated and its display position is set on the road sign at the upper right of the display area 6. Then, for each event content, a display color setting process (S2500) that sets an initial value as the display color is performed, and the process ends. The processing relating to setting and adjusting the display color will be described later.
• If it is determined in step S2200 that no event is occurring, it is determined whether event content is currently being displayed (S2600). If no event content is being displayed, the process ends without doing anything. If event content is being displayed, it is no longer necessary and is therefore erased (S2700), and the process ends.
• FIG. 27 is a flowchart showing an outline of an example of the flow of the display position adjustment process (step S233) in the display video determination/change process of FIG. 24.
• First, the camera video information outside the vehicle in the vehicle information 4 acquired in step S21 of FIG. 7 in the first embodiment is analyzed, and it is determined whether the front scenery or the like contains an object whose concealment by the display should be avoided (S3100).
• Such objects include, for example, curve mirrors and road signs as in the examples of FIGS. 21 to 23, as well as traffic lights, pedestrians, two-wheeled vehicles, preceding vehicles, and the like.
• Next, the current display position of each standard content and event content generated in steps S231 and S232 of the processing flow of FIG. 24 is collated with the coordinates of each object (S3200). It is then determined whether the display position of each content needs to be adjusted, that is, whether the display of each content obstructs the view of an object (S3300).
• For example, the display position of the arrow graphic or the alert display is compared by image processing or the like with the position of the curve mirror or road sign to determine whether the curve mirror or road sign is hidden.
• If it is determined in step S3300 that no display position needs to be adjusted, the process ends. If adjustment is needed, the target content is moved and a new display position is set (S3400). As described above, the method for determining the destination is not particularly limited. Instead of, or in addition to, adjusting the position of the content, the display size of the content may be reduced so that the object is not hidden (S3500).
• After making these adjustments, the process returns to step S3200, and the above series of steps is repeated until no display position needs further adjustment. Since another object may exist at the destination of the moved content, adjustment must continue until the display position is appropriate. If this adjustment takes too long, the display cannot be performed at a timing suitable for the driver; it is therefore preferable, for example, to recognize the position of a curve mirror or road sign early, before reaching the intersection, and adjust the display position then.
• By the above processing, virtual image content can be displayed so as to avoid the objects, improving the visibility of the objects.
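• A minimal sketch of the adjustment loop of FIG. 27, assuming contents and objects are represented as axis-aligned rectangles; the candidate positions, iteration bound, and minimum scale are illustrative assumptions.

```python
def overlaps(a, b):
    # Rectangles as (x, y, w, h); True if a and b intersect.
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def adjust_position(content_rect, object_rects, candidates, min_scale=0.5):
    """Move (S3400) and, if needed, shrink (S3500) a content rectangle until
    it hides none of the recognized objects (curve mirrors, road signs, ...)."""
    rect = content_rect
    for _ in range(20):                      # bound the S3200/S3300 loop
        if not any(overlaps(rect, o) for o in object_rects):
            return rect                      # no adjustment needed (S3300: no)
        if candidates:                       # try the next candidate position
            x, y = candidates.pop(0)
            rect = (x, y, rect[2], rect[3])
        else:                                # no free position: reduce the size
            w, h = rect[2] * 0.8, rect[3] * 0.8
            if w < content_rect[2] * min_scale:
                break                        # give up shrinking further
            rect = (rect[0], rect[1], w, h)
    return rect

sign = (60, 10, 20, 20)                      # road sign at the upper right
alert = (55, 5, 30, 20)                      # alert display overlapping it
print(adjust_position(alert, [sign], candidates=[(10, 60), (10, 10)]))
```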
  • FIG. 28 is a diagram showing an outline of an example of the difference in visibility with respect to road surface sign information due to weather or the like.
• As in FIG. 21 and the like, the figure schematically shows an example of the scenery ahead that the driver 5 of the vehicle 2 sees through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• The upper diagram of FIG. 28 shows, as the front scenery, the condition of the road in the daytime in fine weather, while the lower diagram shows the condition of the same road at night and/or in bad weather.
• As shown, a road sign that can be clearly recognized in the daytime in fine weather may be hidden by snow piled on the shoulder in bad weather, and road markings such as a pedestrian crossing that are clearly visible in fine daytime conditions become difficult to see at night or in bad weather.
  • FIG. 29 is a diagram showing an outline of an example of a configuration that assists the driver 5 by detecting sign information that is difficult to recognize in the video information.
• The upper diagram of FIG. 29 shows a situation in which the road surface ahead of the vehicle 2 is hard to recognize because visibility is poor at night or in bad weather. As a result, as shown in the lower left diagram, the road surface sign information is not recognized from the video information.
• On the other hand, the map information acquired by the vehicle 2 as the vehicle information 4 (which can be obtained from the GPS information and navigation information), or the contents of the travel log recorded and accumulated during traveling, are analyzed.
• The road surface sign information recognized from the video information in this way is then compared with the road information that can be grasped from data such as the map information and the travel log. If there is a discrepancy, a virtual image to assist the driver 5 is displayed. That is, the detection of the discrepancy is treated as an event, and the event content generation process shown in FIG. 26 (step S232 in the processing flow of FIG. 24) generates and displays a virtual image assisting the driver 5 as event content.
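• The mismatch check itself can be sketched minimally as follows; the set-based representation of recognized and expected signs is an assumption made for the example.

```python
def detect_hidden_signs(recognized, expected):
    """recognized: sign names actually found in the camera video information.
    expected: signs known at the current position from map info or travel logs.
    Returns the signs to draw as assisting virtual images."""
    return [s for s in expected if s not in recognized]

expected_here = {"pedestrian_crossing", "stop_sign"}   # from map / travel log
recognized_now = {"stop_sign"}                          # from image recognition
for sign in detect_hidden_signs(recognized_now, expected_here):
    # Event content (step S232 of FIG. 24): display the sign as a virtual image
    # at the position/coordinates taken from the map data, and optionally
    # notify by voice via the speaker.
    print(f"display virtual image for hidden sign: {sign}")
```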
• FIG. 30 is a diagram showing an outline of an example of assistance in which a virtual image is displayed over the front scenery at night or in bad weather.
• That is, sign information such as road signs and road surface markings that cannot be recognized from the video information is detected and grasped from data such as the map information and travel logs, and is displayed as virtual images: road markings such as the pedestrian crossing ahead are displayed as virtual images at the positions/coordinates grasped from the map information and the travel log.
  • the virtual image of the hidden road sign is displayed as an icon at an appropriate position.
• In addition, a voice notification such as "There is a pedestrian crossing ahead. Please be careful." may be given in the vehicle via the speaker 60.
• Further, the virtual image of the road sign shown in the example of FIG. 30(a) may be highlighted with decorations such as flashing or blinking.
  • the highlighting means is not limited to these, and various other means such as sequentially changing the display color can be appropriately employed.
• For example, when a pedestrian is present, the degree of emphasis of the virtual image of the pedestrian crossing may be further increased, or the display time of the virtual image may be made longer than when no pedestrian is present; that is, the contents of the highlighting may differ depending on the presence or absence of pedestrians, in other words the degree of risk.
• When a road sign can be recognized from the video information, a virtual image assisting the driver 5 may be generated and displayed based on the recognized sign. In this case the virtual image can be generated and displayed without using map information or travel log information. The recognition result of the road sign may also be combined with the map information and the travel log, whereby more accurate information can be obtained.
• The displayed virtual image is erased when the vehicle 2 has passed the place where the road sign or road marking requiring virtual image display exists. That is, in the event content generation process of FIG. 26 described above, the event is regarded as no longer occurring, and the virtual image that is the event content is erased.
• The above method is not limited to nighttime or bad weather; it can also be applied, for example, to driving on a road whose markings are faded and hard to see, with the same effects as described above.
  • FIG. 31 is a diagram showing an outline of an example of adjusting the display position of content with respect to the driver's viewpoint range.
• The figure schematically shows an example of the front scenery viewed by the driver 5 of the vehicle 2 through the windshield 3 from the driver's seat, together with the state of the virtual image (content) in the display area 6 projected onto the windshield 3.
• FIG. 31 shows a display example of content when turning right at an intersection. As in the example of FIG. 21 and the like, a vehicle speed display and an arrow graphic instructing a right turn are displayed. In the example of FIG. 31, an icon serving as an alert display notifying the situation around the vehicle 2 (the presence of a pedestrian, etc.) is additionally displayed at the lower left of the display area 6. Furthermore, although it is neither front scenery nor virtual image content, the approximate viewpoint range of the driver 5 is indicated by a dotted circle for convenience of explanation.
• When turning right, the viewpoint of the driver 5 tends to shift toward the right while checking objects such as the curve mirror, road signs, and pedestrians.
• In the upper diagram, the alert display notifying the surrounding situation appears at a position away from this viewpoint range; it is therefore easily overlooked by the driver 5, or cannot be seen without an unnecessary movement of the viewpoint during driving.
• Therefore, in the present embodiment, the display position of the alert display notifying the surrounding situation is moved into the viewpoint range of the driver 5, as shown in the lower diagram of FIG. 31.
• This improves the visibility of the alert display even at an intersection or the like so that it is not easily overlooked, and also reduces the viewpoint movement of the driver 5.
• The destination of the alert display is not limited to a position inside the viewpoint range, as long as the alert is not easily overlooked in relation to the viewpoint range of the driver 5; for example, it may be moved to a position near the viewpoint range.
• Note that the alert display is not first shown at the upper position in FIG. 31 and then moved to the lower position by position adjustment; rather, the appropriate display position for the driver's viewpoint at that time (for example, the lower position in FIG. 31 when turning right) is determined first, and the alert is then displayed there.
  • FIG. 32 is a flowchart outlining another example of the adjustment process of the display position and the like for controlling the display position of the alert display.
• In this example, steps S3010 to S3040 are added before step S3100 of the display position adjustment process flow shown in FIG. 27 described above.
  • in-vehicle camera image information in the vehicle information 4 is analyzed, and information on the viewpoint of the driver 5 is acquired by a known technique (S3010).
• Where the driver 5 mainly looks depends on the driving situation.
  • the position of the viewpoint may be estimated based on other information such as the steering status of the steering wheel.
• Next, the default display position of the target content is collated with the viewpoint position acquired in step S3010 (S3020), and it is determined whether the content display position is more than a predetermined distance from the viewpoint position (S3030). If so, the target content is moved to a position close to the viewpoint position (for example, within a certain distance of it), and a new display position is provisionally set (S3040).
• Thereafter, the same processing as in step S3100 and the subsequent steps of the display position adjustment flow of FIG. 27 is performed, and the whole process ends.
• Thus, even when the content has been moved near the viewpoint, the subsequent steps readjust the display position appropriately so that objects in the front scenery are not hidden.
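• A minimal sketch of steps S3010 to S3040, assuming display-area coordinates normalized to 0..1; the distance thresholds and the move-toward-viewpoint rule are illustrative assumptions.

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pre_adjust_for_viewpoint(content_pos, viewpoint, max_dist=0.3,
                             near_dist=0.15):
    """If the content is farther than max_dist from the driver's viewpoint
    (S3030), move it to within near_dist of the viewpoint (S3040)."""
    d = distance(content_pos, viewpoint)
    if d <= max_dist:
        return content_pos                       # close enough, keep as-is
    # Move along the straight line towards the viewpoint, stopping near_dist
    # short of it so the alert does not sit directly on the gaze point.
    t = 1.0 - near_dist / d
    return (content_pos[0] + t * (viewpoint[0] - content_pos[0]),
            content_pos[1] + t * (viewpoint[1] - content_pos[1]))

# Right turn: viewpoint at the lower right, alert originally at the lower left.
print(pre_adjust_for_viewpoint(content_pos=(0.1, 0.8), viewpoint=(0.7, 0.6)))
```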
  • FIG. 33 is a diagram showing an overview of an example of displaying the situation of an intersection as content.
• FIG. 33 shows an example in which a virtual image expressing the situation of the intersection ahead of the vehicle 2 is displayed as a bird's-eye view. In the overhead view, for example, the positions and movements of the own vehicle, other vehicles, pedestrians, and the like are displayed in a state close to real time.
  • Such information can be obtained, for example, when the vehicle 2 acquires data of camera images installed at intersections through road-to-vehicle communication, or acquires information such as the position of other vehicles through vehicle-to-vehicle communication.
• With such a display, the positional relationship and distance between the host vehicle and a pedestrian or another vehicle can be grasped more accurately and efficiently than when an alert is simply displayed for an approaching pedestrian as in the example of FIG. 31 described above.
• Furthermore, the AR-HUD 1 of the present embodiment can change the display color according to the risk level of an object, making it possible to call attention appropriately.
  • FIG. 34 is a diagram showing an outline of an example in which the display color of the content is changed according to the inter-vehicle distance from the preceding vehicle.
• The figure shows an example in which, when the presence of a preceding vehicle is recognized based on the camera video information outside the vehicle and the like, a mark (a ring-shaped figure in the example of FIG. 34) is displayed as a virtual image superimposed on the preceding vehicle.
• The inter-vehicle distances to the preceding vehicle shown are X, Y, and Z, respectively (X > Y > Z).
• In this example, the inter-vehicle distance is divided into three ranges according to the degree of danger (for example, safe, somewhat dangerous, and dangerous). When the inter-vehicle distance is X, belonging to the safe range, the mark is displayed in display color A (for example, blue); when it is Y, belonging to the somewhat dangerous range, in display color B (for example, yellow); and when it is Z, belonging to the dangerous range with a small inter-vehicle distance, in display color C (for example, red). The display color is thus switched as the inter-vehicle distance to the preceding vehicle changes.
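• Sketched minimally below; the actual threshold values X, Y, Z are not specified in the text, so the numbers are assumptions.

```python
SAFE_DIST = 40.0      # m, lower bound of the "safe" range (assumed)
WARN_DIST = 20.0      # m, lower bound of the "somewhat dangerous" range

def mark_color(inter_vehicle_distance_m):
    if inter_vehicle_distance_m >= SAFE_DIST:
        return "A (e.g. blue)"      # safe
    if inter_vehicle_distance_m >= WARN_DIST:
        return "B (e.g. yellow)"    # somewhat dangerous
    return "C (e.g. red)"           # dangerous

for d in (45.0, 25.0, 10.0):        # roughly the X, Y, Z cases of FIG. 34
    print(d, "->", mark_color(d))
```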
  • FIG. 35 is a diagram showing an outline of an example in which the display color of the content is changed according to the approach speed to the curve.
• Here, an example is shown in which an arrow graphic indicating the turning direction is displayed as a virtual image. The approach speeds to the curve are P, Q, and R, respectively (P < Q < R). When the approach speed is P, belonging to a safe range, the arrow graphic is displayed in display color A (for example, blue); when it is Q, belonging to a somewhat dangerous range, in display color B (for example, yellow); and when it is R, belonging to a dangerous range, in display color C (for example, red). The display color is switched as the approach speed changes (normally, as the vehicle decelerates).
  • FIG. 36 is a diagram showing an outline of another example in which the display color of the content is changed according to the degree of risk.
  • the indicators related to the degree of risk are not limited to the above-mentioned inter-vehicle distance and the approach speed to the curve. For example, various indicators such as the speed of the vehicle 2 and the distance to an object such as a pedestrian are assumed.
• FIG. 36 shows an example in which the display color of the characters of the vehicle speed display is changed according to the speed of the vehicle 2. Three ranges may be set according to the absolute value of the vehicle speed, or according to the relationship with the speed limit of the road being traveled.
• For example, the vehicle may be judged to belong to the safe range when the vehicle speed is at or below the speed limit, to the somewhat dangerous range when it exceeds the speed limit by up to a certain amount, and to the dangerous range when it exceeds even that speed.
• The index ranges related to the degree of risk may also be changed according to external factors, for example to encourage the driver to keep a larger inter-vehicle distance.
• FIG. 36 also shows an example in which a mark (a figure of a frame surrounding the pedestrian in the example of FIG. 36) is displayed as an alert superimposed on a recognized pedestrian.
  • the display color of the alert display is changed according to the distance from the pedestrian.
• In the above examples, attention is called by the display color of the content, but other methods may be combined as appropriate.
• For example, in addition to using display color C (for example, red), a warning mark as shown in the example of FIG. 23 may be additionally displayed.
• The display form may also be changed for emphasis, for example by changing the ring-shaped mark shown in the example of FIG. 34 to a double ring.
• A voice warning such as "The inter-vehicle distance is too short. Please be careful." may also be given.
• In the above, examples in which the degree of risk is divided into three ranges have been described, but the number of classifications is not limited to three.
  • FIG. 37 is a flowchart showing an overview of an example of the flow of display color setting processing.
• First, an index related to the degree of risk (such as the inter-vehicle distance or the approach speed to a curve) in the vehicle information 4 is analyzed (S2511), and it is determined to which risk range the index belongs (S2512). Then, it is determined whether the index value has changed by more than a certain amount over the past predetermined number of determinations (N times) with respect to the determined range (S2513); if it has not, the process ends without changing the display color. This control prevents the display color from switching frequently when the index value fluctuates around a boundary value between ranges.
• If it is determined in step S2513 that the value has changed by a certain amount or more, the range after the change is determined (S2514), and the display color of the content is changed and set according to that range, as in the examples of FIGS. 34 to 36 above (S2515a to c).
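• The anti-chattering condition of step S2513 can be read in more than one way; the sketch below implements one plausible reading, in which the color only switches after the newly determined range has persisted for N consecutive determinations. All thresholds are assumptions.

```python
from collections import deque

SECTIONS = {0: "A (safe)", 1: "B (somewhat dangerous)", 2: "C (dangerous)"}

class ColorSetter:
    def __init__(self, thresholds=(40.0, 20.0), n=3):
        self.thresholds = thresholds        # range boundaries (illustrative)
        self.history = deque(maxlen=n)      # last N range determinations
        self.current = None                 # currently displayed color range

    def classify(self, distance_m):         # S2512
        if distance_m >= self.thresholds[0]: return 0
        if distance_m >= self.thresholds[1]: return 1
        return 2

    def update(self, distance_m):
        section = self.classify(distance_m)
        self.history.append(section)
        stable = (len(self.history) == self.history.maxlen
                  and len(set(self.history)) == 1)   # S2513 (one reading)
        if self.current is None or (stable and section != self.current):
            self.current = section          # S2514/S2515: change and set
        return SECTIONS[self.current]

setter = ColorSetter()
for d in (41, 39, 41, 39, 38, 37):          # jitter near the 40 m boundary
    print(d, "->", setter.update(d))        # color holds until stably changed
```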
• In the above examples, the risk level is divided into three ranges, display colors A to C are assigned to them, and the display switches between these colors.
• However, the driver 5 may be startled if the display color of the content suddenly switches to display color C (for example, red). Therefore, the display color may instead be switched more finely.
  • FIG. 38 is a diagram showing an outline of an example of the display color of the content to be switched according to the degree of risk.
• In the example of FIG. 38, the risk level is divided into three, and display color A (for example, blue), display color B (for example, yellow), and display color C (for example, red) are assigned from the safer side, respectively.
• The risk level may be divided more finely, with intermediate colors between display colors A and B and between display colors B and C assigned to the additional ranges.
• In this way, the change at each switching of the display color (in particular the switch from display color B to display color C) becomes more gradual, prompting attention more smoothly.
• Alternatively, a continuous gradation from display color A through display color B to display color C can be used. In this case there is no need to classify the risk index into ranges, and the display color can be set directly from the value of the index.
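• A minimal sketch of such a direct, gradation-based setting; the RGB anchor colors and the normalization of the risk index are assumptions made for the example.

```python
def lerp(c1, c2, t):
    # Linear interpolation between two RGB triples, t in [0, 1].
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

BLUE, YELLOW, RED = (0, 0, 255), (255, 255, 0), (255, 0, 0)

def risk_color(risk):            # risk normalized to 0.0 (safe) .. 1.0 (danger)
    risk = min(max(risk, 0.0), 1.0)
    if risk < 0.5:               # display color A -> B over the first half
        return lerp(BLUE, YELLOW, risk * 2)
    return lerp(YELLOW, RED, (risk - 0.5) * 2)   # B -> C over the second half

for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(r, "->", risk_color(r))
```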
  • FIG. 39 is a diagram showing an outline of an example of improving the visibility when an alert display is superimposed on the preceding vehicle.
• As in FIG. 21 and the like, the figure schematically shows an example of the scenery ahead that the driver 5 of the vehicle 2 sees through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• FIG. 39 shows a state in which, when the presence of a preceding vehicle is recognized based on the camera video information outside the vehicle and the like, a mark (a ring-shaped figure) is displayed as a virtual image superimposed on the preceding vehicle, with a warning mark such as that in the earlier examples added to it.
• In the upper diagram of FIG. 39, the color of the body of the preceding vehicle and the color of the warning mark are similar, so the mark is difficult to see.
• Therefore, in the lower diagram of FIG. 39, the display color of the alert display such as the warning mark is changed to a color of a family different from that of the body of the preceding vehicle. This improves the visibility of the alert display and prevents it from being overlooked.
  • FIG. 40 is a flowchart showing an outline of an example of the flow of display color adjustment processing.
  • information on the current display color of the target content is acquired (S4100).
• Next, the color information of the area corresponding to the display position of the content in the front scenery or the like is acquired (S4200).
• In the example of FIG. 39 the target area is the preceding vehicle, but it is not limited to this; the process can be applied to any area on which an alert display is superimposed.
  • the color information is obtained by, for example, an average value of luminance of each pixel in the target area.
  • the current display color of the content acquired in step S4100 is compared with the color information of the target area (S4300), and it is determined whether or not the display color of the content needs to be changed (S4400).
• Whether the display color needs to be changed is determined, for example, by judging with a known method or criterion whether the two colors belong to the same color family; if they do, it is determined that the display color needs to be changed. In that case, a color of a different family is set as the content display color (S4500), and the process ends. If it is determined that the display color does not need to be changed, the process ends as it is.
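• As one possible concrete reading of the "same color family" judgment, hue comparison in HSV space can be used; the hue threshold and the complementary-color fallback below are illustrative assumptions, not the only known criterion.

```python
import colorsys

def hue_deg(rgb):
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return h * 360.0

def same_family(rgb1, rgb2, threshold_deg=30.0):
    diff = abs(hue_deg(rgb1) - hue_deg(rgb2))
    return min(diff, 360.0 - diff) < threshold_deg

def adjust_display_color(content_rgb, area_rgb):
    """area_rgb: e.g. the average pixel value of the region of the preceding
    vehicle on which the alert is superimposed (S4200)."""
    if not same_family(content_rgb, area_rgb):
        return content_rgb                      # no change needed (S4400: no)
    # S4500: pick a color of a different family, e.g. the complementary hue.
    h = (hue_deg(content_rgb) + 180.0) % 360.0
    r, g, b = colorsys.hsv_to_rgb(h / 360.0, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))

print(adjust_display_color(content_rgb=(255, 60, 40),    # reddish warning mark
                           area_rgb=(200, 50, 50)))      # reddish vehicle body
```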
• In the above example, the display color of the content is changed to a different color family to improve visibility against the scenery on which it is superimposed, such as the preceding vehicle, but the method is not limited to this.
• For example, visibility may instead be improved by shifting the display position of the content to a position that does not overlap an object such as the preceding vehicle, using the display position adjustment method shown in FIG. 27.
  • FIG. 41 is a diagram showing an overview of an example in which alert display is prioritized over normal display content.
• As in FIG. 21 and the like, the figure schematically shows an example of the scenery ahead that the driver 5 of the vehicle 2 sees through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• FIG. 41 shows a state in which a vehicle speed display and an arrow graphic indicating, together with the distance, that a corner is approaching are displayed as the normal contents. In addition, when the presence of a pedestrian is recognized based on the camera video information outside the vehicle and the like, an alert for the pedestrian is displayed as a virtual image (a figure consisting of a frame surrounding the pedestrian and a warning mark). In the upper diagram of FIG. 41, this pedestrian alert is mixed in with the normally displayed contents such as the vehicle speed display and the arrow graphic, and is hard to see.
• Therefore, in the lower diagram of FIG. 41, the vehicle speed display is reduced in size so that the priority of the alert display, in terms of its visual impact and appeal to the driver 5, is raised relative to the other contents. This improves the visibility of the alert display and prevents it from being overlooked.
  • FIG. 42 is a diagram showing an overview of another example in which alert display is prioritized over normal display content.
  • the method for improving the visibility of the alert display is not limited to the above, and various other methods can be used.
• For example, as shown in FIG. 42, the shape of the arrow graphic may be simplified and the display of the distance to the corner removed to simplify the display; the vehicle speed display may also be moved to a position away from the object such as the pedestrian, and reduced in size.
  • FIG. 43 is a flowchart showing an overview of an example of a flow of event content generation processing for performing priority control of alert display.
• First, the presence or absence of an obstacle requiring an alert display is analyzed (S2110), and it is determined whether a target obstacle exists (whether an alert display is necessary) (S2210).
• If it is determined in step S2210 that a target obstacle exists (an alert display is required), the status of the other contents displayed near the obstacle is acquired (S2310). It is then determined whether the display priority of these other contents needs to be lowered (that is, whether the priority of the alert display needs to be raised relatively) (S2311). If so, the display priority of the other contents is lowered, for example by the methods shown in FIGS. 41 and 42 (S2312).
• Subsequently, event content for the alert display related to the target obstacle is generated (S2313). Then, a default display position is set for each generated alert display content (S2410), a display color setting process (S2510) that sets an initial value as the display color is performed, and the process ends.
• If it is determined in step S2210 that no target obstacle exists, it is determined whether the corresponding alert is currently being displayed (S2610). If it is not, the process ends without doing anything. If it is being displayed, it is determined whether the priority of the alert display had been raised (that is, whether the display priority of other contents had been lowered) (S2710). If so, the display priority of the other contents is restored, relatively lowering that of the alert display (S2711). Thereafter, the alert display content is erased (S2712), and the process ends.
• By the above processing, the priority of the alert display can be raised relative to the normally displayed contents, improving the visibility of the alert display.
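• A minimal sketch of this priority control; the tone-down operations (scaling, simplification) and field names are assumptions following the examples of FIGS. 41 and 42.

```python
def tone_down(content):        # lower display priority, cf. FIGS. 41 and 42
    content["scale"] = 0.6     # e.g. shrink the vehicle speed display
    content["simplified"] = True
    return content

def restore(content):          # S2711: restore the original priority
    content["scale"] = 1.0
    content["simplified"] = False
    return content

def update_alerts(obstacle_present, alert_shown, normal_contents):
    if obstacle_present and not alert_shown:
        for c in normal_contents:              # S2310-S2312
            tone_down(c)
        return True                            # S2313: alert content generated
    if not obstacle_present and alert_shown:
        for c in normal_contents:              # S2711: restore priorities
            restore(c)
        return False                           # S2712: alert content erased
    return alert_shown

speed = {"kind": "vehicle_speed", "scale": 1.0, "simplified": False}
shown = update_alerts(True, False, [speed]);  print(shown, speed)
shown = update_alerts(False, shown, [speed]); print(shown, speed)
```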
  • FIG. 44 is a diagram showing an overview of an example of switching the contents of the alert display according to the attribute of a pedestrian or the like.
• As in FIG. 21 and the like, the figure schematically shows an example of the scenery ahead that the driver 5 of the vehicle 2 sees through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• FIG. 44 shows a state in which a vehicle speed display is shown as the normal content, and in which, when the presence of pedestrians is recognized based on the camera video information outside the vehicle and the like, an alert for each pedestrian is displayed as a virtual image (a figure consisting of a frame surrounding the pedestrian and a warning mark). In the upper diagram of FIG. 44, even when the pedestrians include a child, the alert display is the same as that for an adult pedestrian, so the presence of a child, who requires greater attention, is difficult to recognize.
• Therefore, in the lower diagram of FIG. 44, the alert display for a child pedestrian is distinguished from that for an adult pedestrian; the alert display is likewise distinguished for elderly pedestrians. This makes it easy to recognize the presence of children or elderly pedestrians, who need more attention than adult pedestrians.
• In the example in the figure, adults, children, and the elderly are distinguished by differences in the color, shape, line type, and the like of the alert display, but the method of distinction is not restricted to these.
• For example, the color and shape of the warning mark itself may be changed according to the attribute.
  • the distinction methods based on colors, shapes, line types, and the like may be appropriately combined.
  • FIG. 45 is a flowchart showing an overview of an example of a flow of event content generation processing for switching alert display according to the attribute of a pedestrian.
• First, the presence or absence of a pedestrian requiring an alert display is analyzed (S2120), and it is determined whether a target pedestrian exists (whether an alert display is necessary) (S2220).
• At this time, the age group of the target pedestrian is also analyzed by a known image recognition technique or the like.
• If it is determined in step S2220 that a target pedestrian exists (an alert display is required), the age group of the target pedestrian is determined (S2320), and alert display content corresponding to the determined attribute, as shown in the examples above, is generated (S2321a to c).
• Here, pedestrians are classified by age group into three types, children, adults, and the elderly, but other classification methods may be used.
• For example, other pedestrians who require attention may be classified separately, such as people walking dogs, people wearing headphones or earphones, people operating mobile terminal devices, people who are drunk, and people pushing strollers. Furthermore, people riding bicycles or using wheelchairs may also be classified.
• If it is determined in step S2220 that no target pedestrian exists, it is determined whether the corresponding alert is currently being displayed (S2620). If it is not, the process ends without doing anything; if it is, the target alert display content is erased (S2720), and the process ends.
• By the above processing, the contents of the alert display can be switched according to the attribute of the recognized pedestrian or the like, so that pedestrians requiring attention can be recognized more reliably.
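• Sketched minimally below, with the attribute classes and style values as illustrative assumptions (the image-recognition step that produces the attribute is outside the sketch).

```python
ALERT_STYLES = {
    "adult":   {"frame_color": "yellow", "line": "solid",  "marks": 1},
    "child":   {"frame_color": "red",    "line": "double", "marks": 2},
    "elderly": {"frame_color": "orange", "line": "dashed", "marks": 2},
}

def make_pedestrian_alert(attribute):
    # Fall back to the adult style for unclassified pedestrians (S2321a-c).
    style = ALERT_STYLES.get(attribute, ALERT_STYLES["adult"])
    return {"kind": "pedestrian_alert", "attribute": attribute, **style}

for a in ("adult", "child", "elderly"):
    print(make_pedestrian_alert(a))
```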
  • FIG. 46 is a diagram showing an outline of an example of improving visibility when displaying a vehicle speed display.
• As in FIG. 21 and the like, the figure schematically shows an example of the scenery ahead that the driver 5 of the vehicle 2 sees through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• FIG. 46 shows a state in which the vehicle speed display is always displayed. In the upper diagram of FIG. 46, the vehicle speed display overlaps the preceding vehicle and is difficult to see.
• Information that is always displayed, such as the vehicle speed display, is by default placed at a position where it is unlikely to overlap the preceding vehicle. However, when the inter-vehicle distance becomes smaller than in normal driving, for example in a traffic jam, such content may nevertheless overlap the preceding vehicle and become hard to see.
• Therefore, in the lower diagram of FIG. 46, the vehicle speed display is moved to a position that does not overlap the preceding vehicle and where the amount of information in the front scenery is relatively small (the lower right in the example in the figure).
• This prevents the visibility of the content from being impaired even when the inter-vehicle distance shrinks, as in a traffic jam, and content such as the vehicle speed display would otherwise overlap the preceding vehicle.
• At this time, it is desirable to control the destination so that it does not overlap an object such as a traffic light or road sign, for example by adopting the method shown in FIG. 27.
  • FIG. 47 is a diagram showing an outline of another example of improving visibility when displaying a vehicle speed display.
  • the method for improving the visibility of the vehicle speed display is not limited to the above, and various other methods can be used.
• For example, as shown in FIG. 47(a), the display size of the vehicle speed display may be reduced, without changing its display position, so that it does not overlap the preceding vehicle.
• Alternatively, as shown in FIG. 47(b), even when the vehicle speed display overlaps the preceding vehicle, visibility may be improved by rendering the characters in a color family different from that of the body of the preceding vehicle.
  • the same control may be performed not only on the vehicle speed display but also on icons such as other instruments that are constantly displayed.
  • FIG. 48 is a flowchart showing an outline of an example of the flow of display position adjustment processing for adjusting the display position such as the vehicle speed display with respect to the preceding vehicle.
• First, the camera video information outside the vehicle in the vehicle information 4 is analyzed, and information on the position of the preceding vehicle (its coordinates in the display area) is acquired (S3110). Although not shown, if there is no preceding vehicle, the process ends as it is.
• Next, for content that is always displayed, such as the vehicle speed display, it is determined whether the current display position is the default display position (S3210). If it is, the current (that is, default) display position of the displayed content is compared with the position of the preceding vehicle (S3211), and it is determined whether the displayed content overlaps the preceding vehicle (S3311). If there is no overlap, the process ends as it is.
• If it is determined in step S3311 that the displayed content overlaps the preceding vehicle, a destination for the content is set based on a predetermined criterion (S3410). It is then determined whether a traffic light, road sign, or the like exists at the destination (S3411); if so, the process returns to step S3410 and destination setting is repeated until there is no such object.
• If it is determined in step S3411 that there is no road sign or the like at the destination, it is determined whether a certain time has elapsed since the target content was first displayed or its display position was last changed (S3412). If the fixed time has not elapsed, the process ends as it is; only when it has elapsed is the target content moved to the destination set in step S3410 (S3413). As described above, overlap may also be avoided by changing the display size of the content as necessary (S3511).
• The passage of a fixed time is used as a condition in order to avoid situations in which the display position of the content changes frequently and becomes distracting as the position of the preceding vehicle relative to the vehicle 2 shifts while both are moving.
• If it is determined in step S3210 that the current display position of the content is not the default display position, the default display position of the displayed content is compared with the position of the preceding vehicle (S3212), and it is determined whether the content would overlap the preceding vehicle if displayed at the default position (S3312). If it is determined that it would overlap, the process ends as it is.
• If it is determined in step S3312 that the content would not overlap the preceding vehicle when displayed at the default position, it is determined, as in step S3412 above, whether a certain time has elapsed since the target content was displayed or its display position was last changed (S3414). If the fixed time has not elapsed, the process ends as it is; only when it has elapsed is the display position of the target content returned to the default display position (S3415). Overlap may also be avoided by changing the display size of the content as necessary (S3511).
• The above processing flow shows an example of changing the display position and display size of the vehicle speed display and the like according to the situation of the preceding vehicle. A change of the display color as shown in FIG. 47(b) can be realized, for example, by performing the same processing as in the display color adjustment flow shown in FIG. 40.
• By the above processing, the display position of content such as the vehicle speed display can be switched according to the preceding vehicle, improving its visibility.
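• A minimal sketch of the time-hysteresis part of FIG. 48; the interface (a single flag describing whether the default position would overlap the preceding vehicle) and the interval value are assumptions made to keep the example small.

```python
import time

class ConstantContent:
    """Content that is always displayed, e.g. the vehicle speed display."""
    def __init__(self, default_pos, min_interval_s=3.0):
        self.default_pos = default_pos
        self.pos = default_pos
        self.min_interval = min_interval_s
        self.last_change = 0.0

    def update(self, default_pos_overlaps, free_pos, now=None):
        """default_pos_overlaps: would the content overlap the preceding
        vehicle if drawn at its default position (cf. S3211/S3212)."""
        now = time.monotonic() if now is None else now
        settled = now - self.last_change >= self.min_interval  # S3412/S3414
        if self.pos == self.default_pos and default_pos_overlaps and settled:
            self.pos = free_pos                  # S3413: move out of the way
            self.last_change = now
        elif (self.pos != self.default_pos and not default_pos_overlaps
              and settled):
            self.pos = self.default_pos          # S3415: return to the default
            self.last_change = now
        return self.pos

speed = ConstantContent(default_pos=(0.8, 0.5))
print(speed.update(True,  (0.9, 0.9), now=10.0))   # moves to the free spot
print(speed.update(False, (0.9, 0.9), now=11.0))   # too soon: stays put
print(speed.update(False, (0.9, 0.9), now=14.0))   # returns to the default
```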
  • FIG. 49 is a diagram showing an outline of an example of notifying the approach of an emergency vehicle.
• As in FIG. 21 and the like, the figure schematically shows an example of the scenery ahead that the driver 5 of the vehicle 2 sees through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• The AR-HUD 1 recognizes the approach of an emergency vehicle such as a fire engine, an ambulance, or a police vehicle based on the vehicle information 4 and notifies the driver 5, prompting the driver to take an appropriate evacuation or standby action.
  • FIG. 49 shows a state in which a vehicle speed display is displayed and an icon for notifying the approach of an emergency vehicle is displayed as a virtual image as an alert display.
  • FIG. 49 (a) shows an example in which the approach of an emergency vehicle is notified by characters and warning marks.
• In the example in the figure, the approach is announced simply as "emergency vehicle approaching", but if it can be detected, the approach direction may also be announced, for example as "emergency vehicle approaching from behind".
• More detailed prediction information may also be given, for example "150 m ahead, predicted to pass in 10 seconds". Such information may further be displayed as an overhead image of the road conditions (for example, similar to the bird's-eye view of FIG. 33), and notification may also be made by voice or by a combination of display and voice.
• As means of detection, it is conceivable to detect the siren sound of the emergency vehicle with a microphone (not shown) mounted on the vehicle 2 and to determine the type of emergency vehicle from the type of siren sound, or to determine its approach direction and distance. Detailed information can also be grasped by road-to-vehicle or vehicle-to-vehicle communication, and the camera video information outside the vehicle may be used as well.
  • FIG. 50 is a diagram showing an outline of another example for notifying the approach of an emergency vehicle.
  • FIG. 50A shows an example of navigating a route that retreats to the road shoulder in preparation for the approach of an emergency vehicle.
• In the example in the figure, an arrow figure indicating the evacuation route and a figure indicating the target stop position of the vehicle are displayed as virtual images on the left side.
  • FIG. 50B shows an example in which the display of the approach notification is emphasized and attention is further urged as the emergency vehicle approaches.
• In the example in the figure, the characters are displayed in inverse video and emphasis is further added by increasing the number of warning marks, but the emphasis method is not limited to these.
  • a character color or an icon color may be changed, or display brightness may be increased.
  • the shape of the icon or warning mark may be changed, or the font of the character may be changed.
• The icon and characters may also be blinked, and these methods may be combined as appropriate.
  • FIG. 51 is a diagram showing an outline of an example in which the display of the approach notification is controlled according to the traveling state of the emergency vehicle.
• Among emergency vehicles that are traveling, only those actually responding to an emergency, that is, those emitting a siren sound, are targeted for approach notification. As shown in the upper diagram of FIG. 51(a), when an emergency vehicle is approaching but is not emitting a siren sound (for example, a fire engine that is returning or simply relocating), no approach notification is performed, as shown in the lower diagram.
• Conversely, as shown in FIG. 51(b), when the approaching emergency vehicle is emitting a siren sound, an approach notification icon is displayed as shown in the lower diagram.
  • the siren sound can be detected by a microphone (not shown) mounted on the vehicle 2.
  • FIG. 52 is a flowchart showing an overview of an example of a flow of event content generation processing for generating an emergency vehicle approach notification.
• First, emergency vehicle approach information is acquired based on the siren sound information detected by the microphone, road-to-vehicle communication information, vehicle-to-vehicle communication information, and the like in the vehicle information 4 (S2130), and it is determined whether an approaching emergency vehicle has been detected (S2230).
• When an approaching emergency vehicle is detected, the necessity of an evacuation or standby action is examined (S2231), and it is determined whether such an action is necessary (S2232).
• The necessity of the evacuation action or the like is judged, for example, by predicting the course of the emergency vehicle by a method described later and assessing its relationship to the course of the own vehicle.
• If it is determined in step S2232 that an evacuation action or the like is necessary, an emergency vehicle approach notification generation process is performed to generate approach notification content as shown in the examples above (S2330). Thereafter, an evacuation route in preparation for the approach of the emergency vehicle is examined (S2331).
• The evacuation route can be examined, for example, by predicting the course of the emergency vehicle by a method described later and taking into account information such as the position of the own vehicle and the road conditions. Content that navigates from the current position to the target stop position, as in the example of FIG. 50A, is then generated for the resulting evacuation route (S2332).
• Next, it is determined whether the evacuation action or the like by the driver 5 has already been completed (S2333). If it has, the navigation display of the evacuation route generated in step S2332 is ended (S2334). If it has not, the display of the emergency vehicle approach notification content generated in step S2330 is highlighted as in the example of FIG. 50B (S2335). In this case, the navigation display of the evacuation route generated in step S2332 should be ended only after it is confirmed, following the highlighting of the approach notification in step S2335, that the evacuation action by the driver 5 has been completed.
• Finally, a default display position is set for each of the generated emergency vehicle approach notification content and navigation display (S2430), a display color setting process (S2530) that sets an initial value for the display color is performed, and the process ends.
• If no approaching emergency vehicle is detected in step S2230, or if it is determined in step S2232 that no evacuation action or the like is required, it is determined whether an emergency vehicle approach notification is currently being displayed (S2630). If it is not, the process ends as it is; if it is, the content of the approach notification is erased (S2730), and the process ends.
  • FIG. 53 is a diagram showing an outline of an example of a technique for grasping an emergency vehicle that is approaching and predicting its course.
• Here, emergency vehicle dispatch information obtained by road-to-vehicle communication, vehicle-to-vehicle communication, or the like, acquired by the vehicle 2 as part of the vehicle information 4, is used.
• The emergency vehicle may also be grasped based on the detected siren sound.
• As shown in the table in the figure, the emergency vehicle dispatch information holds, for each emergency vehicle in operation, for example its status ("returning", "waiting", "preparing for dispatch") and destination information.
• The lower left diagram shows an example of predicting the course of emergency vehicles (fire engines, ambulances, and the like) dispatched from a fire department.
• The lower right diagram shows an example of predicting the course of an emergency vehicle (such as an ambulance) rushing to a hospital.
  • FIG. 54 is a flowchart showing an outline of an example of a flow of emergency vehicle approach notification generation processing for generating content of approach notification based on prediction of a course of an approaching emergency vehicle.
• The emergency vehicle approach notification generation process corresponds to step S2330 of the flow shown in the example of FIG. 52 described above, and is performed when an approaching emergency vehicle has already been detected.
• First, the current position of the vehicle 2 is acquired based on the map information and travel log of the vehicle information 4 (S2330_1). Further, the dispatch information of the nearest emergency vehicle is acquired from the emergency vehicle dispatch information of the vehicle information 4 (S2330_2). The course of the emergency vehicle is then predicted by a method such as that in the example of FIG. 53 described above (S2330_3), and the prediction result is judged (S2330_4). If it is determined that the emergency vehicle is approaching from the front or from the rear, alert display content notifying an approach from the front or the rear, respectively, is generated (S2330_5a, S2330_5b), and the process ends. If neither can be determined, the own vehicle is judged to be outside the travel route of the emergency vehicle, and the process ends without generating approach notification content.
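• A minimal sketch of the approach-direction decision; reducing the predicted route to a list of one-dimensional road positions is purely an assumption made for the example, since the actual prediction uses map data and dispatch information.

```python
def approach_direction(own_pos, own_heading, predicted_route):
    """own_pos/own_heading: 1-D position and +1/-1 travel direction of the
    vehicle 2. predicted_route: successive 1-D positions the emergency vehicle
    is expected to pass. Returns 'front', 'rear', or None (outside route)."""
    for p in predicted_route:
        rel = (p - own_pos) * own_heading     # signed distance ahead of us
        if abs(rel) < 500:                    # passes within 500 m (assumed)
            return "front" if rel > 0 else "rear"
    return None                               # S2330_4: no approach content

route = [3000, 2000, 1200, 400]               # e.g. from station to incident
print(approach_direction(own_pos=800, own_heading=+1, predicted_route=route))
```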
  • FIG. 55 is a diagram showing an outline of an example in which an approaching emergency vehicle and a surrounding vehicle 2 perform inter-vehicle communication.
• The diagram on the left side of FIG. 55 illustrates a state in which an emergency vehicle has announced its approach by vehicle-to-vehicle communication.
• Vehicle D (2d) and vehicle F (2f), which are capable of vehicle-to-vehicle communication with the emergency vehicle, receive the notification.
• Vehicle E (2e), on the other hand, cannot perform vehicle-to-vehicle communication (it lacks the function, or does not support communication with emergency vehicles) and has therefore not received the notification.
• Vehicle D (2d) and vehicle F (2f) can thus recognize the approach of the emergency vehicle and display an alert to that effect on their AR-HUD 1.
• Meanwhile, the emergency vehicle receives responses from vehicle D (2d) and vehicle F (2f), but cannot receive a response from vehicle E (2e).
• The emergency vehicle can display this reception result as virtual image content on the AR-HUD 1 mounted on itself.
  • FIG. 56 is a diagram showing an outline of an example of a display method of the surrounding vehicle 2 by the AR-HUD 1 mounted on the emergency vehicle.
• FIG. 56(a) shows the case where the vehicle 2 that is the preceding vehicle as seen from the emergency vehicle recognizes the approach of the emergency vehicle, like vehicle D (2d) and vehicle F (2f) in the example of FIG. 55 described above.
• In this case, the presence of the preceding vehicle is recognized based on the camera video information outside the vehicle, a mark (a ring-shaped figure in the example of FIG. 56) is superimposed on it as a virtual image, and the mark is displayed in display color A (for example, blue) to indicate that the preceding vehicle recognizes the approach.
• FIG. 56(b) shows the case where the vehicle 2 that is the preceding vehicle as seen from the emergency vehicle does not recognize the approach of the emergency vehicle, like vehicle E (2e) in the example of FIG. 55 described above. In this case, the mark is displayed in display color C (for example, red) to indicate that the preceding vehicle does not recognize the approach.
  • a warning mark or the like as shown in FIG. 22 described above may be displayed in combination to alert the driver of the emergency vehicle.
• This makes it possible for the driver 5 of the emergency vehicle to easily grasp which surrounding vehicles 2 do not recognize it, and to exercise caution when passing them.
• The above method of changing the displayed content depending on whether the own vehicle is recognized is not limited to the case where the AR-HUD 1 of the present embodiment is mounted on an emergency vehicle; it may also be applied when it is mounted on other kinds of vehicles 2, including general private vehicles.
• As described above, with the head-up display device according to the second embodiment of the present invention, the display content and display method of the virtual image can be adjusted according to the situation, such as the front scenery, in the vehicle 2. This makes it possible to superimpose a more appropriate virtual image on the front scenery, at a more appropriate position and in a more appropriate manner.
• In the first embodiment described above, the display area 6 can be divided into a plurality of areas and the display distance of the virtual image adjusted for each area, which gives the displayed content a certain sense of depth and perspective (including virtual distance perception).
• However, if the data of each content to be displayed is itself generated without considering a sense of depth, it may be difficult to superimpose it on the actual front scenery, which includes depth, without a sense of incongruity.
• Therefore, the head-up display device of the present embodiment gives a sense of depth and perspective through the display content and display method of the virtual image content itself, in addition to the adjustment of the virtual image display area and display distance in the first embodiment. This makes it possible to superimpose virtual image content on the front scenery without incongruity, so that a virtual image to be superimposed in the distance is felt to be far away, and a virtual image to be superimposed nearby is felt to be close.
  • the apparatus configuration and basic operation contents of the AR-HUD 1 are the same as those in the first embodiment, and a description thereof will be omitted.
  • FIG. 57 is a diagram showing an outline of an example in which a virtual image is superimposed by giving a sense of depth to a front landscape or the like.
• The figure schematically shows the scenery that the driver 5 of the vehicle 2 sees ahead through the windshield 3 from the driver's seat, together with the state of the virtual image (content) projected onto the windshield 3.
• Here, an arrow figure (in the figure, a series of consecutive arrows) instructing and navigating the traveling direction of the vehicle 2, and an instrument icon displaying the vehicle speed and the like ("45 km/h" in the figure), are displayed as virtual images (hereinafter, the instrument icon may be referred to as the "vehicle speed display").
• In the arrow figure, a sense of depth is given by drawing the far (back) arrows smaller than the near (front) arrows.
• In the present embodiment, a further sense of depth and perspective is obtained by changing the display content by the methods described below.
  • FIG. 58 is a diagram showing an outline of an example in which a sense of depth is expressed by the display color and transparency of content.
• Here, a case is taken as an example in which a series of arrow figures instructing and navigating the traveling direction of the vehicle 2 is displayed as a virtual image over the road in the front scenery. The series of arrow figures instructs and navigates the vehicle to go straight from the current position and then turn left at the intersection ahead.
• In the upper diagram of FIG. 58, the intersection at which the left turn is to be made is still far away.
• In this case, the nearest (front) arrow is displayed in display color 1, and the farthest (back) arrow, that is, the arrow indicating the left turn at the intersection ahead, is displayed in display color 2. Each arrow in between is displayed with its display color gradually changed from display color 1 toward display color 2, so that the series of arrow figures as a whole forms a gradation from display color 1 to display color 2.
• So that the gradation can express a sense of depth, it is desirable to set, for example, the near display color 1 to a dark color and the far display color 2 to a light color.
• Similarly, the sense of depth may be expressed by a gradation of the transparency (α value) of each arrow: the transparency of the nearest arrow is set to zero (opaque) or a correspondingly small value, and farther arrows are displayed with gradually higher transparency, so that the series of arrow figures as a whole forms a gradation from opaque to transparent.
• The lower diagram of FIG. 58 shows a state in which the intersection for the left turn is approaching.
• The gradation of the display color of each arrow is the same as in the upper diagram, running from display color 1 to display color 2.
• As for transparency, as each arrow comes closer its transparency gradually decreases and finally reaches zero (the arrow becomes opaque). Thus, as the intersection approaches, the arrow in display color 2 gradually becomes clearly visible.
• In the above, depth is given to the series of arrow figures by grading the display color and/or transparency from near to far, but the content of the gradation is not limited to this.
• For example, the display color of each arrow may be changed as the vehicle comes closer to it.
• The target of the gradation display is not limited to the arrow graphic designating the traveling direction; it may be other content displayed superimposed on the front scenery.
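• A minimal sketch of the per-arrow gradation of FIG. 58, combining color, transparency, and size; all anchor values are illustrative assumptions.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def arrow_series(n, color1=(40, 40, 160), color2=(160, 200, 255)):
    """Style parameters for n arrows from nearest (index 0) to farthest."""
    arrows = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0       # 0.0 = nearest, 1.0 = farthest
        arrows.append({
            "color": tuple(round(lerp(c1, c2, t))      # display color 1 -> 2
                           for c1, c2 in zip(color1, color2)),
            "alpha": round(lerp(1.0, 0.3, t), 2),  # more transparent when far
            "scale": round(lerp(1.0, 0.4, t), 2),  # far arrows drawn smaller
        })
    return arrows

for a in arrow_series(5):
    print(a)
```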
  • FIG. 59 is a diagram showing an outline of an example of displaying the vehicle speed display overlapping the arrow graphic so as to feel closer.
  • a series of arrow figures for instructing and navigating the traveling direction of the vehicle 2 is displayed as a virtual image on the road in the front landscape.
• The vehicle speed display is basically shown in front (on the near side). In this case, as shown in the figure, a shadow of the vehicle speed display in front is added to the part of the arrow behind it. This makes the vehicle speed display in front appear closer than the arrow figure behind it, expressing a sense of depth.
• FIG. 60 is a diagram showing an outline of an example in which a base color is set when displaying content, according to the situation of the front scenery and the like. In FIG. 60, as in FIG. 58 described above, a case is taken as an example in which, in addition to the vehicle speed display, a series of arrow figures instructing and navigating the traveling direction of the vehicle 2 is displayed as a virtual image over the road in the front scenery.
  • FIG. 60A shows an example in which the scenery in front is dark at night or in bad weather, that is, the color of the scenery in front is black as a whole.
  • In this case, the content is displayed in a warm color system (base color BA) such as white, yellow, orange, or red, which has a higher contrast against the black background.
  • The content displayed in these colors also appears large to the driver.
  • Within the base color BA, a dark shade BA1 is set as display color 1 and a light shade BA2 as display color 2.
  • The nearest arrow is displayed in display color 1, the farthest arrow in display color 2, and each arrow in between is graded gradually from display color 1 to display color 2, forming a gradation.
  • The vehicle speed display is also displayed in a color belonging to the base color BA.
  • FIG. 60B shows an example in which the forward scenery is bright, as under daytime backlight, strong reflection, or snow, that is, the scenery is white as a whole.
  • In this case, the content is displayed in a cool color system (base color BB) such as blue, light blue, or yellowish green, which has a higher contrast against the white background.
  • The content displayed in these colors likewise appears large to the driver.
  • Within the base color BB, a dark shade BB1 is set as display color 1 and a light shade BB2 as display color 2.
  • As before, the nearest arrow is displayed in display color 1, the farthest arrow in display color 2, and each arrow in between is graded gradually from display color 1 to display color 2, forming a gradation.
  • The vehicle speed display is also displayed in a color belonging to the base color BB.
  • The default base color may be either the base color BA (warm color system) or the base color BB (cool color system) as described above, or another color system may be used. A minimal selection sketch follows.
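  • The sketch assumes the camera image brightness has been normalized to [0, 1]; the concrete BA/BB shades and the 0.5 threshold are illustrative assumptions.

```python
# Dark (display color 1) and light (display color 2) shades for each palette.
BASE_COLOR_BA = {"name": "BA (warm)", "dark": (200, 90, 0), "light": (255, 230, 150)}
BASE_COLOR_BB = {"name": "BB (cool)", "dark": (0, 60, 180), "light": (160, 220, 255)}

def select_base_color(background_luminance: float) -> dict:
    """Pick the palette with the higher contrast against the forward scenery.

    Dark scenery (night, bad weather) gets the warm palette BA; bright
    scenery (daytime backlight, snow) gets the cool palette BB.
    """
    return BASE_COLOR_BA if background_luminance < 0.5 else BASE_COLOR_BB
```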
  • FIG. 61 is a diagram showing an outline of an example in which the base color of displayed content is set according to the overlap with the preceding vehicle. In FIG. 61, as in FIG. 58, the example shows a series of arrow figures for indicating and navigating the traveling direction of the vehicle 2 displayed as a virtual image on the road in the forward scenery, in addition to the vehicle speed display.
  • When the forward scenery is black as a whole, as at night, the base color is the base color BA (warm color system).
  • When the forward scenery is white as a whole, as under daytime backlight or on a snowy road, the base color is the base color BB (cool color system).
  • In the latter case, the base color may be partially switched from the base color BB (cool color system) to the base color BA (warm color system) for the arrow figures and the vehicle speed display that overlap the darker preceding vehicle, as in the sketch below.
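  • In a sketch of this partial switching, any content whose bounding rectangle overlaps the preceding vehicle's image region is simply assigned the other palette; the rectangle representation and all names are assumptions.

```python
def rects_overlap(a, b) -> bool:
    """Axis-aligned overlap test for (x0, y0, x1, y1) rectangles."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def palette_for_content(content_rect, vehicle_rect, default_palette, overlap_palette):
    """Return the overlap palette (e.g. warm BA) for content overlapping the
    darker preceding vehicle, and the scene-wide default (e.g. cool BB) otherwise."""
    return overlap_palette if rects_overlap(content_rect, vehicle_rect) else default_palette

# Example: the speed display overlaps the preceding vehicle; a far arrow does not.
vehicle = (120, 40, 220, 110)
print(palette_for_content((130, 50, 190, 70), vehicle, "BB (cool)", "BA (warm)"))
print(palette_for_content((10, 80, 60, 100), vehicle, "BB (cool)", "BA (warm)"))
```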
  • FIG. 62 is a flowchart showing an outline of an example of the flow of processing for adjusting the display content and display method of a virtual image.
  • In the AR-HUD 1, generation and display of the display content of the virtual image are performed in step S05 of the initial operation in FIG. 6 of the first embodiment and in step S23 of the normal operation in FIG. 7.
  • Here, the processing is illustrated using the display video determination/change processing of step S23 in the normal operation of FIG. 7 as an example.
  • First, the ECU 21 performs standard content generation processing (S231) and event content generation processing (S232).
  • The standard content basically refers to content, such as the vehicle speed display, that is always displayed in the display area 6 while the vehicle 2 is traveling.
  • The event content refers to content, such as an alert display, that is displayed as necessary based on the driving situation of the vehicle 2 (including the situation of the forward scenery). In either case, a plurality of contents may be generated.
  • The ECU 21 then performs display position adjustment processing (S233) and display color adjustment processing (S234) on the generated contents in relation to the forward scenery grasped from the camera video information. Display video data for each adjusted content is then generated (S235), and the process ends. The video data generated here is projected by the display element driving unit 27 in the subsequent step S27 of FIG. 7 in the first embodiment. The overall flow might be organized as in the sketch below.
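  • In this sketch, the content records and every helper are assumptions standing in for the processing of steps S231 to S235, which the subsequent figures refine.

```python
def generate_standard_contents(vehicle_info):        # S231: e.g. the vehicle speed display
    return [{"kind": "speed", "text": f"{vehicle_info['speed_kmh']} km/h"}]

def generate_event_contents(vehicle_info):           # S232: e.g. arrows near a turn
    return [{"kind": "arrows"}] if vehicle_info.get("approaching_turn") else []

def adjust_display_position(content, camera_image):  # S233: detailed in FIG. 65
    content.setdefault("pos", (0, 0))

def adjust_display_color(content, camera_image):     # S234: detailed in FIG. 66
    content.setdefault("color", (255, 255, 255, 255))

def render_video_data(contents):                     # S235: handed to step S27 for projection
    return list(contents)

def determine_display_video(vehicle_info, camera_image):
    contents = generate_standard_contents(vehicle_info)   # S231
    contents += generate_event_contents(vehicle_info)     # S232
    for content in contents:
        adjust_display_position(content, camera_image)    # S233
        adjust_display_color(content, camera_image)       # S234
    return render_video_data(contents)                    # S235

print(determine_display_video({"speed_kmh": 40, "approaching_turn": True}, camera_image=None))
```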
  • FIG. 63 is a flowchart showing an outline of an example of the flow of the standard content generation processing (step S231) in the display video determination/change processing of FIG. 62.
  • First, the necessary standard contents are generated (S1100).
  • Next, a default display position is set for each generated standard content (S1200).
  • Finally, display color setting processing (S1300) sets an initial value for the display color of each content, and the process ends. Processing relating to setting and adjusting the display color is described later.
  • FIG. 64 is a flowchart showing an outline of an example of the flow of the event content generation processing (step S232) in the display video determination/change processing of FIG. 62.
  • First, the vehicle information 4 acquired in step S21 of FIG. 7 in the first embodiment is analyzed (S2100), and it is determined whether an event requiring the display of event content is occurring (S2200).
  • An approaching intersection, for example, corresponds to such an event.
  • Various other events are also targeted, such as the presence of a road sign requiring attention, the presence of a pedestrian, or the approach of the vehicle ahead.
  • If it is determined in step S2200 that an event is occurring, the necessary event content is generated (S2300). A default display position is then set for each generated event content (S2400), display color setting processing (S2500) sets an initial value for the display color of each event content, and the process ends. Processing relating to setting and adjusting the display color is described later.
  • If it is determined in step S2200 that no event is occurring, it is determined whether event content is currently being displayed (S2600). If none is being displayed, the process ends without doing anything; if event content is being displayed, it is determined to be no longer necessary (S2700), and the process ends. This branch structure might look as in the sketch below.
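  • In this sketch, the event detector, its thresholds, and the registry of displayed contents are assumptions, since the flowchart fixes only the steps S2100 to S2700.

```python
displayed_events = {}  # event contents currently shown, keyed by event id

def detect_events(vehicle_info):
    """S2100/S2200: analyze vehicle information 4 and list active event ids."""
    events = []
    if vehicle_info.get("distance_to_turn_m", float("inf")) < 300:
        events.append("turn_ahead")
    if vehicle_info.get("pedestrian_detected"):
        events.append("pedestrian")
    return events

def update_event_contents(vehicle_info):
    active = detect_events(vehicle_info)
    for event in active:
        if event not in displayed_events:
            content = {"event": event}                # S2300: generate the content
            content["pos"] = (0, 0)                   # S2400: default display position
            content["color"] = (255, 255, 255, 255)   # S2500: initial display color
            displayed_events[event] = content
    for event in list(displayed_events):              # S2600/S2700: drop stale contents
        if event not in active:
            del displayed_events[event]

update_event_contents({"distance_to_turn_m": 250})
print(displayed_events)
```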
  • FIG. 65 is a flowchart showing an outline of an example of the flow of the display position adjustment processing (step S233) in the display video determination/change processing of FIG. 62.
  • First, the exterior camera video information in the vehicle information 4 acquired in step S21 of FIG. 7 in the first embodiment is analyzed, and it is determined whether the forward scenery contains an object that the display should avoid hiding (S3100).
  • If it is determined in step S3300 that the display position does not need to be adjusted, the process ends. If it is determined that adjustment is needed, the display position of the target content is adjusted and moved, and a new display position is set (S3400).
  • The method for determining where to move the content is not particularly limited. Instead of, or in addition to, moving the content, its display size may be reduced so that the object is not hidden (S3500).
  • After these adjustments, the process returns to step S3200, and the series of processing described above is repeated until no content needs further position adjustment; another object may be present at the position to which a content was moved, so adjustment must continue until the display position is appropriate. A minimal sketch of this loop follows.
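  • In this sketch, the detected object boxes, the candidate positions, and the shrink factor are all assumptions.

```python
def rects_overlap(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def rect_at(content, pos):
    x, y = pos
    w, h = content["size"]
    return (x, y, x + w, y + h)

def adjust_position(content, objects, candidate_positions, min_scale=0.5):
    """Move (S3400) or, failing that, shrink (S3500) `content` so it hides no object.

    `objects` are bounding boxes that must stay visible (found in S3100); each
    candidate position is re-checked against every object, mirroring the
    return to S3200 after an adjustment.
    """
    for pos in [content["pos"]] + candidate_positions:
        rect = rect_at(content, pos)
        if not any(rects_overlap(rect, obj) for obj in objects):  # S3200/S3300
            content["pos"] = pos
            return content
    content["scale"] = min_scale  # no free position found: shrink in place (S3500)
    return content

content = {"pos": (100, 40), "size": (60, 20)}
objects = [(90, 30, 170, 70)]  # e.g. a pedestrian box from the camera image
print(adjust_position(content, objects, [(10, 40), (200, 40)]))
```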
  • The processing that grades the display color and transparency of content as shown in FIG. 58 is performed, for example, in the display color setting processing (step S1300) of the standard content generation processing in FIG. 63 and in the display color setting processing (step S2500) of the event content generation processing in FIG. 64.
  • FIG. 66 is a flowchart showing an outline of an example of the flow of display color setting processing.
  • First, the brightness level of the background in the forward scenery is acquired (S2541).
  • A base color is then set by the method shown in FIGS. 60 and 61 (S2542).
  • Display color 1 and display color 2 are set based on the selected base color, and display color gradation processing is applied to the content by the method shown in FIGS. 58, 60, and so on (S2543).
  • The distance to the right/left turn point is acquired (S2544), and transparency setting processing according to the distance is applied to the content by the method shown in FIG. 58 (S2545).
  • The transparency is set to decrease as the distance to the right/left turn point decreases, but the transparency of the arrow figure at the farthest point, the right/left turn point itself, may be held at a constant value until the distance falls below a predetermined threshold.
  • In this way, the base color can be set to an appropriate color according to color conditions such as the forward scenery, and visibility can be further improved. Steps S2541 to S2545 might be tied together as in the sketch below.
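  • In this sketch, the palettes, the 0.5 luminance threshold, the α range, and the hold threshold are illustrative assumptions.

```python
def lerp(a, b, t):
    return a + (b - a) * t

PALETTES = {  # (display color 1, display color 2) pairs for warm BA and cool BB
    "BA": ((200, 90, 0), (255, 230, 150)),
    "BB": ((0, 60, 180), (160, 220, 255)),
}

def set_display_colors(arrows, background_luminance, distance_to_turn_m,
                       hold_threshold_m=150.0):
    # S2541/S2542: the background brightness selects the base color.
    dark, light = PALETTES["BA" if background_luminance < 0.5 else "BB"]
    # Nearness stays 0 until the threshold is crossed, then grows to 1 (S2544).
    nearness = 1.0 - min(distance_to_turn_m, hold_threshold_m) / hold_threshold_m
    n = max(len(arrows) - 1, 1)
    for i, arrow in enumerate(arrows):
        t = i / n
        # S2543: color gradation from display color 1 to display color 2.
        arrow["color"] = tuple(round(lerp(c1, c2, t)) for c1, c2 in zip(dark, light))
        # S2545: far arrows stay translucent until the turn point approaches.
        arrow["alpha"] = round(lerp(255, lerp(64, 255, nearness), t))

arrows = [{} for _ in range(5)]
set_display_colors(arrows, background_luminance=0.2, distance_to_turn_m=200.0)
print(arrows)
```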
  • FIG. 67 is a diagram showing an outline of an example in which decoration for characters such as a vehicle speed display is changed and emphasized in accordance with the traveling state of the vehicle 2.
  • In the example, a series of arrow figures for indicating and navigating the traveling direction of the vehicle 2 is displayed as a virtual image on the road in the forward scenery, together with the vehicle speed display.
  • In the AR-HUD 1 of the present embodiment, for example, when the vehicle 2 exceeds the speed limit on a road with a maximum speed limit (40 km/h in the illustrated example), the characters of the vehicle speed display are decorated and emphasized to alert the driver 5.
  • Specifically, as shown in the figure, the character color can be changed, the character outline thickened, the characters enlarged, the font changed, or the characters made to emit light intermittently.
  • FIG. 68 is a flowchart showing an outline of an example of the flow of the display color setting processing for changing the character decoration.
  • First, the vehicle speed information of the vehicle 2 is acquired from the vehicle information 4 (S1311), and information on the legal speed at the current traveling position is acquired from the map information in the vehicle information 4 (S1312). It is then determined whether the vehicle speed of the vehicle 2 exceeds the legal speed (S1313). If the legal speed is exceeded, the character decoration of the vehicle speed display is changed by the method shown in FIG. 67 (S1314). If the legal speed is not exceeded in step S1313 and there is an overlap between the series of arrow figures and instrument icons such as the vehicle speed display, processing to improve the visibility of the vehicle speed display, such as adding a shadow by the method shown in FIG. 59, may be performed (S1315).
  • In this way, the character decoration of the vehicle speed display can be changed and emphasized according to driving conditions such as exceeding the speed limit, giving the driver 5 a stronger warning. A minimal decision sketch follows.
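  • In this sketch, the concrete decoration attributes and the strict comparison are assumptions; the legal speed would come from the map information in the vehicle information 4 (S1311/S1312).

```python
def decorate_speed_display(speed_kmh: float, legal_limit_kmh: float) -> dict:
    """Return character attributes for the vehicle speed display.

    Above the legal speed (S1313), the characters are emphasized (S1314):
    changed color, thicker outline, larger size, intermittent blinking.
    Otherwise the normal style is kept.
    """
    if speed_kmh > legal_limit_kmh:
        return {"color": (255, 60, 60), "outline_px": 3, "size_pt": 28, "blink": True}
    return {"color": (255, 255, 255), "outline_px": 1, "size_pt": 20, "blink": False}

print(decorate_speed_display(52, 40))  # exceeding a 40 km/h limit
```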
  • Needless to say, the present invention is not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the invention.
  • The above embodiments have been described in detail for ease of understanding, and the invention is not necessarily limited to configurations including all of the elements described.
  • Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • The present invention can be used for a head-up display device using AR.
  • Display distance adjustment mechanism; 41a ... diffusion plate; 41b ... diffusion plate; 41c ... movable diffusion plate; 42a ... lens; 42b ... movable lens; 43a ... movable optical filter; 43b ... comb-like optical filter; 50 ... mirror drive unit; 51 ... mirror; 51a ... mirror; 51b ... dimming mirror; 52 ... mirror; 60 ... speaker; 101 ... vehicle speed sensor; 102 ... shift position sensor; 103 ... steering wheel angle sensor; 104 ... headlight sensor; 105 ... illuminance sensor; 106 ... chromaticity sensor; 107 ... ranging sensor; 108 ...


Abstract

The invention relates to a head-up display device that enables display such that a virtual image is appropriately superimposed on real scenery according to the traveling state of a vehicle. The head-up display device comprises a vehicle information acquisition unit (10) that acquires vehicle information (4), a control unit (20) that controls the display of a video on the basis of the vehicle information (4), a video display device (30) that forms the video on the basis of instructions from the control unit (20), a mirror (52) that reflects the video formed by the video display device (30) and projects it onto a windshield (3), a mirror drive unit (50) that changes the angle of the mirror (52) on the basis of instructions from the control unit (20), and a display distance adjustment mechanism (40) that adjusts the display distance of the virtual image with respect to the driver. The control unit (20) adjusts the angle of the mirror (52) via the mirror drive unit (50) so that the virtual image can be superimposed on the scenery and displayed to the driver, and also changes the display position of the virtual image if the virtual image overlaps a predetermined object in the scenery.
PCT/JP2016/080181 2016-02-05 2016-10-12 Head-up display device WO2017134861A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2016-020471 2016-02-05
JP2016020471 2016-02-05
JP2016-061245 2016-03-25
JP2016061245A JP2019059247A (ja) 2016-03-25 Head-up display device
JP2016-062925 2016-03-28
JP2016062925A JP2019059248A (ja) 2016-03-28 Head-up display device

Publications (1)

Publication Number Publication Date
WO2017134861A1 (fr)

Family

ID=59499617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/080181 WO2017134861A1 (fr) Head-up display device

Country Status (1)

Country Link
WO (1) WO2017134861A1 (fr)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005138801A * 2003-11-10 2005-06-02 Calsonic Kansei Corp Head-up display device
JP2007257286A * 2006-03-23 2007-10-04 Denso Corp Display system for vehicle
JP2009090689A * 2007-10-03 2009-04-30 Calsonic Kansei Corp Head-up display
JP2009196630A * 2008-01-25 2009-09-03 Denso Corp Display device
JP2010256878A * 2009-03-30 2010-11-11 Equos Research Co Ltd Information display device
JP2011150105A * 2010-01-21 2011-08-04 Fuji Heavy Ind Ltd Information display device
JP2011150579A * 2010-01-22 2011-08-04 Toyota Motor Corp Vehicle control device
WO2012176288A1 * 2011-06-22 2012-12-27 Pioneer Corporation Head-up display and control method
JP2013069178A * 2011-09-24 2013-04-18 Denso Corp Vehicle notification device and follow-up travel control system
JP2015003544A * 2013-06-19 2015-01-08 Mitsubishi Electric Corp In-vehicle display device
JP2015134521A * 2014-01-16 2015-07-27 Mitsubishi Electric Corp Vehicle information display control device
JP2015141161A * 2014-01-30 2015-08-03 Fujitsu Ten Ltd Vehicle display device and display method
JP2015202842A * 2014-04-16 2015-11-16 Denso Corp Head-up display device

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019129383A (ja) * 2018-01-23 2019-08-01 Alpine Electronics Inc. Video processing device
EP3536533A1 (fr) * 2018-03-07 2019-09-11 Yazaki Corporation Vehicle display projection apparatus
JP2019155960A (ja) 2018-03-07 2019-09-19 Yazaki Corporation Vehicle display projection device
JP7048358B2 (ja) 2018-03-07 2022-04-05 Yazaki Corporation Vehicle display projection device
US11004424B2 (en) 2018-03-20 2021-05-11 Panasonic Intellectual Property Management Co., Ltd. Image display system, image display method, movable object including the image display system, and non-transitory computer-readable medium
JP2019164317A (ja) * 2018-03-20 2019-09-26 Panasonic Intellectual Property Management Co., Ltd. Image forming system, image correction system, image display system, moving body, image forming method, and program
JP7117592B2 (ja) 2018-03-20 2022-08-15 Panasonic Intellectual Property Management Co., Ltd. Image forming system, image correction system, image display system, image forming method, and program
US11315528B2 (en) 2018-03-20 2022-04-26 Panasonic Intellectual Property Management Co., Ltd. Image display system, image display method, movable object including the image display system, and non-transitory computer-readable medium
US11646000B2 (en) 2018-03-20 2023-05-09 Panasonic Intellectual Property Management Co., Ltd. Image display system, image display method, movable object including the image display system, and non-transitory computer-readable medium
JP2021105726A (ja) 2018-03-20 2021-07-26 Panasonic Intellectual Property Management Co., Ltd. Image forming system, image correction system, image display system, image forming method, and program
CN112154077A (zh) 2018-05-24 2020-12-29 Mitsubishi Electric Corp Vehicle display control device and vehicle display control method
WO2020031915A1 (fr) * 2018-08-06 2020-02-13 Koito Manufacturing Co., Ltd. Vehicle display system and vehicle
US11639138B2 (en) 2018-08-06 2023-05-02 Koito Manufacturing Co., Ltd. Vehicle display system and vehicle
JPWO2020031915A1 (ja) 2018-08-06 2021-08-26 Koito Manufacturing Co., Ltd. Vehicle display system and vehicle
JP7241081B2 (ja) 2018-08-06 2023-03-16 Koito Manufacturing Co., Ltd. Vehicle display system and vehicle
CN110803019B (zh) 2018-08-06 2023-05-05 Koito Manufacturing Co., Ltd. Vehicle display system and vehicle
CN110803019A (zh) 2018-08-06 2020-02-18 Koito Manufacturing Co., Ltd. Vehicle display system and vehicle
CN110816409A (zh) 2018-08-07 2020-02-21 Honda Motor Co., Ltd. Display device, display control method, and storage medium
JP7034052B2 (ja) 2018-11-02 2022-03-11 Kyocera Corporation Wireless communication head-up display system, wireless communication device, mobile body, and program
WO2020090714A1 (fr) * 2018-11-02 2020-05-07 Kyocera Corporation Wireless communication head-up display system, wireless communication device, mobile body, and program
JP2020071453A (ja) 2018-11-02 2020-05-07 Kyocera Corporation Wireless communication head-up display system, wireless communication device, mobile body, and program
CN112606832A (zh) 2020-12-18 2021-04-06 芜湖雄狮汽车科技有限公司 Intelligent auxiliary vision system for vehicle
WO2022208695A1 (fr) * 2021-03-30 2022-10-06 Sony Group Corporation Information processing device, information processing method, and program
EP4318453A1 (fr) * 2021-03-30 2024-02-07 Sony Group Corporation Information processing device, information processing method, and program
US11990066B2 (en) 2022-03-18 2024-05-21 Honda Motor Co., Ltd. System and method to adjust inclined heads-up display perspective

Similar Documents

Publication Publication Date Title
WO2017134861A1 (fr) Head-up display device
JP2019059248A (ja) Head-up display device
JP6717856B2 (ja) Head-up display device
KR101908308B1 (ko) Vehicle lamp
JP6629889B2 (ja) Head-up display device
KR101908309B1 (ko) Vehicle lamp
WO2018070193A1 (fr) Head-up display device
JP2023024825A (ja) Vehicle
US20170161009A1 (en) Vehicular display device
JP2019059247A (ja) Head-up display device
JP6262111B2 (ja) Vehicle display device
WO2011108091A1 (fr) In-vehicle display device and display method
JP2016055691A (ja) Vehicle display system
JP2010143520A (ja) In-vehicle display system and display method
JP2019113809A (ja) Head-up display device
CN113126295A (zh) Head-up display device based on environment display
JP6801508B2 (ja) Head-up display device
JP2015074391A (ja) Head-up display device
US11345364B2 (en) Attention calling device and attention calling method
JP7429875B2 (ja) Display control device, display device, display control method, and program
JP7002612B2 (ja) Vehicle display system
KR101908310B1 (ko) Vehicle lamp
CN112009355B (zh) Image projection device and method
JP2019202641A (ja) Display device
CN113147595A (zh) Vehicle driving control system based on stereoscopic vision display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16889344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16889344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP