WO2018105585A1 - Head-up display - Google Patents

Head-up display

Info

Publication number
WO2018105585A1
WO2018105585A1 · PCT/JP2017/043564
Authority
WO
WIPO (PCT)
Prior art keywords
virtual image
image
display
vehicle
displayable area
Prior art date
Application number
PCT/JP2017/043564
Other languages
French (fr)
Japanese (ja)
Inventor
誠 秦
勇希 舛屋
Original Assignee
日本精機株式会社 (Nippon Seiki Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本精機株式会社 (Nippon Seiki Co., Ltd.)
Publication of WO2018105585A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Arrangement of adaptations of instruments
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory

Definitions

  • the present invention relates to a head-up display that displays a virtual image.
  • For example, the head-up display of Patent Document 1 includes an image display unit that displays an image, and a projection unit that projects the image onto a front windshield (an example of a projection member) positioned in front of the viewer, so that a virtual image based on the image is displayed on the far side of the front windshield (outside the vehicle) as seen by the viewer.
  • However, in a vehicle equipped with a head-up display, the pitching angle with respect to the road surface changes depending on the road surface condition and the driving operation.
  • The position where the virtual image is displayed changes according to the change in the pitching angle of the vehicle. Specifically, the position where the virtual image is displayed rotates about a predetermined axis in conjunction with a change in the pitching angle of the vehicle. As a result, the virtual image is viewed as if rotated about the left-right direction as seen from the viewer. This phenomenon will be specifically described with reference to FIGS. 10 to 12.
  • FIGS. 10, 11 and 12 are diagrams for explaining the difference in the appearance of the virtual image 510 when the pitching angle 1p of the vehicle with respect to the road surface 502 is different.
  • The virtual image 510r shown in FIG. 10B, the virtual image 510u shown in FIG. 11B, and the virtual image 510d shown in FIG. 12B are drawn with different shapes, but each illustrates how the same virtual image 510, which is viewed as a rectangle by the viewer during normal travel (when the pitching angle 1p of the vehicle with respect to the road surface 502 is zero), is viewed with a different shape depending on the pitching angle 1p of the vehicle. In FIGS. 11B and 12B, the amount of change in the shape of the virtual image 510 is exaggerated.
  • FIG. 10 is a diagram showing a state during normal traveling in which the vehicle pitching angle 1p with respect to the road surface 502 is zero.
  • The virtual image 510r displayed by the head-up display is displayed such that the angle 511r formed between the virtual image 510r and the road surface 502 is approximately 90 degrees, as shown in FIG. 10A.
  • The virtual image 510r displayed by the head-up display in FIG. 10B has a rectangular shape, and straight grid lines parallel to the left-right direction (X-axis direction) and straight grid lines parallel to the vertical direction (Y-axis direction) are added to make the change in shape relative to FIGS. 11B and 12B easier to see. Since the virtual image 510r viewed during normal travel in FIG. 10B is rectangular, the lower end width (also referred to as the reference lower end width) 512r and the upper end width (also referred to as the reference upper end width) 513r of the virtual image 510r are equal.
  • FIG. 11 is a view showing a pitch-up state in which the pitching angle 1p of the vehicle is inclined upward with respect to the road surface 502.
  • The angle 511u formed between the virtual image 510u displayed by the head-up display and the road surface 502 is larger than 90 degrees, as shown in FIG. 11A.
  • the virtual image 510u displayed by the head-up display shown in FIG. 11B is visually recognized as a trapezoidal shape having a short lower side when viewed from the viewer.
  • Specifically, the virtual image 510u viewed at the time of pitch-up in FIG. 11B is viewed such that the lower end width 512u is shorter than the upper end width 513u, and such that the vertical width 514u is shorter than the vertical width 514r of the virtual image 510r viewed during normal travel shown in FIG. 10B.
  • FIG. 12 is a diagram showing a pitch-down state in which the pitching angle 1p of the vehicle is inclined downward with respect to the road surface 502.
  • The angle 511d formed between the virtual image 510d displayed by the head-up display and the road surface 502 is smaller than 90 degrees, as shown in FIG. 12A.
  • the virtual image 510d displayed by the head-up display shown in FIG. 12B is visually recognized as a trapezoidal shape having a long lower side when viewed from the viewer.
  • Specifically, the virtual image 510d viewed at the time of pitch-down in FIG. 12B is viewed such that the lower end width 512d is longer than the upper end width 513d, and such that the vertical width 514d is shorter than the vertical width 514r of the virtual image 510r viewed during normal travel shown in FIG. 10B.
  • That is, even when the same virtual image is displayed, the virtual image 510 is viewed as having trapezoidal distortion due to the difference in the pitching angle 1p of the vehicle, so the visibility of the virtual image 510 may be reduced.
  • One object of the present invention is to provide a head-up display capable of improving the visibility of a virtual image.
  • A head-up display according to a first aspect of the present invention is mounted on a vehicle and projects first display light (210) onto a projection member that transmits and reflects light, thereby displaying a first virtual image (311) in a first virtual image displayable area (310) superimposed on the real scene outside the vehicle. The head-up display includes: a first image display unit (10) that has a first image displayable area (13) corresponding to the first virtual image displayable area and displays a first image (14) in the first image displayable area; a projection unit that directs the first display light emitted by the first image display unit toward the projection member; vehicle attitude information acquisition means (73) that acquires vehicle attitude information (G) including information on the vehicle attitude of the vehicle; and a display control unit (70) that performs keystone correction on the first image, based on the acquired vehicle attitude information, so as to suppress the trapezoidal distortion occurring in the first virtual image.
  • In the first aspect, the display control unit 70 performs keystone correction on the first image based on the vehicle attitude information G, so that the first virtual image, which would otherwise be viewed as having trapezoidal distortion due to a change in the attitude of the vehicle, can be viewed by the viewer with the trapezoidal distortion reduced or cancelled.
  • the visibility of the virtual image can be improved.
  • A diagram showing an example in which the first image is displayed on the first screen shown in FIG. 2 when the vehicle is in a pitch-down state, and diagrams showing the arrangement of the first and second virtual image displayable areas for different vehicle attitudes.
  • For the conventional head-up display, (a) is a diagram showing the arrangement of the first virtual image displayable area when the pitching angle of the vehicle is such that the vehicle is substantially parallel to the road surface, and (b) is a diagram showing how the real scene and the virtual image are viewed from the driver's seat.
  • (a) is a diagram showing the arrangement of the first virtual image displayable area when the vehicle is in a pitch-up state, and (b) is a diagram showing how the real scene and the virtual image are viewed from the driver's seat.
  • (a) is a diagram showing the arrangement of the first virtual image displayable area when the vehicle is in a pitch-down state, and (b) is a diagram showing how the real scene and the virtual image are viewed from the driver's seat.
  • With reference to FIG. 1, the virtual first virtual image displayable area 310 and second virtual image displayable area 320 generated by the head-up display (hereinafter referred to as HUD) 100 of the present invention will be described.
  • To simplify the following description, in real space the left-right direction as viewed facing the front of the vehicle 1 is defined as the X axis (the left direction is the positive X-axis direction), the vertical direction is defined as the Y axis (the vertically upward direction is the positive Y-axis direction), and the front-rear direction is defined as the Z axis (the forward direction is the positive Z-axis direction).
  • The HUD 100 is housed, for example, in the dashboard 3 of a vehicle 1 (one application example of a moving body).
  • the HUD 100 projects display light 200 (first display light 210 and second display light 220) indicating vehicle information and the like onto a part of a front windshield (an example of a projection member) 2 of the vehicle 1.
  • the front windshield 2 generates a predetermined eye box (not shown) by reflecting the first display light 210 and the second display light 220 toward the viewer 4 side.
  • By placing the viewpoint within the eye box, the viewer 4 can visually recognize the first virtual image 311 and the second virtual image 321 on the first virtual image displayable area 310 and the second virtual image displayable area 320 that the HUD 100 virtually generates ahead of the front windshield 2.
  • The eye box in the description of the present embodiment is defined as the viewing area in which the entire first virtual image 311 generated by the first display light 210 and the entire second virtual image 321 generated by the second display light 220 can be visually recognized.
  • The first virtual image displayable area 310 shown in FIG. 1 is virtually arranged such that the angle it forms with the road surface 5L, about the horizontal direction as viewed from the viewer 4, is approximately 90 degrees.
  • the first virtual image displayable area 310 is virtually arranged at a position 5 [m] to 10 [m] away from the eyebox in the forward direction (the traveling direction of the vehicle 1).
  • The first virtual image displayable area 310 may be provided with an inclination of about ±20 degrees from the angle at which it forms approximately 90 degrees with the road surface 5L.
  • The second virtual image displayable area 320 shown in FIG. 1 is virtually arranged such that the angle it forms with the road surface 5L, about the horizontal direction as viewed from the viewer 4, is approximately 0 degrees.
  • the second virtual image displayable area 320 is virtually arranged so as to be substantially parallel to the road surface 5L.
  • For example, the second virtual image displayable area 320 is provided so as to extend from a position 10 [m] ahead of the eye box to a position 100 [m] ahead.
  • The second virtual image displayable area 320 may be provided tilted within a range from −10 degrees (the counterclockwise direction on the paper surface of FIG. 1) to +45 degrees (the clockwise direction on the paper surface of FIG. 1) relative to an angle parallel to the road surface 5L of FIG. 1.
  • FIG. 2 is a diagram illustrating an example of a landscape, a first virtual image 311, and a second virtual image 321 that can be seen from the viewer 4 sitting in the driver's seat of the vehicle 1 including the HUD 100 illustrated in FIG. 1.
  • When viewed from the viewer 4, the HUD 100 generates, for example, a rectangular first virtual image displayable area 310, which is the maximum area in which the first virtual image 311 can be displayed, and a trapezoidal second virtual image displayable area 320, which is the maximum area in which the second virtual image 321 can be displayed and whose upper side, partially overlapping the first virtual image displayable area 310, is viewed as shorter than its lower side.
  • The first virtual image displayable area 310 and the second virtual image displayable area 320 themselves are invisible or difficult for the viewer 4 to see; the viewer 4 visually recognizes the first virtual image element V1 of the first virtual image 311 displayed on the first virtual image displayable area 310 and the second virtual image element V2 of the second virtual image 321 displayed on the second virtual image displayable area 320.
  • The first virtual image 311 displayed in the first virtual image displayable area 310 includes first virtual image elements V1 that are not associated with a specific target in the real scene and do not add information to a specific target in the real scene, such as vehicle information including the vehicle speed and the distance to a branch point.
  • The second virtual image 321 displayed in the second virtual image displayable area 320 is displayed near or superimposed on a specific target in the real scene so as to emphasize the specific target or add information to it, and includes second virtual image elements V2 such as an arrow image that adds route guidance information to the road surface 5L and a linear image that emphasizes a white line (a specific target) at the time of a lane departure warning (LDW: Lane Departure Warning).
  • The specific target is, for example, a road surface, an obstacle present on or near the road surface, a traffic sign, a traffic signal, a building, or the like.
  • The first virtual image displayable area 310 includes an area in which the first virtual image 311 including the first virtual image element V1 is displayed, and a virtual image blank area 311a surrounding the first virtual image 311 in the vertical direction (Y-axis direction) and the horizontal direction (X-axis direction).
  • the first virtual image 311 has a rectangular shape when viewed from the viewer 4.
  • The HUD 100 of the present invention performs an image correction process, described later, so as to cancel the distortion of the first virtual image 311 caused by a change in the pitching angle 1p of the vehicle 1.
  • various detection units for detecting the pitching angle 1p of the vehicle 1 will be described.
  • The various detection units described below may be provided in the HUD 100, may be provided in the vehicle 1, or may be detachably connected, by wire or wirelessly, to the vehicle 1 or the HUD 100; a portable terminal having such detection units may also be connected by wire or wirelessly.
  • The vehicle 1 is equipped with a front information detection unit 6 that acquires real scene information F in front of the vehicle 1.
  • The real scene information F includes at least position information of a specific target in the real scene in front of the vehicle 1 that is emphasized or given added information by displaying the second virtual image 321, and is obtained from, for example, one or more cameras, an infrared sensor, or the like.
  • The vehicle attitude detection unit 7 includes, for example, a three-axis acceleration sensor (not shown), estimates the pitching angle 1p (vehicle attitude) of the vehicle 1 with respect to the horizontal plane by analyzing the three-axis acceleration detected by the sensor, and outputs vehicle attitude information G including information on the pitching angle 1p of the vehicle 1 to the HUD 100 (display control unit 70).
  • Instead of the three-axis acceleration sensor described above, the vehicle attitude detection unit 7 may be configured by a height sensor (not shown) arranged near the suspension of the vehicle 1.
  • In this case, the vehicle attitude detection unit 7 estimates the pitching angle 1p of the vehicle 1 by analyzing the height of the vehicle 1 from the ground detected by the height sensor, and outputs vehicle attitude information G including information on the pitching angle 1p of the vehicle 1 to the HUD 100 (display control unit 70).
  • the vehicle posture detection unit 7 may include an imaging camera (not shown) that images the outside of the vehicle 1 and an image analysis unit (not shown) that analyzes the captured image.
  • the vehicle posture detection unit 7 estimates the pitching angle 1p (vehicle posture) of the vehicle 1 from the temporal change of the landscape included in the captured image. Note that the method by which the vehicle attitude detection unit 7 determines the pitching angle 1p of the vehicle 1 is not limited to the method described above, and the pitching angle 1p of the vehicle 1 may be determined using a known sensor or analysis method.
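  • As an illustration of the acceleration-based attitude estimation described above, the following is a minimal sketch, not taken from the patent, of how a pitching angle could be derived from a three-axis acceleration sensor; the axis convention and the function name are assumptions made only for this example.

```python
import math

def estimate_pitch_deg(ax: float, ay: float, az: float) -> float:
    """Estimate the pitching angle (degrees) of the vehicle from the gravity
    direction measured by a three-axis acceleration sensor.

    Assumed convention (illustrative only): when the vehicle is level, gravity
    appears mainly on the sensor's z axis; a pitch-up shifts part of gravity
    onto the longitudinal (x) axis.
    """
    # atan2 of the longitudinal component against the remaining components
    # gives the tilt of the vehicle body about the lateral axis.
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

# Example: a small forward tilt shows up as a small positive pitch angle.
print(round(estimate_pitch_deg(0.17, 0.0, 9.8), 1))  # roughly 1.0 degree
```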
  • FIG. 3 is a diagram showing an example of the configuration of the HUD 100 shown in FIG. 1. The HUD 100 includes, for example, a first image display unit 10, a second image display unit 20, a projection unit 30 (a reflection unit 31, a display synthesis unit 32, and a concave mirror 33), an angle adjustment unit (actuator) 40, and a display control unit 70.
  • The HUD 100 is generally housed in the dashboard of the vehicle 1, but all or part of the first image display unit 10, the second image display unit 20, the reflection unit 31, the actuator 40, the display synthesis unit 32, the concave mirror 33, and the display control unit 70 may be arranged outside the dashboard.
  • The HUD 100 (display control unit 70) is connected to a bus 8 including an in-vehicle LAN (Local Area Network) mounted on the vehicle 1, and the above-described real scene information F and vehicle attitude information G can be input from the bus 8.
  • The first image display unit 10 in FIG. 3 is mainly composed of, for example, a first projection unit 11 including a projector using a reflective display device such as a DMD or LCoS, and a first screen 12 that receives the projection light from the first projection unit 11, displays the first image 14, and emits the first display light 210 representing the first image 14 toward the reflection unit 31.
  • The first image display unit 10 displays the first image 14 on the first screen 12 based on image data D input from the display control unit 70 described later, thereby displaying the first virtual image 311 on the first virtual image displayable area 310 virtually generated in front of the viewer 4.
  • FIG. 4 is a front view of the first screen 12.
  • To simplify the description, the left-right direction of the first screen 12 is defined as the dx axis (the left direction is the positive dx-axis direction), and the up-down direction is defined as the dy axis (the downward direction is the positive dy-axis direction).
  • The position in the X-axis direction of the first virtual image 311 that the viewer 4 shown in FIG. 2 visually recognizes from the driver's seat of the vehicle 1 corresponds to the position in the dx-axis direction of the first image 14 displayed on the first screen 12 shown in FIG. 4.
  • Similarly, the position in the Y-axis direction of the first virtual image 311 viewed by the viewer 4 shown in FIG. 2 from the driver's seat of the vehicle 1 corresponds to the position in the dy-axis direction of the first image 14 displayed on the first screen 12 shown in FIG. 4.
  • The relationship between the XY coordinate axes of the real space described above and the dx-dy coordinate axes used in the description of the first screen 12 is determined by the arrangement of the optical members (the first image display unit 10, the second image display unit 20, the reflection unit 31, the actuator 40, the display synthesis unit 32, and the concave mirror 33) in the HUD 100, and is not limited to the relationship described above.
  • The first screen 12 in FIG. 3 has an area 13 that is the maximum area in which the first image 14 can be displayed, as shown in FIG. 4.
  • this region 13 is also referred to as a first image displayable region 13 corresponding to the first virtual image displayable region 310 that is virtually generated by the HUD 100.
  • The first image 14 is displayed within the first image displayable area 13 according to the image data D, and one or a plurality of first image elements M1 are arranged in part or all of the first image 14.
  • The display control unit 70 described later adjusts the size and shape of the first image 14 in the first image displayable area 13 based on the image data D, and can thereby adjust the size and shape of the first virtual image 311 in the first virtual image displayable area 310.
  • The first image 14 shown in FIG. 4 has a rectangular shape corresponding to the first virtual image 311 in FIG. 2, and straight grid lines parallel to the horizontal direction (dx-axis direction) and straight grid lines parallel to the vertical direction (dy-axis direction) are added to it. In practice, however, because of the curved surface shape of the projection member (front windshield 2) onto which the first display light 210 is projected, the shape of the first image 14 displayed on the first screen 12 and the shape of the first virtual image 311 viewed by the viewer 4 are not equal. That is, when the shape and grid lines of the first virtual image 311 in FIG. 2 are used as a reference, the shape of the first image 14 differs from a rectangle and its grid lines are not straight but bent, or not parallel to one another; for ease of explanation, however, the first image 14 is illustrated here with the same shape and grid lines as the first virtual image 311.
  • The width of the lower end of the first image 14 is defined as the lower end width dx1, the width of the upper end of the first image 14 is defined as the upper end width dx2, and the height of the first image 14 is defined as the vertical width dy.
  • During normal travel, the first image display unit 10 of the present embodiment displays the first image 14 including one or more first image elements M1 based on the image data D; when the pitching angle 1p of the vehicle 1 has changed, it executes an image correction process, described later, that enlarges or reduces part of the first image 14 so that the first virtual image 311 is viewed with a shape and size close to those during normal travel.
  • The second image display unit 20 in FIG. 3 has the same configuration as the first image display unit 10 described above, and is mainly composed of a second projection unit 21 and a second screen 22 that receives the projection light from the second projection unit 21, displays the second image 24, and emits the second display light 220 representing the second image 24 toward the reflection unit 31. Similarly to the first screen 12 shown in FIG. 4, the second screen 22 in FIG. 3 has a maximum area in which the second image 24 can be displayed; this area corresponds to the second virtual image displayable area 320 and is also referred to as the second image displayable area 23.
  • the second image displayable area 23 displays the second image 24 in the area according to the image data D. In the second image 24, one or a plurality of second image elements (not shown) are arranged in part or in whole.
  • The reflection unit 31 (projection unit 30) of FIG. 3 is formed of, for example, a flat plane mirror, is arranged at an inclination on the optical path of the second display light 220 traveling from the second image display unit 20 toward the viewer 4, and reflects the second display light 220 emitted from the second image display unit 20 toward the display synthesis unit 32.
  • the reflection unit 31 includes an actuator 40 that rotates the reflection unit 31.
  • the reflection part 31 may have a curved surface instead of a plane.
  • The actuator 40 is, for example, a stepping motor or a DC motor, and, under the control of the display control unit 70 described later, rotates the reflection unit 31 based on the vehicle attitude information G detected by the vehicle attitude detection unit 7, thereby adjusting the angle and position of the second virtual image displayable area 320.
  • The display synthesis unit 32 (projection unit 30) in FIG. 3 is configured by, for example, a flat half mirror in which a semi-transmissive reflective layer such as a metal reflective film or a dielectric multilayer film is formed on one surface of a translucent substrate.
  • The display synthesis unit 32 reflects the second display light 220 from the second image display unit 20, which has been reflected by the reflection unit 31, toward the concave mirror 33, and transmits the first display light 210 from the first image display unit 10 toward the concave mirror 33.
  • The transmittance of the display synthesis unit 32 is, for example, 50%, and the luminance of the first virtual image 311 and the second virtual image 321 may be adjusted by appropriately adjusting this transmittance.
  • The concave mirror 33 (projection unit 30) of FIG. 3 is arranged on the optical path of the first display light 210 and the second display light 220 traveling from the first image display unit 10 and the second image display unit 20 toward the viewer 4, and reflects the first display light 210 and the second display light 220 emitted from the first image display unit 10 and the second image display unit 20 toward the front windshield 2.
  • The first image display unit 10 and the second image display unit 20 are arranged so that the optical path length of the first display light 210 from the first screen 12 of the first image display unit 10 to the concave mirror 33 is shorter than the optical path length of the second display light 220 from the second screen 22 of the second image display unit 20 to the concave mirror 33, whereby the first virtual image 311 generated by the first image display unit 10 is formed at a position closer to the eye box than the second virtual image 321 generated by the second image display unit 20.
  • The concave mirror 33 typically has a function of enlarging, in cooperation with the front windshield 2, the first display light 210 and the second display light 220 generated by the first image display unit 10 and the second image display unit 20; a function of correcting the distortion of the first virtual image 311 and the second virtual image 321 caused by the curved surface of the front windshield 2 so that the virtual images are viewed without distortion; and a function of forming the first virtual image 311 and the second virtual image 321 at positions separated from the viewer 4 by predetermined distances.
  • FIG. 5 shows a schematic configuration example of the display control unit 70.
  • the display control unit 70 includes, for example, a processing unit 71, a storage unit 72, and an interface 73.
  • The processing unit 71 is configured by, for example, one or a plurality of CPUs and RAMs, the storage unit 72 is configured by, for example, a ROM, and the interface 73 is configured by an input/output communication interface connected to the bus 8.
  • The interface 73 can acquire vehicle information, the above-described real scene information F, the vehicle attitude information G, and the like via the bus 8. The storage unit 72 can store data for generating the image data D based on the input vehicle information, real scene information F, and vehicle attitude information G, and data for generating the drive data T based on the input vehicle attitude information G; using these data, the processing unit 71 can generate the image data D and the drive data T.
  • The interface 73 can acquire the vehicle attitude information G including information on the pitching angle 1p of the vehicle 1 from the vehicle attitude detection unit 7 via the bus 8, for example, and thus also functions as a vehicle attitude information acquisition unit.
  • The display control unit 70 does not need to be entirely inside the HUD 100; part or all of its functions may be provided on the vehicle 1 side outside the HUD 100.
  • an operation example of the HUD 100 of the present embodiment will be described.
  • FIG. 6 is a flowchart showing an example of the operation of the HUD 100 of the present embodiment.
  • The HUD 100 starts the processing described below, for example, when the vehicle 1 is activated, when electric power is supplied to the electronic devices of the vehicle 1, or when a predetermined time has elapsed after the activation of the vehicle 1 or the power supply to the electronic devices of the vehicle 1.
  • In step S1, the processing unit 71 acquires the vehicle attitude information G from the interface 73.
  • In step S2, the processing unit 71 determines whether the vehicle attitude information G acquired in step S1 is equal to or greater than a predetermined threshold value TH; specifically, it determines whether the pitching angle 1p of the vehicle 1 is equal to or greater than the predetermined threshold value TH. If the processing unit 71 determines based on the vehicle attitude information G that the pitching angle 1p of the vehicle 1 is large (YES in step S2), the process proceeds to step S3; if it determines that the pitching angle 1p of the vehicle 1 is small (NO in step S2), the process proceeds to step S4.
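  • The branching of steps S1 to S4 in FIG. 6 can be summarised in the following minimal sketch; the `Hud` object and its helper names (`keystone_correct`, `correct_vertical_width`) are placeholders and not the patent's actual API, and the use of the absolute value of the pitching angle is an assumption.

```python
PITCH_THRESHOLD_DEG = 1.0  # corresponds to the predetermined threshold TH (value assumed)

def update_display(hud, pitching_angle_deg: float) -> None:
    """One pass of the correction flow of FIG. 6 (steps S1 to S4), sketched.

    Step S1: the pitching angle from the vehicle attitude information G is passed in.
    Step S2: compare it with the threshold TH.
    Step S3: if the angle is large, keystone-correct the first image.
    Step S4: otherwise, correct only the vertical width of the first image.
    """
    if abs(pitching_angle_deg) >= PITCH_THRESHOLD_DEG:       # step S2: YES
        hud.keystone_correct(pitching_angle_deg)              # step S3
    else:                                                     # step S2: NO
        hud.correct_vertical_width(pitching_angle_deg)        # step S4
```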
  • In step S3, the processing unit 71 performs keystone correction of the first image 14 so as to suppress or cancel the trapezoidal distortion of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1.
  • In other words, even after the pitching angle 1p of the vehicle 1 has changed, the processing unit 71 performs keystone correction so that the first virtual image 311 is viewed with the shape and size with which it is viewed during normal travel, in which the pitching angle 1p of the vehicle 1 is substantially zero.
  • the processing unit 71 corrects the lower end horizontal width dx1, the upper end horizontal width dx2, and the vertical width dy of the first image 14 displayed on the first screen 12 according to the pitching angle 1p of the vehicle 1.
  • The processing unit 71 corrects the lower end width dx1 and/or the upper end width dx2 of the first image 14 to reduce the difference between the upper side and the lower side caused by the trapezoidal distortion of the first virtual image 311, so that the first virtual image 311 can be approximated to a rectangular shape.
  • FIG. 7A shows a first image 14 displayed on the first screen 12 during normal traveling in which the pitching angle 1p of the vehicle 1 with respect to the road surface 5L is substantially zero.
  • the first image 14 is also referred to as a reference image 14r.
  • FIG. 7B shows a first image 14u when the vehicle 1 is in a pitch-up state with respect to the road surface 5L.
  • At this time, the processing unit 71 makes the ratio of the upper end width dx2u to the lower end width dx1u (dx2u / dx1u) smaller than the ratio of the reference upper end width dx2r to the reference lower end width dx1r during normal travel (dx2r / dx1r). As a result, the difference (DX2 > DX1) between the lower end width DX1 and the upper end width DX2 of the first virtual image 311 caused by the pitch-up can be reduced or eliminated.
  • FIG. 7C shows a first image 14d when the vehicle 1 is in a pitch-down state with respect to the road surface 5L.
  • At this time, the processing unit 71 makes the ratio of the upper end width dx2d to the lower end width dx1d (dx2d / dx1d) larger than the ratio of the reference upper end width dx2r to the reference lower end width dx1r during normal travel (dx2r / dx1r). Thereby, the difference (DX1 > DX2) between the lower end width DX1 and the upper end width DX2 caused by the pitch-down can be reduced or eliminated.
  • The processing unit 71 also sets the vertical width dy (dyu, dyd) to be longer than the reference vertical width dyr during normal travel as the absolute value of the pitching angle 1p of the vehicle 1 with respect to the road surface 5L increases. Thereby, the reduction in the vertical width DY of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1 can be reduced or eliminated.
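  • The width corrections described with reference to FIGS. 7A to 7C can be sketched as follows; the exact relationship between the pitching angle and the correction amount is not specified in the text, so the linear gains below are placeholders chosen only for illustration.

```python
def keystone_widths(pitch_deg: float,
                    dx1_r: float, dx2_r: float, dy_r: float,
                    ratio_gain: float = 0.02, height_gain: float = 0.01):
    """Return corrected (lower end width dx1, upper end width dx2, vertical
    width dy) of the first image 14 for a given pitching angle.

    pitch_deg > 0 : pitch-up   -> dx2/dx1 made smaller than the reference ratio
    pitch_deg < 0 : pitch-down -> dx2/dx1 made larger than the reference ratio
    In both cases the vertical width dy is lengthened as |pitch| grows.
    The gains are illustrative assumptions, not values from the patent.
    """
    ref_ratio = dx2_r / dx1_r                              # dx2r / dx1r during normal travel
    ratio = ref_ratio * (1.0 - ratio_gain * pitch_deg)     # smaller on pitch-up, larger on pitch-down
    dy = dy_r * (1.0 + height_gain * abs(pitch_deg))       # longer than the reference vertical width
    dx1 = dx1_r
    dx2 = dx1 * ratio
    return dx1, dx2, dy
```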
  • Further, when the processing unit 71 performs the keystone correction of the first image 14, a magnification may be set for each area of the first image 14; that is, the horizontal interval Gx1 and the vertical interval Gy1 of the grid (region G1) corresponding to the lower part of the first virtual image 311 in the vertical direction may be set differently from the horizontal interval Gx2 and the vertical interval Gy2 of the grid (region G2) corresponding to the upper part.
  • Specifically, when the vehicle 1 is in the pitch-up state, the processing unit 71 enlarges the lower grid G1u of the first image 14 at a larger magnification than the upper grid G2u, as shown in FIG. 7B. Therefore, when the vehicle 1 is in the pitch-up state, the horizontal interval Gx1u and the vertical interval Gy1u of the lower grid G1u of the first image 14 are larger than the horizontal interval Gx2u and the vertical interval Gy2u of the upper grid G2u.
  • the processing unit 71 enlarges the lower grid G1d of the first image 14 at a smaller magnification than the upper grid G2d, as shown in FIG. 7C. Therefore, when the vehicle 1 is in a pitch-down state, the horizontal interval Gx1d and the vertical interval Gy1d of the lower grid G1d of the first image 14 are smaller than the horizontal interval Gx2d and the vertical interval Gy2d of the upper grid G2d.
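  • As a sketch of the region-by-region magnification described above, the following assigns a different scale factor to each horizontal band (grid row) of the first image, larger at the bottom for pitch-up and larger at the top for pitch-down; the linear interpolation between rows and the gain values are assumptions made only for this example.

```python
def row_scale_factors(pitch_deg: float, n_rows: int,
                      base_gain: float = 0.01, tilt_gain: float = 0.02):
    """Per-row magnification for the first image 14, bottom row first.

    Every row is enlarged slightly (base_gain); in addition, for pitch-up
    (pitch_deg > 0) the lower rows receive the larger extra magnification,
    and for pitch-down (pitch_deg < 0) the upper rows do. All gains are
    illustrative assumptions, not values taken from the patent.
    """
    scales = []
    for i in range(n_rows):
        # position runs from +0.5 at the bottom row to -0.5 at the top row
        position = 0.5 - i / max(n_rows - 1, 1)
        scales.append(1.0 + base_gain * abs(pitch_deg) + tilt_gain * pitch_deg * position)
    return scales

# Pitch-up example: lower rows are enlarged at a larger magnification than upper rows.
print([round(s, 3) for s in row_scale_factors(2.0, 4)])  # [1.04, 1.027, 1.013, 1.0]
```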
  • In step S4, the processing unit 71 corrects the vertical width DY of the first virtual image 311 so as to suppress or cancel the reduction in the vertical width DY of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1. In other words, even after the pitching angle 1p of the vehicle 1 has changed, the processing unit 71 corrects the vertical width DY of the first virtual image 311 so that the first virtual image 311 is viewed with the vertical width DY with which it is viewed when the pitching angle 1p of the vehicle 1 is substantially zero.
  • the processing unit 71 sets the vertical width dy (dyu, dyd) to be longer than the reference vertical width dyr during normal traveling as the absolute value of the pitching angle 1p of the vehicle 1 with respect to the road surface 5L increases.
  • the above is the image correction processing executed by the display control unit 70 of the present embodiment.
  • The angle of the second virtual image displayable area 320 with respect to the road surface 5L also changes according to the pitching angle 1p of the vehicle 1.
  • Therefore, the processing unit 71 of the present embodiment adjusts the angle of the second virtual image displayable area 320 according to the acquired vehicle attitude information G, thereby reducing or cancelling the unintended change in the angle of the second virtual image displayable area 320 with respect to the road surface 5L. The method is described below.
  • the display control unit 70 determines drive data T including the drive amount of the actuator 40 corresponding to the acquired vehicle attitude information G. Specifically, the display control unit 70 reads table data stored in advance in the storage unit 72 and determines drive data T corresponding to the acquired vehicle attitude information G. Note that the display control unit 70 may obtain the drive data T from the vehicle attitude information G by calculation using a preset calculation formula.
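  • A minimal sketch of determining the drive data T from the vehicle attitude information G using a pre-stored table with linear interpolation between entries is given below; the table values and the bisect-based lookup are assumptions, not the patent's stored data or method of calculation.

```python
import bisect

# Hypothetical pre-stored table: pitching angle [deg] -> actuator drive amount [steps]
DRIVE_TABLE = [(-4.0, -200), (-2.0, -100), (0.0, 0), (2.0, 100), (4.0, 200)]

def drive_data_from_attitude(pitch_deg: float) -> int:
    """Determine the drive amount (drive data T) for the actuator 40 from the
    pitching angle contained in the vehicle attitude information G, by table
    lookup with linear interpolation (an assumed scheme)."""
    angles = [a for a, _ in DRIVE_TABLE]
    steps = [s for _, s in DRIVE_TABLE]
    if pitch_deg <= angles[0]:
        return steps[0]
    if pitch_deg >= angles[-1]:
        return steps[-1]
    i = bisect.bisect_right(angles, pitch_deg)
    a0, a1 = angles[i - 1], angles[i]
    s0, s1 = steps[i - 1], steps[i]
    t = (pitch_deg - a0) / (a1 - a0)
    return round(s0 + t * (s1 - s0))

print(drive_data_from_attitude(1.0))  # 50
```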
  • the display control unit 70 drives the actuator 40 based on the determined drive data T.
  • the display control unit 70 drives the actuator 40 to rotate the reflecting unit 31 located on the optical path of the second display light 220 emitted from the second image display unit 20.
  • Thereby, the relative angle 330 of the second virtual image displayable area 320 with respect to the first virtual image displayable area 310 changes.
  • For example, the display control unit 70 may control the actuator 40 to rotate the reflection unit 31 so that the second virtual image displayable area 320 remains parallel to the road surface 5L even when the vehicle attitude of the vehicle 1 changes.
  • FIGS. 8A, 8B, and 8C are diagrams showing how the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 generated by the HUD 100 of the present embodiment changes based on the pitching angle 1p of the vehicle 1.
  • FIG. 8A is a diagram illustrating an angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 when the vehicle 1 has a pitching angle 1p parallel to the road surface 5L.
  • In this case, the second virtual image displayable area 320 is adjusted to be substantially parallel to the road surface 5L, and the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 is an angle 330r of approximately 90 degrees. The angle 330r formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 illustrated in FIG. 8A is hereinafter also referred to as the reference angle 330r.
  • FIG. 8B is a diagram illustrating an angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 when the vehicle 1 is pitched up.
  • In this case, for example, the second virtual image displayable area 320 is adjusted so as to remain substantially parallel to the road surface 5L, as shown in FIG. 8B, and the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 is adjusted accordingly.
  • FIG. 8C is a diagram illustrating an angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 when the vehicle 1 is in the pitch down state.
  • In this case as well, for example, the second virtual image displayable area 320 is adjusted so as to remain substantially parallel to the road surface 5L, as shown in FIG. 8C, and the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 is adjusted accordingly.
  • Since the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 changes based on the vehicle attitude of the vehicle 1 in this way, the viewer 4 can recognize whether a visible virtual image is information displayed in the first virtual image displayable area 310 or in the second virtual image displayable area 320, and can therefore recognize the first virtual image 311 and the second virtual image 321 displayed in the respective areas more three-dimensionally.
  • In the present embodiment, the second virtual image displayable area 320 is generated so as to be tilted toward the horizontal direction relative to the first virtual image displayable area 310, and its angle with respect to the real scene is adjusted by driving the actuator 40.
  • For the virtual image display surface tilted toward the horizontal direction (the second virtual image displayable area 320), a given change in the angle of the display surface gives the viewer 4 a stronger impression than the same change for the virtual image display surface tilted toward the vertical direction (the first virtual image displayable area 310).
  • Therefore, by adjusting the angle of the virtual image display surface tilted toward the horizontal direction (the second virtual image displayable area 320), the second virtual image 321 displayed in the second virtual image displayable area 320 and the first virtual image 311 displayed in the first virtual image displayable area 310 become easy to distinguish, and the first virtual image 311 and the second virtual image 321 displayed in the first virtual image displayable area 310 and the second virtual image displayable area 320, respectively, can be recognized more three-dimensionally.
  • As described above, the HUD 100 is mounted on the vehicle 1 and projects the first display light 210 onto the front windshield 2, thereby displaying the first virtual image 311 in the first virtual image displayable area 310 superimposed on the real scene outside the vehicle 1. The HUD 100 includes: the first image display unit 10, which has the first image displayable area 13 corresponding to the first virtual image displayable area 310, displays the first image 14 in the first image displayable area 13, and emits the first display light 210 from the area 13; the projection unit 30, which directs the first display light 210 emitted from the first image display unit 10 toward the front windshield 2; the interface 73, which acquires the vehicle attitude information G including information on the vehicle attitude of the vehicle 1; and the display control unit 70, which performs keystone correction on the first image 14 based on the vehicle attitude information G so as to suppress or cancel the trapezoidal distortion of the first virtual image 311 viewed by the viewer 4.
  • The display control unit 70 may adjust the size of the first image 14 so that the aspect ratio of the first virtual image 311 after the keystone correction is the same as the aspect ratio of the first virtual image 311 before the keystone correction. By keeping the aspect ratio constant before and after the keystone correction, the display of the first virtual image 311 can be continued without the correction making the display look unnatural.
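  • The aspect-ratio constraint mentioned above can be sketched as follows: after the keystone correction, the vertical width of the first image is rescaled so that the ratio of width to height of the resulting first virtual image matches the pre-correction value. Using the mean of the upper and lower widths as "the width" is an assumption; the patent only states that the aspect ratio is kept the same.

```python
def preserve_aspect_ratio(dx1: float, dx2: float, dy: float,
                          ref_dx1: float, ref_dx2: float, ref_dy: float):
    """Rescale the keystone-corrected vertical width dy so that the aspect
    ratio (mean width / height) equals the pre-correction ratio.

    The averaging of the upper and lower widths is an illustrative choice,
    not a rule taken from the patent.
    """
    ref_ratio = ((ref_dx1 + ref_dx2) / 2.0) / ref_dy          # aspect ratio before correction
    corrected_ratio = ((dx1 + dx2) / 2.0) / dy                # aspect ratio after keystone correction
    dy_adjusted = dy * (corrected_ratio / ref_ratio)          # restore the reference ratio
    return dx1, dx2, dy_adjusted
```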
  • When the change in the vehicle attitude of the vehicle 1 is equal to or less than the predetermined threshold value TH, the display control unit 70 may enlarge only the vertical width DY of the first virtual image 311 as viewed from the viewer 4. Thereby, part of the processing load of the display control unit 70 can be reduced.
  • In the embodiment described above, the actuator 40 rotates the reflection unit 31 positioned on the optical path of the second display light 220 leading to the display synthesis unit 32, which directs the first display light 210 and the second display light 220 in the same direction, thereby changing the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320; however, the display synthesis unit 32 may instead be rotated by the actuator 40.
  • Alternatively, the HUD 100 of the present invention may change the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 by rotating the image display surface (first screen 12) of the first image display unit 10 with the actuator 40.
  • The actuator 40 does not need to use the center of the reflection unit 31, of the display synthesis unit 32, or of the image display surface (first screen 12) of the first image display unit 10 as the rotation axis AX; a predetermined position on each member may be used as the rotation axis AX, or the rotation axis AX may be provided at a position separated from each optical member.
  • FIG. 9 shows an example in which the real scene and the first virtual image 311 and the second virtual image 321 displayed by the modified example of the HUD 100 shown in FIG. 2 are visually recognized when facing the front of the vehicle 1 from the driver's seat.
  • As shown in FIG. 9, in the HUD 100 of the present invention, the first virtual image displayable area 310 generated by the first image display unit 10 and the second virtual image displayable area 320 generated by the second image display unit 20 may be viewed with a distance between them.
  • Specifically, the HUD 100 in this modified example may be configured by separating, on the display synthesis unit 32, the region on which the first display light 210 from the first image display unit 10 is incident from the region on which the second display light 220 from the second image display unit 20 is incident.
  • In the embodiment described above, the first image display unit 10 that generates the first virtual image displayable area 310 and the second image display unit 20 that generates the second virtual image displayable area 320 are provided separately, but the image display unit may be a single unit.
  • For example, the HUD 100 in this modified example may project projection light from a single projection unit (not shown) onto a plurality of screens (image display surfaces) (not shown) and rotate one of the screens with an actuator, thereby adjusting the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320.
  • In the embodiment described above, the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 is adjusted by adjusting the angle of the second virtual image displayable area 320 with respect to the real scene; however, the angle 330 may instead be adjusted by adjusting the angle of the first virtual image displayable area 310 with respect to the real scene.
  • The first image display unit 10 may be a transmissive display panel such as a liquid crystal display element, a self-luminous display panel such as an organic EL element, or a scanning display device that scans with laser light.

Abstract

The present invention is capable of improving the visibility of a virtual image. A projection unit 30 projects first display light 210 of a first image 14 displayed by a first image display unit 10 toward a projection member, thereby displaying a first virtual image 311 based on the first image 14. Vehicle attitude information G, including information on the vehicle attitude of a vehicle 1, is acquired from an interface 73. A display control unit 70 performs keystone correction on the first image 14 on the basis of the vehicle attitude information G so as to suppress or cancel the trapezoidal distortion of the first virtual image 311 viewed by a viewer 4.

Description

Head-up display
The present invention relates to a head-up display that displays a virtual image.
For example, the head-up display of Patent Document 1 includes an image display unit that displays an image, and a projection unit that projects the image onto a front windshield (an example of a projection member) positioned in front of the viewer, so that a virtual image based on the image is displayed on the far side of the front windshield (outside the vehicle) as seen by the viewer.
JP 2012-0335745 A
However, in a vehicle equipped with a head-up display, the pitching angle with respect to the road surface changes depending on the road surface condition and the driving operation. The position where the virtual image is displayed changes according to the change in the pitching angle of the vehicle. Specifically, the position where the virtual image is displayed rotates about a predetermined axis in conjunction with a change in the pitching angle of the vehicle. As a result, the virtual image is viewed as if rotated about the left-right direction as seen from the viewer. This phenomenon will be specifically described with reference to FIGS. 10 to 12.
FIGS. 10, 11 and 12 are diagrams for explaining the difference in the appearance of the virtual image 510 when the pitching angle 1p of the vehicle with respect to the road surface 502 differs. The virtual image 510r shown in FIG. 10(b), the virtual image 510u shown in FIG. 11(b), and the virtual image 510d shown in FIG. 12(b) are drawn with different shapes, but each illustrates how the same virtual image 510, which is viewed as a rectangle by the viewer during normal travel (when the pitching angle 1p of the vehicle with respect to the road surface 502 is zero), is viewed with a different shape depending on the pitching angle 1p of the vehicle. In FIGS. 11(b) and 12(b), the amount of change in the shape of the virtual image 510 is exaggerated.
FIG. 10 is a diagram showing a state during normal travel in which the pitching angle 1p of the vehicle with respect to the road surface 502 is zero. The virtual image 510r displayed by the head-up display is displayed such that the angle 511r formed between the virtual image 510r and the road surface 502 is approximately 90 degrees, as shown in FIG. 10(a). The virtual image 510r displayed by the head-up display in FIG. 10(b) has a rectangular shape, and straight grid lines parallel to the left-right direction (X-axis direction) and straight grid lines parallel to the up-down direction (Y-axis direction) are added to make the change in shape relative to FIGS. 11(b) and 12(b), described later, easier to see. Since the virtual image 510r viewed during normal travel in FIG. 10(b) is rectangular, the lower end width (also referred to as the reference lower end width) 512r and the upper end width (also referred to as the reference upper end width) 513r of the virtual image 510r are equal.
FIG. 11 is a diagram showing a pitch-up state in which the pitching angle 1p of the vehicle is inclined upward with respect to the road surface 502. The angle 511u formed between the virtual image 510u displayed by the head-up display and the road surface 502 is larger than 90 degrees, as shown in FIG. 11(a). The virtual image 510u displayed by the head-up display in FIG. 11(b) is viewed by the viewer as a trapezoid whose lower side is short. Specifically, the virtual image 510u viewed at the time of pitch-up in FIG. 11(b) is viewed such that the lower end width 512u is shorter than the upper end width 513u, and such that the vertical width 514u is shorter than the vertical width 514r of the virtual image 510r viewed during normal travel shown in FIG. 10(b).
FIG. 12 is a diagram showing a pitch-down state in which the pitching angle 1p of the vehicle is inclined downward with respect to the road surface 502. The angle 511d formed between the virtual image 510d displayed by the head-up display and the road surface 502 is smaller than 90 degrees, as shown in FIG. 12(a). The virtual image 510d displayed by the head-up display in FIG. 12(b) is viewed by the viewer as a trapezoid whose lower side is long. Specifically, the virtual image 510d viewed at the time of pitch-down in FIG. 12(b) is viewed such that the lower end width 512d is longer than the upper end width 513d, and such that the vertical width 514d is shorter than the vertical width 514r of the virtual image 510r viewed during normal travel shown in FIG. 10(b).
That is, even when the same virtual image is displayed, the virtual image 510 is viewed as having trapezoidal distortion due to the difference in the pitching angle 1p of the vehicle, so the visibility of the virtual image 510 may be reduced.
One object of the present invention is to provide a head-up display capable of improving the visibility of a virtual image.
A head-up display according to a first aspect of the present invention is mounted on a vehicle and projects first display light (210) onto a projection member that transmits and reflects light, thereby displaying a first virtual image (311) in a first virtual image displayable area (310) superimposed on the real scene outside the vehicle. The head-up display includes: a first image display unit (10) that has a first image displayable area (13) corresponding to the first virtual image displayable area and displays a first image (14) in the first image displayable area; a projection unit that directs the first display light emitted by the first image display unit toward the projection member; vehicle attitude information acquisition means (73) that acquires vehicle attitude information (G) including information on the vehicle attitude of the vehicle; and a display control unit (70) that performs keystone correction on the first image, based on the vehicle attitude information acquired by the vehicle attitude information acquisition means, so as to suppress the trapezoidal distortion occurring in the first virtual image. In the first aspect, the display control unit 70 performs keystone correction on the first image based on the vehicle attitude information G, so that the first virtual image, which would otherwise be viewed as having trapezoidal distortion due to a change in the attitude of the vehicle, can be viewed by the viewer with the trapezoidal distortion reduced or cancelled.
The visibility of the virtual image can be improved.
A diagram explaining an example of the first virtual image displayable area and the second virtual image displayable area generated by the head-up display of the present invention.
A diagram showing an example of how the real scene and the virtual images displayed by the head-up display shown in FIG. 1 are viewed when facing the front of the vehicle from the driver's seat.
A diagram showing an example of the configuration of the head-up display shown in FIG. 1.
A diagram showing an example in which the first image is displayed on the first screen shown in FIG. 2.
A diagram showing an example of the configuration of the control unit shown in FIG. 2.
A flowchart explaining the operation of the head-up display shown in FIG. 2.
A diagram showing an example in which the first image is displayed on the first screen shown in FIG. 2 when the pitching angle of the vehicle is such that the vehicle is substantially parallel to the road surface.
A diagram showing an example in which the first image is displayed on the first screen shown in FIG. 2 when the vehicle is in a pitch-up state.
A diagram showing an example in which the first image is displayed on the first screen shown in FIG. 2 when the vehicle is in a pitch-down state.
A diagram showing the arrangement of the first and second virtual image displayable areas generated by the head-up display shown in FIG. 2 when the pitching angle of the vehicle is such that the vehicle is substantially parallel to the road surface.
A diagram showing the arrangement of the first and second virtual image displayable areas generated by the head-up display shown in FIG. 2 when the vehicle is in a pitch-up state.
A diagram showing the arrangement of the first and second virtual image displayable areas generated by the head-up display shown in FIG. 2 when the vehicle is in a pitch-down state.
A diagram showing how the real scene and the virtual images displayed by a modified example of the head-up display shown in FIG. 2 are viewed when facing the front of the vehicle from the driver's seat.
In a conventional head-up display, (a) is a diagram showing the arrangement of the first virtual image displayable area when the pitching angle of the vehicle is such that the vehicle is substantially parallel to the road surface, and (b) is a diagram showing how the real scene and the virtual image are viewed from the driver's seat.
In a conventional head-up display, (a) is a diagram showing the arrangement of the first virtual image displayable area when the vehicle is in a pitch-up state, and (b) is a diagram showing how the real scene and the virtual image are viewed from the driver's seat.
In a conventional head-up display, (a) is a diagram showing the arrangement of the first virtual image displayable area when the vehicle is in a pitch-down state, and (b) is a diagram showing how the real scene and the virtual image are viewed from the driver's seat.
 The embodiments described below are provided to facilitate understanding of the present invention, and those skilled in the art should note that the present invention is not unduly limited by these embodiments.
 With reference to FIG. 1, the virtual first virtual image displayable area 310 and second virtual image displayable area 320 generated by the head-up display (hereinafter, HUD) 100 of the present invention will be described. To simplify the following description, as shown in FIG. 1, the left-right direction in real space as seen facing the front of the vehicle 1 is defined as the X axis (the left direction being the positive X direction), the vertical direction as the Y axis (upward being the positive Y direction), and the front-rear direction as the Z axis (forward being the positive Z direction).
 As shown in FIG. 1, the HUD 100 is housed, for example, in the dashboard 3 of a vehicle 1 (one application example of a moving body). The HUD 100 projects display light 200 (first display light 210 and second display light 220) representing vehicle information and the like onto a part of the front windshield (an example of a projection member) 2 of the vehicle 1. The front windshield 2 reflects the first display light 210 and the second display light 220 toward the viewer 4, thereby generating a predetermined eyebox (not shown). By placing the viewpoint within the eyebox, the viewer 4 can see the first virtual image 311 and the second virtual image 321 on the first virtual image displayable area 310 and the second virtual image displayable area 320 virtually generated by the HUD 100 ahead of the front windshield 2. In the description of this embodiment, the eyebox is defined as the viewing zone within which the entire first virtual image 311 generated by the first display light 210 and the entire second virtual image 321 generated by the second display light 220 can be seen.
 The first virtual image displayable area 310 shown in FIG. 1 is virtually arranged so that the angle it forms with the road surface 5L, about an axis in the left-right direction as seen from the viewer 4, is approximately 90 degrees. The first virtual image displayable area 310 is also virtually arranged at a position 5 m to 10 m ahead of the eyebox in the forward direction (the traveling direction of the vehicle 1). The first virtual image displayable area 310 may be provided tilted by about ±20 degrees from the angle at which it forms approximately 90 degrees with the road surface 5L.
 The second virtual image displayable area 320 shown in FIG. 1 is virtually arranged so that the angle it forms with the road surface 5L, about an axis in the left-right direction as seen from the viewer 4, is approximately 0 degrees. In other words, the second virtual image displayable area 320 is virtually arranged substantially parallel to the road surface 5L. The second virtual image displayable area 320 is provided so as to lie over the road from a position 10 m ahead of the eyebox to a position 100 m ahead. The second virtual image displayable area 320 may be provided tilted from the angle parallel to the road surface 5L of FIG. 1 by −10 degrees (the counterclockwise direction on the page of FIG. 1) to +45 degrees (the clockwise direction on the page of FIG. 1).
 FIG. 2 shows an example of the scenery and of the first virtual image 311 and second virtual image 321 seen by the viewer 4 sitting in the driver's seat of the vehicle 1 equipped with the HUD 100 shown in FIG. 1. The HUD 100 generates the first virtual image displayable area 310, which is the maximum area in which the first virtual image 311 can be displayed and which appears, for example, rectangular as seen from the viewer 4, and the second virtual image displayable area 320, which is the maximum area in which the second virtual image 321 can be displayed, partially overlaps the first virtual image displayable area 310, and is seen as a trapezoid whose upper side looks shorter than its lower side. Note that the first virtual image displayable area 310 and the second virtual image displayable area 320 themselves are not visible, or are barely visible, to the viewer 4; what the viewer 4 sees are the first virtual image element V1 of the first virtual image 311 displayed on the first virtual image displayable area 310 and the second virtual image element V2 of the second virtual image 321 displayed on the second virtual image displayable area 320.
 The first virtual image 311 displayed in the first virtual image displayable area 310 is neither associated with a specific object in the real scene nor adds information to such an object; it includes, for example, a first virtual image element V1 indicating vehicle information such as the speed of the vehicle 1 or the distance to a junction. The second virtual image 321 displayed in the second virtual image displayable area 320 is displayed near, or superimposed on, a specific object in the real scene and emphasizes that object or adds information to it; it includes second virtual image elements V2 such as an arrow image that overlays route guidance to a destination on the road surface 5L, or a linear image that emphasizes a white line (a specific object) during a lane departure warning (LDW). Specific objects include, for example, the road surface, obstacles on or near the road surface, traffic signs, road markings, traffic signals, and buildings.
 The first virtual image displayable area 310 consists of the area in which the first virtual image 311 including the first virtual image element V1 is displayed, and a virtual image blank area 311a surrounding the first virtual image 311 in the vertical direction (Y-axis direction) and the horizontal direction (X-axis direction). The first virtual image 311 is rectangular as seen from the viewer 4. The HUD 100 of the present invention executes image correction processing so as to cancel the distortion of the first virtual image 311 caused by changes in the pitching angle 1p of the vehicle 1. This image correction processing will be described later.
 Returning to FIG. 2, the various detection units that detect the pitching angle 1p of the vehicle 1 will now be described. The detection units described below may be provided in the HUD 100, or they may be detachably connected to the vehicle 1 or the HUD 100 by wire or wirelessly. Specifically, a sensor unit, or a portable terminal having various detection units, may be connected to the vehicle 1 or the HUD 100 by wire or wirelessly.
 The vehicle 1 of FIG. 1 is equipped with a front information detection unit 6 that acquires real scene information F about the area ahead of the vehicle 1. The real scene information F includes at least the position information of the specific object in the real scene ahead of the vehicle 1 that is to be emphasized, or to have information added to it, by displaying the second virtual image 321, and is obtained from, for example, one or more cameras, infrared sensors, or the like.
 The vehicle 1 of FIG. 1 is also equipped with a vehicle attitude detection unit 7 that detects the attitude of the vehicle 1. The vehicle attitude detection unit 7 comprises, for example, a three-axis acceleration sensor (not shown); by analyzing the three-axis acceleration detected by this sensor, it estimates the pitching angle 1p (vehicle attitude) of the vehicle 1 with respect to the horizontal plane and outputs vehicle attitude information G, including information on the pitching angle 1p of the vehicle 1, to the HUD 100 (display control unit 70). Instead of the three-axis acceleration sensor, the vehicle attitude detection unit 7 may be configured with height sensors (not shown) arranged near the suspension of the vehicle 1. In that case, the vehicle attitude detection unit 7 estimates the pitching angle 1p of the vehicle 1 by analyzing the height of the vehicle 1 from the ground detected by the height sensors, and outputs vehicle attitude information G including information on the pitching angle 1p to the HUD 100 (display control unit 70). The vehicle attitude detection unit 7 may also be configured with an imaging camera (not shown) that captures images of the outside of the vehicle 1 and an image analysis unit (not shown) that analyzes the captured images; in that case, the vehicle attitude detection unit 7 estimates the pitching angle 1p (vehicle attitude) of the vehicle 1 from temporal changes in the scenery included in the captured images. The method by which the vehicle attitude detection unit 7 obtains the pitching angle 1p of the vehicle 1 is not limited to these methods; the pitching angle 1p may be obtained using known sensors or analysis methods.
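 As an illustration only (not part of the disclosed embodiment), the estimation of the pitching angle 1p from a single three-axis acceleration reading could be sketched as below. The axis convention, the assumption of a gravity-dominated reading, and the function name are assumptions introduced here, not details taken from the document.

    import math

    def estimate_pitch_deg(ax, ay, az):
        # Estimate the vehicle pitching angle (degrees) relative to the horizontal
        # plane from one three-axis acceleration reading (m/s^2), assuming the
        # sensor x axis points toward the front of the vehicle, the z axis points
        # up, and the reading is dominated by gravity (no hard braking/accelerating).
        pitch_rad = math.atan2(ax, math.sqrt(ay * ay + az * az))
        return math.degrees(pitch_rad)

    # Example: a nose-up attitude produces a positive angle.
    print(estimate_pitch_deg(0.5, 0.0, 9.79))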
 FIG. 3 shows an example of the configuration of the HUD 100 shown in FIG. 1.
 The HUD 100 of FIG. 1 includes, for example, a first image display unit 10, a second image display unit 20, a projection unit 30 (reflection unit 31, display combining unit 32, concave mirror 33), an angle adjustment unit (actuator) 40, and a display control unit 70. The HUD 100 is generally housed in the dashboard of the vehicle 1, but all or part of the first image display unit 10, the second image display unit 20, the reflection unit 31, the actuator 40, the display combining unit 32, the concave mirror 33, and the display control unit 70 may be arranged outside the dashboard. The HUD 100 (display control unit 70) is connected to a bus 8 such as an in-vehicle LAN (Local Area Network) mounted on the vehicle 1, and can receive the real scene information F and the vehicle attitude information G described above from the bus 8.
 The first image display unit 10 of FIG. 3 mainly consists of a first projection unit 11, such as a projector using a reflective display device such as a DMD or LCoS, and a first screen 12 that receives the projection light from the first projection unit 11, displays the first image 14, and emits the first display light 210 representing the first image 14 toward the reflection unit 31. Based on image data D input from the display control unit 70 described later, the first image display unit 10 displays the first image 14 on the first screen 12, thereby displaying the first virtual image 311 on the first virtual image displayable area 310 virtually generated in front of the viewer 4.
 FIG. 4 is a front view of the first screen 12 shown in FIG. 3. To make the following description easier to follow, as shown in FIG. 4, the left-right direction of the first screen 12 is defined as the dx axis (the left direction being the positive dx direction), and the up-down direction of the first screen 12 as the dy axis (the downward direction being the positive dy direction). The position in the X-axis direction of the first virtual image 311 seen by the viewer 4 from the driver's seat of the vehicle 1 in FIG. 2 corresponds to the position in the dx-axis direction of the first image 14 displayed on the first screen 12 in FIG. 4. Similarly, the position in the Y-axis direction of the first virtual image 311 seen by the viewer 4 from the driver's seat of the vehicle 1 in FIG. 2 corresponds to the position in the dy-axis direction of the first image 14 displayed on the first screen 12 in FIG. 4. Note that, depending on the arrangement of the optical members in the HUD 100 (the first image display unit 10, the second image display unit 20, the reflection unit 31, the actuator 40, the display combining unit 32, and the concave mirror 33), the relationship between the XY coordinate axes of real space described above and the dx-dy coordinate axes used in the description of the first screen 12 is not limited to the one described above.
 As shown in FIG. 4, the first screen 12 of FIG. 3 has an area 13 that is the maximum area in which the first image 14 can be displayed. Hereinafter, this area 13 is also referred to as the first image displayable area 13, corresponding to the first virtual image displayable area 310 virtually generated by the HUD 100. The first image displayable area 13 displays the first image 14 within itself in accordance with the image data D, and one or more first image elements M1 are arranged in part or all of the first image 14. By adjusting the size and shape of the first image 14 within the first image displayable area 13 based on the image data D, the display control unit 70 described later can adjust the size and shape of the first virtual image 311 within the first virtual image displayable area 310. The first image 14 shown in FIG. 4 is drawn as a rectangle, with straight grid lines parallel to the left-right direction (dx-axis direction) and straight grid lines parallel to the up-down direction (dy-axis direction), so as to correspond to the first virtual image 311 of FIG. 2. In reality, however, because of the curved shape of the projection member (front windshield 2) onto which the first display light 210 is projected, the shape of the first image 14 displayed on the first screen 12 and the shape of the first virtual image 311 seen by the viewer 4 are not identical. That is, taking the shape and grid lines of the first virtual image 311 of FIG. 2 as a reference, the shape of the first image 14 is not rectangular and its grid lines are not straight but bent and not mutually parallel; however, to explain the correspondence between the first virtual image 311 and the first image 14 clearly, the shape and grid lines of the first image 14 are drawn the same as those of the first virtual image 311. In FIG. 4, the width of the lower edge of the first image 14 is defined as the lower-edge width dx1, the width of the upper edge of the first image 14 as the upper-edge width dx2, and the height of the first image 14 as the height dy.
 In normal operation, the first image display unit 10 of this embodiment displays the first image 14 including one or more first image elements M1 based on the image data D; when the pitching angle 1p of the vehicle 1 changes, it executes the image correction processing described later, which stretches or narrows parts of the first image 14, so that the viewer sees a first virtual image 311 close in shape and size to that of normal operation.
 The second image display unit 20 of FIG. 3 has the same configuration as the first image display unit 10 described above, and mainly consists of a second projection unit 21 and a second screen 22 that receives the projection light from the second projection unit 21, displays the second image 24, and emits the second display light 220 representing the second image 24 toward the reflection unit 31. Like the first screen 12 shown in FIG. 4, the second screen 22 of FIG. 3 has a maximum area in which the second image 24 can be displayed; this area is also referred to as the second image displayable area 23 and corresponds to the second virtual image displayable area 320. The second image displayable area 23 displays the second image 24 within itself in accordance with the image data D, and one or more second image elements (not shown) are arranged in part or all of the second image 24.
 The reflection unit 31 (projection unit 30) of FIG. 3 is formed of, for example, a flat plane mirror, is arranged at an angle on the optical path of the second display light 220 traveling from the second image display unit 20 toward the viewer 4, and reflects the second display light 220 emitted from the second image display unit 20 toward the display combining unit 32. The reflection unit 31 is provided with an actuator 40 that rotates it. The reflection unit 31 may have a curved surface instead of a flat one.
 The actuator 40 is, for example, a stepping motor or a DC motor; under the control of the display control unit 70 described later, it rotates the reflection unit 31 based on the vehicle attitude information G detected by the vehicle attitude detection unit 7, thereby adjusting the angle and position of the second virtual image displayable area 320.
 The display combining unit 32 (projection unit 30) of FIG. 3 is configured by, for example, a flat half mirror in which a semi-transmissive reflective layer such as a metal reflective film or a dielectric multilayer film is formed on one surface of a translucent substrate. The display combining unit 32 reflects the second display light 220 from the second image display unit 20, which has been reflected by the reflection unit 31, toward the concave mirror 33, and transmits the first display light 210 from the first image display unit 10 toward the concave mirror 33. The transmittance of the display combining unit 32 is, for example, 50%, but the luminance of the first virtual image 311 and the second virtual image 321 may be adjusted by setting it appropriately.
 The concave mirror 33 (projection unit 30) of FIG. 3 is arranged at an angle on the optical paths of the first display light 210 and the second display light 220 traveling from the first image display unit 10 and the second image display unit 20 toward the viewer 4, and reflects the first display light 210 and the second display light 220 emitted from those units toward the front windshield 2. The optical path length of the first display light 210 from the first screen 12 of the first image display unit 10 to the concave mirror 33 is made shorter than the optical path length of the second display light 220 from the second screen 22 of the second image display unit 20 to the concave mirror 33; as a result, the first virtual image 311 generated by the first image display unit 10 is formed at a position closer to the eyebox than the second virtual image 321 generated by the second image display unit 20. The concave mirror 33 typically has the functions of magnifying, in cooperation with the front windshield 2, the first display light 210 and the second display light 220 generated by the first image display unit 10 and the second image display unit 20; of correcting the distortion of the first virtual image 311 and the second virtual image 321 caused by the curved surface of the front windshield 2 so that distortion-free virtual images are seen; and of forming the first virtual image 311 and the second virtual image 321 at positions a predetermined distance away from the viewer 4.
 FIG. 5 shows an example of the schematic configuration of the display control unit 70 of FIG. 3. As shown in FIG. 5, the display control unit 70 includes, for example, a processing unit 71, a storage unit 72, and an interface 73. The processing unit 71 is configured by, for example, one or more CPUs and RAM; the storage unit 72 is configured by, for example, ROM; and the interface 73 is configured by an input/output communication interface connected to the bus 8. For example, the interface 73 can acquire vehicle information, the real scene information F described above, the vehicle attitude information G, and the like via the bus 8; the storage unit 72 can store data for generating the image data D based on the input vehicle information, real scene information F, and vehicle attitude information G, as well as data for generating the drive data T based on the input vehicle attitude information G; and the processing unit 71 can generate the image data D and the drive data T by reading the data from the storage unit 72 and executing predetermined operations. Note that the interface 73 can acquire, for example via the bus 8, the vehicle attitude information G including information on the pitching angle 1p of the vehicle 1 from the vehicle attitude detection unit 7, and thus also functions as vehicle attitude information acquisition means. The display control unit 70 may be located inside the HUD 100, or part or all of its functions may be provided on the vehicle 1 side outside the HUD 100. An operation example of the HUD 100 of this embodiment is described below.
 FIG. 6 is a flowchart showing an example of the operation of the HUD 100 of this embodiment. The HUD 100 starts the processing described below, for example, when the vehicle 1 is started, when power is supplied to the electronic devices of the vehicle 1, or when a predetermined time has elapsed after the start of the vehicle 1 or after power is supplied to its electronic devices.
 In step S1, the processing unit 71 acquires the vehicle attitude information G from the interface 73.
 In step S2, the processing unit 71 determines whether the vehicle attitude information G acquired in step S1 is equal to or greater than a predetermined threshold TH; specifically, it determines whether the pitching angle 1p of the vehicle 1 is equal to or greater than the predetermined threshold TH. If the processing unit 71 determines, based on the vehicle attitude information G, that the pitching angle 1p of the vehicle 1 is large (YES in step S2), it proceeds to step S3; if it determines that the pitching angle 1p of the vehicle 1 is small (NO in step S2), it proceeds to step S4.
 In step S3, the processing unit 71 applies keystone correction to the first virtual image 311 so as to suppress or cancel the trapezoidal distortion of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1. In other words, even after the pitching angle 1p of the vehicle 1 has changed, the processing unit 71 corrects the first virtual image 311 so that it is seen with the shape and size it has during normal driving, when the pitching angle 1p of the vehicle 1 is approximately zero. To this end, the processing unit 71 corrects the lower-edge width dx1, the upper-edge width dx2, and the height dy of the first image 14 displayed on the first screen 12 in accordance with the pitching angle 1p of the vehicle 1.
 By correcting the lower-edge width dx1 and/or the upper-edge width dx2 of the first image 14, the processing unit 71 reduces the difference between the upper and lower sides of the trapezoidal distortion of the first virtual image 311 and brings the first virtual image 311 closer to a rectangle. FIG. 7A shows the first image 14 displayed on the first screen 12 during normal driving, when the pitching angle 1p of the vehicle 1 with respect to the road surface 5L is approximately zero. Hereinafter, this first image 14 is also referred to as the reference image 14r, its lower-edge width dx1 as the reference lower-edge width dx1r, its upper-edge width dx2 as the reference upper-edge width dx2r, and its height dy as the reference height dyr. FIG. 7B shows the first image 14u when the vehicle 1 is pitched up with respect to the road surface 5L. When the vehicle 1 is in the pitch-up state, the processing unit 71 makes the ratio of the upper-edge width dx2u to the lower-edge width dx1u (dx2u/dx1u) smaller than the ratio of the reference upper-edge width dx2r to the reference lower-edge width dx1r during normal driving (dx2r/dx1r). This reduces or eliminates the difference between the lower-edge width DX1 and the upper-edge width DX2 of the virtual image (DX2 > DX1) caused by the pitch-up. FIG. 7C shows the first image 14d when the vehicle 1 is pitched down with respect to the road surface 5L. When the vehicle 1 is in the pitch-down state, the processing unit 71 makes the ratio of the upper-edge width dx2d to the lower-edge width dx1d (dx2d/dx1d) larger than the ratio of the reference upper-edge width dx2r to the reference lower-edge width dx1r during normal driving (dx2r/dx1r). This reduces or eliminates the difference between the lower-edge width DX1 and the upper-edge width DX2 (DX1 > DX2) caused by the pitch-down. Furthermore, as the absolute value of the pitching angle 1p of the vehicle 1 with respect to the road surface 5L increases, the processing unit 71 sets the height dy (dyu, dyd) longer than the reference height dyr for normal driving. This reduces or eliminates the shrinkage of the height DY of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1.
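 A minimal sketch of the width and height corrections described above is given below. The linear gain factors k_w and k_h are hypothetical placeholders: the document specifies only the direction of each adjustment (which ratio becomes larger or smaller, and that dy grows with |1p|), not how the correction amount is computed from the pitching angle.

    def keystone_correct_widths(pitch_deg, dx1_r, dx2_r, dy_r, k_w=0.01, k_h=0.005):
        # pitch_deg : pitching angle 1p in degrees (positive = pitch up, an assumption)
        # dx1_r, dx2_r, dy_r : reference lower-edge width, upper-edge width, height
        # k_w, k_h  : hypothetical gains, not taken from the document
        dx1, dx2 = dx1_r, dx2_r
        if pitch_deg > 0:
            # Pitch up: make dx2/dx1 smaller than dx2_r/dx1_r to cancel DX2 > DX1.
            dx2 = dx2_r * (1.0 - k_w * pitch_deg)
        elif pitch_deg < 0:
            # Pitch down: make dx2/dx1 larger than dx2_r/dx1_r to cancel DX1 > DX2.
            dx2 = dx2_r * (1.0 + k_w * abs(pitch_deg))
        # The height is always set longer than the reference as |1p| grows,
        # to offset the apparent vertical shrink of the first virtual image.
        dy = dy_r * (1.0 + k_h * abs(pitch_deg))
        return dx1, dx2, dy

    # Example: a 3-degree pitch-up against reference widths 100/100 and height 50.
    print(keystone_correct_widths(3.0, 100.0, 100.0, 50.0))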
 When correcting the lower-edge width dx1, the upper-edge width dx2, and the height dy of the first image 14 displayed on the first screen 12 in accordance with the pitching angle 1p of the vehicle 1, the processing unit 71 need not magnify the entire first image 14 at a uniform rate; it may instead set a magnification for each region of the first image 14. That is, when applying keystone correction to the first image 14, the processing unit 71 may magnify, at different magnifications, the horizontal spacing Gx1 and vertical spacing Gy1 of the grid of the first image 14 corresponding to the vertically lower part of the first virtual image 311 (region G1), and the horizontal spacing Gx2 and vertical spacing Gy2 of the grid corresponding to the vertically upper part of the first virtual image 311 (region G2). Specifically, when the vehicle 1 is in the pitch-up state, the processing unit 71 magnifies the lower grid G1u of the first image 14 at a larger magnification than the upper grid G2u, as shown in FIG. 7B; therefore, in the pitch-up state, the horizontal spacing Gx1u and vertical spacing Gy1u of the lower grid G1u of the first image 14 become larger than the horizontal spacing Gx2u and vertical spacing Gy2u of the upper grid G2u. Conversely, when the vehicle 1 is in the pitch-down state, the processing unit 71 magnifies the lower grid G1d of the first image 14 at a smaller magnification than the upper grid G2d, as shown in FIG. 7C; therefore, in the pitch-down state, the horizontal spacing Gx1d and vertical spacing Gy1d of the lower grid G1d of the first image 14 become smaller than the horizontal spacing Gx2d and vertical spacing Gy2d of the upper grid G2d.
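 One simple way to realize such region-dependent magnification is to give each row of the first image its own scale factor, blended between a bottom value and a top value. The short sketch below, with invented magnification values, only illustrates this idea; it is not the interpolation scheme of the embodiment, which is not specified.

    def row_magnifications(n_rows, bottom_mag, top_mag):
        # Return one magnification per row of the first image 14, ordered from the
        # bottom row to the top row, blended linearly between bottom_mag and top_mag.
        # Pitch-up:   bottom_mag > top_mag (lower grid G1u stretched more than G2u).
        # Pitch-down: bottom_mag < top_mag (lower grid G1d stretched less than G2d).
        if n_rows == 1:
            return [bottom_mag]
        step = (top_mag - bottom_mag) / (n_rows - 1)
        return [bottom_mag + i * step for i in range(n_rows)]

    # Hypothetical pitch-up correction over a 10-row grid.
    print(row_magnifications(10, bottom_mag=1.20, top_mag=1.05))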
 In step S4, the processing unit 71 corrects the height DY of the first virtual image 311 so as to suppress or cancel the shrinkage of the height DY of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1. In other words, even after the pitching angle 1p of the vehicle 1 has changed, the processing unit 71 corrects the first virtual image 311 so that it is seen with the height DY it has during normal driving, when the pitching angle 1p of the vehicle 1 is approximately zero. Specifically, the processing unit 71 sets the height dy (dyu, dyd) longer than the reference height dyr for normal driving as the absolute value of the pitching angle 1p of the vehicle 1 with respect to the road surface 5L increases. The above is the image correction processing executed by the display control unit 70 of this embodiment.
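 The branch between steps S3 and S4 can be summarized as follows. Whether step S2 compares the signed pitching angle or its magnitude with TH is not spelled out; the magnitude is used here as an assumption, and the helper names are illustrative.

    def correction_mode(pitch_deg, threshold_deg):
        # Decide which correction path of FIG. 6 applies to a given pitching
        # angle 1p (step S1 is assumed to have already supplied pitch_deg).
        if abs(pitch_deg) >= threshold_deg:
            # S3: full keystone correction of dx1, dx2 and dy
            #     (e.g. via keystone_correct_widths from the earlier sketch).
            return "keystone_correction"
        # S4: only the height dy of the first image is enlarged;
        #     dx1 and dx2 keep their reference values dx1r and dx2r.
        return "height_only_correction"

    print(correction_mode(3.0, threshold_deg=2.0))   # -> keystone_correction
    print(correction_mode(0.5, threshold_deg=2.0))   # -> height_only_correction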
 Meanwhile, the angle of the second virtual image displayable area 320 with respect to the road surface 5L (the real scene) changes with the pitching angle 1p of the vehicle 1. The processing unit 71 of this embodiment reduces or cancels this unintended change in angle of the second virtual image displayable area 320 relative to the road surface 5L by adjusting the angle of the second virtual image displayable area 320 in accordance with the acquired vehicle attitude information G. The method is described below.
 The display control unit 70 determines drive data T, including the drive amount of the actuator 40, corresponding to the acquired vehicle attitude information G. Specifically, the display control unit 70 reads table data stored in advance in the storage unit 72 and determines the drive data T corresponding to the acquired vehicle attitude information G. Alternatively, the display control unit 70 may compute the drive data T from the vehicle attitude information G using a preset calculation formula.
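 The mapping from the vehicle attitude information G to the drive data T could, for example, be realized as a small interpolated lookup table, as sketched below. The pitch and step values are invented for illustration; the actual table contents and calculation formula stored in the storage unit 72 are not disclosed.

    import bisect

    # Hypothetical table: pitching angle 1p (degrees) -> actuator drive amount T
    # (e.g. stepping-motor steps).
    PITCH_DEG   = [-4.0, -2.0, 0.0, 2.0, 4.0]
    DRIVE_STEPS = [-200.0, -100.0, 0.0, 100.0, 200.0]

    def drive_data_from_attitude(pitch_deg):
        # Return the drive amount T for the actuator 40 by linear interpolation
        # of the pre-stored table (one possible form of the lookup described above).
        if pitch_deg <= PITCH_DEG[0]:
            return DRIVE_STEPS[0]
        if pitch_deg >= PITCH_DEG[-1]:
            return DRIVE_STEPS[-1]
        i = bisect.bisect_right(PITCH_DEG, pitch_deg)
        x0, x1 = PITCH_DEG[i - 1], PITCH_DEG[i]
        y0, y1 = DRIVE_STEPS[i - 1], DRIVE_STEPS[i]
        return y0 + (y1 - y0) * (pitch_deg - x0) / (x1 - x0)

    print(drive_data_from_attitude(1.0))   # -> 50.0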
 Next, the display control unit 70 drives the actuator 40 based on the determined drive data T: it drives the actuator 40 to rotate the reflection unit 31 located on the optical path of the second display light 220 emitted by the second image display unit 20. Since the angle of the first virtual image displayable area 310 is not adjusted, the relative angle 330 of the second virtual image displayable area 320 with respect to the first virtual image displayable area 310 changes. Specifically, for example, the display control unit 70 may control the actuator 40 to rotate the reflection unit 31 so that the second virtual image displayable area 320 remains parallel to the road surface 5L even when the vehicle attitude of the vehicle 1 changes.
 FIGS. 8A, 8B, and 8C show how the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 generated by the HUD 100 of this embodiment is changed, that is, how the angle 330 changes based on the pitching angle 1p of the vehicle 1.
 FIG. 8A shows the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 when the pitching angle 1p of the vehicle 1 is such that the vehicle 1 is parallel to the road surface 5L. The second virtual image displayable area 320 is adjusted, for example, to be substantially parallel to the road surface 5L, and the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 becomes an angle 330r of approximately 90 degrees. Hereinafter, the angle 330r formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 shown in FIG. 8A is also referred to as the reference angle 330r.
 FIG. 8B shows the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 when the vehicle 1 is in the pitch-up state. The second virtual image displayable area 320 is adjusted, for example, to be substantially parallel to the road surface 5L as shown in FIG. 8B, and the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 becomes an angle 330u larger than the reference angle 330r.
 FIG. 8C shows the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 when the vehicle 1 is in the pitch-down state. The second virtual image displayable area 320 is adjusted, for example, to be substantially parallel to the road surface 5L as shown in FIG. 8C, and the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 becomes smaller than the reference angle 330r.
 As a result, since the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 changes based on the vehicle attitude of the vehicle 1, the viewer 4 can recognize which of the visible virtual images is information displayed in the first virtual image displayable area 310 and which in the second virtual image displayable area 320, and can therefore perceive the first virtual image 311 and the second virtual image 321, displayed in the first virtual image displayable area 310 and the second virtual image displayable area 320 respectively, more three-dimensionally.
 Furthermore, in the HUD 100 of this embodiment, the second virtual image displayable area 320 is generated tilted more toward the horizontal than the first virtual image displayable area 310, and its angle with respect to the real scene is adjusted by driving the actuator 40. Adjusting the angle, relative to the real scene, of the virtual image display surface tilted toward the horizontal (the second virtual image displayable area 320) gives the viewer 4 a stronger impression, for a given change in angle of the display surface, than adjusting the angle of the virtual image display surface tilted toward the vertical (the first virtual image displayable area 310). Therefore, by adjusting the angle of the horizontally tilted virtual image display surface (the second virtual image displayable area 320), the second virtual image 321 displayed in the second virtual image displayable area 320 becomes easier to distinguish from the first virtual image 311 displayed in the first virtual image displayable area 310, and the first virtual image 311 and the second virtual image 321 can be perceived more three-dimensionally.
 As described above, the HUD 100 of this embodiment is mounted on the vehicle 1 and displays the first virtual image 311 in the first virtual image displayable area 310, superimposed on the real scene outside the vehicle 1, by projecting the first display light 210 onto the front windshield 2. It comprises: the first image display unit 10, which has the first image displayable area 13 corresponding to the first virtual image displayable area 310 and emits the first display light 210 from the first image displayable area 13; the projection unit 30, which directs the first display light 210 emitted by the first image display unit 10 toward the front windshield 2; the interface 73, which acquires the vehicle attitude information G including information on the vehicle attitude of the vehicle 1; and the display control unit 70, which applies keystone correction to the first image 14 so as to suppress trapezoidal distortion arising in the first virtual image 311, based on the vehicle attitude information G acquired from the interface 73. In this way, by keystone-correcting the first image 14 based on the vehicle attitude information G, the display control unit 70 can suppress or cancel the trapezoidal distortion of the first virtual image 311 caused by the change in the pitching angle 1p of the vehicle 1.
 The display control unit 70 may also adjust the size of the first image 14 so that the aspect ratio of the first virtual image 311 after keystone correction is the same as the aspect ratio of the first virtual image 311 before keystone correction. By keeping the aspect ratio constant before and after keystone correction, the display of the first virtual image 311 can be continued without the correction making the display look unnatural.
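 As a rough illustration of keeping the aspect ratio constant, the height could be rescaled after the width correction so that the width-to-height ratio of the virtual image matches its pre-correction value. The use of the mean of the upper and lower edge widths, and the helper name, are assumptions; the document does not state how the aspect ratio of the corrected image is measured.

    def height_for_constant_aspect(dx1, dx2, ref_aspect):
        # Given corrected edge widths dx1 and dx2, return the height that keeps
        # (mean width / height) equal to the reference aspect ratio ref_aspect
        # of the first virtual image before the keystone correction.
        mean_width = 0.5 * (dx1 + dx2)
        return mean_width / ref_aspect

    # Example: reference aspect 2.0 (width twice the height), corrected widths 100 and 94.
    print(height_for_constant_aspect(100.0, 94.0, ref_aspect=2.0))   # -> 48.5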
 Furthermore, based on the vehicle attitude information G, when the change in the vehicle attitude of the vehicle 1 is equal to or less than the predetermined threshold TH, the display control unit 70 may enlarge only the height DY of the first virtual image 311 as seen by the viewer 4. This can partially reduce the processing load of the display control unit 70.
 Modifications of the embodiment of the present invention will now be described. In the above description, the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 was changed by using the actuator 40 to rotate the reflection unit 31 located on the optical path of the second display light 220 leading to the display combining unit 32, which directs the first display light 210 and the second display light 220 in the same direction; however, the display combining unit 32 may instead be rotated by the actuator 40. In this case as well, as with rotating the reflection unit 31 described above, only the angle of the second virtual image displayable area 320 with respect to the real scene can be adjusted, without adjusting the angle of the first virtual image displayable area 310.
 The HUD 100 of the present invention may also change the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 by rotating the image display surface (the first screen 12) of the first image display unit 10 with the actuator 40.
 Note that the actuator 40 need not use the center of the reflection unit 31, the display combining unit 32, or the image display surface (first screen 12) of the first image display unit 10 as the rotation axis AX; a predetermined position (including an edge) of each optical member may serve as the rotation axis AX, and the rotation axis AX may also be provided at a position separated from the respective optical member.
 FIG. 9 shows an example of how the real scene and the first virtual image 311 and second virtual image 321 displayed by a modification of the HUD 100 shown in FIG. 2 are seen when facing the front of the vehicle 1 from the driver's seat. As shown in FIG. 9, the HUD 100 of the present invention may be configured so that the first virtual image displayable area 310 generated by the first image display unit 10 and the second virtual image displayable area 320 generated by the second image display unit 20 are seen separated from each other. Specifically, for example, the HUD 100 in this modification may be configured by separating the region on the display combining unit 32 on which the first display light 210 from the first image display unit 10 is incident from the region on the display combining unit 32 on which the second display light 220 from the second image display unit 20 is incident.
 In the above embodiment, the first image display unit 10 that generates the first virtual image displayable area 310 and the second image display unit 20 that generates the second virtual image displayable area 320 were provided separately, but a single image display unit may be used. The HUD 100 in this modification may project the projection light from a single projection unit (not shown) onto a plurality of screens (image display surfaces, not shown) and rotate one of the screens with an actuator, thereby adjusting the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320.
 In the above embodiment, the angle 330 formed by the first virtual image displayable area 310 and the second virtual image displayable area 320 was adjusted by adjusting the angle of the second virtual image displayable area 320 with respect to the real scene; however, the angle 330 may instead be adjusted by adjusting the angles of both the first virtual image displayable area 310 and the second virtual image displayable area 320 with respect to the real scene, with different adjustment amounts for each.
 The first image display unit 10 may also be implemented with, for example, a transmissive display panel such as a liquid crystal display element, a self-luminous display panel such as an organic EL element, or a scanning display device that scans laser light.
1: vehicle, 1p: pitching angle, 2: front windshield (projection member), 10: first image display unit, 20: second image display unit, 31: reflection unit (projection unit), 32: display combining unit (projection unit), 33: concave mirror (projection unit), 40: actuator, 70: display control unit, 71: processing unit, 72: storage unit, 73: interface (vehicle attitude information acquisition means), 210: first display light, 220: second display light, 310: first virtual image displayable area, 311: first virtual image, 320: second virtual image displayable area, 321: second virtual image, TH: threshold

Claims (5)

  1.  A head-up display that is mounted on a vehicle and displays a first virtual image (311) within a first virtual image displayable area (310) superimposed on the real scene outside the vehicle by projecting first display light (210) onto a projection member that transmits and reflects light, the head-up display comprising:
     a first image display unit (10) having a first image displayable area (13) corresponding to the first virtual image displayable area and displaying a first image (14) within the first image displayable area;
     a projection unit that directs the first display light emitted by the first image display unit toward the projection member;
     vehicle attitude information acquisition means (73) for acquiring vehicle attitude information (G) including information on the vehicle attitude of the vehicle; and
     a display control unit (70) that applies keystone correction to the first image so as to suppress trapezoidal distortion arising in the first virtual image, based on the vehicle attitude information acquired by the vehicle attitude information acquisition means.
  2.  The head-up display according to claim 1, wherein the display control unit adjusts the size of the first virtual image so that an aspect ratio of the first virtual image after the keystone correction is the same as an aspect ratio of the first virtual image before the keystone correction.
  3.  The head-up display according to claim 1 or 2, wherein, when a change in the vehicle attitude is equal to or less than a predetermined threshold based on the vehicle attitude information, the display control unit enlarges only a vertical width of the first virtual image as seen from the viewer.
  4.  The head-up display according to any one of claims 1 to 3, wherein, when the display control unit determines, based on the vehicle attitude information, that the vehicle is in a pitch-up state in which the front of the vehicle faces upward, the display control unit enlarges a region (G1) of the first image corresponding to a vertically lower part of the first virtual image at a larger magnification than a region (G2) corresponding to a vertically upper part of the first virtual image.
  5.  The head-up display according to any one of claims 1 to 4, further comprising a second image display unit (20) that has a second image displayable area (23) and emits second display light (220) from the second image displayable area, wherein the head-up display displays, by projecting the second display light onto the projection member, a second virtual image (321) in a second virtual image displayable area (320) that corresponds to the second image displayable area and is arranged tilted toward the horizontal direction relative to the first virtual image displayable area, and further comprising an angle adjustment unit (40) that adjusts an angle of the second virtual image displayable area with respect to the first virtual image displayable area.
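 As an illustration of the pitch-dependent correction described in claims 1 to 4 (a rough sketch only; the threshold value, the gain constant, the linear pitch-to-distortion model, and the function name are assumptions introduced here, not the claimed implementation), the per-image scale factors could be derived as follows:

# Minimal sketch of pitch-dependent keystone correction.
def keystone_scales(pitch_deg, threshold_deg=0.5, gain=0.02):
    """Return (top_scale, bottom_scale, vertical_scale) applied to the first image.

    - When the attitude change stays at or below the threshold, only the
      vertical width is enlarged (cf. claim 3).
    - In a pitch-up state, the region corresponding to the lower part of the
      virtual image (G1) is magnified more than the upper part (G2) (cf. claim 4).
    - vertical_scale re-enlarges the image so that the corrected virtual image
      keeps its pre-correction aspect ratio (cf. claim 2), using a crude
      average-width heuristic.
    """
    if abs(pitch_deg) <= threshold_deg:
        return 1.0, 1.0, 1.0 + gain * abs(pitch_deg)
    if pitch_deg > 0.0:                       # pitch-up: the front of the vehicle faces upward
        top_scale, bottom_scale = 1.0, 1.0 + gain * pitch_deg
    else:                                     # pitch-down: mirrored handling (assumption)
        top_scale, bottom_scale = 1.0 + gain * abs(pitch_deg), 1.0
    vertical_scale = (top_scale + bottom_scale) / 2.0
    return top_scale, bottom_scale, vertical_scale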
PCT/JP2017/043564 2016-12-09 2017-12-05 Head-up display WO2018105585A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016239554 2016-12-09
JP2016-239554 2016-12-09

Publications (1)

Publication Number Publication Date
WO2018105585A1 true WO2018105585A1 (en) 2018-06-14

Family

ID=62492303

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/043564 WO2018105585A1 (en) 2016-12-09 2017-12-05 Head-up display

Country Status (1)

Country Link
WO (1) WO2018105585A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010156608A (en) * 2008-12-26 2010-07-15 Toshiba Corp Automotive display system and display method
JP2014026244A (en) * 2012-07-30 2014-02-06 Jvc Kenwood Corp Display device
WO2016181749A1 (en) * 2015-05-13 2016-11-17 日本精機株式会社 Head-up display


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112313737A (en) * 2018-06-22 2021-02-02 三菱电机株式会社 Image display device
WO2020009218A1 (en) * 2018-07-05 2020-01-09 日本精機株式会社 Head-up display device
JP7375753B2 (en) 2018-07-05 2023-11-08 日本精機株式会社 heads up display device

Similar Documents

Publication Publication Date Title
JP6458998B2 (en) Head-up display
JP6731644B2 (en) Display position correction device, display device including display position correction device, and moving body including display device
CN110573369B (en) Head-up display device and display control method thereof
WO2018088362A1 (en) Head-up display
WO2016190135A1 (en) Vehicular display system
JP6911868B2 (en) Head-up display device
WO2015041203A1 (en) Head up display device
JP2010070066A (en) Head-up display
WO2017094427A1 (en) Head-up display
US20220208148A1 (en) Image display system, image display method, movable object including the image display system, and non-transitory computer-readable medium
JP2009150947A (en) Head-up display device for vehicle
WO2017090464A1 (en) Head-up display
JP2018077400A (en) Head-up display
JP2018120135A (en) Head-up display
JP7396404B2 (en) Display control device and display control program
WO2018003650A1 (en) Head-up display
WO2018105585A1 (en) Head-up display
JP6845988B2 (en) Head-up display
JP6481445B2 (en) Head-up display
JP2016185768A (en) Vehicle display system
JP2019032362A (en) Head-up display device and navigation device
JP6943079B2 (en) Image processing unit and head-up display device equipped with it
JP7223283B2 (en) IMAGE PROCESSING UNIT AND HEAD-UP DISPLAY DEVICE INCLUDING THE SAME
JPWO2018199244A1 (en) Display system
JP7253719B2 (en) Vehicle equipped with a display device

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 17877709
    Country of ref document: EP
    Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 17877709
    Country of ref document: EP
    Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: JP