WO2022024964A1 - Head-up display device - Google Patents

Head-up display device

Info

Publication number
WO2022024964A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
display area
viewer
upright
Prior art date
Application number
PCT/JP2021/027463
Other languages
French (fr)
Japanese (ja)
Inventor
Yuki Masuya
Original Assignee
Nippon Seiki Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nippon Seiki Co., Ltd.
Priority to CN202180058974.3A (publication CN116157290A)
Priority to JP2022540275A (publication JPWO2022024964A1)
Priority to DE112021004005.7T (publication DE112021004005T5)
Publication of WO2022024964A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement of adaptations of instruments
    • B60K35/23
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/011Head-up displays characterised by optical features comprising device for correcting geometrical aberrations, distortion

Definitions

  • The present invention relates to, for example, a head-up display (HUD) device that projects image display light onto a projected member, such as a vehicle windshield or a combiner, to display a virtual image in front of the driver.
  • HUD: head-up display
  • Cognition is improved by presenting an information image that carries depth information (a depth image, such as an arrow or a map).
  • It is also highly convenient for an information image without depth information (broadly, one that does not emphasize depth), such as text or numbers, to stand upright when visually recognized (an upright image).
  • When displaying characters, numbers, and the like, the HUD device needs to present them as correctly upright content so that a person can read the information correctly.
  • Such an image (virtual image) is called an upright image (upright virtual image). Since an upright image is usually displayed facing the viewer, it may also be called a facing image.
  • The present inventor examined the oblique-image-plane HUD device and recognized the following new problems. If an image (virtual image) can be displayed on an inclined surface, an image with a sense of depth can be displayed, as described above; and if the surface is raised to some extent relative to the ground (or a surface corresponding to the ground), it can also display upright content, composed of characters, numbers, and the like, for which a sense of depth is not emphasized.
  • If the inclined surface is steeply inclined toward the ground (or its corresponding surface), it is well suited to depth display, but the visibility of an upright image deteriorates. As a result, the viewer may have difficulty recognizing the image, recognition may take a long time even when it is possible, or the viewer may feel subjective discomfort or unease.
  • Here, "the upright image can be recognized correctly" means, in other words, that the upright image can be visually recognized normally; if not, the upright image cannot be discriminated, identified, or visually recognized, or it is difficult to see.
  • Human visibility (visual sensitivity) depends on the distance to the image: sensitivity is high when the image is displayed on the near side and relatively low when it is displayed on the far side.
  • Accordingly, the acceptable inclination (inclination angle) also changes with distance, so a unified standard (threshold value) cannot be obtained, which is not easy to use.
  • One object of the present invention is to enable an oblique-image-plane HUD device to display content intended to be viewed upright (an upright image) while suppressing a decrease in visibility.
  • The head-up display device has: an image display unit that displays an image; an optical system that, by projecting the light of the image displayed by the image display unit toward a projected member, allows a viewer to visually recognize a virtual image of the image within a virtual display area in real space in front of the viewer; and a control unit that controls display of the image on the image display unit.
  • The direction toward the front of the viewer in real space is defined as the forward direction.
  • The direction perpendicular to the forward direction and along the line segment connecting the viewer's left and right eyes is defined as the left-right direction.
  • The direction along a line segment orthogonal to both the forward direction and the left-right direction is defined as the vertical direction (height direction).
  • The control unit performs control to display an upright image (an image intended to be visually recognized upright) in the display area, which is a flat or curved inclined surface rising from the near, lower side toward the far, upper side with respect to the ground (or a surface corresponding to the ground) in real space.
  • The upright image is displayed in a display area whose outline, seen from the viewer, is a quadrangle.
  • The difference in convergence angle between the upper end and the lower end of the display area is set to be less than a predetermined threshold value determined based on at least one of: the visibility of the image, the time required for visual recognition, and psychological factors such as discomfort or unease.
  • With this configuration, the question of how far the display area may be inclined while the upright image remains correctly visible can be judged using a new index, the convergence angle difference (or the inclination distortion angle caused by it; see the angle θd in FIG. 6C), as a threshold-based standard. This makes determining and setting the degree of inclination more efficient (or easier).
  • Here, the display area has a quadrangular outline; the side of the display area near the ground (or its corresponding surface, such as a road surface) is the lower end (lower side), and the opposite side (away from the ground) is the upper end (upper side).
  • A pair of mutually corresponding points (which may be placed at arbitrary positions) is provided at the lower end (lower side) and the upper end (upper side), set for example at the right end point or the left end point of each end (each side).
  • This pair of points is defined as a first point and a second point. The convergence angle (the angle formed by the visual axes indicating the line-of-sight direction of each eye) when the first point is viewed from the left and right eyes is defined as the first convergence angle, the convergence angle when the second point is viewed is defined as the second convergence angle, and their difference (the second convergence angle subtracted from the first convergence angle) is defined as the "convergence angle difference". In other words, this convergence angle difference can be said to be the "convergence angle difference between the upper end and the lower end of the display area".
  • When the display area is erected on the ground (or its corresponding surface) at substantially a right angle, the upper end (upper side) and the lower end (lower side) of the quadrangle overlap in a plan view from above, and the first point and the second point overlap.
  • If the length (height) of the vertical side of the quadrangle is small, the variation in the distance from the left and right eyes to the first and second points due to their height positions can be ignored. The first and second convergence angles are then substantially the same, and the convergence angle difference is substantially zero.
  • As the display area is inclined, the convergence angle difference becomes α (α is a value larger than 0). As the inclination increases further, the first convergence angle increases, its difference from the second convergence angle grows, and the convergence angle difference becomes β (β is a value satisfying β > α).
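As an illustration of how this index behaves, the sketch below computes the convergence angle at the lower and upper points directly from their geometry. All numeric dimensions (interpupillary distance, viewing distance, display height, eye-relative offsets) are hypothetical values chosen for the example, not taken from the patent:

```python
import math

def convergence_angle_deg(p, ipd=0.062):
    """Angle subtended at point p by the two eyes, placed at (+/- ipd/2, 0, 0)."""
    eyes = [(-ipd / 2, 0.0, 0.0), (ipd / 2, 0.0, 0.0)]
    va, vb = [tuple(e - c for e, c in zip(eye, p)) for eye in eyes]
    dot = sum(a * b for a, b in zip(va, vb))
    norm = math.dist(eyes[0], p) * math.dist(eyes[1], p)
    return math.degrees(math.acos(dot / norm))

def convergence_difference_deg(distance, height, tilt_deg, ipd=0.062):
    """Convergence angle at the lower end minus that at the upper end of a
    display area of slant height `height` (m), whose lower end sits `distance`
    ahead of the eyes, tilted `tilt_deg` from the ground (90 = upright)."""
    t = math.radians(tilt_deg)
    lower = (0.0, -0.4, distance)  # lower end slightly below eye level
    upper = (0.0, -0.4 + height * math.sin(t), distance + height * math.cos(t))
    return convergence_angle_deg(lower, ipd) - convergence_angle_deg(upper, ipd)
```

With an upright area (90°) the difference is substantially zero; as the area leans back toward the ground, the upper end recedes and the difference grows, matching the α < β behavior described above.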
  • The "convergence angle difference between the upper end and the lower end of the display area" is thus an index of the degree of inclination of the display area with respect to the ground (or its corresponding surface). Moreover, since the convergence angle varies with the distance from the viewer's eyes, the index also carries distance information. The convergence angle difference is therefore a single integrated index (threshold value) that combines distance with the degree of inclination of the display area (or virtual image display surface) with respect to the ground (or its corresponding surface). Unlike the conventional method, it is not necessary to specify the inclination under preconditions such as "an inclination of so many degrees at a distance of so many meters".
  • The visibility of an upright image varies from person to person and cannot be stated unequivocally, but based on at least one of the visibility of the displayed image, the time required for viewing, and psychological factors such as discomfort or unease, it is possible to objectively determine whether a normal upright image can be visually recognized. By using the above index in that determination, a usable threshold value can be obtained.
  • For example, the convergence angle difference near the limit at which the viewer can fuse (synthesize) the left-eye and right-eye images in the brain and recognize the erect image is set as the threshold value. If, when designing the HUD device, each part is set so that the convergence angle difference is less than this threshold, the viewer can recognize the upright image even when it is displayed on the inclined surface. In other words, the visibility of the upright image is ensured at or above a predetermined level.
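A design check of this kind can be sketched as a simple search: using the small-angle approximation (convergence angle ≈ IPD / distance), scan candidate tilt angles and return the shallowest one whose convergence angle difference stays below the threshold. The function name, step size, and all numeric values are illustrative assumptions, not from the patent:

```python
import math

def min_upright_tilt_deg(distance, height, ipd=0.062, threshold_deg=0.2):
    """Smallest tilt from the ground (degrees) at which the convergence angle
    difference of the display area drops below `threshold_deg`.
    Uses the small-angle approximation: convergence angle ~ ipd / distance."""
    lower_angle = math.degrees(ipd / distance)  # lower end fixed at `distance`
    tilt = 0.0
    while tilt <= 90.0:
        t = math.radians(tilt)
        # upper end recedes less as the area becomes more upright
        upper_dist = math.hypot(height * math.sin(t), distance + height * math.cos(t))
        upper_angle = math.degrees(ipd / upper_dist)
        if lower_angle - upper_angle < threshold_deg:
            return tilt
        tilt += 0.5
    return 90.0
```

For a close, tall display area the search lands partway between flat and upright; for a distant, short one even a flat area already satisfies the threshold, so the search returns 0.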
  • This new index (the convergence angle difference, or the tilt distortion angle it causes) can also be used for calibration of the HUD device, initialization of the HUD device, simulation of the functions of the HUD device, and the like, improving the efficiency of each of those processes.
  • The display area may be divided into a first region, in which both a virtual image of a depth image (an image intended to be viewed as inclined) and a virtual image of the upright image can be displayed, and a second region for displaying a virtual image of the depth image. The convergence angle difference between the upper end and the lower end of the display area in the first region is set to be less than the predetermined threshold value, and that in the second region is set to be equal to or greater than the predetermined threshold value.
  • In other words, the display area is divided into a first region capable of displaying both depth images and upright images and a second region suited to displaying depth images, and the convergence angle difference between the upper and lower ends of the second region is set equal to or greater than the above-mentioned predetermined threshold value.
  • Since the threshold value is set based on the visibility of the upright image and related factors, above the threshold the visibility of an upright image is reduced and the region is not suited to displaying one.
  • Conversely, such a region is suited to displaying a depth image expressed with inclination (including an image that floats in the air and extends substantially parallel to the road surface, or that appears visually superimposed on the road surface). Therefore, for the image (virtual image) displayed in the second region, the convergence angle difference is set to be equal to or greater than the threshold value.
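The split into first and second regions can be pictured by sampling the convergence angle difference strip by strip from the near (raised) side toward the far (road-parallel) side and cutting at the first sample that reaches the threshold. The sampled values below are illustrative only:

```python
def split_display_area(strip_diffs_deg, threshold_deg=0.2):
    """Given convergence angle differences sampled strip by strip from the
    near end toward the far end, return (first_region, second_region):
    strips below the threshold (upright + depth) and strips at or above
    it (depth only)."""
    for i, diff in enumerate(strip_diffs_deg):
        if diff >= threshold_deg:
            return strip_diffs_deg[:i], strip_diffs_deg[i:]
    return strip_diffs_deg, []

# Illustrative samples: the area grows more inclined toward the far side.
first, second = split_display_area([0.05, 0.10, 0.18, 0.22, 0.30])
```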
  • The predetermined threshold value thus functions as a normal-visibility determination threshold for judging whether the upright image is normally visible.
  • The convergence angle difference used as the predetermined threshold value may be set to 0.2°.
  • This clarifies that the predetermined threshold value can be used specifically as a "normal-visibility determination threshold value", and that a preferable example of its value is 0.2°.
  • The limitation of the convergence angle difference by the predetermined threshold may be applied to the content of an upright image having a vertical angle of view of 0.75° or less.
  • This takes into consideration that as the size of the displayed content increases, fusing the left-eye and right-eye images in the brain becomes more difficult even at the same convergence angle difference. The above threshold value is therefore applied to small content with a vertical angle of view of 0.75° or less, for which the convergence angle difference between the upper and lower ends is set to be less than the threshold.
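A hypothetical placement check combining the two conditions above (the 0.2° threshold applied only to content with a vertical angle of view of 0.75° or less) might look as follows. How larger content should be handled is not specified by the text, so this sketch conservatively rejects it; the function name and that policy are assumptions:

```python
def upright_content_visible(conv_diff_deg, content_vfov_deg,
                            threshold_deg=0.2, max_vfov_deg=0.75):
    """True if upright content is expected to fuse normally.
    The threshold criterion is defined only for small content
    (vertical angle of view <= 0.75 deg); larger content is
    conservatively rejected in this sketch."""
    if content_vfov_deg > max_vfov_deg:
        return False
    return conv_diff_deg < threshold_deg
```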
  • FIG. 1A is a diagram showing an example of the configuration of a HUD device mounted on a vehicle and of an inclined display area.
  • FIGS. 1B and 1C are diagrams showing examples of methods for realizing the display area shown in FIG. 1A.
  • FIG. 2A is a diagram showing the main configuration of a HUD device mounted on a vehicle and a display example in the display area.
  • FIG. 2B is a diagram showing an example of a display area in which both a virtual image of a depth image and a virtual image of an upright image are displayed.
  • FIG. 3A is a diagram showing a state in which an inclined surface (inclined display area) displaying an arrow figure as a depth image is arranged in front and the viewer views it with both eyes.
  • FIG. 3B is a diagram showing the image seen by the left eye.
  • FIG. 3C is a diagram showing the depth image seen by fusing (combining) the left-eye and right-eye images.
  • FIG. 3D is a diagram showing the image seen by the right eye.
  • FIG. 4A is a diagram showing a state in which an inclined surface (inclined display area) displaying a vehicle speed display as an upright image is arranged in front and the viewer views it with both eyes.
  • FIG. 4B is a diagram showing the image seen by the left eye.
  • FIG. 4C is a diagram showing the upright image seen by fusing (combining) the left-eye and right-eye images.
  • FIG. 4D is a diagram showing the image seen by the right eye.
  • FIG. 5A is a diagram showing a state in which a viewer views, with both eyes, a display area erected substantially perpendicular to the road surface.
  • FIG. 5B is a diagram showing the convergence angles of both eyes with respect to a first right-end point on the upper end (upper side) of the display area in FIG. 5A and the corresponding second right-end point on the lower end (lower side).
  • FIG. 5C is a diagram showing the image obtained by fusing (combining) the left-eye and right-eye images.
  • FIG. 6A is a diagram showing a state in which a viewer views, with both eyes, a display area inclined by approximately 45° with respect to the road surface.
  • FIG. 6B is a diagram showing the convergence angles of both eyes with respect to the first right-end point on the upper end of the display area in FIG. 6A and the corresponding second right-end point on the lower end.
  • FIG. 6C is a diagram showing the image seen by the left eye.
  • FIG. 6D is a diagram showing the upright image obtained by fusing (synthesizing) the left-eye and right-eye images.
  • FIG. 6E is a diagram showing the image seen by the right eye.
  • FIG. 7A is a diagram showing a state in which a viewer views, with both eyes, a display area inclined by approximately 30° with respect to the road surface.
  • FIG. 7B is a diagram showing the convergence angles of both eyes with respect to the first right-end point on the upper end (upper side) of the display area in FIG. 7A and the corresponding second right-end point on the lower end (lower side).
  • FIG. 7C is a diagram showing the image seen by the left eye.
  • FIG. 7D is a diagram showing the visual perception, difficult to see due to double vision, when the left-eye and right-eye images are fused (combined).
  • FIGS. 8A and 8B are flowcharts showing an example of a design method for the HUD device (oblique-image-plane HUD device).
  • FIG. 9 is a diagram showing a configuration example of the display control unit (control unit) in the HUD device.
  • FIGS. 10A and 10B are views showing other examples of the inclined display area.
  • FIG. 11 is a graph of experimental results showing the proportion (vertical axis) of respondents who reported no discomfort at each convergence angle difference (horizontal axis).
  • FIG. 1A is a diagram showing an example of the configuration of a HUD device mounted on a vehicle and of an inclined display area.
  • FIGS. 1B and 1C are diagrams showing examples of methods for realizing the display area shown in FIG. 1A.
  • The direction along the front of the vehicle 1 (also referred to as the front-rear direction) is the Z direction, the direction along the width (horizontal width) of the vehicle 1 is the X direction, and the Y direction, along a line segment perpendicular to the flat road surface 40, is the direction away from the road surface 40 (the ground or its corresponding surface).
  • In the present description, the term "virtual display area" (sometimes simply called a display area) provided in front of the viewer can be interpreted broadly.
  • It may be a virtual display surface (sometimes called a virtual image display surface) corresponding to (the display range of) a display surface, such as a screen, on which an image is displayed; alternatively, the image area displayed on that virtual display surface may itself be regarded as a display area (or as part of a virtual image display surface). In the following description, based on the above, it is simply referred to as the "display area".
  • the HUD device 100 of this embodiment is mounted inside the dashboard 41 of the vehicle (own vehicle) 1.
  • In the display area PS1, which has a region inclined with respect to the road surface 40, the HUD device 100 can display both an upright image that is visually recognized upright in front of the vehicle 1 (an image that does not particularly emphasize depth, also called an erect image, composed for example of numbers or letters) and a depth image in which depth is an important factor (also called an inclined or tilted image, for example a navigation arrow extending along the road surface 40).
  • The HUD device 100 has a display unit 160 (sometimes referred to as an image display unit; specifically, a screen) having a display surface 164 for displaying an image, a light projecting unit (image projection unit) 150, and an optical system 120 including an optical member that projects the display light K of the image onto the windshield (a reflective translucent member) 2 as the projected member. The optical system 120 includes a curved mirror (concave mirror) 170 having a reflecting surface 179; the reflecting surface 179 is not a shape with a uniform radius of curvature but, for example, a set of partial regions having a plurality of radii of curvature.
  • For the design of the reflecting surface 179, a free-form surface design method can be used (the surface itself may be a free-form surface).
  • A free-form surface is a curved surface that cannot be expressed by a simple mathematical formula; instead, the surface is expressed by setting a number of intersection points and curvatures in space and interpolating between the intersection points with higher-order equations.
  • The shape of the reflecting surface 179 has a considerable influence on the shape of the display area PS1 and on its relationship with the road surface.
  • The shape of the display area PS1 is also affected by the shape of the reflecting surface 179 of the curved mirror (concave mirror) 170, the curved shape of the windshield (reflective translucent member 2), and the shapes of other optical members mounted in the optical system 120 (for example, a correction mirror). It is further affected by the shape of the display surface 164 of the display unit 160 (generally flat, although all or part may be non-planar) and by the arrangement of the display surface 164 with respect to the reflecting surface 179.
  • The curved mirror (concave mirror) 170 is a magnifying reflector and has a considerable influence on the shape of the display area (virtual image display surface); if the shape of its reflecting surface 179 differs, the shape of the display area (virtual image display surface) PS1 actually changes.
  • The inclined display area can be formed, for example, by arranging the display surface 164 of the display unit 160 obliquely, at an intersection angle of less than 90 degrees with respect to the optical axis of the optical system (the principal optical axis corresponding to the chief ray).
  • The curved shape of the display area PS1 may be adjusted by adjusting the optical characteristics of all or part of the optical system, by adjusting the arrangement of the optical members and the display surface 164, by adjusting the shape of the display surface 164, or by a combination of these. In this way, the shape of the virtual image display surface can be adjusted in various ways, and the display area PS1 having the first region Z1 and the second region Z2 can be realized.
  • The display area PS1 is divided into the first region Z1, capable of displaying both the depth image (tilted image) and the upright image (standing image), and the second region Z2, suited to (in other words, used exclusively for) displaying the depth image (tilted image).
  • The mode and degree of the overall inclination of the display area (including the virtual image display surface) PS1 are adjusted according to the mode and degree of inclination of the display surface 164 of the display unit 160.
  • The distortion of the display area (virtual image display surface) caused by the curved surface of the windshield (reflective translucent member 2) is corrected by the curved shape of the reflecting surface 179 of the curved mirror (concave mirror or the like) 170, and a flat display area (virtual image display surface) PS1 is generated.
  • Further, by rotating or moving the display surface 164 to change its relative relationship with the optical member (curved mirror 170), the degree to which the inclined display area (virtual image display surface) PS1 is separated from the road surface 40 can be adjusted.
  • In addition, the shape of the reflecting surface of the curved mirror (concave mirror or the like) 170, which is an optical member, is adjusted (or the shape of the display surface 164 of the display unit 160 is adjusted) so that the virtual image display distance near the end (near end) U1 of the display area PS1 on the side close to the vehicle 1 is bent toward the road surface and made to stand against it (in other words, raised); a display area PS1 having an inclined portion is thereby obtained.
  • In this case, the reflecting surface 179 of the curved mirror 170 can be divided into three portions: Near (near display), Center (middle/central display), and Far (far display). Near is the portion that generates the display light E1 (indicated by a long-dashed line in FIGS. 4A and 4B) corresponding to the near end U1 of the display area PS1, Center is the portion that generates the display light E2 (indicated by a broken line) corresponding to the intermediate (central) portion U2 of the display area PS1, and Far is the portion that generates the display light E3 (indicated by a solid line) corresponding to the far end U3 of the display area PS1.
  • Here, the Center and Far portions are the same as those of the curved mirror (concave mirror or the like) 170 in the case of generating the flat display area PS1 shown in FIG. 1B, whereas the curvature of the Near portion is set smaller than in FIG. 1B. The magnification corresponding to the Near portion therefore becomes larger.
  • The magnification (referred to as c) of the HUD device 100 is determined by the distance (referred to as a) from the display surface 164 of the display unit 160 to the windshield 2 and by the path of the light reflected by the windshield (reflective translucent member 2) toward the viewpoint; as the magnification increases, the image is formed at a position farther from the vehicle 1. That is, in the case of FIG. 1C, the virtual image display distance is larger than in the case of FIG. 1B.
  • Therefore, the near end U1 of the display area PS1 moves away from the vehicle 1 and bows toward the road surface 40; as a result, the first region Z1 is formed, and a display area PS1 having the first region Z1 and the second region Z2 is obtained.
  • FIG. 2A is a diagram showing the main configuration of a HUD device mounted on a vehicle and a display example in the display area.
  • FIG. 2B is a diagram showing an example of a display area in which both a virtual image of a depth image and a virtual image of an upright image are displayed.
  • In FIG. 2, the same reference numerals are given to portions common to FIG. 1.
  • The HUD device 100 includes a display unit (for example, a light-transmitting screen) 160 having the display surface 164, a reflecting mirror 165, and, as an optical member for projecting the display light, a curved mirror 170 (a concave mirror having the reflecting surface 179; the reflecting surface may be a free-form surface).
  • The image displayed on the display unit 160 is projected onto the projected region 5 of the windshield 2, as the projected member, via the reflecting mirror 165 and the curved mirror 170.
  • The HUD device 100 may be provided with a plurality of curved mirrors, and a configuration including refractive optical elements such as lenses, diffractive optical elements, or other functional optical elements may also be adopted.
  • In FIG. 2A, as display examples in the first region Z1 of the display area PS1, a vehicle speed display SP, which is an upright image (upright virtual image), and an image (virtual image) AW' of a navigation arrow, which is a depth display, are shown. Further, in the second region Z2, an image (virtual image) AW of a navigation arrow extending along the road surface 40 from the near side toward the far side of the vehicle 1 is shown.
  • The angle (inclination angle) formed by the first region Z1 with the road surface 40 is θ1 (0° < θ1 < 90°), and the angle (inclination angle) formed by the second region Z2 with the road surface 40 is θ2 (0° < θ2 < θ1).
  • Each of the first and second regions Z1 and Z2 is an inclined region (or a region having at least an inclined portion).
  • FIG. 3A is a diagram showing a state in which an inclined surface (inclined display area) displaying an arrow figure as a depth image is arranged in front and the viewer views it with both eyes.
  • FIG. 3B is a diagram showing the image seen by the left eye.
  • FIG. 3C is a diagram showing the depth image seen by fusing (combining) the left-eye and right-eye images.
  • FIG. 3D is a diagram showing the image seen by the right eye.
  • In FIG. 3A, the midpoint C0 is drawn at the center position between the left eye A1 and the right eye A2.
  • The image obtained by fusing, in the viewer's brain, the image seen by the left eye A1 and the image seen by the right eye A2 can, for convenience, be regarded as an image at the midpoint position C0.
  • In FIG. 3A, an image (virtual image) AW of a navigation arrow, inclined and extending into the distance, is displayed in the second region Z2 of the display area PS1.
  • When the images of the left and right eyes A1 and A2 (images with binocular parallax) shown in FIGS. 3B and 3D are fused (combined), an image with a sense of depth (three-dimensionality), as shown in FIG. 3C, is visually recognized.
  • The image (virtual image) AW of the arrow is naturally perceived as tilted toward the far side, owing to the change in image shape caused by the positional deviation between the upper and lower ends of the angle of view in each of the left and right eyes A1 and A2.
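The per-eye shape change can be reproduced with a simple pinhole projection: the disparity (left-eye x minus right-eye x) shrinks toward the far end of an arrow lying almost flat along the road, which is what the brain reads as depth. The eye spacing and point positions below are assumed values for illustration:

```python
def project(point, eye, screen_z=1.0):
    """Pinhole projection of a world point onto a plane screen_z ahead of the eye."""
    dx, dy, dz = (p - e for p, e in zip(point, eye))
    return (dx * screen_z / dz, dy * screen_z / dz)

IPD = 0.062
left_eye, right_eye = (-IPD / 2, 0.0, 0.0), (IPD / 2, 0.0, 0.0)

# Near and far ends of an arrow lying almost flat along the road (second region).
near_pt = (0.0, -0.40, 3.0)
far_pt = (0.0, -0.35, 4.0)

def disparity(point):
    """Horizontal offset between the left-eye and right-eye projections."""
    return project(point, left_eye)[0] - project(point, right_eye)[0]
```

The nearer end of the arrow shows the larger left/right disparity, so the fused image is perceived as receding.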
  • FIG. 4A is a diagram showing a state in which an inclined surface (inclined display area) displaying a vehicle speed display as an upright image is arranged in front and the viewer views it with both eyes.
  • FIG. 4B is a diagram showing the image seen by the left eye.
  • FIG. 4C is a diagram showing the upright image seen by fusing (combining) the left-eye and right-eye images.
  • FIG. 4D is a diagram showing the image seen by the right eye.
  • In FIG. 4A, an image (virtual image) of a vehicle speed display SP (displayed as "120 km/h", as shown in FIG. 4B and elsewhere) is shown in the first region Z1 of the display area PS1.
  • This vehicle speed display SP is displayed in the inclined first region Z1 as an upright image (upright virtual image) to be visually recognized upright.
  • If the inclination angle of the first region Z1 with respect to the road surface 40 is not too small and the region stands up to some extent, the viewer's cognition is not impaired, and visual recognition as an upright image (reading the information as an upright image) is possible.
  • When the images of the left and right eyes A1 and A2 (images with binocular parallax) shown in FIGS. 4B and 4D are fused (combined), the image shown in FIG. 4C is obtained, and the vehicle speed display SP is visually recognized as a standing image.
  • When the inclination angle of the first region Z1 with respect to the road surface 40 is small and fusion (combination) of the left-eye and right-eye images fails, the resulting appearance varies from person to person: the region may, for example, appear tilted as a whole, or appear with the shape seen by one eye dominating. In any case, visibility decreases, recognition time increases, and the display is also perceived unfavorably in subjective (psychological) terms.
  • FIG. 5(A) is a diagram showing a state in which a viewer views with both eyes a display area erected substantially perpendicular to the road surface; FIG. 5(B) is a view of the display area in FIG. 5(A); and FIG. 5(C) is a diagram showing the image obtained by fusing (combining) the images of the left eye and the right eye.
  • The display area has a contour of a predetermined shape (here, a quadrangle); the side of the display area toward the ground (or a corresponding surface of the ground, such as a road surface) is its lower end (lower side), and the opposite side (the side away from the ground) is its upper end (upper side).
  • The "quadrangle" describing the shape of the display area is to be interpreted broadly, to include, for example, a rectangle, a square, a trapezoid, a parallelogram, and the like.
  • As the display area here, the first area Z1, standing substantially perpendicular to the road surface 40, is shown in front of the viewer.
  • the viewer is looking at the image (virtual image) displayed in the first region Z1 with both eyes A1 and A2.
  • the displayed image is the vehicle speed display SP shown above.
  • The lower end of the angle of view in the display area (first area Z1) (hereinafter sometimes referred to simply as the lower end or lower side) is indicated by reference sign PL, and the upper end of the angle of view (hereinafter sometimes referred to simply as the upper end or upper side) is indicated by reference sign PU.
  • FIG. 5(B) shows, in a plan view looking from the upper side toward the lower side in FIG. 5(A) (the −Y direction indicated by the arrow in the figure), the convergence angles arising from the viewer's binocular parallax.
  • A pair of mutually corresponding points is set on the lower end (lower side) PL and the upper end (upper side) PU. These points can be placed at arbitrary positions, but are preferably set, for example, at the right end point or the left end point of each end (each side).
  • Here, the right end point R1 of the lower end (lower side) and the right end point R2 of the upper end (upper side) are set; the point R1 is referred to as the first point, and the point R2 as the second point.
  • The convergence angle (the angle formed by the visual axes indicating the line-of-sight directions of the eyes A1 and A2) when the first point R1 is viewed with the left and right eyes A1 and A2 is defined as the first convergence angle θL, and the convergence angle when the second point R2 is viewed is defined as the second convergence angle θU. The difference between them (the first convergence angle θL minus the second convergence angle θU) is defined as the "convergence angle difference". In other words, this convergence angle difference can be called the "convergence angle difference between the upper end (or upper side) and the lower end (or lower side) of the display area".
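The definition above can be sketched numerically. The following Python snippet is illustrative only and is not part of the patent; the 64 mm interpupillary distance and the 5 m / 7 m plan-view distances to the lower and upper edge points are assumed example values:

```python
import math

def convergence_angle_deg(distance_m, ipd_m=0.064):
    """Convergence angle (degrees) of the two visual axes for a point
    straight ahead at distance_m; ipd_m is the interpupillary distance
    (64 mm is a commonly assumed value)."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

def convergence_angle_difference_deg(d_lower_m, d_upper_m, ipd_m=0.064):
    """theta_L - theta_U: the convergence angle at the nearer first point R1
    (lower end PL) minus that at the farther second point R2 (upper end PU)."""
    return (convergence_angle_deg(d_lower_m, ipd_m)
            - convergence_angle_deg(d_upper_m, ipd_m))

# Assumed example: lower edge 5 m away, upper edge 7 m away (plan view).
diff = convergence_angle_difference_deg(5.0, 7.0)
print(f"theta_L - theta_U = {diff:.3f} deg")  # comes out near 0.21 deg
```

With these assumed distances, the difference happens to land near the 0.2° value discussed in the text; a more upright display area pulls the two distances together and drives the difference toward zero.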
  • When the contour quadrangle of the display area (first area Z1), which here stands substantially perpendicular to the road surface, is viewed from above, the upper end (upper side) PU and the lower end (lower side) PL of the quadrangle overlap, and the first point R1 and the second point R2 overlap.
  • Assuming that the length of the vertical side of the quadrangle (the length of the line segment indicating the first region Z1 in FIG. 5(A); in other words, the height of the first region above the road surface 40) is small, the variation in viewing distance due to height can be neglected, so the first and second convergence angles are substantially equal and the convergence angle difference is substantially zero. The image obtained by fusing (combining) the images of the left eye and the right eye is then visually recognized as an upright image, and there is no problem with the visibility of the vehicle speed display SP.
  • FIG. 6(A) is a diagram showing a state in which a viewer views with both eyes a display area inclined by approximately 45° with respect to the road surface; FIG. 6(B) is a view of the display area in FIG. 6(A); FIG. 6(C) is a diagram showing the image seen with the left eye; FIG. 6(D) is a diagram showing the upright image obtained by fusing (combining) the images of the left eye and the right eye; and FIG. 6(E) is a diagram showing the image seen with the right eye.
  • In FIG. 6, the display area (first area Z1) is arranged at an inclination of approximately 45° with respect to the road surface 40. As a result, the lower end (lower side) of the quadrangle outlining the first region Z1 moves toward the viewer, so that the lower end (lower side) PL is closer to the viewer than the upper end (upper side) PU.
  • The convergence angle difference is then α (α being an integer larger than 0).
  • The human visual system can recognize an upright image by correcting for a certain amount of depth and left-right distortion, and in the example of FIG. 6 the limit of this correction (recognition) capability is not exceeded. Therefore, as shown in FIG. 6(D), the vehicle speed display SP can still be correctly visually recognized as an upright image.
  • FIG. 7(A) is a diagram showing a state in which a viewer views with both eyes a display area inclined by approximately 30° with respect to the road surface; FIG. 7(B) is a diagram showing, for the display area in FIG. 7(A), the convergence angles of both eyes with respect to the right end point of the lower end (lower side) and the corresponding right end point of the upper end (upper side); FIG. 7(C) is a diagram showing the image seen with the left eye; FIG. 7(D) is a diagram showing the visual perception, difficult to see because of double vision, that results when fusion (combination) of the left-eye and right-eye images is attempted; and FIG. 7(E) is a diagram showing the image seen with the right eye.
  • In FIG. 7, the display area (first area Z1) is inclined still further with respect to the road surface 40, and the lower end (lower side) PL of the quadrangle moves still closer to the viewer. The first convergence angle θL therefore increases further, its difference from the second convergence angle θU grows, and the convergence angle difference (θL − θU) becomes β (β being an integer satisfying α < β).
  • In this way, the "convergence angle difference (θL − θU) between the upper end (upper side) and the lower end (lower side) of the display area" can serve as an indicator of the degree of inclination of the display area with respect to the ground (or its corresponding surface).
  • Moreover, since the convergence angle varies with the distance from the viewer's eyes, the convergence angle difference (θL − θU) inherently encodes distance: it is a single integrated index (threshold) carrying information on both distance and the degree of inclination of the display area (or virtual image display surface, etc.) with respect to the ground (or its corresponding surface). Unlike the conventional approach, there is no need to specify the inclination under distance-dependent preconditions such as "so many degrees of inclination at a distance of so many meters". Introducing this index into the design of a HUD device or the like therefore makes setting the inclination of the display area more efficient (easier).
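As a rough illustration of how this single index folds distance and inclination together, the sketch below uses an assumed geometry that is not taken from the patent: the lower edge is fixed 5 m ahead of the eyes, the surface extends 1 m at a given tilt, and distances are measured in plan view as in FIG. 5(B).

```python
import math

def plan_view_conv_diff_deg(d_lower_m, extent_m, tilt_deg, ipd_m=0.064):
    """Convergence angle difference (theta_L - theta_U, degrees) for a display
    surface whose lower edge lies d_lower_m ahead of the eyes and which extends
    extent_m at tilt_deg above the road surface (90 deg = fully upright).
    Plan-view horizontal distances are used, as in FIG. 5(B)."""
    d_upper = d_lower_m + extent_m * math.cos(math.radians(tilt_deg))
    angle = lambda d: math.degrees(2.0 * math.atan(ipd_m / (2.0 * d)))
    return angle(d_lower_m) - angle(d_upper)

for tilt in (90, 45, 30):  # upright, FIG. 6-like, FIG. 7-like arrangements
    print(f"tilt {tilt:2d} deg -> diff {plan_view_conv_diff_deg(5.0, 1.0, tilt):.3f} deg")
```

An upright surface (90°) gives a difference of zero; as the surface lies flatter, the upper edge recedes and the difference grows, matching the α < β progression in the text. The same difference can equally be produced by moving the lower edge toward the viewer, which is why no separate distance precondition is needed.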
  • The visibility of an upright image varies from person to person and cannot be stated unequivocally; nevertheless, whether a properly upright image is being visually recognized can be determined objectively based on at least one of the visibility of the displayed image (a first factor), the time required for visual recognition (a second factor), and psychological factors such as a sense of discomfort or unpleasantness (a third factor). By using the above index in that determination, a threshold value usable for the determination is obtained.
  • This "predetermined threshold value" can be used, for example, as a "normal-visibility determination threshold", and an example of a preferable value is the above-mentioned 0.2°.
  • By using this threshold value, for example, a design that allows content images (upright images) to be visually recognized upright while suppressing a decrease in visibility becomes more efficient or easier.
  • This new index (the convergence angle difference, or the tilt distortion angle it causes; see angle θd in FIG. 6(C)) can also be used for calibration of the HUD device, initialization of the HUD device, simulation of HUD device functions, and the like, improving the efficiency of each of these processes.
  • the display area includes a first area Z1 capable of displaying both a depth image and an upright image, and a second area Z2 suitable for displaying a depth image.
  • The convergence angle difference between the upper end and the lower end of the display region in the second region Z2 is preferably set to at least the above-mentioned predetermined threshold (preferably 0.2°). The threshold is set based on the visibility of the upright image and related factors; at or above the threshold, the visibility of an upright image deteriorates, making the region unsuitable for displaying upright images. It can be said, however, that such a region is well suited to displaying depth images expressed in an inclined manner (including images that float in the air and extend substantially parallel to the road surface, or that give the visual impression of being superimposed on the road surface). Therefore, for images (virtual images) displayed in the second region Z2, the convergence angle difference is set to be equal to or larger than the threshold.
  • The limitation of the convergence angle difference by the predetermined threshold may be applied to upright-image content having a vertical angle of view of 0.75° or less. That is, the above threshold is applied to small content of 0.75° or less, and the convergence angle difference between its upper end and lower end is set to be less than the threshold.
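A minimal sketch of this applicability condition, with invented example values for content height and viewing distance:

```python
import math

def vertical_angle_of_view_deg(content_height_m, distance_m):
    """Vertical angle (degrees) subtended by content of the given height
    viewed from the given distance."""
    return math.degrees(2.0 * math.atan(content_height_m / (2.0 * distance_m)))

def threshold_applies(content_height_m, distance_m, limit_deg=0.75):
    """Per the text, the convergence-angle-difference limit is applied to
    small upright content whose vertical angle of view is at most limit_deg."""
    return vertical_angle_of_view_deg(content_height_m, distance_m) <= limit_deg

print(threshold_applies(0.06, 5.0))  # ~0.69 deg vertical angle of view -> True
print(threshold_applies(0.10, 5.0))  # ~1.15 deg -> False
```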
  • FIGS. 8(A) and 8(B) are flowcharts showing an example of a design method for a HUD device (oblique image plane HUD device). Each part of the HUD device is designed so that the convergence angle difference between the upper end and the lower end of the display area for an information image to be visually recognized upright (an upright image), or the tilt distortion angle caused by that difference, is less than a predetermined threshold (preferably less than 0.2°) determined based on at least one of the image's visibility, the time required for visual recognition, and psychological factors such as discomfort or unpleasantness.
  • The display area is divided into a first area, in which both information images to be visually recognized as inclined (depth images) and information images to be visually recognized upright (upright images) are displayed, and a second area, in which information images to be visually recognized as inclined (depth images) are displayed (step S2).
  • The convergence angle difference between the upper end and the lower end of the first region is designed to be less than the predetermined threshold (preferably less than 0.2°), and the convergence angle difference between the upper end and the lower end of the second region is designed to be equal to or larger than the predetermined threshold (step S3).
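The partition rule of steps S2 and S3 can be condensed to a few lines. This is a schematic rendering rather than the patented implementation; the 0.2° default is the preferable threshold quoted in the text:

```python
THRESHOLD_DEG = 0.2  # preferable "normal-visibility" threshold from the text

def region_for(conv_diff_deg, threshold_deg=THRESHOLD_DEG):
    """Step S3 in sketch form: a sub-area whose upper/lower convergence angle
    difference is below the threshold may carry upright images as well as depth
    images (first region Z1); otherwise it is reserved for depth images (Z2)."""
    return "Z1" if conv_diff_deg < threshold_deg else "Z2"

assert region_for(0.08) == "Z1"  # gently inclined part of the display area
assert region_for(0.35) == "Z2"  # strongly inclined part: depth images only
```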
  • FIG. 9 is a diagram showing a configuration example of a display control unit (control unit) in the HUD device.
  • the upper view of FIG. 9 is almost the same as that of FIG. 1 (A).
  • a line-of-sight detection camera 188 and a viewpoint position detection unit 192 are provided.
  • the display control unit (control unit) 190 has an input / output (I / O) interface 193 and an image processing unit 194.
  • The image processing unit 194 includes an image generation control unit 195, a ROM 198 (holding an upright image table 199 and a depth image table 200), a VRAM 201 (holding warping parameters 196 and a post-warping data storage buffer 197), and an image generation unit (image rendering unit) 202.
  • The image generation control unit 195 can control the display positions of content, for example arranging the vehicle speed display SP and the arrow display AW' in the first region Z1 and the arrow display AW in the second region Z2. Further, the display control unit (control unit) 190 can also perform control such as setting the display area at an appropriate position using the above threshold, for example at the time of calibration or initialization of the HUD device 100.
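The placement behaviour attributed to the image generation control unit 195 can be sketched as a simple routing table. Everything here is illustrative: the content names mirror the reference signs in the text, and the table-driven lookup is an assumption that loosely echoes the upright image table 199 and depth image table 200 held in the ROM 198:

```python
# Hypothetical content routing: upright (text/numeric) content is placed in
# the first region Z1, depth-expressed content in the second region Z2.
UPRIGHT_TABLE = {"vehicle_speed_SP", "arrow_AW_prime"}  # cf. upright image table 199
DEPTH_TABLE = {"arrow_AW"}                              # cf. depth image table 200

def place_content(name):
    """Return the region in which the named content should be displayed."""
    if name in UPRIGHT_TABLE:
        return "Z1"
    if name in DEPTH_TABLE:
        return "Z2"
    raise KeyError(f"unknown content: {name}")

for name in ("vehicle_speed_SP", "arrow_AW_prime", "arrow_AW"):
    print(name, "->", place_content(name))
```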
  • FIG. 10 (A) and 10 (B) are views showing another example of the inclined display area.
  • the device configuration itself is the same as in FIG. 1 (A).
  • The cross-sectional shape of the display area PS1, as seen from the width direction of the vehicle 1 (the horizontal direction, X direction), is not limited to one that is convex toward the driver as shown in FIG. 1; as shown in FIG. 10(A), the display area PS1 may instead be concave toward the driver. Further, as shown in FIG. 10(B), the display area PS1 need not be curved at all. These are merely examples, and display areas having various other cross-sectional shapes can be assumed.
  • the configuration of this embodiment has the effect of improving the visibility of the image.
  • To verify this effect, a sensory evaluation was performed in which subjects were asked to visually recognize virtual images displayed by a plurality of types of head-up display devices and to answer whether or not they felt any discomfort. The head-up display devices displayed upright images having different convergence angle differences between the upper and lower ends. Each upright image had a shape visually recognized as a rectangle in the vertical and horizontal directions when viewed from the subject's observation position, with the vertical angle of view set to 0.75°.
  • FIG. 11 is a graph of experimental results showing the proportion (vertical axis) of those who answered that they did not feel uncomfortable with each convergence angle difference (horizontal axis). It can be seen that the proportion of people who answered that they did not feel uncomfortable increased as the difference in the convergence angle between the upper end and the lower end became smaller when visually recognizing the upright image.
  • The present invention can be widely applied to, for example, a parallax-type HUD device that directs images having parallax into the viewer's eyes, a light-ray reproduction type HUD device using a lenticular lens or the like, and so on.
  • The term "vehicle" is to be interpreted broadly, as encompassing conveyances (moving bodies) in general.
  • terms related to navigation shall be interpreted in a broad sense, taking into consideration, for example, the viewpoint of navigation information in a broad sense useful for vehicle operation, and road signs and the like can also be included.
  • the upright image can also be said to be a face-to-face image that the viewer sees face-to-face, and can be widely interpreted without being bound by the name.
  • The HUD device shall be understood to include devices used as simulators (for example, an aircraft simulator, a simulator serving as a game device, etc.).
  • Reference signs: … optical system including an optical member; 150 … light projecting unit (image projection unit); 160 … display unit (for example, a liquid crystal display device, a screen, etc.); 164 … display surface; 170 … curved mirror (concave mirror, etc.); 179 … reflective surface; 188 … line-of-sight detection camera; 190 … display control unit (control unit); 192 … viewpoint position detection unit; 194 … image processing unit; 195 … image generation control unit; 196 … warping parameter; 197 … data buffer after warping processing; 198 … ROM; 199 … upright image table; 200 … depth image table; 201 … VRAM (storage device for image processing); 202 … image generation unit (image rendering unit); EB … eye box; PS1 … display area (virtual image display surface); Z1 … upright image

Abstract

The purpose of the present invention is to make it possible, in an oblique image surface HUD device, to display a content image to be viewed in an erect position (an erect image) while suppressing a decrease in visibility. A control unit of a head-up display device according to the present invention performs control for displaying an erect image, i.e. an image to be viewed in an erect position, within a display area (Z1) that is a flat or curved inclined plane, inclined with respect to the ground surface or a plane (40) corresponding to the ground surface in real space from a side that is close to the viewer and lower to a side that is far from the viewer and higher. The erect image is displayed in the display area (Z1), which has a quadrangular outline when viewed from the viewer, and a convergence angle difference (θL−θU) between an upper end (PU) and a lower end (PL) of the display area (Z1) is set to less than a predetermined threshold (preferably a convergence angle of 0.2°) determined on the basis of at least one of the visibility of the image, the time required for visual recognition, and a psychological factor such as an uncomfortable or unpleasant feeling.

Description

Head-up display device
The present invention relates, for example, to a head-up display (HUD) device that projects image display light onto a projected member, such as the windshield of a vehicle or a combiner, to display a virtual image in front of a driver or other viewer.

For the purpose of improving information recognition by the viewer (driver, etc.), it has been proposed to incline the virtual image display surface of a HUD device in the depth direction (see, for example, Patent Document 1).

A HUD device of this type (hereinafter sometimes called an oblique image plane HUD device or an inclined surface HUD device) improves recognition by presenting information images that carry depth information (depth images such as arrows and maps), while also being able to present information images without depth information (in a broad sense, images for which depth is not emphasized, such as upright images of text and numbers) so that they are visually recognized upright, which makes it highly convenient.

Note that "upright", in the sense of causing an image to be visually recognized upright, is used with the following meaning. Content represented by characters, numbers, and the like cannot be visually recognized (or becomes difficult to see) not only when it is inverted upside down but also, for example, when the display surface is inclined to a large degree.

Therefore, when displaying characters, numbers, and the like, the HUD device needs to present them as correctly upright content so that a person can read the information correctly. An image (virtual image) of this kind is called an upright image (upright virtual image). Since an upright image is often displayed facing the viewer, it may also be called a facing image.

Patent Document 1: Japanese Unexamined Patent Publication No. 2018-120135
The present inventor studied the oblique image plane HUD device and recognized the following new problems. If an image (virtual image) can be displayed on an inclined surface, then, as described above, images with a sense of depth can be displayed; and, provided the surface is erected to some extent relative to the ground (or a surface corresponding to the ground), content composed of characters, numbers, and the like, for which a sense of depth is not emphasized, can also be displayed as an upright image.

However, if the inclined surface is tilted strongly toward the ground (or its corresponding surface), it becomes well suited to displays with depth, but problems arise for upright images: visibility decreases, the viewer may find the image difficult to recognize, the time required for recognition may lengthen even when recognition succeeds, and the viewer may subjectively experience discomfort or a sense of unnaturalness.

When these problems do not manifest, the upright image can be recognized correctly; in other words, a properly upright image can be visually recognized. Otherwise, the upright image cannot be discriminated, identified, or visually recognized, or doing so is difficult.

Therefore, to enable efficient design of the HUD device, calibration of the HUD device, initialization of the HUD device, and the like to be carried out appropriately, it is preferable to provide a reference index for determining whether or not a properly upright image can be recognized.

For example, one could impose a restriction such that, when an image (virtual image) is displayed a given number of meters ahead of a predetermined point of the vehicle or of the viewpoint of the viewer (driver, etc.), the inclination angle of the display area with respect to the ground (or its corresponding surface) must not fall below a given number of degrees.

However, human visibility (visual sensitivity) depends in part on the distance to the image: sensitivity is high when the image is displayed on the near side and comparatively low when it is displayed on the far side. With a setting method of the above kind, the permissible inclination changes whenever the distance changes, so no unified criterion (threshold) can be obtained, which makes the method unwieldy.

It is therefore important to obtain an index that can serve as a unified criterion (threshold), and to set a preferable value of that threshold appropriately. Once such an index is obtained, it can be used to set the inclination of the inclined surface (or display area) so that reading of the upright image is not hindered, and design and the like are also facilitated. The prior art, including Patent Document 1, does not examine this point at all, and no suitable index has been obtained.
One object of the present invention is to enable an oblique image plane HUD device to display content images intended to be viewed upright (upright images) while suppressing a decrease in visibility.

Other objects of the present invention will become apparent to those skilled in the art by reference to the aspects and best embodiments exemplified below and to the accompanying drawings.

In the following, aspects according to the present invention are illustrated in order to facilitate understanding of the outline of the present invention.
In a first aspect, the head-up display device includes:

an image display unit that displays an image;

an optical system that projects the light of the image displayed by the image display unit toward a projected member, thereby causing a viewer to visually recognize a virtual image of the image within a virtual display area in the real space in front of the viewer; and

a control unit that controls the display of the image on the image display unit.

Here, the direction toward the front of the viewer in real space is defined as the forward direction; the direction orthogonal to the forward direction and along the line segment connecting the viewer's left and right eyes is defined as the left-right direction; the direction along the line segment orthogonal to the forward direction and the left-right direction is defined as the vertical direction or height direction; and the direction away from the ground, or from a surface corresponding to the ground, in real space is defined as upward, and the direction toward it as downward.

The control unit performs control to display an upright image, i.e. an image to be visually recognized upright, within the display area, which is a flat or curved inclined surface that, relative to the ground or the surface corresponding to the ground in real space, inclines from a side close to the viewer and lower toward a side far from the viewer and higher.

The upright image is displayed in a display area having a quadrangular outline as seen from the viewer, and the convergence angle difference between the upper end and the lower end of the display area is set to be less than a predetermined threshold determined based on at least one of the visibility of the image, the time required for visual recognition, and psychological factors such as a sense of discomfort or unpleasantness.
 第1の態様では、正立画像を正しく視認できるような表示領域の傾斜の程度を、「輻輳角差(但し、それにより引き起こされる傾斜歪み角度(図6(C)の角度θd参照)により代替可能である)」という新しい指標(閾値としての基準)を用いて判定し、傾斜の程度を決めたり、設定したりすることを効率化(あるいは容易化)する。 In the first aspect, the degree of inclination of the display area so that the upright image can be correctly visually recognized is replaced by "convergence angle difference (provided that the inclination distortion angle caused by the difference (see the angle θd in FIG. 6C)). It is possible to make a judgment using a new index (standard as a threshold value), which makes it more efficient (or easier) to determine and set the degree of inclination.
 表示領域が四角形の輪郭を有するものとし、その表示領域の、地面(又は地面の相当面:路面等)側を下端(下辺)とし、その反対側(地面から離れる側)を上端(上辺)とする。ここで例えば、下端(下辺)と上端(上辺)の各々に、互いに対応する一対の点(任意の位置に設けることができるが、好ましくは例えば、各端(各辺)の右端点あるいは左端点)を設定する。この一対の点を第1の点、第2の点とし、左右の各眼から第1の点を見た場合の輻輳角(各眼の視線方向を示す視軸がなす角度)を第1の輻輳角とし、第2の点を見た場合の輻輳角を第2の輻輳角とし、第1の輻輳角と第2の輻輳角との差(第1の輻輳角から第2の輻輳角を引き算した差)を「輻輳角差」とする。この輻輳角差は、言い換えれば、「表示領域の上端と下端の輻輳角差」ということができる。 It is assumed that the display area has a quadrangular outline, the ground (or equivalent surface of the ground: road surface, etc.) side of the display area is the lower end (lower side), and the opposite side (the side away from the ground) is the upper end (upper side). do. Here, for example, a pair of points (which can be provided at arbitrary positions) corresponding to each other at each of the lower end (lower side) and the upper end (upper side) are preferably provided, for example, the right end point or the left end point of each end (each side). ) Is set. This pair of points is defined as a first point and a second point, and the convergence angle (the angle formed by the visual axis indicating the line-of-sight direction of each eye) when the first point is viewed from each of the left and right eyes is the first point. The convergence angle is defined as the convergence angle when the second point is viewed, and the difference between the first congestion angle and the second convergence angle (the first congestion angle to the second convergence angle is defined as the second convergence angle). The subtracted difference) is defined as the "convergence angle difference". In other words, this convergence angle difference can be said to be "convergence angle difference between the upper end and the lower end of the display area".
 ここで、表示領域が地面(その相当面)に、例えば略直角をなして立設されているのならば、その四角形を上側から見た平面視で、四角形の上端(上辺)と下端(下辺)は重なり、第1の点と第2の点とは重なる。四角形の縦の辺の長さ(高さ)が小さいとすると、第1、第2の各点の高さ位置に起因する第1、第2の各点と左右の各眼との距離の変動は無視できる。上記の場合は、第1、第2の各点についての第1、第2の輻輳角は同じ(略同じ)となり、輻輳角差は零(略零)となる。 Here, if the display area is erected on the ground (corresponding surface thereof) at a substantially right angle, for example, the upper end (upper side) and the lower end (lower side) of the quadrangle are viewed in a plan view from above. ) Overlap, and the first point and the second point overlap. Assuming that the length (height) of the vertical side of the quadrangle is small, the variation in the distance between the first and second points and the left and right eyes due to the height position of each of the first and second points. Can be ignored. In the above case, the first and second convergence angles for each of the first and second points are the same (substantially the same), and the convergence angle difference is zero (substantially zero).
 ここで、表示領域が地面(その相当面)に対して傾斜し、四角形の下端(下辺)が視認者の側に移動すると、第1、第2の各点の輻輳角の値に差が生じる。言い換えれば、第1の輻輳角は、第2の輻輳角よりも大きくなる。したがって、輻輳角差はα(αは、0より大きい整数)となる。 Here, when the display area is tilted with respect to the ground (corresponding surface thereof) and the lower end (lower side) of the quadrangle moves toward the viewer, there is a difference in the value of the convergence angle at each of the first and second points. .. In other words, the first convergence angle is larger than the second convergence angle. Therefore, the convergence angle difference is α (α is an integer larger than 0).
 表示領域がさらに傾斜し、四角形の下端(下辺)が、さらに視認者の側に移動して視認者に近づくと、第1の輻輳角はさらに増大するため、第2の輻輳角との差は大きくなり、輻輳角差はβ(βは、α<βを満たす整数)となる。 When the display area is further inclined and the lower end (lower side) of the quadrangle moves further toward the viewer and approaches the viewer, the first convergence angle further increases, so that the difference from the second convergence angle is. It becomes large and the convergence angle difference becomes β (β is an integer satisfying α <β).
 このように、「表示領域の上端と下端の輻輳角差」は、表示領域の地面(その相当面)に対する傾斜の程度を示す指標となる。また、輻輳角は視認者の眼からの距離に依存して変動するため、距離情報を内包しており、よって、輻輳角差は、距離も含めた表示領域(あるいは虚像表示面等)の地面(その相当面)に対する傾斜の程度の情報を有する、1つの統合された指標(閾値)となる。従来のように、距離何メートルのときに傾斜角何度というような前提条件付きで傾斜を設定する必要がない。 In this way, the "convergence angle difference between the upper end and the lower end of the display area" is an index indicating the degree of inclination of the display area with respect to the ground (corresponding surface thereof). Further, since the convergence angle fluctuates depending on the distance from the viewer's eyes, distance information is included. Therefore, the convergence angle difference is the ground of the display area (or the virtual image display surface, etc.) including the distance. It is one integrated index (threshold value) that has information on the degree of inclination with respect to (its equivalent surface). Unlike the conventional method, it is not necessary to set the inclination with the precondition such as the number of inclination angles at a distance of several meters.
 The visibility of an upright image varies from person to person and cannot be stated categorically, but whether an upright image is being viewed properly can be judged objectively on the basis of at least one of: the visibility of the displayed image, the time required to read it, and psychological factors such as unease or discomfort. Using the above index in that judgment yields a threshold value that can serve as the decision criterion.
 As noted above, the larger the tilt of the display area and the closer the first point comes to the viewer, the larger the convergence angle difference becomes. Therefore, for example, the convergence angle difference near the limit at which the viewer can still fuse (combine) the left-eye and right-eye images in the brain and recognize an upright image can be taken as the threshold. If, when designing the HUD device, each part is set so that the convergence angle difference stays below that threshold, the viewer can recognize an upright image even when it is displayed on a tilted surface. In other words, the visibility of the upright image to the viewer is secured at or above a predetermined level.
 In this way, for example, designing a device that can display content intended to be viewed upright (an upright image) while suppressing loss of visibility becomes more efficient or easier. This new index (the convergence angle difference, or the tilt distortion angle it causes) can also be used for calibrating the HUD device, initializing the HUD device, simulating the functions of the HUD device, and so on, making each of those processes more efficient.
 In a second aspect dependent on the first aspect,
 the display area may be divided into a first region capable of displaying both a virtual image of a depth image, i.e. an image to be viewed as tilted, and a virtual image of the upright image, and a second region for displaying a virtual image of the depth image; and
 the convergence angle difference between the upper end and the lower end of the display area in the first region may be set below the predetermined threshold, while the convergence angle difference between the upper end and the lower end of the display area in the second region may be set at or above the predetermined threshold.
 In the second aspect, the display area is divided into a first region that can display both depth images and upright images, and a second region suited to displaying depth images; the convergence angle difference between the upper end and the lower end of the display area in the second region is set at or above the predetermined threshold described above.
 As described above, the threshold is set on the basis of the visibility of upright images and related criteria. At or above the threshold, the visibility of an upright image deteriorates, making the region unsuitable for displaying upright images; conversely, it is suitable for displaying depth images expressed with tilt (including images that appear to float in the air and extend roughly parallel to the road surface, or that give the impression of being superimposed on the road surface). For the images (virtual images) displayed in the second region, the convergence angle difference is therefore set at or above the threshold. As a result, when, for example, an upright image is displayed in the first region and a depth image in the second region, appropriate visibility can be secured for each image.
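The partition rule can be sketched as a simple check (a hedged illustration; the helper names, the 65 mm interpupillary distance, and the example distances are assumptions, not taken from the patent): a region whose top-to-bottom convergence angle difference stays below the threshold may carry upright images, while a region at or above it is reserved for depth images.

```python
import math

def convergence_angle_deg(distance_m, ipd_m=0.065):
    # theta = 2 * atan(IPD / (2 * d)) for a point straight ahead of the viewer.
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

def region_kind(d_lower_m, d_upper_m, threshold_deg=0.2, ipd_m=0.065):
    # Below the threshold: upright images remain fusible -> first region.
    # At or above it: suited only to tilted depth images -> second region.
    diff = convergence_angle_deg(d_lower_m, ipd_m) - convergence_angle_deg(d_upper_m, ipd_m)
    return "first (upright or depth)" if diff < threshold_deg else "second (depth only)"

print(region_kind(4.8, 5.0))  # nearly upright patch: small difference
print(region_kind(2.5, 5.0))  # strongly tilted patch: large difference
```

The same helper could be run over each candidate patch of a virtual image display surface during design to decide which content types it may carry.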
 In a third aspect dependent on the first or second aspect,
 the predetermined threshold may function as a proper-viewing determination threshold for judging whether the upright image can be viewed properly, and
 the convergence angle difference serving as the predetermined threshold may be set to 0.2°.
 The third aspect makes clear that the "predetermined threshold" described above can be used specifically as, for example, a "proper-viewing determination threshold," and that a preferred example of its value is 0.2°.
 In a fourth aspect dependent on any one of the first to third aspects,
 when the angle of view in the vertical direction (or height direction) as seen by the viewer is called the vertical angle of view, the limitation of the convergence angle difference by the predetermined threshold may be applied to upright-image content with a vertical angle of view of 0.75° or less.
 The fourth aspect takes into account that as displayed content grows larger, fusing the left-eye and right-eye images in the brain becomes more difficult even at the same convergence angle difference. The threshold is therefore applied to small content with a vertical angle of view of 0.75° or less, and the convergence angle difference between the upper and lower ends is set below the threshold.
 It has also been confirmed that upright content larger than this size becomes increasingly bothersome in a HUD device from the standpoint of obstructing the view, and its practical feasibility at present is not high. Accordingly, limiting application of the above threshold to displayed content of a predetermined size or smaller is considered to pose no particular problem.
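The size condition of the fourth aspect can be illustrated as follows (a rough sketch; the helper names, the content heights, and the 5 m viewing distance are invented example values): the content height is converted to a vertical angle of view, and the convergence-difference limit is applied only when that angle is at most 0.75°.

```python
import math

def vertical_angle_of_view_deg(content_height_m, distance_m):
    # Angle subtended vertically by content of the given height,
    # assumed centred on the line of sight: gamma = 2 * atan(h / (2 * d)).
    return math.degrees(2.0 * math.atan(content_height_m / (2.0 * distance_m)))

def threshold_applies(content_height_m, distance_m, max_vertical_deg=0.75):
    # The convergence-difference limit targets small upright content only.
    return vertical_angle_of_view_deg(content_height_m, distance_m) <= max_vertical_deg

print(threshold_applies(0.05, 5.0))  # small speed readout: limit applies
print(threshold_applies(0.30, 5.0))  # large graphic: outside the limit's scope
```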
 Those skilled in the art will readily appreciate that the illustrated aspects according to the present invention can be further modified without departing from the spirit of the present invention.
FIG. 1(A) shows the configuration of a HUD device mounted on a vehicle and an example of a tilted display area; FIGS. 1(B) and 1(C) show examples of techniques for realizing the display area shown in FIG. 1(A).
FIG. 2(A) shows the main configuration of a HUD device mounted on a vehicle and a display example in the display area; FIG. 2(B) shows an example of a display area composed of a first region that can display both virtual images of depth images and virtual images of upright images, and a second region that displays virtual images of depth images.
FIG. 3(A) shows a tilted surface (tilted display area) placed ahead, on which an arrow figure is displayed as a depth image, viewed by a viewer with both eyes; FIG. 3(B) shows the image seen by the left eye; FIG. 3(C) shows the depth image perceived by fusing (combining) the left-eye and right-eye images; FIG. 3(D) shows the image seen by the right eye.
FIG. 4(A) shows a tilted surface (tilted display area) placed ahead, on which a vehicle speed display is shown as an upright image, viewed by a viewer with both eyes; FIG. 4(B) shows the image seen by the left eye; FIG. 4(C) shows the upright image perceived by fusing (combining) the left-eye and right-eye images; FIG. 4(D) shows the image seen by the right eye.
FIG. 5(A) shows a viewer looking with both eyes at a display area standing roughly perpendicular to the road surface; FIG. 5(B) shows the convergence angles of both eyes with respect to the first right-end point of the upper end (upper side) of the display area in FIG. 5(A) and the second right-end point of the lower end (lower side) corresponding to it; FIG. 5(C) shows the image obtained by fusing (combining) the left-eye and right-eye images.
FIG. 6(A) shows a viewer looking with both eyes at a display area tilted roughly 45° to the road surface; FIG. 6(B) shows the convergence angles of both eyes with respect to the first right-end point of the upper end (upper side) of the display area in FIG. 6(A) and the corresponding second right-end point of the lower end (lower side); FIG. 6(C) shows the image seen by the left eye; FIG. 6(D) shows the upright image obtained by fusing (combining) the left-eye and right-eye images; FIG. 6(E) shows the image seen by the right eye.
FIG. 7(A) shows a viewer looking with both eyes at a display area tilted roughly 30° to the road surface; FIG. 7(B) shows the convergence angles of both eyes with respect to the first right-end point of the upper end (upper side) of the display area in FIG. 7(A) and the corresponding second right-end point of the lower end (lower side); FIG. 7(C) shows the image seen by the left eye; FIG. 7(D) shows the view, difficult to perceive because of double vision and the like, that results when the left-eye and right-eye images are fused (combined); FIG. 7(E) shows the image seen by the right eye.
FIGS. 8(A) and 8(B) are flowcharts showing an example of a design procedure for a HUD device (oblique-image-plane HUD device).
FIG. 9 shows a configuration example of the display control unit (control unit) in the HUD device.
FIGS. 10(A) and 10(B) show other examples of tilted display areas.
FIG. 11 is a graph of experimental results showing, for each convergence angle difference (horizontal axis), the proportion of respondents (vertical axis) who reported no discomfort.
 The preferred embodiments described below are provided to facilitate understanding of the present invention. Those skilled in the art should therefore note that the present invention is not unduly limited by the embodiments described below.
 Refer to FIG. 1. FIG. 1(A) shows the configuration of a HUD device mounted on a vehicle and an example of a tilted display area; FIGS. 1(B) and 1(C) show examples of techniques for realizing the display area shown in FIG. 1(A). In FIG. 1, the direction along the front of the vehicle 1 (also called the front-rear direction) is the Z direction; the direction along the width of the vehicle 1 (the left-right direction) is the X direction; and the height direction of the vehicle 1, i.e. the direction along a line perpendicular to the flat road surface 40 and pointing away from the ground or its corresponding surface (here taken to be the road surface 40), is the Y direction.
 In the following description, the term "virtual display area" (sometimes simply "display area") provided ahead of the viewer can be interpreted broadly. For example, it may be a virtual display surface (sometimes called a virtual image display surface) corresponding to the display range of a display surface such as a screen on which an image is shown; alternatively, when a single image shown on that virtual display surface is placed within an image region of a predetermined shape (for example, a quadrangle) and predetermined size, that image region can itself be regarded as one display area (or as part of the virtual image display surface). With the above in mind, the following description simply uses the term "display area."
 In describing the shape of the display area and elsewhere, the terms "upper" and "lower" may be used. For convenience, the direction along a line (normal) perpendicular to the road surface 40, which is also the height direction of the vehicle 1, is taken as the vertical direction. When the road surface is horizontal, vertically downward is "lower" and the opposite direction is "upper." The same applies to the descriptions of the other drawings.
 As shown in FIG. 1(A), the HUD device 100 of this embodiment is mounted inside the dashboard 41 of the vehicle (own vehicle) 1. Ahead of the vehicle 1, in a display area PS1 that includes a region tilted relative to the road surface 40, the HUD device 100 can display both an upright image intended to be viewed as standing upright (an image for which depth is not particularly important, also called a standing image, composed of, for example, numbers and characters) and a depth image for which depth is an essential element (also called a tilted image; for example, a navigation arrow extending along the road surface 40).
 The HUD device 100 has a display unit (sometimes called an image display unit; specifically, for example, a screen) 160 having a display surface 164 on which an image is displayed; an optical system 120 including an optical member that projects the display light K carrying the image onto the windshield, which is the projection-receiving member (reflective translucent member) 2; and a light projecting unit (image projection unit) 150. The optical system 120 includes a curved mirror (also called a concave mirror or magnifying reflector) 170 having a reflective surface 179. The reflective surface 179 of the curved mirror 170 need not have a uniform radius of curvature; it can, for example, be shaped as a set of partial regions with different radii of curvature, and a free-form surface design technique can be used (it may be a free-form surface itself). A free-form surface is a surface that cannot be expressed by a simple formula: several control points and curvatures are set in space, and the surface is expressed by interpolating between those points with higher-order equations. The shape of the reflective surface 179 has a considerable influence on the shape of the display area PS1 and its relationship to the road surface.
 The shape of the display area PS1 is influenced not only by the shape of the reflective surface 179 of the curved mirror (concave mirror) 170, but also by the curved shape of the windshield (reflective translucent member 2) and by the shapes of other optical members mounted in the optical system 120 (for example, a correction mirror). It is also affected by the shape of the display surface 164 of the display unit 160 (generally flat, though all or part of it may be non-planar) and by the placement of the display surface 164 relative to the reflective surface 179. That said, the curved mirror (concave mirror) 170 is a magnifying reflector, and its influence on the shape of the display area (virtual image display surface) is considerable: if the shape of its reflective surface 179 differs, the shape of the display area (virtual image display surface) PS1 actually changes.
 A display area PS1 that extends continuously from the near end U1 to the far end U3 can be formed by placing the display surface 164 of the display unit 160 obliquely, at a crossing angle of less than 90 degrees, relative to the optical axis of the optical system (the principal optical axis corresponding to the chief ray).
 The curved shape of the display area PS1 may also be adjusted by tuning the optical characteristics of all or part of the optical system, by adjusting the placement of the optical member relative to the display surface 164, by adjusting the shape of the display surface 164, or by a combination of these. In this way the shape of the virtual image display surface can be adjusted in many ways, making it possible to realize a display area PS1 having a first region Z1 and a second region Z2.
 In other words, the display area PS1 is divided into a first region Z1 that can display both depth images (tilted images) and upright images (standing images), and a second region suited to displaying depth images (in other words, used exclusively for displaying depth images).
 This point is explained concretely below. As shown at the left and lower left of FIG. 1(B), the manner and degree of tilt of the display surface 164 of the display unit 160 adjust the overall manner and degree of tilt of the display area (including the virtual image display surface) PS1. In the example of FIG. 1(B), the distortion of the display area (virtual image display surface) caused by the curvature of the windshield (reflective translucent member 2) is corrected by the curved shape of the reflective surface 179 of the curved mirror (concave mirror or the like) 170, so that a flat display area (virtual image display surface) PS1 results.
 As shown at the right and lower left of FIG. 1(B), by adjusting the positional relationship between the optical member (here the curved mirror (concave mirror or the like) 170) and the display surface 164, that is, by rotating the display surface 164, for example, to change its relationship to the optical member (curved mirror 170), the degree to which the tilted display area (virtual image display surface) PS1 lifts away from the road surface 40 can be adjusted.
 As shown in FIG. 1(C), by adjusting the shape of the reflective surface of the curved mirror (concave mirror or the like) 170, which is an optical member (or by adjusting the shape of the display surface 164 of the display unit 160), the virtual image display distance near the end of the display area PS1 closer to the vehicle 1 (the near end U1) is changed; the vicinity of the near end U1 is thereby bent toward the road surface so that it stands up relative to the road surface (in other words, it is made upright), yielding a display area PS1 having a tilted portion.
 As shown at the top of FIG. 1(C), the reflective surface 179 of the curved mirror 170 can be divided into three portions: Near (near-field display portion), Center (middle display portion), and Far (far-field display portion).
 Here, Near is the portion that generates the display light E1 (shown by dash-dot lines in FIGS. 4(A) and (B)) corresponding to the near end U1 of the display area PS1; Center is the portion that generates the display light E2 (shown by broken lines) corresponding to the middle portion (central portion) U2 of the display area PS1; and Far is the portion that generates the display light E3 (shown by solid lines) corresponding to the far end U3 of the display area PS1.
 In FIG. 1(C), the Center and Far portions are the same as in the curved mirror (concave mirror or the like) 170 used to generate the flat display area PS1 shown in FIG. 1(B). In FIG. 1(C), however, the curvature of the Near portion is set smaller than in FIG. 1(B), which increases the magnification associated with the Near portion.
 The magnification of the HUD device 100 (call it c) can be expressed as c = b/a, where a is the distance from the display surface 164 of the display unit 160 to the windshield 2, and b is the distance over which the light reflected by the windshield (reflective translucent member 2) travels via the viewpoint A until it forms an image at the imaging point. When the curvature of the Near portion becomes smaller, a becomes smaller, the magnification rises, and the image is formed at a position farther from the vehicle 1. That is, the virtual image display distance is larger in the case of FIG. 1(C) than in the case of FIG. 1(B).
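The magnification relation above can be illustrated numerically (the distances below are invented example values, not taken from the patent): with c = b/a, shrinking a while b is held fixed raises the magnification, consistent with the near-end portion being imaged farther away.

```python
def magnification(a_m, b_m):
    # c = b / a: a is the optical path from the display surface to the windshield,
    # b the path from the windshield via the viewpoint to the imaging point.
    return b_m / a_m

c_baseline = magnification(0.25, 2.5)  # baseline Near curvature
c_near = magnification(0.20, 2.5)      # smaller Near curvature -> smaller a
print(c_baseline, c_near)              # the second magnification is larger
```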
 The near end U1 of the display area PS1 is therefore pushed away from the vehicle 1, and the near end U1 curves toward the road surface 40 as if bowing, forming the first region Z1. A display area PS1 having both the first region Z1 and the second region Z2 is thus obtained.
 Next, refer to FIG. 2. FIG. 2(A) shows the main configuration of a HUD device mounted on a vehicle and a display example in the display area; FIG. 2(B) shows an example of a display area composed of a first region that can display both virtual images of depth images and virtual images of upright images, and a second region that displays virtual images of depth images. In FIG. 2, parts in common with FIG. 1 carry the same reference numerals.
 As shown in FIG. 2, the HUD device 100 has a display unit (for example, a light-transmitting screen) 160 having a display surface 164, a reflecting mirror 165, and a curved mirror 170 as an optical member that projects the display light (for example, a concave mirror having a reflective surface 179, which may be a free-form surface). The image displayed on the display unit 160 passes via the reflecting mirror 165 and the curved mirror 170 and is projected onto the projection-receiving region 5 of the windshield 2, the projection-receiving member. The HUD device 100 may be provided with a plurality of curved mirrors. In addition to the mirror (reflective optical element) of this embodiment, or in place of part (or all) of it, a configuration including a refractive optical element such as a lens, a diffractive optical element, or another functional optical element may be adopted.
 Part of the display light of the image is reflected by the windshield 2, enters the viewpoint (eye) A of the driver or other viewer located inside (or on) a preset eyebox EB (a volume, drawn flat for convenience), and forms an image ahead of the vehicle 1, so that various images (virtual images) are displayed on the virtual display area (virtual image display surface) PS1. In FIG. 2(A), display examples in the first region Z1 of the display area PS1 include a vehicle speed display SP, which is an upright image (upright virtual image), and a navigation arrow image (virtual image) AW', which is a depth display. The second region Z2 shows a navigation arrow image (virtual image) AW extending along the road surface 40 from the near side to the far side of the vehicle 1.
 As shown in FIG. 2(B), the angle (tilt angle) that the first region Z1 makes with the road surface 40 is θ1 (0 < θ1 < 90°), and the angle (tilt angle) that the second region Z2 makes with the road surface 40 is θ2 (0 < θ2 < θ1). Both the first and second regions Z1 and Z2 are tilted regions (or regions having at least a tilted portion).
 Next, refer to FIG. 3. FIG. 3(A) shows a tilted surface (tilted display area) placed ahead, on which an arrow figure is displayed as a depth image, viewed by a viewer with both eyes; FIG. 3(B) shows the image seen by the left eye; FIG. 3(C) shows the depth image perceived by fusing (combining) the left-eye and right-eye images; FIG. 3(D) shows the image seen by the right eye.
 In FIG. 3(A), a midpoint C0 is drawn at the central position between the left eye A1 and the right eye A2. The image obtained by fusing, in the viewer's brain, the image seen by the left eye A1 and the image seen by the right eye A2 can, for convenience, be called the image at the midpoint position C0.
 In FIG. 3(A), a navigation arrow image (virtual image) AW extending at a tilt is displayed in the second region Z2 of the display area PS1. When the left-eye and right-eye images A1 and A2 shown in FIGS. 3(B) and (D) (images with binocular parallax) are fused (combined), an image with a sense of depth (stereoscopic effect) like that in FIG. 3(C) is perceived. In other words, owing to the change in image shape caused by the positional offset between the upper and lower ends of the angle of view for the left and right eyes A1 and A2, the arrow image (virtual image) AW is naturally perceived as tilting away toward the back, which improves visibility and recognizability.
 Next, refer to FIG. 4. FIG. 4(A) is a diagram showing a state in which an inclined surface (an inclined display area) displaying a vehicle speed display serving as an upright image is arranged ahead and the viewer views it with both eyes; FIG. 4(B) shows the image seen by the left eye; FIG. 4(C) shows the upright image perceived by fusing (combining) the left-eye and right-eye images; and FIG. 4(D) shows the image seen by the right eye.
 In FIG. 4(A), an image (virtual image) of a vehicle speed display SP (the indication "120 km/h" shown in FIG. 4(B) and elsewhere) is presented in the first region Z1 of the display area PS1. The vehicle speed display SP is displayed in the inclined first region Z1, yet as an upright image (upright virtual image) intended to be perceived as standing upright.
 If the inclination angle of the first region Z1 with respect to the road surface 40 is not too small, that is, if the region stands up to some extent, the viewer's recognizability is not impaired and the display can be viewed as an upright image (its information can be read as an upright image). In this case, for example, fusing (combining) the left-eye and right-eye images (images with binocular parallax) shown in FIGS. 4(B) and 4(D) yields the vehicle speed display SP perceived as a standing image, as in FIG. 4(C).
 On the other hand, when the inclination angle of the first region Z1 with respect to the road surface 40 is small and fusion (combination) of the left-eye and right-eye images fails (when the images are not fused), the resulting appearance differs from person to person: for example, the whole display may appear tilted, or the image may appear with the shape change of one eye only. In any case, visibility decreases, recognition time increases, and the display is not received favorably in subjective (psychological) terms either.
 Next, refer to FIG. 5. FIG. 5(A) is a diagram showing a state in which a viewer views, with both eyes, a display area erected substantially perpendicular to the road surface; FIG. 5(B) shows the convergence angles of both eyes with respect to a first right end point on the upper end (upper side) of the display area in FIG. 5(A) and a second right end point on the lower end (lower side) corresponding to the first right end point; and FIG. 5(C) shows the image obtained by fusing (combining) the left-eye and right-eye images.
 In the following description, the display area is assumed to have a contour of a predetermined shape (here, a quadrangle); the side of the display area nearer the ground (or a surface equivalent to the ground, such as a road surface) is called the lower end (lower side), and the opposite side (the side away from the ground) is called the upper end (upper side). The term "quadrangle" describing the shape of the display area is to be interpreted broadly, including, for example, rectangles, squares, trapezoids, and parallelograms.
 FIG. 5(A) shows a display area (here, the first region Z1) standing substantially perpendicular to the road surface 40 in front of the viewer. The viewer views the image (virtual image) displayed in the first region Z1 with both eyes A1 and A2. As shown in FIG. 5(C), the displayed image (virtual image) is the vehicle speed display SP described above.
 In FIG. 5(A), the lower end of the angle of view in the display area (first region Z1) (hereinafter sometimes simply called the lower end or lower side) is labeled PL, and the upper end of the angle of view (sometimes simply called the upper end or upper side) is labeled PU.
 FIG. 5(B) shows the convergence angles produced by the viewer's binocular parallax in a plan view of FIG. 5(A) seen from above (looking in the -Y direction indicated by the arrow in the figure).
 In FIG. 5(B), a pair of mutually corresponding points is set on the lower end (lower side) PL and the upper end (upper side) PU. These points can be placed at arbitrary positions, but preferably at, for example, the right end point or the left end point of each end (each side). In FIG. 5(B), a right end point R1 on the lower end (lower side) and a right end point R2 on the upper end (upper side) are set. The point R1 is called the first point, and the point R2 the second point.
 The convergence angle (the angle formed by the visual axes indicating the line-of-sight directions of the eyes A1 and A2) when the first point R1 is viewed from the left and right eyes A1 and A2 is defined as the first convergence angle θL, and the convergence angle when the second point R2 is viewed is defined as the second convergence angle θU. The difference between them (θL minus θU) is defined as the "convergence angle difference". In other words, this convergence angle difference can be called the "convergence angle difference between the upper end (or upper side) and the lower end (or lower side) of the display area".
 In the example of FIG. 5, the display area (first region Z1) stands substantially at right angles to the road surface 40, so in a plan view of the quadrangular contour of the display area seen from above, the upper end (upper side) PU and the lower end (lower side) PL of the quadrangle coincide, and the first point R1 and the second point R2 coincide. If the length of the vertical side of the quadrangle (the length of the line segment representing the first region Z1 in FIG. 5(A); in other words, the height of the first region with respect to the road surface 40) is small, the difference in distance between each of the first and second points R1, R2 and the left and right eyes A1, A2 caused by the difference in their heights (the variation in distance) can be neglected. In that case, the first and second convergence angles θL, θU for the first and second points R1, R2 take the same (substantially the same) value, and the convergence angle difference is zero (substantially zero).
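The convergence angles defined above can be sketched numerically. The following illustrative Python snippet is not part of the disclosure; it assumes an interpupillary distance of 65 mm and points on the viewer's midline, and uses only the plan-view distances as in FIG. 5(B):

```python
import math

def convergence_angle_deg(distance_m, ipd_m=0.065):
    """Angle between the two visual axes for a midline point at the given plan-view distance."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

def convergence_angle_difference_deg(d_lower_m, d_upper_m, ipd_m=0.065):
    """thetaL - thetaU for the lower and upper ends of the display area."""
    return convergence_angle_deg(d_lower_m, ipd_m) - convergence_angle_deg(d_upper_m, ipd_m)

# Upright display area: both ends at (substantially) the same plan-view distance.
print(convergence_angle_difference_deg(3.0, 3.0))  # 0.0
# Tilted display area: the lower end is nearer than the upper end, so the difference is positive.
print(convergence_angle_difference_deg(2.8, 3.2))
```

With the lower end nearer the viewer, θL exceeds θU, so the difference comes out positive, matching the sign convention of the text.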
 As shown in FIG. 5(C), the image obtained by fusing (combining) the left-eye and right-eye images (the image at the central position C0, abbreviated as the C0 image) is perceived as a standing image, and there is no problem with the visibility of the vehicle speed display SP.
 Next, refer to FIG. 6. FIG. 6(A) is a diagram showing a state in which a viewer views, with both eyes, a display area inclined by approximately 45° with respect to the road surface; FIG. 6(B) shows the convergence angles of both eyes with respect to a first right end point on the upper end (upper side) of the display area in FIG. 6(A) and a second right end point on the lower end (lower side) corresponding to the first right end point; FIG. 6(C) shows the image seen by the left eye; FIG. 6(D) shows the upright image obtained by fusing (combining) the left-eye and right-eye images; and FIG. 6(E) shows the image seen by the right eye.
 As shown in FIG. 6(A), the display area (first region Z1) is arranged at an inclination of approximately 45° with respect to the road surface 40, and the lower end (lower side) of the quadrangle outlining the first region Z1 moves toward the viewer. In FIG. 6(B), the lower end (lower side) PL is closer to the viewer than the upper end (upper side) PU. As a result, the first convergence angle θL for the first point R1 and the second convergence angle θU for the second point R2 differ in value; in other words, the first convergence angle θL becomes larger than the second convergence angle θU. The convergence angle difference is therefore α (where α > 0).
 Refer to FIGS. 6(C) and 6(E). Since the display area Z1 tilts toward the far side as it goes up, this in itself increases the burden on the left eye A1 and the right eye A2. Furthermore, because of the binocular parallax, the quadrangle is skewed to the left in the image seen by the left eye A1 and skewed to the right in the image seen by the right eye A2, with the result that each image is perceived as a roughly parallelogram shape. This too makes the upright image (the vehicle speed display SP) harder to view.
 However, the human eye can recognize an upright image while compensating for a certain amount of depth and left-right distortion, and in the example of FIG. 6 the limit of this compensating (recognition) capability is not exceeded. Therefore, as shown in FIG. 6(D), the vehicle speed display SP can still be viewed correctly as an upright image.
 Next, refer to FIG. 7. FIG. 7(A) is a diagram showing a state in which a viewer views, with both eyes, a display area inclined by approximately 30° with respect to the road surface; FIG. 7(B) shows the convergence angles of both eyes with respect to a first right end point on the upper end (upper side) of the display area in FIG. 7(A) and a second right end point on the lower end (lower side) corresponding to the first right end point; FIG. 7(C) shows the image seen by the left eye; FIG. 7(D) shows the hard-to-view percept, with double vision or the like, that results when the left-eye and right-eye images are fused (combined); and FIG. 7(E) shows the image seen by the right eye.
 As shown in FIG. 7(A), the display area (first region Z1) is inclined even further with respect to the road surface 40. As shown in FIG. 7(B), the lower end (lower side) PL of the quadrangle moves still closer to the viewer. As a result, the first convergence angle θL increases further, the gap to the second convergence angle θU widens, and the convergence angle difference (θL - θU) becomes β (where β satisfies α < β).
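The way the convergence angle difference grows from zero through α to β as the display area leans down from vertical (FIGS. 5 to 7) can be illustrated with a rough sketch. All numbers here (3 m viewing distance, 0.3 m slant height, 65 mm interpupillary distance) are assumptions, not values from the disclosure, and the eye-height offset is neglected in favor of the plan-view distances of FIGS. 5(B) to 7(B):

```python
import math

def conv_deg(d, ipd=0.065):
    # Convergence angle (degrees) for a midline point at plan-view distance d.
    return math.degrees(2 * math.atan(ipd / (2 * d)))

def diff_for_tilt(tilt_deg, d_lower=3.0, slant_m=0.3, ipd=0.065):
    # In plan view, the upper end recedes by slant_m * cos(tilt) relative to the lower end.
    d_upper = d_lower + slant_m * math.cos(math.radians(tilt_deg))
    return conv_deg(d_lower, ipd) - conv_deg(d_upper, ipd)

for tilt in (90, 45, 30):
    print(f"tilt {tilt:2d} deg -> convergence angle difference {diff_for_tilt(tilt):.3f} deg")
```

At 90° the upper and lower ends coincide in plan view and the difference is zero; at 45° it is some positive α; at 30° it is a larger β, in line with the ordering described in the text.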
 In the example of FIG. 7, the limit of the human capability to compensate for depth and left-right distortion is exceeded, and the images cannot be fused correctly. Therefore, as shown in FIGS. 7(C) to 7(E), the vehicle speed display SP, which is an upright image, cannot be viewed correctly. Since it is difficult to depict this state concretely, it is simply labeled SP in the figures.
 As described above with reference to FIGS. 5 to 7, the "convergence angle difference (θL - θU) between the upper end (upper side) and the lower end (lower side) of the display area" can serve as an index of the degree of inclination of the display area with respect to the ground (or its equivalent surface).
 Furthermore, since the convergence angles θL and θU vary with the distance from the viewer's eyes A1 and A2, they inherently contain distance information. The convergence angle difference (θL - θU) is therefore a single integrated index (threshold) carrying information, distance included, on the degree of inclination of the display area (or virtual image display surface, etc.) with respect to the ground (or its equivalent surface). Unlike the conventional approach, there is no need to specify the inclination under a distance precondition, such as "an inclination of so many degrees at a distance of so many meters". Introducing this index into the design of a HUD device therefore makes setting the inclination of the display area more efficient (easier).
 The visibility of an upright image varies from person to person and cannot be stated categorically, but it is possible to determine objectively whether a proper upright image is being viewed, based on at least one of the visibility of the displayed image (first factor), the time required for viewing (second factor), and psychological factors such as a sense of strangeness or discomfort (third factor). Using the above index in that determination yields a threshold that can be employed for the judgment.
 (Experimental results)
 The inventor, with the cooperation of several participants, tested whether upright images could be judged correctly using the above convergence angle difference as an index, based on the first to third factors above. Here, when all three factors registered NG, the proper upright image was judged difficult to view; when one or two factors registered NG, the proper upright image was judged viewable.
 As a result, it was found that viewing became difficult when the convergence angle difference between the upper end and the lower end of the image to be viewed upright was 0.182°. Rounding to the order of 0.1, 0.182 becomes 0.2, so 0.2° could be extracted as an example of a preferable threshold. Accordingly, keeping the convergence angle difference below 0.2° makes viewing of the upright image possible.
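As a minimal sketch of how the extracted threshold might be applied in practice, the 0.2° figure below comes from the passage above, while the function itself is an illustration rather than part of the disclosure:

```python
UPRIGHT_THRESHOLD_DEG = 0.2  # preferable threshold extracted from the experiment above

def upright_image_viewable(theta_lower_deg, theta_upper_deg,
                           threshold_deg=UPRIGHT_THRESHOLD_DEG):
    """True if the convergence angle difference (thetaL - thetaU) permits proper upright viewing."""
    return (theta_lower_deg - theta_upper_deg) < threshold_deg
```

For example, a nearly vertical display area with θL = 1.24° and θU = 1.23° passes the check, while a strongly tilted one with a 0.3° difference does not.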
 Specifically, this "predetermined threshold" can be used, for example, as a "proper-viewability determination threshold", a preferable example of which is the above 0.2°.
 Exploiting this threshold (index) makes it more efficient, or easier, to design a display that can, for example, present content images intended to be viewed upright (upright images) while suppressing loss of visibility. This new index (the convergence angle difference, or the tilt distortion angle it causes; see the angle θd in FIG. 6(C)) can also be used for calibration of the HUD device, initialization of the HUD device, simulation of HUD device functions, and the like, yielding benefits such as making each of those processes more efficient.
 As shown earlier in FIG. 2(A), when the display area is divided into a first region Z1 that can display both depth images and upright images and a second region Z2 suited to displaying depth images, the convergence angle difference between the upper end and the lower end of the display area in the second region Z2 is preferably set to at least the above predetermined threshold (preferably 0.2°).
 As described above, the threshold is set on the basis of upright-image visibility and the like. At or above the threshold, the visibility of upright images deteriorates and the region is unsuited to displaying them; conversely, it is well suited to displaying depth images rendered with inclination (including images that appear to float in the air and extend substantially parallel to the road surface, or that give the visual impression of being superimposed on the road surface). For the images (virtual images) displayed in the second region Z2, therefore, the convergence angle difference is set to at least the threshold. In this way, when, for example, an upright image is displayed in the first region Z1 and a depth image in the second region Z2, appropriate visibility can be ensured for each image.
 When the angle of view in the vertical direction (or height direction) as seen by the viewer is called the vertical angle of view, the restriction of the convergence angle difference by the predetermined threshold may be applied to upright-image content with a vertical angle of view of 0.75° or less.
 In other words, considering that as the size of displayed content grows, fusing the left-eye image and the right-eye image in the brain becomes more difficult even at the same convergence angle difference, the above threshold is applied to small content with a vertical angle of view of 0.75° or less, setting the convergence angle difference between the upper end and the lower end below the threshold.
 It has also been confirmed that upright content larger than this size becomes increasingly bothersome in the HUD device 100 from the standpoint of obstructing the field of view, and its practical feasibility at present is not high. Accordingly, limiting application of the above threshold to display content of a predetermined size or less is considered to pose no particular problem.
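For intuition about the 0.75° vertical angle-of-view bound, the angle subtended by content of a given height can be sketched as follows. The 3 m virtual-image distance and the content height are assumed values for illustration only:

```python
import math

def vertical_angle_of_view_deg(content_height_m, viewing_distance_m):
    # Angle subtended vertically at the eye by content centered on the line of sight.
    return math.degrees(2 * math.atan(content_height_m / (2 * viewing_distance_m)))

# At an assumed virtual-image distance of 3 m, content about 3.9 cm tall
# subtends roughly 0.75 degrees vertically.
print(vertical_angle_of_view_deg(0.039, 3.0))
```

Larger content at the same distance exceeds the bound, so under the rule above the threshold restriction would not be applied to it.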
 Next, refer to FIG. 8. FIGS. 8(A) and 8(B) are flowcharts showing examples of a design method for a HUD device (oblique image plane HUD device). In FIG. 8(A), each part of the oblique image plane HUD device is designed so that the convergence angle difference between the upper end and the lower end of the display area for information images to be viewed upright (upright images), or the tilt distortion angle it causes, is below a predetermined threshold (preferably below 0.2°) determined on the basis of at least one of image visibility, the time required for viewing, and psychological factors such as strangeness or discomfort (step S1).
 In FIG. 8(B), the display area is divided into a first region that can display both information images viewed with inclination (depth images) and information images viewed upright (upright images), and a second region that displays information images viewed with inclination (depth images) (step S2). Then, preferably for content with a vertical angle of view of 0.75° or less, the design makes the convergence angle difference between the upper end and the lower end in the first region below the predetermined threshold (preferably below 0.2°), while the convergence angle difference between the upper end and the lower end of the second region is designed to be at least the predetermined threshold (step S3).
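The two-region rule of step S3 can be summarized as a simple consistency check. This is an illustrative sketch under the stated rule, not the actual design procedure; the 0.2° threshold and the 0.75° content bound are taken from the passage above:

```python
def check_two_region_design(z1_diff_deg, z2_diff_deg, content_vertical_angle_deg,
                            threshold_deg=0.2, small_content_bound_deg=0.75):
    """Return a list of violations of the FIG. 8(B)-style design rule (illustrative)."""
    issues = []
    # First region (upright images): the bound applies to small content (<= 0.75 deg vertical).
    if (content_vertical_angle_deg <= small_content_bound_deg
            and z1_diff_deg >= threshold_deg):
        issues.append("Z1: convergence angle difference must be below the threshold")
    # Second region (depth images): the difference should be at or above the threshold.
    if z2_diff_deg < threshold_deg:
        issues.append("Z2: convergence angle difference should be at or above the threshold")
    return issues
```

A design with a 0.1° difference in Z1 and a 0.3° difference in Z2 returns no issues; reversing the two values flags both regions.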
 Next, refer to FIG. 9. FIG. 9 shows a configuration example of the display control unit (control unit) in the HUD device. The upper part of FIG. 9 is substantially the same as FIG. 1(A), except that FIG. 9 additionally provides a line-of-sight detection camera 188 and a viewpoint position detection unit 192.
 The display control unit (control unit) 190 has an input/output (I/O) interface 193 and an image processing unit 194. The image processing unit 194 has an image generation control unit 195, a ROM 198 (holding an upright image table 199 and a depth image table 200), a VRAM 201 (holding warping parameters 196 and a buffer 197 for storing post-warping data), and an image generation unit (image rendering unit) 202.
 The image generation control unit 195 can control the display positions of content, for example placing the vehicle speed display SP and the arrow display AW' in the first region Z1 and the arrow display AW in the second region Z2 in the example shown earlier in FIG. 2(A). The display control unit (control unit) 190 can also use the above threshold, for example during calibration or initialization of the HUD device 100, to perform control such as setting the display area to an appropriate position.
 Next, refer to FIG. 10. FIGS. 10(A) and 10(B) show other examples of the inclined display area. The device configuration itself is the same as in FIG. 1(A).
 The cross-sectional shape of the display area PS1 as seen from the width direction (left-right direction, X direction) of the vehicle 1 is not limited to the shape convex toward the driver shown earlier in FIG. 1 and elsewhere. As shown in FIG. 10(A), the display area PS1 may be concave toward the driver, and as shown in FIG. 10(B), the display area PS1 need not be curved at all. These are merely examples, and display areas with various cross-sectional shapes are conceivable.
 An experiment confirmed that the configuration of this embodiment has the effect of improving image visibility. In this experimental example, a sensory evaluation was carried out in which subjects viewed virtual images displayed by several types of head-up display device and answered whether they felt discomfort. The head-up display devices each displayed upright images with different convergence angle differences between the upper end and the lower end. In every one of the devices, the upright image had a shape perceived as a rectangle in the vertical and horizontal directions from the subject's observation position, and the vertical angle of view was set to 0.75°.
 FIG. 11 is a graph of the experimental results, showing the percentage of respondents who reported no discomfort (vertical axis) against each convergence angle difference (horizontal axis). It can be seen that, for viewing an upright image, the percentage reporting no discomfort increases as the convergence angle difference between the upper end and the lower end decreases.
 When the convergence angle difference was 0.22 degrees, the percentage reporting no discomfort was 0%, meaning every subject reported discomfort. The percentages reporting no discomfort at the other convergence angle differences were: 20% at 0.20 degrees, 40% at 0.17 degrees, 40% at 0.15 degrees, 60% at 0.14 degrees, 80% at 0.12 degrees, 100% at 0.09 degrees, and 100% at 0.06 degrees.
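The reported points can be tabulated and read off by piecewise-linear interpolation. Only the listed points come from the experiment; the interpolation between them is an assumption for illustration:

```python
# (convergence angle difference in degrees, % reporting no discomfort), per FIG. 11
RESULTS = [(0.06, 100), (0.09, 100), (0.12, 80), (0.14, 60),
           (0.15, 40), (0.17, 40), (0.20, 20), (0.22, 0)]

def pct_no_discomfort(diff_deg):
    """Piecewise-linear read-out of the reported experimental points (illustrative)."""
    if diff_deg <= RESULTS[0][0]:
        return RESULTS[0][1]
    if diff_deg >= RESULTS[-1][0]:
        return RESULTS[-1][1]
    for (x0, y0), (x1, y1) in zip(RESULTS, RESULTS[1:]):
        if x0 <= diff_deg <= x1:
            return y0 + (y1 - y0) * (diff_deg - x0) / (x1 - x0)

print(pct_no_discomfort(0.14))  # 60.0
```

The tabulation makes the trend in the text easy to verify: the share of comfortable viewers falls monotonically as the difference grows from 0.09 to 0.22 degrees.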
 In the above results, every subject reported discomfort when the convergence angle difference was 0.22 degrees or more, while the number reporting discomfort fell off once the difference was 0.20 degrees or less. For viewing an upright image, therefore, the convergence angle difference between the upper end and the lower end of the upright image is considered preferably 0.20 degrees or less.
 It was also confirmed that keeping the convergence angle difference at 0.14 degrees or less left more than half of the subjects free of discomfort. That is, keeping the difference at 0.14 degrees or less sufficiently achieves the effect of avoiding discomfort when viewing an upright image, and can be said to be more preferable.
 It was further confirmed that keeping the convergence angle difference at 0.09 degrees or less left every subject free of discomfort. That is, keeping the difference at 0.09 degrees or less achieves the discomfort-avoiding effect in full when viewing an upright image, and can be said to be still more preferable.
 The present invention is widely applicable to parallax-type HUD devices that deliver images with parallax to the eyes, to light-ray reproduction HUD devices using lenticular lenses or the like, and so on.
 In this specification, the term "vehicle" can be interpreted broadly as any conveyance. Terms relating to navigation (for example, navigation arrows) are likewise to be interpreted broadly, taking into account, for example, the notion of navigation information in the broad sense useful for vehicle operation, and may also include road signs and the like. An upright image can also be called a facing image that the viewer sees head-on, and is to be interpreted broadly without being bound to the name. HUD devices also include those used as simulators (for example, aircraft simulators or simulators serving as game devices).
 本発明は、上述の例示的な実施形態に限定されず、また、当業者は、上述の例示的な実施形態を特許請求の範囲に含まれる範囲まで、容易に変更することができるであろう。 The present invention is not limited to the above-mentioned exemplary embodiments, and those skilled in the art will be able to easily modify the above-mentioned exemplary embodiments to the extent included in the claims. ..
1 ... vehicle (own vehicle); 2 ... projection-target member (reflective translucent member, windshield, etc.); 5 ... projection area; 40 ... road surface; 100 ... HUD device; 120 ... optical system including optical members; 150 ... light projection unit (image projection unit); 160 ... display unit (for example, a liquid crystal display device or screen); 164 ... display surface; 170 ... curved mirror (concave mirror, etc.); 179 ... reflecting surface; 188 ... line-of-sight detection camera; 190 ... display control unit (control unit); 192 ... viewpoint position detection unit; 194 ... image processing unit; 195 ... image generation control unit; 196 ... warping parameters; 197 ... post-warping data buffer; 198 ... ROM; 199 ... upright-image table; 200 ... depth-image table; 201 ... VRAM (image-processing memory); 202 ... image generation unit (image rendering unit); EB ... eyebox; PS1 ... display area (virtual-image display surface); Z1 ... first display area capable of displaying both upright images and depth images; Z2 ... second display area that displays depth images.

Claims (4)

  1.  A head-up display device comprising:
     an image display unit that displays an image;
     an optical system that projects light of the image displayed by the image display unit toward a projection-target member, thereby causing a viewer to visually recognize a virtual image of the image within a virtual display area in the real space in front of the viewer; and
     a control unit that controls display of the image on the image display unit,
     wherein, when the direction toward the front of the viewer in the real space is defined as a forward direction,
     the direction orthogonal to the forward direction and along a line segment connecting the left and right eyes of the viewer is defined as a left-right direction, and
     the direction along a line segment orthogonal to the forward direction and the left-right direction is defined as a vertical direction or height direction, with the direction away from the ground or a surface corresponding to the ground in the real space defined as upward and the direction approaching it defined as downward,
     the control unit performs control to display an upright image (an image to be visually recognized as standing upright) within the display area, the display area being a flat or curved inclined surface that, with respect to the ground or the surface corresponding to the ground in the real space, slopes from a side near to and below the viewer toward a side far from and above the viewer,
     the upright image is displayed in a display area having a quadrangular outline as seen from the viewer, and
     the convergence angle difference between the upper end and the lower end of the display area is set to be less than a predetermined threshold determined based on at least one of the visibility of the image, the time required for visual recognition, and psychological factors such as a sense of incongruity or discomfort.
  2.  The head-up display device according to claim 1, wherein
     the display area is divided into a first area capable of displaying both a virtual image of a depth image (an image to be visually recognized as inclined) and a virtual image of the upright image, and a second area that displays a virtual image of the depth image,
     the convergence angle difference between the upper end and the lower end of the display area in the first area is set to be less than the predetermined threshold, and
     the convergence angle difference between the upper end and the lower end of the display area in the second area is set to be equal to or greater than the predetermined threshold.
  3.  The head-up display device according to claim 1 or 2, wherein
     the predetermined threshold functions as a normal-visibility determination threshold for determining whether the upright image can be viewed normally, and
     the convergence angle difference serving as the predetermined threshold is set to 0.2°.
  4.  The head-up display device according to any one of claims 1 to 3, wherein,
     when the angle of view in the vertical direction (or height direction) as seen from the viewer is referred to as a vertical angle of view,
     the limitation of the convergence angle difference by the predetermined threshold is applied to upright-image content having a vertical angle of view of 0.75° or less.
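Claims 2 to 4 together suggest a simple decision rule: small upright-image content (vertical angle of view of 0.75° or less, claim 4) may be displayed as an upright image only in a region whose top-to-bottom convergence angle difference is below the 0.2° threshold of claim 3. The following is a minimal sketch of that rule, not an implementation from the specification; the 62 mm interpupillary distance and the function names are assumptions.

```python
import math

IPD_M = 0.062  # assumed interpupillary distance in meters

def convergence_angle_deg(distance_m: float) -> float:
    """Convergence angle subtended at the two eyes by a point at the given distance."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

def upright_image_permitted(d_bottom_m: float, d_top_m: float,
                            vertical_fov_deg: float,
                            threshold_deg: float = 0.2) -> bool:
    """Apply the claim-4 scope (vertical angle of view <= 0.75 deg) and the
    claim-1/claim-3 convergence-angle-difference threshold."""
    if vertical_fov_deg > 0.75:
        # Per claim 4, the limitation applies only to small upright-image content.
        return True
    delta = convergence_angle_deg(d_bottom_m) - convergence_angle_deg(d_top_m)
    return delta < threshold_deg

# A first display area (claim 2) with a small near/far spread passes the check
# and may carry upright images; a second area with a large spread is reserved
# for depth images.
```

Under these assumptions, a region spanning 6 m to 7 m passes the check (difference of roughly 0.085°), while a region spanning 5 m to 7 m does not (roughly 0.203°).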
PCT/JP2021/027463 2020-07-29 2021-07-26 Head-up display device WO2022024964A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180058974.3A CN116157290A (en) 2020-07-29 2021-07-26 Head-up display device
JP2022540275A JPWO2022024964A1 (en) 2020-07-29 2021-07-26
DE112021004005.7T DE112021004005T5 (en) 2020-07-29 2021-07-26 head-up display device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020128459 2020-07-29
JP2020-128459 2020-07-29

Publications (1)

Publication Number Publication Date
WO2022024964A1 true WO2022024964A1 (en) 2022-02-03

Family

ID=80035672

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/027463 WO2022024964A1 (en) 2020-07-29 2021-07-26 Head-up display device

Country Status (4)

Country Link
JP (1) JPWO2022024964A1 (en)
CN (1) CN116157290A (en)
DE (1) DE112021004005T5 (en)
WO (1) WO2022024964A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011082985A1 (en) * 2011-09-19 2013-03-21 Bayerische Motoren Werke Aktiengesellschaft Projection device for use in head-up display of vehicle, has projection screen comprising partial areas, where image produced by projector is projected on screen in projection plane such that virtual image is produced on screen
JP2018120135A (en) * 2017-01-26 2018-08-02 日本精機株式会社 Head-up display
WO2020009219A1 (en) * 2018-07-05 2020-01-09 日本精機株式会社 Head-up display device
JP2020064235A (en) * 2018-10-19 2020-04-23 コニカミノルタ株式会社 Display device


Also Published As

Publication number Publication date
CN116157290A (en) 2023-05-23
DE112021004005T5 (en) 2023-06-01
JPWO2022024964A1 (en) 2022-02-03

Similar Documents

Publication Publication Date Title
KR101127534B1 (en) Method and device for generating retinal images using the stigmatism of the two foci of a substantially elliptical sight
US9785306B2 (en) Apparatus and method for designing display for user interaction
JP4155343B2 (en) An optical system for guiding light from two scenes to the viewer's eye alternatively or simultaneously
JP5008556B2 (en) Navigation navigation display method and apparatus using head-up display
JPH06270716A (en) Head-up display device for vehicle
JPH09322199A (en) Stereoscopic video display device
JPH01503572A (en) 3D display system
CN112204453B (en) Image projection system, image projection device, image display light diffraction optical element, instrument, and image projection method
JP7358909B2 (en) Stereoscopic display device and head-up display device
WO2022024964A1 (en) Head-up display device
JP7126115B2 (en) DISPLAY SYSTEM, MOVING OBJECT AND DESIGN METHOD
JPWO2020032095A1 (en) Head-up display
JP6105531B2 (en) Projection display device for vehicle
WO2016150166A1 (en) Amplification and display apparatus forming virtual image
WO2021241718A1 (en) Head-up display device
CN114127614B (en) Head-up display device
JP7354846B2 (en) heads up display device
CN217655373U (en) Head-up display device and moving object
JP4102410B2 (en) 3D image display device
CN112731664A (en) Vehicle-mounted augmented reality head-up display system and display method
JP6516161B2 (en) Floating image display device and display method
JP7372618B2 (en) In-vehicle display device
CN215181216U (en) Head-up display system with continuously-changed image distance and vehicle
WO2021261438A1 (en) Head-up display device
WO2019151199A1 (en) Display system, moving body, and measurement method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21848675

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022540275

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 21848675

Country of ref document: EP

Kind code of ref document: A1