CN116157290A - Head-up display device

Publication number: CN116157290A
Application number: CN202180058974.3A
Authority: CN (China)
Prior art keywords: image, display, display area, upright, convergence angle
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 舛屋勇希
Current Assignee: Nippon Seiki Co Ltd
Original Assignee: Nippon Seiki Co Ltd
Application filed by Nippon Seiki Co Ltd

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 Arrangement of adaptations of instruments
    • B60K35/23
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/011 Head-up displays characterised by optical features comprising device for correcting geometrical aberrations, distortion

Abstract

The invention provides a head-up display device. An object is to display an image whose content is recognized upright (an upright image) while suppressing a decrease in visibility in an oblique-image-plane HUD device. A control unit of the head-up display device performs control such that, in real space, an upright image, that is, an image recognized as standing upright, is displayed in a display area (Z1) on an inclined surface, a flat or curved surface that rises, relative to the ground surface (40), from a side near the observer and low down to a side farther away and higher up; the upright image is displayed in the display area (Z1), which presents a quadrangular outline as seen from the observer; and the convergence angle difference (θL-θU) between the upper end (PU) and the lower end (PL) of the display area (Z1) is set smaller than a predetermined threshold value (preferably a convergence angle difference of 0.2°) determined based on at least one of the recognizability of the image, the time required for recognition, and psychological factors such as discomfort.

Description

Head-up display device
Technical Field
The present invention relates to a head-up display (HUD) device or the like that displays a virtual image in front of a driver or other observer by projecting display light of an image onto a projection target member such as a vehicle windshield or a combiner.
Background
In order to improve the visibility of information for an observer (a driver or the like), a technique has been proposed in which the virtual image display surface of a HUD device is tilted in the depth direction (see, for example, patent document 1).
In a HUD device of this type (hereinafter sometimes called a tilted-image-plane HUD device or tilted-surface HUD device), visibility improves when an information image carrying depth information (a depth image such as an arrow or a map) is presented, while an information image without depth information (more broadly, one in which depth is not emphasized), such as text or numerals, can still be recognized upright, so convenience is high.
Here, "upright" is used in the following sense. Content expressed by letters, numerals, or the like cannot be read if it appears upside down, and when the inclination of the display surface is large a person cannot recognize it, or recognizes it only with difficulty.
Therefore, in order for a person to read information correctly, the HUD device needs to display content such as letters and numerals properly upright. An image (virtual image) displayed in this way is called an upright image (upright virtual image). Since an upright image is generally displayed facing the observer, it is sometimes also called a facing image.
Prior art literature
Patent literature
Patent document 1: Japanese Patent Laid-Open Publication No. 2018-120135
Disclosure of Invention
Problems to be solved by the invention
The present inventors studied the tilted-image-plane HUD device and found the following new problems. When an image (virtual image) can be displayed on an inclined surface, an image with a sense of depth can be presented as described above; and if the surface stands substantially perpendicular to the ground (or a surface corresponding to the ground: an equivalent surface), content such as characters or numerals, for which a sense of depth is not emphasized, can be displayed as an upright image.
However, while a large inclination of the surface relative to the ground (the equivalent surface) suits depth display, it brings problems: the visibility of the upright image decreases, making the image difficult for the observer to recognize or lengthening the time required until it can be recognized, or a subjective feeling of discomfort or incongruity arises.
Whether these problems appear determines whether the upright image can be recognized normally: if they do not appear, the upright image can be recognized accurately; if they do, it cannot be recognized normally, or is recognized only with difficulty.
Therefore, in order to carry out efficient design, calibration, initialization, or the like of the HUD device properly, it is preferable to set a reference index and use it to determine whether an upright image can be recognized normally.
For example, when an image (virtual image) is displayed several meters ahead of a predetermined point of the vehicle or of the viewpoint of an observer (the driver or the like), it is conceivable to impose a restriction on the inclination angle of the display area with respect to the ground (the equivalent surface), for example requiring it to be a certain number of degrees or more.
However, human visibility (recognition sensitivity) can depend on the distance to the image: sensitivity may rise when the image is displayed nearby and fall when it is displayed far away. With a setting method like the one above, the permissible slope (inclination angle) would change whenever the distance changes, so no uniform reference (threshold) is obtained and usability is poor.
It is therefore important to obtain an index that can serve as a uniformly applicable standard (threshold value), and to set a preferable value of that threshold appropriately. With such an index, the slope of the inclined surface (or display area) can be set so as not to impair reading of the upright image, which facilitates design and the like. The prior art, including patent document 1, does not examine this point and provides no suitable index.
An object of the present invention is to display an image whose content is recognized upright (an upright image) in an oblique-image-plane HUD device while suppressing a decrease in visibility.
Other objects of the present invention will become apparent to those skilled in the art upon consideration of the following examples, best modes, and figures.
Means for solving the problems
The following describes aspects of the present invention to aid understanding of its outline.
In a first aspect, a head-up display device includes:
an image display unit that displays an image;
an optical system that projects light of the image displayed on the image display unit toward a projection target member, and that causes a viewer to recognize a virtual image of the image in a virtual display region in a real space in front of the viewer; and
a control unit that controls display of the image on the image display unit,
the direction toward the front of the observer in real space being taken as the front direction,
a direction orthogonal to the front direction, along a line segment connecting the left and right eyes of the observer, being taken as the left-right direction,
a direction along a line segment orthogonal to both the front direction and the left-right direction being taken as the up-down direction or height direction,
and the direction away from the ground of the real space, or from the surface corresponding to the ground, being taken as upward and the approaching direction as downward,
the control unit performs control such that:
in real space, an upright image, that is, an image recognized as standing upright, is displayed in the display area on an inclined surface, which is a plane or curved surface that rises, relative to the ground or the surface corresponding to the ground, from a side near the observer and low down to a side farther away and higher up;
the upright image is displayed in a display area presenting a quadrangular outline as seen from the observer; and
the convergence angle difference between the upper end and the lower end of the display area is set smaller than a predetermined threshold value determined based on at least one of the recognizability of the image, the time required for recognition, and psychological factors such as discomfort or incongruity.
In the first aspect, the degree of tilt of the display area of the upright image is evaluated with a new index serving as the basis of a threshold value: the "convergence angle difference" (for which the tilt distortion angle caused by the convergence angle difference, see angle θd in fig. 6 (C), may be substituted). This allows the tilt to be determined or set efficiently (or easily) while keeping the upright image accurately recognizable.
The display area has a quadrangular contour; its side toward the ground (or a surface corresponding to the ground, such as a road surface) is called the lower end (lower side), and the opposite side (the side away from the ground) the upper end (upper side). A pair of mutually corresponding points is set on the lower end (lower side) and the upper end (upper side); they can be placed at arbitrary positions, but are preferably, for example, the right end point or the left end point of each end (each side). Call this pair the first point and the second point. The convergence angle at which the first point is observed from the left and right eyes (the angle formed by the visual axes indicating the viewing direction of each eye) is the first convergence angle, the convergence angle at which the second point is observed is the second convergence angle, and the difference obtained by subtracting the second convergence angle from the first convergence angle is the "convergence angle difference". This convergence angle difference may be referred to as "the convergence angle difference between the upper end and the lower end of the display area".
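The convergence angle difference defined above follows directly from the positions of the two eyes and of the two reference points. The short Python sketch below (not part of the patent; the coordinates, the 64 mm interpupillary distance, and all function names are illustrative assumptions) shows one way to evaluate it:

```python
import math

def convergence_angle(eye_left, eye_right, point):
    """Angle (in degrees) between the two visual axes aimed at `point`.

    All arguments are (x, y, z) coordinates in metres:
    x = left-right, y = height, z = forward (depth).
    """
    gl = tuple(p - e for p, e in zip(point, eye_left))
    gr = tuple(p - e for p, e in zip(point, eye_right))
    dot = sum(a * b for a, b in zip(gl, gr))
    norms = math.hypot(*gl) * math.hypot(*gr)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Assumed geometry: 64 mm interpupillary distance, eyes 1.2 m above the road.
EYE_L, EYE_R = (-0.032, 1.2, 0.0), (0.032, 1.2, 0.0)

def convergence_angle_difference(lower_point, upper_point):
    """theta_L - theta_U: angle at the first (lower-end) point minus the
    angle at the second (upper-end) point."""
    return (convergence_angle(EYE_L, EYE_R, lower_point)
            - convergence_angle(EYE_L, EYE_R, upper_point))

# Example: upper-end point 8 m ahead and 0.3 m above eye height; lower-end
# point 0.3 m nearer and 0.3 m below eye height. The result is positive,
# since the nearer lower end is seen under the larger convergence angle.
print(convergence_angle_difference((0.0, 0.9, 7.7), (0.0, 1.5, 8.0)))  # ~0.018
```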
If the display area stands on the ground (or its equivalent surface) at a substantially right angle, then, viewed from above, the upper end (upper side) and the lower end (lower side) of the quadrangle overlap, and the first point overlaps the second point. When the length (height) of the vertical side of the quadrangle is small, the variation in the distances from the first and second points to the left and right eyes caused by their different height positions can be ignored. In this case the first convergence angle and the second convergence angle are the same (substantially the same), and the convergence angle difference is zero (substantially zero).
When the display area is inclined with respect to the ground (its equivalent surface), the lower end (lower side) of the quadrangle moves toward the observer, so the convergence angles at the first point and the second point differ: the first convergence angle is larger than the second. The convergence angle difference is then α (α being a value greater than 0).
When the display area is inclined further and the lower end (lower side) of the quadrangle moves still closer to the observer, the first convergence angle increases further, the difference from the second convergence angle grows, and the convergence angle difference becomes β (β being a value satisfying α < β).
In this way, the "convergence angle difference between the upper end and the lower end of the display area" serves as an indicator of the degree of inclination of the display area with respect to the ground (its equivalent surface). Moreover, since a convergence angle varies with the distance from the observer's eyes, it carries distance information; the convergence angle difference is therefore a comprehensive index (threshold basis) that contains information on the degree of inclination of the display area (or virtual image display surface or the like) with respect to the ground (its equivalent surface) together with the distance. Unlike the prior art, the slope need not be specified under preconditions such as "a tilt of so many degrees at a distance of so many meters".
The visibility of an upright image is not uniform across conditions, but whether it can be recognized normally can be determined objectively based on at least one of the recognizability of the displayed image, the time required for recognition, and psychological factors such as discomfort or incongruity. Using the above index, a threshold value usable for this determination can be obtained.
As described above, as the slope of the display area grows and the first point approaches the viewer, the convergence angle difference increases. Therefore, if, for example, the convergence angle difference near the boundary at which the observer can still fuse (combine) the left-eye and right-eye images in the brain and recognize the result is adopted as a threshold value, then, in designing the HUD device, setting the convergence angle difference below this threshold ensures that the observer can recognize the upright image even though it is displayed on the inclined surface. In other words, the visibility of the upright image for the observer can be kept at or above a predetermined level.
This makes the design of a display capable of showing upright-recognized content (an upright image) while suppressing loss of visibility efficient, or easy. The new index (the convergence angle difference, or the tilt distortion angle caused by it) can also be used for calibration of the HUD device, initialization of the HUD device, simulation of HUD device functions, and the like, improving the efficiency of each of these processes.
In a second aspect, subordinate to the first aspect, the following may be adopted:
the display area is divided into a first area capable of displaying both a virtual image of a depth image, that is, an image recognized as lying obliquely, and a virtual image of the upright image, and a second area for displaying the virtual image of the depth image,
with the convergence angle difference between the upper end and the lower end of the display area set smaller than the predetermined threshold in the first area, and equal to or larger than the predetermined threshold in the second area.
In the second aspect, the display area is thus divided into a first area in which both the depth image and the upright image can be displayed and a second area suited to displaying the depth image, with the convergence angle difference below the threshold in the first area and at or above it in the second.
As described above, the threshold is set based on the visibility of the upright image and the like; at or above it, the visibility of an upright image falls and the area becomes unsuitable for displaying one; in other words, such an area is regarded as suitable for displaying a depth image expressed obliquely (including an image that floats in the air and appears to extend substantially parallel to the road surface or to overlap it). Therefore, the convergence angle difference and the like are set at or above the threshold for the image (virtual image) displayed in the second area. Thus, when, for example, an upright image is displayed in the first area and a depth image in the second area, appropriate visibility of each image can be ensured.
In a third aspect, subordinate to the first or second aspect, the following may be adopted:
the predetermined threshold functions as a normal-recognizability determination threshold for determining whether the upright image can be recognized normally,
and the convergence angle difference serving as the predetermined threshold is set to 0.2°.
In the third aspect, the "predetermined threshold value" described above is made concrete: it can be used as the normal-recognizability determination threshold, and a preferable example of its value is 0.2°.
In a fourth aspect, subordinate to any one of the first to third aspects, the following may be adopted:
when the angle of view in the up-down direction (or height direction) as seen from the observer is called the vertical viewing angle, the restriction of the convergence angle difference by the predetermined threshold is applied to upright-image content whose vertical viewing angle is 0.75° or less.
In the fourth aspect, it is considered that as display content grows larger, fusing the left-eye and right-eye images in the brain becomes harder even at the same convergence angle difference. The threshold is therefore applied to smaller content with a vertical viewing angle of 0.75° or less, for which the convergence angle difference between the upper end and the lower end is set below the threshold.
For upright content larger than this, increased interference with the field of view of the HUD device has been confirmed, so its practical applicability at present cannot be considered high. Limiting display content to a predetermined size or less therefore poses no particular problem even when the above threshold is applied.
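As an illustration of how the 0.75° bound on the vertical viewing angle relates to physical content size, consider the following sketch (the 0.13 m content height and 10 m viewing distance are assumed values, not taken from the patent):

```python
import math

def vertical_view_angle_deg(content_height_m, distance_m):
    # Angle subtended at the eye by content of the given height,
    # approximated as centred on the line of sight.
    return math.degrees(2 * math.atan(content_height_m / (2 * distance_m)))

def threshold_applies(content_height_m, distance_m):
    # The convergence-angle-difference restriction of the fourth aspect
    # is applied only to upright content subtending 0.75 deg or less.
    return vertical_view_angle_deg(content_height_m, distance_m) <= 0.75

# A 0.13 m tall readout viewed from 10 m subtends about 0.745 deg,
# so the 0.2 deg threshold would apply to it.
print(vertical_view_angle_deg(0.13, 10.0))  # ~0.745
print(threshold_applies(0.13, 10.0))        # True
```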
Those skilled in the art will readily appreciate that the illustrated aspects of the invention can be further modified without departing from the spirit of the invention.
Brief description of the drawings
Fig. 1 (a) is a diagram showing an example of a structure and an inclined display area of a HUD device mounted on a vehicle, and fig. 1 (B) and (C) are diagrams showing examples of a method of realizing the display area shown in fig. 1 (a).
Fig. 2 (a) is a diagram showing the main configuration of a HUD device mounted on a vehicle and a display example in its display region, and fig. 2 (B) is a diagram showing an example of a display region including a first region, in which both a virtual image of a depth image and a virtual image of an upright image can be displayed, and a second region for displaying a virtual image of a depth image.
Fig. 3 (a) is a view showing a state in which an inclined plane (inclined display area) showing an arrow as a depth image is arranged in front of the image, and the observer views the image with both eyes, fig. 3 (B) is a view showing an image observed with the left eye, fig. 3 (C) is a view showing a depth image observed by fusing (combining) the images of the left eye and the right eye, and fig. 3 (D) is a view showing an image observed with the right eye.
Fig. 4 (a) is a diagram showing a state in which an inclined plane (inclined display area) showing a vehicle speed display as an upright image is arranged in front of the image, the observer views the image with both eyes, fig. 4 (B) is a diagram showing an image observed with the left eye, fig. 4 (C) is a diagram showing an upright image observed by fusing (combining) the images of the left eye and the right eye, and fig. 4 (D) is a diagram showing an image observed with the right eye.
Fig. 5 (a) is a view showing a state in which an observer views, with both eyes, a display area standing substantially perpendicular to a road surface, fig. 5 (B) is a view showing the convergence angles of both eyes with respect to the first right end point at the lower end (lower side) of the display area in fig. 5 (a) and the corresponding second right end point at the upper end (upper side), and fig. 5 (C) is a view showing an image obtained by fusing (combining) the images of the left and right eyes.
Fig. 6 (a) is a view showing a state in which an observer views, with both eyes, a display area inclined by about 45° with respect to a road surface, fig. 6 (B) is a view showing the convergence angles of both eyes with respect to the first right end point at the lower end (lower side) of the display area in fig. 6 (a) and the corresponding second right end point at the upper end (upper side), fig. 6 (C) is a view showing the image observed with the left eye, fig. 6 (D) is a view showing an upright image obtained by fusing (combining) the images of the left and right eyes, and fig. 6 (E) is a view showing the image observed with the right eye.
Fig. 7 (a) is a view showing a state in which an observer views, with both eyes, a display area inclined by about 30° with respect to a road surface, fig. 7 (B) is a view showing the convergence angles of both eyes with respect to the first right end point at the lower end (lower side) of the display area in fig. 7 (a) and the corresponding second right end point at the upper end (upper side), fig. 7 (C) is a view showing the image observed with the left eye, fig. 7 (D) is a view showing the appearance, difficult to recognize owing to ghosting or the like, when fusion (combination) of the left-eye and right-eye images is attempted, and fig. 7 (E) is a view showing the image observed with the right eye.
Fig. 8 (a) and (B) are flowcharts showing an example of a design method of a HUD device (oblique image plane HUD device).
Fig. 9 is a diagram showing a configuration example of a display control unit (control unit) in the HUD device.
Fig. 10 (a) and (B) are diagrams showing other examples of the tilted display area.
Fig. 11 is a graph showing, as experimental results, the proportion (vertical axis) of subjects who answered that they felt no discomfort, for each convergence angle difference (horizontal axis).
Detailed Description
The following description of the preferred embodiments is provided for easy understanding of the present invention. Accordingly, it should be noted by those skilled in the art that the present invention is not unduly limited by the embodiments described below.
Reference is made to fig. 1. Fig. 1 (a) is a diagram showing an example of the structure of a HUD device mounted on a vehicle and its inclined display area, and fig. 1 (B) and (C) are diagrams showing examples of methods of realizing the display area shown in fig. 1 (a). In fig. 1, the direction along the front of the vehicle 1 (also called the front-rear direction) is the Z direction, the direction along the width of the vehicle 1 (the left-right direction) is the X direction, and the height direction of the vehicle 1, i.e. the upward direction (the direction of a line segment perpendicular to the flat road surface 40, pointing away from the ground or the surface corresponding to it, here the road surface 40), is the Y direction.
In the following description, the term "virtual display area" (also simply "display area") provided in front of the observer may be interpreted broadly. For example, a virtual display surface (sometimes called a virtual image display surface) corresponding to the display range of a display surface, such as a screen, on which the image is rendered may be regarded as one display area; alternatively, when an image displayed on that virtual display surface is placed in an image region of predetermined shape (for example, a quadrangle) and predetermined size, that region (a part of the virtual image display surface) may be regarded as a display area. In the following, based on the above, only the term "display area" is used.
In describing the shape of the display area, the expressions "up" and "down" may be used. Here, for convenience, the direction along a line segment (normal) perpendicular to the road surface 40 (also the height direction of the vehicle 1) is taken as the up-down direction; when the road surface is horizontal, the direction toward the road surface is down and the opposite direction is up. The same applies to the description of the other drawings.
As shown in fig. 1 (a), the HUD device 100 of the present embodiment is mounted inside the dash panel 41 of a vehicle (own vehicle) 1. The HUD device 100 can display, in front of the vehicle 1, in the display area PS1 inclined with respect to the road surface 40, both an upright image (an image in which depth is not particularly emphasized, composed for example of numerals or characters) and a depth image (also called an oblique image, for which depth is an important element, such as a navigation arrow extending along the road surface 40).
The HUD device 100 has: a display unit 160 (sometimes also called an image display unit; specifically, for example, a screen) having a display surface 164 on which an image is displayed; an optical system 120 including an optical member that projects the display light K of the displayed image onto the windshield serving as the projection target member (reflective light-transmitting member) 2; and a light projecting section (image projecting section) 150. The optical system 120 has a curved mirror (also called a concave mirror or magnifying mirror) 170 with a reflecting surface 179. The reflecting surface 179 need not have a single radius of curvature; it may, for example, be composed of a set of partial areas having a plurality of radii of curvature, and a free-form-surface design method (or a free-form surface itself) may be used. A free-form surface is a surface that cannot be expressed by a simple mathematical expression; it is expressed by setting a plurality of points and curvatures in space and interpolating between them with higher-order equations. The shape of the reflecting surface 179 has a considerable influence on the shape of the display area PS1 and on its relation to the road surface.
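The free-form surface described here, a set of sampled points interpolated by higher-order equations, can be prototyped with, for example, a bicubic spline; in the sketch below the grid and sag values are placeholders and not the actual mirror geometry:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Placeholder sag (surface height) samples of a reflecting surface
# on a coarse 5 x 5 grid of mirror coordinates [m].
x = np.linspace(-0.10, 0.10, 5)
y = np.linspace(-0.08, 0.08, 5)
sag = np.array([[0.0012 * (xi**2 + 1.5 * yi**2) for yi in y] for xi in x])

# Bicubic interpolation (kx = ky = 3) yields a smooth higher-order surface
# through the sampled points, in the spirit of a free-form surface design.
surface = RectBivariateSpline(x, y, sag, kx=3, ky=3)
print(surface(0.03, 0.02))  # interpolated sag at an intermediate point
```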
The shape of the display area PS1 is affected by the shape of the reflecting surface 179 of the curved mirror (concave mirror) 170, the curved shape of the windshield (reflective light-transmitting member 2), and the shapes of other optical components mounted in the optical system 120 (for example, a correction mirror). It is also affected by the shape of the display surface 164 of the display unit 160 (generally a plane, though the whole or a part may be non-planar) and by the arrangement of the display surface 164 relative to the reflecting surface 179. The curved mirror (concave mirror) 170, being a magnifying mirror, has a particularly large influence on the shape of the display area (virtual image display surface); in practice, changing the shape of its reflecting surface 179 changes the shape of the display area (virtual image display surface) PS1.
The display area PS1, extending continuously from the proximal end portion U1 to the distal end portion U3, can be formed by disposing the display surface 164 of the display portion 160 obliquely, at a crossing angle smaller than 90 degrees with respect to the optical axis of the optical system (the main optical axis corresponding to the chief ray).
The shape of the curved surface of the display area PS1 can be adjusted by adjusting the optical characteristics of the entire area or a part of the area in the optical system, adjusting the arrangement of the optical components and the display surface 164, adjusting the shape of the display surface 164, or a combination of these. In this way, the shape of the virtual image display surface can be variously adjusted. Thereby, the display area PS1 having the first area Z1 and the second area Z2 can be realized.
In other words, the display area PS1 is divided into a first area Z1, in which both the depth image (oblique image) and the upright image can be displayed, and a second area Z2, suited to (in other words, dedicated to) display of the depth image (oblique image).
This will be described concretely below. As shown on the left and lower left of fig. 1 (B), the manner and degree of the overall inclination of the display area (including the virtual image display surface) PS1 are adjusted according to the manner and degree of inclination of the display surface 164 of the display section 160. In the example of fig. 1 (B), distortion of the display area (virtual image display surface) caused by the curved shape of the windshield (reflective light-transmitting member 2) is corrected by the curved shape of the reflecting surface 179 of the curved mirror (concave mirror or the like) 170, and as a result a planar display area (virtual image display surface) PS1 is produced.
As shown on the right and lower right of fig. 1 (B), the degree to which the display area PS1 (virtual image display surface) serving as an inclined surface is separated from the road surface 40 can be adjusted by adjusting the positional relationship between the optical member (here, the curved mirror (concave mirror or the like) 170) and the display surface 164; in other words, for example, by rotating the display surface 164 so that its relative relationship to the optical member (curved mirror 170) changes.
As shown in fig. 1 (C), by adjusting the shape of the reflecting surface of the curved mirror (concave mirror or the like) 170 serving as the optical member (or the shape of the display surface 164 of the display unit 160), the virtual image display distance near the end of the display area PS1 on the vehicle 1 side (the proximal portion U1) is changed, and the vicinity of the proximal portion U1 is made to bend toward the road surface and stand up relative to it (in other words, its elevation is raised), giving a display area PS1 with an inclined portion.
As shown in the upper side of fig. 1C, the reflection surface 179 of the curved mirror 170 can be divided into three parts (portions) of Near (Near display portion), center (middle (Center) display portion), far (Far display portion).
Here, near is a portion for generating display light E1 corresponding to the proximal portion U1 of the display area PS1 (indicated by a single-dot chain line in fig. 4 a and B), center is a portion for generating display light E2 corresponding to the intermediate portion (central portion) U2 of the display area PS1 (indicated by a broken line), and Far is a portion for generating display light E3 corresponding to the distal portion U3 of the display area PS1 (indicated by a solid line).
In fig. 1 (C), the Center and Far portions are the same as in the curved mirror (concave mirror or the like) 170 that generates the flat display area PS1 of fig. 1 (B). In fig. 1 (C), however, the curvature of the Near portion is set smaller than in fig. 1 (B), so the magnification of the portion corresponding to Near becomes larger.
The magnification c of the HUD device 100 can be expressed as c = b/a, where a is the distance from the display surface 164 of the display unit 160 to the windshield 2, and b is the distance from the windshield (reflective light-transmitting member 2) to the point where the light it reflects, passing through the viewpoint A, forms the image. When the curvature of the Near portion becomes smaller, a becomes smaller, the magnification increases, and the image is formed at a position farther from the vehicle 1. That is, the virtual image display distance is larger in the case of fig. 1 (C) than in the case of fig. 1 (B).
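A numeric illustration of the magnification relation c = b/a (the path lengths are assumed values for illustration only):

```python
# Magnification c = b / a, where
#   a: distance from the display surface 164 to the windshield 2
#   b: distance from the windshield to the point where the virtual
#      image is formed, via the viewpoint A
a, b = 0.5, 5.0                 # assumed path lengths [m]
print(b / a)                    # magnification 10.0

# A smaller curvature of the Near portion reduces the effective a,
# so the magnification grows and the image forms farther away.
a_near = 0.4
print(b / a_near)               # magnification 12.5
```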
Therefore, the proximal end portion U1 of the display area PS1 moves away from the vehicle 1 while bending down toward the road surface 40, with the result that the first area Z1 is formed. The display area PS1 having the first area Z1 and the second area Z2 is thus obtained.
Next, refer to fig. 2. Fig. 2 (a) is a diagram showing the main configuration of a HUD device mounted on a vehicle and a display example in its display region, and fig. 2 (B) is a diagram showing an example of a display region including a first region, in which both a virtual image of a depth image and a virtual image of an upright image can be displayed, and a second region for displaying a virtual image of a depth image. In fig. 2, portions common to fig. 1 carry the same reference numerals.
As shown in fig. 2, the HUD device 100 has: a display unit (for example, a light-transmitting screen) 160 having a display surface 164; a mirror 165; and a curved mirror 170 (for example, a concave mirror having a reflecting surface 179, in some cases a free-form surface), which is an optical member for projecting the display light. The image displayed on the display unit 160 is projected onto the projected area 5 of the windshield 2, the projection target member, via the mirror 165 and the curved mirror 170. The HUD device 100 may also be provided with a plurality of curved mirrors. Instead of (or in addition to) some of the mirrors (reflective optical elements) of the present embodiment, a configuration may be adopted that includes refractive optical elements such as lenses, diffractive optical elements, or other functional optical elements.
Part of the display light of the image is reflected by the windshield 2, enters the viewpoint (eye) A of the driver or other observer located inside (or on) the predetermined viewpoint region EB (a three-dimensional region, drawn as planar for convenience), and is imaged in front of the vehicle 1, whereby various images (virtual images) are displayed in the virtual display region (virtual image display surface) PS1. Fig. 2 (a) shows, as display examples in the first region Z1 of the display region PS1, the vehicle speed display SP as an upright image (upright virtual image) and the image (virtual image) AW' of a navigation arrow with depth display. In the second region Z2, the image (virtual image) AW of a navigation arrow extending along the road surface 40 from the near side toward the far side of the vehicle 1 is shown.
As shown in fig. 2B, the angle (inclination angle) between the first region Z1 and the road surface 40 is θ1 (0 < θ1 < 90 °), and the angle (inclination angle) between the second region Z2 and the road surface 40 is θ2 (0 < θ2 < θ1). The first region Z1 and the second region Z2 are each inclined regions (or regions having at least inclined portions).
Next, refer to fig. 3. Fig. 3 (a) is a view showing a state in which an inclined plane (inclined display area) showing an arrow as a depth image is arranged in front of the image, and the observer views the image with both eyes, fig. 3 (B) is a view showing an image observed with the left eye, fig. 3 (C) is a view showing a depth image observed by fusing (combining) the images of the left eye and the right eye, and fig. 3 (D) is a view showing an image observed with the right eye.
In fig. 3 (a), a midpoint C0 is depicted at a central position between the left eye A1 and the right eye A2. For convenience, an image obtained by fusing an image observed by the left eye A1 and an image observed by the right eye A2 in the brain of the observer may be referred to as an image of the midpoint position C0.
In fig. 3 (a), the image (virtual image) AW of a navigation arrow extending obliquely is displayed in the second region Z2 of the display region PS1. When the images of the left and right eyes A1 and A2 shown in fig. 3 (B) and (D) (images having binocular parallax) are fused (combined), a (stereoscopic) image with a sense of depth such as that of fig. 3 (C) can be recognized. In other words, owing to the change in image shape caused by the positional shift between the upper end and the lower end within the view of each of the left and right eyes A1 and A2, the arrow image (virtual image) AW is naturally recognized as leaning toward the far side, and its visibility or recognizability improves.
Next, refer to fig. 4. Fig. 4 (a) is a diagram showing a state in which an inclined plane (inclined display area) showing a vehicle speed display as an upright image is arranged in front of the image, the observer views the image with both eyes, fig. 4 (B) is a diagram showing an image observed with the left eye, fig. 4 (C) is a diagram showing an upright image observed by fusing (combining) the images of the left eye and the right eye, and fig. 4 (D) is a diagram showing an image observed with the right eye.
In fig. 4 a, an image (virtual image) of the vehicle speed display SP (displayed as "120km/h" as described in fig. 4B) is shown in the first region Z1 of the display region PS 1. The vehicle speed display SP is displayed in the inclined first region Z1, and is displayed as an upright image (upright virtual image) recognized upright.
If the inclination angle of the first region Z1 with respect to the road surface 40 is not made too small, that is, if the region stands up to some extent, the upright image can be recognized (information can be read from it as an upright image) without impairing the observer's visibility. In this case, for example, when the images of the left and right eyes A1 and A2 shown in fig. 4 (B) and (D) (images having binocular parallax) are fused (combined), the vehicle speed display SP can be recognized as a standing image as in fig. 4 (C).
On the other hand, when the inclination angle of the first region Z1 with respect to the road surface 40 is small, how the image appears when fusion (combination) of the left-eye and right-eye images fails (non-fusion) varies from person to person: for example, the whole may appear oblique, or the shape seen by one eye may dominate. In any case, visibility falls, recognition time grows, and the subjective impression (a psychological factor) is not grasped positively.
Next, refer to fig. 5. Fig. 5 (a) is a view showing a state in which an observer views, with both eyes, a display area standing substantially perpendicular to a road surface, fig. 5 (B) is a view showing the convergence angles of both eyes with respect to the first right end point at the lower end (lower side) of the display area in fig. 5 (a) and the corresponding second right end point at the upper end (upper side), and fig. 5 (C) is a view showing an image obtained by fusing (combining) the images of the left and right eyes.
In the following description, a display area is given a contour of a predetermined shape (herein, a quadrangle), and a ground (or a surface corresponding to the ground: a road surface or the like) side of the display area is set as a lower end (lower side), and an opposite side (a side away from the ground) is set as an upper end (upper side). The term "quadrangle" representing the shape of the display area includes, for example, a rectangle, a square, a trapezoid, a parallelogram, and the like, and is interpreted in a broad sense.
Fig. 5 a shows a display area (here, a first area Z1) standing substantially perpendicular to the road surface 40, in front of the observer. The observer observes the image (virtual image) displayed in the first region Z1 with both eyes A1, A2. As shown in fig. 5C, the displayed image (virtual image) is the vehicle speed display SP described above.
In fig. 5 (a), the symbol PL marks the lower end of the display region (first region Z1) as viewed (hereinafter sometimes simply called the lower end or lower side), and the symbol PU marks the upper end as viewed (sometimes simply called the upper end or upper side).
Fig. 5 (B) shows, in a plan view looking down from above in fig. 5 (a) (toward the -Y direction indicated by the arrow), the convergence angles due to the observer's binocular parallax.
In fig. 5B, a pair of points (which may be provided at any position, preferably, for example, right or left end points of the respective ends) corresponding to each other are set on the lower end (lower side) PL and the upper end (upper side) PU. In fig. 5B, a right end point R1 at the lower end (lower side) and a right end point R2 at the upper end (upper side) are set. Point R1 is referred to as a first point and point R2 is referred to as a second point.
The convergence angle at which the first point R1 is observed from each of the left and right eyes A1, A2 (the angle formed by the visual axes indicating the directions of the eyes A1, A2) is referred to as a first convergence angle θl, the convergence angle at which the second point R2 is observed is referred to as a second convergence angle θu, and the difference between the first convergence angle θl and the second convergence angle θu (the difference obtained by subtracting the second convergence angle θu from the first convergence angle θl) is referred to as a "convergence angle difference". This convergence angle difference may be referred to as "a convergence angle difference between the upper end (or upper side) and the lower end (or lower side) of the display area".
In the example of fig. 5, since the display area (first area Z1) stands on the road surface 40 at a substantially right angle, the upper end (upper side) PU and the lower end (lower side) PL of the quadrangle overlap each other, and the first point R1 and the second point R2 overlap each other when the quadrangle of the outline of the display area (first area Z1) is viewed from above. Here, when the length of the vertical side of the quadrangle (the length of the line segment indicating the first region Z1 in fig. 5 (a): in other words, the height of the first region with respect to the road surface 40) is small, the difference in distance (the amount of fluctuation in distance) between the first point R1 and the second point R2 and the left and right eyes A1 and A2 due to the difference in height positions of the first point R1 and the second point R2 can be ignored. In the above case, the first convergence angle θl and the second convergence angle θu of the first point R1 and the second point R2 have the same (substantially the same) value, and the convergence angle difference is zero (substantially zero).
As shown in fig. 5C, an image obtained by fusing (combining) the images of the left and right eyes (an image at the center position C0, simply referred to as C0) is recognized as a portrait, and there is no problem in the visibility of the vehicle speed display SP.
Next, refer to fig. 6. Fig. 6 (a) is a view showing a state in which an observer views, with both eyes, a display area inclined by about 45° with respect to a road surface, fig. 6 (B) is a view showing the convergence angles of both eyes with respect to the first right end point at the lower end (lower side) of the display area in fig. 6 (a) and the corresponding second right end point at the upper end (upper side), fig. 6 (C) is a view showing the image observed with the left eye, fig. 6 (D) is a view showing an upright image obtained by fusing (combining) the images of the left and right eyes, and fig. 6 (E) is a view showing the image observed with the right eye.
As shown in fig. 6 (a), the display area (first area Z1) is inclined at about 45° to the road surface 40, and the lower end (lower side) of the quadrangle representing the outline of the first region Z1 moves toward the observer. In fig. 6 (B), the lower end (lower side) PL is closer to the viewer than the upper end (upper side) PU. As a result, the first convergence angle θL at the first point R1 and the second convergence angle θU at the second point R2 differ: the first convergence angle θL is larger than the second convergence angle θU. The convergence angle difference is therefore α (α being a value greater than 0).
Refer to fig. 6 (C) and (E). Since the display area Z1 leans away toward its top, the burden on the left eye A1 and the right eye A2 increases accordingly. Owing to binocular parallax, the quadrangle of the image observed by the left eye A1 is distorted toward the left and that observed by the right eye A2 toward the right, each being perceived as a roughly parallelogram shape. To that extent, the upright image (vehicle speed display SP) also becomes harder to recognize.
However, human vision can recognize an upright image by correcting depth and left-right distortion to some degree, and in the example of fig. 6 the limit of this correction (recognition) function is not exceeded. Therefore, as shown in fig. 6 (D), the vehicle speed display SP as an upright image can be recognized substantially accurately.
Next, refer to fig. 7. Fig. 7 (a) is a view showing a state in which an observer views, with both eyes, a display area inclined by about 30° with respect to a road surface, fig. 7 (B) is a view showing the convergence angles of both eyes with respect to the first right end point at the lower end (lower side) of the display area in fig. 7 (a) and the corresponding second right end point at the upper end (upper side), fig. 7 (C) is a view showing the image observed with the left eye, fig. 7 (D) is a view showing the appearance, difficult to recognize owing to ghosting or the like, when fusion (combination) of the left-eye and right-eye images is attempted, and fig. 7 (E) is a view showing the image observed with the right eye.
As shown in fig. 7 (a), the display area (first area Z1) is inclined still further with respect to the road surface 40. As shown in fig. 7 (B), the lower end (lower side) PL of the quadrangle moves still closer to the observer. As a result, the first convergence angle θL increases further, the difference from the second convergence angle θU grows, and the convergence angle difference (θL-θU) becomes β (β being a value satisfying α < β).
In the example of fig. 7, the limits of the human correction function for depth and left-right distortion are exceeded, and fusion cannot be performed accurately. Therefore, as shown in fig. 7 (C) to (E), the vehicle speed display SP as an upright image cannot be recognized accurately. Since this state is difficult to depict in the drawings, it is simply labeled SP there.
As described above and shown in figs. 5 to 7, the "convergence angle difference (θL-θU) between the upper end (upper side) and the lower end (lower side) of the display area" can serve as an index of the degree of inclination of the display area with respect to the ground (its equivalent surface).
Since the convergence angles θL and θU vary with the distance from the observer's eyes A1 and A2, they carry distance information; the convergence angle difference (θL-θU) is therefore a comprehensive index (threshold basis) that contains information on the degree of inclination of the display area (or virtual image display surface or the like) with respect to the ground (its equivalent surface) together with the distance. Unlike the prior art, the slope need not be specified under distance preconditions such as "a tilt of so many degrees at a distance of so many meters". Introducing this index into the design or the like of the HUD device therefore makes setting the slope of the display area and the like efficient (easy).
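That the convergence angle difference combines tilt and distance into one number can be checked numerically for the three situations of figs. 5 to 7. The sketch below assumes a 64 mm interpupillary distance and a 1.5 m tall display area whose centre lies 5 m ahead of the eyes; these values are illustrative, not taken from the patent:

```python
import math

IPD = 0.064                      # assumed interpupillary distance [m]
EYES = ((-IPD / 2, 0.0, 0.0), (IPD / 2, 0.0, 0.0))

def convergence_deg(point):
    gl = tuple(p - e for p, e in zip(point, EYES[0]))
    gr = tuple(p - e for p, e in zip(point, EYES[1]))
    dot = sum(a * b for a, b in zip(gl, gr))
    return math.degrees(math.acos(dot / (math.hypot(*gl) * math.hypot(*gr))))

HEIGHT, DIST = 1.5, 5.0          # assumed display-area height and distance [m]
for tilt_deg in (90, 45, 30):    # figs. 5, 6, 7: vertical, about 45 and 30 deg
    t = math.radians(tilt_deg)
    half = HEIGHT / 2
    lower = (0.0, -half * math.sin(t), DIST - half * math.cos(t))
    upper = (0.0,  half * math.sin(t), DIST + half * math.cos(t))
    print(tilt_deg, round(convergence_deg(lower) - convergence_deg(upper), 3))
# Prints roughly 0.000, 0.155 and 0.192 [deg]: zero for the vertical area,
# growing as the area leans back and its lower end nears the observer.
```

Under these assumed dimensions the 30° case sits just below the preferred 0.2° threshold; a slightly flatter area would exceed it.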
Here, the visibility of an upright image is not uniform, but whether it can be recognized normally can be determined objectively based on at least one of the recognizability of the displayed image (a first factor), the time required for recognition (a second factor), and psychological factors such as discomfort or incongruity (a third factor). Using the above index, a threshold usable for this determination can be obtained.
(experimental results)
Using the convergence angle difference as the index, the present inventors attempted to determine accurately, based on the first to third factors, whether an upright image can be recognized. Here, when all three factors were judged NG, the upright image was treated as difficult to recognize normally; when only one or two were NG, it was treated as recognizable normally.
As a result, it was found that recognition became difficult when the convergence angle difference between the upper end and the lower end of an immediately recognized image reached 0.182°. Rounding 0.182 at a granularity on the order of 0.1 gives 0.2, so 0.2° can be extracted as an example of a preferred threshold. Accordingly, by keeping the convergence angle difference smaller than 0.2°, an upright image can be recognized.
Specifically, this "predetermined threshold value" can be used as the normal-recognizability determination threshold, and a preferable example of its value is the 0.2° described above.
By using this threshold (index), the design of a display capable of presenting upright-recognized content (an upright image) while suppressing loss of visibility becomes efficient or easy. The new index (the convergence angle difference, or the tilt distortion angle caused by it (see angle θd in fig. 6 (C))) can also be used for calibration of the HUD device, initialization of the HUD device, simulation of HUD device functions, and the like, improving the efficiency of each of these processes.
As shown in fig. 2 (a), when the display area is divided into a first area Z1 in which both the depth image and the upright image can be displayed and a second area Z2 suited to displaying the depth image, the convergence angle difference between the upper end and the lower end of the display area in the second area Z2 may be set to the predetermined threshold or more (preferably 0.2° or more).
As described above, the threshold is set based on the visibility of the upright image and the like; at or above it, the visibility of an upright image falls and the area becomes unsuitable for displaying one; in other words, such an area is regarded as suitable for displaying a depth image expressed obliquely (including an image that floats in the air and appears to extend substantially parallel to the road surface or to overlap it). Therefore, the convergence angle difference is set at or above the threshold for the image (virtual image) displayed in the second area Z2. Thus, when, for example, an upright image is displayed in the first area Z1 and a depth image in the second area Z2, appropriate visibility of each image can be ensured.
When the angle of view in the up-down direction (or height direction) as seen from the observer is called the vertical viewing angle, the restriction of the convergence angle difference by the predetermined threshold can be applied to upright-image content whose vertical viewing angle is 0.75° or less.
In other words, it is considered that as display content grows larger, fusing the left-eye and right-eye images in the brain becomes harder even at the same convergence angle difference; the threshold described above is therefore applied to smaller content with a vertical viewing angle of 0.75° or less, for which the convergence angle difference between the upper end and the lower end is set below the threshold.
For upright content larger than this, increased interference with the field of view of the HUD device 100 has been confirmed, so its practical applicability at present cannot be considered high. Limiting display content to a predetermined size or less therefore poses no particular problem even when the above threshold is applied.
Next, refer to fig. 8. Fig. 8 (a) and (B) are flowcharts showing an example of a design method for a HUD device (oblique-image-plane HUD device). In fig. 8 (a), each part of the oblique-image-plane HUD device is designed so that the convergence angle difference between the upper end and the lower end of the display area of the information image recognized upright (the upright image), or the oblique distortion angle caused by that difference, is smaller than a predetermined threshold (preferably smaller than 0.2°) determined based on at least one of the recognizability of the image, the time required for recognition, and psychological factors such as discomfort or incongruity (step S1).
In fig. 8 (B), the display area is divided into a first area, in which both the obliquely recognized information image (depth image) and the upright-recognized information image (upright image) can be displayed, and a second area, in which the obliquely recognized information image (depth image) is displayed (step S2). Next, preferably for content with a vertical viewing angle of 0.75° or less, the convergence angle difference between the upper end and the lower end in the first area is designed to be smaller than the predetermined threshold (preferably smaller than 0.2°), and the convergence angle difference between the upper end and the lower end in the second area is designed to be equal to or larger than the predetermined threshold (step S3).
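The two flows of fig. 8 amount to a pair of acceptance checks. A minimal sketch follows (the function names are illustrative assumptions; the constants follow the preferred values stated above):

```python
THRESHOLD_DEG = 0.2             # preferred normal-recognizability threshold
MAX_UPRIGHT_VIEW_ANGLE_DEG = 0.75

def first_region_ok(conv_diff_deg, vertical_view_angle_deg):
    """Steps S1/S3: an upright-capable (first) region must keep the
    convergence angle difference below the threshold; the restriction
    is applied to content subtending 0.75 deg or less."""
    if vertical_view_angle_deg > MAX_UPRIGHT_VIEW_ANGLE_DEG:
        return True             # restriction not applied to larger content
    return conv_diff_deg < THRESHOLD_DEG

def second_region_ok(conv_diff_deg):
    """Step S3: a depth-only (second) region has a convergence angle
    difference at or above the threshold."""
    return conv_diff_deg >= THRESHOLD_DEG

print(first_region_ok(0.15, 0.6))   # True: below threshold, small content
print(second_region_ok(0.25))       # True: at or above threshold
```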
Next, refer to fig. 9. Fig. 9 is a diagram showing a configuration example of the display control unit (control unit) in the HUD device. The upper part of fig. 9 is substantially the same as fig. 1 (A), except that a camera 188 for line-of-sight detection and a viewpoint position detection unit 192 are additionally provided.
The display control unit (control unit) 190 has an input/output (I/O) interface 193 and an image processing unit 194. The image processing unit 194 includes an image generation control unit 195, a ROM 198 (holding an upright image table 199 and a depth image table 200), a VRAM 201 (holding warp parameters 196 and a post-warp-processing data storage buffer 197), and an image generation unit (image drawing unit) 202.
For example, in the example shown in fig. 2 (A), the image generation control unit 195 can perform control such as placing the vehicle speed display SP and the arrow display AW' in the first region Z1, and placing the arrow display AW in the second region Z2. The display control unit (control unit) 190 may also set the display areas at appropriate positions using the above-described threshold, for example at the time of calibration or initialization of the HUD device 100.
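A minimal sketch of this placement control follows, with hypothetical class and method names (the patent specifies the behavior, not an implementation):

```python
class DisplayController:
    """Hypothetical sketch of region-aware content placement."""

    def __init__(self, z1_diff_deg: float, z2_diff_deg: float,
                 threshold_deg: float = 0.2):
        # Convergence angle differences designed (or measured) per region.
        self.upright_ok = {
            "Z1": z1_diff_deg < threshold_deg,  # first region
            "Z2": z2_diff_deg < threshold_deg,  # second region, normally False
        }

    def place(self, content_id: str, kind: str) -> str:
        """Upright content must go where the convergence angle difference
        stays below the threshold; depth content may overlap the road in Z2."""
        if kind == "upright":
            if not self.upright_ok["Z1"]:
                raise ValueError(f"no region suitable for upright content {content_id!r}")
            return "Z1"
        return "Z2"

# e.g. vehicle speed SP and arrow AW' in Z1, road-overlaid arrow AW in Z2:
ctrl = DisplayController(z1_diff_deg=0.15, z2_diff_deg=0.30)
assert ctrl.place("SP", "upright") == "Z1"
assert ctrl.place("AW", "depth") == "Z2"
```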
Next, refer to fig. 10. Fig. 10 (A) and (B) are diagrams showing other examples of the tilted display area. The device structure itself is the same as in fig. 1 (A).
The cross-sectional shape of the display area PS1 as viewed from the width direction (left-right direction, X direction) of the vehicle 1 is not limited to the shape that is convex toward the driver as shown in fig. 1 and the like. As shown in fig. 10 (A), the display area PS1 may be concave toward the driver, and as shown in fig. 10 (B), it need not be curved at all. These are merely examples; display areas with various cross-sectional shapes are conceivable.
The effect of the configuration of the present embodiment in improving the visibility of the image was confirmed experimentally. In this experimental example, a sensory evaluation was performed in which subjects viewed virtual images displayed by a plurality of head-up display devices and answered whether or not they felt discomfort. The head-up display devices each displayed an upright image with a different convergence angle difference between the upper end and the lower end. In every device, the upright image, as observed from the subject's viewing position, appeared as a horizontally oriented rectangle with a vertical viewing angle of 0.75°.
Fig. 11 is a graph showing, for each convergence angle difference (horizontal axis), the ratio (vertical axis) of subjects who answered that they felt no discomfort when recognizing the upright image. It can be seen that the smaller the convergence angle difference between the upper end and the lower end, the higher the ratio of subjects who felt no discomfort.
When the convergence angle difference is 0.22°, the ratio of subjects who felt no discomfort is 0%; that is, all subjects answered that they felt discomfort. The ratio of subjects who felt no discomfort is 20% at a convergence angle difference of 0.20°, 40% at 0.17°, 40% at 0.15°, 60% at 0.14°, 80% at 0.12°, 100% at 0.09°, and 100% at 0.06°.
From the above results, all subjects felt discomfort at a convergence angle difference of 0.22° or more, whereas at 0.20° or less the proportion of subjects who felt discomfort decreased. It is therefore considered preferable that, when an upright image is recognized, the convergence angle difference between its upper end and lower end be 0.20° or less.
Further, it was confirmed that half or more of the subjects felt no discomfort when the convergence angle difference was 0.14° or less. That is, a convergence angle difference of 0.14° or less is more preferable, since the effect of not causing discomfort when the upright image is recognized is sufficiently obtained.
Further, it was confirmed that none of the subjects felt discomfort when the convergence angle difference was 0.09° or less. That is, a convergence angle difference of 0.09° or less is still more preferable, since the effect of causing no discomfort to any observer when the upright image is recognized can be obtained.
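For reference, the following sketch (hypothetical helper names) transcribes the Fig. 11 data given above and recovers the three preferred limits:

```python
# Fig. 11 data as transcribed above: percentage of subjects reporting no
# discomfort at each convergence angle difference (degrees).
no_discomfort_pct = {
    0.22: 0, 0.20: 20, 0.17: 40, 0.15: 40,
    0.14: 60, 0.12: 80, 0.09: 100, 0.06: 100,
}

def max_diff_meeting(target_pct: float) -> float:
    """Largest convergence angle difference at which at least target_pct
    of subjects felt no discomfort."""
    return max(d for d, pct in no_discomfort_pct.items() if pct >= target_pct)

print(max_diff_meeting(1))    # 0.2  -> at least some subjects comfortable
print(max_diff_meeting(50))   # 0.14 -> half or more comfortable
print(max_diff_meeting(100))  # 0.09 -> all subjects comfortable
```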
The present invention can be widely applied to parallax-type HUD devices that present images having parallax, light-ray reconstruction-type HUD devices using a lenticular lens or the like, and so on.
In the present specification, the term vehicle is to be interpreted broadly. Terms related to navigation (for example, an arrow for navigation) may likewise be interpreted broadly, for instance as navigation information in the broad sense that contributes to the running of the vehicle, and may include road signs. The upright image may also be called a facing image, that is, an image the observer views head-on; the term is to be interpreted broadly and is not limited to this name. In addition, the HUD device includes devices used as simulators (for example, an airplane simulator or a simulator serving as a game device).
The present invention is not limited to the above-described exemplary embodiments, and those skilled in the art can easily modify them without departing from the technical scope of the invention.
Symbol description
1: a vehicle (own vehicle); 2: a projected member (a reflective light-transmitting member, a windshield, etc.); 5: a projection area; 40: road surface; 100: HUD device; 120: an optical system including an optical component; 150: a light projecting section (image projecting section); 160: a display unit (e.g., a liquid crystal display device, a screen, etc.); 164: a display surface; 170: curved mirrors (concave mirrors, etc.); 179: a reflecting surface; 188: a line-of-sight detection camera; 190: a display control unit (control unit); 192: a viewpoint position detection unit; 194: an image processing section; 195: an image generation control unit; 196: a distortion parameter; 197: a post-warp data buffer; 198: a ROM;199: erecting an image table; 200: a depth image table; 201: VRAM (image processing storage device); 202: an image generation unit (image drawing unit); EB: a viewpoint region; PS1: a display area (virtual image display surface); z1: a first display area capable of displaying both an erect image and a depth image; z2: and displaying a second display area of the depth image.

Claims (4)

1. A head-up display device, comprising:
an image display unit that displays an image;
an optical system that projects light of the image displayed on the image display unit toward a projection target member, thereby causing an observer to recognize a virtual image of the image in a virtual display area in real space in front of the observer; and
a control unit that controls display of the image on the image display unit,
taking the direction toward the front of the observer in real space as the front direction,
a direction orthogonal to the front direction, along a line segment connecting the left and right eyes of the observer, as the left-right direction, and
a direction along a line segment orthogonal to both the front direction and the left-right direction as the up-down direction or height direction, where the direction away from the ground, or from a surface corresponding to the ground, in real space is upward and the approaching direction is downward,
the control unit performs control such that,
in real space, an upright image, which is an image recognized as being upright, is displayed in a display area on an inclined surface, the inclined surface being a flat or curved surface inclined, with respect to the ground or a surface corresponding to the ground, from a side close to the observer and lower toward a side farther from the observer and higher,
the upright image is displayed in a display area having a quadrangular outline as observed from the observer, and
the convergence angle difference between the upper end and the lower end of the display area is set to be smaller than a predetermined threshold determined based on at least one of the recognizability of the image, the time required for recognition, and psychological factors such as a sense of discomfort.
2. The head-up display device according to claim 1, wherein,
the display area is divided into a first area in which both a virtual image of a depth image, which is an obliquely recognized image, and a virtual image of the upright image can be displayed, and a second area in which the virtual image of the depth image is displayed,
the convergence angle difference between the upper end and the lower end of the display area in the first area is set to be smaller than the predetermined threshold, and
the convergence angle difference between the upper end and the lower end of the display area in the second area is set to be equal to or greater than the predetermined threshold.
3. The head-up display device according to claim 1 or 2, wherein,
the predetermined threshold functions as a determination threshold for determining whether the upright image can be properly recognized as upright, and
the convergence angle difference serving as the predetermined threshold is set to 0.2°.
4. The head-up display device according to any one of claims 1 to 3, wherein,
when the viewing angle in the up-down direction (or height direction) as seen from the observer is referred to as the vertical viewing angle,
the limitation of the convergence angle difference by the predetermined threshold is applied to upright-image content having a vertical viewing angle of 0.75° or less.
CN202180058974.3A 2020-07-29 2021-07-26 Head-up display device Pending CN116157290A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020128459 2020-07-29
JP2020-128459 2020-07-29
PCT/JP2021/027463 WO2022024964A1 (en) 2020-07-29 2021-07-26 Head-up display device

Publications (1)

Publication Number Publication Date
CN116157290A true CN116157290A (en) 2023-05-23

Family

ID=80035672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180058974.3A Pending CN116157290A (en) 2020-07-29 2021-07-26 Head-up display device

Country Status (4)

Country Link
JP (1) JPWO2022024964A1 (en)
CN (1) CN116157290A (en)
DE (1) DE112021004005T5 (en)
WO (1) WO2022024964A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011082985B4 (en) * 2011-09-19 2023-04-27 Bayerische Motoren Werke Aktiengesellschaft Head-up display for a vehicle and method for projecting an image
JP2018120135A (en) 2017-01-26 2018-08-02 日本精機株式会社 Head-up display
JP7338625B2 (en) * 2018-07-05 2023-09-05 日本精機株式会社 head-up display device
JP2020064235A (en) * 2018-10-19 2020-04-23 コニカミノルタ株式会社 Display device

Also Published As

Publication number Publication date
DE112021004005T5 (en) 2023-06-01
WO2022024964A1 (en) 2022-02-03
JPWO2022024964A1 (en) 2022-02-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination