WO2021241718A1 - Head-up display device - Google Patents

Head-up display device

Info

Publication number
WO2021241718A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
warping
display
viewpoint
virtual image
Prior art date
Application number
PCT/JP2021/020328
Other languages
English (en)
Japanese (ja)
Inventor
勇希 舛屋 (Yuki Masuya)
Original Assignee
日本精機株式会社 (Nippon Seiki Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 日本精機株式会社 (Nippon Seiki Co., Ltd.)
Priority to JP2022526655A (JPWO2021241718A1)
Publication of WO2021241718A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 - Control arrangements or circuits characterised by the display of a graphic pattern with means for controlling the display position

Definitions

  • The present invention relates to a head-up display (HUD) device or the like that projects the display light of an image onto a projected member, such as a windshield or combiner of a vehicle, and displays a virtual image in front of the driver or the like.
  • An image correction process (hereinafter referred to as a warping process), in which the projected image is pre-distorted with characteristics opposite to the distortion of the virtual image caused by the curved shapes of the optical system, the windshield, and the like, has been known.
  • the warping process in the HUD apparatus is described in, for example, Patent Document 1.
  • Patent Document 2 describes that the warping process is performed based on the viewpoint position of the driver (viewpoint following warping).
  • The present inventor has considered implementing viewpoint-position-following warping control that updates the warping parameters according to the viewpoint position of the driver (which can be interpreted broadly to include other occupants and the like), and has recognized the challenges described below.
  • The present inventor has further improved the inclined-surface HUD so that one virtual image display surface includes both an inclined surface and an elevation surface (including a pseudo-elevation surface): for example, a depth image (virtual image) extending along the road surface is displayed on the distant inclined surface, while non-superimposed content (for example, a virtual image composed of always-displayed numbers and characters) is displayed on the nearby elevation surface.
  • The viewpoint position of the driver may move (shift) along the width direction (left-right direction) of the vehicle, and the visibility then changes due to motion parallax. If this causes an adverse visual effect, it is preferable to devise the warping process so as to address that point as well.
  • "Motion parallax" is the parallax caused by movement of the observer's viewpoint (or of the observation target): even for the same amount of change in the line-of-sight direction, the closer an object is, the more its position in the visual field changes, and the farther it is, the less its position changes; this difference in the amount of positional change is said to make perspective easier to perceive.
  • In Patent Documents 1 and 2, the relationship between parallax and the warping process is not examined at all, nor is any consideration given to what should be done when the viewpoint moves during the warping process.
  • One object of the present invention is to realize warping control that can ensure more natural visibility, for example for a depth image (inclined image) premised on three-dimensional viewing and for a facing image (standing image) not premised on three-dimensional viewing.
  • The head-up display device is a head-up display (HUD) device that is mounted on a vehicle and projects an image onto a projected member provided on the vehicle so that the driver can visually recognize a virtual image of the image.
  • It includes an image generation unit that generates the image; a display unit that displays the image; an optical system including an optical member that reflects the display light of the image and projects it onto the projected member; and
  • a control unit that updates the warping parameters according to the viewpoint position of the driver in the eyebox and performs viewpoint-position-tracking warping control that corrects the image displayed on the display unit using the warping parameters.
  • When performing the warping process on a depth image, the control unit performs warping control by taking the contour of the image area of the depth image, when correctly recognized as a virtual image, to be a trapezoid in which at least one pair of opposite sides is parallel; when performing the warping process on a facing image, it performs warping control by taking the contour of the image area of the facing image, when correctly recognized as a virtual image, to be a rectangle (including a square).
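As a rough illustration of this choice of target contour (the patent prescribes behavior, not code; the helper below and all names and values in it are hypothetical), a sketch in Python:

```python
# Illustrative sketch only: pick the "correct-answer" contour that the
# warping control aims at, per the passage above. Names and values are
# assumptions, not taken from the patent.

def target_contour(image_kind, w=100.0, h=60.0, taper=0.3):
    """Return 4 corners (x, y), top edge first, of the contour the virtual
    image should show when correctly recognized by the viewer."""
    if image_kind == "depth":
        # Trapezoid: far (upper) side narrower; top and bottom stay parallel.
        dx = w * taper / 2.0
        return [(dx, 0.0), (w - dx, 0.0), (w, h), (0.0, h)]
    if image_kind == "facing":
        # Rectangle (or square): no perspective taper.
        return [(0.0, 0.0), (w, 0.0), (w, h), (0.0, h)]
    raise ValueError(f"unknown image kind: {image_kind!r}")
```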
  • In other words, the contour (outer shape) of the image area to be warped is trapezoidal for the depth image (for example, an inclined image) and rectangular (a rectangle or a square) for the facing image (for example, a standing image).
  • The depth image is, for example, an image with depth displayed on an inclined virtual image display surface. In the following, it may simply be referred to as a depth image or an inclined image.
  • The facing image is preferably an image displayed so as to face the driver, for example an image displayed on an elevation surface standing up from the road surface (including not only a surface orthogonal to the road surface but also a "pseudo-elevation surface" that is partly inclined yet can be treated as upright as a whole). In the following description, it may simply be referred to as a facing image or a standing image.
  • The viewpoint-position-tracking warping process can be realized, for example, by determining the warping parameters (for example, polynomial coefficients, multipliers, constants, and the like for warping image correction using a digital filter) according to the position of the viewpoint in the eyebox, and by using the warping parameters to perform coordinate conversion on a plurality of coordinate points set in the image area of the image to be warped, thereby giving the image in advance a distortion with characteristics opposite to the distortion caused by the optical members.
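As a rough illustration of the parameter update and coordinate conversion just described, a minimal Python sketch (the bilinear polynomial form, the parameter values, and the function names are assumptions for illustration, not the patent's specification):

```python
# Sketch of viewpoint-position-tracking warping: look up warping parameters
# for the current viewpoint, then coordinate-convert the grid points of the
# image area, pre-distorting opposite to the optics-induced distortion.
import numpy as np

def warp_points(points, coeffs_x, coeffs_y):
    """Apply a 2D polynomial (here bilinear: 1, x, y, x*y) to each point."""
    out = []
    for x, y in points:
        basis = np.array([1.0, x, y, x * y])
        out.append((float(basis @ coeffs_x), float(basis @ coeffs_y)))
    return out

# Hypothetical parameters for one viewpoint position (near-identity warp).
coeffs_x = np.array([0.5, 1.0, 0.0, 1e-4])
coeffs_y = np.array([-0.2, 0.0, 1.0, -2e-4])
grid = [(x, y) for y in range(0, 61, 30) for x in range(0, 101, 50)]
print(warp_points(grid, coeffs_x, coeffs_y))
```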
  • Assume that an image (virtual image) is displayed on an elevation surface perpendicular to the road surface.
  • The image area to be image-corrected may be an entire area or a partial area of the display.
  • The contour here means the contour visually recognized in the virtual image display that corresponds to the contour of the image area (in other words, the contour when the viewer correctly sees it as a virtual image; for convenience of explanation its shape is taken to be, for example, a rectangle).
  • In the conventional warping process, the contour (outer shape) of the image area was uniformly set as a rectangle (a rectangle or a square); here, a contour (outer shape) suitable for each of the depth image (for example, an inclined image) and the facing image (for example, a standing image) is adopted.
  • The above-mentioned "contour when the image is correctly viewed by the viewer as a virtual image" can also be described as the contour of the "normal image (virtual image)" or "correct image (virtual image)" that the HUD device intends to display. If that contour is a trapezoid, the HUD device takes the trapezoid as the correct answer and warps the (rectangular) contour of the image area so that it is displayed accordingly.
  • Assume that the contour of the inclined virtual image display surface arranged in front of the vehicle (taken here as the outline of the entire virtual image display surface, though it may instead be the outline of the image display area of a particular image) is seen from a viewpoint located at the center of the eyebox (in other words, when the driver views it from the front). It then looks like a trapezoid (including a parallelogram).
  • When the viewpoint moves (shifts) in the width direction (left-right direction) of the vehicle, the relative position between the viewpoint and the virtual image changes, so that under the effect of motion parallax the trapezoidal shape is distorted and deformed to the left or right.
  • The outline of the image area of an image displayed on the inclined virtual image display surface (an image of a long arrow, an image of a center line, etc.) is therefore also based on the trapezoid.
  • The contour that corresponds to the contour of the image area and is visually recognized when the virtual image is displayed is a rectangle (a rectangle or a square).
  • As a result, the virtual image after the warping process (after the distortion caused by the windshield or the like is removed) has a more natural perspective, and visibility is improved by the more natural three-dimensional effect.
  • Further, the change in the appearance of the image (virtual image) caused by movement of the viewpoint is felt to be the same as that of the road surface in the background. This is advantageous for making the driver feel that the image (virtual image) is superimposed on (matches) the road surface, and in this respect as well it leads to improved visibility of the road-surface-superimposed HUD.
  • For the facing image, the outline (outer shape) of the image area is a rectangle (a rectangle or a square), as in the conventional case.
  • The vehicle speed display shown on the elevation surface is often displayed at a fixed position at all times, and it is important that numbers, symbols, characters, and the like can be read accurately; since perspective is not particularly necessary, there is no advantage in making the contour trapezoidal as for an inclined image, and the rectangular outer shape is therefore adopted here, as before.
  • In other words, the contour of the image area of the depth image (inclined image) and the contour of the image area of the facing image (standing image) are set individually and warped separately; different warping methods are thus applied to the two.
  • As a result, for depth images a display with improved visibility that gives a natural perspective is realized, and for facing images (standing images) a display that is easier to see and easier to recognize (suitable for grasping accurate information, etc.) is realized.
  • When the viewpoint moves along the width direction of the vehicle while the warping process is being performed on the depth image, the control unit may perform warping control that deforms the trapezoid so as to reflect the deformation of the trapezoid caused by the movement; when the viewpoint moves along the width direction of the vehicle while the warping process is being performed on the facing image, it may perform warping control that deforms the rectangle so as to cancel out the deformation of the rectangle caused by the movement.
  • That is, when the viewpoint moves (shifts) to the left or right, the influence of parallax is taken into consideration separately for the depth image (inclined image) and the facing image (standing image).
  • For the depth image, warping is performed with the contour of the image area as a trapezoid.
  • Seen from the front, the trapezoid looks like an isosceles trapezoid (the upper base and the lower base are parallel, the interior angles at both ends of the upper base are equal, and the interior angles at both ends of the lower base are also equal).
  • Suppose the viewpoint shifts in the width direction (left-right direction) of the vehicle. The position of the displayed virtual image is fixed (to be exact, its position in the coordinate system set in real space is fixed; for example, a virtual image of the center line is meaningless unless it is superimposed on the center of the road surface, so its position cannot be moved), but when the viewpoint moves, the relative positional relationship between the virtual image and the viewpoint changes.
  • The above image distortion (the image looking deformed under the influence of motion parallax) can, of course, be anticipated from the beginning.
  • the motion parallax creates a natural stereoscopic effect (perspective), and therefore the above distortion assists the natural vision of the depth image.
  • For the facing image, on the other hand, the motion parallax associated with movement of the viewpoint adversely affects vision: the vehicle speed display and the like must always be seen as an undeformed standing image so as not to reduce the recognizability of the display.
  • Assume, for example, that a rectangular image (virtual image) is visible from a viewpoint located at the center of the eyebox, and that the viewpoint then moves in the left-right direction.
  • the position of the virtual image in the coordinate system set in the real space is fixed (for example, it can be assumed that the vehicle speed display is always displayed near the lower right end of the windshield).
  • In this case, the deformation based on such motion parallax (deformation of the rectangle) is suppressed (preferably canceled out).
  • Motion parallax correction is performed to deform the image (the outline of the image area being a rectangle) in the direction opposite to the actual deformation caused by motion parallax, and warping, including distortion correction that gives the deformed image area a distortion with characteristics opposite to the distortion occurring in the optical system, is then performed.
  • In this sense, "warping" in the present invention includes a correction process for optimizing the appearance of the virtual image (the image as affected by motion parallax) in consideration of parallax and motion parallax, rather than simply correcting distortion.
  • the motion parallax correction and the distortion correction are performed as a set.
  • When the viewpoint shifts, the light of the image that has been deformed in the direction opposite to the deformation due to motion parallax is incident on the human eye.
  • The deformation due to motion parallax is canceled out by the reverse deformation applied in advance, so that when the human brain interprets the image, it looks like a rectangular (rectangle or square) image.
  • The image thus keeps its rectangular shape and does not change, so recognizability does not deteriorate.
  • Visibility is improved because an accurate facing image is always obtained.
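A minimal sketch of this paired correction, assuming a simple linear shear model for the parallax-induced deformation; the gain, corner ordering, and function names are assumptions, not the patent's method:

```python
# Sketch of the two-stage correction for a facing image described above:
# (1) deform the rectangle opposite to the expected motion-parallax
# deformation, then (2) apply the usual opposite-characteristic distortion
# pre-correction. Performed as a set, per the passage above.

def motion_parallax_precorrect(corners, viewpoint_dx, gain=0.05):
    """Shear the rectangle opposite to the deformation a lateral viewpoint
    shift of viewpoint_dx would cause, so the two cancel for the viewer."""
    shear = -gain * viewpoint_dx          # opposite sign: pre-cancel
    return [(x + shear * (1.0 - y_norm), y) for (x, y), y_norm in
            zip(corners, (0.0, 0.0, 1.0, 1.0))]  # top corners move most

def warp_facing_image(corners, viewpoint_dx, distortion_correct):
    # Parallax cancellation first, then optics distortion correction.
    return [distortion_correct(p) for p in
            motion_parallax_precorrect(corners, viewpoint_dx)]
```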
  • In the warping control for the facing image, the control unit thus deforms the rectangle so as to cancel the deformation of the rectangle.
  • In doing so, the lower side of the rectangle may be fixed and the upper side moved to produce the deformation,
  • or the upper side may be fixed and the lower side moved.
  • The upper side of the rectangle displayed ahead is farther away and the lower side is nearer; considering motion parallax, the farther a point is, the smaller its perceived change in position.
  • Therefore, when the upper side is moved, the amount of movement (the substantial deformation amount) can be smaller than when the lower side is moved to deform the rectangle, and the two approaches differ in this respect, as the sketch below illustrates.
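A hedged numeric sketch of that difference, assuming the usual small-angle approximation in which apparent shift scales with the inverse of distance; the distances are made-up values:

```python
# Why moving the far (upper) edge needs less movement: the apparent
# (angular) shift of an edge under a lateral viewpoint shift scales
# roughly with 1/distance, so the same perceived deformation corresponds
# to a smaller movement at the far edge. Illustrative values only.

viewpoint_shift = 0.05        # m, lateral eye movement
d_upper, d_lower = 10.0, 3.0  # m, distances to upper (far) / lower (near) edge

ang_upper = viewpoint_shift / d_upper   # rad, small-angle approximation
ang_lower = viewpoint_shift / d_lower
print(f"upper-edge parallax ~{ang_upper:.4f} rad, "
      f"lower-edge parallax ~{ang_lower:.4f} rad")
# The upper edge's parallax is smaller, so a correction acting on the
# upper side can use a correspondingly smaller movement amount.
```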
  • If the viewpoint moves along the width direction of the vehicle while the warping process is being performed on the facing image, the control unit may change the position of the facing image so that the virtual image of the facing image remains fixed at a predetermined position in a viewpoint coordinate system referenced to the viewpoint.
  • This is an alternative to applying a deformation opposite to the deformation of the rectangle caused by motion parallax: the display position of the facing image (virtual image) is shifted according to the movement of the viewpoint.
  • The problem that the facing image is distorted by a viewpoint position shift arises because the display position of the facing image (virtual image) is fixed in the coordinate system set in real space.
  • the above problem is dealt with by eliminating the premise that the display position in the coordinate system set in the real space is fixed.
  • Specifically, in the viewpoint coordinate system (a coordinate system centered on the person's viewpoint, with axes along the front-rear, left-right, and vertical directions of the vehicle; when the person's viewpoint (including the face, etc.) moves, the coordinate system moves accordingly), control is performed to move the position of the image (the rectangular image area) on the display surface of the display unit (or the position on the image generation surface or in the image generation space of the image generation unit) appropriately in correspondence with the movement of the viewpoint, so that the virtual image of the facing image always appears at the same position.
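A minimal sketch of this position-shifting control, assuming a simple proportional mapping from viewpoint offset to display offset; the gain and coordinate conventions are assumptions:

```python
# Sketch of the alternative described above: instead of counter-deforming
# the rectangle, shift the facing image on the display surface so that its
# virtual image stays at a fixed position in the viewpoint coordinate system.

def display_position(base_xy, viewpoint_xy, eyebox_center_xy, gain=(0.4, 0.4)):
    """Move the image's display position in step with the viewpoint so the
    virtual image appears fixed relative to the viewer's eye."""
    bx, by = base_xy
    dx = viewpoint_xy[0] - eyebox_center_xy[0]
    dy = viewpoint_xy[1] - eyebox_center_xy[1]
    return (bx + gain[0] * dx, by + gain[1] * dy)

# Example: viewpoint 20 mm right of eyebox center -> image shifts with it.
print(display_position((800, 450), (20, 0), (0, 0)))
```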
  • When displaying a mixture of the depth image and the facing image, the control unit may subject the depth image and the facing image individually to image processing and then combine the processed images into one image.
  • That is, each image (or each image area) is corrected individually, for example with an image correction unique to that image, and the corrected images are then synthesized to generate one image.
  • This method can be realized by, for example, image rendering.
  • By treating the depth image (or its image area) and the facing image (or its image area) separately, no particular problem occurs even when different image corrections must be performed for each image, and the process can be simplified.
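A minimal sketch of such per-layer correction followed by compositing, using NumPy alpha-over blending; the layer format and the correction callables are assumptions, not the patent's rendering pipeline:

```python
# Correct the depth image and the facing image separately, then composite
# them into the one frame sent to the display.
import numpy as np

def compose_frame(depth_rgba, facing_rgba, warp_depth, warp_facing):
    """Apply each layer's own correction, then alpha-composite facing
    (e.g., vehicle speed) over depth (e.g., route arrow). Inputs are
    uint8 HxWx4 arrays; warp_* are per-layer correction callables."""
    d = warp_depth(depth_rgba)     # trapezoid-based warping for depth layer
    f = warp_facing(facing_rgba)   # rectangle-preserving warping for facing
    a = f[..., 3:4] / 255.0
    out = d.copy()
    out[..., :3] = (f[..., :3] * a + d[..., :3] * (1 - a)).astype(np.uint8)
    return out
```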
  • Suppose that, in the image region of the depth image viewed from a predetermined first position in the left-right direction of the eyebox, the region surrounded by a trapezoid whose opposite sides are parallel to each other is defined as the first contour.
  • When viewing from the left of the first position, the control unit performs warping control so that, reflecting the deformation of the trapezoid caused by the movement, the upper side of the first contour lies relatively to the right of the lower side;
  • when viewing from the right of the first position, the warping control is performed so that the upper side of the first contour lies relatively to the left of the lower side.
  • Otherwise, the motion parallax of the depth image (inclined image) differs from the motion parallax of the background, such as the road surface on which it is superimposed.
  • Specifically, the motion parallax of the depth image (inclined image) becomes larger than that of the background such as the road surface, and the superposition (coincidence) between the depth image and the background decreases.
  • Warping is therefore performed so that the distortion of the first contour caused by motion parallax is reduced as visually recognized by the driver.
  • For example, when the position of the upper side of the first contour is shifted to the left relative to the lower side by motion parallax, a correction is made to reduce that distortion.
  • the correction for the first contour and the correction for the second contour are both corrections in the direction of weakening the distortion caused by the motion parallax, but the correction for the first contour is smaller than the correction for the second contour.
  • As a result, the motion parallax of the depth image approaches that of the background such as the road surface, and a display with improved visibility, in which a natural perspective can be felt, is realized.
  • When the control unit performs the warping process on the depth image and the viewpoint moves along the width direction of the vehicle, warping control reflecting the deformation of the trapezoid caused by the movement is performed.
  • When the displayed image is a road-surface-superimposed image displayed so as to be superimposed on the road surface, and the virtual image of the road-surface-superimposed image is displayed on an inclined surface inclined with respect to the road surface so that the virtual image appears lifted off the road surface, warping control with motion parallax correction may be performed so as to reduce the degree of deformation of the trapezoid compared with the case where the virtual image does not appear lifted off the road surface.
  • That is, when the displayed image is a road-surface-superimposed image (for example, an image of an arrow figure) displayed so as to be superimposed on the road surface, and the virtual image of the road-surface-superimposed image is inclined with respect to the road surface, motion parallax correction is performed so as to reduce the degree of deformation of the trapezoid that is the outline of the display area of the image (in other words, of the arrow image).
  • This is because the position of the virtual image of the actual arrow figure is nearer to the driver (viewer) than the position of the road surface on which the arrow is superimposed (the visual landing position of the arrow).
  • Motion parallax occurs both in the virtual image of the arrow seen in the foreground and in the road surface seen behind it; since the road surface is farther away, its motion parallax is smaller, while the virtual image of the arrow, being nearer, shows a more strongly perceived motion parallax.
  • The motion parallax of the virtual image of the arrow therefore feels larger than when the virtual image is, for example, in close contact with the road surface, and the deformation (tilt) of the virtual image of the arrow is emphasized. In this case, motion parallax correction is performed so that the deformation (tilt) is reduced, in other words, so that the deformation of the trapezoid that is the outline of the display area of the arrow image is suppressed to some extent (without, however, being completely canceled out). As a result, a display with appropriate motion parallax is obtained.
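A minimal sketch of such partial correction, modeling the parallax-induced deformation as a single shear value and keeping part of it; the gain model is an assumption, not the patent's formula:

```python
# When the arrow's virtual image floats above the road it superimposes,
# its motion parallax is over-emphasized, so the correction reduces the
# trapezoid's deformation without cancelling it completely.

def corrected_shear(raw_shear, correction_gain=0.6):
    """Keep (1 - gain) of the parallax-induced shear: gain 0.0 would keep
    the full deformation, gain 1.0 would cancel it like a facing image."""
    assert 0.0 <= correction_gain < 1.0   # deformation must partly remain
    return raw_shear * (1.0 - correction_gain)

print(corrected_shear(0.10))  # e.g. 0.10 -> 0.04: reduced, not removed
```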
  • FIG. 1(A) is a diagram for explaining an outline of warping control (conventional warping control) in a conventional HUD device that displays a virtual image on a virtual image display surface standing upright with respect to the road surface, and the mode of distortion of the virtual image (and of the virtual image display surface) displayed through that warping control.
  • FIG. 1(B) is a diagram showing an example of a virtual image visually recognized by the driver through the windshield.
  • FIG. 2A is a diagram for explaining an outline of viewpoint position tracking warping control
  • FIG. 2B is a diagram showing a configuration example of an eyebox whose inside is divided into a plurality of partial regions.
  • FIG. 3(A) is a diagram showing an example of a virtual image visually recognized by a user through the windshield,
  • FIG. 3(B) is a diagram showing a state of virtual image display on a virtual image display surface, and
  • FIG. 3(C) is a diagram showing an example of image display on the display surface of a display unit.
  • FIG. 4(A) is a diagram showing the configuration of a HUD device mounted on a vehicle and an example of a virtual image display surface, and
  • FIGS. 4(B) and 4(C) are diagrams showing examples of methods of realizing the virtual image display surface shown in FIG. 4(A).
  • FIG. 5 is a diagram showing an example of a display method of a HUD device (parallax type HUD device or 3D HUD device) that displays a three-dimensional virtual image.
  • FIG. 6 is a diagram showing an example of viewpoint tracking warping control in a HUD device capable of displaying at least one of a virtual image of a standing image (including a pseudo standing image) and a virtual image of an inclined image (depth image).
  • FIG. 7(A) is a diagram showing a state in which the viewpoint (monocular) of the driver looking at an inclined surface (for example, the inclined-image HUD region of the virtual image display surface) is located in the central divided region of the eyebox,
  • FIGS. 7(F), 7(G), and 7(H) are diagrams showing the appearance of the virtual image of an arrow (the shape of the virtual image of the arrow) when the viewpoint is located in each of the left, central, and right divided regions of the eyebox, and
  • FIGS. 7(I) and 7(J) are diagrams showing the appearance of the virtual image of the arrow with and without motion parallax correction.
  • FIG. 8(A) is a diagram showing a state in which the viewpoint (monocular) of the driver looking at an elevation surface (including a pseudo-elevation surface; for example, the standing-image HUD region of the virtual image display surface) is located in the central divided region of the eyebox,
  • FIGS. 8(B), 8(C), and 8(D) are diagrams showing cases where the viewpoint moves from that state along the width direction (left-right direction) of the vehicle,
  • FIGS. 8(E), 8(F), and 8(G) are diagrams showing the contents of the image correction applied to the display image in order to suppress deformation of the rectangle that is the contour of the elevation surface, and
  • FIGS. 8(H), 8(I), and 8(J) are diagrams showing the appearance of the virtual image of the arrow (its shape) when the viewpoint is located in each of the left, central, and right divided regions of the eyebox.
  • FIGS. 9(A) to 9(D) are diagrams for explaining two examples of image correction (deforming the lower side or the upper side) applied to a display image to suppress deformation of the rectangle that is the contour of the elevation surface.
  • FIG. 10(A) is a diagram showing the contents of warping control for an inclined image (depth image) displayed on an inclined surface,
  • FIG. 10(B) is a diagram showing the contents of warping control for a standing image (facing image) displayed on an elevation surface,
  • FIGS. 10(C) to 10(E) are diagrams showing an example of generating the image to be displayed on the display surface by image rendering, and
  • FIGS. 10(F) to 10(H) are diagrams showing examples of the image (visually recognized image, visually recognized virtual image) seen by the driver according to the viewpoint position.
  • FIG. 12(A) is a diagram showing an example of the configuration of a main part of the HUD device when a standing image (pseudo-standing image) is fixedly displayed at a predetermined position in the viewpoint coordinate system regardless of the viewpoint position, and FIG. 12(B) is a diagram showing that the display (virtual image) appears to move following the movement of the viewpoint.
  • FIG. 13 is a diagram showing an example of the procedure of viewpoint-tracking warping control.
  • FIGS. 14(A) and 14(B) are diagrams showing modified examples of the virtual image display surface.
  • The depth image is, for example, an image with depth displayed on an inclined virtual image display surface. The case where the inclination angle with respect to the road surface is zero, that is, the case where the image is superimposed on the road surface, may also be included.
  • The facing image is preferably an image displayed so as to face the driver, for example an image displayed on an elevation surface standing up from the road surface (including not only a surface orthogonal to the road surface but also a "pseudo-elevation surface" that is partly inclined yet can be treated as upright as a whole). It may simply be referred to as a facing image or a standing image.
  • FIG. 1(A) is a diagram for explaining an outline of warping control (conventional warping control) in a conventional HUD device that displays a virtual image on a virtual image display surface standing upright with respect to the road surface, and the mode of distortion of the virtual image (and of the virtual image display surface) displayed through that warping control; FIG. 1(B) is a diagram showing an example of a virtual image visually recognized by the driver through the windshield.
  • The HUD device 100 is mounted on the vehicle 1 ("vehicle" being interpretable in a broad sense).
  • The HUD device 100 includes a display unit (for example, a light-transmitting screen) 101, a reflecting mirror 103, and, as an optical member for projecting the display light, a curved mirror (for example, a concave mirror) 105, whose reflecting surface may be a free-form surface.
  • the image displayed on the display unit 101 is projected onto the virtual image display area 5 of the windshield 2 as a projected member via the reflecting mirror 103 and the curved mirror 105.
  • Reference numeral 4 indicates a projection area.
  • the HUD device 100 may be provided with a plurality of curved mirrors.
  • As the optical member, a refractive optical element such as a lens, a diffractive optical element, or the like can also be used; a configuration including such a functional optical element may be adopted.
  • A part of the display light of the image is reflected by the windshield 2 and is incident on the viewpoint (eye) A of the driver or the like located inside (or on the boundary of) the preset eyebox EB (here, a rectangle having a predetermined area), forming an image in front of the vehicle 1; the virtual image V is thereby displayed on the virtual image display surface PS corresponding to the display surface 102 of the display unit 101.
  • the image of the display unit 101 is distorted due to the influence of the shape of the curved mirror 105, the shape of the windshield 2, and the like.
  • distortion occurs due to the optical system of the HUD device 100 and the optical member including the windshield 2.
  • To counter this, warping processing (warping image correction processing) is performed.
  • the virtual image V displayed on the virtual image display surface PS by the warping process becomes a flat image without curvature.
  • However, since the display light is projected onto the wide projection area 4 on the windshield 2 and the virtual image display distance is set over a considerably wide range, it is undeniable that some distortion unavoidably remains.
  • PS' shown by a broken line indicates a virtual image display surface from which the distortion has not been completely removed, and V' indicates a virtual image displayed on the virtual image display surface PS'.
  • The degree or mode of the residual distortion of the virtual image V' differs depending on the position of the viewpoint A in the eyebox EB. Since the optical system of the HUD device 100 is designed on the assumption that the viewpoint A is located near the central portion, the distortion of the virtual image is relatively small when the viewpoint A is near the center and tends to increase when the viewpoint A is in the peripheral portion.
  • FIG. 1B shows an example of a virtual image V visually recognized by the driver through the windshield 2.
  • The virtual image V having a rectangular outer shape has, for example, five reference points vertically and five horizontally, for a total of 25 reference points (reference pixel points or coordinate points) GD(i, j), where i and j are variables that can take values from 1 to 5.
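A small sketch of such a reference-point layout (the pixel coordinates are illustrative assumptions):

```python
# 5x5 reference-point grid GD(i, j), i and j running from 1 to 5, as
# described above; each point is a warping target coordinate.

def reference_grid(width=1000, height=600, n=5):
    """Return {(i, j): (x, y)} for the n x n warping reference points."""
    return {(i, j): ((j - 1) * width / (n - 1), (i - 1) * height / (n - 1))
            for i in range(1, n + 1) for j in range(1, n + 1)}

grid = reference_grid()
print(len(grid), grid[(1, 1)], grid[(5, 5)])  # 25 points, two corners
```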
  • reference numeral 7 is a steering wheel.
  • FIG. 2A is a diagram for explaining an outline of the viewpoint position tracking warping process
  • FIG. 2B is a diagram showing a configuration example of an eyebox whose inside is divided into a plurality of partial regions.
  • In FIG. 2, the same reference numerals are given to the parts common to FIG. 1 (the same applies to the following figures).
  • In FIG. 2(A), the eyebox EB is divided into a plurality of (here, nine) partial regions J1 to J9, and the position of the driver's viewpoint A is detected in units of the partial regions J1 to J9.
  • the display light K of the image is emitted from the projection optical system 118 of the HUD device 100, and a part of the display light K is reflected by the windshield 2 and incident on the driver's viewpoint (eye) A.
  • the viewpoint A is in the eye box, the driver can visually recognize the virtual image of the image.
  • the HUD device 100 has a ROM 210, and the ROM 210 has a built-in image conversion table 212.
  • the image conversion table 212 stores, for example, a warping parameter WP that determines a polynomial, a multiplier, a constant, or the like for image correction (warping image correction) by a digital filter.
  • the warping parameter WP is provided corresponding to each of the partial regions J1 to J9 in the eyebox EB.
  • WP (J1) to WP (J9) are shown as warping parameters corresponding to each partial region.
  • only WP (J1), WP (J4), and WP (J7) are shown as reference numerals.
  • When the viewpoint A moves, its position among the partial regions J1 to J9 is detected. Then the one of the warping parameters WP(J1) to WP(J9) corresponding to the detected partial region is read from the ROM 210 (warping parameter update), and the warping process is performed using that warping parameter.
  • FIG. 2B shows an eyebox EB in which the number of partial regions is increased as compared with the example of FIG. 2A.
  • the eyebox EB is divided into a total of 60 partial regions, 6 in the vertical direction and 10 in the horizontal direction.
  • Each partial region is denoted J(X, Y), with its coordinate positions in the X and Y directions as parameters.
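A minimal sketch of this region lookup, assuming illustrative eyebox dimensions and table contents (the actual table lives in the ROM 210 / image conversion table 212):

```python
# Quantize the detected viewpoint position into a partial region J(X, Y)
# and fetch that region's warping parameters, mirroring WP(J1)..WP(J9)
# or WP(J(X, Y)). Dimensions and table entries are assumptions.

EYEBOX_W, EYEBOX_H = 130.0, 60.0   # mm, assumed eyebox size
NX, NY = 10, 6                     # 10 x 6 = 60 partial regions (FIG. 2(B))

def region_of(viewpoint_x, viewpoint_y):
    """Map a viewpoint position (origin at an eyebox corner) to (X, Y)."""
    X = min(int(viewpoint_x / (EYEBOX_W / NX)), NX - 1)
    Y = min(int(viewpoint_y / (EYEBOX_H / NY)), NY - 1)
    return X, Y

warping_table = {(X, Y): f"WP(J({X},{Y}))" for X in range(NX) for Y in range(NY)}
print(warping_table[region_of(70.0, 25.0)])  # parameters for that region
```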
  • FIG. 3(A) is a diagram showing an example of a virtual image visually recognized by a user through the windshield,
  • FIG. 3(B) is a diagram showing a state of virtual image display on a virtual image display surface, and
  • FIG. 3(C) is a diagram showing an example of image display on the display surface of a display unit.
  • the virtual image produced by the HUD device 100 is displayed in the virtual image display area 5 in the windshield 2.
  • a virtual image SP of a vehicle speed display (display of “120 km / h”) is displayed on the front side when viewed from a user (driver or the like).
  • the virtual image of the vehicle speed display SP can be said to be a virtual image of a "standing image”, in other words, a virtual image G1 of a "face-to-face image” displayed so as to face the user.
  • The virtual image G1 of the "standing image (facing image)" is non-superimposed content that is always displayed on the near side as viewed from the user (content, such as a display of the state of the vehicle 1 or of the situation around the vehicle 1, that is not intended to be superimposed on a target), or it may be a navigation display consisting of at least one of characters, figures, symbols, and the like.
  • On the far side, a curved navigation arrow AW is displayed so as to cover the road surface 40 extending linearly in front of the vehicle 1, gradually rising from the side close to the vehicle 1 toward the side far from it.
  • the display of the arrow AW gives a unique three-dimensional vision, and is also an excellent display with a sense of presence and aesthetics.
  • the display of the arrow AW can be broadly referred to as a virtual image G2 of a "depth image" having depth information (distance information) displayed as an inclined image.
  • The "curved surface" may include a portion that is planar.
  • the virtual image G2 of the "depth image” is an information image including a figure of an arrow relating to the progress of the vehicle 1 or another vehicle, and a figure other than the arrow (for example, a triangular figure indicating the traveling direction, a straight line indicating the center line, etc.).
  • The figure may be, for example, a figure that covers the road surface 40, has at least its main part separated from the road surface 40, and is drawn along the road surface 40 (in other words, "a figure that gives visual overlap with the road surface" or "a figure extending along the road surface").
  • an operation unit 9 capable of switching on / off of the HUD device or the like and setting an operation mode or the like is provided in the vicinity of the steering wheel (in a broad sense, the steering handle) 7.
  • a display device for example, a liquid crystal display device 13 is provided in the center of the front panel 11.
  • the display device 13 can be used, for example, for assisting the display by the HUD device.
  • the display device 13 may be a composite panel having a touch panel or the like.
  • The virtual image display surface PS1, which is an image plane, is a curved surface extending integrally from the near end U1, on the side closer to the vehicle 1 as viewed from the viewpoint (eye) A of the user (driver or the like) of the vehicle (own vehicle) 1, to the far end U3 on the far side; the second distance h2 between the road surface 40 and the far end U3 is set larger than the first distance h1 between the road surface 40 and the near end U1.
  • The virtual image display surface PS1 is divided into a standing image HUD region Z1 (surrounded by a broken line in the figure), which includes the near end portion U1 and in which the virtual image G1 of the "standing image (facing image)" is displayed above the road surface 40, and a tilted image HUD region Z2, which is located farther away than the standing image HUD region Z1 (forward in the front-rear direction (Z direction) of the vehicle 1), is inclined toward the road surface 40 relative to the virtual image G1 of the "standing image (facing image)", and in which the virtual image G2 of the tilted image (depth image) is displayed.
  • the intermediate portion U2 is a portion (or a point) on the virtual image display surface PS1 located at the boundary between the standing image HUD region Z1 and the tilted image HUD region Z2.
  • the virtual image G1 of the standing image can be paraphrased as the first virtual image G1. Further, the virtual image G2 of the tilted image (depth image) can be paraphrased as the second virtual image G2.
  • Let the distance from the user's viewpoint A (or a reference point corresponding to the viewpoint A, set in the vehicle 1 or the like) to the near end U1 of the virtual image display surface PS1 (the "imaging distance", in other words the "virtual image display distance") be L1, the distance to the intermediate portion U2 be L2, and the distance to the far end portion U3 be L3; then L1 < L2 < L3 holds.
  • the length (extended range) of the virtual image display surface PS1 along the road surface 40 can be, for example, about 10 m to 30 m.
  • The area of the tilted image HUD region Z2 on the virtual image display surface PS1 is set larger than the area of the standing image HUD region Z1 (the configuration is not, however, limited to this).
  • By making the relative area of the tilted image HUD region Z2 large, a wide area that can be displayed with a sense of depth (in other words, an area with expressive power in the depth direction) is secured, and a realistic display of depth is easy to realize. In other words, display that effectively utilizes the tilted image HUD region Z2 is easy, while the standing image HUD region Z1 is not greatly restricted. Therefore, for example, guidance information (such as an arrow) can be presented to the user as an intuitive and realistic virtual image.
  • FIG. 3C shows an example of display control by the display control unit (reference numeral 190 in FIG. 11).
  • the specific configuration of the HUD device will be described later.
  • The display control unit divides the image display area 45 of the display surface 164 of the display unit 160, for example at a boundary position LN' that marks the boundary between the near side and the far side as viewed from the user, into a first display area Z1' corresponding to the standing image HUD region Z1 and a second display area Z2' corresponding to the tilted image HUD region Z2.
  • The display control unit (reference numeral 190 in FIG. 11) displays the first image RG1 (here, the vehicle speed display SP') at a predetermined position in the first display area Z1', and displays the second image RG2 (here, the navigation arrow figure AW) at a predetermined position in the second display area Z2'.
  • each point of U1', U2', and U3' corresponds to each point of U1, U2, and U3 in FIG. 3B.
  • The horizontal direction on the display surface 164 corresponds to the "left-right direction (X direction)" of the vehicle 1 in real space, and the vertical direction corresponds to the "height direction (Y direction)", that is, the direction perpendicular to the road surface 40 in real space.
  • As the first image RG1, vehicle information, vehicle surroundings information, navigation information, and the like can be exemplified. These require accurate display, quick recognition, and so on; vehicle speed information and the like are often displayed at all times, and they are displayed in an easy-to-see manner, standing at an angle of at least a predetermined or threshold angle (for example, 45 degrees or more).
  • Examples of such information include the vehicle speed display, road speed limit information, turn-by-turn information (for example, intersection name information, POI (specific point on the map) information, etc.), and displays of various icons and characters.
  • Examples of the second image RG2 include an arrow figure relating to the progress of the own vehicle or another vehicle, and information including figures other than arrows. These are mainly figures, and it is important that they can be grasped intuitively and that distance information can be obtained without discomfort. Note that the second image RG2 does not exclude, for example, characters.
  • The information image of a figure having depth is displayed as a virtual image with a sense of depth by the tilted-image HUD. Specifically, such information includes arrow information for route guidance, a white line indicating a center line, colored graphic information indicating a frozen road surface area, and ADAS (advanced driver assistance system) information that assists the driver, who is the user, in operating equipment such as the steering wheel.
  • a virtual image display with a sense of reality as shown in FIG. 3 (A) is realized.
  • With the first virtual image G1, which is a standing image standing at a certain angle or more with respect to the road surface 40, and the second virtual image G2, for example an arrow figure extending along the front-rear direction of the vehicle 1 so as to cover the road surface 40, it is possible to realize a display that is full of presence, easy to see, and highly expressive. The visibility of the virtual image display in the HUD device is therefore improved.
  • FIG. 4(A) is a diagram showing the configuration of a HUD device mounted on a vehicle and an example of a virtual image display surface, and FIGS. 4(B) and 4(C) are diagrams showing examples of methods of realizing the virtual image display surface shown in FIG. 4(A). Note that FIG. 4(A) adopts a configuration different from that of FIG. 1(A); therefore, different reference numerals are given even to corresponding parts.
  • The direction along the front of the vehicle 1 (also referred to as the front-rear direction) is defined as the Z direction, the direction along the width (horizontal width) of the vehicle 1 as the X direction, and the height direction of the vehicle 1 (the direction of a line segment perpendicular to the flat road surface 40, pointing away from the road surface 40) as the Y direction.
  • In FIG. 4(A), a HUD device 100 of the present embodiment, which can display only a "standing image", can display only an "inclined image", and can also display a "standing image and an inclined image" at the same time, is mounted inside the dashboard 41 of the vehicle (own vehicle) 1.
  • The HUD device 100 has a display unit (sometimes referred to as an image display unit; specifically, a screen) 160 having a display surface 164 on which an image is displayed, an optical system 120 including an optical member that projects the display light K of the image onto the windshield, and a light projecting unit (image projection unit) 150; the optical member is a curved mirror (a concave or magnifying mirror) 170 having a reflecting surface 179. The reflecting surface 179 of the curved mirror 170 need not have a uniform radius of curvature and can instead be a shape consisting of, for example, a set of partial regions with a plurality of radii of curvature, for example a free-form surface.
  • A free-form surface is a curved surface that cannot be expressed by a simple mathematical formula; the surface is expressed by setting several intersection points and curvatures in space and interpolating between the intersection points with higher-order equations.
  • the shape of the reflective surface 179 has a considerable influence on the shape of the virtual image display surface PS1 and the relationship with the road surface.
  • The shape of the virtual image display surface PS1 is affected by the shape of the reflecting surface 179 of the curved mirror (concave mirror) 170, by the curved shape of the windshield (reflective translucent member) 2, and by the shapes of other optical members mounted in the optical system 120 (for example, a correction mirror). It is also affected by the shape of the display surface 164 of the display unit 160 (generally flat, though all or part of it may be non-planar) and by the arrangement of the display surface 164 with respect to the reflecting surface 179.
  • the curved mirror (concave mirror) 170 is a magnifying reflector, and has a considerable influence on the shape of the virtual image display surface. Further, if the shape of the reflecting surface 179 of the curved mirror (concave mirror) 170 is different, the shape of the virtual image display surface actually changes.
  • The virtual image display surface PS1 extending integrally from the near end portion U1 to the far end portion U3 can be formed by arranging the display surface 164 of the display unit 160 obliquely, at an intersection angle of less than 90 degrees, with respect to the optical axis of the optical system (the principal optical axis corresponding to the chief ray).
  • The curved shape of the virtual image display surface PS1 may be adjusted through the optical characteristics of all or part of the optical system, through the arrangement of the optical members and the display surface 164, through the shape of the display surface 164, or through a combination of these. The shape of the virtual image display surface can thus be adjusted in various ways, and the virtual image display surface PS1 having the standing image HUD region Z1 and the tilted image HUD region Z2 can thereby be realized.
  • the mode and degree of the overall inclination of the virtual image display surface PS are adjusted according to the mode and degree of inclination of the display surface 164 of the display unit 160.
  • Assume that the distortion of the virtual image display surface due to the curved surface of the windshield (reflective translucent member) 2 is corrected by the curved shape of the reflecting surface 179 of the curved mirror (concave mirror or the like) 170, so that a flat virtual image display surface PS results.
  • By rotating the display surface 164, the degree to which the virtual image display surface PS, which is an inclined surface, is separated from the road surface 40 is adjusted.
  • Further, by adjusting the shape of the reflecting surface of the curved mirror (concave mirror or the like) 170, which is an optical member (or by adjusting the shape of the display surface 164 of the display unit 160), the portion of the virtual image display surface PS near its vehicle-side end (near end) U1 is bent toward the road surface and made to stand up with respect to the road surface (in other words, to form an elevation), thereby obtaining the virtual image display surface PS1.
  • The reflecting surface 179 of the curved mirror 170 can be divided into three portions: Near (nearby display), Center (middle (central) display), and Far (far display).
  • Near is the portion that generates the display light E1 (indicated by an alternate long and short dash line in FIGS. 4(A) and 4(B)) corresponding to the near end portion U1 of the virtual image display surface PS1, Center is the portion that generates the display light E2 (indicated by a broken line) corresponding to the intermediate (central) portion U2 of the virtual image display surface PS1, and Far is the portion that generates the display light E3 (indicated by a solid line) corresponding to the far end portion U3 of the virtual image display surface PS1.
  • In FIG. 4(C), the Center and Far portions are the same as those of the curved mirror (concave mirror or the like) 170 in the case of generating the flat virtual image display surface PS shown in FIG. 4(B), but the curvature of the Near portion is set smaller than in FIG. 4(B). The magnification corresponding to the Near portion then becomes large.
  • The magnification (referred to as c) of the HUD device is determined by the distance (referred to as a) from the display surface 164 of the display unit 160 to the windshield 2 and by the distance from the windshield (reflective translucent member) 2 to the position where the light it reflects forms the virtual image; as the magnification increases, the image is formed at a position farther from the vehicle 1. That is, in the case of FIG. 4(C), the virtual image display distance is larger than in the case of FIG. 4(B).
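A minimal numeric illustration of this relation, assuming the ratio form c = b/a (image-side distance over display-side distance) commonly used for such a magnification; all distances are made-up values, not taken from the patent:

```python
# Illustration of the magnification relation c = b/a as reconstructed above.
a = 0.5    # m, display surface 164 -> windshield 2 (assumed value)
b = 7.5    # m, windshield 2 -> position where the virtual image forms (assumed)
c = b / a
print(c)   # 15.0: a larger c places the image farther from the vehicle
```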
  • As a result, the near end U1 of the virtual image display surface moves away from the vehicle 1 and bends down toward the road surface 40, so that the standing image HUD region Z1 is formed, and a virtual image display surface PS1 having the standing image HUD region Z1 and the inclined image HUD region Z2 is obtained.
  • FIG. 5 is a diagram showing an example of a display method of a HUD device (parallax type HUD device or 3D HUD device) that displays a three-dimensional virtual image.
  • the eye point P (C) indicating the viewpoint position of the driver (user) is located at the center of the eye box EB.
  • In this case, the virtual image V(C) is located at the center of the overlapping region.
  • The convergence angle of the virtual image V(C) is θd, and the virtual image V(C) is recognized as a three-dimensional image by the driver (user, viewer).
  • This three-dimensional virtual image V(C) can be displayed (formed) as follows. For example, by distributing the light from the image IM displayed in time division with a MEMS scanner or the like, display lights L10 and R10 for the left and right eyes are obtained; the display lights L10 and R10 are reflected by a curved mirror (concave mirror or the like) 105 included in the optical system (the number of reflections being at least one), whereby the display light K is projected onto the windshield (projected member) 2; and the reflected light reaches both eyes of the driver and forms an image in front of the windshield 2, so that a three-dimensional virtual image V(C) with a sense of depth is displayed (formed).
  • the display method of an image having a sense of depth is not limited to the above.
  • Alternatively, images for the left and right eyes may be displayed simultaneously on a flat panel or the like, and the light from each image separated using a lenticular lens or a parallax barrier to obtain the display lights L10 and R10 for the respective eyes.
  • The present invention can be applied to a parallax-type HUD device in which a parallax image (a different image) is incident on each of the left and right eyes, but it is not limited to this and can also be applied to a HUD device in which the same image is incident on both eyes. These points will be described later.
  • FIG. 6 is a diagram showing an example of viewpoint tracking warping control in a HUD device capable of displaying at least one of a virtual image of a standing image (including a pseudo standing image) and a virtual image of an inclined image (depth image).
  • the same reference numerals are given to the parts common to those in FIGS. 1 and 3.
  • The inclined surface shall be interpreted in a broad sense and may, if necessary, be taken to include, for example, a surface superimposed on the road surface (one having an inclination angle of zero).
  • the display example of FIG. 6 is the same as that described with reference to FIG.
  • the virtual image display surface PS1 has an elevation (pseudo-elevation) PS1a including an elevation HUD region Z1 and an inclined surface PS1b including an inclined image HUD region Z2.
  • a vehicle speed display SP as a first virtual image, which is a facing image (standing image) of "120 km / h", is displayed.
  • an arrow AW for navigation as a second virtual image, which is a depth image (tilted image) is displayed.
  • the display method may be either a monocular type, in which the display light of the same image is incident on the left and right eyes, or a parallax type, in which different parallax images are incident; FIG. 6 shows a parallax type display example.
  • Each image is preliminarily given a distortion in the direction opposite to the distortion due to the curved surface of the windshield 2.
  • the first projected image G1' is a standing image, and since it is not necessary to express the depth by the parallax image, the same image (common image) is projected.
  • when the driver's viewpoints A1 and A2 are both located in the center of the eyebox EB, a perspective arrow extending linearly along the road surface is seen in front of the driver. On the near side, diagonally to the lower right of the arrow image, a vehicle speed display of "120 km/h" is seen. In the example of FIG. 6, this vehicle speed display is displayed at a fixed position in the real-space coordinate system and can, for example, be displayed at all times.
  • the driver's viewpoints A1 and A2 move (shift) along the width direction (horizontal direction, X direction) of the vehicle.
  • this movement of the viewpoint is indicated by the broken-line bidirectional arrow SE.
  • when the viewpoint moves, the display seen by the driver is deformed by motion parallax; for the arrow image AW, which is a depth image, this deformation is advantageous because it gives the image a natural perspective.
  • for the vehicle speed display SP, which is a facing image, the same deformation works disadvantageously because it lowers the driver's ability to read the content of the information. It is therefore preferable not to apply uniform warping to both, but to warp each image individually with a different image correction, as sketched below.
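  • a minimal sketch of this idea, with illustrative function names (the actual corrections are described in the sections that follow): each image is routed to a correction suited to its type before the common pre-distortion for the optics is applied.

```python
def warp_image(image, image_type, viewpoint_x,
               depth_correction, facing_correction, optics_predistortion):
    """Apply a per-type correction, then the shared optics pre-distortion.
    depth_correction keeps a natural trapezoidal perspective (arrow AW);
    facing_correction keeps the rectangle legible (speed display SP)."""
    if image_type == "depth":
        corrected = depth_correction(image, viewpoint_x)
    else:  # "facing"
        corrected = facing_correction(image, viewpoint_x)
    return optics_predistortion(corrected, viewpoint_x)
```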
  • a specific description will be given.
  • FIG. 7(A) is a diagram showing the appearance of an inclined surface (for example, the inclined image HUD region of the virtual image display surface) when the viewpoint of the driver (monocular) looking at the inclined surface is located in the central divided region of the eyebox.
  • FIGS. 7(B), (C), and (D) are diagrams showing the appearance of the inclined surface (the trapezoidal shape that is its outline) when the viewpoint moves from that state along the width direction (left-right direction) of the vehicle and is located in the left divided region, the central divided region, and the right divided region, respectively.
  • FIG. 7(E) is a diagram showing a virtual image of an arrow displayed on the inclined surface.
  • FIGS. 7(F), (G), and (H) are diagrams showing the appearance of the virtual image of the arrow (the shape of the virtual image of the arrow) when the viewpoint is located in the left divided region, the central divided region, and the right divided region of the eyebox, respectively.
  • FIGS. 7(I) and 7(J) are diagrams showing the appearance of the virtual image of the arrow with and without motion parallax correction.
  • the entire inclined surface PS1b on the virtual image display surface PS1 is treated as one display area visually recognizable by the driver, and the outline (outer shape) of this display area is a rectangle when viewed in plan from the direction perpendicular to the road surface. A grid of orthogonal line segments is drawn inside the inclined surface PS1b, indicating that each reference point (coordinate point) at the intersections can be treated as a target of warping.
  • the eyebox EB is divided into left, center, and right partial regions ZL, ZC, and ZR.
  • the codes AL, AC, and AR attached to the eye (viewpoint) correspond to the divided regions ZL, ZC, and ZR of the above-mentioned eye box EB.
  • the driver's eye (viewpoint) is initially located in the center of the eyebox EB (divided region ZC). Let X0 be the coordinate value in this state; the coordinate values after moving to the left and to the right are +X1 and −X1, respectively. The sketch below shows one way such a coordinate can be mapped to the partial regions.
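  • a minimal sketch of mapping a detected lateral viewpoint coordinate onto the three partial regions ZL / ZC / ZR; the eyebox width and the equal three-way split are assumptions for illustration:

```python
EYEBOX_HALF_WIDTH_MM = 65.0                    # assumed half-width
ZONE_BOUNDARY_MM = EYEBOX_HALF_WIDTH_MM / 3.0  # assumed equal thirds

def eyebox_zone(x_mm: float) -> str:
    """Partition for a viewpoint at lateral offset x_mm from the eyebox
    center (positive toward the left region in this sketch)."""
    if x_mm > ZONE_BOUNDARY_MM:
        return "ZL"   # left region -> viewpoint AL
    if x_mm < -ZONE_BOUNDARY_MM:
        return "ZR"   # right region -> viewpoint AR
    return "ZC"       # central region -> viewpoint AC
```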
  • the image seen by the driver from each of the viewpoints AL, AC, and AR changes under the influence of motion parallax, as shown in FIGS. 7(B), (C), and (D).
  • the apparent amount of positional shift caused by lateral (left-right) movement of the viewpoint is smaller for portions farther from the vehicle 1 and larger for portions closer to it.
  • if the display distance of the far portion of the image is sufficiently large, the upper side of the trapezoid appears fixed while the lower side appears to move with the viewpoint, whereby the trapezoid (seen from the front as an isosceles trapezoid, etc.) is deformed.
  • the outline of the inclined virtual image display surface PS1b arranged in front of the vehicle 1, when viewed from a viewpoint located in the center of the eyebox EB (in other words, when viewed from the front by the driver), is perceived with perspective such that its width narrows with distance, and therefore looks like a trapezoid (FIG. 7(C)). Specifically, it is a trapezoid in which the upper base and the lower base are parallel, the upper base is shorter than the lower base, the interior angles at both ends of the upper base are equal, and the interior angles at both ends of the lower base are also equal (a so-called isosceles trapezoid).
  • a trapezoid is a concept that includes a parallelogram.
  • the basic shape of the outline (outer shape) of the virtual image display surface of the inclined surface (depth HUD region Z2) is a "trapezoid".
  • an "isosceles trapezoid” can be seen as shown in Fig. 7 (C), and when the viewpoint shifts to the left, it becomes a “left-tilted trapezoid (Fig. 7 (A))” and the viewpoint shifts to the right. And, it becomes a trapezoid tilted to the right (Fig. 7 (D)).
  • in the present embodiment, the contour (outer shape) is treated as a "trapezoid" rather than as a "rectangle" as in the conventional case. Implementing it as a trapezoid means that, once the distortion caused by the optical system has been removed, an image (virtual image) with a more natural sense of depth is seen. The warping control therefore deforms the trapezoid so that the trapezoidal deformation caused by the movement of the viewpoint is reflected; a sketch of this distance-dependent deformation follows.
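  • the following sketch illustrates the distance dependence described above: under a lateral viewpoint shift, the near (lower) edge of the trapezoid shifts more than the far (upper) edge, which is what tilts the trapezoid as in FIGS. 7(B) and 7(D); the distances and the gain constant are illustrative assumptions.

```python
def parallax_shift(dx_mm: float, distance_m: float, gain: float = 0.02) -> float:
    """Apparent lateral shift of a point at distance_m for a viewpoint
    shift of dx_mm; farther points shift less (assumed simple model)."""
    return gain * dx_mm / distance_m

def deform_trapezoid(corners, dx_mm, far_m=50.0, near_m=10.0):
    """corners = [top-left, top-right, bottom-left, bottom-right] as (x, y)."""
    (tlx, tly), (trx, tr_y), (blx, bly), (brx, bry) = corners
    far_shift = parallax_shift(dx_mm, far_m)
    near_shift = parallax_shift(dx_mm, near_m)
    # Shearing the bottom edge more than the top turns the isosceles
    # trapezoid into the tilted trapezoid of FIG. 7(B) or 7(D).
    return [(tlx + far_shift, tly), (trx + far_shift, tr_y),
            (blx + near_shift, bly), (brx + near_shift, bry)]
```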
  • FIGS. 7(E) to 7(H) show changes in the appearance of the arrow figure AW (an individual figure) displayed on the depth HUD region Z2, rather than of the depth HUD region Z2 itself.
  • FIGS. 7 (E) to 7 (H) correspond to FIGS. 7 (A) to 7 (D).
  • the arrow AW seen from the front looks like FIG. 7 (G), but when the viewpoint shifts to the left, the arrow AW looks tilted to the left as shown in FIG. 7 (F). Further, when the viewpoint is shifted to the right, the arrow AW appears to be tilted to the right as shown in FIG. 7 (H).
  • the deformation of the image area of each figure is the same as in the cases of FIGS. 7(B) to 7(D).
  • when warping is performed based on a trapezoidal shape, an image (virtual image) with a more natural sense of depth is seen once the distortion caused by the optical system has been removed. The arrow display used in route guidance or the like is designed to be visually recognized substantially in accordance with the road surface, and the apparent shape of the road surface on which the display is superimposed changes in the same way with the viewpoint movement described above.
  • for an image intended to match (be superimposed on) the road surface, or for a display intentionally arranged in the depth direction (an image understood to be distant), this shape change can therefore be said to assist a person's recognition.
  • the "image area" of an image with a sense of depth can mean the entire display area, or the display area of an individual image included in it, and may be interpreted as appropriate.
  • image correction is performed by treating the contour, as correctly recognized when viewed as a virtual image, as a trapezoid. As a result, an image (virtual image) having a more natural sense of depth is visually recognized by the driver (user, viewer).
  • since the depth image (inclined image) is generated so as to produce a stereoscopic effect by parallax, it is assumed from the beginning that its appearance will change under the influence of motion parallax. This change produces a natural three-dimensional effect (perspective); the trapezoidal deformation (distortion) shown in FIGS. 7(B) and 7(D) therefore assists natural viewing of the depth image.
  • for example, when the viewpoint shifts to the left, the isosceles trapezoid is corrected so as to be visually recognized tilted to the left, bringing the appearance closer to a natural one, and the corrected image area is then given, in advance, distortion with the opposite characteristics to the distortion caused by the optical member (the conventional warping). In this way, the display as shown in FIG. 7(B) is reproduced; in other words, a depth image with a more natural perspective (three-dimensional effect) can be displayed.
  • by adjusting the degree of inclination of the arrow, in other words the degree of distortion of the outline (trapezoid) of the image area displaying the arrow, an image with little discomfort is obtained.
  • when the displayed image is a road-surface-superimposed image (for example, an arrow figure as shown in FIGS. 7(F) and 7(H)) displayed so as to be superimposed on the road surface, and its virtual image is displayed on an inclined surface inclined with respect to the road surface so that the virtual image appears lifted off the road surface (see the display examples in FIGS. 3 and 6), it is preferable to perform motion parallax correction so as to reduce the degree of deformation of the arrow (in other words, of the trapezoid that is the outline of the display area of the arrow image) compared with the case where the virtual image is not lifted off the road surface (where the degree of superimposition on the road surface is relatively high).
  • this is because the virtual image of the actual arrow figure (the virtual image of the road-surface-superimposed image) is located closer to the driver (viewer) than the portion of the road surface on which the arrow is superimposed.
  • motion parallax occurs both in the virtual image of the arrow seen in the foreground and in the road surface seen behind it. Since the road surface is farther away, its motion parallax is smaller, while the virtual image of the arrow, displayed on the near side, exhibits motion parallax that is perceived more strongly.
  • consequently, the motion parallax of the virtual image of the arrow is felt to be larger than when the virtual image is, for example, in close contact with the road surface, and as in the example of FIG. 7(I), the leftward inclination of the virtual image of the arrow is exaggerated. In this case, therefore, a motion parallax correction is performed that reduces the inclination while leaving some inclination to the left, in other words, that suppresses the deformation of the trapezoid forming the outline of the display area of the arrow image; this yields a display with appropriate motion parallax, as shown in FIG. 7(J).
  • note that the motion parallax is not canceled entirely; a certain amount remains. The amount of suppression of the motion parallax of the depth image is therefore smaller than the amount of suppression of the motion parallax of the facing image, as the following sketch indicates.
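  • a sketch of this asymmetry: the facing image cancels the observed tilt completely, while the depth image only suppresses it so that some motion parallax survives; the 0.5 suppression factor is an assumed tuning constant, not a value from this disclosure.

```python
def corrected_shear(observed_shear: float, image_type: str) -> float:
    """Counter-shear to apply during warping, given the trapezoid tilt
    induced by motion parallax (observed_shear)."""
    if image_type == "facing":   # standing image: cancel completely
        return -observed_shear
    if image_type == "depth":    # inclined image: keep some parallax
        suppression = 0.5        # assumed constant; must stay < 1.0
        return -observed_shear * suppression
    raise ValueError(f"unknown image type: {image_type}")
```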
  • FIG. 8(A) is a diagram showing the appearance of an elevation surface (including a pseudo-elevation, for example the standing image HUD region of the virtual image display surface) when the viewpoint (monocular) of the driver looking at it is located in the central divided region of the eyebox.
  • FIGS. 8(B), (C), and (D) are diagrams showing the appearance when the viewpoint moves from that state along the width direction (left-right direction) of the vehicle and is located in the left divided region, the central divided region, and the right divided region, respectively.
  • FIGS. 8(E), (F), and (G) are diagrams showing the contents of the image correction applied to the display image in order to suppress deformation of the rectangle that is the contour of the elevation surface.
  • FIGS. 8(H), (I), and (J) are diagrams showing the appearance of the virtual image of the vehicle speed display (the shape of the virtual image) when the viewpoint is located in the left divided region, the central divided region, and the right divided region of the eyebox, respectively.
  • FIGS. 8 (A) to 8 (D) correspond to FIGS. 7 (A) to 7 (D) shown above.
  • a vehicle speed display SP of “120 km / h” is displayed on the virtual image display surface PS1a on the elevation surface.
  • the vehicle speed display seen from the front looks like FIG. 8 (C), but when the viewpoint shifts to the left, the vehicle speed display SP appears tilted to the left as shown in FIG. 8 (B). Further, when the viewpoint is shifted to the right, the vehicle speed display SP appears to be tilted to the right as shown in FIG. 8 (D).
  • the contour (outer shape) of the image area is a rectangle (an oblong rectangle or a square) as in the conventional case. The vehicle speed display SP and the like displayed on the elevation surface are often displayed at a fixed position at all times; what matters is that numbers, symbols, characters, and so on can be read accurately, and no particular perspective is required. Since there is no advantage in using a trapezoid as with an inclined image, a rectangular outer shape is adopted here as in the conventional case.
  • however, if nothing further is done, the display is deformed by a viewpoint shift as shown in FIGS. 8(B) and 8(D).
  • the reason for the deformation is as follows. Even if the distortion of the optical system is removed by warping and the display light of the rectangular image reaches the viewpoint (eye), the position of the vehicle speed display SP does not change, so the arrival direction (incident direction) of the display light differs before and after the viewpoint shift. If a distortion-free rectangle as in FIG. 8(C) is visible before the viewpoint shift, then after the shift the light arrives from a different angle, and when the brain judges the image received by the eyes, the image looks distorted by motion parallax as shown in FIGS. 8(B) and 8(D).
  • deformation of the image by motion parallax as shown in FIGS. 8(B) and 8(D) is not preferable in terms of display recognition. Therefore, in the present embodiment, such deformation (deformation of the rectangle) is suppressed (canceled) during the warping process of the vehicle speed display SP (in a broad sense, of the facing image (standing image)).
  • specifically, warping is performed that first deforms the image (the rectangular outline of the image area) in the direction opposite to the actual deformation caused by motion parallax, and then applies the conventional warping that gives distortion with the opposite characteristics to the distortion generated in the optical system. The images with the counter-deformation applied are shown in FIGS. 8(E) to 8(G), and the virtual images seen at this time are shown in FIGS. 8(H) to 8(J).
  • the outer shape (contour) of the image area of the vehicle speed display SP is rectangular even when the viewpoint shift occurs.
  • the outer shape of the image area is maintained in a rectangular shape, and an easy-to-see facing image is always displayed.
  • the decrease in recognizability is thus suppressed, and since a facing image is always obtained, there is no sense of discomfort; the visual experience is therefore improved.
  • since the contour of the image area of the depth image (inclined image) and the contour of the image area of the facing image (standing image) are set and warped individually, substantially different warping methods are applied to the two kinds of image.
  • for depth images, a display with improved visibility that gives a natural perspective is realized; for facing images (standing images), a display that is easier to see and easier to recognize (suitable for grasping accurate information, etc.) is realized.
  • FIGS. 9(A) to 9(D) are diagrams explaining two examples of image correction (deforming the lower side or the upper side) applied to a display image to suppress deformation of the rectangle that is the contour of the elevation surface.
  • with reference to FIGS. 8(E) and 8(G), it was explained above that at the time of warping the rectangle is deformed so as to cancel the deformation actually caused by motion parallax. Two methods are possible for this, as described specifically below.
  • in FIG. 9(A), the viewpoint shifts to the right while the vehicle speed display SP is displayed; absent correction, the vehicle speed display SP is distorted so as to incline to the left, as described above. To cancel this, the lower side BC of the rectangle may be fixed and the upper side AD moved to apply the counter-distortion; in this case, the upper side AD is displaced to the upper side A'D'. Alternatively, the upper side AD may be fixed and the lower side BC moved; in this case, the lower side BC is displaced to the lower side B'C'.
  • there is a difference between the two methods in the amount of movement required: it can be smaller when the upper side is moved to deform the rectangle than when the lower side is moved. Both variants are sketched below.
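  • the two methods can be sketched as follows for a rectangle ABCD with upper side AD and lower side BC; the corner layout and pixel units are assumptions for illustration:

```python
def counter_shear_rect(corners, shear_px, move_upper=True):
    """corners = {'A': (x, y), 'D': ..., 'B': ..., 'C': ...}; a positive
    shear_px counters a leftward tilt of the virtual image."""
    out = dict(corners)
    moved = ("A", "D") if move_upper else ("B", "C")
    sign = 1.0 if move_upper else -1.0
    for k in moved:
        x, y = out[k]
        out[k] = (x + sign * shear_px, y)   # AD -> A'D' or BC -> B'C'
    return out
```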
  • FIG. 10(A) is a diagram showing the contents of warping control for an inclined image (depth image) displayed on the inclined surface, and FIG. 10(B) is a diagram showing the contents of warping control for a standing image (facing image) displayed on the elevation surface.
  • FIGS. 10(C) to 10(E) are diagrams showing an example of generating the image to be displayed on the display surface by image rendering.
  • FIGS. 10(F) to 10(H) are diagrams showing examples of the image (visually recognized image, visually recognized virtual image) seen by the driver according to the viewpoint position.
  • the image display area 45 of the display surface 164 of the display unit 160 is divided with reference to the boundary position LN', which is the boundary between the near side and the far side as seen from the user; the navigation arrow figure AW' as the second image RG2 is displayed in the second display area Z2'.
  • the arrow figure AW' is shown by a vertically long ellipse for convenience.
  • the outer shape (contour) of the image area of the arrow figure AW' is corrected according to the position of the viewpoint during the warping process. That is, when the viewpoint is located in the center of the eyebox EB, an isosceles trapezoid is formed, for example as in a2; when the viewpoint shifts to the left, the isosceles trapezoid is tilted to the right and distorted, as in a1; and when the viewpoint shifts to the right, the isosceles trapezoid is tilted to the left and distorted, as in a3.
  • in other words, the warping takes the motion parallax into account so as to obtain an appropriate trapezoidal deformation.
  • the upper side of the image is deformed so as to be greatly tilted to the left by motion parallax, but in the warping control the upper side is tilted slightly to the right. This is because, as described above, the virtual image surface lies in front of the road surface, so the motion parallax deformation of the virtual image surface becomes larger than the motion parallax of the road surface, and the influence (deformation) of this motion parallax needs to be weakened.
  • then, each image (the image area of each image) is given distortion with the opposite characteristics to the distortion caused by the windshield or by the optical system of the HUD device (collectively referred to as the optical member). In this way, the viewpoint-tracking warping for the inclined image (depth image) is performed.
  • the image SP' of the vehicle speed display as the first image RG1 is displayed in the first display area Z1'; the image SP' is shown by a horizontally long ellipse for convenience.
  • here too, the outer shape (contour) of the image area is corrected according to the viewpoint position, so as to cancel the motion parallax deformation and maintain the rectangle.
  • then, distortion having the opposite characteristics to the distortion caused by the optical member is applied to each image (the image area of each image) in correspondence with the viewpoint position. In this way, the viewpoint-tracking warping for the facing image (standing image) is performed.
  • each image (region of each image) is individually subjected to image correction peculiar to each image as described with reference to FIG. 10A or FIG. 10B. That is, different image correction methods are adopted depending on the type of image, and image correction is performed individually.
  • next, the images corresponding to the viewpoint position (the images after warping processing) are combined by, for example, image rendering to generate one image. Each image obtained by this synthesis is designated by one of the reference numerals Q1 to Q3. Since it forms part of generating the corrected display image, this image composition process can also be referred to as part of the warping process.
  • since depth images (or their image areas) and facing images (or their image areas) are handled separately, with the image corrections performed individually and the results then combined, the image correction desired for each image can be performed quickly.
  • the image processing itself can be simplified. Therefore, the load on the image processing unit (which can include the image generation unit, the image rendering unit, and the like) associated with the image correction processing is also reduced.
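  • a minimal sketch of this per-layer pipeline, assuming a sparse raster representation and illustrative warp callbacks (the rectangle-maintaining and trapezoid-based corrections described above):

```python
from typing import Callable, Dict, List, Tuple

Pixel = Tuple[int, int, int]
Image = Dict[Tuple[int, int], Pixel]  # sparse raster, enough for a sketch

def overlay(images: List[Image]) -> Image:
    """Composite in order: later layers draw over earlier ones."""
    out: Image = {}
    for img in images:
        out.update(img)
    return out

def compose_display_image(layers: List[Tuple[Image, str]], viewpoint_x: float,
                          warp_facing: Callable[[Image, float], Image],
                          warp_depth: Callable[[Image, float], Image]) -> Image:
    """Warp each layer with its own correction, then merge into the
    single display image (Q1 to Q3 in FIGS. 10(C)-(E))."""
    warped = [warp_facing(img, viewpoint_x) if kind == "facing"
              else warp_depth(img, viewpoint_x)
              for img, kind in layers]
    return overlay(warped)
```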
  • FIGS. 10(F), (G), and (H) are diagrams showing how the virtual image looks when the viewpoint is located in the left divided region, the central divided region, and the right divided region of the eyebox, respectively.
  • consider the first contour Za, the region of the inclined image HUD area surrounded by a trapezoid (for example, an isosceles trapezoid). When the viewpoint moves from the center to the left, the isosceles trapezoid tends to tilt to the left under the influence of motion parallax (in other words, the upper side of the first contour Za tends to shift to the left relative to the lower side).
  • by executing the warping control that tilts the isosceles trapezoid to the right, as in a1, this leftward-tilting deformation is reduced as shown in FIG. 10(F), and the virtual image displayed with the first contour Za is visually recognized by the driver whose viewpoint has shifted to the left of center.
  • likewise, when the viewpoint moves from the center to the right, the isosceles trapezoid tends to tilt to the right under the influence of motion parallax (in other words, the upper side of the first contour Za tends to shift to the right relative to the lower side). By executing the warping control that tilts the isosceles trapezoid to the left, as in a3, this rightward-tilting deformation is reduced as shown in FIG. 10(H), and the virtual image displayed with the first contour Za is visually recognized by the driver whose viewpoint has shifted to the right of center.
  • the first contour Za may be a part of the inclined image HUD region Z2, as seen from the center of the eyebox EB, surrounded by a trapezoid (for example, an isosceles trapezoid) whose upper and lower sides are parallel; if the inclined image HUD region Z2 itself is trapezoidal, it may be the entire region.
  • the second contour Zb is the region of the standing image HUD region Z1, as seen from the center of the eyebox EB, surrounded by a rectangle (including an oblong rectangle or a square). When the viewpoint moves from the center to the left, the rectangle tends to tilt to the left under the influence of motion parallax (in other words, the upper side of the second contour Zb tends to shift to the left relative to the lower side). By executing the warping control that tilts the rectangle to the right, as in b1, so as to cancel this deformation, the virtual image displayed with the second contour Zb, in which the leftward-tilting deformation has been canceled, is visually recognized by the driver whose viewpoint has shifted to the left of center.
  • likewise, when the viewpoint moves from the center to the right, the rectangle tends to tilt to the right under the influence of motion parallax (in other words, the upper side of the second contour Zb tends to shift to the right relative to the lower side). By executing the warping control that tilts the rectangle to the left, as in b3, so as to cancel this deformation, the virtual image displayed with the second contour Zb, in which the rightward-tilting deformation has been canceled, is visually recognized by the driver whose viewpoint has shifted to the right of center. The second contour Zb may be a part of the standing image HUD region Z1, as seen from the center of the eyebox EB, that is surrounded by a rectangle; if the standing image HUD region Z1 itself is rectangular, it may be the entire region.
  • the display layers may be separated according to whether or not they express depth, with separate shape correction processing performed for each layer and the layers then superimposed to form the final display image. That is, the display layer of the inclined image HUD region Z2, which expresses depth, and the display layer of the standing image HUD region Z1, which does not, are separated, each layer undergoes its own shape correction processing, and the layers are then superimposed into the final display image.
  • the tilted image HUD region Z2 that expresses the depth may be composed of a plurality of display layers that are different for each region. Specifically, the tilted image HUD region Z2 may be composed of a plurality of display layers divided into a plurality of regions in the depth direction. Further, the shape may be corrected individually according to the display attribute of each content.
  • FIG. 11 is a diagram showing a configuration example of the HUD device.
  • the upper view of FIG. 11 is the same as that of FIG. 4 (A).
  • the configuration of the display control unit 190 will be described.
  • the display control unit 190 has a viewpoint position detection unit 192 and a warping processing unit 194.
  • the warping processing unit 194 has a warping control unit 195, a ROM 198 (holding the first and second image conversion tables 199 and 200), a VRAM 201 (storing, for example, the image data 196, the post-warping data 197, and the like), and an image generation unit (image rendering unit) 202.
  • the warping control unit 195 may be provided outside the warping processing unit 194. Further, the viewpoint position detection unit 192 can also be provided outside the HUD device 100.
  • the first image conversion table 199 stores warping parameters for facing images (standing images).
  • the second image conversion table 200 stores warping parameters for a depth image (tilted image).
  • the warping control unit 195 controls the image generation unit (image rendering unit) 202, the ROM 198, the VRAM 201, and the like so that the viewpoint-position-tracking warping described above is performed using the warping parameters corresponding to the viewpoint position information supplied from the viewpoint position detection unit 192.
  • the original image data 196 of the image to be displayed is stored in the VRAM 201.
  • the original image data 196 is read out, and by applying the warping parameters corresponding to the viewpoint position to the read-out data, the image correction described above is performed, in which the deformation correction of the image area and distortion having the opposite characteristics to the distortion of the optical system are applied in advance.
  • the post-warping data 197 is temporarily stored in the VRAM 201 and then supplied to the image generation unit (image rendering unit) 202, where, for example, the image composition processing shown in FIGS. 10(C) to 10(E) is performed. As a result, one image (display image) is generated.
  • the generated image is supplied to a display unit (for example, a flat panel display such as a liquid crystal panel) 160 and displayed on a display surface (reference numeral 164 in FIG. 3C).
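  • the flow around FIG. 11 can be sketched as follows; the table layout and parameter names are assumptions, standing in for the warping parameters held in the first and second image conversion tables 199 and 200:

```python
# Warping parameters per eyebox partition (contents are placeholders).
FACING_TABLE = {"ZL": "f_left", "ZC": "f_center", "ZR": "f_right"}  # table 199
DEPTH_TABLE  = {"ZL": "d_left", "ZC": "d_center", "ZR": "d_right"}  # table 200

def warp_for_viewpoint(original_image, image_type, zone, apply_warp):
    """Look up the parameters for the detected viewpoint partition and
    warp the original image data; apply_warp(image, params) stands in
    for the actual correction, and its output is the post-warping data."""
    table = FACING_TABLE if image_type == "facing" else DEPTH_TABLE
    return apply_warp(original_image, table[zone])
```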
  • FIG. 12(A) is a diagram showing an example of the configuration of the main part of the HUD device when a standing image (pseudo standing image) is fixedly displayed at a predetermined position in the viewpoint coordinate system regardless of the viewpoint position, and FIG. 12(B) is a diagram showing that the display (virtual image) appears to move following the movement of the viewpoint.
  • the problem that the facing image is distorted by a viewpoint shift arises because the display position of the facing image (virtual image) is fixed in the coordinate system set in real space.
  • a viewpoint coordinate system is therefore used: a coordinate system whose axes are set along the front-rear, left-right, and up-down directions of the vehicle with the person's viewpoint at its center, and which moves together with the person's viewpoint (including the face, etc.) when it moves. Control is performed so that the virtual image of the facing image is always at the same position in this coordinate system, by appropriately moving, in correspondence with the movement of the viewpoint, the position of the image (the rectangular image area) on the display surface of the display unit, or the position on the image generation surface or in the image generation space within the image generation unit.
  • the position of the virtual image moves appropriately according to the viewpoint shift.
  • since the relative positional relationship between the virtual image of the facing image and the viewpoint of the person is then always constant, motion parallax need not be considered and the appearance does not change. The above-mentioned problem of the image being deformed by motion parallax and becoming harder to recognize can therefore also be solved by this method.
  • the fact that the display position of the image (virtual image) does not change has the effect of giving a sense of security to the driver or the like who is the viewer.
  • in FIG. 12(A), a movement amount (rotation amount) calculation unit 193 for the viewpoint coordinate system is added to the configuration shown in the lower part of FIG. 11. Since the other parts are the same as in FIG. 11, FIG. 12(A) is simplified and only the main parts are shown.
  • the viewpoint position detection unit 192 detects a change in the viewpoint position (a displacement of the viewpoint) in the coordinate system (XYZ coordinate system) set in real space, and this change (displacement) is detected as the amount of movement (including the amount of rotation) of the viewpoint coordinate system (X'Y'Z' coordinate system).
  • the warping control unit 195 moves the viewpoint coordinate system based on the detected movement amount, and performs control so that the image is displayed at the predetermined position in the moved viewpoint coordinate system (the coordinate point corresponding to its position before the movement).
  • for example, when the viewpoint A moves from the coordinate point X0 to +X1, the vehicle speed display SP also moves in the left-right direction, in the same direction and by the same distance as the coordinate point. If the vehicle speed display SP was visible directly in front of the driver before the viewpoint moved, that state is maintained after the movement, so no motion parallax occurs; the driver always sees an undistorted virtual image of the front-facing image (standing image), and visibility is improved. The sketch below illustrates this.
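  • a sketch of this viewpoint-following behavior; the 1:1 follow ratio and millimeter units are assumptions for illustration:

```python
def follow_viewpoint(display_x_mm: float,
                     viewpoint_prev_mm: float,
                     viewpoint_now_mm: float) -> float:
    """Shift the facing image's display position by the same lateral
    displacement as the viewpoint, so their relative geometry (and hence
    the appearance) never changes and no motion parallax arises."""
    dx = viewpoint_now_mm - viewpoint_prev_mm  # from calculation unit 193
    return display_x_mm + dx

# e.g. a viewpoint move X0 -> +X1 moves the vehicle speed display SP by
# the same amount, keeping it directly in front of the driver.
```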
  • FIG. 13 is a diagram showing an example of a procedure in viewpoint tracking warping control.
  • in step S1, the viewpoint position is detected and the warping process is started.
  • in step S2, the type of the image to be displayed is determined, i.e., whether it is a depth image (inclined image) or a facing image (standing image, pseudo standing image).
  • if the image is determined to be a depth image in step S2, the process proceeds to step S3.
  • in step S3, warping is performed with the outer shape (contour) of the image area, as correctly recognized in the virtual image, treated as a trapezoid (including a parallelogram).
  • when the viewpoint position shifts in the width direction of the vehicle, image processing is performed that gives the trapezoidal shape, in advance, a deformation with characteristics opposite to the change in appearance caused by motion parallax, so as to reduce the degree of deformation while maintaining the direction of the change in the appearance of the virtual image.
  • when the virtual image of a road-surface-superimposed image is displayed on the inclined virtual image display surface, for example, and thus appears to float above the road surface, correction is applied so as to suppress the deformation of the virtual image caused by motion parallax (however, an appropriate amount of motion parallax is left; the motion parallax is not canceled entirely).
  • if the image is determined to be a facing image in step S2, the process proceeds to step S4.
  • in step S4, warping is performed with the outer shape (contour) of the image area, as correctly recognized in the virtual image, treated as an oblong rectangle or a square (collectively referred to as a "rectangle").
  • when the viewpoint position shifts in the width direction of the vehicle, the image (image area) is distorted in advance with characteristics opposite to the change in the appearance of the virtual image caused by motion parallax (rectangle-maintaining correction; motion parallax cancellation or suppression correction). In this rectangle-maintaining correction, either the lower side (lower base) of the rectangle is fixed and the upper side (upper base) is moved to apply the distortion, or the upper side (upper base) is fixed and the lower side (lower base) is moved.
  • in step S5, it is determined whether the image correction is complete; if so, the viewpoint-tracking warping process is terminated, and otherwise the process returns to step S2. A sketch of this loop follows.
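  • the procedure of FIG. 13 can be sketched as the following loop; the helper callbacks stand for the corrections of steps S3 and S4 and are assumptions, not this disclosure's literal implementation:

```python
from dataclasses import dataclass

@dataclass
class DisplayImage:
    kind: str     # "depth" or "facing"
    data: object  # raster data, opaque in this sketch

def viewpoint_tracking_warping(images, detect_viewpoint,
                               warp_trapezoid, warp_rectangle,
                               corrections_done):
    viewpoint = detect_viewpoint()                       # S1
    while True:
        for img in images:
            if img.kind == "depth":                      # S2 -> S3
                img.data = warp_trapezoid(img.data, viewpoint)
            else:                                        # S2 -> S4
                img.data = warp_rectangle(img.data, viewpoint)
        if corrections_done(images):                     # S5: done?
            return images                                # end warping
        # otherwise return to S2 and correct the remaining images
```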
  • as described above, warping control can be realized that ensures more natural visibility for at least one of a depth image (inclined image), which presupposes stereoscopic vision, and a facing image (standing image, pseudo standing image), which in principle does not presuppose stereoscopic vision.
  • recent HUD devices tend to be developed on the assumption that a virtual image is displayed over a fairly wide area in front of the vehicle; in this case, the virtual image display area on the windshield is expanded and various displays become possible.
  • the required warping method will differ depending on the type of image.
  • since the warping method can be changed according to the type of image, the visibility of these various displays does not deteriorate, enabling a highly functional, high-performance HUD device.
  • the present invention can be used in either a monocular HUD device in which the display light of the same image is incident on each of the left and right eyes, or a parallax type HUD device in which an image having parallax is incident on each of the left and right eyes.
  • the term "vehicle" can be interpreted broadly, as a conveyance in general.
  • terms related to navigation shall likewise be interpreted in a broad sense, in consideration of, for example, navigation information in the broad sense that is useful for vehicle operation.
  • the HUD device shall include a device used as a simulator (for example, a simulator as an aircraft simulator, a simulator as a game device, etc.).
  • FIG. 14 (A) and 14 (B) are views showing a modified example of the virtual image display surface.
  • the cross-sectional shape of the virtual image display surface PS1 as seen from the width direction (horizontal direction, X direction) of the vehicle is not limited to the one having a convex shape on the driver side.
  • the virtual image display surface PS1 may have a concave shape on the driver side.
  • the virtual image display surface PS1 does not have to be curved as shown in FIG. 14 (B).
  • Optical system including optical member, 150 ... light projecting unit (image projection unit), 160 ... display unit (for example, a liquid crystal display device, a screen, etc.), 164 ... display surface, 170 ... curved mirror (concave mirror, etc.), 179 ... reflective surface, 188 ... imaging unit (pupil detection unit, face imaging unit, etc.), 190 ... display control unit, 192 ... viewpoint position detection unit, 193 ... movement amount (rotation amount) calculation unit

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Instrument Panels (AREA)

Abstract

An object of the present invention is to realize warping control capable of ensuring more natural visibility for at least one of a depth image (inclined image), for which stereoscopic vision is assumed, and a directly facing image (standing image), which is preferably displayed so as to face the operator and for which stereoscopic vision is in principle not assumed. When performing warping processing on a depth image AW displayed on an inclined virtual image display surface PS1b, a control unit 190 (or 195) that performs viewpoint-position-tracking warping control performs the warping control with the contour of the image area of the depth image as a trapezoid in which at least one pair of opposite sides is parallel, and, when performing warping processing on a directly facing image SP, performs the warping control with the contour of the image area of the directly facing image as a rectangle, including an oblong rectangle or a square.
PCT/JP2021/020328 2020-05-29 2021-05-28 Head-up display device WO2021241718A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022526655A JPWO2021241718A1 (fr) 2020-05-29 2021-05-28

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020094169 2020-05-29
JP2020-094169 2020-05-29

Publications (1)

Publication Number Publication Date
WO2021241718A1 true WO2021241718A1 (fr) 2021-12-02

Family

ID=78744678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/020328 WO2021241718A1 (fr) Head-up display device

Country Status (2)

Country Link
JP (1) JPWO2021241718A1 (fr)
WO (1) WO2021241718A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4212378A1 (fr) * Vehicle display image warping system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014199385A (ja) * 2013-03-15 2014-10-23 日本精機株式会社 Display device and display method thereof
JP2016053691A (ja) * 2014-09-04 2016-04-14 矢崎総業株式会社 Projection display device for vehicle
JP2016068577A (ja) * 2014-09-26 2016-05-09 矢崎総業株式会社 Head-up display device

Also Published As

Publication number Publication date
JPWO2021241718A1 (fr) 2021-12-02

Similar Documents

Publication Publication Date Title
JP6248931B2 (ja) Image display device and image display method
JP2014026244A (ja) Display device
JP6899082B2 (ja) Head-up display
WO2018105533A1 (fr) Image projection device, image display device, and moving body
CN101720445A (zh) Scanning image display device, eyeglass-type head-mounted display, and vehicle
JP2012079291A (ja) Program, information storage medium, and image generation system
US10809526B2 (en) Display system and movable object
JP7126115B2 (ja) Display system, moving body, and design method
WO2021241718A1 (fr) Head-up display device
JP2018132685A (ja) Head-up display device
JP7358909B2 (ja) Stereoscopic display device and head-up display device
JP6864580B2 (ja) Head-up display device and navigation device
JP7110968B2 (ja) Head-up display device
WO2018199244A1 (fr) Display system
CN217655373U (zh) Head-up display device and moving body
JP7354846B2 (ja) Head-up display device
WO2022024962A1 (fr) Head-up display device
CN114127614B (zh) Head-up display device
JP2022114602A (ja) Display control device, display device, display system, and image display control method
WO2021002428A1 (fr) Head-up display device
JP2022036432A (ja) Head-up display device, display control device, and control method of head-up display device
WO2022024964A1 (fr) Head-up display (HUD) device
WO2020009218A1 (fr) Head-up display device
KR20220027494A (ko) Head-up display and control method thereof
WO2021065699A1 (fr) Display control device and head-up display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21813672

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022526655

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21813672

Country of ref document: EP

Kind code of ref document: A1