CN105988220B - Method and control device for operating an autostereoscopic field-of-view display for a vehicle

Info

Publication number
CN105988220B
Authority
CN
China
Prior art keywords
image
display
eye
driver
observer
Prior art date
Legal status
Expired - Fee Related
Application number
CN201610163253.XA
Other languages
Chinese (zh)
Other versions
CN105988220A (en)
Inventor
R.菲泽
S.霍克
A.弗雷德里克森
J.瓦因加登
Current Assignee
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of CN105988220A
Application granted
Publication of CN105988220B


Classifications

    • G02B27/0101 Head-up displays characterised by optical features
    • G02B27/0093 Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B27/0103 Head-up displays comprising holographic elements
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B30/26 Optical systems or apparatus for producing three-dimensional [3D] effects by providing first and second parallax images to an observer's left and right eyes, of the autostereoscopic type
    • B60K35/00 Arrangement of adaptations of instruments
    • B60K35/10
    • B60K35/23
    • B60K2360/149
    • G02B2027/0134 Head-up displays comprising binocular systems of stereoscopic type
    • G02B2027/0187 Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye

Abstract

The invention relates to a method (1500) for operating an autostereoscopic field-of-view display (102) for a vehicle (100), wherein the method (1500) has an adjustment step (1502). An offset (700) between a right image (402) and a left image (404) of the field-of-view display (102) is adjusted using a convergence angle (702) between the right visual axis (502) of the right eye (704) and the left visual axis (502) of the left eye (708) of an observer (108) of the field-of-view display (102). When the visual axes (502) intersect in a projection plane (504) of the images (402, 404), the images (402, 404) are displayed without offset.

Description

Method and control device for operating an autostereoscopic field-of-view display for a vehicle
Technical Field
The invention relates to a method for operating an autostereoscopic field-of-view display for a vehicle, to a corresponding control device, and to a corresponding computer program.
Background
WO 1998/035260 A1 describes a holographic screen onto which images can be projected by means of a laser projector and which can be integrated into a windscreen.
The publication "expanding Design Parameters for a 3D Head-Up Display; broy; the H mini-bridge ckh; fredeksen et al; pervasive Displays 2014 "introduces a stereoscopically viewable heads-up display.
Disclosure of Invention
Against this background, the approach presented here proposes a method for operating an autostereoscopic field-of-view display for a vehicle, furthermore a control device that uses this method, and finally a corresponding computer program. Advantageous embodiments include the following:
• The method comprises a step of determining the positions of the right image and the left image in the projection plane using the visual axes and the eye positions of the observer, wherein the determined positions are also adjusted in the adjusting step.
• The method comprises a step of reading in eye information from an eye detection device of the vehicle which is designed to detect the right and left eyes of the observer, wherein the eye positions are represented by right and left eye position values of the eye information and the right and left visual axes are represented by right and left viewing direction values of the eye information, and wherein in the adjusting step the convergence angle is determined from the viewing direction values and the eye position values.
• In the adjusting step, the offset is adjusted within a comfort zone associated with the observer.
• The method has a step of adapting the comfort zone, wherein the comfort zone is adapted in response to an input by the user.
• In the adapting step, the comfort zone is adapted as a function of the number of depth planes to be displayed, wherein the comfort zone is reduced as the number of depth planes increases.
• The comfort zone is varied as a function of the user's state, wherein the comfort zone is reduced when the observer is fatigued.
• In the adapting step, fatigue of the observer is identified using the eye information, wherein the eyelid closure frequency and/or the eyelid closure duration are evaluated.
In a field-of-view display that presents one and the same image to the right and the left eye of the observer, parallax errors occur when the image is viewed together with one or more target objects located at a different distance. The resulting double image is filtered out by the observer's brain by preferring the view of one eye, which requires considerable effort.
In the solution proposed here, a right image for the right eye of the observer and a left image for the left eye are provided in order to eliminate parallax errors. A virtual image is formed which can be positioned almost freely in space. The spatial position of the image can be adjusted as a function of the viewing distance of the observer; in particular, the convergence angle between the visual axes of the eyes can be evaluated for this purpose.
A method for operating an autostereoscopic field-of-view display for a vehicle is proposed, the method comprising the following step:
adjusting an offset between a right image and a left image of the field-of-view display using the convergence angle between the right visual axis of the right eye and the left visual axis of the left eye of an observer of the field-of-view display, wherein the images are displayed without offset when the visual axes intersect in the projection plane of the images.
An autostereoscopic field-of-view display can be understood to be a head-up display or a windshield display. The field-of-view display is designed to display two images in a projection plane, of which only the right image is visible to the right eye of the observer and only the left image to the left eye. The right image is visible only in a right viewing area, the left image only in a left viewing area. For example, holographic elements of the field-of-view display can have direction-selective reflective properties. The offset can be a distance (Wegstrecke) by which mutually corresponding image points of the right image and the left image are displaced from one another in the projection plane.
The mutually corresponding image points can be pixels or image elements of the left image and the right image that are projected onto the projection surface with the offset. The offset thus denotes the distance by which the right image is displayed shifted relative to the left image. A display of the right and left images without offset is understood to mean a display in which the right image and the left image overlap, in particular completely, so that each image point of the right image is imaged onto the associated image point of the left image. The observer can be the driver of the vehicle.
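To make the geometry behind this definition concrete, the following minimal Python sketch (purely illustrative; the function names and example values are not from the patent) computes the offset d between mutually corresponding image points for a target point at distance z, given the interocular distance a and the distance s of the projection plane:

```python
import math

def image_offset(a: float, s: float, z: float) -> float:
    """Horizontal offset d between corresponding right and left image
    points so that the fused virtual image appears at distance z.

    a: interocular distance of the observer (m)
    s: distance from the eyes to the projection plane (m)
    z: desired distance of the virtual image (m, z > 0)
    """
    # Similar triangles: the visual axes cross at distance z, so the two
    # image points in the projection plane are separated by
    # d = a * (1 - s / z). d > 0: uncrossed (image behind the plane),
    # d < 0: crossed (image in front of the plane), d = 0: axes intersect
    # in the projection plane itself.
    return a * (1.0 - s / z)

def convergence_angle(a: float, z: float) -> float:
    """Convergence angle theta (rad) of the visual axes for a fixation
    point at distance z."""
    return 2.0 * math.atan(a / (2.0 * z))

# Example: eyes 65 mm apart, projection plane at 2 m, target at 10 m
d = image_offset(0.065, 2.0, 10.0)        # ~0.052 m, uncrossed
theta = convergence_angle(0.065, 10.0)    # ~0.0065 rad
```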
The method can comprise a step of determining the positions of the images in the projection plane. The determination can be made using the visual axes and the eye positions of the observer. The determined positions can additionally be adjusted in the adjusting step. The images can thus be moved together in the projection plane without changing the offset, so that head movements of the observer can be compensated.
The method can comprise a step of reading in eye information from an eye detection device of the vehicle. The eye detection device is designed to detect the eyes of the observer. The eye positions are represented by a right and a left eye position value, the visual axes by a right and a left viewing direction value. The convergence angle is determined from the viewing direction values and the eye position values. The eye information can thus be processed in real time, so that the offset can be adjusted quickly.
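A sketch of how the convergence angle could be obtained from such eye information (illustrative Python with numpy; the input format is an assumption, not the patent's interface): the viewing direction values give the angle between the visual axes, and the eye position values give the interocular distance needed to estimate the fixation distance.

```python
import numpy as np

def convergence_angle_from_gaze(d_right, d_left):
    """Convergence angle theta (rad) between the two visual axes, from
    the right and left viewing direction vectors of the eye tracker."""
    r = np.asarray(d_right, dtype=float)
    l = np.asarray(d_left, dtype=float)
    cos_theta = np.dot(r, l) / (np.linalg.norm(r) * np.linalg.norm(l))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def fixation_distance(p_right, p_left, theta):
    """Approximate distance z of the fixation point, assuming symmetric
    convergence: z = a / (2 * tan(theta / 2)), where a is the
    interocular distance from the eye position values."""
    a = float(np.linalg.norm(np.asarray(p_right, dtype=float)
                             - np.asarray(p_left, dtype=float)))
    return a / (2.0 * np.tan(theta / 2.0)) if theta > 1e-6 else float("inf")
```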
The offset can be adjusted within a comfort zone associated with the observer. Within the comfort zone, the observer can fuse the two images into a stereoscopic impression without great effort, which keeps the strain on the observer low.
The method can include a step of adapting the comfort zone. The comfort zone can be adapted in response to an input by the user. The comfort zone can also be adapted as a function of the observer's state of strain and can be reduced when the observer is under strain. The observer can enter and/or change settings for the comfort zone via a user interface.
The comfort zone can be adapted as a function of the number of depth planes (Tiefenebenen) to be displayed, and can be reduced as the number of depth planes increases. Target objects can be shown at different distances, each target object being assigned a depth plane. Many simultaneously displayed depth planes are harder to view than a few; the comfort zone within which target objects are displayed can therefore be reduced.
The comfort zone can be varied as a function of the user. It can be reduced when the observer is fatigued. This reduces the strain on the observer and allows the observer to recover.
Fatigue of the observer can be identified using the eye information. In particular, the eyelid closure frequency and, alternatively or additionally, the eyelid closure duration can be evaluated. The eye information allows a reliable detection of whether the user is fatigued. When the user has recovered, the comfort zone can be enlarged again.
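As an illustration of how such an evaluation might look, the following fragment sketches a very simplified blink-based heuristic (the thresholds are assumptions for illustration, not values from the patent):

```python
def driver_fatigued(blink_times, blink_durations, window_s=60.0):
    """Crude fatigue heuristic from eyelid data over the last window.

    blink_times: timestamps (s) of eyelid closures in the window
    blink_durations: duration (s) of each of those closures
    """
    freq = len(blink_times) / window_s                 # closures per second
    mean_dur = (sum(blink_durations) / len(blink_durations)
                if blink_durations else 0.0)
    # Frequent blinking and long closures both indicate fatigue.
    return freq > 0.5 or mean_dur > 0.15
```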
The solution proposed here furthermore provides a control device which is designed to carry out, actuate or implement the steps of a variant of the method proposed here in corresponding devices. This embodiment variant of the invention in the form of a control device also allows the object of the invention to be achieved quickly and efficiently.
A control device is understood here to mean an electrical device which processes sensor signals and outputs control signals and/or data signals as a function thereof. The control device can have an interface, which can be implemented in hardware and/or software. In a hardware implementation, the interface can, for example, be part of a so-called system ASIC which contains a wide variety of functions of the control device. However, the interface can also be a separate integrated circuit or consist at least partly of discrete components. In a software implementation, the interfaces can be software modules which are present, for example, on a microcontroller alongside other software modules.
Also advantageous is a computer program product or computer program with program code which can be stored on a machine-readable carrier or storage medium, such as a semiconductor memory, a hard-disk memory or an optical memory, and which is used to carry out, implement and/or actuate the steps of the method according to one of the embodiments described above, in particular when the program product or program is executed on a computer or a device.
Drawings
The embodiments proposed here are explained in more detail below by way of example with reference to the figures, in which:
FIG. 1 shows a diagram of a vehicle having a field-of-view display;
FIG. 2 shows a diagram of a traffic scene from the driver's perspective focused on a vehicle traveling ahead;
FIG. 3 shows a diagram of a traffic scene from the driver's perspective focused onto the field-of-view display;
FIG. 4 shows a diagram of a field-of-view display with a control device for operating the field-of-view display according to an embodiment of the invention;
FIG. 5 shows a diagram of an autostereoscopic field-of-view display with a combiner according to an embodiment of the invention;
FIG. 6 shows a diagram of an autostereoscopic windshield display according to an embodiment of the invention;
FIG. 7 shows a schematic diagram of the relationship between image offset and convergence angle in an autostereoscopic field-of-view display according to an embodiment of the invention;
FIG. 8 shows a schematic diagram of the relationship between image offset and virtual image distance in an autostereoscopic field-of-view display according to an embodiment of the invention;
FIG. 9 shows a diagram of a vehicle having a field-of-view display according to an embodiment of the invention;
FIG. 10 shows a diagram of a traffic scene from the driver's perspective in a field-of-view display according to an embodiment of the invention;
FIG. 11 shows a diagram of a comfort zone in the relationship between virtual image distance and projection distance in a field-of-view display according to an embodiment of the invention;
FIG. 12 shows a diagram of a reduced comfort zone in the relationship between virtual image distance and projection distance in a field-of-view display according to an embodiment of the invention;
FIG. 13 shows a flow chart of a method for operating an autostereoscopic field-of-view display according to an embodiment of the invention; and
FIG. 14 shows a schematic diagram of the operating principle of a method for operating an autostereoscopic field-of-view display according to an embodiment of the invention.
Detailed Description
In the following description of advantageous embodiments of the invention, the same or similar reference numerals are used for elements shown in different figures and functioning similarly, wherein repeated descriptions of these elements are omitted.
Fig. 1 shows a schematic diagram of a vehicle 100 having a field-of-view display 102. The field-of-view display 102 is designed as a windshield display 102. In this case, a real image 104 of the information to be displayed is generated in the region of a windshield 106 of the vehicle 100. The image 104 is generated in the field of view of an observer 108 of the vehicle 100, here the driver 108.
In a windshield display 102, or in a transparent display 102 integrated separately in the vehicle and overlapping the driving scene, the image 104 shown on the display 102 appears as a real image 104 in the region of the windshield 106 or of a separate glass or plastic pane, and not, as in a HUD, as a virtual image at a greater distance (more than 1.8 m). The eyes of the driver 108 therefore have to refocus (accommodate) and rotate inward (converge) more strongly when switching from the driving scene to the windshield display 102 to read image content than with a HUD. When the driver 108 observes the driving scene, the optical axes of his eyes are almost parallel, so that target objects in the near range (within 10 meters) appear doubled. If such a nearby target object is the image content 104 shown on the transparent display 102, it can appear doubled and have a disturbing effect.
Fig. 2 shows a diagram of a traffic scene 202 from the driver's perspective focused on a vehicle 200 traveling ahead. The traffic scene 202 is shown from the perspective of the driver in Fig. 1, who sees the traffic scene 202 ahead through the windshield 106. Here, the driver's right and left eyes are focused on the vehicle 200 traveling ahead. Nevertheless, both eyes also perceive the image 104 in the windshield 106, so the image 104 appears doubled, as a right virtual image 204 and a left virtual image 206.
Fig. 3 shows a diagram of the traffic scene 202 from the driver's perspective focused onto the field-of-view display 102. The traffic scene 202 corresponds to that in Fig. 2. In contrast, the right and left eyes are focused here on the image 104. Nevertheless, both eyes also perceive the vehicle 200 traveling ahead, which therefore appears doubled.
Fig. 1 shows the windshield display 102 with exemplary image content 104, here a digital speed display 104. If the driver 108 observes the driving scene, i.e. distances in the range of approximately 10 m to 500 m, the image content 104 appears doubled, as shown in Fig. 2. If, on the other hand, the driver observes the image content 104 of the windshield display 102, the driving scene 200 behind it appears doubled, as shown in Fig. 3. This effect forces the eyes to switch frequently between far and near vision with the windshield display 102, which is tiring in the long run.
If image content 104 is displayed on the windshield display 102, it typically appears doubled when the driver 108 observes a target object 200 of the driving scene 202, here the vehicle 200 driving ahead. This makes it difficult to recognize the image content 104 of the windshield display 102 in the peripheral field of view. Conversely, if the driver observes image content shown (monoscopically) on the windshield 106, he sees the driving scene 202 doubled.
With a transparent display 102 arranged near the main field of view, the small image distance gives rise to the following problem: when the driver 108 looks, for example, at a vehicle 200 traveling ahead, he sees the image content 104 of the transparent display 102 doubled, the images 104 being laterally offset from one another owing to the convergence. This is particularly disturbing when the image content 104 on the transparent display 102 lies close to the driving scene 202 and cannot be read properly in the peripheral field of view, that is to say without looking at it directly, because the image content 104 appears doubled.
Fig. 4 shows a schematic diagram of the field-of-view display 102 with a control device 400 for operating the field-of-view display 102 according to an embodiment of the invention. As in Fig. 1, the field-of-view display 102 is installed in a vehicle 100. In contrast, the field-of-view display 102 is designed here as an autostereoscopic head-up display 102. The head-up display 102 projects a right image 402 for the right eye of the observer 108 and a left image 404 for the left eye of the observer 108. The projected images 402, 404 are generated on an image generator 406 of the head-up display 102 and reflected into the windshield 106 via a lens system 408. The windshield 106 directs the images 402, 404 into a viewing area 410 in which the eyes of the observer 108 are located. The images 402, 404 are prepared by the control device 400 so that the image content 412 appears to the observer 108 stereoscopically at a predetermined distance. For this purpose, the position and viewing direction of the observer's eyes are detected by a detection device 414. From the eye positions and the viewing directions, or visual axes, the offset between the right image 402 and the left image 404 is determined in order to obtain the stereoscopic effect.
An autostereoscopic head-up display 102 (HUD) is proposed which has an adaptive display region that takes into account the driver's 108 current comfort zone with regard to depth perception.
The proposed solution considers a special usage scenario of the autostereoscopic head-up display 102 (HUD). An important aspect here is that the (virtual) projection distances of an LCD screen and of an autostereoscopic HUD differ significantly from one another. The HUD 102 can also be referred to here as a see-through display or field-of-view display. The display 412 is adapted to the user's fatigue state. For this purpose, the viewing direction and the inward rotation of the user's eyes are detected by means of an eye-tracking device 414, and the temporal change of the gaze adjustment and its duration are determined and taken into account in order to assess whether the driver 108 has, or could have, problems with the display symbols on the display unit 412.
The head-up display 102 (HUD) serves to display driving-related information 412 (e.g. speed, navigation information, warning indications and more) in the viewing area 410 of the driver 108. The virtual image 412 in which the information is shown overlaps the real environment.
In principle, the HUD 102 comprises a light source, an image generating unit 406 and a lens system 408 that performs the imaging. The light leaving the system falls onto the windshield 106, or onto a combiner pane mounted behind it, is partially reflected by it and reaches the eyes 410 of the driver 108, who perceives a virtual image 412 in front of him at a distance defined by the lens system 408 and at a magnification likewise determined by this lens system.
As the light source, for example, LEDs or laser diodes can be used, which illuminate the LCD 406 providing the image content from behind. Alternatively, a projector can be used; in the future, small projectors based on DMD, LCoS or laser technology may increasingly be employed here. A further aim is to realize contact-analog displays. Such contact-analog displays occupy a very large structural volume in biocular embodiments, in which both eyes view the same image. A binocular variant, in which the two eyes view slightly different images 402, 404, is advantageous in particular with regard to robustness, installation space and cost, and additionally offers stereoscopic depth perception (3-D effect).
With an autostereoscopic HUD 102 (asHUD), the driving-related information 412 can be displayed in 3-D without the driver 108 needing additional aids such as shutter or polarizing glasses. So that the driver 108 can view a correct image 412 at any time, even when moving his head, the asHUD 102 requires a head-tracking system 414 that can determine the head position and eye positions 410 of the driver 108, and a corresponding tracker. A system overview of the basic architecture of the asHUD system 102 is shown in Fig. 4.
Fig. 5 shows a diagram of an autostereoscopic field-of-view display 102 with a combiner 500 according to an embodiment of the invention. The field-of-view display 102 is installed in a vehicle as in Figs. 1 to 4. The image generator 406 is illuminated by a light source, and a right and a left image are generated on it. The combiner 500 reflects the light coming from the image generator 406 toward the eyes of the observer 108. The combiner 500 is arranged here at the lower edge of the windshield 106 and thus lies below the visual axes 502 of the eyes of the observer 108 when the driver looks at the road ahead. In other words, the combiner 500 is arranged in the peripheral field of view of the observer 108. For the observer 108, a virtual image is formed in the projection plane 504 in the region of the windshield 106.
Fig. 5 shows an exemplary embodiment in which the image is generated by a directionally emitting holographic projection screen 406 recessed into the instrument panel and is imaged by a flat glass pane 500 (or combiner pane). Such an embodiment has the advantage that the system 102 can be used independently of the windshield 106 and that the image is seen as a virtual image, for example behind the windshield 106, which the user perceives as comfortable owing to the enlarged image distance.
In a further embodiment, not shown, the image is projected directly onto a separate combiner pane, which for this purpose has, for example, a photopolymer layer with a holographic scattering function. The advantage of an enlarged image distance compared with the windshield display is lost in this case; in return, the system is more space-saving and comparatively simple, since it consists of fewer components.
A variant of the reflected-in display (Einspiegelungsanzeige) 102 shown in Fig. 5 is also conceivable, in which the image is reflected not by a separate pane but directly by the windshield 106. An advantage of such an embodiment is that no additional components have to be installed in the instrument panel and therefore no disturbing edges of a separate combiner 500 are visible. With such an embodiment, an image rated as acceptable by users is formed even with slight windshield curvature.
The stereoscopically viewable display 102 can also be implemented as a combiner display 102, as shown in this figure. A stereoscopic display based on the holographic projection screen 406 can likewise be realized by such a combiner display 102. In this case, the holographic projection screen 406 is opaque and is imaged by the planar combiner 500. The image of the stereoscopic display 406 appears as a virtual image 504. This has the advantage of a somewhat larger image distance from the driver's eyes than in the embodiments described below.
FIG. 6 shows a diagram of an autostereoscopic windshield display 102 according to an embodiment of the invention. The windshield display 102 can also be referred to as a field-of-view display 102. In contrast to the field-of-view display in Fig. 5, the right and left images are generated in a projection region 600, or projection area, integrated into the windshield 106. For this purpose, two projectors 602 each project one of the images from below into the projection region 600. In the projection region 600, a holographic film 604 is arranged in or on the windshield 106. The film 604 forms a right viewing area 606 for the right eye and a left viewing area 608 for the left eye of the observer 108. The right image is visible in the right viewing area 606, the left image in the left viewing area 608. By lateral movement of the projectors 602, the viewing areas 606, 608 can follow lateral movements of the observer 108.
In other words, a stereoscopic windshield display 102 with dynamic convergence adaptation is proposed.
A preferred embodiment of the transparent display 102 is the windshield display 102 shown in Fig. 6. Here, a photopolymer film 604 performing a holographic scattering function is laminated into or onto the windshield 106. In suitable combination with two separate image projectors 602, autostereoscopic image content can be produced as shown.
Shown is an embodiment of an autostereoscopic windshield display 102. The region 600 shown in dashed lines is the usable display surface 600, which is illuminated by the two projectors 602. The holographically implemented transparent display 600 diffracts the image shown by the first projector 602 into a first eyebox 606 and the image shown by the second projector 602 into a second eyebox 608. By lateral movement of the projectors 602, the lateral positions of the eyeboxes 606, 608 can be changed so as to follow lateral head movements.
FIG. 7 shows a schematic diagram of the relationship between image offset 700 and convergence angle 702 in the autostereoscopic field-of-view display 102 according to an embodiment of the invention. Here, the field-of-view display 102 is installed in the vehicle 100, and the driver of the vehicle 100 sees a vehicle 200 traveling ahead through the field-of-view display 102. Because the driver's right eye 704 and left eye 708 are separated by the interocular distance 706, the visual axes 502 of the eyes 704, 708 form the convergence angle 702 with one another when both eyes 704, 708 are directed at the same point 710. If the right image 402 and the left image 404 are to appear stereoscopically at the point 710, the image offset 700 between the right image 402 and the left image 404 depends on the viewing distance 712 between the eyes 704, 708 and the images 402, 404, on the distance 714 between the eyes 704, 708 and the observed point 710, and on the interocular distance 706.
Here, the convergence angle 702 θ (theta) is the angle 702 between the visual axes 502 of the left eye 708 and the right eye 704, s is the distance 712 from the driver's eyes 704, 708 to the windshield 106, and d is the difference 700, i.e. the offset 700 with which the images 402, 404 are shown stereoscopically on the windshield 106 so that the target object 710 appears at the distance 714 z.
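These quantities are linked by elementary triangle geometry. The following relations, reconstructed from the definitions above with a denoting the interocular distance 706, summarize the figure:

$$\theta = 2\arctan\!\left(\frac{a}{2z}\right), \qquad d = a\left(1 - \frac{s}{z}\right) = a - 2s\tan\!\left(\frac{\theta}{2}\right).$$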
FIG. 8 shows a schematic diagram of the relationship between image offset 700 and virtual image distance 714 in the autostereoscopic field-of-view display 102 according to an embodiment of the invention. The field-of-view display 102 corresponds essentially to one of the field-of-view displays in Figs. 4 to 6. The images 402, 404 are arranged in the same projection plane 504. As the two examples show, the same magnitude of offset 700 at the same viewing distance 712 results in different virtual image distances 714, depending on the direction of the offset.
In the first example, the right image 402 is arranged to the right of the left image. The right eye 704 views the right image 402 and the left eye 708 views the left image 404. This gives the observer the impression that the virtual image 412 is farther away than the actual images 402, 404.
In the second example, the right image 402 is arranged to the left of the left image. Again, the right eye 704 views the right image 402 and the left eye 708 views the left image 404. This gives the observer the impression that the virtual image 412 is arranged between the observer and the images 402, 404.
In other words, Fig. 8 shows the principle of stereoscopic 3-D viewing and illustrates the virtual image distance 714 (VID) and the virtual screen distance 712 (VSD) for the crossed (right) and uncrossed (left) viewing cases.
The imaging lens system determines the virtual screen distance 712 (VSD) of the system 102. The distance 714 (virtual image distance, VID) at which the driver sees the image 412 can be selected via the horizontal offset 700 d of the two single images 402, 404 on the display, i.e. at the virtual screen distance 712 VSD. The parameters virtual screen distance 712 VSD and virtual image distance 714 VID are shown in Fig. 8 for the crossed and uncrossed viewing cases. The virtual image distance 714 VID is determined by the following equation:

$$\mathrm{VID} = \frac{a \cdot \mathrm{VSD}}{a \mp d},$$
where a represents the interpupillary distance 706 of the driver. The sign in the denominator determines whether the image 412 lies behind or in front of the virtual screen distance 712 VSD: the minus sign (uncrossed case) places the virtual image behind the projection plane, the plus sign (crossed case) in front of it. The system 102 can be adapted to the interocular distance 706 so that the driver perceives the image information 412 at the desired distance 714.
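As a worked example (the numbers are assumed for illustration): with an interocular distance a = 65 mm, a virtual screen distance VSD = 2 m and an uncrossed offset d = 5 mm, the virtual image appears at

$$\mathrm{VID} = \frac{0.065 \cdot 2\,\mathrm{m}}{0.065 - 0.005} \approx 2.17\,\mathrm{m},$$

that is, slightly behind the projection plane; with a crossed offset of the same magnitude, the plus sign applies and the image appears at about 1.86 m, in front of the plane.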
Since the driver's interocular distance 706 is indispensable information that the system 102 requires in order to present the image 412 at the intended distance 714 for the respective driver, it is expedient for the system 102 to measure the driver's interocular distance 706 directly and to adapt the views 402, 404 accordingly. The driver's height, for example, is also relevant with respect to the downward viewing angle. The camera used for head tracking can therefore also be used to obtain this information and to adapt the image views 402, 404 to the driver in a personalized manner. In addition, the driver can store personal preferences regarding brightness, color, contrast or comfort zone.
FIG. 9 shows a diagram of a vehicle 100 having a field-of-view display 102 according to an embodiment of the invention. The field-of-view display 102 corresponds essentially to the displays shown in Figs. 5 and 6; here it is a windshield display 102. The right image 402 for the right eye 704 and the left image 404 for the left eye 708 are faded in (eingeblendet) for the driver 108 in the windshield 106. The images 402, 404 have an offset 700 with respect to one another because the driver 108 is viewing the driving scene in front of his vehicle 100, so that the visual axes of the eyes 704, 708 intersect behind the projection plane of the images 402, 404. The images 402, 404 are shown distorted in perspective so that an undistorted image results from the perspective of the driver 108.
The transparent display device 102 can be realized by means of a volume hologram. The vehicle 100 can have a driver observation system that detects the viewing direction of the driver 108.
In order to avoid double images, it is proposed to use a stereoscopic windshield display 102, or a separately mounted stereoscopic transparent display 102, in combination with a system for dynamic disparity adaptation (Disparitätsanpassung) based on the convergence angle of the eyes 704, 708 of the driver 108, determined in real time.
This requires a gaze-detection system that can reliably distinguish, in real time, between at least two situations: "driver 108 is viewing the driving scene" and "driver 108 is viewing the image content 402, 404 of the transparent display 102". If the driver 108 is viewing the driving scene, the offset 700 between the image contents 402, 404 shown on the windshield display 102 for the left eye 708 and for the right eye 704 is adjusted so that double images are minimized. This corresponds to shifting the image content 402, 404 stereoscopically into the viewing plane of the driver 108. If, on the other hand, the driver 108 looks at the image content 402 of the windshield display 102, the image content is not shown stereoscopically, so that the driver 108 can read the display in the usual way, like any other vehicle display.
Additional eye-position tracking is necessary to maintain the autostereoscopic effect even when the driver 108 moves his head laterally.
The image content 402, 404 shown stereoscopically on the windshield 106 can be read without double images. It can be read without the eyes 704, 708 having to perform the convergence adaptation that is required when reading a conventional vehicle display (e.g. an instrument cluster). This yields a time advantage and a comfort advantage when reading the image content 402, 404, which ultimately can improve driving safety. If the size of the image content 402, 404 of the windshield display 102 is adapted accordingly, it can also be perceived and read in the peripheral field of view, to some extent without looking at it directly. The size of the displayed content 402, 404 can then be adapted to the lower visual resolution of the eye in the peripheral field of view.
Fig. 9 shows the autostereoscopic windshield display 102, which displays the same image content 402, 404 as in Fig. 1, but stereoscopically. The left eye 708 and the right eye 704 of the driver 108 each see one of the two mutually offset representations 402, 404 of the target object. When the driver 108 views the driving scene, the autostereoscopic display does not produce a double image.
FIG. 10 shows a diagram of the traffic scene 202 from the driver's perspective in the field-of-view display 102 according to an embodiment of the invention. The traffic scene 202 corresponds essentially to that in Figs. 2 and 3. Here it is shown that, as in Fig. 9, the driver, although viewing the right image with the right eye and the left image with the left eye, sees a single undistorted virtual image 412, which appears to float between the vehicle 200 driving ahead and his own vehicle 100.
If the image content 402, 404 is shown stereoscopically on the windshield display 102, it does not appear doubled when the driver views a target object 200 of the driving scene 202, here the vehicle 200 driving ahead. In this representation, the image content 402, 404 can therefore be read in the peripheral field of view and displayed without interference.
One solution to the problem described is to display autostereoscopic image content 402, 404, as long as the driver 108 is not looking at the transparent display 102, in such a way that it can be read at least in the peripheral field of view. When the driver looks directly at the image content or the display, the autostereoscopic display is switched off, since owing to the small image distance it is not comfortable for the eyes.
Figs. 11 and 12 show diagrams of a comfort zone 1200 in the relationship between virtual image distance 714 and projection distance 712, or viewing distance 712, in a field-of-view display according to an embodiment of the invention. Fig. 11 depicts the comfort zone 1200 for a single displayed virtual image; Fig. 12 shows the comfort zone 1200 for several virtual images displayed at different distances 714. Within the comfort zone 1200, 75% of observers perceive viewing the image content as comfortable. The comfort zone 1200 begins at a minimum distance 1202 from the observer and ends at a maximum distance 1204. The underlying study is described in the publication "Exploring Design Parameters for a 3D Head-Up Display" mentioned at the outset, in which the boundaries of the comfort zone, given here only by way of example, are examined in detail.
The comfort zone 1200 grows with increasing viewing distance 712. For images shown simultaneously at several distances, the comfort zone 1200 is smaller than for a single image. In addition, with several viewed images, the minimum distance 1202 moves to a larger image distance 714 relative to the single image, while the maximum distance 1204 becomes smaller.
Fig. 11 shows the comfort zone 1200 as a function of virtual image distance 714 VID and virtual screen distance 712 VSD for the display of a single virtual 3-D target object. The comfort zone 1200 applies to 75% of subjects; in the region adjoining it, 50% of subjects are still within their comfort zone.
Fig. 12 shows the comfort zone 1200 as a function of virtual image distance 714 VID and virtual screen distance 712 VSD for the display of three virtual 3-D target objects (front, middle, back). The middle region 1200 represents the comfort zone of 75% of subjects; in the adjoining region, 50% of subjects are within their comfort zone.
In the publication "Exploring Design Parameters for a 3D Head-Up Display" mentioned at the outset, the comfort zone 1200 is described in particular by example values of the interrelationship between virtual image distance VID 714 and virtual screen distance VSD 712 for HUD applications. The front and rear limits 1202, 1204 of the comfort zone 1200 are also elaborated there, and systems with several (virtual) projection distances 714 are likewise a subject of that work.
Apart from the relationship reproduced by the formula presented with Fig. 8, the 3-D perception of each person is individually different. Each person therefore has a personal region within which the virtual image distance 714 VID can be varied at a defined virtual screen distance 712 VSD while comfortable 3-D viewing remains possible. This region can be referred to as the comfort zone 1200. The comfort zone 1200 is essentially determined by the accommodation-convergence conflict. If the comfort zone 1200 is exceeded, the driver can experience discomfort and complaints (e.g. eye pain, headaches), and the desired 3-D perception may no longer be achieved.
Any kind of physical discomfort, as well as a loss of perception of the driving-related information shown by the asHUD, can constitute a risk in road traffic and endanger not only the safety of the driver but also that of all other road users.
In the solution proposed here, an autostereoscopic HUD with an automatically adapted display region 1200 is proposed. The display region is adapted not only to the respective driver but also to his current physical and/or mental condition. This prevents 3-D representations from lying outside the driver's (current) comfort zone 1200, ensures comfortable perception for the driver, and avoids eye pain, headaches or discomfort, which increases driving safety and driving pleasure.
The asHUD uses the head-tracking data and further information captured by the head-tracking camera in order to adaptively match the size of the display region 1200 in which image content is shown, and the number of depth planes, to the driver's current comfort zone 1200. This reduces the negative effects of the 3-D display, which can cause symptoms such as headaches, eye pain or discomfort, and it reduces the risk that the image information cannot be fused. It also permits longer periods of use: without automatic adaptation to the driver, the system would have to be switched off when the comfort zone 1200 is restricted, for example by fatigue, because the display would then no longer be perceived as comfortable.
Furthermore, the driver tires less, since the display is adapted to him in such a way that it fatigues him as little as possible. The display region 1200 can be adjusted individually for crossed and uncrossed viewing, i.e. the front limit 1202 and the rear limit 1204 of the display region 1200 can be changed independently of one another. A driver with a particularly large comfort zone 1200 can also exploit it. The system can adjust itself to different drivers and conditions, which leads to an improved user experience, improved safety and improved comfort.
The comfort zone 1200 is strongly correlated with the selected virtual screen distance 712 VSD. From the virtual screen distance 712 VSD there results a certain region 1200 of virtual image distances 714 VID in which information can be displayed so that the driver perceives it comfortably. This region 1200 was examined as a function of the virtual screen distance 712 VSD; the results are shown qualitatively in Figs. 11 and 12, and more precise values are given in "Exploring Design Parameters for a 3D Head-Up Display" (Broy, Höckh, Frederiksen et al., Pervasive Displays 2014). The comfort zone 1200 denotes the region perceived as comfortable by 75% of subjects. Accordingly, depending on the virtual screen distance 712 VSD used in the asHUD, a correspondingly adapted region 1200 between a lower limit 1202 VIDmin and an upper limit 1204 VIDmax of the virtual image distance 714 VID should be selected for displaying information. In systems that operate with several virtual screen distances 712 VSD, or even with a variable virtual screen distance 712 VSD, the region 1200 of virtual image distances 714 VID used for the display must be adapted accordingly whenever the virtual screen distance 712 VSD is changed.
The comfort zone 1200 depends, of course, not only on the virtual screen distance 712 VSD but also on the number of virtual image planes with different virtual image distances 714 VID that are used for the display. The asHUD presented here therefore also adapts the comfort zone 1200 as a function of the image content, in particular of the number of virtual image planes used.
Figs. 11 and 12 show only the order of magnitude and the general behavior of the comfort zone as a function of the VSD. Specific values depend on the specific case; details of the underlying study can be taken from the publication mentioned above.
Moreover, the empirically determined values shown in Figs. 11 and 12 do not hold for every driver; there are large individual differences. In addition, a driver's comfort zone 1200 can vary over time. Factors such as fatigue, contrast, age and visual defects play a major role here. In a further embodiment of the system, therefore, not only the virtual screen distance 712 VSD and the number of virtual planes are taken into account as described above, but also the individual, time-dependent state of the driver.
In a simple embodiment, the driver can give the system feedback whenever his comfort zone 1200 is violated, that is to say whenever he perceives the current display as uncomfortable. The input device can comprise, for example, a rotary knob, operating elements, steering-wheel buttons, a touchscreen, or voice or gesture control. It can be a simple input device with which the driver can only reduce or enlarge the comfort zone 1200 used, or a more complex one that allows the driver to directly adjust the front limit 1202 and the rear limit 1204 of the virtual image distance 714 VID and/or the maximum number of depth planes.
In another embodiment, the system is more complex: it automatically detects when the driver's comfort zone 1200 is exceeded and adjusts the comfort zone automatically.
Observing the driver allows the display region to be matched to the driver's current comfort zone 1200. For this purpose, information is obtained from the camera images of the head-tracking system.
General fatigue can be detected from changing steering behavior or from camera images of the driver's face. For example, the gaze, the eyelid closure frequency, the eyelid closure duration, the percentage of eye closure or yawning can be considered here. The driver's fatigue state can therefore be detected from the information of the steering-angle sensor and by means of the head-tracking camera. If fatigue is present, a reduced representation with a smaller display region and fewer depth planes is shown.
The reduction of the display region can be performed, for example, in steps (Schritte) of 20% at the front limit 1202 (VIDmin is increased by 20%) and of 10% at the rear limit 1204 (VIDmax is reduced by 10%); at the same time, the number of depth planes used can be reduced by one per step. After each step, the driver's behavior is re-evaluated and, if necessary, a further step is performed. If a state is reached in which the driver can again use the HUD display without problems, this state is initially maintained. If continued observation of the driver leads to the conclusion that reading the display is becoming increasingly easy for him, the system can enlarge its display region 1200 again in the respective opposite direction in the same steps.
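The step scheme described here can be pictured as a small control routine. The following Python sketch is an illustration only; the class and function names are hypothetical, and the step sizes are the example values from the text:

```python
from dataclasses import dataclass

@dataclass
class ComfortZone:
    vid_min: float      # front limit 1202, VIDmin (m)
    vid_max: float      # rear limit 1204, VIDmax (m)
    depth_planes: int   # number of depth planes in use

def reduce_step(z: ComfortZone) -> ComfortZone:
    """One reduction step: VIDmin up by 20%, VIDmax down by 10%,
    one depth plane fewer."""
    return ComfortZone(z.vid_min * 1.20, z.vid_max * 0.90,
                       max(1, z.depth_planes - 1))

def enlarge_step(z: ComfortZone, default: ComfortZone) -> ComfortZone:
    """One enlargement step in the opposite direction, never beyond
    the driver's stored default zone."""
    return ComfortZone(max(default.vid_min, z.vid_min / 1.20),
                       min(default.vid_max, z.vid_max / 0.90),
                       min(default.depth_planes, z.depth_planes + 1))

def adapt(z, default, driver_strained, reading_is_easy):
    """Re-evaluate after each step: reduce while the driver struggles,
    enlarge again once reading the display has become easy."""
    if driver_strained:
        return reduce_step(z)
    if reading_is_easy:
        return enlarge_step(z, default)
    return z  # otherwise keep the current state
```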
The proposed system not only determines the driver's viewing direction by means of eye tracking, but also infers the convergence plane of the driver's visual axes from the degree of inward rotation of the two eyes. The duration of observation of image content and its change over time can thus be identified. To perceive image content, not only the viewing direction but also the convergence plane of the visual axes must match the displayed target object. If the time until the driver reaches this stable observation state gradually increases, increasing difficulty in fusing the images can be inferred. The system stores how long it takes, after the driver looks at a displayed target object, until viewing direction and convergence plane match, and for how long the driver then observes the target object. If this time is atypically long for the respective driver, exceeds a previously defined absolute value, or increases significantly over the period of use, the system reacts as described above and reduces the comfort zone 1200 and the number of depth planes used.
If the driver is nevertheless unable to fuse the images, the 3-D mode is switched off. The system then operates as a 2-D HUD and can use a perspective display. After the next break in driving, or at the driver's request, the 3-D mode is switched on again; for this purpose, the driver can use the input device described above.
The system correlates the data obtained from observing the driver with the image content shown at any given time. From this comparison it determines which information currently interests the driver the most. This information can then be arranged, for example, in the center of the field of view and, as long as it is not shown contact-analogously, in the center of the display region, it being possible, for example, to evaluate observation intervals of one minute each and to react to them.
In summary, the system proposed here can adapt the depth position (Tiefenposition) 714 of the image content, arrange the depth planes used more narrowly and reduce their number whenever all of the above information indicates to the system that the image content currently lies outside the region 1200 that is comfortable for the driver. The goal is always to use only the driver's current individual comfort zone 1200. If, for example, the driver has been on the road for a long time and it has additionally become dark, the driver becomes fatigued and his comfort zone 1200 becomes smaller. It can then become increasingly difficult for the driver to recognize information shown at a small virtual image distance 714 VID, and he needs more time to read it. The system reacts to this and increases the minimum virtual image distance 1202 VIDmin used until it can determine that the driver no longer has difficulties. The result is an asHUD with an automatically adapted display region.
FIG. 13 shows a flow chart of a method 1500 for operating an autostereoscopic field-of-view display according to an embodiment of the invention. The method 1500 has a step 1502 of adjusting the offset between the right image and the left image of the field-of-view display. The offset is adjusted using the convergence angle between the right visual axis of the right eye and the left visual axis of the left eye of an observer of the field-of-view display. When the visual axes intersect in the projection plane of the images, the images are adjusted without offset in step 1502.
In one embodiment, the method 1500 has a determining step 1504 comprising several sub-steps: in a first sub-step, the viewing direction of the observer is determined; in a second sub-step, the convergence angle; in a third sub-step, the eye position.
In the determination of the viewing direction, a distinction is made as to whether the observer is looking into the distance or at the field-of-view display. In the determination of the convergence angle, the visual axes of the right and left eyes are evaluated taking into account the interocular distance. In the determination of the eye position, the interocular distance and the position of the eyes relative to the field-of-view display are determined. When the observer looks into the distance, the offset is calculated using the convergence angle in a calculation step 1506, and the images are displayed stereoscopically in the adjustment step 1502. When the observer looks at the field-of-view display, the images are displayed without offset, i.e. monoscopically (monoskopisch), in the adjustment step 1502.
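The case distinction of steps 1502 to 1506 can be summarized in a short sketch (illustrative Python with hypothetical names; the offset formula follows the geometry of Figs. 7 and 8):

```python
import math

def select_offset(a, s, theta, looking_at_display, eps=0.01):
    """Select the image offset d for one frame.

    a: interocular distance (m), s: distance to the projection plane (m),
    theta: measured convergence angle (rad),
    looking_at_display: gaze classification from the eye tracker.
    """
    if looking_at_display:
        return 0.0                            # monoscopic, no offset
    z = a / (2.0 * math.tan(theta / 2.0))     # fixation distance
    if abs(z - s) < eps:
        return 0.0                            # axes meet in the plane
    return a * (1.0 - s / z)                  # stereoscopic offset d
```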
In a further sub-step of the calculation step 1506, the positions of the images on the field-of-view display are calculated in order to compensate for head movements of the observer.
In the adjustment step 1502, the image content is also prepared for the field-of-view display.
The flow chart shows exemplary features of a display system according to the invention. A driver observation system, not described in detail, detects the viewing direction, the convergence angle θ described above, and the driver's eye position. If the driver looks at the transparent display, its content is shown monoscopically. If the driver observes the driving scene, the content of the transparent display is shifted into the driving scene by adapting the offset according to the formula given above,

$$d = a - 2s\tan\!\left(\frac{\theta}{2}\right),$$

in order to eliminate the double images that would otherwise occur. Furthermore, the driver's eye position is determined in order to dynamically adapt the position of the image content shifted stereoscopically into the driving scene when the driver's head moves laterally, in accordance with the depth cues and the motion parallax.
Fig. 14 shows a schematic representation of the operating principle of a method for operating an autostereoscopic field-of-view display 102 according to an embodiment of the invention. The field-of-view display provides a right image 402 for the right eye and a left image 404 for the left eye of the observer 108, i.e. the driver 108. The observer 108 is recorded by a camera 414 of the driver monitoring system. The camera 414 provides a camera image 1600, i.e. video information 1600. The camera image 1600 is evaluated in the control device 400. In a first evaluation device 1602, the camera image 1600 is evaluated in order to match the image content to the field-of-view display 102. In particular, the eye position 1604 and the visual axes 1606 of the eyes are evaluated here and provided for further processing.
The camera image 1600 is evaluated in a second evaluation device 1608 in order to identify a comfort region 1610 of the observer 108. In this case, the load state of the observer 108 is determined by means of physical reactions. The comfort region 1610 is associated with the load state.
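A minimal sketch of how such a load state might be mapped to a comfort-region scaling, using the eyelid-closure frequency and duration mentioned in the claims; all threshold values are invented for illustration:

```python
def comfort_scale(blink_rate_hz: float, mean_closure_s: float) -> float:
    """Return a factor between 0 and 1 that shrinks the comfort region
    as fatigue indicators rise (1.0 = alert, smaller = fatigued)."""
    fatigue = 0.0
    if blink_rate_hz > 0.5:      # unusually frequent eyelid closures
        fatigue += 0.5
    if mean_closure_s > 0.25:    # unusually long eyelid closures
        fatigue += 0.5
    return max(0.0, 1.0 - fatigue)
```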
Using the eye position 1604 and the visual axes 1606, the misalignment 700 between the right image 402 and the left image 404 is determined in a means 1612 for determining. The misalignment 700 between the images is adjusted in a means 1614 for adjusting by shifting the displayed image content to a position determined by the misalignment. Right image information 1616 and left image information 1618 are provided here.
In a means 1620 for matching, the right and left image information 1616, 1618 is matched using the comfort region 1610. If the misalignment 700 lies outside the comfort region, the misalignment is limited accordingly. The means 1620 provides matched right image information 1622 and matched left image information 1624 for the field-of-view display 102.
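The limiting described here amounts to clamping the misalignment to the current comfort region; a minimal sketch, assuming a symmetric bound around zero:

```python
def limit_to_comfort(misalignment_m: float, comfort_limit_m: float) -> float:
    """Clamp the stereoscopic misalignment so that the displayed depth
    stays inside the observer's current comfort region."""
    return max(-comfort_limit_m, min(misalignment_m, comfort_limit_m))
```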
A control loop (Regelschleife) is thus created, in which the reaction of the observer 108 can be used directly to regulate the comfort region 1610.
In other words, fig. 14 shows how the approach proposed here is embedded in the operating principle of the autostereoscopic head-up display 102.
More complex embodiments can here be combined with simpler embodiments. The system 400 operates automatically, but the driver can not only switch the system off but, as in the simpler embodiments, can also adapt it manually to his wishes.
In another embodiment, the automatic system is designed as a learning system, so that it recognizes and stores individual drivers together with their typical adjustment and fatigue behavior. The ideal adaptation can thus always be performed faster and more smoothly.
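As an illustrative sketch of such a learning system (the structure and fields are hypothetical; the patent only states that drivers and their typical behavior are recognized and stored):

```python
from dataclasses import dataclass

@dataclass
class DriverProfile:
    """Learned per-driver defaults (hypothetical fields)."""
    preferred_min_vid_m: float = 3.0   # typical preferred nearest depth plane
    fatigue_onset_min: float = 90.0    # typical driving time until fatigue

profiles: dict[str, DriverProfile] = {}

def profile_for(driver_id: str) -> DriverProfile:
    """Return the stored profile, creating defaults for unknown drivers."""
    return profiles.setdefault(driver_id, DriverProfile())
```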
A schematic diagram of the operating principle of the asHUD 102 with comfort-zone matching is shown.
The embodiments described and shown in the drawings have been chosen by way of example only. The different embodiments can be combined with each other completely or with regard to individual features. An embodiment may also be supplemented by features of another embodiment.
Furthermore, the method steps set forth herein may be performed repeatedly and in an order other than that described.
If an embodiment comprises an "and/or" connection between a first feature and a second feature, this is to be understood as meaning that the embodiment has not only the first feature but also the second feature according to one embodiment and either only the first feature or only the second feature according to another embodiment.

Claims (11)

1. Method (1500) for operating an autostereoscopic field display (102) for a vehicle (100), wherein the method (1500) comprises the following steps:
adjusting (1502) a misalignment (700) between a right image (402) and a left image (404) of the field-of-view display (102) using a convergence angle (702) between a right visual axis of a right eye (704) and a left visual axis of a left eye (708) of an observer (108) of the field-of-view display (102), wherein the right image (402) and the left image (404) are adjusted without misalignment when the right and left visual axes intersect in a projection plane (504) of the right image (402) and the left image (404).
2. The method (1500) of claim 1, having a step (1506) of determining the positions of the right image (402) and the left image (404) in the projection plane (504) using the visual axes and the eye position of the observer (108), wherein the determined positions are also adjusted in the adjusting step (1502).
3. The method (1500) according to any of the preceding claims, having the step of reading in eye information from an eye detection device (414) of the vehicle (100), which is designed to detect the right eye (704) and the left eye (708) of the observer (108), wherein the eye positions are represented by a right eye position value and a left eye position value of the eye information, and the right visual axis and the left visual axis are represented by a right viewing direction value and a left viewing direction value of the eye information, wherein in the step of adjusting (1502) the convergence angle (702) is determined from the viewing direction values and the eye position values.
4. The method (1500) according to claim 1 or 2, wherein in the step of adjusting (1502) the misalignment (700) is adjusted within a comfort zone (1200) associated with the observer.
5. The method (1500) of claim 4, having a step of matching the comfort region (1200), wherein the comfort region (1200) is matched in response to an input by the user (108).
6. The method (1500) of claim 5, wherein in the step of matching, the comfort region (1200) is matched in relation to a number of depth planes to be displayed, wherein the comfort region (1200) is reduced when the number of depth planes increases.
7. The method (1500) according to claim 5, wherein in the step of matching the comfort region (1200) is changed in relation to the user, wherein the comfort region (1200) is reduced when the observer (108) is tired.
8. The method (1500) according to claim 7, wherein in the step of matching fatigue of the observer (108) is identified using eye information.
9. The method (1500) according to claim 8, wherein eyelid closure frequency and/or eyelid closure duration are evaluated.
10. Control device (400) designed to perform all the steps of the method (1500) according to any one of the preceding claims.
11. A machine-readable storage medium having stored thereon a computer program arranged to perform all the steps of the method according to any of the preceding claims 1 to 9.
CN201610163253.XA 2015-03-23 2016-03-22 Method and control device for operating an autostereoscopic field display for a vehicle Expired - Fee Related CN105988220B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015205167.6A DE102015205167A1 (en) 2015-03-23 2015-03-23 Method and control device for operating an autostereoscopic field of view display device for a vehicle
DE102015205167.6 2015-03-23

Publications (2)

Publication Number Publication Date
CN105988220A CN105988220A (en) 2016-10-05
CN105988220B true CN105988220B (en) 2020-09-15

Family

ID=56889647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610163253.XA Expired - Fee Related CN105988220B (en) 2015-03-23 2016-03-22 Method and control device for operating an autostereoscopic field display for a vehicle

Country Status (2)

Country Link
CN (1) CN105988220B (en)
DE (1) DE102015205167A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110024381A (en) * 2016-12-01 2019-07-16 夏普株式会社 Display device and electronics rearview mirror
DE102017213654A1 (en) * 2017-08-07 2019-02-07 Bayerische Motoren Werke Aktiengesellschaft User interface, door operating module, roof control module and means of transport for displaying user-facing control element labels
JP6968676B2 (en) * 2017-12-06 2021-11-17 矢崎総業株式会社 Display device for vehicles
DE102018213269A1 (en) 2018-08-08 2020-02-13 Bayerische Motoren Werke Aktiengesellschaft Method for operating a field of view display device for a motor vehicle
DE102018215266B4 (en) 2018-09-07 2024-04-18 Audi Ag Motor vehicle with a display device for providing a three-dimensional display
DE102019206358B4 (en) * 2019-05-03 2022-04-21 Audi Ag Camera device and method for generating an image of an environment
CN110262049A (en) * 2019-06-26 2019-09-20 爱驰汽车有限公司 Naked eye stereo-picture display component, windshield and automobile using it
CN110579879B (en) * 2019-09-17 2021-11-02 中国第一汽车股份有限公司 Vehicle-mounted head-up display system and control method thereof
WO2023208907A1 (en) 2022-04-27 2023-11-02 Saint-Gobain Glass France Composite pane with a first reflective layer and a second reflective layer
CN117320878A (en) 2022-04-27 2023-12-29 法国圣-戈班玻璃公司 Composite glass pane with reflective layer and hologram element
CN115314699A (en) * 2022-06-27 2022-11-08 中国第一汽车股份有限公司 Control method and control device of display screen and vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314315A (en) * 2010-07-09 2012-01-11 株式会社东芝 Display device, image data generating device, image data generation program and display packing
DE102014001710A1 (en) * 2014-02-08 2014-08-14 Daimler Ag Device for augmented representation of virtual image object in real environment of vehicle, e.g. head-up-display, reflects partial images by disc, so that virtual image object is output as virtual depth image in real environment
CN104253990A (en) * 2013-06-28 2014-12-31 罗伯特·博世有限公司 Method and device for displaying three-dimensional image of imager of vehicle view display device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19704740B4 (en) 1997-02-13 2006-07-13 Eads Deutschland Gmbh Holographic screen and manufacturing process

Also Published As

Publication number Publication date
CN105988220A (en) 2016-10-05
DE102015205167A1 (en) 2016-09-29

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20200915