JP5813243B2 - display device - Google Patents


Info

Publication number
JP5813243B2
Authority
JP
Japan
Prior art keywords
image
combiner
touch
operation
control unit
Prior art date
Legal status
Active
Application number
JP2014537959A
Other languages
Japanese (ja)
Other versions
JPWO2014049787A1 (en)
Inventor
哲也 藤栄
Original Assignee
パイオニア株式会社
Priority date
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to PCT/JP2012/074943 (published as WO2014049787A1)
Application granted
Publication of JP5813243B2
Publication of JPWO2014049787A1
Application status: Active


Classifications

    • G02B 27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B 27/01: Head-up displays
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0418: Control or interface arrangements specially adapted for digitisers, for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G02B 2027/0187: Display position adjusting means not related to the information to be displayed, slaved to motion of at least a part of the body of the user, e.g. head, eye

Description

  The present invention relates to the technical field of causing an image to be visually recognized as a virtual image.

  Conventionally, display devices such as head-up displays that allow an image to be visually recognized as a virtual image are known. For example, Patent Document 1 describes a technique in which a touch panel is formed on the windshield at positions corresponding to telephone push-button images that a head-up display causes to appear in front of and beyond the windshield.

JP 7-307775 A

  However, in the technique described in Patent Document 1, the touch panel is formed on the windshield, which is somewhat far from the driver, so a touch operation is difficult to perform. For example, while driving, a touch operation basically cannot be performed from the viewpoint of safety.

  The above is one example of the problems that the present invention seeks to solve. A main object of the present invention is to enable a touch operation to be performed easily in a display device that allows an image to be visually recognized as a virtual image.

In a first aspect of the present invention, a display device includes: a projection unit that projects light constituting an image; a combiner that has a curved surface with a predetermined curvature onto which the light of the projection unit is projected, reflects the light at the curved surface, and causes a user to visually recognize the image as a virtual image; a contact position acquisition unit that is provided along the curved surface of the combiner and acquires a position that the user has contacted for an operation related to the image; and a determination unit that determines the operation of the user based on the position acquired by the contact position acquisition unit and information related to the curvature.

FIG. 1 shows a schematic configuration of a head-up display according to the present embodiment. FIG. 2 is a diagram for explaining image correction. FIG. 3 is a diagram for explaining a method of obtaining the second touch reaction area. FIG. 4 shows the processing flow according to the present embodiment. FIG. 5 shows a schematic configuration of a system according to a modification.

  In one aspect of the present invention, a display device includes: a projection unit that projects light constituting an image; a combiner that reflects the light projected from the projection unit and causes a user to visually recognize the image as a virtual image; operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the touch operation information.

  In the above display device, the projection unit projects the light constituting the image, and the combiner reflects the light projected from the projection unit so that the user (for example, the driver of a moving body) visually recognizes the image as a virtual image. The operation acquisition unit acquires information on a touch operation performed on the combiner by the user, for example from a touch panel arranged integrally with the combiner. The determination unit then determines the user's operation based on the touch operation information acquired by the operation acquisition unit. In this display device, compared with, for example, the technique described in Patent Document 1, the touch operation is performed on the combiner, which is arranged at a position closer to the user. Therefore, the user can easily perform a touch operation.

  In one aspect of the above display device, the display device further includes correction control means that corrects the shape of the image and causes the projection unit to project light constituting the corrected image, and the determination means determines the user's operation based on at least the touch operation information and the correction amount of the image by the correction control means. By taking this image correction amount into account, the user's operation can be determined accurately.

  In another aspect of the above display device, the combiner is configured in a concave shape having a predetermined curvature and recessed toward the projection unit, and the determination means determines the user's operation based on the touch operation information, the correction amount of the image by the correction control means, and information related to the curvature of the combiner. By considering not only the image correction amount but also the curvature of the combiner, the user's operation can be determined with higher accuracy.

  Preferably, in the above display device, the correction control means determines the correction amount of the image so that distortion of the virtual image visually recognized by the user is corrected. The user can thereby perform a touch operation while visually recognizing an undistorted virtual image.

  Preferably, in the above display device, the combiner is configured so that its tilt angle can be adjusted, and the correction control means determines the correction amount of the image according to the tilt angle of the combiner. Image correction corresponding to the tilt angle of the combiner can thereby be performed appropriately.

  In another aspect of the present invention, a display method is executed by a display device having projection means for projecting light constituting an image and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image. The display method includes an operation acquisition step of acquiring information on a touch operation performed on the combiner by the user, and a determination step of determining the user's operation based on the touch operation information.

  In still another aspect of the present invention, a program is executed by a display device that has a computer, a projection unit that projects light constituting an image, and a combiner that reflects the light projected from the projection unit and causes a user to visually recognize the image as a virtual image. The program causes the computer to function as: an operation acquisition unit that acquires information on a touch operation performed on the combiner by the user; and a determination unit that determines the user's operation based on the touch operation information.

  The above program can be suitably handled in a state recorded on a recording medium.

  Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.

[Device configuration]
FIG. 1 is a schematic configuration diagram of a head-up display 2 according to the present embodiment. As shown in FIG. 1, the head-up display 2 according to the present embodiment mainly includes a light source unit 3 and a combiner 9, and is mounted on a vehicle having a front window 25, a ceiling portion 27, a hood 28, a dashboard 29, and the like. The head-up display 2 is an example of the "display device" in the present invention.

  The light source unit 3 is installed on the ceiling portion 27 in the passenger compartment via support members 5a and 5b, and emits light constituting the image to be displayed toward the combiner 9. Specifically, based on the control of the control unit 4, the light source unit 3 generates an original image (real image) of the display image inside the light source unit 3 and emits light constituting the image toward the combiner 9, so that the virtual image "Iv" is visually recognized by the driver via the combiner 9. For example, a laser, DLP (Digital Light Processing), LCOS (Liquid Crystal On Silicon), or the like is applied to the light source unit 3 ("DLP" and "LCOS" are registered trademarks). The light source unit 3 corresponds to an example of the "projection unit" in the present invention.

  The combiner 9 is configured as a half mirror having both a reflection function and a transmission function. The combiner 9 receives the display image emitted from the light source unit 3 and reflects it toward the driver's eye point Pe, thereby causing the display image to be visually recognized as the virtual image Iv. The combiner 9 is configured in a concave shape having a predetermined curvature and recessed toward the light source unit 3, so that the driver visually recognizes the virtual image Iv as an enlargement of the display image. Furthermore, the combiner 9 has a support shaft portion 8 installed on the ceiling portion 27 and rotates about the support shaft portion 8; that is, the combiner 9 is configured so that its tilt angle can be adjusted about the support shaft portion 8. The support shaft portion 8 is installed, for example, on the ceiling portion 27 in the vicinity of the upper end of the front window 25, in other words, at the position where a sun visor (not shown) for the driver would be installed. The support shaft portion 8 may be installed in place of such a sun visor. In this embodiment, the light source unit 3 and the combiner 9 are separate bodies, but the light source unit and the combiner may be integrated. In that case as well, the combiner is attached to the light source unit via a support shaft that allows the tilt angle of the combiner to be adjusted.

  An electrostatic sheet 9a is provided on the surface of the combiner 9. The electrostatic sheet 9a is a capacitive touch panel, and outputs to the control unit 4 a signal corresponding to a touch operation by the driver. For example, the electrostatic sheet 9a outputs to the control unit 4 a signal corresponding to the position touched on the electrostatic sheet 9a (synonymous with the position touched on the combiner 9; the same applies hereinafter). The electrostatic sheet 9a has a shape that follows the curved surface of the combiner 9, and is attached to the surface of the combiner 9 onto which the light from the light source unit 3 is projected. In addition, the electrostatic sheet 9a is a transparent sheet, so the reflection function and transmission function of the combiner 9 are maintained.

  The control unit 4 is built into the light source unit 3, has a CPU, RAM, ROM, and the like (not shown), and performs overall control of the head-up display 2. In this embodiment, the control unit 4 projects light from the light source unit 3 to cause the driver to visually recognize the image as a virtual image via the combiner 9, acquires from the electrostatic sheet 9a a signal corresponding to a touch operation by the driver, and determines the driver's operation based on that signal. The image to be displayed may be generated by the control unit 4 itself, or an image generated by a device outside the head-up display 2 may be acquired. Although details will be described later, the control unit 4 corresponds to an example of the "operation acquisition unit", "determination unit", and "correction control unit" in the present invention.

  In FIG. 1, the electrostatic sheet 9a is provided on the surface of the combiner 9 onto which the light from the light source unit 3 is projected, but the electrostatic sheet 9a may instead be provided on the opposite surface of the combiner 9. In that case, the detection sensitivity can be increased for touch operations made from the side opposite to the surface onto which the light from the light source unit 3 is projected (that is, from the windshield side). Moreover, if a touch operation is performed from the windshield side, the light projected from the light source unit 3 onto the combiner 9 is not blocked.

  Further, instead of the electrostatic sheet 9a, a touch panel having both a reflection function and a transmission function may be used. In that case, if the touch panel has the same functions as the combiner 9, it is not necessary to use a separate combiner 9. Moreover, the touch panel is not limited to a capacitive system; various other well-known systems (for example, a resistive film system) can be applied.

  Furthermore, the light source unit 3 is not limited to being installed on the ceiling portion 27 as shown in FIG. 1; the light source unit 3 may instead be installed inside the dashboard 29.

[Control method]
Next, a control method performed by the control unit 4 of the head-up display 2 in the present embodiment will be described.

  In the present embodiment, the control unit 4 causes the light source unit 3 to project light constituting an image that includes an image to be touch-operated by the driver, such as a button (hereinafter referred to as a "touch image"), and determines the driver's operation on the touch image based on the signal acquired from the electrostatic sheet 9a. Specifically, the control unit 4 first obtains the area corresponding to the touch image in the image to be displayed (hereinafter referred to as the "first touch reaction area"), and then obtains the area on the combiner 9 onto which the first touch reaction area is projected (in other words, the area on the combiner 9 corresponding to the first touch reaction area, hereinafter referred to as the "second touch reaction area"). The second touch reaction area is obtained from the first touch reaction area in order to compensate for the difference between the virtual image visually recognized by the driver and the image formed on the combiner 9; this corresponds to performing a correction relating to the coordinates of the touch panel on the electrostatic sheet 9a (in other words, a calibration, hereinafter also referred to as "touch panel correction").

  Thereafter, based on the signal acquired from the electrostatic sheet 9a, the control unit 4 obtains the position on the combiner 9 at which the driver performed a touch operation (equivalently, the position on the electrostatic sheet 9a), and determines the driver's operation on the touch image by comparing that position with the second touch reaction area. In this case, when the position at which the touch operation was performed is included in the second touch reaction area, the control unit 4 determines that a touch operation was performed on the touch image and performs the predetermined operation associated with that touch image.
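
  A minimal sketch of this comparison step (not from the patent), in Python, assuming each second touch reaction area is stored as an axis-aligned rectangle in a coordinate system defined on the combiner; the region names, coordinate units, and numbers are hypothetical:

    from typing import Dict, Optional, Tuple

    # A second touch reaction area as (x_min, y_min, x_max, y_max) in combiner coordinates.
    Region = Tuple[float, float, float, float]

    def find_touched_region(touch_x: float, touch_y: float,
                            regions: Dict[str, Region]) -> Optional[str]:
        """Return the key of the second touch reaction area containing the touch, if any."""
        for name, (x_min, y_min, x_max, y_max) in regions.items():
            if x_min <= touch_x <= x_max and y_min <= touch_y <= y_max:
                return name
        return None

    # Hypothetical example: two buttons mapped onto the combiner surface.
    second_regions = {"button_a": (10.0, 5.0, 40.0, 20.0), "button_b": (60.0, 5.0, 90.0, 20.0)}
    print(find_touched_region(25.0, 12.0, second_regions))  # -> button_a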

  In the present embodiment, the control unit 4 corrects the shape of the image (original image) to be displayed so that the distortion of the virtual image visually recognized by the driver is corrected, and causes the light source unit 3 to project light constituting the corrected image (hereinafter referred to as the "corrected image"). For example, the control unit 4 performs various image corrections such as rotation correction and trapezoidal (keystone) correction. The control unit 4 then obtains the above-mentioned second touch reaction area based on the correction amount used for such image correction (hereinafter, the "image correction amount"). This is because the image formed on the combiner 9 corresponds to the corrected image. Furthermore, in the present embodiment, the control unit 4 obtains the second touch reaction area in consideration of not only the image correction amount but also the curvature of the combiner 9 (the curvature of the concave shape). This is because the image formed on the combiner 9 is the corrected image enlarged according to the curvature of the combiner 9.

  When the virtual image is not distorted, it is not necessary to correct the original image, that is, it is not necessary to generate a corrected image. In that case, the image correction amount need not be considered in obtaining the second touch reaction area. Similarly, when the combiner 9 has a planar shape instead of a concave shape (that is, when it has no curvature), the curvature of the combiner 9 need not be considered in obtaining the second touch reaction area.

  Next, the control method performed by the control unit 4 will be described more specifically with reference to FIGS. 2 and 3.

  FIG. 2 is a diagram for explaining image correction. FIG. 2A shows an example of the original image 70. The original image 70 includes two buttons 70a and 70b as touch images. In the present embodiment, the control unit 4 obtains an area corresponding to the buttons 70a and 70b in the original image 70 as the first touch reaction area.

  FIG. 2B shows an example of the virtual image 71 visually recognized when the uncorrected original image 70 is used. It can be seen that the virtual image 71 is curved in one direction compared to the original image 70, that is, the virtual image 71 is distorted. Such distortion is caused, for example, by the concave shape of the combiner 9, by the combiner 9 being inclined with respect to the light source unit 3 (that is, light from the light source unit 3 entering the combiner 9 obliquely), or by the driver's eye point being out of the proper position.

  FIG. 2C shows a corrected image 72 for correcting the distortion of the virtual image 71 shown in FIG. 2B. The corrected image 72 is obtained by curving the original image 70 in the direction opposite to the distortion appearing in the virtual image 71, and is the image projected by the light source unit 3. In one example, the control unit 4 generates the corrected image 72 in response to the driver's operation of an input device of the head-up display 2 (a switch, button, remote controller, or the like, not shown in FIG. 1). That is, in this example, the driver operates the input device until the distortion of the visually recognized virtual image is eliminated, and the control unit 4 generates the corrected image 72 according to that operation. In another example, an image corresponding to the virtual image visually recognized by the driver is captured by a camera, and the control unit 4 analyzes the captured image to generate a corrected image 72 that eliminates the distortion appearing in the virtual image.
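
  The patent does not specify how the corrected image is computed; purely as an illustration, a keystone-style pre-distortion plus a small rotation can be produced with OpenCV as below, where the corner inset and rotation angle stand in for correction parameters that would in practice be tuned through the driver's input device or derived from the camera analysis:

    import numpy as np
    import cv2  # OpenCV, used here only to illustrate one possible pre-distortion

    def make_corrected_image(original: np.ndarray, top_inset_px: float,
                             rotation_deg: float) -> np.ndarray:
        """Warp the original image opposite to the observed distortion (illustrative only)."""
        h, w = original.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        # Pull the top corners inward, i.e. curve the image opposite to the distortion.
        dst = np.float32([[top_inset_px, 0], [w - top_inset_px, 0], [w, h], [0, h]])
        keystone = cv2.getPerspectiveTransform(src, dst)
        warped = cv2.warpPerspective(original, keystone, (w, h))
        # Apply a small rotation correction about the image centre.
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), rotation_deg, 1.0)
        return cv2.warpAffine(warped, rot, (w, h))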

  FIG. 2D shows an example of the virtual image 73 visually recognized when the corrected image 72 shown in FIG. 2C is used. It can be seen that the distortion shown in FIG. 2B has been eliminated in the virtual image 73; that is, a virtual image 73 that substantially matches the original image 70 is visually recognized.

  FIG. 3 is a diagram for specifically explaining the method of obtaining the second touch reaction area. Here, as shown in FIG. 3A, a case is considered in which the combiner 9 (including the electrostatic sheet 9a) has a concave shape recessed toward the light source unit 3 with a predetermined curvature. FIG. 3B shows an example of an image 75 (virtual image) formed by the light projected from the light source unit 3 on a plane S1 facing the light source unit 3. The image 75 corresponds to the image projected by the light source unit 3. FIG. 3B illustrates the image 75 obtained when the corrected image 72 shown in FIG. 2C is used; this image 75 basically matches the corrected image 72 of FIG. 2C.

  FIG. 3C shows an image 76 formed by the light projected from the light source unit 3 on the surface (curved surface) S2 of the combiner 9 facing the light source unit 3. This image 76 corresponds to the virtual image visually recognized by the driver. As in FIG. 3B, FIG. 3C illustrates the image 76 obtained when the corrected image 72 shown in FIG. 2C is used. As shown in FIG. 3C, the image 76 formed on the surface S2 of the combiner 9 is an enlargement of the image 75 formed on the plane S1 shown in FIG. 3B; specifically, it is enlarged in the left-right direction. This is because the combiner 9 is configured in a concave shape having a curvature, as shown in FIG. 3A.

  In the present embodiment, the areas corresponding to the buttons 76a and 76b included in the image 76 formed on the surface S2 of the combiner 9 are obtained as the second touch reaction areas. Specifically, in this embodiment, the image 76 is obtained by enlarging, according to the curvature of the combiner 9, the corrected image 72 obtained by the procedure described with reference to FIG. 2, and the second touch reaction areas corresponding to the buttons 76a and 76b included in the image 76 are obtained. For example, the control unit 4 determines the second touch reaction areas by transforming the first touch reaction areas corresponding to the buttons 70a and 70b in the original image 70 based on the image correction amount used for the image correction and the curvature of the combiner 9. In this case, a coordinate system with a predetermined position on the combiner 9 as its origin is defined, and the control unit 4 obtains the position of each second touch reaction area in that coordinate system. The coordinate system defining the position of the second touch reaction area is preferably the same as the coordinate system used to determine, from the signal of the electrostatic sheet 9a, the position at which a touch operation was performed.
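
  The patent leaves the concrete transformation open; the sketch below shows one way the first touch reaction area could be mapped into the combiner coordinate system, assuming the image correction is represented by a 3x3 perspective matrix and the curvature is reduced to a single left-right enlargement factor about the combiner centre. The parameter names and the choice of an axis-aligned bounding rectangle are assumptions made only for illustration:

    import numpy as np
    import cv2

    def second_touch_region(first_region_px, correction_matrix, px_to_combiner,
                            horizontal_enlargement, combiner_center_x):
        """Map a first touch reaction area (pixel rectangle in the original image)
        to a second touch reaction area in combiner coordinates (illustrative)."""
        x0, y0, x1, y1 = first_region_px
        corners = np.float32([[[x0, y0]], [[x1, y0]], [[x1, y1]], [[x0, y1]]])
        # 1) Where the button corners land in the corrected image (image correction amount).
        corrected = cv2.perspectiveTransform(corners, correction_matrix).reshape(-1, 2)
        # 2) Convert pixels to combiner coordinates and apply the curvature-based
        #    left-right enlargement about the combiner centre.
        pts = corrected * px_to_combiner
        pts[:, 0] = combiner_center_x + (pts[:, 0] - combiner_center_x) * horizontal_enlargement
        # 3) Keep an axis-aligned bounding rectangle as the second touch reaction area.
        return (float(pts[:, 0].min()), float(pts[:, 1].min()),
                float(pts[:, 0].max()), float(pts[:, 1].max()))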

  Although FIG. 3 shows the image 76 formed on the surface S2 of the combiner 9 as enlarged in the left-right direction, depending on the curvature of the combiner 9 the image 76 may instead be enlarged in the vertical direction, or in both the horizontal and vertical directions.

[Processing flow]
Next, a processing flow according to the present embodiment will be described with reference to FIG. 4. This processing flow is repeatedly executed by the control unit 4 in the head-up display 2.

  First, in step S101, the control unit 4 generates the image (original image) to be displayed. Here, it is assumed that the control unit 4 generates an original image including a touch image. Note that the original image need not be generated by the control unit 4 itself; a device outside the head-up display 2 may generate the original image and the control unit 4 may acquire it. After step S101, the process proceeds to step S102.

  In step S102, the control unit 4 obtains a first touch reaction area corresponding to the touch image in the original image. When the original image includes a plurality of touch images, the control unit 4 obtains a first touch reaction area corresponding to each of the plurality of touch images. Then, the process proceeds to step S103.

  In step S103, the control unit 4 corrects the shape of the original image, for example by rotation correction and keystone correction, so that the distortion of the virtual image visually recognized by the driver is corrected. For example, the driver operates the input device of the head-up display 2 (switch, button, remote controller, or the like) until the distortion of the visually recognized virtual image is eliminated, and the control unit 4 corrects the original image according to that operation. Then, the process proceeds to step S104.

  In step S104, the control unit 4 acquires the curvature of the combiner 9 (the curvature of the concave shape) stored in a memory or the like in the head-up display 2. Instead of the curvature of the combiner 9 itself, the control unit 4 may acquire the degree to which the image is enlarged according to the curvature of the combiner 9 (for example, the degree of enlargement in the horizontal and/or vertical direction) or the image size after enlargement. All of these are information related to the curvature of the combiner. After step S104, the process proceeds to step S105.

  In step S105, the control unit 4 obtains the second touch reaction areas by transforming the first touch reaction areas obtained in step S102 based on the image correction amount of step S103 and the curvature of the combiner 9 acquired in step S104. That is, the control unit 4 obtains the second touch reaction areas corresponding to the first touch reaction areas by taking into account that the corrected image obtained by correcting the original image is enlarged according to the curvature when it is formed on the combiner 9. In this case, the control unit 4 obtains the position of each second touch reaction area in a coordinate system with a predetermined position on the combiner 9 as its origin. In one example, the control unit 4 obtains the second touch reaction area corresponding to a first touch reaction area from the image correction amount and the curvature of the combiner 9 using a predetermined arithmetic expression. In another example, a table associating the image correction amount and the curvature of the combiner 9 with a parameter for obtaining the second touch reaction area from the first touch reaction area (that is, the correction amount used for touch panel correction) is prepared in advance, and the control unit 4 refers to this table to obtain the second touch reaction area corresponding to the first touch reaction area. When a plurality of first touch reaction areas were obtained in step S102 (that is, when the original image includes a plurality of touch images), the control unit 4 determines a second touch reaction area corresponding to each of them. The control unit 4 stores the second touch reaction areas obtained in this way in the RAM, for example in units of image pixels. Thereafter, the process proceeds to step S106.

  In step S106, the control unit 4 determines whether or not the combiner 9 has been touched. In this case, the control unit 4 makes the determination according to whether a signal from the electrostatic sheet 9a has been acquired. If the combiner has been touched (step S106: Yes), the process proceeds to step S107; if not (step S106: No), the process ends.

  In step S107, the control unit 4 obtains the touched position on the combiner 9 based on the signal acquired from the electrostatic sheet 9a, and determines whether that position is included in any of the second touch reaction areas stored in the RAM or the like in step S105. In this case, the control unit 4 compares the touched position with the second touch reaction areas using the coordinate system having a predetermined position on the combiner 9 as its origin. When the touched position is included in a second touch reaction area (step S107: Yes), the process proceeds to step S108; when a plurality of second touch reaction areas were obtained in step S105 (that is, when the original image includes a plurality of touch images), the control unit 4 also identifies which second touch reaction area was touched. On the other hand, when the touched position is not included in any second touch reaction area (step S107: No), the process ends.

  In step S108, the control unit 4 executes the operation corresponding to the touched second touch reaction area. That is, the control unit 4 executes the predetermined operation associated with the touch image corresponding to the touched second touch reaction area. In this case, for example, the control unit 4 outputs a control signal for controlling a component in the head-up display 2, or outputs a control signal for controlling a device outside the head-up display 2. The process then ends.
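
  For orientation only, steps S101 to S108 can be summarized as the following skeleton; the hud object and all of its methods are hypothetical placeholders for processing the patent describes only in prose, and only the ordering of the steps is taken from the flow:

    def run_display_cycle(hud):
        original, touch_images = hud.generate_original_image()            # S101
        first_regions = hud.first_touch_regions(original, touch_images)   # S102
        corrected, correction_amount = hud.correct_image_shape(original)  # S103
        curvature_info = hud.combiner_curvature_info()                    # S104
        second_regions = hud.second_touch_regions(                        # S105
            first_regions, correction_amount, curvature_info)
        hud.project(corrected)

        touch = hud.read_electrostatic_sheet()                            # S106
        if touch is None:
            return                                                        # no touch: end
        region_id = hud.find_touched_region(touch, second_regions)        # S107
        if region_id is not None:
            hud.execute_operation(region_id)                              # S108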

[Operation and effect of this embodiment]
As described above, according to the present embodiment, the electrostatic sheet 9a is provided on the combiner 9, which is disposed closer to the driver than the windshield used in, for example, the technique described in Patent Document 1. The driver can therefore easily perform a touch operation, and safety while driving can be ensured during the touch operation.

  Further, according to the present embodiment, the touch operation can be determined with high accuracy by using the second touch reaction area obtained in consideration of the correction of the shape of the image and the curvature of the combiner 9 (that is, by performing touch panel correction).

  Furthermore, according to the present embodiment, the touch panel correction is completed automatically when the driver merely performs the operation for correcting the shape of the image. That is, when touch panel correction is performed, no operation other than the image correction operation needs to be newly imposed on the driver.

[Modification]
Below, modifications suitable for the above-described embodiment are described. The following modifications can be applied to the above-described embodiment in any combination.

(Modification 1)
As described above, the second touch reaction area is obtained based on the image correction amount and the curvature of the combiner 9. That is, the optimal correction amount for touch panel correction (hereinafter referred to as the "touch panel correction amount" as appropriate) is obtained based on the image correction amount and the curvature of the combiner 9. Since the curvature of the combiner 9 is constant, the touch panel correction amount can basically be obtained once the image correction amount is determined. Therefore, in another example, a table associating the image correction amount with the touch panel correction amount is created in advance, and the control unit 4 obtains the touch panel correction amount by referring to this table.

  Further, since the image correction amount is generally determined by the driver's eye point (that is, the image correction amount tends to be substantially constant for the same eye point), a table associating eye points with touch panel correction amounts can be created by obtaining the image correction amount for each driver's eye point. Therefore, in yet another example, the control unit 4 obtains the touch panel correction amount by referring to such a table. In this way, both image correction and touch panel correction can be performed appropriately merely by inputting information identifying the driver (for example, an ID) into the head-up display 2. Instead of a table associating eye points with touch panel correction amounts, a table associating drivers with touch panel correction amounts may be used.

  Further, as described above, the combiner 9 is configured so that its tilt angle can be adjusted, and the image correction amount is generally determined by the tilt angle of the combiner 9 (that is, the image correction amount is substantially constant for the same tilt angle). Therefore, a table associating tilt angles with touch panel correction amounts can be created by obtaining the image correction amount for each tilt angle of the combiner 9. In yet another example, the control unit 4 obtains the touch panel correction amount by referring to such a table. In this way, both image correction and touch panel correction can be performed appropriately in accordance with the driver's adjustment of the tilt angle. The tilt angle of the combiner 9 can be obtained by providing an angle sensor or the like capable of detecting it.

  In still another example, the touch panel correction amount can be obtained using a table associating the eye point, the tilt angle, and the touch panel correction amount with one another. This example takes into account that different tilt angles may be set even for the same eye point, which makes it possible to perform both image correction and touch panel correction with higher accuracy.
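
  Purely as an illustrative sketch of such a table (the patent specifies no format), the touch panel correction amount is reduced here to a single enlargement factor keyed by a driver or eye-point identifier and the combiner tilt angle rounded to the nearest degree; all keys and values are invented for the example:

    # Hypothetical table: (eye point id, tilt angle in degrees) -> touch panel correction amount.
    TOUCH_PANEL_CORRECTION = {
        ("driver_A", 20): 1.12,
        ("driver_A", 25): 1.18,
        ("driver_B", 20): 1.10,
    }

    def touch_panel_correction(eye_point_id: str, tilt_angle_deg: float,
                               default: float = 1.0) -> float:
        """Look up the touch panel correction amount; fall back to a default if absent."""
        key = (eye_point_id, int(round(tilt_angle_deg)))
        return TOUCH_PANEL_CORRECTION.get(key, default)

    print(touch_panel_correction("driver_A", 24.6))  # -> 1.18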

(Modification 2)
In the above-described embodiment, the present invention is applied to the head-up display 2, but the present invention is not limited to this. The present invention can also be applied to a system including the head-up display 2 and a navigation device capable of communicating with the head-up display 2. In that case, the system including the head-up display 2 and the navigation device corresponds to an example of the "display device" in the present invention.

  FIG. 5 is a block diagram illustrating the schematic configuration of the system according to Modification 2. As shown in FIG. 5, the system includes a navigation device 100 and the head-up display 2. The navigation device 100 is configured to be able to communicate with the head-up display 2 (by either wireless or wired communication), and includes a CPU 100a and the like. For example, the navigation device 100 may be a stationary navigation device installed in a vehicle, a portable navigation device (PND), or a portable terminal such as a smartphone. The CPU 100a in the navigation device 100 performs, for example, route guidance from a departure place to a destination. The head-up display 2 has a configuration similar to that shown in FIG. 1.

  In Modification 2, the CPU 100a in the navigation device 100 generates the image to be displayed including a touch image, obtains the first touch reaction area corresponding to the touch image in the image, and obtains the second touch reaction area, formed on the combiner 9, corresponding to the first touch reaction area. More specifically, the CPU 100a corrects the shape of the image to be displayed (original image) and obtains the second touch reaction area based on the image correction amount and the curvature of the combiner 9. In this case, the CPU 100a generates the corrected image and supplies it to the head-up display 2. The CPU 100a then acquires the signal from the electrostatic sheet 9a of the head-up display 2, obtains from that signal the position on the combiner 9 at which the touch operation was performed, and determines the driver's operation on the touch image by comparing that position with the second touch reaction area. Thus, in Modification 2, the CPU 100a in the navigation device 100 functions as the "operation acquisition unit", "determination unit", and "correction control unit" in the present invention.

  In the above description, the CPU 100a in the navigation device 100 performs the image correction, but the control unit 4 in the head-up display 2 may perform the image correction instead. In that case, the control unit 4 supplies information on the image correction amount used for the image correction to the navigation device 100, and the CPU 100a obtains the second touch reaction area based on that image correction amount. In this example, the control unit 4 in the head-up display 2 functions as the "correction control means" in the present invention, and the CPU 100a in the navigation device 100 functions as the "operation acquisition means" and "determination means" in the present invention.

(Modification 3)
Although an example in which the present invention is applied to a vehicle has been shown above, the application of the present invention is not limited to this. The present invention can be applied to various moving bodies such as ships, helicopters, and airplanes in addition to vehicles.

  The present invention can be applied to a head-up display, a navigation device (including a mobile phone such as a smartphone), and the like.

2 Head-up display, 3 Light source unit, 4 Control unit, 9 Combiner, 9a Electrostatic sheet, 100 Navigation device

Claims (3)

  1. A display device comprising:
    projection means for projecting light constituting an image;
    a combiner that has a curved surface with a predetermined curvature onto which the light of the projection means is projected, reflects the light at the curved surface, and causes a user to visually recognize the image as a virtual image;
    contact position acquisition means, provided along the curved surface of the combiner, for acquiring a position that the user has contacted for an operation related to the image; and
    determination means for determining the operation of the user based on the position acquired by the contact position acquisition means and information related to the curvature.
  2. The display device according to claim 1, wherein the determination means calculates, based on the information related to the curvature, a second area on the curved surface of the combiner corresponding to a first area of the image, and determines that an operation corresponding to the first area of the image has been performed if the position acquired by the contact position acquisition means is included in the second area.
  3. The display device according to claim 1 or 2, wherein the contact position acquisition means is provided on the surface of the combiner opposite to the curved surface onto which the light of the projection means is projected.
JP2014537959A 2012-09-27 2012-09-27 display device Active JP5813243B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/074943 WO2014049787A1 (en) 2012-09-27 2012-09-27 Display device, display method, program, and recording medium

Publications (2)

Publication Number Publication Date
JP5813243B2 true JP5813243B2 (en) 2015-11-17
JPWO2014049787A1 JPWO2014049787A1 (en) 2016-08-22

Family

ID=50387252

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014537959A Active JP5813243B2 (en) 2012-09-27 2012-09-27 display device

Country Status (2)

Country Link
JP (1) JP5813243B2 (en)
WO (1) WO2014049787A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105530365A (en) * 2014-12-26 2016-04-27 比亚迪股份有限公司 Vehicle-mounted telephone system and vehicle containing same
JP6512080B2 (en) * 2015-11-27 2019-05-15 株式会社デンソー Display correction device
JP2019090964A (en) * 2017-11-16 2019-06-13 株式会社デンソー Virtual image display system, virtual image display device, operation input device, virtual image display method, and program

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07307775A (en) * 1994-05-13 1995-11-21 Nec Corp Automobile telephone system with head-up display
JPH1021007A (en) * 1996-07-02 1998-01-23 Hitachi Eng Co Ltd Touch position image projecting method for front projection type touch panel, and front projection type touch panel system
JP2000241748A (en) * 1999-02-23 2000-09-08 Asahi Glass Co Ltd Information display device
JP2001125740A (en) * 1999-10-29 2001-05-11 Seiko Epson Corp Pointing position detector, image display device, presentation system and information storage medium
US20020140633A1 (en) * 2000-02-03 2002-10-03 Canesta, Inc. Method and system to present immersion virtual simulations using three-dimensional measurement
US6654070B1 (en) * 2001-03-23 2003-11-25 Michael Edward Rofe Interactive heads up display (IHUD)
JP2006065092A (en) * 2004-08-27 2006-03-09 Denso Corp Head-up display
JP2007310285A (en) * 2006-05-22 2007-11-29 Denso Corp Display device
US20100117661A1 (en) * 2007-08-15 2010-05-13 Frederick Johannes Bruwer Grid touch position determination
EP2194418A1 (en) * 2008-12-02 2010-06-09 Saab Ab Head-up display for night vision goggles
JP2011512575A (en) * 2008-01-25 2011-04-21 マイクロソフト コーポレーション Projecting graphic objects on interactive uneven displays
US8089568B1 (en) * 2009-10-02 2012-01-03 Rockwell Collins, Inc. Method of and system for providing a head up display (HUD)
JP2012058688A (en) * 2010-09-13 2012-03-22 Yazaki Corp Head-up display
JP4907744B1 (en) * 2010-09-15 2012-04-04 パイオニア株式会社 Display device
JP2012071825A (en) * 2011-10-13 2012-04-12 Pioneer Electronic Corp Head-up display, and mounting method of the head-up display
JP4928014B1 (en) * 2011-02-28 2012-05-09 パイオニア株式会社 Display device
JP2012123252A (en) * 2010-12-09 2012-06-28 Nikon Corp Image display apparatus


Also Published As

Publication number Publication date
JPWO2014049787A1 (en) 2016-08-22
WO2014049787A1 (en) 2014-04-03

Similar Documents

Publication Publication Date Title
US10269226B1 (en) Systems and methods for monitoring a vehicle operator and for monitoring an operating environment within the vehicle
EP3061642B1 (en) Vehicle information projection system, and projection device
US8693103B2 (en) Display device and display method
US9008904B2 (en) Graphical vehicle command system for autonomous vehicles on full windshield head-up display
US9933692B2 (en) Head-up display device
US9001153B2 (en) System and apparatus for augmented reality display and controls
JP6214752B2 (en) Display control device, display control method for display control device, gaze direction detection system, and calibration control method for gaze direction detection system
JP6149543B2 (en) Head-up display device
JP4886751B2 (en) In-vehicle display system and display method
US8924150B2 (en) Vehicle operation and control system for autonomous vehicles on full windshield display
JP4412365B2 (en) Driving support method and driving support device
JP5158063B2 (en) Vehicle display device
US9471151B2 (en) Display and method capable of moving image
KR101123738B1 (en) System and method for monitoring safe operation of heavy machinery
JPWO2014174575A1 (en) Head-up display device for vehicle
US8094190B2 (en) Driving support method and apparatus
US10310264B2 (en) Virtual image display device
US8536995B2 (en) Information display apparatus and information display method
JP4366716B2 (en) Vehicle information display device
EP2441635B1 (en) Vehicle User Interface System
DE102013210746A1 (en) System and method for monitoring and / or operating a technical system, in particular a vehicle
JP6201690B2 (en) Vehicle information projection system
EP2891953A1 (en) Eye vergence detection on a display
WO2015064080A1 (en) Gaze direction-detecting device and gaze direction-detecting method
WO2010109941A1 (en) Vehicluar display system, method of displaying and vehicle

Legal Events

Date Code Title Description
TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20150825

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150915

R150 Certificate of patent or registration of utility model

Ref document number: 5813243

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150