WO2017061026A1 - Image display device - Google Patents

Image display device

Info

Publication number
WO2017061026A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
display device
light
image display
Application number
PCT/JP2015/078728
Other languages
French (fr)
Japanese (ja)
Inventor
裕己 永野
真希 花田
壮太 佐藤
Original Assignee
Hitachi Maxell, Ltd. (日立マクセル株式会社)
Application filed by Hitachi Maxell, Ltd. (日立マクセル株式会社)
Priority to PCT/JP2015/078728 (WO2017061026A1)
Priority to JP2017544149A (JP6637986B2)
Publication of WO2017061026A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays

Definitions

  • Hereinafter, the distortion correction processing S209 of FIGS. 2, 7, and 8 and the distortion correction modules of FIGS. 4, 6, and 9 are described. The distortion correction described here is not strictly required; it is an optional feature.
  • FIG. 10 is a conceptual diagram explaining the purpose of the distortion correction, showing the positional relationship between the user's eye 106 and the virtual image 107 and how the virtual image appears to the user. As shown in FIG. 10, when the position (viewpoint) of the eye 106 moves up or down relative to the virtual image 107, the appearance 1001 of the virtual image is distorted.
  • Such a phenomenon occurs in Embodiments 1 to 4 because changing the mirror angle (S204) changes the optical path from the light source to the eye, and thereby the shape of the image. In particular, the windshield of an automobile has a complicated curved shape that varies by vehicle type, so changing the mirror angle changes the incident position and angle of the light on the windshield 102 in a complicated manner. The appearance of the image therefore differs (that is, the image is distorted) as the mirror angle changes.
  • Japanese Patent Application Laid-Open No. 2004-228867 discloses a technique for distorting the image to be projected in advance so as to cancel such image distortion. In this embodiment, the positions of the light source, the virtual image, and the eye are uniquely determined by the step of setting the mirror to the designated angle (S208), so the projected image is distorted in advance so that an image without distortion is seen.
  • How to distort the original image can be determined experimentally on a real vehicle. For example, the step of setting the mirror to a designated angle (S208) is performed for a plurality of viewpoints, and a perfect-circle image is projected at each position; if the image looks distorted, a conversion of the original image that makes it look like a perfect circle is derived. The conversion rule (for example, a function) is stored as the distortion correction information 503 for each mirror angle, as shown in FIG. 5; alternatively, the converted image itself may be stored (a sketch of such a per-angle correction follows this list).
  • Functions equivalent to those configured in software can also be realized in hardware such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). Such embodiments are also included in the scope of the present invention.
  • The present invention is not limited to the embodiments described above and includes various modifications. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • The present invention is applicable to a HUD used to display information in automobiles and the like.
  • Reference numerals: 101: driver (or occupant); 102: windshield; 103: light for projecting image information; 104: folding mirror; 105: concave mirror; 106: eye; 107: virtual image.
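As referenced above, the conversion rule stored for each mirror angle can be applied as a pre-warp of the source image before projection. The following Python sketch illustrates the idea; the polynomial form of the warp and the coefficient values are assumptions for illustration, standing in for correction data measured on a real vehicle.

    # Hypothetical sketch of applying the per-angle distortion correction 503
    # (step S209): pre-warp the image so the virtual image appears undistorted.
    import numpy as np

    # Illustrative per-angle coefficients, as if measured experimentally (FIG. 5).
    CORRECTION_503 = {10.0: (0.98, 0.01), 12.5: (1.00, 0.00), 15.0: (1.02, -0.01)}

    def predistort(image: np.ndarray, mirror_angle: float) -> np.ndarray:
        """Remap rows by y_src = a*y + b*y^2 (normalized coordinates), cancelling
        the vertical distortion the windshield/mirror path would otherwise add."""
        a, b = CORRECTION_503[mirror_angle]
        h = image.shape[0]
        out = np.zeros_like(image)
        for y in range(h):
            yn = y / (h - 1)                         # normalized row position
            sy = int(round((a * yn + b * yn * yn) * (h - 1)))
            out[y] = image[min(h - 1, max(0, sy))]   # nearest-row remap (sketch)
        return out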

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Instrument Panels (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed is an image display device that emits light for projecting an image and displays the image by forming a virtual image. The image display device is provided with a control module that sets the angle at which the light is emitted to any of a plurality of conditions, and a viewpoint position detection module that selects one condition from among the plurality of conditions, the image display device displaying the image by emitting the light under the condition thus selected.

Description

Image display device
The present invention relates to a so-called head-up display (hereinafter HUD: Head Up Display) that is particularly suitable for being mounted on an aircraft, a vehicle, or the like and displaying various kinds of information.

In recent years, technology for displaying image information superimposed on real space has been attracting attention in the fields of entertainment and work support systems. One example is a display device that generates a virtual image by reflecting an optically generated image toward the user with an optical branching element or the like, so that the user sees the virtual image overlaid on real space (see, for example, Patent Document 1).

Such technology is being put into practical use as a device for displaying various types of information to, for example, automobile occupants.
Patent Document 1: JP 2014-225017 A
Patent Document 2: JP 2011-105306 A
With the technique described in the background art, the appearance of the image information varies with the position (viewpoint) of the viewer's eyes, for example their height (the viewer being an occupant in the case of an automobile). That is, the state of the image formed on the occupant's retina is determined by the relationship between the position of the virtual image formed by the HUD and the position of the occupant's eyes, and when these positions do not satisfy a predetermined relationship, the occupant may be unable to see the image information at all. The range within which the position of the occupant's eyes must fall in order to view the image is called an eyebox.

Patent Document 1 proposes a device having a mechanism for moving the optical system so as to adjust the projection of the image information to the occupant's viewpoint, allowing the occupant to see the image information regardless of eye height.

Patent Document 2 discloses a technique that allows a virtual image projected on a windshield to flexibly accommodate variations in the occupant's viewpoint and differences in windshield shape.

However, considering the environment in which an automobile or the like is used, if the image information is not visible at the moment an occupant gets into the automobile and sits in the driver's seat, the occupant cannot even recognize that the automobile is equipped with a HUD in the first place. Even if the presence of the HUD can be recognized by some means, an adjustment to position one's eyes in the eyebox is needed before the HUD can be used, and aligning the virtual image with the viewpoint after positioning the eyes in the eyebox is also cumbersome.

An object of the present invention is to provide a system in which a HUD user can easily use the HUD with simpler operations.
One aspect of the invention that solves the above problems is an image display device that emits light for projecting an image and displays the image by forming a virtual image, the device comprising a control module that sets the angle at which the light is emitted to any of a plurality of conditions, and a viewpoint position detection module that selects one of the plurality of conditions, the device displaying the image by emitting the light under the selected condition.

In a specific configuration example, the device further comprises a video processing module. The video processing module displays a different image for each of the plurality of conditions, and the viewpoint position detection module selects the condition corresponding to a designated image when the user designates a specific one of the different images.

In another specific configuration example, the viewpoint position detection module selects the condition corresponding to a timing when the user designates the timing at which a specific one of the plurality of conditions is set.

In another specific configuration example, the device further comprises a camera that acquires an image of the user's eye, and the viewpoint position detection module selects the condition corresponding to the timing at which the user's pupil contracts, as detected from the image of the user's eye.

In another specific configuration example, the device further comprises a camera that acquires an image of the user's eye, and the viewpoint position detection module detects, from the image of the user's eye, the timing at which the user's pupil dilates and selects the condition immediately preceding the condition corresponding to that timing.

In another specific configuration example, the device further comprises a distortion correction module, and the setting of the image distortion correction is changed in accordance with the condition of the angle at which the light is emitted.

According to the present invention, a system can be provided that allows a HUD user to easily use the HUD with simpler operations.
Brief description of the drawings: FIG. 1 is a conceptual diagram explaining an embodiment of the present invention; FIG. 2 is a flowchart explaining Embodiment 1; FIG. 3 is a system block diagram explaining Embodiment 1; FIG. 4 is a block diagram explaining Embodiment 1; FIG. 5 is a table explaining data used in Embodiment 1; FIG. 6 is a block diagram explaining Embodiment 2; FIG. 7 is a flowchart explaining Embodiment 3; FIG. 8 is a flowchart explaining Embodiment 4; FIG. 9 is a block diagram explaining Embodiment 4; FIG. 10 is a conceptual diagram explaining Embodiment 5.
Embodiments will now be described in detail with reference to the drawings. However, the present invention is not to be construed as limited to the description of the embodiments below; those skilled in the art will readily understand that the specific configuration can be changed without departing from the spirit of the present invention.

In the configurations of the invention described below, the same reference numerals are used across different drawings for identical portions or portions with similar functions, and redundant description may be omitted.

In this specification and the like, notations such as 'first', 'second', and 'third' are attached to identify constituent elements and do not necessarily limit their number or order. A number used to identify a component applies within its context, and a number used in one context does not necessarily denote the same configuration in another context. Further, a component identified by one number is not precluded from also serving the function of a component identified by another number.

The position, size, shape, range, and the like of each component shown in the drawings may not represent the actual position, size, shape, range, and the like, in order to make the invention easier to understand. The present invention is therefore not necessarily limited to the positions, sizes, shapes, ranges, and the like disclosed in the drawings.
An embodiment of the present invention is described with reference to FIG. 1. FIG. 1 is a schematic diagram explaining a situation in which a driver (or occupant) 101 seated in the driver's seat of an automobile views image information. Reference numeral 102 denotes the windshield of the automobile. Light 103 for projecting image information, emitted from a light source (not shown), is directed onto the windshield 102 by reflection off a first mirror 104 (for example a folding mirror, the term used hereinafter) and a second mirror 105 (for example a concave mirror, the term used hereinafter). In addition to the folding mirror 104, the concave mirror 105, and the light source, the HUD includes a transmissive liquid crystal panel that displays the image to be projected, lenses, and other optical elements; description of the known parts is omitted. The transmissive liquid crystal panel may be replaced with a reflective liquid crystal panel, a DMD (Digital Micromirror Device, a registered trademark) panel, a MEMS (Micro Electro Mechanical Systems) device, or the like.

The light 103 is reflected by the windshield 102, enters the eye 106 of the occupant 101, and forms an image on the retina, whereby the occupant 101 can see the image information. At this time, the occupant 101 is looking at a virtual image 107 on the far side of the windshield 102.

The eye position (height) of the occupant 101 may differ with sitting height and posture, as shown by Person A, Person B, and Person C in FIG. 1. In that case, depending on the position (height) of the formed virtual image 107, the occupant 101 may be unable to see the image information.

An enlarged view of the virtual image 107 is shown at the top of FIG. 1. The size of the virtual image formed by the HUD is that of the rectangle 108a drawn with a solid line, and the rectangles 108b and 108c drawn with dotted lines indicate states in which the HUD has moved the position of the virtual image. That is, the virtual image (image information) appearing at any one moment is indicated by one of the rectangles 108a, 108b, and 108c. In the example of FIG. 1, the projected image is switched among the circled numbers 1, 2, and 3 as the virtual image moves.

When the virtual images are projected in order, as in 108a, 108b, and 108c, occupant A (Person A) can see the circled number 2 in the image information of FIG. 1 but cannot see the image information of numbers 1 and 3. Occupant B (Person B) can see the image information of number 3 in FIG. 1 but cannot see the image information of numbers 2 and 4.

That is, when the position of the virtual image 107 is fixed at 108b, for example, occupant B (Person B) can see the image information, but occupant A (Person A) and occupant C (Person C) cannot.

Here, the position of the virtual image 107 can be moved by, for example, changing the angle of the concave mirror 105. For example, assuming a mirror surface with a rotation axis perpendicular to the plane of the drawing, rotating that axis moves the imaging position of the virtual image 107 in the vertical direction.

In the following, embodiments are described in which any occupant can recognize the HUD image information and set the HUD to an appropriate state with an easy operation.
FIG. 2 shows the control flow of the system of an embodiment of the present invention. The description here assumes the initial setting of the HUD immediately after an occupant gets into the automobile.

First, the system starts the initial setting (S201). The setting may be started by the occupant actively entering a command, or triggered, for example, by the occupant sitting down and fastening the seat belt. Since it is desirable to perform the initial setting after the occupant's posture (eye position) has stabilized, starting after the seat belt is fastened is preferable. To help stabilize the eye position, it is also desirable to call attention at the start of the setting with voice guidance such as 'The initial setting of the HUD will now begin.'

Next, the concave mirror 105 is moved to its home position (S202). The home position is arbitrary, but since it is desirable in the subsequent operation to drive the concave mirror 105 over its entire movable range, the upper or lower limit of the movable range is preferable. In FIG. 2, however, the home position is assumed to lie at an arbitrary position within the movable range, and such a form is also feasible.

Next, the system displays an identification image corresponding to the current mirror angle (S203). The image may be anything, such as an icon or an ID number; for example, the numbers described with FIG. 1 may be used. After the display, the occupant (user) is asked to input information specifying the identification image that he or she saw. As an input method, the user can speak the identification information that was visible, such as 'number 2', which is input to the system by voice recognition. Alternatively, the user's gesture may be recognized by an in-vehicle camera, or the input may come from an input means such as a touch panel. Alternatively, the user may be asked to press a button at the moment the identification image becomes visible; this method forces a more cumbersome operation on the user, but has the advantage that the identification image need not be changed.

To have the user perform the predetermined operation as described above, it is desirable to give an instruction by voice guidance or the like, such as 'Please enter the number that appeared on the windshield.'

After the angle change, the presence or absence of a user operation is detected (S206).

If there has been a user operation during the setting so far, it is determined whether the operation designates a mirror angle (S207). That is, when the user inputs identification information such as 'number 2', the system can recognize that the mirror angle in effect while the identification information 'number 2' was displayed, that is, the projection position of the virtual image, has been designated.

If no mirror angle has been designated by the user, the system changes the angle of the concave mirror 105 (S204). The amount of the angle change is arbitrary: a smaller change allows a setting better suited to the occupant's sitting height and posture but takes longer, and a larger change has the opposite effect.

After the mirror angle is changed, it is determined whether the mirror angle has reached the upper or lower limit of the movable range (S205). Until a limit is reached, the display of the image corresponding to the mirror angle, the change of the mirror angle, and the detection of user operations continue (S203 to S206).

If a mirror angle has been designated in step S207, the system sets the designated mirror angle as the fixed position for that user (S208). If necessary, distortion correction corresponding to the mirror angle is also set (S209); the distortion correction is described later. Thereafter, normal HUD operation is executed at that mirror angle (S210).

If the determination of whether the angle has reached the upper or lower limit of the movable range (S205) finds that a limit has been reached, the direction of the angle change of the concave mirror 105 is reversed (S211), and the display of the identification image (S203) is carried out over the entire movable range of the mirror.

If the user's designation of a mirror angle cannot be recognized even after the identification image has been displayed over the entire movable range, the change (increase/decrease) of the mirror angle continues through the determination in S205. Alternatively, the process may return to S201 and restart from the beginning, a warning such as 'Recognition failed' may be issued, or the mirror angle may be returned to the home position and HUD operation performed.
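The flow from S201 to S211 amounts to a sweep-and-confirm loop over the mirror's angle range. The following Python sketch shows one way such a loop could be structured; the interfaces (mirror, display, voice) and the data layout are illustrative assumptions, not part of the patent.

    # Minimal sketch of the FIG. 2 initial-setting loop (S201-S211), assuming
    # mirror, display and voice stand in for the real hardware interfaces.

    def initial_setup(mirror, display, voice, angles, id_images, home_index=0):
        """Sweep the concave mirror over its range, showing a distinct
        identification image per angle, until the user names the image
        visible from their eye position."""
        i, step = home_index, 1
        mirror.set_angle(angles[i])              # S202: move to home position
        while True:
            display.show(id_images[i])           # S203: image for this angle
            seen = voice.poll()                  # S206: user operation?
            if seen in id_images:                # S207: mirror angle designated
                i = id_images.index(seen)
                mirror.set_angle(angles[i])      # S208: fixed position for user
                return angles[i]                 # S209/S210 follow
            if not 0 <= i + step < len(angles):  # S205: range limit reached?
                step = -step                     # S211: reverse sweep direction
            i += step
            mirror.set_angle(angles[i])          # S204: change the mirror angle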
FIG. 3 is an overall configuration diagram of the system of this embodiment, showing an image of the HUD mounted in an automobile. The same components as in FIG. 1 carry the same reference numerals, and their description is omitted. The HUD 301 contains, for example, the folding mirror 104 and the concave mirror 105. The configuration of the HUD 301 itself may be the same as a known system configuration such as that of Patent Document 1, but in this embodiment the projection position of the virtual image 107 is controllable under the control of an information processing apparatus 302.

The information processing apparatus 302 can be configured as a microcomputer comprising an input device 303, an output device 304, a processing device 305, and a storage device 306. The microcomputer may be dedicated to HUD control or shared with, for example, the car stereo or engine control. The input device 303 may include a known voice input device, keyboard, or touch panel (not shown). The output device 304 includes an interface for controlling the HUD 301; the control content includes the image signal of the image to be displayed on the HUD and a control signal for adjusting the angle of the concave mirror 105. The output device 304 may also include a voice output interface for giving the occupant spoken instructions, and a display device other than the HUD.

FIG. 4 is a block diagram explaining in detail the part of the configuration of FIG. 3 that executes the initial setting. In the example of FIG. 4, the HUD control functions are realized by the processing device 305 executing a program stored in the storage device 306 of the information processing apparatus 302, so that the prescribed processing is carried out in cooperation with the other hardware. A program executed by the processing device 305 or the like, or the means that realizes its functions, may be referred to as a 'function', 'means', 'unit', 'module', and so on.

The program held in the storage device 306 includes a viewpoint position detection module 401, a voice recognition module 402 that forms part of the viewpoint position detection module 401, a video processing module 403, a distortion correction module 404, and a mirror control module 405. The input device 303 includes a microphone 407, and the HUD 301 includes a video display unit 406 and a mirror unit 407.

In the example of FIG. 4, after the initial setting starts, the viewpoint position detection module 401 instructs the video processing module 403 and the mirror control module 405: while the mirror control module 405 drives the concave mirror 105, the video processing module 403 outputs a display (number 1, number 2, number 3, and so on) corresponding to the mirror position. When the occupant speaks into the microphone 407 the number shown at the display position optimal for them, the voice recognition module 402 recognizes the input number, the mirror control module 405 adjusts the display position to the mirror angle corresponding to the recognized number, and the initial setting is complete.

In this example, the mirror control module 405 adjusts the angle of the concave mirror 105, but another optical element such as a lens or prism may be driven instead, as long as the position of the virtual image can be changed. The video display unit 406 is composed of a light source, a liquid crystal panel for displaying video, and the like; since known components may be used, details are omitted. The distortion correction module 404 is described later.
FIG. 5 is a table showing the contents of the data stored in the storage device 306. For each angle 501 of the concave mirror 105, the table stores the identification image 502 projected in initial-setting step S203, distortion correction information 503 (described later), and a user ID 504. When a mirror angle has been designated by the user in the initial setting of FIG. 2 (S207), it is convenient to reuse the same setting for the same user without running the initial setting again; the user's ID is therefore registered in the step of setting the mirror to the designated angle (S208). For registration, the user may enter his or her ID from the input device 303, such as a touch panel, or an ID tied to the engine key may be stored automatically. The entry 'HP' in the user ID column means the home position and indicates the angle set for a user with no registered ID.

A magnetic disk device or a nonvolatile semiconductor memory can be used as the storage device 306.
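The FIG. 5 table is, in effect, a small record store keyed by mirror angle. The following is a minimal sketch of such a structure; the field names, types, and sample values are assumptions for illustration only.

    # Hypothetical layout of the FIG. 5 data: one record per concave-mirror angle.
    from dataclasses import dataclass, field

    @dataclass
    class AngleRecord:
        id_image: str                                 # 502: image shown in S203
        distortion: tuple                             # 503: correction info
        user_ids: list = field(default_factory=list)  # 504: users fixed here

    # Keyed by mirror angle 501; "HP" marks the home-position default entry.
    angle_table = {
        10.0: AngleRecord("1", (0.98, 0.01), ["HP"]),
        12.5: AngleRecord("2", (1.00, 0.00), ["user_A"]),
        15.0: AngleRecord("3", (1.02, -0.01)),
    }

    def angle_for_user(user_id):
        """Reuse a registered user's angle (S208) instead of re-running the
        initial setting; unregistered users get the home-position entry."""
        for angle, rec in angle_table.items():
            if user_id in rec.user_ids:
                return angle
        return next(a for a, r in angle_table.items() if "HP" in r.user_ids)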
Embodiment 2 is described with reference to FIG. 6. The same components as in FIG. 4 carry the same reference numerals, and their description is omitted. In this example, after the initial setting starts, a fixed image is output while the mirror 105 is driven. When the display reaches the position optimal for the user, the user presses a hard switch 601 that is part of the input device 303. The switch input is detected by a switch detection module 602, mirror control is stopped at that timing, and the display position is fixed. The switch 601 may be a push switch or a tact switch, or the input may be detected from a pedal operation or the like.

In the example of FIG. 6, the video processing module 403 does not need to change the image with the mirror angle, but because mirror control is stopped at the timing of the switch 601, this approach has the drawback of placing a burden on the user (a sketch of the switch-driven loop follows).
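Embodiment 2 trades image identification for timing capture. A minimal sketch under the same interface assumptions as above, with the dwell time per angle as an illustrative parameter:

    # Hypothetical sketch of the FIG. 6 variant: sweep the mirror while showing
    # one fixed image, and freeze at the angle active when the switch is pressed.
    import time

    def setup_by_switch(mirror, display, switch, angles, dwell_s=0.5):
        display.show("HUD")              # the same image at every angle
        for angle in angles:
            mirror.set_angle(angle)      # step the concave mirror
            time.sleep(dwell_s)          # give the user time to react
            if switch.pressed():         # stop mirror control at this timing
                return angle
        return None                      # no press: fall back, e.g. to home position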
FIG. 7 is the overall processing flow of Embodiment 3, which is an example that uses image recognition. The same steps as in FIG. 2 carry the same reference numerals, and their description is omitted. In this example, the displayed image need not be changed with the mirror angle (S710).

Here, instead of detecting a predetermined operation by the user, an image of the vehicle interior (for example, the driver's seat) is acquired (S701). For this purpose, an in-vehicle camera for capturing the interior is installed, for example, at the position of the rear-view mirror.

The pupil of the user (driver) is detected from the acquired image (S702). To detect the pupil region, a known face recognition technique may be applied to cut out the eye region from the face image. For the pupil detection, it is also desirable to give voice guidance such as 'The initial setting of the HUD will now begin. Please stop the car where it is not in direct sunlight. Look straight ahead and do not move your head.'

After the pupil is detected, it is determined whether the user's pupil has changed (S703). This can be determined, for example, by cutting out the user's eye region from the acquired image, detecting the pupil and the iris, and evaluating the ratio of their diameters.

When light from the HUD enters the user's eye, the pupil constricts as a physiological reaction to the brightness of the incident light. For example, when the pupil diameter becomes smaller than half the iris diameter, it can be determined that light from the HUD has entered the user's eye (that is, that the image has been seen). The mirror angle is therefore adjusted to the angle at which the user's pupil constricted (S704), distortion correction corresponding to the mirror angle is set (S209), and operation shifts to normal HUD operation (S210).
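The decision in S703 and S704 reduces to a per-frame ratio test between pupil and iris diameters. A minimal sketch, assuming a hypothetical measure_eye() helper that returns both diameters in pixels for one camera frame:

    # Hypothetical sketch of the FIG. 7 test (S701-S704). measure_eye() is an
    # assumed helper returning (pupil_px, iris_px) diameters for one frame.

    CONSTRICTION_RATIO = 0.5   # source example: pupil under half the iris diameter

    def find_angle_by_constriction(mirror, camera, angles, measure_eye):
        """Step through mirror angles and return the first one at which the
        pupil/iris ratio indicates that HUD light has entered the eye."""
        for angle in angles:
            mirror.set_angle(angle)                   # S204: change mirror angle
            pupil, iris = measure_eye(camera.grab())  # S701/S702: detect eye parts
            if pupil / iris < CONSTRICTION_RATIO:     # S703: pupil constricted
                return angle                          # S704: adjust to this angle
        return None                                   # image never seen: retry or warn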
 図8は実施例4の全体の処理フローである。実施例4では、画像認識を利用した別の例を説明する。図2、図7と同じ構成は同じ番号を付して説明は省略する。この例では、ミラー角度によって表示する画像は変えなくてもよい(S710)。 FIG. 8 is an overall processing flow of the fourth embodiment. In the fourth embodiment, another example using image recognition will be described. The same configurations as those in FIG. 2 and FIG. In this example, the displayed image may not be changed depending on the mirror angle (S710).
In the example of FIG. 8, changes in the user's pupil are detected (S710), and the timing at which the pupil dilates is sought (S803). That is, as the mirror angle is changed (S204) while the image is displayed, the user sees the image at some point, and the pupil constricts. When the mirror angle is changed further, the image is lost (light no longer enters the eye) and the pupil dilates. This timing is detected in step S803.
When the pupil dilates, the external light is checked in order to determine whether the dilation was caused by ambient conditions (S804). If the external light has darkened (S805), the pupil may have dilated because of it; the result is therefore discarded as a false detection, and the mirror-angle sweep continues (S204).
If the external light has not darkened (S805), the mirror angle is stepped back by one position so that the image becomes visible again (S806). At this angle, the device transitions to normal HUD operation (S210).
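A sketch of this Embodiment 4 flow (S204, S803 to S806), including the external-light plausibility check, could look like the following; `ambient_lux_fn` stands in for the external light sensor, and the dilation and lux-drop thresholds are illustrative assumptions:

```python
DILATION_DELTA = 0.15   # assumed ratio increase treated as "pupil opened" (S803)
LUX_DROP_FACTOR = 0.8   # assumed: darker than 80% of the previous reading

def calibrate_by_dilation(angles, ambient_lux_fn):
    """Sweep the mirror; when the pupil re-dilates the image has just been
    lost, so step back one position (S806). A simultaneous drop in external
    light invalidates the detection (S804/S805) and the sweep continues."""
    prev_ratio, prev_lux = None, ambient_lux_fn()
    for i, angle in enumerate(angles):
        set_mirror_angle(angle)                                  # S204
        ratio, lux = pupil_iris_ratio(grab_eye_image()), ambient_lux_fn()
        if (ratio is not None and prev_ratio is not None
                and ratio - prev_ratio > DILATION_DELTA):        # pupil opened
            if lux >= prev_lux * LUX_DROP_FACTOR:                # not an ambient dip
                return angles[i - 1]                             # one step back (S806)
            # Otherwise the external light darkened: false detection, keep sweeping.
        prev_ratio, prev_lux = ratio, lux
    return None
```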
In the methods of Embodiments 3 and 4, which detect the proper mirror position from pupil changes, the display appears brightest at the proper mirror position, so the user's pupil is most constricted there. The brightness of the image may be set arbitrarily, but a brighter screen is preferable because it enlarges the pupil response. On the other hand, an image that is too bright may dazzle the user, so it is desirable to set an upper limit. To avoid the influence of external light during initial setting, and for safety, voice guidance such as "Please park the vehicle out of direct sunlight" may be given, and the device may be configured not to perform the initial setting unless the parking brake is engaged.
FIG. 9 is a block diagram for executing the processing of FIG. 8. Components identical to those in FIG. 4 are given the same reference numerals, and their description is omitted. In the example of FIG. 9, the input device 303 comprises a viewpoint measurement camera 901, which acquires the user image in step S701, and an external light detection sensor 902. The image acquired by the viewpoint measurement camera 901 is input to the pupil detection unit 903 of the viewpoint position detection unit 401, which performs the processing of S702 and S803. The video processing module 403 need not change the image with the mirror angle, but to make pupil changes easier to detect it may increase the image brightness during initial setting relative to normal operation.
Using the external light measured by the external light detection sensor 902, the external light detection unit 904 performs the processing of S804 and S805.
Embodiment 5 describes the distortion correction processing S209 of FIGS. 2, 7, and 8 and the distortion correction module of FIGS. 4, 6, and 9. In Embodiments 1 to 4 this distortion correction is not strictly required; it is an optional feature.
FIG. 10 is a conceptual diagram explaining the purpose of distortion correction; it illustrates the positional relationship between the user's eye 106 and the virtual image 107, and how the virtual image appears to the user. As shown in FIG. 10, when the position (viewpoint) of the eye 106 moves up or down relative to the virtual image 107, the appearance 1001 of the virtual image is distorted.
This phenomenon arises because, in Embodiments 1 to 4, changing the mirror angle (S204) changes the optical path from the light source to the eye and thus the shape of the image. In particular, when the HUD forms the image by reflection off the windshield 102, the windshield has a complicated curved shape that also differs between vehicle models. Changing the mirror angle therefore changes the incident position and angle of the light on the windshield 102 in a complicated way, so the appearance of the image differs (that is, it is distorted) as the mirror angle changes. A technique for pre-distorting the projected image so as to cancel such distortion is disclosed in Patent Document 2.
Taking Embodiment 1 (FIGS. 2 to 5) as an example, the process of setting the mirror to a specified angle (S208) uniquely determines the positions of the light source, the virtual image, and the eye. For this positional relationship, the projected image is pre-distorted so that the user sees an undistorted image. How the original image should be distorted can be determined experimentally on the actual vehicle: for example, for each of several viewpoints, the mirror is set to the specified angle (S208) and a perfect-circle image is projected at that position. If the image appears distorted, the original image is transformed until it appears as a perfect circle.
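The experimental fitting step can be illustrated, under simplifying assumptions, as fitting a perspective transform from the observed (distorted) positions of a test pattern back to their ideal positions. A single homography cannot model a curved windshield exactly (a denser remap table, as sketched after the next paragraph, allows arbitrary warps), so the following is only an approximate sketch; the use of a point grid and OpenCV's homography fit are assumptions, not the patent's method:

```python
import cv2
import numpy as np

def fit_conversion_rule(seen_pts: np.ndarray, ideal_pts: np.ndarray) -> np.ndarray:
    """Fit the 3x3 perspective transform mapping observed (distorted) pattern
    points back to their ideal positions; pre-warping the source image with it
    approximately cancels the distortion seen at this mirror angle."""
    H, _ = cv2.findHomography(seen_pts, ideal_pts, method=cv2.RANSAC)
    return H

def apply_conversion_rule(frame: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Pre-warp one source frame with the fitted transform."""
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```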
The resulting conversion rule (for example, a function) is stored as the distortion correction information 503 for each mirror angle, as shown in FIG. 5. Alternatively, the transformed image itself may be stored.
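One plausible realization of the stored per-angle conversion rule is a pair of remap tables applied with OpenCV, whose contents would come from the circle-projection experiment described above. The data layout here is an assumption made for the sketch, not the patent's specification:

```python
import cv2
import numpy as np

# Assumed layout: distortion correction information 503 as one pair of remap
# tables (map_x, map_y) per mirror-angle step, measured on the actual vehicle.
distortion_correction_info: dict[int, tuple[np.ndarray, np.ndarray]] = {}

def prewarp(frame: np.ndarray, mirror_step: int) -> np.ndarray:
    """Pre-distort the source frame so that, after reflection off the
    windshield at this mirror angle, the user sees an undistorted image."""
    map_x, map_y = distortion_correction_info[mirror_step]
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```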
In these embodiments, functions implemented in software can equally be realized in hardware such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). Such embodiments are also within the scope of the present invention.
The present invention is not limited to the embodiments described above and includes various modifications. For example, part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. It is also possible to add, delete, or replace part of the configuration of each embodiment with configurations from other embodiments.
The invention is applicable to HUDs used for information display in automobiles and the like.
Driver (or occupant) 101, windshield 102, light 103 for projecting image information, folding mirror 104, concave mirror 105, eye 106, virtual image 107

Claims (9)

1.  An image display device that emits light for projecting an image and forms a virtual image to display the image, comprising:
    a control module that sets the angle at which the light is emitted to each of a plurality of conditions; and
    a viewpoint position detection module that selects one of the plurality of conditions,
    wherein the light is emitted under the selected condition to display the image.
2.  The image display device according to claim 1, further comprising a video processing module,
    wherein the video processing module displays a different image for each of the plurality of conditions, and
    the viewpoint position detection module selects the condition corresponding to a specific one of the different images when that image is designated by the user.
3.  The image display device according to claim 1,
    wherein the viewpoint position detection module selects the condition corresponding to a timing designated by the user while a specific one of the plurality of conditions is set.
4.  The image display device according to claim 1, further comprising a camera that acquires an image of the user's eye,
    wherein the viewpoint position detection module detects, from the image of the user's eye, the timing at which the user's pupil contracts, and selects the condition corresponding to that timing.
5.  The image display device according to claim 1, further comprising a camera that acquires an image of the user's eye,
    wherein the viewpoint position detection module detects, from the image of the user's eye, the timing at which the user's pupil dilates, and selects the condition immediately preceding the condition corresponding to that timing.
6.  The image display device according to claim 5, further comprising a sensor that detects the state of external light,
    wherein the viewpoint position detection module invalidates the selection based on the state of the external light.
7.  The image display device according to claim 1,
    wherein the control module sets the angle at which the light is emitted to the plurality of conditions by controlling the angle of a mirror that reflects the light.
8.  The image display device according to claim 7, further comprising a distortion correction module,
    wherein the distortion correction module changes the distortion correction setting for the image in accordance with the condition of the angle at which the light is emitted.
9.  The image display device according to claim 8,
    wherein the distortion correction setting for the image is a conversion rule that transforms the original image conveyed by the projected light.
PCT/JP2015/078728 2015-10-09 2015-10-09 Image display device WO2017061026A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2015/078728 WO2017061026A1 (en) 2015-10-09 2015-10-09 Image display device
JP2017544149A JP6637986B2 (en) 2015-10-09 2015-10-09 Image display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/078728 WO2017061026A1 (en) 2015-10-09 2015-10-09 Image display device

Publications (1)

Publication Number Publication Date
WO2017061026A1 true WO2017061026A1 (en) 2017-04-13

Family

ID=58488160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/078728 WO2017061026A1 (en) 2015-10-09 2015-10-09 Image display device

Country Status (2)

Country Link
JP (1) JP6637986B2 (en)
WO (1) WO2017061026A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6272231U (en) * 1985-10-25 1987-05-08
JP4577638B2 (en) * 2001-07-30 2010-11-10 日本精機株式会社 Vehicle display device
JP2003107391A (en) * 2001-09-28 2003-04-09 Nippon Seiki Co Ltd Head-up display device
JP2005096664A (en) * 2003-09-26 2005-04-14 Nippon Seiki Co Ltd Display device for vehicle
JP2012148754A (en) * 2011-01-18 2012-08-09 Yoshikazu Yui Method for adjusting rearview mirror for vehicle
JP2014210537A (en) * 2013-04-19 2014-11-13 トヨタ自動車株式会社 Head-up display device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008137490A (en) * 2006-12-01 2008-06-19 Yazaki Corp Vehicular display device and its display position adjustment assistance method
JP2015048007A (en) * 2013-09-03 2015-03-16 株式会社デンソー Information display device
JP2015087619A (en) * 2013-10-31 2015-05-07 日本精機株式会社 Vehicle information projection system and projection device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019124323A1 (en) * 2017-12-19 2019-06-27 コニカミノルタ株式会社 Virtual image display device and headup display device
JP2019182237A (en) * 2018-04-11 2019-10-24 ヒュンダイ・モービス・カンパニー・リミテッド Vehicular head up display device and control method thereof
US10338397B1 (en) 2018-04-18 2019-07-02 Hyundai Mobis Co., Ltd. Vehicle head-up display device and control method thereof
WO2021171346A1 (en) * 2020-02-25 2021-09-02 三菱電機株式会社 Display control device, head-up display, and display control method
JP2021173980A (en) * 2020-04-30 2021-11-01 京セラ株式会社 Image display system
JP7337023B2 (en) 2020-04-30 2023-09-01 京セラ株式会社 image display system

Also Published As

Publication number Publication date
JPWO2017061026A1 (en) 2018-07-05
JP6637986B2 (en) 2020-01-29

Similar Documents

Publication Publication Date Title
WO2017061026A1 (en) Image display device
JP6221942B2 (en) Head-up display device
JP6255537B2 (en) Projection display apparatus and projection control method
JP6462194B2 (en) Projection display device, projection display method, and projection display program
WO2015174051A1 (en) Display device and display method
JP6387465B2 (en) Projection display apparatus and projection control method
KR20100026466A (en) Head up display system and method for adjusting video display angle thereof
JP6186538B2 (en) Projection display system and control method for projection display device
JP7140504B2 (en) projection display
JP6482975B2 (en) Image generating apparatus and image generating method
JP2017097759A (en) Visual line direction detection device, and visual line direction detection system
US20190339535A1 (en) Automatic eye box adjustment
JP2016210259A (en) Head-up display
JP2014149640A (en) Gesture operation device and gesture operation program
WO2014049787A1 (en) Display device, display method, program, and recording medium
JP2016147532A (en) Image generation device, and head-up display
WO2018124299A1 (en) Virtual image display device and method
KR20230021219A (en) Apparatus for controlling display of vehicle and method thereof
JP7209197B2 (en) Control device, vehicle, control method
KR101550606B1 (en) Curved display apparatus for vehicle
JP6620682B2 (en) In-vehicle display device
JP2023078881A (en) Image projection device and control method of image projection device
JP7149192B2 (en) head-up display device
JP2020071441A (en) Display unit
JP2018162023A (en) Operation device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15905845

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017544149

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15905845

Country of ref document: EP

Kind code of ref document: A1