TWI766954B - Split exit pupil heads-up display systems and methods - Google Patents

Split exit pupil heads-up display systems and methods Download PDF

Info

Publication number
TWI766954B
TWI766954B TW107106971A
Authority
TW
Taiwan
Prior art keywords
image
hud
pixels
imager
light emitting
Prior art date
Application number
TW107106971A
Other languages
Chinese (zh)
Other versions
TW201837539A (en)
Inventor
葛洛力 哈森 S 艾爾
蔡靖波
奇理 莊
馬提 邁爾斯
Original Assignee
美商傲思丹度科技公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/449,679 (US10539791B2)
Application filed by 美商傲思丹度科技公司 (Ostendo Technologies, Inc.)
Publication of TW201837539A
Application granted
Publication of TWI766954B

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0081Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. enlarging, the entrance or exit pupil
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10Beam splitting or combining systems
    • G02B27/1066Beam splitting or combining systems for enhancing image performance, like resolution, pixel numbers, dual magnifications or dynamic range, by tiling, slicing or overlapping fields of view
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/08Mirrors
    • G02B5/10Mirrors with curved faces
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/011Head-up displays characterised by optical features comprising device for correcting geometrical aberrations, distortion
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0112Head-up displays characterised by optical features comprising device for generating colour display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0123Head-up displays characterised by optical features comprising devices increasing the field of view
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0123Head-up displays characterised by optical features comprising devices increasing the field of view
    • G02B2027/0125Field-of-view increase by wavefront division
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0127Head-up displays characterised by optical features comprising devices increasing the depth of field
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0136Head-up displays characterised by optical features comprising binocular systems with a single image source for both eyes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0185Displaying image at variable distance

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Instrument Panels (AREA)

Abstract

Split exit pupil (or split eye-box) heads-up display (HUD) systems and methods are described. The described HUD systems and methods make use of a split exit pupil design that enables a modular HUD system and allows the HUD viewing eye-box size to be tailored while reducing the overall HUD volume. A HUD module utilizes a high-brightness, small-size micro-pixel imager to generate one or more HUD virtual images with one or more given viewing eye-box segment sizes. When integrated together into a HUD system, a multiplicity of such HUD modules displaying the same image enables the integrated HUD system to have an eye-box size substantially larger than that of a single HUD module. The resultant integrated HUD system is also substantially smaller in volume than a HUD system that uses a single larger imager. Furthermore, the integrated HUD system can comprise a multiplicity of HUD modules to scale the eye-box size to match the intended application while maintaining a given desired overall HUD system brightness.

Description

Split exit pupil heads-up display systems and methods

The present invention relates generally to heads-up displays (HUDs) and, more particularly, to HUD systems that generate one or more virtual images.

引用參考: [1] 美國專利第7,623,560號,El-Ghoroury等,Quantum Photonic Imager and Methods of Fabrication Thereof, 2009年11月24日。 [2] 美國專利第7,767,479號,El-Ghoroury等,Quantum Photonic Imager and Methods of Fabrication Thereof, [3] 美國專利第7,829,902號,El-Ghoroury 等,Quantum Photonic Imager and Methods of Fabrication Thereof, [4] 美國專利第8,049,231號,El-Ghoroury等,Quantum Photonic Imager and Methods of Fabrication Thereof, [5] 美國專利第8,098,265號,El-Ghoroury等,Quantum Photonic Imager and Methods of Fabrication Thereof, [6] 美國專利申請公開案第2010/0066921號,El-Ghoroury等,Quantum Photonic Imager and Methods of Fabrication Thereof, [7] 美國專利申請公開案第2012/0033113號,El-Ghoroury等,Quantum Photonic Imager and Methods of Fabrication Thereof, [8] 美國專利第4,218,111號,Withrington等,Holographic Heads-up Displays, 1980年8月19日, [9] 美國專利第6,813,086號,Bignolles等,Head Up Display Adaptable to Given Type of Equipment,2004年11月2日, [10] 美國專利第7,391,574號,Fredriksson, Heads-up Display, 2008年6月24日, [1] 美國專利第7,982,959號,Lvovskiy等,Heads-up Display,2011年7月19日, [12] 美國專利第4,613,200號,Hartman, Heads-Up Display System with Holographic Dispersion Correcting,1986年9月23日, [13] 美國專利第5,729,366號,Yang, Heads-Up Display for Vehicle Using Holographic Optical Elements,1998年3月17日, [14] 美國專利第8,553,334號,Lambert等,Heads-Up Display System Utilizing Controlled Reflection from Dashboard Surface,2013年10月8日, [15] 美國專利第8,629,903號,Seder等,Enhanced Vision System Full-Windshield HUD,2014年7月14日, [16] B. H. Walker, Optical Design of Visual Systems, Tutorial tests in optical engineering,由國際光學工程學會(The international Society of Optical Engineering,SPIE)公佈, 第139-150頁, ISBN 0-8194-3886-3, 2000, [17] C. Guilloux等,Varilux S Series Braking the Limits [18] M. Born, Principles of Optics,第七版,劍橋大學出版社 1999年版,Section 5.3, 第236-244頁, 隨著視覺輔助技術藉由使得汽車駕駛員更可視地感知及知情汽車儀錶盤資訊而不需使駕駛員之視線及注意力偏離道路來有助於汽車安全,現正探求抬頭顯示器。然而,目前可用的抬頭顯示器在體積上大,且過於昂貴,以至不能作為用於大多數汽車中之可行選項。在飛機及直升機中之抬頭顯示器之應用中遭遇此等相同障礙,儘管成本因素之程度較小。在抬頭顯示器汽車應用之情況下,依據廣泛範圍之載具大小、類型及成本要求,進一步加劇體積及成本約束。因此,存在對適合用於諸如汽車、小型飛機及直升機之小型載具的低成本及非大型抬頭顯示器的需要。 先前技術HUD系統可大體上分組為兩個類型:瞳孔成像HUD及非瞳孔成像HUD。瞳孔成像HUD通常由負責中間影像遞送及瞳孔成形之中繼模組以及負責影像準直及檢視者之眼位置(本文中被稱作眼框)處之瞳孔成像的準直模組組成。瞳孔成像HUD之準直模組通常實現為傾斜彎曲或平面反射器或全像光學元件(holographic optical element,HOE),且該中繼模組通常傾斜以使光路折曲且補償光學像差。非瞳孔成像HUD藉由擴散根據顯示器處或中間影像位置處的光錐角度限定系統光圈。對於中間影像HUD系統,亦需要中繼模組,但HUD光圈僅由準直光學裝置決定。準直光學裝置通常具有軸向對稱性,但具有摺疊鏡以滿足所要求之體積約束。此由像差校正需要及系統體積態樣決定。 圖1-1中展示之參考[8]中所描述之先前技術使用凹面HOE反射器(圖1-1中之11)作為合併器及準直器以最小化準直光學裝置且減小HUD系統體積態樣。所得HUD系統需要複雜的傾斜之中繼光學裝置(圖1-1中之10)以補償像差且遞送中間影像。此外,此HUD系統僅對狹窄光譜起作用。 圖1-2中展示之參考[9]中所描述之先前技術使用中繼光學裝置(REL)模組以在彙集合併器(convergent combiner,CMB)鏡(圖1-2中之CMB)之焦平面處遞送中間影像,且限定系統瞳孔。CMB鏡準直中間影像且將系統瞳孔成像至檢視者之眼上以促進檢視。此瞳孔成像HUD方法必要地設計複雜的REL模組以用於封裝及像差補償。 圖1-3中展示之參考[10]中所描述之先前技術使用投影透鏡(3)以將中間影像投射至作為影像源之漫射表面(圖1-3中之51)及半透明準直鏡(圖1-3中之7)上。準直鏡在無窮遠處形成一影像,且準直光學裝置之光圈係由漫射器之角度寬度限定。 圖1-4中展示之參考[11]中所描述之先前技術使用一影像形成源,其由兩個液晶顯示器(liquid crystal display,LCD)面板(圖1-4中之23)組成以在置放於準直光學裝置模組(圖1-4中之1)之焦平面處的漫射螢幕(圖1-4中之5)上形成中間影像。影像形成源中之兩個LCD面板之主要用途係達成足夠亮度以實現所形成影像之可檢視性。為達成此目標,影像形成源中之兩個LCD面板經組態以在漫射螢幕處形成兩個連續之並列影像,抑或使兩個影像水平地及豎直地在漫射螢幕處彼此重疊及偏移一半像素。 參考[12]中所描述之先前技術使用一對反射全像光學元件(HOE)以達成全像分散校正及在觀察者之視野內投射寬頻顯示源之虛擬影像。參考[13]中所描述之先前技術亦使用一對全像光學元件(HOE):一個係透射性的,另一係反射性的,以投射一影像至載具擋風玻璃上。 圖1-5中展示之參考[14]中所描述之先前技術使用安裝於載具擋風玻璃頂側上之影像投影儀(圖1-5中之14),其經組態以投射一影像至裝配有多面體反射表面(圖1-5中之18)之載具儀錶盤上,該多面體反射表面經組態以將來自影像投影儀之影像反射至載具擋風玻璃上。載具擋風玻璃表面定向為朝向檢視者反射來自儀錶盤多面體反射表面之影像。 
簡要描述之先前技術HUD系統以及引用之先前技術中所描述之諸多其他HUD系統的共同點係系統之高成本及大體積大小。此外,所見先前技術HUD系統中無一者可在大小及成本上按比例縮放以匹配寬範圍的汽車及其他載具之大小及價格範圍。因此,本發明之一目標係介紹使用大量發光微縮放像素陣列成像器以實現相較於使用單個影像形成源之HUD系統體積大體上更小之HUD系統的抬頭顯示器方法。本發明之進一步目標係介紹一種新穎的分裂出射瞳孔HUD系統設計方法,其利用大量發光微縮放像素陣列成像器以使得能夠實現具有可按比例縮放以匹配廣泛範圍汽車及小型載具大小及價格範圍之體積及成本態樣的模組化HUD系統。本發明之額外目標及優勢應自繼續參考附圖之其較佳實施例之如下詳細描述變得顯而易見。Citation Reference: [1] US Patent No. 7,623,560, El-Ghoroury et al., Quantum Photonic Imager and Methods of Fabrication Thereof, Nov. 24, 2009. [2] U.S. Patent No. 7,767,479, El-Ghoroury et al., Quantum Photonic Imager and Methods of Fabrication Thereof, [3] U.S. Patent No. 7,829,902, El-Ghoroury et al., Quantum Photonic Imager and Methods of Fabrication Thereof, [4] U.S. Patent No. 8,049,231, El-Ghoroury et al., Quantum Photonic Imager and Methods of Fabrication Thereof, [5] U.S. Patent No. 8,098,265, El-Ghoroury et al., Quantum Photonic Imager and Methods of Fabrication Thereof, [6] U.S. Patent Application Publication No. 2010/0066921, El-Ghoroury et al, Quantum Photonic Imager and Methods of Fabrication Thereof, [7] US Patent Application Publication No. 2012/0033113, El-Ghoroury et al, Quantum Photonic Imager and Methods of Fabrication Thereof, [8 ] U.S. Patent No. 4,218,111, Withrington et al., Holographic Heads-up Displays, Aug. 19, 1980, [9] U.S. Patent No. 6,813,086, Bignolles et al., Head Up Display Adaptable to Given Type of Equipment, Nov. 2, 2004 , [10] US Patent No. 7,391,574, Fredriksson, Heads-up Display, June 24, 2008, [1] US Patent No. 7,982,959, Lvovskiy et al., Heads-up Display, July 19, 2011, [ 12] U.S. Patent No. 4,613,200, Hartman, Heads-Up Display System with Holographic Dispersio n Correcting, Sept. 23, 1986, [13] U.S. Patent No. 5,729,366, Yang, Heads-Up Display for Vehicle Using Holographic Optical Elements, March 17, 1998, [14] U.S. Patent No. 8,553,334, Lambert et al. , Heads-Up Display System Utilizing Controlled Reflection from Dashboard Surface, Oct. 8, 2013, [15] U.S. Patent No. 8,629,903, Seder et al., Enhanced Vision System Full-Windshield HUD, Jul. 14, 2014, [16] B. H. Walker, Optical Design of Visual Systems, Tutorial tests in optical engineering, published by The international Society of Optical Engineering (SPIE), pp. 139-150, ISBN 0-8194-3886-3, 2000, [ 17] C. Guilloux et al., Varilux S Series Braking the Limits [18] M. Born, Principles of Optics, 7th ed., Cambridge University Press, 1999, Section 5.3, pp. 236-244. Head-up displays are being pursued to aid in car safety by allowing car drivers to more visually perceive and be informed of car dashboard information without taking the driver's sight and attention off the road. However, currently available head-up displays are bulky and too expensive to be a viable option for use in most automobiles. These same obstacles are encountered in the application of head-up displays in aircraft and helicopters, albeit to a lesser extent as a cost factor. In the case of head-up display automotive applications, size and cost constraints are further exacerbated by a wide range of vehicle sizes, types and cost requirements. Accordingly, there is a need for a low-cost and non-large head-up display suitable for use in small vehicles such as automobiles, small aircraft and helicopters. Prior art HUD systems can be broadly grouped into two types: pupillary imaging HUDs and non-pupil imaging HUDs. 
Pupil imaging HUDs typically consist of a relay module responsible for intermediate image delivery and pupil shaping, and a collimation module responsible for image collimation and pupil imaging at the viewer's eye location (referred to herein as the eye frame). The collimation module of the pupil imaging HUD is usually implemented as a tilted curved or flat reflector or a holographic optical element (HOE), and the relay module is usually tilted to bend the optical path and compensate for optical aberrations. Non-pupil imaging HUDs define the system aperture by diffusion according to the angle of the light cone at the display or at the intermediate image position. For the intermediate image HUD system, a relay module is also required, but the HUD aperture is only determined by the collimating optical device. Collimating optics typically have axial symmetry, but have folded mirrors to meet the required volume constraints. This is determined by the need for aberration correction and the volume profile of the system. The prior art described in reference [8] shown in Fig. 1-1 uses a concave HOE reflector (11 in Fig. 1-1) as a combiner and collimator to minimize collimating optics and reduce the size of the HUD system volume shape. The resulting HUD system requires complex tilted relay optics (10 in Figures 1-1) to compensate for aberrations and deliver intermediate images. Furthermore, this HUD system only works on a narrow spectrum. The prior art described in reference [9] shown in Figures 1-2 uses a relay optical device (REL) module to focus on the convergent combiner (CMB) mirror (CMB in Figures 1-2) Intermediate images are delivered at the plane and define the system pupil. The CMB mirror collimates the intermediate image and images the system pupil onto the viewer's eye to facilitate viewing. This pupil imaging HUD method necessitates the design of complex REL modules for packaging and aberration compensation. The prior art described in reference [10] shown in Figures 1-3 uses a projection lens (3) to project an intermediate image onto a diffusing surface (51 in Figures 1-3) as the source of the image and translucent collimation mirror (7 in Figure 1-3). The collimating mirror forms an image at infinity, and the aperture of the collimating optics is defined by the angular width of the diffuser. The prior art described in reference [11] shown in Figures 1-4 uses an image forming source consisting of two liquid crystal display (LCD) panels (23 in Figures 1-4) to An intermediate image is formed on a diffuser screen (5 in Figures 1-4) placed at the focal plane of the collimating optics module (1 in Figures 1-4). The primary purpose of the two LCD panels in the image forming source is to achieve sufficient brightness for the viewability of the formed image. To achieve this goal, the two LCD panels in the image forming source are configured to form two consecutive side-by-side images at the diffusing screen, or to have the two images overlap each other horizontally and vertically at the diffusing screen and Offset by half a pixel. The prior art described in reference [12] used a pair of reflective holographic optical elements (HOE) to achieve holographic dispersion correction and to project a virtual image of a broadband display source within the viewer's field of view. The prior art described in reference [13] also uses a pair of holographic optical elements (HOE): one transmissive and the other reflective, to project an image onto the vehicle windshield. 
The prior art described in reference [14] shown in FIGS. 1-5 uses an image projector (14 in FIGS. 1-5 ) mounted on the top side of the vehicle windshield, which is configured to project an image To a vehicle dashboard equipped with a polyhedral reflective surface (18 in Figures 1-5) configured to reflect the image from the image projector onto the vehicle windshield. The vehicle windshield surface is oriented to reflect the image from the reflective surface of the instrument panel polyhedron toward the viewer. Common to the briefly described prior art HUD system, as well as many other HUD systems described in the cited prior art, is the high cost and bulky size of the system. Furthermore, none of the prior art HUD systems seen can be scaled in size and cost to match a wide range of car and other vehicle sizes and price ranges. Accordingly, one of the objectives of the present invention is to introduce a head-up display method that uses a large number of light-emitting micro-scale pixel array imagers to achieve a HUD system that is substantially smaller in size than HUD systems that use a single image forming source. A further object of the present invention is to introduce a novel split exit pupil HUD system design method that utilizes a large number of light-emitting micro-scale pixel array imagers to enable a device with a scale that is scalable to match a wide range of automotive and small vehicle sizes and price ranges A modular HUD system with the highest volume and cost. Additional objects and advantages of the present invention should become apparent from the following detailed description of its preferred embodiments with continued reference to the accompanying drawings.

本發明的以下詳細描述中對「一個實施例」或「一實施例」之參考意謂結合實施例描述之特定特徵、結構或特性包括於本發明的至少一個實施例中。片語「在一個實施例中」在本實施方式中各處之出現未必全部指同一實施例。 最近已介紹新類別之發光微縮放像素陣列成像器裝置。此等裝置特徵在於包括全部所要求之圖像處理驅動電路的極小單個裝置大小中之高亮度、非常快速之多色光強度及空間調變能力。一個此等裝置之固態發光(SSL)像素可為發光二極體(light emitting diode,LED)抑或雷射二極體(laser diode,LD),其開關狀態係藉由含於CMOS晶片(或裝置)內之驅動電路控制,該驅動電路上接合有成像器之發光微縮放像素陣列。包含此等成像器裝置之發光陣列之像素之大小應通常在大致5至20微米範圍內,而裝置之典型發光表面積在大致15至150平方公釐範圍內。發光微縮放像素陣列裝置內之像素可個別地通常經由其CMOS晶片之驅動電路按空間、色譜及時間定址。由此等成像器裝置產生之光之亮度可以適當地低電力消耗達到多個100,000 cd/m2 。此等裝置之一個實例係QPI®成像器(參考參考[1-7]),其在下文描述之例示性實施例中參考。然而,應理解,QPI®成像器僅為可用於本發明中之裝置的類型之一實例。(「QPI」係Ostendo Technologies公司之註冊商標)。由此,在以下描述中,對QPI®成像器之任何參考應理解為出於作為可使用之固態發光像素陣列成像器(以下簡稱為「成像器」)之一個具體實例揭示實施例中之特定性的目的,而非出於本發明之任何限制之目的。 本發明組合此等成像器之發光微像素陣列裝置獨特能力與新穎的分裂出射瞳孔HUD系統架構,以實現低成本及小體積的模組化HUD (MHUD)系統,其可輕易地用於成本及體積約束至關重要之應用中,諸如(例如)汽車HUD。諸如QPI®成像器之成像器之發光高亮度微發射極像素陣列與本發明之分裂出射瞳孔HUD架構之組合允許在高亮度環境日光中有效地操作,然而在體積上足夠小以適配於寬範圍的載具大小及類型之儀錶盤或儀錶板後方。如本文所使用,詞「載具」用於最普遍意義,且包括某人在其中或藉由其行進之任何構件,包括但不限於在陸地、水中、水下及在空中行進。此等成像器致能之分裂出射瞳孔HUD架構的低成本及模塊性使得模組化HUD系統可經裁適以適應寬範圍的載具的體積約束。分裂出射瞳孔HUD系統之優勢應自以下段落中所描述之實施例的內容中提供的詳細描述變得更顯而易見。 圖2說明本發明之一個實施例的模組化HUD (MHUD)系統200的設計概念。如圖2中所說明,在較佳的實施例中,本發明之MHUD系統200由MHUD總成210組成,該MHUD總成又由裝配至一起以形成MHUD 210之大量模組215組成,藉此各模組215由具相關聯的光學裝置的單個成像器220及凹面鏡230組成。如圖2中所說明,自具相關聯的光學裝置的各單個成像器220發射之影像由其相關聯的凹面鏡230準直、放大及反射,接著部分地自載具擋風玻璃240反射以形成虛擬影像260,其可在載具之駕駛員(操作員)之標稱頭部位置處的眼框區段255內檢視。如圖2中所說明,MHUD總成210之模組215中之每一者經安置以在任一時間且在自載具擋風玻璃240之相同位置處形成相同虛擬影像260,但各自在其各別眼框區段255處,使得MHUD總成210之大量模組215共同地形成MHUD系統200之集合眼框250。亦即,虛擬影像260在眼框區段255中之每一者處部分地可檢視,但在集合眼框250中完全可檢視。因此,MHUD系統200的眼框區段255之整體大小可藉由選擇恰當數目之包含MHUD總成210的模組215來加以裁適,其中眼框區段及模組數目係使用者可定義的。儘管MHUD總成210之模組215中之每一者經安置以在任一時間形成相同虛擬影像260,但彼等影像當然將隨時間改變,且可緩慢改變,如例如燃料計影像之改變,或可更快速改變,諸如GPS導航系統顯示器影像之顯示中之改變,然而本發明的MHUD系統200可在影像資料以此種速率可用時以至少高達典型視頻速率之頻率操作。 在MHUD系統200之較佳實施例中,MHUD總成210之模組215之眼框區段255各自位於由其對應凹面鏡230反射之光線光束之出射瞳孔處。MHUD系統200之集合眼框250實際上為由MHUD總成210的模組215的眼框區段255之重疊形成的分裂出射瞳孔眼框。本發明之MHUD系統200之此分裂出射瞳孔設計方法在以下段落中進一步更詳細地闡述。 在本發明之MHUD系統200的較佳實施例中,MHUD總成210由裝配至一起以形成MHUD總成210之大量模組215組成,藉此各模組215由諸如QPI®成像器之成像器或諸如OLED裝置之具有相關聯光學裝置220及凹面鏡230的其他合適發光結構組成。本發明之此實施例之MHUD系統200的MHUD總成210之設計方法及其各別模組215在以下段落更詳細地描述,在其之前係本發明之MHUD系統200的相關優勢及相關設計參數取捨之解釋。MHUD 200 光學設計參數取捨 為理解本發明之MHUD系統200之優勢,認為解釋典型HUD系統及其相關設計參數之間的關係的基礎設計取捨係重要的。由HUD系統產生的虛擬影像通常重疊於自然場景上,以使得操作載具之檢視者能夠可視地感知載具操作參數,且提供關鍵資訊,諸如(例如)導航資訊,而不需駕駛員將他或她的視線及注意力自道路或載具外部環境移開。HUD系統之設計中待考慮之重要參數包括:集合眼框之目標大小、所需視野(FOV)、所形成之虛擬影像大小、虛擬影像解析度及系統體積約束。此等設計參數與約束之間的關係在圖3中說明。本發明之模組化 HUD ( MHUD ) 如何實現體積減小 - 參考圖3,MHUD系統200成像器220大小減小導致較小有效焦距(EFL),有效焦距為系統之特徵性光學跡線長度,且大體上有助於減小系統體積。然而,若維持眼框大小,則成像器光圈大小之減少導致降低之系統F/#,其伴隨光學複雜度之添加。此大體上導致較大系統體積。參考圖2中所說明的MHUD系統200設計概念,用於各模組215之眼框區段255之大小連同成像器220大小一起按比例縮放,以避免光學複雜度添加。此導致模組215中之每一者之體積按成像器220大小比例縮放。大量模組215經組合以形成MHUD總成210,其可提供任意大小之集合眼框250。本發明之MHUD系統200之此新穎的多區段式眼框設計概念係藉由將形成於檢視者之眼框處之系統的出射瞳孔分裂為各自與包含本發明之MHUD系統200之集合眼框250之眼框區段255中之一者對應的多個區段實現。本發明之MHUD系統200之此分裂出射瞳孔設計方法由此相較於提供同樣大小的眼框之先前技術HUD系統達成更小的整體體積態樣。此合乎需要地導致整體HUD體積、複雜度及成本之減小。在以下論述中描述本發明的MHUD系統200之所揭示之分裂出射瞳孔設計方法之其他優勢。當然,每一模組在任一時間發射相同影像,如此載具操作員將在同一位置處看到相同虛擬影像,其獨立於操作員檢視哪個或哪些眼框區段255。 使用鏡反射器參考[8-10]之先前技術HUD系統的體積之主要貢獻因素已被識別為凹面鏡。除鏡自身之較大大小以外,影像源之大小亦按比例較大,其使得需要使用較大大小成像器,諸如LCD面板,抑或形成投射至漫射螢幕上之較大大小中間影像,其對於併入投影儀成像器及其相關聯的投影光學裝置中添加甚至更多體積。如前文論述中所解釋,本發明之MHUD系統200藉由使用由各自使用裝配至一起以形成MHUD總成210之整體反射器235之較小大小凹面鏡230的多個模組215組成的MHUD總成210,相較於使用單個凹面鏡作為主要反射器之先前技術HUD系統實現大體上更小體積態樣,其大小小得多且達成小得多的光學跡線長度。使用較小光圈大小成像器220之MHUD總成210使得能夠使用具較小光學跡線長度之較小光圈大小凹面鏡230,其導致本發明之大體上更小之體積及具體積效益的MHUD系統200。 
本發明之MHUD系統200之設計藉由將通常由單個較大鏡產生的較大準直光束劃分為例示性實施例中之三個相等大小的準直子光束而起作用。各子光束係由模組215之光學子系統產生。因此,F#,光學複雜度及焦距(EFL)(或光學跡線長度)減小,且因此系統之實體體積包絡減小。圖4說明包含MHUD總成210之模組215的光學設計態樣及光線跡線圖。如圖4中所說明,較佳實施例之模組215由一個成像器連同其相關聯的光學裝置220及凹面鏡230組成。儘管在圖4中所說明的實施例中,與成像器410相關聯之光學裝置420展示為單獨透鏡光學元件,但在本發明之替代性實施例中,成像器相關聯光學裝置420可直接附接至成像器410之發光表面頂部以形成整合式成像器總成220。如圖4中所說明,在模組215中之每一者中,反射凹面鏡230放大及準直由其各別成像器(或其他成像器) 220產生的影像以形成集合眼框250之一個眼框區段255,同時與圖4中之成像器410相關聯之光學元件420平衡由該等反射凹面鏡230引起之離軸變形及傾斜像差。 圖5說明MHUD總成210之模組215之光學效能。如圖5中所說明,與成像器410相關聯之光學元件420之作用係平衡由反射凹面鏡230引起之離軸變形及傾斜像差以最小化影像遊動效應,同時維持調變轉移函數(MTF)處於足夠高水平。出於完整性之目的,影像遊動效應通常由歸因於由鏡像差所引起的光學變形而在光進入檢視者之瞳孔的方向上之變化所引起,且產生隨檢視者之頭部在HUD系統眼框中移動(或凝視)而被感知之虛擬影像虛假運動(被稱為「遊動效應」)[參考6]。最小化諸如HUD之雙眼光學系統中之遊動效應係至關重要的,此係因為在極端情況下,虛擬影像中之過量遊動效應可導致由人視覺及感覺系統之前庭態樣與動眼神經態樣之間的衝突所引起的動暈症、頭暈或噁心(參考[16,17])。 本發明之MHUD系統200之分裂出射瞳孔方法之另一優勢在於,當與使用具較大光學光圈之單個鏡之先前技術HUD系統相比時,其達成大體上減小之遊動效應。反射凹面鏡230之較小光學光圈的像差相較於先前技術單鏡HUD系統中使用之相對較大光學光圈反射鏡之像差小得多。由於遊動效應與由HUD反射鏡引起之像差所引起的光學變形(或光線方向偏差)之幅值成正比,因此當相比於先前技術HUD系統時,本發明之MHUD系統200之大量較小光學光圈凹面鏡230達成大體上較小的遊動效應。此外,MHUD模組215之眼框區段255之間的角度重疊(其在圖8之論述中更詳細地解釋)致使對虛擬影像260中任何點之感知併有來自多個MHUD模組215之光學貢獻。因此,由多個MHUD模組215之個別凹面鏡230之像差所引起的光學變形(或光線方向偏差)傾向於在虛擬影像260中之任何點處平均化,因此導致MHUD系統200之檢視者感知的整體遊動效應減小。 在本發明之另一實施例中,MHUD總成210之成像器220具有高於人類視覺系統(HVS)能夠解析之解析度,其中所添加的解析度專用於由凹面鏡230引起之像差所引起的殘餘光學變形的數位影像捲曲預補償。在典型HUD檢視體驗中,虛擬影像將形成於大致2.5m距離處。HVS之側向清晰度大致為200微弧度。在此距離處,HVS可解析大致2500×0.0002=0.5 mm像素,其等效於用於具有10"對角線之虛擬影像260之大致450×250像素解析度。用於例示性MHUD總成210中之成像器220可相較於此限制提供高得多的解析度,例如藉由相同大小之光學光圈提供640×360解析度,或甚至提供1280×720解析度。藉由相同大小之光學光圈提供較高解析度之成像器220使得能夠使用具有相同大小之光學光圈的凹面鏡230,由此維持MHUD總成200之體積優勢。成像器220之添加的解析度允許使用數位影像捲曲預補償,其幾乎消除由凹面鏡230像差引起之光學變形及所得遊動效應,同時維持虛擬影像260處之最大可實現解析度及相同體積優勢。 反射凹面鏡230中之每一者可為非球面抑或自由形式,藉此凹面鏡230之非球面或自由形式因數所選以最小化凹面鏡230之光學像差,且在必要時最小化擋風玻璃之曲率。應注意,成像器220中之每一者之位置較佳相對於其相關聯的凹面鏡230軸向對稱以確保任何兩個凹面鏡230之相鄰邊緣處之經最佳平衡的(某種程度上相等)像差。此係本發明之MHUD系統200之重要設計態樣,此係因為其確保MHUD系統200之集合眼框250具有之多個眼框區段255之間的虛擬影像260之均一檢視過渡。 圖6說明MHUD總成210之多視圖視角。如圖6中所說明,MHUD總成210由在罩殼600內裝配至一起之三個反射凹面鏡230組成。三個凹面鏡230可分開地製造接著在罩殼600內適配至一起,抑或可作為單個部分製造接著適配在罩殼600內。無論分開地裝配或作為單個光學部分裝配,三個凹面鏡230皆可使用壓印聚碳酸酯塑膠製造,其光學表面隨後使用濺鍍技術塗佈反射金屬,諸如銀或鋁之薄層。如圖6中所說明,罩殼之背面側壁由三個單獨部分610組成,其各自併入光學窗615,當背面側壁部分610分別與其各別凹面鏡230裝配至一起時,該光學窗將與其各別凹面鏡230之光軸對準。如圖6之側視圖視角中所說明,背面側壁部分610中之每一者之頂部邊緣617朝向凹面鏡230成角度,以允許將安裝在背面側壁部分610之成角度邊緣表面617上的成像器220與其各別凹面鏡230的光軸對準。 如圖6中之後側視圖視角所說明,背面側壁部分610將一起裝配至背板630之一側上,而MHUD總成210之控制及介面電子裝置(印刷電路板) 620安裝至背板630之相對側上。此外,背板630亦併有熱量散熱片以耗散由MHUD總成210之成像器220及介面電子裝置元件620產生的熱量。如圖6之後側視圖視角中所說明,成像器220中之每一者通常將安裝在將成像器220連接至控制及介面電子裝置620的可撓性電氣板618上。 如圖6之後側視角中所說明,各對凹面鏡230及背面側壁部分610之介面邊緣的中心可併有通常為光電二極體之光偵測器(photo detector,PD) 640,其各自經安置及定向以偵測自成像器220發射至其各別凹面鏡230上的光。通常,將在各模組中使用三個光電二極體,每個光電二極體用於所發射光的一種顏色。光偵測器(PD) 640之輸出連接至MHUD總成210之控制及介面電子裝置620,且用作至實施於介面電子裝置元件620之硬體及軟體設計元件內的均一性控制迴路(在以下論述中描述)的輸入。環境光光偵測器感測器660之輸出亦作為輸入提供至MHUD總成210之控制及介面電子裝置620,該感測器通常為大多數載具之儀錶盤亮度控制項的整體部分。 MHUD總成210之控制及介面電子裝置620併有圖7之方塊圖中說明的硬體及軟體設計功能元件,包括MHUD介面函式710、控制函式720及均一性迴路函式730。通常以硬體與軟體之組合實施的MHUD總成210之控制及介面電子裝置620的MHUD介面函式710自載具之駕駛員輔助系統(DAS)接收影像輸入715,且將由控制函式720提供之顏色及亮度校正735併入影像中,接著將影像輸入744、745及746提供至MHUD總成210之成像器220。儘管相同影像輸入715資料可提供至MHUD總成210之三個成像器220,但MHUD介面函式710基於自控制函式720接收之顏色及亮度校正735而將各成像器220特定顏色及亮度校正併入至其各別影像輸入744、745及746中。 
為確保跨越集合眼框250之多個區段255的顏色及亮度均一性,控制及介面電子裝置620之均一性迴路函式730自MHUD總成210之模組215中之每一者之光偵測器(PD)640接收輸入信號754、755及756,計算與MHUD總成210之模組215中之每一者相關聯的顏色及亮度,接著計算使得顏色及亮度跨越集合眼框250之多個區段255變得更均一所需的顏色及亮度校正。此將藉助於在MHUD總成210最初裝配時將被執行及儲存於控制及介面電子裝置620之記憶體中的初始校準查找表實現。由均一性迴路函式730計算之顏色及亮度校正接著提供至控制函式720,該控制函式組合此等校正與自環境光感測器650接收之輸入以及外部顏色及亮度調整輸入命令725,以產生顏色及亮度校正735,該等校正接著在經校正圖像數據作為影像輸入744、745及746提供至成像器220之前藉由MHUD介面函式710併入至影像資料中。在將自環境光感測器650接收之輸入併入至顏色及亮度校正時,控制函式720將與載具外部光亮度成比例地或相對於載具外部光亮度調整抬頭顯示器之虛擬影像之亮度。應注意,如本文所使用之影像資料意謂任何形式之影像資訊,無論其作為至抬頭顯示器之輸入被接收之影像資訊、提供至成像器之影像資訊,抑或任何其他形式之影像資訊。 如先前所解釋,MHUD系統200之一個實施例在虛擬影像260處使用解析度高於最大HVS可解析解析度的成像器220,且併有用以藉由數位捲曲輸入至成像器220的影像構件而消除或大體上減小其造成的光學變形及遊動效應的構件。彼實施例之MHUD系統200之MHUD總成210的MHUD介面函式710亦可併有大量查找表,其各自併有識別預補償凹面鏡230中之每一者之殘餘光學變形所需的數位影像捲曲參數之資料。此等參數由MHUD介面函式710使用以捲曲成像器220中之每一者之數位影像輸入,其方式為使得輸入至成像器220中之每一者之影像資料預補償其各別凹面鏡230之殘餘變形。併入於MHUD介面函式710中之查找表中的數位影像捲曲參數將自MHUD總成210之光學設計模擬初步產生,接著在數位影像捲曲預補償藉由MHUD介面函式710施加之後,基於各模組215之殘餘光學變形之量測值的光學測試數據擴充。所得經數位捲曲影像資料接著與由控制函式720提供之顏色及亮度校正735組合,接著顏色及亮度經校正且變形經預補償的影像資料作為影像輸入744、745及746提供至MHUD總成210之成像器220。藉由MHUD系統200之此設計方法,由凹面鏡230所引起之殘餘光學變形及其所得遊動效應可大體上被減小或完全消除,由此致能無變形之MHUD系統200。 如圖6之透視圖中所說明,MHUD總成210之頂部側為玻璃蓋板430,其充當載具儀錶盤頂部表面處之MHUD總成210的光學介面窗,且充當衰減日光紅外線放射以阻止成像器220處之日光熱量負載的濾光器。所使用之玻璃應被選擇為亦對所關注之光波長大體上透明。 MHUD總成210之設計方法利用人視覺系統(HVS)之特性以簡化MHUD總成210的設計實施及裝配容限。第一,眼瞳孔直徑大致為5mm (日間為3-5 mm且夜間為4-9 mm),且檢視虛擬影像260時所得的側向清晰度將允許MHUD總成210凹面鏡230之間的可達到高達1mm寬度的難識別的小間隙。其次,大致0.5度之眼角度差值適應限制將允許MHUD總成210凹面鏡230之間的可達到大致0.15度的小角度傾角。此等傾角及間隙容許度給出用於MHUD總成210凹面鏡230之非常寬鬆之機械對準容限需求,且因此允許用於MHUD總成210之非常經濟的製造及裝配方法。可通常在軟體中容易地適應更進一步傾角及/或對準需求。 圖8說明本發明之MHUD系統200的新穎的分裂眼框設計方法。圖8之說明意圖展示集合眼框250與MHUD系統200之虛擬影像260之間的關係。圖8亦說明一實例物件810,即藉由MHUD系統200顯示的展示於虛擬影像260上之箭頭。在MHUD系統200之設計中,眼框區段255中之每一者通常將定位於其各別模組215的出射瞳孔處。因此,眼框區段255中之每一者內之呈現至檢視者之眼的影像資訊將在角度空間之中。由此,在眼框區段255中之每一者內分開地呈現至檢視者的虛擬影像260箭頭物件810將通常在檢視者之頭部位於各別眼框區段255中心區域內時對檢視者完全可見,但當檢視者之頭部移動至眼框區段255之右側或左側時,虛擬影像260之箭頭物件810的端部或尾部將相應地逐漸漸暈(或淡化)。在MHUD系統200之設計中,當模組215一起整合至圖6之視角說明中展示的MHUD總成210中時,模組215之眼框區段255將變得如圖8中所說明一般重疊,以產生MHUD系統200之集合眼框250。由此,MHUD系統200之集合眼框250係由形成大量模組215之眼框區段255的出射瞳孔區域之重疊形成,由此使得在集合眼框250內呈現至檢視者之眼的影像資訊為跨越組合之MHUD模組215視野延伸的虛擬影像260的成角度多工視圖。如圖8中所說明,虛擬影像260之箭頭物件810在限定MHUD系統200的集合眼框250的眼框區段255的重疊區域內變得完全可見(或可檢視),而虛擬影像260之箭頭物件810在檢視者之頭部移動至集合眼框250周邊區域之各別右側或左側時,逐漸漸暈(或淡化)。 模組215之眼框區段255之間的重疊大小取決於其角度漸暈曲線(圖8中之820),且判定MHUD系統200之集合眼框250之極限大小。後者被定義為集合眼框250區域邊界或尺寸,在其中虛擬影像260在所需亮度均一性處完全可見(或可檢視)。圖8亦說明跨越模組215之重疊眼框區段255的整體區域的MHUD總成210的所得角度漸暈曲線屏蔽。如圖8中所說明,檢視者感知之虛擬影像260亮度包括分別來自模組215中之每一者之亮度比重

[原文此處為三個方程式影像,分別表示來自各模組215之亮度比重項]
(左側、中心及右側)。用於限定集合眼框250之邊界的規則係眼框區段255之重疊的區域A ,在其中虛擬影像260亮度在跨越所選區的給定臨限值λ 內(舉例而言,其小於25%)為均一的;亦即,
[原文此處為方程式影像,表示亮度均一性條件]
(所需均一性臨限值)。藉由用於限定集合眼框250之邊界與圖8中所說明的模組215的眼框區段255的重疊之此規則,跨越虛擬影像260感知之亮度包括來自模組215中之一者之至少50%的比重。此意謂當集合眼框250邊界內之任何位置由所陳述之規則限定時,模組215中之每一者貢獻虛擬影像260的感知亮度之至少50%。藉由MHUD系統200之此設計方法,虛擬影像260之所需亮度均一性成為限定集合眼框250之大小的規則。此設計規則說明於圖8之設計實例中,其使用均一性臨限值λ =25%以產生120 mm寬之集合眼框250。如圖8之說明中所展示,當使用均一性臨限值λ =37.5%時,定義寬出大致25%之集合眼框250,其大致量測為150 mm。 如圖8中所說明,在延伸超出MHUD系統200之集合眼框250的右側及左側的眼框區段區域中,隨檢視者之頭部移入此等區中,虛擬影像之箭頭物件810分別逐漸漸暈或褪色。藉由MHUD系統200之設計方法,添加模組215至圖6中所說明的MHUD總成210之右側抑或左側將相應地延伸如藉由先前定義之設計規則所定義之MHUD系統200的集合眼框250的側向之寬度至右側或左側,其中虛擬影像260之箭頭物件810將以所需亮度均一性變得完全可見。當另一列模組215添加至MHUD總成210中時,延伸集合眼框250高度之類似效應發生在正交方向。由此,藉由本發明之MHUD系統200之此模組化設計方法,可藉由添加更多模組215至MHUD總成210中來實現具任何設計選擇寬度及高度尺寸的任何任意大小的集合眼框250。 基本上,本發明之MHUD系統200的分裂出射瞳孔模組化設計方法使得能夠使用大量成像器220及凹面鏡230,其各自具有相對較小光圈且各自達成較短光學跡線長度,以代替用於先前技術HUD系統中之較大影像源及單個鏡之長得多的光學長度。由此,MHUD模組215之較小光圈成像器220及凹面鏡230相較於可藉由使用較大單個影像源及單個鏡以達成相同大小眼框的先前技術HUD系統達成之態樣,共同地允許大體上較小的體積態樣。此外,MHUD系統200所達成之集合眼框250之大小可藉由使用恰當數目之模組215基本設計元件來加以裁適。反之,可使MHUD系統200之體積態樣匹配在載具儀錶盤區域中可用的體積,同時相較於可藉由可適配於相同可用體積中之先前技術HUD系統達成的集合眼框達成較大大小之集合眼框250。 為說明本發明之MHUD系統200的體積優勢,圖6之透視圖展示MHUD總成210之設計尺寸,其使用三個各自具6.4×3.6 mm之光學光圈大小的成像器220及三個各自具60×100 mm之光學光圈大小的凹面鏡,以達成基於λ =25%之亮度均一性臨限值的120×60 mm集合眼框250大小。基於圖6中展示之設計尺寸,MHUD總成210之總體積將大致為1350 cc (1.35公升)。出於比較目的,使用單個較大光圈鏡及單個較大影像源以達成相同眼框大小的先前技術HUD系統的總體積將超過5000cc (5公升)。由此,本發明之MHUD系統200的設計方法將允許相較於先前技術HUD系統在體積上節約(或更小)3.7倍之HUD系統。為形象化此體積優勢,圖9說明圖6中所說明的安裝於微型汽車之儀錶盤中之MHUD總成210設計實例的體積。如圖9中所說明,本發明之MHUD系統200的體積節約設計允許在具先前技術HUD系統將根本無法適配之極受限儀錶盤體積之汽車中添加HUD能力。 圖10說明MHUD系統200之光線路徑。如圖10中所說明,及先前圖2中所說明及解釋,包含MHUD總成210之三個成像器220將以用於三個影像之相同解析度(例如640×360像素)各自產生相同影像,且在由其三個各別凹面鏡230反射之後,將成角度地定址於先前所描述之設計實例之整個120×60 mm集合眼框250,且將共同地跨越先前所描述之設計實例之125×225 mm虛擬影像260提供640×360空間解析率。 圖10說明在虛擬影像260處產生10,000 cd/m2 亮度之設計需求。藉由典型擋風玻璃大致20%之反射率及先前解釋之集合眼框250邊界限定規則,三個成像器220中之每一者將產生大致25,000 cd/m2 亮度。保守估計,MHUD總成210之三個成像器220加控制及介面電子裝置620將共同地消耗大致2 W以產生25,000 cd/m2 亮度,其大致為先前技術HUD系統功率消耗之25%。 參考圖5中所說明的MHUD系統200效能,圖5之環繞能量曲線展示來自凹面鏡230的180微米大小之光學光圈之準直光束的幾何模糊半徑。藉由具有72 mm有效焦距之圖6中所說明的模組215設計實例中之每一者,對於源自成像器220之像素且藉由其各別凹面鏡230準直之光束,圖5之環繞能量曲線中指示的180微米模糊大小向模組215中之每一者給出0.143度之角展度。遊動效應與跨越來自像素之完整射束寬度的0.143度角展度相關聯,而解析度(MTF)由眼瞳孔大小取樣之有效射束寬度決定。圖5之MTF曲線展示模組215中之每一者之MTF,其係針對4 mm直徑之典型眼瞳孔光圈計算。此角展度角度愈小,虛擬影像260處之遊動半徑愈小。對於距MHUD系統200之集合眼框250 2.5m處檢視之虛擬影像260,用於MHUD系統200設計實例之各別遊動半徑將為6.2 mm。使用單個鏡及具有等於MHUD總成210設計實例之完整光圈大小的光學光圈大小的先前技術HUD系統將具有大致為模組215之光學光圈2.4倍大之光學光圈。由於像差模糊大小與光圈大小之三次冪成正比(參考[18]),若5階像差恰好補償較大3階像差(其無法藉由設計有目的地實現),則具有等於MHUD總成210設計實例之完整光圈大小的光學光圈大小的先前技術單鏡HUD系統將具有大致14.3 mm之對應遊動半徑,否則先前技術單個鏡HUD系統將通常具有大致39.7 mm之對應遊動半徑,其為MHUD系統200之設計實例所達成的遊動半徑的6.2倍大。亦應提及,藉由先前所描述之像差預補償方法,MHUD系統200遊動半徑可大體上減小至低於此設計實例之陳述值或甚至得以完全消除。 圖10亦說明包括日光負載之本發明之MHUD系統200的光線路徑。如圖10中所說明,直射載具之擋風玻璃之日光的逆向光學路徑將到達集合眼框250區域,可能導致虛擬影像260中之眩光。在本發明之MHUD系統200之設計中,相比於先前技術HUD系統,能夠到達集合眼框250之日光之量少得多。首先,假定擋風玻璃240之光學透射率為80%,來自日光之光線藉由擋風玻璃240衰減至其亮度之至多80%。其次,在其被朝向凹面鏡230總成反射回之前,透過擋風玻璃240且由朝向其對應成像器220之凹面鏡230中之一者反射之日光光線將進一步由成像器220之光學光圈上之抗反射(AR)塗層衰減至其亮度之至多5%。第三,當其由朝向集合眼框250之擋風玻璃240反射時,此逆向路徑日光將接著進一步衰減至其亮度之至多20%。如先前解釋,由於模組215中之每一者之成像器220及凹面鏡230貢獻虛擬影像260亮度之至多50%,因此自被日光擊中的模組215反射之日光眩光將表現為在虛擬影像260處進一步衰減50%。 因此,基於此路徑衰減分析,將到達集合眼框250之日光將衰減至其亮度的至多0.4% (較1%小得多)。在MHUD系統200能夠在虛擬影像260處產生超過10,000 cd/m2 亮度及0.4%日光眩光的情況下,MHUD系統200可耐受超過250,000 cd/m2 之日光亮度,其等效於大致28 dB之通用眩光等級(UGR)(或眩光對影像亮度比率)。值得提及,玻璃蓋板430將吸收紅外光,但對於用於本發明的抬頭顯示器中之波長透光,以阻止日光負載熱量被凹面鏡230總成集中回至成像器220。 在上文所述之實施例中,多個模組並排安置以提供重疊眼框區段,以相較於眼框區段255本身提供更寬集合眼框250。然而,必要時、替代地或另外地,模組可安置為使得模組215之眼框區段亦堆疊以提供更高之集合眼框250,再次,全部模組在載具前面相同位置處顯示相同虛擬影像。應注意,堆疊以提供較高集合眼框250通常並非堆疊模組,而是由於典型擋風玻璃之斜率,眼框區段之堆疊可藉由簡單地使用儀錶盤之用於額外模組之較大、大體上水平區域實現。 此外,儘管先前陳述為, 
「如圖2中所說明,自具相關聯的光學裝置的各單個成像器220發射之影像由其相關聯的凹面鏡230準直、放大及反射,接著部分地自載具擋風玻璃240反射以形成虛擬影像260,其可在載具之駕駛員(操作員)之標稱頭部位置處的眼框區段255內檢視」 但在任何實施例中,藉由凹面鏡達成之準直程度將必然低於完美值,且可能有意地經設定以限制虛擬影像應形成於載具前多遠處。在一些情況下,凹面鏡可實際上故意地設計為使準直變形以偏移之後的任何像差源(擋風玻璃之曲率(若存在),其為最顯而易見之實例)。 先前指示,離軸變形及傾斜像差以及顏色及亮度校正可在圖2之MHUD總成210 (圖6亦可見)之控制及介面電子裝置620中進行。當然,來自各模組215之各影像或影像區段的側向位置校正亦可在控制及介面電子裝置620中進行(或以機械方式),使得不顯示雙重影像或雙重影像部分。另外,應注意,「亮度校正」具有至少兩個主要態樣。首先及最顯而易見的係亮度變化之模組至模組校正,使得來自不同模組之影像亮度(及顏色)應不為不同。然而,與其相關聯的係影像捲曲及其他因素可能造成在單獨模組內之影像部分的亮度改變的事實,此係因為歸因於捲曲,像素間距改變可能引起可見亮度像差。若遭遇此狀況,則由於各模組中之各個別像素之亮度可個別地控制,因此在必要時可局部增大像素間距增大之區域中之像素亮度,及減小像素間距減小之區域中之像素亮度。最後,應注意,典型固態發光像素陣列成像器並非方形成像器,而是通常係具有不等尺寸之矩形。因此,成像器定向之選擇亦可提供可適用於本發明的抬頭顯示器之設計的額外變量。 下文表1呈現基於本發明之某些實施例之MHUD系統200的成像器的突出效能特性,其說明其相比於使用單個較大鏡及單個較大影像源之先前技術HUD系統的效能優勢。如表1中所展示,本發明之分裂出射瞳孔MHUD系統在每一效能類別上勝過先前技術HUD系統多倍。另外,由於其先前解釋的寬鬆製造容限及較小大小之鏡,本發明之MHUD系統200預期相較於具類似的眼框大小之先前技術具有高得多的成本效益。
[表1原文為影像,內容為MHUD系統200與先前技術HUD系統之效能比較表]
* 基於使用高亮度LCD面板作為影像源之先前技術HUD 1 : 效能比較 具近場及遠場虛擬影像之多影像抬頭顯示器系統 在諸多HUD系統應用中,合乎需要的是HUD系統顯示多個虛擬影像至檢視者,較佳地直接在檢視者前面,以免自駕駛分散檢視者的注意力,而同時提供額外資訊之安全可檢視性。在此情況下,多個虛擬影像可藉由HUD系統顯示,其中,舉例而言,第一虛擬影像顯示於習知HUD系統中通常採用之遠場距離處,且第二虛擬影像顯示於近場距離處。較佳地,兩個虛擬影像皆對HUD系統檢視者可檢視,而無需檢視者將他或她的頭部自道路轉離,且准許駕駛員繼續集中注意力於駕駛狀況。 在本揭示內容之發明的一個替代性較佳實施例中,如圖2中所說明,先前所描述之分裂出射瞳孔設計架構可與大量顯示元件220 (即,成像器及相關聯的光學裝置220)結合使用,藉此各顯示元件220經組態以便以不同輸出角度調變多個影像。 在本發明之多影像抬頭顯示系統之一態樣中,該系統可包含大量模組215,其各自具有固態發光像素陣列成像器(即,顯示元件) 220及凹面鏡230,該凹面鏡經組態以準直、放大由固態發光像素陣列成像器220產生的第一影像及第二影像且將該第一影像及第二影像朝向載具擋風玻璃反射以形成在眼框區段內可檢視的第一虛擬影像及第二虛擬影像。大量模組經安置以使得眼框區段255組合以提供抬頭顯示器為具有大於各模組215之眼框區段255的集合眼框250,且使得集合眼框250位於載具之駕駛員之標稱頭部位置處。在本發明之多影像抬頭顯示器系統實施例之第一態樣中,固態發光像素陣列成像器220包含與各別第一組微光學元件相關聯的第一組像素及與各別第二組微光學元件相關的第二組像素。該第一組微光學元件經組態以引導自該各別第一組像素之輸出產生上文所述之第一影像,藉此產生第一虛擬影像,其可在自該集合眼框250之第一距離處檢視。該第二組微光學元件經組態以引導自該各別第二組像素之輸出產生上文所述之第二影像,藉此產生第二虛擬影像,其可在自該集合眼框250之第二距離處檢視。微光學元件可包括非遠心透鏡或非遠心光學元件,其經組態以致能相對於固態發光像素陣列成像器220表面之大體上傾斜的像素輸出。 在本發明之多影像抬頭顯示器系統實施例之第一態樣中,第一距離可為遠場距離,且第二距離可為近場距離。第一組像素可為固態發光像素陣列成像器220的使用者定義之第一組像素,且第二組像素可為固態發光像素陣列成像器220的使用者定義之第二組像素。第一組像素可為固態發光像素陣列成像器220之像素的奇數編號列,且第二組像素可為固態發光像素陣列成像器220之偶數編號列。第一組像素可為固態發光像素陣列成像器220之偶數編號列,且第二組像素可為固態發光像素陣列成像器220之奇數編號列。該第一組像素可為包含固態發光像素陣列成像器22之像素區域至少50%之像素,且第二組像素可為固態發光像素陣列成像器220的其餘像素區域之其餘部分。第一組像素可為固態發光像素陣列成像器220之上部區或部分,且第二組像素可為固態發光像素陣列成像器220之下部區或部分。 圖11A至B及圖11C至D說明此等多個成像光調變顯示元件220之非限制性實例,其藉由預先判定的若干組個別像素經組態,諸如顯示元件220上之2D像素陣列中之預先判定的若干組像素列或像素行,其各自個別地併有以預先判定的唯一方向導向或定向調變自各別像素發射之光微光學元件。 圖11A至B及圖11C至D說明以下實例,其中多圖像顯示元件220經設計以同時調變兩個影像,其中各第一影像及第二影像自顯示元件220表面以不同方向發射。當此種顯示元件220用於圖2之分裂出射瞳孔HUD設計架構之上下文時,經調變第一影像(上文所述)產生可自HUD系統眼框250之遠場距離(例如,大致2.5 m)檢視之第一虛擬影像,而經調變之第二影像產生可在近場距離(例如,大致0.5 m)處檢視之第二虛擬影像。此等兩個可檢視虛擬影像可同時藉由多影像分裂出射瞳孔HUD系統調變,且HUD系統檢視者可簡單地藉由以與藉由分裂出射瞳孔HUD系統之多個顯示元件220調變的兩個虛擬影像的調變方向之角度傾角(或間距)成比例之角度重定向他/她的縱軸平面中之視線來選擇性地檢視第一虛擬影像或第二虛擬影像中之任一者。 圖11A及圖11B說明本發明的一個實施例中的顯示元件220之頂部視圖及側視圖,其中分裂出射瞳孔HUD系統之大量顯示元件220經組態以藉由將其光學光圈分割成兩組顯示像素(例如顯示像素之奇數編號及偶數編號列)來調變第一影像及第二影像,其中一個群組像素(像素之奇數編號列)調變第一影像,而第二群組像素(像素之偶數編號列)調變第二影像。HUD系統顯示元件220之此種定向調變能力可藉由設計與影像調變像素組中之每一者相關聯之微光學元件或微透鏡元件,以定向地調變自其相關聯像素以預先判定的影像方向發射之光來實現。舉例而言,在圖11A及圖11B中說明的狀況下,與像素之奇數編號列相關聯之微光學元件引導自相關聯像素群發射之光以形成第一影像,而與像素之偶數編號列相關聯之微光學元件引導自像素群發射之光以形成第二影像。應注意,儘管光線被說明為對於圖11A至圖11D中之各影像平行,但實際上其應大體上自成像器220扇出以視需要擴展或放大影像大小。可藉由如下文更詳細地論述之非遠心QPI®成像器之形式使用非遠心微光學透鏡元件來允許該像素發射角度。 應注意,利用先前所描述之單影像分裂出射瞳孔HUD設計架構,大量成像器220調變各自在不同方向上的相同的兩個影像,以在分裂出射瞳孔HUD系統之集合眼框250中呈現兩個虛擬影像,其中經調變之兩個所得虛擬影像中之每一者可跨越集合眼框250檢視,但處於不同豎直(或方位角)方向。 在圖11C及圖11D中說明的另一多影像HUD系統的較佳實施例中,分裂出射瞳孔多影像HUD系統之大量顯示元件220各自具有光學光圈,其分成兩個區或區域之像素,即,在所說明之實例中,上部區像素及下部區像素。在此實施例中,以不同方向調變之兩個影像各自藉由單個專用像素區調變。舉例而言,如圖11C及圖11D中所說明,該等顯示元件220光學光圈上部區(其可為成像器像素設定之任何使用者定義之部分)像素之微光學元件經設計以引導自顯示元件220的包含像素上部區之像素中之每一者發射之光以形成如上所定義的第一影像,而顯示元件220光學光圈下部區之像素之微光學元件經設計以引導自顯示元件220的包含像素下部區之像素中之每一者發射之光以形成如上所定義之第二影像。像素發射角度可藉由使用以下文更詳細地論述之非遠心成像器220的形式的非遠心微光學元件來提供。 圖12說明本發明之多影像分裂出射瞳孔HUD系統的一較佳實施例。如圖12中所說明,大量顯示元件(或成像器) 220可各自調變兩個虛擬影像,第一影像在向上方向上調變,而第二影像在向下方向上調變。 大量顯示元件220同時調變第一影像及第二影像兩者以成角度地填充如圖8中所說明之多影像分裂出射瞳孔HUD系統眼框250。在經凹面鏡230準直且經擋風玻璃反射至眼框250上之後,由大量顯示元件220調變(產生)之包含第一影像及第二影像兩者的經準直光線集束可在眼框250內之兩個不同傾角處檢視,以允許多影像分裂出射瞳孔HUD系統檢視者集中於兩個獨立及同時調變之虛擬影像,其中第一虛擬影像可在遠場260-1處檢視且第二虛擬影像可在近場260-2處檢視,該兩個虛擬影像在豎直(方位角)方向上藉由經大量顯示元件220調變之兩個影像之間的與定向間距角度220-4成比例之角度220-3在角度上分離。 由於其光線集束以不同水準(至不同程度)準直,因此該兩個虛擬影像在第一虛擬距離與第二虛擬距離處係不同的。凹面鏡230準直經設計以達成距眼框250之遠場虛擬影像距離。下文作為具體實施例之特定實例論述之非遠心QPI®成像器的微光學元件經設計以對自與非遠心QPI®元件相關聯之各別像素發射之光引入額外準直。藉由非遠心微光學元件與凹面鏡230合作達成之組合準直由此達成自眼框250之遠場及近場虛擬影像距離,以使得多影像HUD能夠同時顯示遠場及近場虛擬影像兩者。 如圖13中所說明,多影像分裂出射瞳孔HUD系統檢視者可僅藉由以角度220-3 (圖12亦可見)重定向他/她在豎直(方位角)方向的視線來檢視(或注意)藉由HUD系統調變之第一虛擬影像或第二虛擬影像兩者中之任一者。由於該兩個虛擬影像係藉由兩個包含該等顯示元件(成像器)220之單獨組像素獨立及分開地調變,因此顯示至檢視者之第一影像及第二影像中之每一者可包含檢視者可能關注之不同資訊。 
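As a rough illustration of the pixel-group multiplexing described above (for the variant that splits the imager into odd- and even-numbered pixel rows), the sketch below packs a far-field image and a near-field image into a single imager frame. The row-parity assignment, the half-vertical-resolution assumption, and the NumPy framing are illustrative assumptions only; the patent itself specifies just that the two pixel groups are independently addressed and that per-pixel micro-optics steer each group toward its own virtual image direction.

```python
import numpy as np

def pack_dual_image_frame(far_img: np.ndarray, near_img: np.ndarray) -> np.ndarray:
    """Pack two source images into one imager frame: odd-numbered pixel rows
    carry the far-field image and even-numbered rows carry the near-field
    image, so each source is shown at half the imager's vertical resolution.
    far_img, near_img: (H/2, W) or (H/2, W, C) arrays of equal shape."""
    if far_img.shape != near_img.shape:
        raise ValueError("both source images must have the same shape")
    half_rows = far_img.shape[0]
    frame = np.empty((2 * half_rows,) + far_img.shape[1:], dtype=far_img.dtype)
    frame[0::2] = near_img   # even-numbered rows -> near-field virtual image
    frame[1::2] = far_img    # odd-numbered rows  -> far-field virtual image
    return frame

# Example: a 640x360 imager driven by two 640x180 image channels.
far = np.zeros((180, 640), dtype=np.uint8)
near = np.full((180, 640), 255, dtype=np.uint8)
assert pack_dual_image_frame(far, near).shape == (360, 640)
```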
圖13亦說明藉由多影像分裂出射瞳孔HUD系統調變之兩個虛擬的第一影像及第二影像之標稱位置,其中在非限制性所說明實例中,遠場虛擬影像可由檢視者在大致2.5 m距離處聚焦(大致在載具之前部引擎蓋結尾),而近場虛擬影像可由檢視者在大致0.5 m距離(大致在載具之擋風玻璃的外部下邊緣處)聚焦。 應注意,所描述之HUD多影像能力有益地不產生圖6中所概述之多影像分裂出射瞳孔HUD系統體積態樣之增大。如圖7中所說明,顯示器元件(成像器) 220之介面710、控制函式720及均一性迴路730亦保持無變化。 多影像分裂出射瞳孔HUD系統相較於所描述之單個影像分裂出射瞳孔HUD系統之實施及設計方法的主要差異係: 1. 大量顯示元件(成像器) 220具有在與描述於先前實施例中者不同的方向上調變多個影像之能力, 2. 多影像分裂出射瞳孔HUD系統之豎直視野(FOV)在角度上分裂成兩個定向區,以使得能夠同時調變兩個在角度上分離之影像;及; 3. 至大量顯示器元件(成像器)220之影像輸入715由兩個影像組成,其各自(數位地)定址至先前實施例中所描述之對應像素群。 圖14說明上文參考之非遠心QPI®成像器的例示性實體化,其中非遠心微光學元件1250-1可實現為折射光學元件(ROE)且用於以相對於顯示元件220表面大體上傾斜的角度引導所選像素光輸出以提供近場虛擬影像。 在圖14之此實施例中,像素層級折射非遠心微光學元件1250-1定向調變態樣可使用由具有不同折射率之介電材料1310及1320之連續層形成之去中心微透鏡1250-1實現。圖14為包含複數個非遠心折射微光學元件1250-1之顯示元件220之示意性橫截面。在此實施例中,像素層級非遠心微光學元件1250-1之陣列可使用半導體光微影、蝕刻及沈積技術,以晶圓級單片地製造為多個半導體介電材料層,諸如對於低折射率層1310使用氧化矽且對於高折射率層1320使用氮化矽。如圖14中所說明,藉由使用具不同折射率之多層介電材料1310及1320連續(依次)沈積以形成像素層級微光學元件1250-1之折射表面,實現陣列像素層級微光學元件1250-1,其視需要在跨越微透鏡陣列之折射微透鏡元件中心位置中漸進地改變,以獲得所需非遠心特性及影像投射方向。 圖15說明上文參考之非遠心QPI®成像器之替代例示性實體化,其中非遠心微光學元件1250-2實現為傾斜折射光學元件(ROE),再次,其視需要跨越微透鏡陣列漸進地變化以獲得所需非遠心特性及影像投射方向,且可用於以相對於成像器220表面大體上傾斜的角度引導所選像素光輸出,以提供近場或第二影像。在此實施例中,像素層級折射非遠心微光學元件1250-2定向調變態樣係使用由具不同折射率之介電材料1410及1420的連續層形成之傾斜微透鏡1250-2實現。 圖15為本發明之顯示元件220之側視圖,其包含複數個傾斜之折射微光學元件1250-2。在此實施例中,像素層級非遠心微光學元件1250-2之陣列可使用半導體光微影、蝕刻及沈積技術,以晶圓級單片地製造為多個半導體介電材料層,諸如針對低折射率層1410使用氧化矽且針對高折射率層1420使用氮化矽。如圖15中所說明,可使用具不同折射率之多層介電材料1410及1420連續(依次)沈積以形成非遠心微光學元件1250-2之像素層級的折射表面來實現非遠心微光學元件1250-2之陣列。 因此,本發明具有數個態樣,該等態樣可按需要單獨實踐或以各種組合或子組合實踐。儘管已出於說明的目的而非出於限制的目的揭示及描述本發明之某些較佳實施例,但熟習此項技術者應理解,可在其中作出形式及細節之各種改變而不背離如以下申請專利範圍之全部範圍所定義的本發明之精神及範疇。Reference in the following detailed description of the present invention to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. The appearances of the phrase "in one embodiment" in various places in this embodiment are not necessarily all referring to the same embodiment. A new class of emissive microscale pixel array imager devices has recently been introduced. These devices are characterized by high brightness, very fast polychromatic light intensity and spatial modulation capability in a very small single device size including all required image processing driver circuits. The solid-state light emitting (SSL) pixels of one of these devices can be either light emitting diodes (LEDs) or laser diodes (LDs), the on-off states of which are determined by the inclusion of a CMOS chip (or device). ) is controlled by a driving circuit in which the light-emitting micro-scale pixel array of the imager is connected to the driving circuit. The size of the pixels of the light emitting arrays comprising these imager devices should typically be in the range of approximately 5 to 20 microns, and the typical light emitting surface area of the devices is in the range of approximately 15 to 150 square millimeters. Pixels within an emissive microscale pixel array device can be individually addressable spatially, chromatically, and temporally, typically via the driver circuits of its CMOS chip. The brightness of the light produced by such imager devices can be well over 100,000 cd/m 2 with low power consumption. An example of such a device is a QPI® imager (see references [1-7]), which are referenced in the exemplary embodiments described below. It should be understood, however, that a QPI® imager is only one example of one type of device that may be used in the present invention. ("QPI" is a registered trademark of Ostendo Technologies). 
Thus, in the following description, any reference to a QPI® imager should be understood to be specific in disclosing an embodiment as one specific example of a solid state light emitting pixel array imager (hereinafter referred to as an "imager") that may be used for the purpose of sexuality and not for any limitation of the present invention. The present invention combines the unique capabilities of the light-emitting micro-pixel array devices of these imagers with a novel split exit pupil HUD system architecture to achieve a low-cost and small-volume Modular HUD (MHUD) system that can be easily used for cost and In applications where volume constraints are critical, such as, for example, automotive HUDs. The combination of the light-emitting high-brightness micro-emitter pixel array of an imager such as the QPI® imager and the split exit pupil HUD architecture of the present invention allows efficient operation in high-brightness ambient sunlight, yet is small enough in size to fit in wide Range of vehicle sizes and types of dashboards or behind dashboards. As used herein, the word "vehicle" is used in the most general sense and includes any means in or by which a person travels, including but not limited to travel on land, water, underwater, and air. The low cost and modularity of the split exit pupil HUD architecture enabled by these imagers allows modular HUD systems to be tailored to fit a wide range of vehicle volume constraints. The advantages of a split exit pupil HUD system should become more apparent from the detailed description provided in the context of the embodiments described in the following paragraphs. FIG. 2 illustrates a design concept of a modular HUD (MHUD) system 200 according to one embodiment of the present invention. As illustrated in FIG. 2, in the preferred embodiment, the MHUD system 200 of the present invention consists of a MHUD assembly 210, which in turn consists of a plurality of modules 215 assembled together to form the MHUD 210, whereby Each module 215 consists of a single imager 220 and a concave mirror 230 with associated optics. As illustrated in FIG. 2, images emitted from each individual imager 220 with associated optics are collimated, magnified, and reflected by its associated concave mirror 230, and then partially reflected from the vehicle windshield 240 to form A virtual image 260 that can be viewed within the eye frame section 255 at the nominal head position of the driver (operator) of the vehicle. As illustrated in FIG. 2, each of the modules 215 of the MHUD assembly 210 are positioned to form the same virtual image 260 at any one time and at the same location from the vehicle windshield 240, but each in its own The eye frame section 255 allows the plurality of modules 215 of the MHUD assembly 210 to collectively form the collective eye frame 250 of the MHUD system 200 . That is, the virtual image 260 is partially viewable at each of the eye frame sections 255 , but is fully viewable in the collective eye frame 250 . Thus, the overall size of the eye frame section 255 of the MHUD system 200 can be tailored by selecting an appropriate number of modules 215 comprising the MHUD assembly 210, where the eye frame section and the number of modules are user-definable . 
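As a toy numerical sketch of this eye-box tailoring relationship (not part of the patent's disclosure, which defines the collective eye-box boundary via the angular vignetting curves and a brightness-uniformity threshold described later), the following assumes that each module contributes an eye-box segment of fixed width and that adjacent segments overlap by a fixed amount:

```python
def collective_eyebox_width_mm(n_modules: int, segment_width_mm: float,
                               overlap_mm: float) -> float:
    """First-order estimate of the collective eye-box width formed by
    n_modules side-by-side eye-box segments of width segment_width_mm,
    where adjacent segments overlap by overlap_mm."""
    if n_modules < 1:
        raise ValueError("at least one module is required")
    return n_modules * segment_width_mm - (n_modules - 1) * overlap_mm

# Hypothetical segment and overlap values, for illustration only:
for n in (1, 2, 3, 4):
    print(n, collective_eyebox_width_mm(n, segment_width_mm=60.0, overlap_mm=20.0))
# Each added module extends the collective eye-box by (60 - 20) = 40 mm.
```

Under this simplistic model, the collective eye-box grows linearly with the number of modules, which mirrors the scaling behavior described in the remainder of the disclosure.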
Although each of the modules 215 of the MHUD assembly 210 is positioned to form the same virtual image 260 at any one time, their images will, of course, change over time, and may change slowly, such as, for example, a change in the fuel gauge image, or More rapid changes are possible, such as changes in the display of GPS navigation system display images, however the MHUD system 200 of the present invention can operate at frequencies at least up to typical video rates when image data is available at such rates. In the preferred embodiment of the MHUD system 200, the eye frame sections 255 of the modules 215 of the MHUD assembly 210 are each located at the exit pupil of the light beam reflected by its corresponding concave mirror 230. The collective eye frame 250 of the MHUD system 200 is actually a split exit pupil eye frame formed by the overlap of the eye frame sections 255 of the modules 215 of the MHUD assembly 210 . This split exit pupil design method of the MHUD system 200 of the present invention is further elaborated in the following paragraphs. In the preferred embodiment of the MHUD system 200 of the present invention, the MHUD assembly 210 consists of a plurality of modules 215 assembled together to form the MHUD assembly 210, whereby each module 215 consists of an imager such as a QPI® imager Or other suitable light emitting structures such as OLED devices with associated optical devices 220 and concave mirrors 230 . The design method of the MHUD assembly 210 of the MHUD system 200 of this embodiment of the present invention and its respective modules 215 are described in more detail in the following paragraphs, preceded by related advantages and related design parameters of the MHUD system 200 of the present invention Explanation of trade-offs. MHUD System 200 Optical Design Parameter Tradeoffs In order to understand the advantages of the MHUD system 200 of the present invention, it is believed that basic design trade-offs that explain the relationship between a typical HUD system and its associated design parameters are important. The virtual imagery produced by the HUD system is often overlaid on the natural scene to enable the viewer operating the vehicle to visually perceive vehicle operating parameters and provide critical information, such as, for example, navigation information without the driver having to Or her eyes and attention are taken away from the road or the environment outside the vehicle. Important parameters to be considered in the design of the HUD system include: the target size of the assembled eye frame, the required field of view (FOV), the size of the virtual image formed, the resolution of the virtual image, and system volume constraints. The relationship between these design parameters and constraints is illustrated in FIG. 3 . How the Modular HUD ( MHUD ) of the Invention Achieves Volume Reduction - Referring to Figure 3, the reduced size of the imager 220 of the MHUD system 200 results in a smaller effective focal length (EFL), which is the characteristic optical trace length of the system, And generally contribute to reducing system size. However, if the eye frame size is maintained, the reduction in the imager aperture size results in a reduced system F/# with an accompanying increase in optical complexity. This generally results in a larger system volume. Referring to the MHUD system 200 design concept illustrated in FIG. 
2, the size of the eye frame section 255 for each module 215 is scaled along with the imager 220 size to avoid optical complexity additions. This results in the volume of each of the modules 215 being scaled by the size of the imager 220 . A number of modules 215 are combined to form a MHUD assembly 210, which can provide a collective eye frame 250 of any size. This novel multi-segment eye frame design concept of the MHUD system 200 of the present invention is accomplished by splitting the exit pupil of the system formed at the viewer's eye frame into individual and collective eye frames comprising the MHUD system 200 of the present invention Multiple sections corresponding to one of the eye frame sections 255 of 250 are implemented. This split exit pupil design method of the MHUD system 200 of the present invention thus achieves a smaller overall volume aspect compared to prior art HUD systems that provide eye frames of the same size. This desirably results in a reduction in overall HUD size, complexity and cost. Additional advantages of the disclosed split exit pupil design method of the MHUD system 200 of the present invention are described in the following discussion. Of course, each module emits the same image at any one time, so the vehicle operator will see the same virtual image at the same location, independent of which eye frame segment(s) 255 the operator views. The main contributor to the volume of prior art HUD systems using specular reflector references [8-10] has been identified as the concave mirror. In addition to the larger size of the mirror itself, the size of the image source is also proportionally larger, which necessitates the use of larger size imagers, such as LCD panels, or the formation of larger size intermediate images projected onto a diffuse screen, which is Incorporating the projector imager and its associated projection optics adds even more volume. As explained in the foregoing discussion, the MHUD system 200 of the present invention utilizes an MHUD assembly consisting of a plurality of modules 215 of smaller size concave mirrors 230 each used assembled together to form the integral reflector 235 of the MHUD assembly 210 210, which achieves a substantially smaller volume aspect, which is much smaller in size and achieves a much smaller optical trace length, than prior art HUD systems that use a single concave mirror as the primary reflector. The MHUD assembly 210 using a smaller aperture size imager 220 enables the use of a smaller aperture size concave mirror 230 with a smaller optical trace length, which results in a substantially smaller and volume-specific MHUD system 200 of the present invention . The design of the MHUD system 200 of the present invention functions by dividing a larger collimated beam, typically produced by a single larger mirror, into three collimated sub-beams of equal size in the exemplary embodiment. Each sub-beam is generated by the optical subsystem of module 215 . Consequently, F#, optical complexity and focal length (EFL) (or optical trace length) are reduced, and thus the physical volume envelope of the system is reduced. FIG. 4 illustrates an optical design aspect and light trace diagram of the module 215 including the MHUD assembly 210 . As illustrated in FIG. 4 , the module 215 of the preferred embodiment consists of an imager with its associated optics 220 and concave mirror 230 . Although in the embodiment illustrated in FIG. 
4, the optics 420 associated with the imager 410 are shown as separate lens optics, in alternative embodiments of the present invention, the imager-associated optics 420 may be attached directly to Attached to the top of the light emitting surface of imager 410 to form integrated imager assembly 220 . As illustrated in FIG. 4, in each of the modules 215, a reflective concave mirror 230 magnifies and collimates the image produced by its respective imager (or other imager) 220 to form one eye of the collective eye frame 250 Frame section 255, while optical element 420 associated with imager 410 in FIG. 4 balances off-axis deformation and tilt aberration caused by the reflective concave mirrors 230. FIG. 5 illustrates the optical performance of the module 215 of the MHUD assembly 210. As illustrated in FIG. 5, the function of the optical element 420 associated with the imager 410 is to balance off-axis deformation and tilt aberration caused by the reflective concave mirror 230 to minimize image wandering effects while maintaining the modulation transfer function (MTF) at a sufficiently high level. For the sake of completeness, image wandering effects are usually caused by changes in the direction of light entering the viewer's pupil due to optical distortions caused by mirror image aberrations, and are produced in the HUD system with the viewer's head. The perceived false motion of the virtual image by moving (or staring) the eye frame (called the "swimming effect") [Ref 6]. Minimizing the wandering effect in binocular optical systems such as HUDs is critical because, in extreme cases, excessive wandering effects in virtual images can lead to vestibular and oculomotor neurological changes in the human visual and sensory systems. motion sickness, dizziness, or nausea caused by the conflict between the samples (refs [16, 17]). Another advantage of the split exit pupil approach of the MHUD system 200 of the present invention is that it achieves a substantially reduced travel effect when compared to prior art HUD systems using a single mirror with a larger optical aperture. The aberrations of the smaller optical aperture of the reflective concave mirror 230 are much smaller than the aberrations of the relatively larger optical aperture mirrors used in prior art single-lens HUD systems. Since the wander effect is proportional to the magnitude of the optical distortion (or light direction deviation) caused by the HUD mirror-induced aberrations, the mass of the MHUD system 200 of the present invention is smaller when compared to prior art HUD systems The optical aperture concave mirror 230 achieves substantially less wandering effect. Furthermore, the angular overlap between the eye frame segments 255 of the MHUD modules 215 (which is explained in more detail in the discussion of FIG. Optical contribution. As a result, optical distortions (or light direction deviations) caused by aberrations of individual concave mirrors 230 of multiple MHUD modules 215 tend to average out at any point in virtual image 260, thus resulting in viewer perception of MHUD system 200 The overall swimming effect is reduced. In another embodiment of the present invention, the imager 220 of the MHUD assembly 210 has a higher resolution than the human visual system (HVS) can resolve, with the added resolution dedicated to the aberrations caused by the concave mirror 230 Digital image curl pre-compensation for residual optical distortion. 
In a typical HUD viewing experience, the virtual image will be formed at a distance of approximately 2.5m. The lateral sharpness of the HVS is approximately 200 microradians. At this distance, the HVS can resolve approximately 2500 x 0.0002 = 0.5 mm pixels, which is equivalent to approximately 450 x 250 pixel resolution for a virtual image 260 with a 10" diagonal. For the exemplary MHUD assembly 210 The imager 220 in can provide much higher resolution than this limitation, such as 640 x 360 resolution with the same size optical aperture, or even 1280 x 720 resolution. With the same size optical aperture Providing a higher resolution imager 220 enables the use of a concave mirror 230 with the same size optical aperture, thereby maintaining the volume advantage of the MHUD assembly 200. The added resolution of the imager 220 allows the use of digital image curl pre-compensation, which Optical distortion and resulting wandering effects caused by aberrations of the concave mirrors 230 are virtually eliminated, while maintaining the maximum achievable resolution and the same volume advantage at the virtual image 260. Each of the reflective concave mirrors 230 can be aspherical or free-form, by means of The aspherical or free-form factor of this concave mirror 230 is chosen to minimize optical aberrations of the concave mirror 230 and, if necessary, the curvature of the windshield. It should be noted that the location of each of the imagers 220 is preferably relative to Axisymmetric about its associated concave mirror 230 to ensure optimally balanced (somewhat equal) aberrations at adjacent edges of any two concave mirrors 230. This is an important design aspect of the MHUD system 200 of the present invention As such, this is because it ensures a uniform viewing transition of the virtual image 260 between the multiple eye-frame segments 255 of the collective eye-frame 250 of the MHUD system 200. Figure 6 illustrates the multi-view perspective of the MHUD assembly 210. Figure 6 As illustrated, the MHUD assembly 210 consists of three reflective concave mirrors 230 assembled together within a housing 600. The three concave mirrors 230 may be fabricated separately and then fitted together within the housing 600, or may be fabricated as a single part It is then fitted within the housing 600. Whether assembled separately or as a single optical section, the three concave mirrors 230 can be fabricated using stamped polycarbonate plastic, the optical surfaces of which are then coated with a reflective metal, such as silver, using sputtering techniques Or a thin layer of aluminum. As illustrated in Figure 6, the rear sidewall of the enclosure consists of three separate sections 610, each incorporating an optical window 615, when the rear sidewall sections 610 are assembled together with their respective concave mirrors 230, The optical window will be aligned with the optical axis of its respective concave mirror 230. As illustrated in the side view perspective of Figure 6, the top edge 617 of each of the backside sidewall portions 610 is angled toward the concave mirror 230 to allow installation in the The imagers 220 on the angled edge surfaces 617 of the back side wall portions 610 are aligned with the optical axis of their respective concave mirrors 230. 
FIG. 6 illustrates multiple-view perspectives of the MHUD assembly 210. As illustrated in FIG. 6, the MHUD assembly 210 consists of three reflective concave mirrors 230 assembled together within a housing 600. The three concave mirrors 230 may be fabricated separately and then fitted together within the housing 600, or may be fabricated as a single part that is then fitted within the housing 600. Whether assembled separately or as a single optical part, the three concave mirrors 230 can be fabricated from stamped polycarbonate plastic whose optical surfaces are then coated, using sputtering techniques, with a thin layer of a reflective metal such as silver or aluminum. As illustrated in FIG. 6, the rear sidewall of the housing consists of three separate sections 610, each incorporating an optical window 615 which, when the rear sidewall sections 610 are assembled together with their respective concave mirrors 230, is aligned with the optical axis of its respective concave mirror 230. As illustrated in the side view perspective of FIG. 6, the top edge 617 of each of the rear sidewall sections 610 is angled toward the concave mirror 230 so that the imagers 220 mounted on the angled edge surfaces 617 of the rear sidewall sections 610 are aligned with the optical axes of their respective concave mirrors 230. The rear sidewall sections 610 are assembled together onto one side of a backplate 630, as illustrated in the rear view perspective of FIG. 6, and the control and interface electronics (printed circuit board) 620 of the MHUD assembly 210 are mounted on the opposite side of the backplate 630. In addition, the backplate 630 incorporates heat sinks to dissipate the heat generated by the imagers 220 and the interface electronics 620 of the MHUD assembly 210. As illustrated in the rear view perspective of FIG. 6, each of the imagers 220 is typically mounted on a flexible electrical board 618 that connects the imager 220 to the control and interface electronics 620. As also illustrated in FIG. 6, the center of the interface edge of each pair of concave mirror 230 and rear sidewall section 610 may incorporate a photodetector (PD) 640, typically a photodiode, each positioned and oriented to detect the light emitted from the imager 220 onto its respective concave mirror 230. Typically, three photodiodes are used in each module, one for each color of the emitted light. The outputs of the photodetectors (PD) 640 are connected to the control and interface electronics 620 of the MHUD assembly 210 and serve as the inputs of a uniformity control loop implemented in the hardware and software design elements of the interface electronics 620 (described in the following discussion). The output of an ambient light sensor 650, typically an integral part of the dashboard brightness controls of most vehicles, is also provided as an input to the control and interface electronics 620 of the MHUD assembly 210. The control and interface electronics 620 of the MHUD assembly 210 also comprise the hardware and software functional elements illustrated in the block diagram of FIG. 7. The MHUD interface function 710 of the control and interface electronics 620 of the MHUD assembly 210, typically implemented in a combination of hardware and software, receives the image input 715 from the vehicle's driver assistance system (DAS), incorporates the color and luminance corrections 735 provided by the control function 720 into the image, and then provides the image inputs 744, 745 and 746 to the imagers 220 of the MHUD assembly 210. Although the same image input 715 data may be provided to the three imagers 220 of the MHUD assembly 210, the MHUD interface function 710 incorporates the color and luminance corrections specific to each imager 220, based on the corrections 735 received from the control function 720, into their respective image inputs 744, 745 and 746. To ensure color and luminance uniformity across the multiple sections 255 of the collective eyebox 250, the uniformity loop function 730 of the control and interface electronics 620 receives the input signals 754, 755 and 756 from the photodetectors (PD) 640 of each of the modules 215 of the MHUD assembly 210, calculates the color and luminance associated with each of the modules 215 of the MHUD assembly 210, and then calculates the color and luminance corrections needed to make the color and luminance more uniform across the sections 255 of the collective eyebox 250. This is accomplished with the aid of an initial calibration look-up table that is generated and stored in the memory of the control and interface electronics 620 when the MHUD assembly 210 is initially assembled.
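A minimal sketch of the uniformity-loop computation is given below, assuming one RGB photodiode triple per module (left, center, right). The calibration targets and the gain-clamping range are placeholder values; the actual loop described above also folds in the ambient light input and works against the factory calibration look-up table stored at assembly time. In this sketch, the returned gains play the role of the corrections that the uniformity loop function 730 hands to the control function 720.

```python
# Minimal sketch, assuming one RGB photodiode triple per module (left, center, right).
# Gains are computed so each module's measured RGB moves toward a common calibration target.
calibration_target = {"R": 1000.0, "G": 1800.0, "B": 900.0}   # assumed reference counts

def uniformity_gains(pd_readings, target=calibration_target, gain_limits=(0.5, 1.5)):
    """pd_readings: {module: {"R": counts, "G": counts, "B": counts}} -> per-module RGB gains."""
    gains = {}
    for module, rgb in pd_readings.items():
        low, high = gain_limits
        gains[module] = {
            ch: min(high, max(low, target[ch] / max(rgb[ch], 1e-6)))
            for ch in ("R", "G", "B")
        }
    return gains

readings = {
    "left":   {"R": 980.0,  "G": 1750.0, "B": 930.0},
    "center": {"R": 1015.0, "G": 1820.0, "B": 880.0},
    "right":  {"R": 950.0,  "G": 1790.0, "B": 910.0},
}
print(uniformity_gains(readings))
```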
The color and luminance corrections calculated by the uniformity loop function 730 are then provided to the control function 720, which combines these corrections with the input received from the ambient light sensor 650 and with the external color and brightness adjustment input commands 725 to generate the color and luminance corrections 735; these corrections are then incorporated into the image data by the MHUD interface function 710 before the corrected image data is provided to the imagers 220 as the image inputs 744, 745 and 746. When incorporating the input received from the ambient light sensor 650 into the color and luminance corrections, the control function 720 adjusts the luminance of the virtual image of the heads-up display in proportion to, or relative to, the brightness outside the vehicle. It should be noted that image data, as used herein, means image information in any form, whether received as an input to the heads-up display, provided to an imager, or in any other form. As previously explained, one embodiment of the MHUD system 200 uses imagers 220 whose resolution is higher than the maximum HVS-resolvable resolution at the virtual image 260, and has means to digitally warp the input to the imagers 220 so as to eliminate or substantially reduce the resulting optical distortion and swim effects. The MHUD interface function 710 of the MHUD assembly 210 of that embodiment may also incorporate a set of look-up tables, each of which holds the digital image warp parameters needed to pre-compensate the residual optical distortion of each of the concave mirrors 230. These parameters are used by the MHUD interface function 710 to warp the digital image input of each of the imagers 220 in such a way that the image data input to each of the imagers 220 pre-compensates the residual distortion of its respective concave mirror 230. The digital image warping parameters in the look-up tables incorporated in the MHUD interface function 710 are initially generated from the optical design simulation of the MHUD assembly 210 and then, after the digital image warping pre-compensation is applied by the MHUD interface function 710, augmented with optical test data based on measurements of the residual optical distortion of each module 215. The resulting digitally warped image data is then combined with the color and luminance corrections 735 provided by the control function 720, and the color- and luminance-corrected and warp-pre-compensated image data is then provided as the image inputs 744, 745 and 746 to the imagers 220 of the MHUD assembly 210. With this design method of the MHUD system 200, the residual optical distortion caused by the concave mirrors 230 and the resulting swim effect can be substantially reduced or completely eliminated, thereby enabling a distortion-free MHUD system 200. As illustrated in the perspective view of FIG. 6, the top side of the MHUD assembly 210 is a glass cover plate 430 that acts as the optical interface window of the MHUD assembly 210 at the top surface of the vehicle dashboard and as a filter that attenuates solar infrared radiation to block the solar heat load at the imagers 220. The glass used should also be substantially transparent to the wavelengths of light of interest.
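The warp pre-compensation step can be sketched as an inverse-mapping resample driven by a per-module look-up table, followed by the color and luminance gains. The warp map generated below is a simple radial placeholder, not the mirror-specific map that would come from the optical design simulation and per-module test data, and the gain values are likewise stand-ins.

```python
import numpy as np

def apply_warp_lut(image, map_y, map_x):
    """Resample 'image' (H x W x 3) with a per-module inverse map: output pixel (r, c)
    is taken from input pixel (map_y[r, c], map_x[r, c]).  Nearest-neighbour for brevity."""
    rows = np.clip(np.rint(map_y), 0, image.shape[0] - 1).astype(int)
    cols = np.clip(np.rint(map_x), 0, image.shape[1] - 1).astype(int)
    return image[rows, cols]

# Placeholder warp map: a mild radial pre-distortion, NOT the real mirror model.
h, w = 360, 640
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
r2 = ((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2
k = -0.02                                 # assumed distortion coefficient
map_y = cy + (yy - cy) * (1 + k * r2)
map_x = cx + (xx - cx) * (1 + k * r2)

frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)    # stand-in DAS image
pre_compensated = apply_warp_lut(frame, map_y, map_x)
gains = np.array([1.02, 0.98, 1.00])                             # stand-in color/luminance gains
corrected = np.clip(pre_compensated * gains, 0, 255).astype(np.uint8)
```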
The design method of the MHUD assembly 210 takes advantage of characteristics of the human visual system (HVS) to simplify the design implementation and assembly tolerances of the MHUD assembly 210. First, the pupil diameter of the eye is approximately 5 mm (3-5 mm during the day and 4-9 mm at night), and the lateral acuity obtained when viewing the virtual image 260 makes it difficult to discern small gaps, up to about 1 mm wide, between the concave mirrors 230 of the MHUD assembly 210. Second, the eye's angular disparity adaptation limit of approximately 0.5 degrees allows for small angular tilts between the concave mirrors 230 of the MHUD assembly 210, up to approximately 0.15 degrees. These tilt and gap tolerances impose very loose mechanical alignment tolerance requirements on the concave mirrors 230 of the MHUD assembly 210 and thus allow very economical manufacturing and assembly methods for the MHUD assembly 210. Further tilt and/or alignment requirements can easily be accommodated, often in software. FIG. 8 illustrates the novel split eyebox design method of the MHUD system 200 of the present invention. The illustration of FIG. 8 is intended to show the relationship between the collective eyebox 250 and the virtual image 260 of the MHUD system 200. FIG. 8 also illustrates an example object 810, an arrow displayed on the virtual image 260 of the MHUD system 200. In the design of the MHUD system 200, each of the eyebox sections 255 is typically positioned at the exit pupil of its respective module 215. Thus, the image information presented to the viewer's eye within each of the eyebox sections 255 is in angular space. The arrow object 810 of the virtual image 260 presented to the viewer within each individual eyebox section 255 is therefore fully visible when the viewer's head is within the central area of the respective eyebox section 255, but as the viewer's head moves toward the right or left of that eyebox section 255, the head or tail of the arrow object 810 of the virtual image 260 gradually vignettes (or fades) accordingly. In the design of the MHUD system 200, when the modules 215 are integrated together into the MHUD assembly 210 shown in the perspective illustration of FIG. 6, the eyebox sections 255 of the modules 215 generally overlap, as illustrated in FIG. 8, to generate the collective eyebox 250 of the MHUD system 200. Thus, the collective eyebox 250 of the MHUD system 200 is formed by the overlap of the exit pupil regions forming the eyebox sections 255 of the multiple modules 215, so that the image information presented to the viewer's eye within the collective eyebox 250 is an angularly multiplexed view of the virtual image 260 extending across the combined field of view of the MHUD modules 215. As illustrated in FIG. 8, the arrow object 810 of the virtual image 260 is fully visible (or viewable) within the overlap area of the eyebox sections 255 that defines the collective eyebox 250 of the MHUD system 200, while the arrow object 810 of the virtual image 260 gradually vignettes (or fades) as the viewer's head moves toward the right or left side of the peripheral region of the collective eyebox 250. The amount of overlap between the eyebox sections 255 of the modules 215 depends on their angular vignetting curves (820 in FIG. 8) and determines the limiting size of the collective eyebox 250 of the MHUD system 200.
The latter is defined as the boundary or size of the collective eyebox 250 area within which the virtual image 260 is fully visible (or viewable) at the desired luminance uniformity. FIG. 8 also illustrates the resulting angular vignetting curve masking of the MHUD assembly 210 across the entire area of the overlapping eyebox sections 255 of the modules 215. As illustrated in FIG. 8, the brightness of the virtual image 260 perceived by the viewer comprises the respective brightness weights B_L, B_C and B_R contributed by each of the modules 215 (left, center and right). The rule used to define the boundary of the collective eyebox 250 is the area A of the overlap of the eyebox sections 255 within which the luminance of the virtual image 260 is uniform to within a given threshold λ across the selected area (for example, λ less than 25%); that is, the variation of the combined perceived brightness B = B_L + B_C + B_R satisfies |ΔB| / B ≤ λ at every point within A (the required uniformity threshold). With this rule for defining the boundary of the collective eyebox 250, the overlap of the eyebox sections 255 of the modules 215 illustrated in FIG. 8 is at least a 50% proportion. This means that each of the modules 215 contributes at least 50% of the perceived brightness of the virtual image 260 at any location within the bounds of the collective eyebox 250 defined by the stated rule. With this design method of the MHUD system 200, the desired luminance uniformity of the virtual image 260 becomes the rule that defines the size of the collective eyebox 250. This design rule is illustrated in the design example of FIG. 8, which uses a uniformity threshold of λ = 25% to produce a collective eyebox 250 that is 120 mm wide. As shown in the illustration of FIG. 8, when a uniformity threshold of λ = 37.5% is used, a collective eyebox 250 approximately 25% wider, measuring approximately 150 mm, is defined. As illustrated in FIG. 8, in the areas of the eyebox sections extending beyond the right and left sides of the collective eyebox 250 of the MHUD system 200, the arrow object 810 of the virtual image gradually vignettes or fades as the viewer's head moves into those areas. With the design methodology of the MHUD system 200, adding modules 215 to the right or left side of the MHUD assembly 210 illustrated in FIG. 6 accordingly extends, to the right or left, the lateral width of the collective eyebox 250 of the MHUD system 200 as defined by the previously stated design rule, within which the arrow object 810 of the virtual image 260 is fully visible at the desired brightness uniformity. When another row of modules 215 is added to the MHUD assembly 210, a similar effect of extending the height of the collective eyebox 250 occurs in the orthogonal direction. Thus, with this modular design method of the MHUD system 200 of the present invention, a collective eyebox 250 of any arbitrary size, with any design-choice width and height dimensions, can be realized by adding more modules 215 to the MHUD assembly 210. Fundamentally, the split exit pupil modular design approach of the MHUD system 200 of the present invention enables the use of multiple imagers 220 and concave mirrors 230, each having a relatively small aperture and each achieving a short optical track length, instead of the larger image source and the much longer optical track length of the single mirror used in prior art HUD systems. Thus, the smaller aperture imagers 220 and concave mirrors 230 of the MHUD modules 215 collectively allow a substantially smaller volume profile than can be achieved with prior art HUD systems that use a larger single image source and a single mirror to achieve the same eyebox size. Furthermore, the size of the collective eyebox 250 achieved by the MHUD system 200 can be tailored by using an appropriate number of the module 215 basic design elements. Conversely, the volumetric aspect of the MHUD system 200 can be made to match the volume available in the vehicle dashboard area while achieving a larger collective eyebox 250 than can be achieved with a prior art HUD system that fits in the same available volume.
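The eyebox-bounding rule stated earlier in this passage lends itself to a small numeric sketch: given per-module vignetting (brightness-weight) curves across lateral head position, the collective eyebox is the span over which the summed brightness stays within the uniformity threshold λ of its peak. The Gaussian-shaped weight curves, module spacing and widths below are illustrative placeholders, not the measured curves of FIG. 8, so the printed widths only indicate the trend that a larger λ yields a wider collective eyebox.

```python
import numpy as np

x = np.linspace(-150, 150, 601)                  # lateral head position across the eyebox, mm
centers = [-60.0, 0.0, 60.0]                     # assumed module eyebox-section centers, mm
sigma = 45.0                                     # assumed width of each vignetting curve, mm

# Per-module brightness weights and their sum (perceived brightness of the virtual image).
weights = [np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers]
total = np.sum(weights, axis=0)

def eyebox_width(total_brightness, positions, lam):
    """Width of the region where brightness stays within a fraction 'lam' of its peak."""
    ok = total_brightness >= (1.0 - lam) * total_brightness.max()
    return positions[ok].max() - positions[ok].min() if ok.any() else 0.0

for lam in (0.25, 0.375):
    print(f"lambda = {lam:.3f}: collective eyebox ~ {eyebox_width(total, x, lam):.0f} mm wide")
```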
To illustrate the volume advantage of the MHUD system 200 of the present invention, the perspective view of FIG. 6 shows the design dimensions of an MHUD assembly 210 that uses three imagers 220, each having an optical aperture size of 6.4 × 3.6 mm, and three concave mirrors, each having an optical aperture size of 60 × 100 mm, to achieve a 120 × 60 mm collective eyebox 250 based on a luminance uniformity threshold of λ = 25%. Based on the design dimensions shown in FIG. 6, the total volume of the MHUD assembly 210 is approximately 1350 cc (1.35 liters). For comparison, the total volume of a prior art HUD system using a single larger aperture mirror and a single larger image source to achieve the same eyebox size would exceed 5000 cc (5 liters). Thus, the design method of the MHUD system 200 of the present invention allows a HUD system that is approximately 3.7 times smaller (or more) in volume than prior art HUD systems. To visualize this volume advantage, FIG. 9 illustrates the volume of the MHUD assembly 210 design example of FIG. 6 installed in the dashboard of a miniature car. As illustrated in FIG. 9, the volume-saving design of the MHUD system 200 of the present invention allows HUD capability to be added to automobiles with extremely limited dashboard volume in which prior art HUD systems simply would not fit. FIG. 10 illustrates the light path of the MHUD system 200. As illustrated in FIG. 10, and as previously illustrated and explained with reference to FIG. 2, the three imagers 220 comprising the MHUD assembly 210 each produce the same image at the same resolution (e.g., 640 × 360 pixels); the three images, after reflection by their three respective concave mirrors 230, are angularly addressed to the entire 120 × 60 mm collective eyebox 250 of the previously described design example and collectively provide 640 × 360 spatial resolution across the 125 × 225 mm virtual image 260 of that design example. FIG. 10 also illustrates the design requirement of producing 10,000 cd/m² of luminance at the virtual image 260. Each of the three imagers 220 produces approximately 25,000 cd/m² of luminance, given a typical windshield reflectivity of approximately 20% and the collective eyebox 250 bounding rule explained previously. A conservative estimate is that the three imagers 220 plus the control and interface electronics 620 of the MHUD assembly 210 collectively consume approximately 2 W to produce the 25,000 cd/m² of luminance, which is approximately 25% of the power consumption of prior art HUD systems. Referring to the performance of the MHUD system 200 illustrated in FIG. 5, the encircled energy curves of FIG. 5 show a geometric blur of approximately 180 microns for the collimated beam from the optical aperture of the concave mirror 230. With each of the modules 215 of the design example illustrated in FIG. 6 having an effective focal length of 72 mm, the 180 micron blur size indicated in the encircled energy curves of FIG. 5, for a beam originating from a pixel of the imager 220 and collimated by its respective concave mirror 230, gives each of the modules 215 an angular spread of 0.143 degrees. The swim effect is associated with this 0.143 degree angular spread across the full width of the beam from a pixel, while the resolution (MTF) is determined by the effective beam width sampled by the pupil of the eye. The MTF curves of FIG. 5 show the MTF of each of the modules 215, calculated for a typical eye pupil aperture of 4 mm diameter.
The smaller the angular spread, the smaller the swim radius at the virtual image 260. For the virtual image 260 viewed at 2.5 m from the collective eyebox 250 of the MHUD system 200, the corresponding swim radius for the MHUD system 200 design example would be 6.2 mm. A prior art HUD system using a single mirror with an optical aperture size equal to the full aperture of the MHUD assembly 210 design example would have an optical aperture approximately 2.4 times larger than the optical aperture of a module 215. Since the size of the aberration blur grows approximately with the third power of the aperture size (ref [18]), a prior art single-mirror HUD system with an optical aperture equal to the full aperture of the MHUD assembly 210 design example would have a corresponding swim radius of approximately 14.3 mm if its 5th-order aberrations exactly compensated the larger 3rd-order aberrations (which cannot purposefully be achieved by design), and otherwise would typically have a corresponding swim radius of approximately 39.7 mm, more than six times larger than the swim radius achieved by the design example of the MHUD system 200. It should also be mentioned that with the previously described aberration pre-compensation method, the swim radius of the MHUD system 200 can be reduced substantially below the stated value of this design example, or even eliminated entirely. FIG. 10 also illustrates the light path of the MHUD system 200 of the present invention including the solar load. As illustrated in FIG. 10, the reverse optical path of sunlight directly hitting the windshield of the vehicle can reach the area of the collective eyebox 250, possibly causing glare in the virtual image 260. In the design of the MHUD system 200 of the present invention, the amount of sunlight that can reach the collective eyebox 250 is much less than in prior art HUD systems. First, assuming the optical transmittance of the windshield 240 is 80%, the sunlight is attenuated by the windshield 240 to at most 80% of its brightness. Second, the sunlight rays that pass through the windshield 240 and are reflected by one of the concave mirrors 230 toward its corresponding imager 220 are further attenuated, before being reflected back toward the concave mirror 230 assembly, to at most 5% of their brightness by the anti-reflective (AR) coating on the optical aperture of the imager 220. Third, this reverse-path sunlight is then further attenuated to at most 20% of its brightness when it is reflected by the windshield 240 toward the collective eyebox 250. Finally, as previously explained, since the imager 220 and concave mirror 230 of each of the modules 215 contribute up to 50% of the brightness of the virtual image 260, the sunlight glare reflected from the module 215 struck by the sunlight undergoes a further 50% attenuation at the virtual image 260. Therefore, based on this path attenuation analysis, the sunlight that reaches the collective eyebox 250 is attenuated to at most 0.4% of its brightness (much less than 1%). With the MHUD system 200 capable of producing over 10,000 cd/m² of luminance at the virtual image 260 and only 0.4% daylight glare, the MHUD system 200 can withstand over 250,000 cd/m² of daylight luminance, which corresponds to a glare-to-image brightness ratio of approximately 28 dB.
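Several of the figures quoted in this and the preceding paragraph follow from short arithmetic, reproduced below with the inputs stated in the text (imager luminance, windshield reflectivity, module blur and effective focal length, and the reverse-path loss fractions). The two-overlapping-modules assumption and the omission of mirror reflection losses are simplifications of the fuller analysis above.

```python
import math

# Virtual image luminance and volume advantage (figures quoted above).
imager_luminance = 25_000            # cd/m^2 produced by each imager
windshield_reflectivity = 0.20
per_module_at_eyebox = imager_luminance * windshield_reflectivity        # ~5,000 cd/m^2
virtual_image_luminance = 2 * per_module_at_eyebox                       # ~two overlapping modules
print(f"virtual image luminance ~ {virtual_image_luminance:,.0f} cd/m^2")
print(f"volume advantage ~ {5000 / 1350:.1f}x smaller (1350 cc vs ~5000 cc)")

# Swim radius of one module from its geometric blur and effective focal length.
blur_mm, efl_mm, viewing_distance_mm = 0.180, 72.0, 2500.0
angular_spread_rad = blur_mm / efl_mm
print(f"angular spread ~ {math.degrees(angular_spread_rad):.3f} deg, "
      f"swim radius ~ {viewing_distance_mm * angular_spread_rad:.1f} mm")

# Reverse-path solar glare budget (fractions as stated in the text).
glare_fraction = 0.80 * 0.05 * 0.20 * 0.50
print(f"solar glare reaching the eyebox ~ {glare_fraction:.1%} of direct sunlight")
```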
It is worth mentioning that the glass cover 430 absorbs infrared light but transmits the wavelengths used in the heads-up display of the present invention, to prevent the solar-load heat from being concentrated back onto the imagers 220 by the concave mirror 230 assembly. In the embodiments described above, multiple modules are arranged side by side to provide overlapping eyebox sections, giving a collective eyebox 250 that is wider than the eyebox sections 255 themselves. However, if desired, alternatively or additionally, the modules may be positioned so that the eyebox sections of the modules 215 are also stacked to provide a taller collective eyebox 250, again with all modules displaying the same virtual image at the same location in front of the vehicle. It should be noted that stacking to provide a taller collective eyebox 250 generally does not require stacking the modules themselves; rather, because of the slope of a typical windshield, the stacking of eyebox sections can be achieved with additional modules simply by using a larger, generally horizontal area of the dashboard. Furthermore, although it was previously stated that "as illustrated in FIG. 2, the image emitted from each individual imager 220 with its associated optics is collimated, magnified and reflected by its associated concave mirror 230, and then partially reflected from the windshield 240 to form a virtual image 260 that can be viewed within an eyebox section 255 at the nominal head position of the driver (operator) of the vehicle", in any embodiment the degree of collimation achieved by means of a concave mirror will necessarily be less than perfect, and may be intentionally set to limit how far in front of the vehicle the virtual image is formed. In some cases the concave mirror may even be intentionally designed to deviate from ideal collimation in order to offset aberration sources later in the path (the curvature of the windshield, if present, being the most obvious example). As previously indicated, the off-axis distortion and tilt aberrations, as well as the color and luminance corrections, may be handled in the control and interface electronics 620 of the MHUD assembly 210 of FIG. 2 (also visible in FIG. 6). Of course, the lateral position correction of each image or image portion from each module 215 can also be performed in the control and interface electronics 620 (or mechanically) so that no double image or double image portion is displayed. Additionally, it should be noted that "luminance correction" has at least two main aspects. The first and most obvious is the module-to-module correction of brightness variation, so that the image brightness (and color) from different modules does not differ. Related to it, however, is the fact that image warping and other factors can cause brightness variations within portions of the image of an individual module, since the changes in pixel pitch caused by warping can produce visible brightness artifacts. If this is the case, since the brightness of each individual pixel in each module can be controlled individually, the pixel brightness can, when necessary, be locally increased in areas where the pixel pitch is increased and decreased in areas where the pixel pitch is reduced. Finally, it should be noted that typical solid state light emitting pixel array imagers are not square, but are generally rectangular with unequal dimensions.
Thus, the choice of imager orientation provides an additional variable that can be exploited in the design of the heads-up display of the present invention. Table 1 below presents the salient performance characteristics of the imagers of the MHUD system 200 according to certain embodiments of the present invention, and illustrates its performance advantages over prior art HUD systems that use a single larger mirror and a single larger image source. As shown in Table 1, the split exit pupil MHUD system of the present invention outperforms the prior art HUD system by a factor of several in each performance category. Additionally, owing to its previously explained loose manufacturing tolerances and smaller mirrors, the MHUD system 200 of the present invention is expected to be much more cost effective than prior art systems with similar eyebox sizes.
[Table 1 is presented as an image (Figure 107106971-A0304-0001) in the original publication.]
* Based on a prior art HUD using a high brightness LCD panel as the image source.
Table 1: Performance comparison.
Multi-Image Heads-Up Display Systems with Near-Field and Far-Field Virtual Images
In many HUD system applications, it is desirable for the HUD system to deliver multiple virtual images to the viewer, preferably directly in front of the viewer, so as not to distract the viewer from driving while at the same time providing safe viewability of additional information. In this case, multiple virtual images may be displayed by the HUD system, whereby, for example, a first virtual image is displayed at the far-field distance commonly employed in conventional HUD systems and a second virtual image is displayed at a near-field distance. Preferably, both virtual images are viewable by the viewer of the HUD system without the viewer turning his or her head away from the road, allowing the driver to remain focused on the driving situation. In an alternative preferred embodiment of the invention of the present disclosure, as illustrated in FIG. 2, the previously described split exit pupil design architecture can be combined with a multitude of display elements 220 (i.e., imagers and associated optics 220), whereby each display element 220 is configured to modulate multiple images at different output angles. In one aspect of the multi-image heads-up display system of the present invention, the system may include a plurality of modules 215, each having a solid state light emitting pixel array imager (i.e., display element) 220 and a concave mirror 230 positioned to collimate and magnify the first and second images produced by the solid state light emitting pixel array imager 220 and to reflect the first and second images toward the vehicle windshield to form a first virtual image and a second virtual image viewable within an eyebox section. The multiple modules are positioned so that the eyebox sections 255 combine to provide the heads-up display with a collective eyebox 250 that is larger than the eyebox section 255 of each module 215, and so that the collective eyebox 250 is located at the nominal head position of the driver of the vehicle. In a first aspect of the multi-image heads-up display system embodiment of the present invention, the solid state light emitting pixel array imager 220 comprises a first set of pixels associated with a respective first set of micro-optical elements and a second set of pixels associated with a respective second set of micro-optical elements. The first set of micro-optical elements is configured to direct the output from the respective first set of pixels to produce the first image described above, thereby producing a first virtual image that can be viewed from the collective eyebox 250 at a first distance. The second set of micro-optical elements is configured to direct the output from the respective second set of pixels to produce the second image described above, thereby producing a second virtual image that can be viewed from the collective eyebox 250 at a second distance. The micro-optical elements may comprise non-telecentric lenses or non-telecentric optical elements configured to enable pixel outputs that are substantially tilted relative to the surface of the solid state light emitting pixel array imager 220.
In this first aspect of the multi-image heads-up display system embodiment of the present invention, the first distance may be a far-field distance and the second distance may be a near-field distance. The first set of pixels may be a user-defined first set of pixels of the solid state light emitting pixel array imager 220, and the second set of pixels may be a user-defined second set of pixels of the solid state light emitting pixel array imager 220. The first set of pixels may be the odd-numbered rows of pixels of the solid state light emitting pixel array imager 220 and the second set of pixels may be the even-numbered rows, or vice versa. The first set of pixels may comprise at least 50% of the pixel area of the solid state light emitting pixel array imager 220, with the second set of pixels comprising the remainder of the pixel area of the solid state light emitting pixel array imager 220. The first set of pixels may be an upper region or portion of the solid state light emitting pixel array imager 220 and the second set of pixels may be a lower region or portion of the solid state light emitting pixel array imager 220. FIGS. 11A-B and 11C-D illustrate non-limiting examples of such multi-image light modulating display elements 220 configured with predetermined sets of individual pixels, such as predetermined groups of pixel rows or pixel regions of the 2D pixel array of the display element 220, each of which individually incorporates micro-optical elements that direct (or directionally modulate) the light emitted from the respective pixels in a predetermined, unique direction. FIGS. 11A-B and 11C-D illustrate examples in which the multi-image display element 220 is designed to modulate two images simultaneously, where the first and second images are emitted from the surface of the display element 220 in different directions. When such a display element 220 is used in the context of the split exit pupil HUD design architecture of FIG. 2, modulating the first image (described above) produces a first virtual image viewable from the HUD system eyebox 250 at a far-field distance (e.g., approximately 2.5 m), and modulating the second image produces a second virtual image viewable at a near-field distance (e.g., approximately 0.5 m). These two viewable virtual images can be modulated by the multi-image split exit pupil HUD system at the same time, and a viewer of the HUD system can selectively view either the first virtual image or the second virtual image simply by redirecting his or her line of sight in the vertical plane by an angle proportional to the angular tilt (or separation) between the modulation directions of the two virtual images produced by the multiple display elements 220 of the split exit pupil HUD system.
FIGS. 11A and 11B illustrate top and side views of a display element 220 in one embodiment of the present invention, in which the multiple display elements 220 of the split exit pupil HUD system are configured to modulate the first image and the second image by splitting their optical apertures into two groups of pixels (e.g., odd-numbered and even-numbered rows of display pixels), where one group of pixels (the odd-numbered rows) modulates the first image and the second group of pixels (the even-numbered rows) modulates the second image. This directional modulation capability of the HUD system display element 220 is realized by designing the micro-optical or micro-lens element associated with each of the image-modulating pixel groups to directionally modulate the light emitted from its associated pixels into the predetermined direction of the corresponding image. For example, in the configuration illustrated in FIGS. 11A and 11B, the micro-optical elements associated with the odd-numbered rows of pixels direct the light emitted from that group of pixels to form the first image, while the micro-optical elements associated with the even-numbered rows of pixels direct the light emitted from that group of pixels to form the second image. It should be noted that although the light rays are illustrated as parallel for each of the images in FIGS. 11A-11D, in practice they would generally fan out from the imager 220 to expand or enlarge the image size as needed. These pixel emission angles can be provided by using non-telecentric micro-optical lens elements in the form of the non-telecentric QPI® imagers discussed in more detail below. It should be noted that, as with the previously described single-image split exit pupil HUD design architecture, the multiple imagers 220 each modulate the same two images, each in a different direction, to present two virtual images within the collective eyebox 250 of the split exit pupil HUD system, where each of the two resulting modulated virtual images can be viewed across the collective eyebox 250, but at a different vertical (or elevation) orientation. In another preferred embodiment of the multi-image HUD system, illustrated in FIGS. 11C and 11D, the multiple display elements 220 of the split exit pupil multi-image HUD system each have an optical aperture that is divided into two pixel regions, namely, in the illustrated example, an upper pixel region and a lower pixel region. In this embodiment, the two images modulated in different directions are each modulated by a single dedicated pixel region. For example, as illustrated in FIGS. 11C and 11D, the micro-optical elements of the pixels in the upper region of the optical aperture of the display element 220 (which may be any user-defined portion of the imager pixels) are designed to direct the light emitted from each of the pixels of that upper region to form the first image as defined above, while the micro-optical elements of the pixels in the lower region of the optical aperture of the display element 220 are designed to direct the light emitted from each of the pixels of that lower region to form the second image as defined above. These pixel emission angles may be provided by using non-telecentric micro-optics in the form of the non-telecentric imagers 220 discussed in more detail below.
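How a single imager frame might be composed from the two source images can be sketched as a simple pixel-group assignment; both variants described above are shown, row interleaving and an upper/lower split. The frame size, the 50/50 split and the assignment of a particular group to a particular image are assumptions for illustration only; in the system described above this addressing is performed by the MHUD interface function against the imager's actual pixel groups and micro-optic layout.

```python
import numpy as np

def compose_interleaved(first_image, second_image):
    """Interleave by rows: one parity of rows carries the first (far-field) image,
    the other parity carries the second (near-field) image (parity choice is a design option)."""
    assert first_image.shape == second_image.shape
    frame = np.empty_like(first_image)
    frame[0::2] = second_image[0::2]   # rows 0, 2, ... -> second image
    frame[1::2] = first_image[1::2]    # rows 1, 3, ... -> first image
    return frame

def compose_split(first_image, second_image, split_row):
    """Split the aperture into two regions, one per image; which region carries which
    image follows the micro-optic design of the display element."""
    frame = np.empty_like(first_image)
    frame[:split_row] = second_image[:split_row]
    frame[split_row:] = first_image[split_row:]
    return frame

far_field = np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8)
near_field = np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8)
frame_a = compose_interleaved(far_field, near_field)
frame_b = compose_split(far_field, near_field, split_row=180)
```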
FIG. 12 illustrates a preferred embodiment of the multi-image split exit pupil HUD system of the present invention. As illustrated in FIG. 12, the multiple display elements (or imagers) 220 can each modulate two images, the first image modulated in the upward direction and the second image modulated in the downward direction. The multiple display elements 220 simultaneously modulate both the first image and the second image to angularly fill the eyebox 250 of the multi-image split exit pupil HUD system as illustrated in FIG. 8. After being collimated by the concave mirrors 230 and reflected toward the eyebox 250 by the windshield, the collimated light bundles comprising both the first image and the second image modulated (generated) by the multiple display elements 220 can be viewed at two different inclinations within the eyebox 250, allowing a viewer of the multi-image split exit pupil HUD system to focus on two independently and simultaneously modulated virtual images, where the first virtual image is viewable in the far field 260-1 and the second virtual image is viewable in the near field 260-2, the two virtual images being angularly separated in the vertical (elevation) direction by an angle 220-3 proportional to the directional separation angle 220-4 between the two images modulated by the multiple display elements 220. The two virtual images appear at different first and second virtual distances because their light bundles are collimated to different degrees. The collimation of the concave mirror 230 is designed to achieve the far-field virtual image distance from the eyebox 250. The micro-optical elements of the non-telecentric QPI® imager, discussed below as a specific example of a particular embodiment, are designed to introduce additional collimation to the light emitted from the respective pixels associated with them. The combined collimation achieved by the cooperation of the non-telecentric micro-optics and the concave mirror 230 thereby achieves both the far-field and near-field virtual image distances from the eyebox 250, so that the multi-image HUD can display the far-field and near-field virtual images simultaneously. As illustrated in FIG. 13, a viewer of the multi-image split exit pupil HUD system can view either the first virtual image or the second virtual image modulated by the HUD system simply by redirecting his or her line of sight in the vertical (elevation) direction by the angle 220-3 (also seen in FIG. 12). Since the two virtual images are independently and separately modulated by the two separate sets of pixels comprising the display elements (imagers) 220, each of the first and second images displayed to the viewer can contain different information of interest to the viewer. FIG. 13 also illustrates the nominal positions of the two virtual (first and second) images modulated by the multi-image split exit pupil HUD system, where, in a non-limiting illustrative example, the far-field virtual image can be focused on by the viewer at a distance of approximately 2.5 m (approximately at the end of the front hood of the vehicle), while the near-field virtual image can be focused on by the viewer at a distance of approximately 0.5 m (approximately at the outer lower edge of the windshield of the vehicle).
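The two virtual-image distances can be related to the degree of collimation with simple vergence (reciprocal distance) arithmetic. The sketch below assumes the concave mirror alone is designed for the far-field distance and the non-telecentric micro-optics contribute the additional vergence needed for the near-field image; this is a simplified thin-lens view of the combined collimation described above, not the actual optical prescription.

```python
far_field_m = 2.5      # far-field virtual image distance from the eyebox
near_field_m = 0.5     # near-field virtual image distance from the eyebox

far_vergence_D = 1.0 / far_field_m        # residual vergence left by the mirror, ~0.4 D
near_vergence_D = 1.0 / near_field_m      # vergence needed for the near-field image, 2.0 D
extra_from_micro_optics_D = near_vergence_D - far_vergence_D

print(f"far-field vergence:  {far_vergence_D:.1f} D")
print(f"near-field vergence: {near_vergence_D:.1f} D")
print(f"additional vergence from the micro-optics: ~{extra_from_micro_optics_D:.1f} D")
```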
It should be noted that the described multi-image capability advantageously does not increase the volumetric aspect of the multi-image split exit pupil HUD system outlined in FIG. 6. As illustrated in FIG. 7, the interface 710, control function 720 and uniformity loop 730 of the display elements (imagers) 220 also remain unchanged. The main differences in the implementation and design method of the multi-image split exit pupil HUD system, compared with the described single-image split exit pupil HUD system, are: 1. the multiple display elements (imagers) 220 have the ability, described in the preceding embodiments, to modulate multiple images in different directions; 2. the vertical field of view (FOV) of the multi-image split exit pupil HUD system is angularly split into two directional regions, so that two angularly separated images can be modulated simultaneously; and 3. the image input 715 to the multiple display elements (imagers) 220 consists of two images, each of which is (digitally) addressed to the corresponding pixel group described in the preceding embodiments. FIG. 14 illustrates an exemplary embodiment of the non-telecentric QPI® imager referred to above, in which the non-telecentric micro-optical elements 1250-1 are implemented as refractive optical elements (ROE) and are used to direct the light output of selected pixels at a substantially tilted angle relative to the surface of the display element 220 to provide the near-field virtual image. In this embodiment of FIG. 14, the directional modulation aspect of the pixel-level refractive non-telecentric micro-optical elements 1250-1 is accomplished using decentered microlenses 1250-1 formed from successive layers of dielectric materials 1310 and 1320 having different refractive indices. FIG. 14 is a schematic cross-section of a display element 220 comprising a plurality of non-telecentric refractive micro-optical elements 1250-1. In this embodiment, the array of pixel-level non-telecentric micro-optical elements 1250-1 can be fabricated monolithically at the wafer level, as multiple layers of semiconductor dielectric material, using semiconductor photolithography, etching and deposition techniques, for example using silicon oxide for the low refractive index layers 1310 and silicon nitride for the high refractive index layers 1320. As illustrated in FIG. 14, the array of pixel-level micro-optical elements 1250-1 is realized by successively (sequentially) depositing the multiple layers of dielectric materials 1310 and 1320 with different refractive indices to form the refractive surfaces of the pixel-level micro-optical elements 1250-1, with the center positions of the refractive microlens elements progressively shifted across the microlens array as needed to obtain the desired non-telecentric characteristics and image projection directions. FIG. 15 illustrates an alternative exemplary embodiment of the non-telecentric QPI® imager referred to above, in which the non-telecentric micro-optical elements 1250-2 are implemented as tilted refractive optical elements (ROE), again progressively varied across the microlens array as needed to achieve the desired non-telecentric characteristics and image projection directions, and can be used to direct the light output of selected pixels at substantially oblique angles relative to the surface of the imager 220 to provide the near-field or second image.
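The pointing effect of a decentered (or, for this purpose, tilted) pixel-level microlens can be approximated with the paraxial relation that a lens decentered by d relative to its pixel deflects the emitted chief ray by roughly arctan(d/f). The focal length, pixel pitch and target angles below are placeholders rather than parameters of the QPI® imager; the sketch only illustrates how a progressively varied decenter profile across the array yields a progressively tilted emission direction.

```python
import math

microlens_focal_um = 12.0      # assumed microlens focal length
pixel_pitch_um = 8.0           # assumed pixel pitch

def decenter_for_angle(target_deg, f_um=microlens_focal_um):
    """Paraxial approximation: deflection angle ~ arctan(decenter / focal length)."""
    return f_um * math.tan(math.radians(target_deg))

# Progressive decenter profile across part of the array to steer one image group downward.
for angle in (0.0, 1.0, 2.0, 3.0, 4.0):
    d = decenter_for_angle(angle)
    print(f"target tilt {angle:4.1f} deg -> microlens decenter ~ {d:5.2f} um "
          f"({d / pixel_pitch_um:.2f} pixel pitches)")
```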
In this embodiment of FIG. 15, the directional modulation aspect of the pixel-level refractive non-telecentric micro-optical elements 1250-2 is implemented using tilted microlenses 1250-2 formed from successive layers of dielectric materials 1410 and 1420 having different refractive indices. FIG. 15 is a side view of a display element 220 of the present invention comprising a plurality of tilted refractive micro-optical elements 1250-2. In this embodiment, the array of pixel-level non-telecentric micro-optical elements 1250-2 can be fabricated monolithically at the wafer level, as multiple layers of semiconductor dielectric material, using semiconductor photolithography, etching and deposition techniques, for example using silicon oxide for the low refractive index layers 1410 and silicon nitride for the high refractive index layers 1420. As illustrated in FIG. 15, the array of pixel-level non-telecentric micro-optical elements 1250-2 may be realized by successively (sequentially) depositing the multiple layers of dielectric materials 1410 and 1420 with different refractive indices to form the refractive surfaces of the non-telecentric micro-optical elements 1250-2. Thus, the invention has several aspects that may be practiced individually or in various combinations or sub-combinations as desired. Although certain preferred embodiments of the present invention have been disclosed and described for purposes of illustration and not of limitation, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention, which are defined by the full scope of the following claims.

1‧‧‧collimating optics module 3‧‧‧projection lens 5‧‧‧diffusing screen 7‧‧‧semi-transparent collimating mirror 10‧‧‧tilted relay optics 11‧‧‧concave holographic optical element reflector 14‧‧‧image projector 18‧‧‧polyhedral reflective surface 23‧‧‧liquid crystal display panel 51‧‧‧diffusing surface 200‧‧‧modular heads-up display (MHUD) system 210‧‧‧MHUD assembly 215‧‧‧module 220‧‧‧display element/imager 220-3‧‧‧angle 220-4‧‧‧angle 230‧‧‧concave mirror 240‧‧‧vehicle windshield 250‧‧‧collective eyebox 255‧‧‧eyebox section 260‧‧‧virtual image 260-1‧‧‧far field 260-2‧‧‧near field 410‧‧‧imager 420‧‧‧optics/optical element 430‧‧‧glass cover plate 610‧‧‧rear sidewall section 615‧‧‧optical window 617‧‧‧top edge/angled edge surface 620‧‧‧control and interface electronics 630‧‧‧backplate 640‧‧‧photodetector 650‧‧‧ambient light sensor 710‧‧‧interface function 715‧‧‧image input 720‧‧‧control function 730‧‧‧uniformity loop function 735‧‧‧color and luminance correction 744‧‧‧image input 745‧‧‧image input 746‧‧‧image input 754‧‧‧input signal 755‧‧‧input signal 756‧‧‧input signal 810‧‧‧arrow object 820‧‧‧angular vignetting curve 1250-1‧‧‧non-telecentric refractive micro-optical element/decentered microlens/pixel-level micro-optical element 1250-2‧‧‧non-telecentric micro-optical element 1310‧‧‧dielectric material/low refractive index layer 1320‧‧‧dielectric material/high refractive index layer 1410‧‧‧dielectric material/low refractive index layer 1420‧‧‧dielectric material/high refractive index layer

In the following description, the same drawing reference numerals are used for the same elements even in different drawings. Matters defined in the description, such as detailed constructions and design elements, are provided to assist in a comprehensive understanding of the exemplary embodiments; however, the present invention may be practiced without those specifically defined matters. Furthermore, well-known functions or constructions are not described in detail, since they would obscure the invention with unnecessary detail. To understand the invention and see how it may be carried out in practice, several embodiments of it will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
FIG. 1-1 illustrates a prior art heads-up display (HUD) system that uses a concave HOE reflector as a combiner and a collimator to minimize the collimating optics and reduce the volumetric aspect of the HUD system.
FIG. 1-2 illustrates a prior art heads-up display (HUD) system that uses a relay optics (REL) module to deliver an intermediate image at the focal plane of a converging combiner (CMB) mirror and to define the system pupil.
FIG. 1-3 illustrates a prior art heads-up display (HUD) system that uses a projection lens (3) to project an intermediate image onto a diffusing surface, serving as the image source, and a semi-transparent collimating mirror.
FIG. 1-4 illustrates a prior art heads-up display (HUD) system that uses an image forming source composed of two liquid crystal display (LCD) panels to form an intermediate image on a diffusing screen placed at the focal plane of a collimating optics module.
FIG. 1-5 illustrates a prior art heads-up display (HUD) system that uses an image projector mounted on the top side of the vehicle windshield and configured to project an image onto a vehicle dashboard equipped with a polyhedral reflective surface, the polyhedral reflective surface being configured to reflect the image from the image projector onto the vehicle windshield.
FIG. 2 illustrates an exemplary modular HUD (MHUD) system of the present invention.
FIG. 3 illustrates the relationship between the design parameters and constraints of the MHUD system of FIG. 2.
FIG. 4 illustrates the optical design aspects and ray trace diagram of a HUD module of the MHUD assembly of the embodiment of FIG. 2.
FIG. 5 illustrates the optical performance of a HUD module of the MHUD assembly of the embodiment of FIG. 2.
FIG. 6 illustrates multiple-view perspectives of an MHUD assembly design example of the MHUD system of the embodiment of FIG. 2.
FIG. 7 illustrates a functional block diagram of the interface and control electronics design element (board) of the MHUD system of the embodiment of FIG. 2.
FIG. 8 illustrates the novel split eyebox design method of the MHUD system 200 of the embodiment of FIG. 2.
FIG. 9 illustrates the actual volume of the MHUD assembly design example illustrated in FIG. 6 installed in the dashboard of a miniature car.
FIG. 10 illustrates the light path, including the solar load, of the MHUD system 200 of the present invention.
FIGS. 11A and 11B respectively illustrate top and side views of a solid state light emitting pixel array imager (i.e., display element) in a multi-image HUD system embodiment of the present invention, depicting odd-numbered rows of pixels having an output that produces a first image projected generally outward from the imager surface, and even-numbered rows of pixels having an output that produces a second image projected generally slightly downward relative to the first image.
FIGS. 11C and 11D respectively illustrate top and side views of a solid state light emitting pixel array imager in a multi-image HUD system embodiment of the present invention, depicting pixels in the upper region of the solid state light emitting pixel array imager (i.e., display element) having an output that produces the second image described above, and pixels in the lower region of the solid state light emitting pixel array imager having an output that produces the first image described above.
FIG. 12 illustrates the multiple light paths of a multi-image HUD system embodiment of the present invention.
FIG. 13 illustrates the nominal positions of the near-field virtual image and the far-field virtual image in a low-volume package design of a multi-image HUD system embodiment of the present invention installed in the dashboard of a miniature car.
FIG. 14 is a side view of a display element of the present invention comprising a plurality of non-telecentric refractive micro-optical elements.
FIG. 15 is a side view of a display element of the present invention comprising a plurality of tilted refractive micro-optical elements.

220‧‧‧display element/imager
220-3‧‧‧angle
230‧‧‧concave mirror
240‧‧‧vehicle windshield
250‧‧‧collective eyebox
260-1‧‧‧far field
260-2‧‧‧near field

Claims (9)

1. A heads-up display for a vehicle, comprising: a plurality of modules, each said module having: a solid state light emitting pixel array imager; and a concave mirror positioned to collimate and magnify a first image and a second image produced by the solid state light emitting pixel array imager and to reflect the first image and the second image toward a vehicle windshield to form a first virtual image and a second virtual image viewable within an eyebox section; the plurality of modules being positioned so that the eyebox sections combine to provide the heads-up display with a collective eyebox larger than the eyebox section of each module, the collective eyebox being located at a nominal head position of a driver of a vehicle; the solid state light emitting pixel array imager comprising a first set of pixels associated with a respective first set of micro-optical elements and a second set of pixels associated with a respective second set of micro-optical elements; the first set of micro-optical elements being configured to project an image from the respective first set of pixels in a direction outward from a surface of the solid state light emitting pixel array imager, thereby producing a first virtual image viewable at a first distance from the collective eyebox; and the second set of micro-optical elements being configured to project an image from the respective second set of pixels in a direction tilted downward relative to the first image, thereby producing a second virtual image viewable at a second distance from the collective eyebox.
2. The heads-up display of claim 1, wherein the first distance is a far-field distance and the second distance is a near-field distance.
3. The heads-up display of claim 1, wherein the first set of pixels consists of a user-defined first set of pixels of the solid state light emitting pixel array imager, and the second set of pixels consists of a user-defined second set of pixels of the solid state light emitting pixel array imager.
4. The heads-up display of claim 1, wherein the first set of pixels consists of the odd-numbered rows of pixels of the solid state light emitting pixel array imager, and the second set of pixels consists of the even-numbered rows of pixels of the solid state light emitting pixel array imager.
5. The heads-up display of claim 1, wherein the first set of pixels consists of the even-numbered rows of pixels of the solid state light emitting pixel array imager, and the second set of pixels consists of the odd-numbered rows of pixels of the solid state light emitting pixel array imager.
6. The heads-up display of claim 1, wherein the first set of pixels consists of pixels comprising at least 50% of the pixel area of the solid state light emitting pixel array imager, and the second set of pixels consists of the remaining pixel area of the solid state light emitting pixel array imager.
7. The heads-up display of claim 1, wherein the first set of pixels consists of an upper region of the solid state light emitting pixel array imager, and the second set of pixels consists of a lower region of the solid state light emitting pixel array imager.
8. The heads-up display of claim 1, wherein the number of eyebox sections and modules is a user-defined number of eyebox sections and modules.
9. The heads-up display of claim 1, wherein the first set of pixels consists of a user-defined first set of pixels of the solid state light emitting pixel array imager, the second set of pixels consists of a user-defined second set of pixels of the solid state light emitting pixel array imager, and the number of eyebox sections and modules is a user-defined number of eyebox sections and modules.
TW107106971A 2017-03-03 2018-03-02 Split exit pupil heads-up display systems and methods TWI766954B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/449,679 US10539791B2 (en) 2014-09-02 2017-03-03 Split exit pupil multiple virtual image heads-up display systems and methods
US15/449,679 2017-03-03

Publications (2)

Publication Number Publication Date
TW201837539A TW201837539A (en) 2018-10-16
TWI766954B true TWI766954B (en) 2022-06-11

Family

ID=61622805

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107106971A TWI766954B (en) 2017-03-03 2018-03-02 Split exit pupil heads-up display systems and methods

Country Status (6)

Country Link
EP (1) EP3615981A1 (en)
JP (1) JP7025439B2 (en)
KR (1) KR20190119093A (en)
CN (1) CN110573930B (en)
TW (1) TWI766954B (en)
WO (1) WO2018160765A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114397755A (en) * 2020-05-15 2022-04-26 华为技术有限公司 Display device, method and vehicle
EP4120004A1 (en) 2021-07-16 2023-01-18 Coretronic Corporation Head up display
GB2610205B (en) * 2021-08-26 2024-08-14 Envisics Ltd Field of view optimisation
KR102697105B1 (en) * 2021-09-17 2024-08-20 네이버 주식회사 Head up display and control method thereof
TWI820848B (en) * 2022-05-20 2023-11-01 中強光電股份有限公司 Head up display device
CN114815264B (en) * 2022-05-26 2023-11-07 业成科技(成都)有限公司 Image generating unit, assembling method thereof, head-up display system and vehicle

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4218111A (en) 1978-07-10 1980-08-19 Hughes Aircraft Company Holographic head-up displays
US4613200A (en) 1984-07-09 1986-09-23 Ford Motor Company Heads-up display system with holographic dispersion correcting
AU590835B2 (en) * 1986-03-31 1989-11-16 Dainippon Printing Co. Ltd. Integrated head-up and panel display unit
KR960016721B1 (en) 1993-12-23 1996-12-20 현대전자산업 주식회사 Vehicle head-up display device for hologram light particle
JPH07257228A (en) * 1994-03-18 1995-10-09 Nissan Motor Co Ltd Display device for vehicle
JPH08122737A (en) * 1994-10-28 1996-05-17 Shimadzu Corp Headup display device for vehicle
US6014259A (en) * 1995-06-07 2000-01-11 Wohlstadter; Jacob N. Three dimensional imaging system
FR2818393B1 (en) 2000-12-19 2003-10-10 Thomson Csf METHOD FOR MANUFACTURING A SET OF HIGH HEAD SIGHTS ADAPTED TO DATA TYPE EQUIPMENT
JP2004272230A (en) * 2003-02-19 2004-09-30 Pentax Corp Scanning optical system
US7334901B2 (en) * 2005-04-22 2008-02-26 Ostendo Technologies, Inc. Low profile, large screen display using a rear projection array system
EP1798587B1 (en) 2005-12-15 2012-06-13 Saab Ab Head-up display
US7623560B2 (en) 2007-09-27 2009-11-24 Ostendo Technologies, Inc. Quantum photonic imagers and methods of fabrication thereof
US20100033813A1 (en) * 2008-08-05 2010-02-11 Rogoff Gerald L 3-D Display Requiring No Special Eyewear
US8098265B2 (en) 2008-10-10 2012-01-17 Ostendo Technologies, Inc. Hierarchical multicolor primaries temporal multiplexing system
US8629903B2 (en) 2009-04-02 2014-01-14 GM Global Technology Operations LLC Enhanced vision system full-windshield HUD
US9244275B1 (en) * 2009-07-10 2016-01-26 Rockwell Collins, Inc. Visual display system using multiple image sources and heads-up-display system using the same
WO2011015843A2 (en) * 2009-08-07 2011-02-10 Light Blue Optics Ltd Head up displays
WO2011029409A1 (en) * 2009-09-14 2011-03-17 Wang Xiaoguang Three-dimensional image reproduction display method for naked eyes
TW201144861A (en) * 2010-06-07 2011-12-16 Cheng Uei Prec Ind Co Ltd Head-up Display System
US7982959B1 (en) 2010-08-02 2011-07-19 Matvey Lvovskiy Head-up display
BR112013014975A2 (en) * 2010-12-16 2020-08-11 Lockheed Martin Corporation collimation display with pixel lenses
US20120224060A1 (en) * 2011-02-10 2012-09-06 Integrated Night Vision Systems Inc. Reducing Driver Distraction Using a Heads-Up Display
US20130021226A1 (en) * 2011-07-21 2013-01-24 Jonathan Arnold Bell Wearable display devices
JP5370427B2 (en) * 2011-07-24 2013-12-18 株式会社デンソー Head-up display device
JP6031741B2 (en) * 2011-10-06 2016-11-24 日本精機株式会社 Display device
US8553334B2 (en) 2011-11-16 2013-10-08 Delphi Technologies, Inc. Heads-up display system utilizing controlled reflections from a dashboard surface
US8854724B2 (en) * 2012-03-27 2014-10-07 Ostendo Technologies, Inc. Spatio-temporal directional light modulator
JP6004706B2 (en) * 2012-04-04 2016-10-12 三菱電機株式会社 Display device and head-up display system provided with the same
US10215583B2 (en) * 2013-03-15 2019-02-26 Honda Motor Co., Ltd. Multi-level navigation monitoring and control
DE102013208625A1 (en) * 2013-05-10 2014-11-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. MULTIAPERTUR PROJECTION DISPLAY AND INDIVIDUAL PRODUCER FOR SUCH A
US9429757B1 (en) * 2013-11-09 2016-08-30 Jonathan Peeri System for providing projected information combined with outside scenery
JP6497158B2 (en) * 2014-05-16 2019-04-10 株式会社リコー Display device, moving body
JP6658529B2 (en) * 2014-09-08 2020-03-04 ソニー株式会社 Display device, display device driving method, and electronic device
JP6262111B2 (en) * 2014-09-29 2018-01-17 矢崎総業株式会社 Vehicle display device
US10247941B2 (en) * 2015-01-19 2019-04-02 Magna Electronics Inc. Vehicle vision system with light field monitor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1188727C (en) * 1995-06-07 2005-02-09 雅各布·N·沃斯塔德特 Three-dimensional imaging system
WO2011074209A1 (en) * 2009-12-14 2011-06-23 パナソニック株式会社 Transmissive display device
TW201245764A (en) * 2011-05-13 2012-11-16 New Young Optical Tech Co Ltd Semi-reflective concave mirror head-up display
US20160062113A1 (en) * 2014-09-02 2016-03-03 Ostendo Technologies, Inc. Split Exit Pupil Heads-Up Display Systems and Methods
CN205281018U (en) * 2015-12-24 2016-06-01 深圳点石创新科技有限公司 New line display device

Also Published As

Publication number Publication date
CN110573930A (en) 2019-12-13
WO2018160765A1 (en) 2018-09-07
KR20190119093A (en) 2019-10-21
TW201837539A (en) 2018-10-16
EP3615981A1 (en) 2020-03-04
CN110573930B (en) 2022-07-22
JP7025439B2 (en) 2022-02-24
JP2020510236A (en) 2020-04-02

Similar Documents

Publication Publication Date Title
US10539791B2 (en) Split exit pupil multiple virtual image heads-up display systems and methods
US9494794B2 (en) Split exit pupil heads-up display systems and methods
TWI766954B (en) Split exit pupil heads-up display systems and methods
US11828938B2 (en) Information display apparatus
US10845591B2 (en) Split exit pupil heads-up display systems and methods
WO2018042844A1 (en) Information display device
TWI728094B (en) Split exit pupil heads-up display systems and methods
EP3447561B1 (en) Head-up display device
US20080204731A1 (en) Optical device with tilt and power microlenses
JP7200317B2 (en) head-up display device
KR102645824B1 (en) light field projector device
JP2020020914A (en) Information display device and information display method
US20040125461A1 (en) Imaging optical system, image display apparatus and imaging optical apparatus
JP2023002519A (en) Information display device
CN117518465A (en) Image source device, refraction element, display device, traffic equipment and display method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees