TW200818114A - System and method for automated calibration and correction of display geometry and color - Google Patents

System and method for automated calibration and correction of display geometry and color

Info

Publication number
TW200818114A
Authority
TW
Taiwan
Prior art keywords
display
distortion
image
color
viewing surface
Prior art date
Application number
TW96129642A
Other languages
Chinese (zh)
Other versions
TWI411967B (en)
Inventor
Zorawar S Bassi
Masoud Vakili
Original Assignee
Silicon Optix Inc
Priority date
Filing date
Publication date
Application filed by Silicon Optix Inc
Publication of TW200818114A
Application granted
Publication of TWI411967B

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Video Image Reproduction Devices For Color Tv Systems (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Projection Apparatus (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Various embodiments are described herein for a system and method for calibrating a display device to eliminate distortions arising from sources such as lenses, mirrors, projection geometry, lateral chromatic aberration and color misalignment, and color and brightness non-uniformity. Calibration for distortions that vary over time is also addressed. Sensing devices coupled to processors can be used to sense display characteristics, which are then used to compute distortion data and to generate pre-compensating maps that correct the display distortions.

Description

200818114 九、發明說明: I:發明所屬之技術領域3 優先權主張 本申請案係主張2006年八月11日申請在先之美國臨時 5 專利申請案第60/836,940號和2007年五月11曰申請在先之 美國臨時專利申請案第60/836,940號的優先權。 發明領域 各種實施例係參照顯示器裝置之校準加以討論。 L· Tltr 10 發明背景 大多數之影像顯示器裝置,會呈現某種形式之幾何上 或色彩上之失真。此等失真可具有多種因素,諸如幾何條 件背景、系統中之各種光學組件的非理想性質、各種組件 之欠對齊、導致幾何失真之複雜顯示表面和光學路徑、和 15 面板内之瑕疵、等等。取決於系統,失真之量會有大幅之 變化,自不可察覺的至十分可厭的。上述失真之效應亦會 有變化,以及可能會招致影像色彩方面之改變,或影像形 狀或幾何條件方面之改變。 【發明内容3 20 發明概要 在一個特徵中,本說明書所說明之至少一個實施例, 提供了一種可供一個具有觀看表面之顯示器裝置使用的顯 示器校準系統。此種顯示器校準系統係包含有:至少一個 感測裝置,其係被適配來感測上述觀看表面之形狀、尺度、 5 200818114 邊界、和方位中的至少一個有關之資訊;和至少一個處理 器,其係耦合至該至少一個感測裝置,以及係適配使依據 至少一個感測裝置所感測之資訊,來計算該顯示器裝置之 特性。 5 在另一個特徵中,本說明書所說明之至少一個實施 例,提供了一種可供一個具有觀看表面之顯示器裝置使用 的顯示器校準系統。此種顯示器校準系統係包含有:至少 一個感測裝置,其係被適配來感測上述觀看表面上所顯示 之測試影像的資訊;和至少一個耦合至該至少一個感測裝 10 置之處理器,此種至少一個處理器,係適配使依據所感測 之資訊,來計算顯示失真。該等預補償圖可由表面功能來 實現。當該等預補償圖應用至顯示前之輸入影像資料時, 一個在該觀看表面上所成之顯示影像,大體上並無失真。 在另一個特徵中,本說明書所說明之至少一個實施 15 例,提供了一種可供一個具有觀看表面之顯示器裝置使用 的顯示器校準系統。此種顯示器校準系統係包含有:至少 一個影像感測裝置,其係被適配來感測來自該觀看表面上 所顯示之測試影像的資訊;和至少一個耦合至該至少一個 影像感測裝置之處理器,此種至少一個處理器,係適配使 20 依據所感測之資訊,來計算顯示失真,使依據每個小片内 之顯示失真的嚴格性,將該觀看表面分割成一些小片,以 及使產生每個小片内之顯示失真有關的預補償圖,以便當 此等預補償圖應用至顯示前之輸入影像資料時,使一個在 該觀看表面上所成之顯示影像,大體上並無失真。 6 200818114 在另一個特徵中,本說明書所說明之至少一個實施 例,提供了一種可供一個具有觀看表面之顯示器裝置使用 的顯示器校準系統。此種顯示器校準系統係包含有:至少 一個影像感測裝置,其係被適配來獨立地感測該觀看表面 5 上所顯示之測試影像的至少一個色彩分量有關之色彩資 訊;和至少一個耦合至該至少一個影像感測裝置之處理 器,此種至少一個處理器,係適配使依據所感測之資訊, 來計算色彩不均勻性,以及使產生至少一個色彩分量有關 之至少一個色彩校正圖,以便當該至少一個色彩校正圖, 10 應用至顯示前之輸入影像資料時,一個在該觀看表面上所 成之顯示影像,大體上無至少一個色彩不均勻性。 在另一個特徵中,本說明書所說明之至少一個實施 例,提供了一種可供一個具有觀看表面之顯示器裝置使用 的顯示器校準系統。此種顯示器校準系統係包含有:至少 15 —個影像感測裝置,其係被適配來感測該觀看表面上所顯 示之個別色彩分量測試影像的資訊;和至少一個耦合至該 至少一個影像感測裝置和該顯示器裝置之處理器,此種至 少一個處理器,係適配使依據所感測之資訊,來獨立地計 算幾何顯示失真,以及使獨立地產生至少一個色彩分量有 20 關之至少一個預補償圖,以便當該至少一個色彩校正圖, 應用至顯示前之輸入影像資料時,一個在該觀看表面上所 成之顯示影像,大體上無至少一個色彩相依性幾何失真。 在另一個特徵中,本說明書所說明之至少一個實施 例,提供了一種可使用在一個具有彎曲觀看表面之投影系 7 200818114 統中的顯示器校準方法,此種方法包含之步驟有: 使用多重之投影器,將一個影像之不同部分,投射至 上述彎曲觀看表面之對應部分上面;以及 使該影像之每一部分,大體上聚焦在上述彎曲觀看表 5 面之對應部分上面,以使該影像以最佳化之聚焦,整體形 成在該彎曲觀看表面上。 在另一個特徵中,本說明書所說明之至少一個實施 例,提供了一種可使用在一個具有彎曲觀看表面之投影系 統中的顯示器校準方法,此種方法包含之步驟有: 10 測量自該彎曲觀看表面至該投射影像之聚焦平面的多 數距離;以及 偏移該聚焦平面,直至該等多數距離之函數被極小化 而得到最佳化的聚焦為止。 圖式簡單說明 15 為對本說明書所說明之實施例和/或相關實現體有較 佳之理解,以及為更清楚顯示何以彼等可能被實現,茲將 參照所附僅作為範例之繪圖,彼等係顯示至少一個範例性 實施例和/或相關之實現體: 第1圖係一種自動化校準和校正系統之範例性實施例 20 的簡圖; 第2a和2b圖係曲面螢幕幾何結構之例示圖; 第3圖係幾何失真中之上溢、下溢、和失配範例的例示 圖, 第4圖係一種校準影像測試樣式之範例的例示圖; 8 200818114 第5圖係一種校正幾何條件和所涉及之各種坐標空間 的例示圖; 第6圖係一種校準資料產生器之範例性實施例的例示 圖, 5 第7圖係一種標度和原點之最佳化的例示圖; 第8圖係一種多重之色彩校準資料產生器的範例性實 施例之例示圖; 第9圖係一種色彩不均勻性校準有關之設置的例示圖; 第10圖係一種色彩不均勻性校正有關之校準資料產生 10 器的範例性實施例之例示圖; 第11圖係一種翹曲資料產生器之範例性實施例的例示 圖; 第12圖係一種顯示器校正有關之小片分割的例示圖; 第13圖係一種數位翹曲單元之範例性實施例的例示 15 圖; 弟14圖係一種觀看表面的形狀和相對方位決定有關之 設置的示意圖; 第15圖係一種失焦測試樣式之例示圖; 第16圖係一種對焦測試樣式之例示圖; 20 第17圖係一種由多重投影器和一個曲面螢幕所組成之 校準系統的範例性實施例之部份例示圖; 第18圖係一種用以顯示不同投影器之聚焦平面而由第 17圖之多重投影器和一個彎曲形螢幕所組成的校準系統之 範例性實施例的部份例示圖; 9 200818114 第19圖係一種可極小化一個距離函數之聚焦技術的範 例之例示圖; 第20圖係另一個由多重投影器和一個曲面螢幕所組成 而可調整投影器位置使影像聚焦最佳化之校準系統的範例 5 性實施例之部份例示圖; 第21圖係一種使用多重相機之校準系統的範例性實施 例之部份例示圖; 第22圖係一種具有一種可使顯示器自我校準且容許動 態失真校正之積體化校準系統的背投影電視(RPTv)之範例 10 性實施例的部份例示圖; 第23圖係一種由多重投影器和多重感測裝置之校準系 統的範例性實施例之部份例示圖; 第24圖係一種使用上述觀看表面之實體邊緣和邊界的 校準系統之範例性實施例的部份例示圖; 15 第25圖係一種使用一種聚焦技術來決定一個彎曲狀顯 不杰螢幕之形狀的校準系統之範例性實施例的部份例示 圖;而 卜第26_係_個使用—種聚焦技術來決定—個波浪形 八m螢幕之形狀的权準系統之範例性實施例的部份示 2〇 圖。 、 I:實施方式】 較佳實施例之詳細說明 春理應瞭解的是,為例示之單純和清晰計在被認為適 之清況下#考數字在諸圖之間,可能會被重複來指明 200818114 對應或類似之元件。此外,所列舉之眾多特定細節,係為 提供本說明書所說明之實施例和/或實現體的徹底理解。 然而,本技藝之一般從業人員理應理解的是,本說明書所 說明之實施例和/或實現體,在實現上可能並不需要此種 5 特定之細節。在其他之實例中,一些習見之方法、程序、 和組件並未詳加說明,俾不致混淆本說明書所說明之實施 例和/或實現體。此外,此說明内容不應被視為限制本說 明書所說明之實施例的界定範圍,而是更確切地說明本說 明書所說明之實施例的結構和/或運作。 10 一些顯示器裝置有關之重要失真係包括··透鏡組件所 致之失真、來自面鏡(彎曲狀或平面狀)反射組體之失真;投 影幾何條件所致之失真,諸如遮光角(off angle)和旋轉投射 (梯形’旋轉)和曲面螢幕上面之投射;每個色彩有所不同之 橫向色像差和失真,諸如多重微顯示器裝置中之欠對齊和 15 
失聚(misconvergence);色彩和輝度(亮度)不均勻性、和光 學聚焦問題(球面像差、像散性、等等)所致之失真。 第一組見到的是最後影像中之幾何失真,亦即,輪入 影像形狀未被保留。色像差亦屬一種幾何失真;然而,失 真係各色彩分量而有變化。此等失真在投射(前方或後方) 20 式顯示器裝置中係屬常見,以及將集體被稱為幾何失真。 色度和亮度不均勻性,會影響到所有之顯示器裝置,因而 —個意味著屬固定亮度或色度之信號,見到是橫跨一個顯 示器裝置之表面而有變化,或不同於其預期之感覺。此種 類型之失真,可能是由一些具有變化之亮度、橫跨顯示器 11 200818114 之變化的光學路徑長度、和面板(例如,IXD、UX)S、電 ΓΠ)光源中不均句之感測器響應所引起。聚焦相關之 失真,會使-個影像模糊,以及是由於使物體平面上之不 同點聚焦在不同的影像平面上所致。本說明書所舉 =實施财,所㈣的是某些與聚焦和聚焦深度相關之議 10 15 20 本說明書所提出之實施例,說明 裝置使消除或減低至少某些上文所逃之失真:系= 法。此荨實施例可使校準資料和所成校正之產生與該校正 之應用兩者自動化。亦針對的是彼等長期變化之失直有關 辦拍^刪段(產嶋_係㈣:特性化該顯 不斋,拍攝該顯示器裝置上面觀測到之特殊性測試樣式, 舉例“,透過一個類似高解析度相機之感測裝置;以及 自此等影㈣取出所需之資料(料,校準細。該校正階 段係涉及經由—個電子校正Μ,使該影像預先失直,促 成该榮幕上面之無失真影像。亦被提出的是-個用以達成 最佳之顯示II聚焦和拍攝的測試樣式之機構。 第1圖係顯示-種自動化校準和校正系統之範例性實 施例的簡圖,其可校正-個顯示器裝置之觀看表面16上所 ⑹的影像。此種自動化校準和校正系統係包含有:測試 影像產生器14、感測裝置11、校準資料產生器12、翹曲產 生器b、和數位翹曲單元15。該顯示器裝置,可能是一個 電視(背投影電視、LCD、電漿、㈣…個前投影系統(亦 I3们’、有螢幕之投影II)、或任何其他可呈現影像之系 12 200818114 :有彼=了 —片觀看表面,看表面 =-=與背景有區別之邊界或邊框 將 會疋一個環繞鶴㈣㈣(觀看表面)之實 該邊界並非必需為屏框或某種實體形貌。通常,一個邊界 =與該實體觀看表㈣上之任何可藉由某種裝心背景 10 15 ==之_聯結。舉例而言,一個藉由該顯示器 置投射至該顯示器且展現在該實體屏框内部的矩 形輪廓,可被確認為該邊界。在本說—彳舉之範例性 實施例中,該觀看表面16以校準和校正之觀點而論,係被 視為該實際顯示器裝置位於上述至少在某些情況中可能為 屏框本身之被識別的邊界内之區域。該邊界亦被稱為上述 在第1圖巾顯示為n峨看表面16之觀看表面邊框。 就具有變化深度之曲面榮幕而言,該顯示器可採用兩 個主要之觀察點。该觀看平面可被視為要使影像成為正確 之形式的聚焦平面’其可能會不同於上述之實體觀看表面 16,或者僅包括該觀看表面16之部份實體。上述聚焦平面 上的所有點,係具有相同之聚焦深度。在此種情況中,上 述感測裝置(亦即,觀測器)之實體標記或視野,將會決定出 該聚焦平面邊界(見第2a圖)。該觀看表面邊框在可用時,係 20被用來決定上述相機相對於觀看表面16之方位。 或者,該整個螢幕可能被觀看,而使該實體屏框形成 上述呈彎曲狀之邊界(見第2b圖)。在此,該螢幕上面之不同 點,係具有不同之水焦/朱度。該校準和校正運作,係專注 於使最後之影像與該彎曲狀邊界相匹配。 13 200818114 5亥等兩個觀察點可使相結合,藉以識別該等校準和才— 正所f之不同顯示區域。舉例而言,該邊界可魏為上^ 實體屏框的-個組合,除了上述拍攝之影像輪靡,係在— 個特定之聚焦平面處外。一個彎曲狀邊界,亦可能藉由浐 射個幫曲狀輪廊,被迫在一個平坦之顯示哭 一 ”、只小态上面。此可 被視為一種特殊情況,其中,該邊界係呈彎曲狀,但螢幕 本身係呈平坦,亦即,具有一個無窮大之曲率半徑。 10 15 20 就一些涉及形狀或幾何條件中之變化的失真而士, 述觀看表面16上被觀看到之影像(在校正之前),可能=會完 全被包括(上溢)。此係顯示在第3圖中。在情況(勾中,影$ ABCD係上溢而可完全包含該觀看之表面邊框18,而在情況 作)中,該影像完全係被包括(下溢)。情況((〇為一種中^情 況(失配),其中,該影像係部份覆蓋著上述之觀看表面16。 所有三種情形可能係由前方或後方投影系統所引起,以及 可以本系統來校正。 該測試影像產生器Μ,提供了-些包含有為校正程序 而設計之特殊樣式的影像;此等影像亦被稱為校準測試樣 式。該等可被使用而最常被使用之校準測試樣式係包括·· 規則(非連接)袼線樣式、圓形、正方形、水平、和 ;度=線形、同心之樣式、長方形魯 又上文所述之色衫版本,可被用於橫向色像差 :正=均句性校正。該等樣式中之形狀亦被稱作形 等形貌之數^糸具有其被明確界定之形貌特性,亦即,該 、位置、尺度、邊界、色彩、和任何其他之 14 200818114 界定參數係屬已知。 數種範例性校正樣式,係顯示在第4圖之面板⑷至㈣ 内。彼等用以顯示該等特性之準線(中心位置、半徑、等等) 非屬該等測試樣式之部分。此等測試樣式之色彩和形狀鐵 5動,亦可被用來交換黑白色,以彩色取代黑白色,使^ 個樣式内之不同形貌有關的不同色彩,处 、〜合一個樣式内之 不同形狀,以及改變灰度和色度。 此等使用原色之樣式的版本,係被用來校正橫向色像 差。一個範例性色彩樣式係顯示在面板(幻内,其中,該等 10水平條線、垂直條線、和彼等之交點,全係不同之色彩。 每個樣式呈現了某些明確之特性,其中最值得注意 的,是該等形狀之中心位置和彼等之邊界,彼等在數學上 可在分別被視為點和線。 該感測裝置11,可紀錄該觀看表面16上見到之校準測 15試樣式。為校正幾何失真,該感測裝置11可能是一個相機。 該相機之解析度和拍攝格式,可依據上述校正中所需之準 確度來加以選擇。當校正色度和亮度之不均勻性時,該感 測裝置11可能是一個色彩分析儀(例如,光度計或分光計)。 在此一範例性實施例中,為校正幾何誤差,該感測裝 20置11可被置於相對於上述顯示器裝置之任何位置處。定位 感測裝置11中之此種自由度之所以可能所基於的事實是, 该專拍攝影像可藉由該感測裝置11之定位,而容許包含失 真成分。除非該感測裝置11直接觀看該觀看表面16(亦即, 正上方),其中將會因該感測裝置丨丨而有一個梯形成分。此 15 200818114 種變形或會發生在多達三條之轴線中,彼等係被考慮為多 重軸線梯形失真成分。 此外,由於該感測裝置u之光學器件’諸如相機,係 具有其自身之失真’其中亦有—個被納入考慮之光學失真 5成分。其他類型之感測裝置11,係具有其他固有之失真。 上述相機或感測裝置11所導入之結合失真,將被稱作相機 失真。此齡機失真係在產生额衫料時被蚊及補償。 為决疋上述之相機失真,在至少一個範例性實施例 中,所使用的是實體參照標記,彼等無失真之方位/形狀 10係屬已知。此等標記會被該相機拍攝到,以及藉由使彼等 在上述拍攝到之影像中的方位/形狀,與彼等無失真之方 位/形狀相比較,上述之相機失真便可被決定。一個自然 之標記為邊框(邊界)本身,其已知係屬-個即定之方位和形 15 (U在現實世界中屬無失真之長方形)。該邊框亦為該校 運作70成所依之參考,換言之,上述校正過之影像,相 對於該邊框應為直線形。所以,在校正幾何失真時,上述 機拍攝到之影像’應包括該等觀看榮幕邊界(亦即,邊框 18)〇 20 在另個其中邊界屬不可被偵測之範例性實施例中, j機中之感測器,係被用來感測來自上述螢幕上面之發射 =的信號’藉以決定㈣於該觀看表面16之相機失真。該 ^就之測量值’會產p個如該相機所見之觀看表面16 的映射圖。 田抆正毛田、向色像差時,該相機將會拍攝κ組影像,其中 16 200818114 κ為色彩分量之數目,舉例而言,該等三原色刪。第4圖 中之至少某些測试樣式,將會就每個色彩分量加以重複。 亮度和色彩(亮度和色度)校正在完成上,係無關乎幾何 校正之相關性。在一些投影系統中,此等亮度和色彩校正, 5係在幾何失真之校正後被完成。在其中並未呈現幾何失真 之平面顯示器裝置中,亮度和色彩校正係直接被完成。在 -個範例性實施例中,-個感聰置,諸如色彩分析儀, 係直接被置於該觀看表面16處或近鄰,藉以擷取色彩資 訊。在此種情況中,上述感測震置定位有關之校正非屬必 
1〇需。該感測裝置11可能拍攝該整個影像或特殊點處之資 汛。在後者之情況中,來自該螢幕上面之點格線的資料需 要被拍攝。若該感測裝置11,係在一個相對於上述觀看表 面16之梯形位置中,則與上文之相機者相類似,其由於定 位所致之校正便需要被完成。 15 就一些具有幾何失真之顯示器裝置而言,亮度和色彩 校正,應在幾何校正已被完成之後被完成。此意謂的是f 該顯示器裝置,首先係就包括色彩相依性者之幾何失真力口 以才父正。幾何校正後之色彩有關的校正,可容許上述幾何 校正所導入之任何額外的色彩失真被考慮到,以及確使唯 2〇有包含最後影像(亦即,無背景)之區域被校正。 在此一範例性實施例中,該校準資料產生器12,可分 析該等影像,以及可擷取上述翹曲產生器13所使用之袼式 中的杈準資料,後者接著可提供翹曲資料給該數位翹 元15 〇 17 200818114 數位翹曲運作,通常可被描述 射圖,使依據方程式⑴來執行該等輪_固預先補償映 像坐標間之數學變換。 别入影像坐標與輸出影 (1) 在方程式⑴中,,蓋了該等輪人 5定了輸入像素之空間坐標,弍給定 "標,給 (ά)給定了上述映射至輪出空間之J:像素之色彩’ 標’以及e;給定了對應之像素輪出色/像素的空間坐 統而言,ί僅為一個RGB值。方程式了。就一個三原色系 式,而成-種格線之形式。—個處^)為上述校正之表示 )格式係有困難,其中,該校正勢必要接使用—個格線 諸如就視訊而言之60 Hz訊框率。因此,▲才之方式來應用, 將方程式⑴轉換成一種更具硬體效率之2曲產生器,可 產生器12,係由三個子產生器所組 “ :β亥校準資料 條件、橫向色彩、和色彩非均勻性。精以分別校正幾何 15 在下文中,將首先討論上述校正 料。在以下列舉之範例中,該等被、/、件之校準資 為具有-種格線樣式者,諸如第刀之初步娜試樣式, 顯示的樣式。當該等條線/練所 第4时之面板⑷至⑷内的樣式亦可被=、。—個格線時, 彼等類似格線型樣式之測試影像,提供了一组以上述 輸入空間内之已知位置為中心的形狀。該等中心可被指明 為(χζ ),其中,i涵蓋了兮梦 T油-了 "亥#形狀之範圍。在此存在有總200818114 IX. Description of the invention: I: Technical field to which the invention belongs 3 Priority claim This application claims the application of the prior US Provisional Patent Application No. 60/836,940 and May 11, 2007 on August 11, 2006. Priority is claimed in U.S. Provisional Patent Application Serial No. 60/836,940. FIELD OF THE INVENTION Various embodiments are discussed with reference to calibration of display devices. L·Tltr 10 BACKGROUND OF THE INVENTION Most image display devices exhibit some form of geometric or color distortion. Such distortions can have a variety of factors, such as geometric background, non-ideal properties of various optical components in the system, under-alignment of various components, complex display surfaces and optical paths that cause geometric distortion, and flaws within the 15 panels, etc. . Depending on the system, the amount of distortion can vary greatly from undetectable to very annoying. The effects of the aforementioned distortions may also vary, and may result in changes in the color of the image, or changes in the shape or geometry of the image. SUMMARY OF THE INVENTION 3 In one feature, at least one embodiment described in this specification provides a display calibration system for use with a display device having a viewing surface. Such a display calibration system includes: at least one sensing device adapted to sense information relating to at least one of a shape, a scale, a 5200818114 boundary, and an orientation of the viewing surface; and at least one processor And coupled to the at least one sensing device and adapted to calculate characteristics of the display device based on information sensed by the at least one sensing device. In another feature, at least one embodiment described in this specification provides a display calibration system for use with a display device having a viewing surface. Such a display calibration system includes: at least one sensing device adapted to sense information of a test image displayed on the viewing surface; and at least one processing coupled to the at least one sensing device The at least one processor is adapted to calculate display distortion based on the sensed information. These precompensation maps can be implemented by surface functions. When the pre-compensation map is applied to the input image data before display, a display image formed on the viewing surface is substantially free of distortion. In another feature, at least one of the implementations illustrated in this specification provides a display calibration system for use with a display device having a viewing surface. 
The display calibration system includes: at least one image sensing device adapted to sense information from a test image displayed on the viewing surface; and at least one coupled to the at least one image sensing device The processor, the at least one processor, is adapted to cause 20 to calculate display distortion based on the sensed information, so that the viewing surface is divided into small pieces according to the strictness of display distortion in each small piece, and A precompensation map relating to display distortion within each patch is generated to provide substantially no distortion of the displayed image formed on the viewing surface when the precompensation map is applied to the input image material prior to display. 6 200818114 In another feature, at least one embodiment described in this specification provides a display calibration system for use with a display device having a viewing surface. Such a display calibration system includes: at least one image sensing device adapted to independently sense color information relating to at least one color component of a test image displayed on the viewing surface 5; and at least one coupling a processor to the at least one image sensing device, the at least one processor adapted to calculate color non-uniformity based on the sensed information, and to generate at least one color correction map associated with the at least one color component Therefore, when the at least one color correction map, 10 is applied to the input image material before the display, a display image formed on the viewing surface is substantially free of at least one color unevenness. In another feature, at least one embodiment described in this specification provides a display calibration system for use with a display device having a viewing surface. The display calibration system includes: at least 15 image sensing devices adapted to sense information of individual color component test images displayed on the viewing surface; and at least one coupled to the at least one image a sensing device and a processor of the display device, the at least one processor adapted to independently calculate geometric display distortion based on the sensed information, and to independently generate at least one color component having at least 20 levels A pre-compensation map, such that when the at least one color correction map is applied to the input image material before display, a display image formed on the viewing surface is substantially free of at least one color dependency geometric distortion. In another feature, at least one embodiment described in this specification provides a display calibration method that can be used in a projection system 7 200818114 having a curved viewing surface, the method comprising the steps of: using multiple a projector that projects a different portion of an image onto a corresponding portion of the curved viewing surface; and causes each portion of the image to be substantially focused on a corresponding portion of the curved viewing surface 5 to maximize the image The focus of the optimisation is integrally formed on the curved viewing surface. 
In another feature, at least one embodiment described in this specification provides a display calibration method that can be used in a projection system having a curved viewing surface, the method comprising the steps of: 10 measuring from the curved viewing a plurality of distances from the surface to the focal plane of the projected image; and offsetting the focus plane until the functions of the plurality of distances are minimized to obtain an optimized focus. Brief Description of the Drawings 15 For a better understanding of the embodiments and/or related implementations of the present specification, and for the purpose of clarity, it will be understood that At least one exemplary embodiment and/or related implementation is shown: Figure 1 is a simplified diagram of an exemplary embodiment 20 of an automated calibration and calibration system; and Figures 2a and 2b are diagrams of a curved surface geometry; 3 is an illustration of an example of overflow, underflow, and mismatch in geometric distortion, and Figure 4 is an illustration of an example of a calibration image test pattern; 8 200818114 Figure 5 is a calibration geometry and involved An illustration of various coordinate spaces; Figure 6 is an illustration of an exemplary embodiment of a calibration data generator, 5 Figure 7 is an illustration of an optimization of the scale and origin; Figure 8 is a multiple An illustration of an exemplary embodiment of a color calibration data generator; Figure 9 is an illustration of a setting related to color unevenness calibration; Figure 10 is a color unevenness correction FIG. 11 is an illustration of an exemplary embodiment of a warpage data generator; FIG. 12 is an illustration of a display correction related tile segmentation; Figure 13 is an illustration of an exemplary embodiment of a digital warping unit; Figure 14 is a schematic view of the arrangement of the shape and relative orientation of the viewing surface; Figure 15 is an illustration of a defocusing test pattern. Figure 16 is an illustration of a focus test pattern; 20 Figure 17 is a partial illustration of an exemplary embodiment of a calibration system consisting of a multi-projector and a curved screen; Figure 18 is a Partial illustration of an exemplary embodiment of a calibration system comprising a multi-projector and a curved screen of Figure 17 showing the focus plane of different projectors; 9 200818114 Figure 19 is a minimization of a distance function An illustration of an example of a focus technique; Figure 20 is another image of a multi-projector and a curved screen that adjusts the position of the projector to make the image Partial illustration of an exemplary embodiment of a focus optimized calibration system; FIG. 
21 is a partial illustration of an exemplary embodiment of a calibration system using multiple cameras; Partial illustration of an exemplary embodiment of a rear projection television (RPTv) of a self-calibrating display that allows self-calibration and dynamic distortion correction; Figure 23 is a calibration system with multiple projectors and multiple sensing devices Partial illustration of an exemplary embodiment; Figure 24 is a partial illustration of an exemplary embodiment of a calibration system using physical edges and boundaries of the viewing surface; 15 Figure 25 is a use of a focusing technique Partial illustration of an exemplary embodiment of a calibration system that determines the shape of a curved display screen; and the use of a focusing technique to determine the shape of a wavy eight m screen A partial diagram of an exemplary embodiment of a barebones system is shown. I: EMBODIMENT OF THE PREFERRED EMBODIMENT The detailed description of the preferred embodiment is to be understood that the simplicity and clarity of the illustrations are considered to be appropriate. The number of test numbers may be repeated between the figures to indicate 200818114. Corresponding or similar components. In addition, numerous specific details are set forth to provide a thorough understanding of the embodiments and/or embodiments disclosed herein. However, it will be understood by one of ordinary skill in the art that the embodiments and/or implementations described herein may not require such specific details. In other instances, some of the methods, procedures, and components of the present invention are not described in detail, and are not intended to confuse the embodiments and/or implementations described herein. In addition, the description should not be taken as limiting the scope of the embodiments described in the specification, but rather the structure and/or operation of the embodiments illustrated in the specification. 10 Some of the important distortions associated with some display devices include distortion caused by the lens assembly, distortion from the mirror (curved or planar) reflection group, distortion caused by projection geometry, such as off angle And rotary projection (trapezoidal 'rotation) and projection on the curved screen; lateral chromatic aberration and distortion for each color, such as under-alignment and 15 misconvergence in multiple microdisplay devices; color and luminance ( Distortion due to brightness) unevenness, and optical focusing problems (spherical aberration, astigmatism, etc.). The first group sees the geometric distortion in the final image, that is, the shape of the wheeled image is not preserved. Chromatic aberration is also a geometric distortion; however, the distortion varies with each color component. Such distortions are common in projection (front or rear) 20-type display devices, and collectively referred to as geometric distortion. Chromaticity and brightness non-uniformity affects all display devices, so a signal that means a fixed brightness or chromaticity is seen to vary across the surface of a display device, or is different from its expectations. feel. This type of distortion may be caused by a variety of varying brightness, optical path lengths across display 11 200818114, and panel (eg, LCD, UX) S, eMule) Caused by the response. Focusing on the associated distortion blurs the image and causes the different points on the object plane to be focused on different image planes. 
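The calibration method summarized above for curved viewing surfaces measures a number of distances from the screen to the projected image's focus plane and offsets that plane until a function of the distances is minimized (see also the focus technique of Figures 18 and 19). The sketch below illustrates that search over candidate plane positions; the root-mean-square distance used as the cost is only one plausible choice, since the text leaves the exact function open.

```python
import numpy as np

def best_focus_plane(screen_depths: np.ndarray, candidates: np.ndarray) -> float:
    """Pick the focus-plane position that minimizes a cost over the measured
    screen-to-plane distances.

    screen_depths : depths of sample points on the curved screen, measured
                    along the projection axis (same units as `candidates`).
    candidates    : candidate focus-plane positions to evaluate.

    The root-mean-square distance below is an assumed cost; the specification
    only requires that some function of the distances be minimized.
    """
    costs = [np.sqrt(np.mean((screen_depths - z) ** 2)) for z in candidates]
    return float(candidates[int(np.argmin(costs))])

# Example: a gently curved screen sampled at five points (values in metres,
# invented for illustration).
depths = np.array([2.00, 2.03, 2.08, 2.03, 2.00])
plane = best_focus_plane(depths, np.linspace(1.9, 2.2, 301))
# `plane` lands near the depth that balances defocus across the surface.
```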
This specification refers to the implementation of the financial, (4) is a discussion related to the focus and depth of focus 10 15 20 The embodiments presented in this specification illustrate the device to eliminate or reduce at least some of the above-mentioned distortions: = law. This embodiment automates both the calibration data and the resulting corrections and the application of the calibration. Also aimed at the long-term changes in their short-term changes related to the filming ^ delete paragraph (production _ system (four): characterization of the display is not fast, shooting the display device above the special test style observed, for example, "through a similar The sensing device of the high-resolution camera; and the data required for the removal of the image (four) from this (four). The calibration phase involves the electronic correction of the image, which causes the image to be pre-deficient and promotes the top of the image. The distortion-free image is also proposed as a mechanism for achieving the best test pattern for focusing and shooting. Figure 1 is a simplified diagram showing an exemplary embodiment of an automated calibration and calibration system. It can correct images of (6) on the viewing surface 16 of a display device. Such an automated calibration and calibration system includes: a test image generator 14, a sensing device 11, a calibration data generator 12, a warp generator b And a digital warping unit 15. The display device may be a television (rear projection television, LCD, plasma, (four)... a front projection system (also I3', with a projection II of the screen), or any other The current image system 12 200818114: There is a = the film to see the surface, see the surface =-= the boundary or frame that is different from the background will be a surrounding crane (four) (four) (viewing the surface) the boundary is not necessarily a frame or a certain physical form. Usually, a boundary = any connection with the entity viewing table (4) can be linked by a certain type of background 10 15 ==. For example, a display is projected onto the display by the display. And the rectangular outline displayed inside the physical screen frame can be confirmed as the boundary. In the exemplary embodiment of the present invention, the viewing surface 16 is regarded as a calibration and correction point. The actual display device is located in an area within the identified boundary of the frame itself, at least in some cases. The boundary is also referred to as the viewing surface bezel shown as the n-view surface 16 in the first towel. For a curved surface with varying depths, the display can take two main observation points. The viewing plane can be thought of as a focus plane to make the image the correct form 'which may differ from the entity above Looking at the surface 16, or only a portion of the body of the viewing surface 16. All points on the focal plane have the same depth of focus. In this case, the entity of the sensing device (i.e., the observer) The mark or field of view will determine the focal plane boundary (see Figure 2a). When the viewing surface bezel is available, the system 20 is used to determine the orientation of the camera relative to the viewing surface 16. Alternatively, the entire screen may be Viewing, so that the physical screen frame forms the above-mentioned curved boundary (see Figure 2b). Here, the different points on the screen have different water coke/juju. 
The calibration and correction operation is focused on In order to match the final image with the curved boundary. 13 200818114 Five observation points such as 5H can be combined to identify the different display areas of the calibration and the correctness. For example, the boundary can be a combination of upper and lower physical screen frames, except for the image rims captured above, which are outside the particular focus plane. A curved boundary may also be forced to cry on a flat display by smashing a curved-shaped veranda. This can be regarded as a special case in which the boundary is curved. Shape, but the screen itself is flat, that is, has an infinite radius of curvature. 10 15 20 For some distortions involving changes in shape or geometry, view the image viewed on surface 16 (in correction) Previously, it may be completely included (overflow). This is shown in Figure 3. In the case (hook, the shadow $ABCD overflows and can completely contain the surface frame 18 of the viewing, and in the case In the case, the image is completely covered (underflow). The situation ((〇 is a middle case (mismatch), wherein the image portion is partially covered by the viewing surface 16 described above. All three cases may be from the front Or caused by the rear projection system, and can be corrected by the system. The test image generator 提供 provides some images containing special patterns designed for the calibration program; these images are also called calibration test patterns. The calibration test styles that can be used most often are: • Rule (non-joined) 样式 line style, circle, square, horizontal, and; degree = line, concentric style, rectangle and above The color shirt version can be used for lateral chromatic aberration: positive = uniform sentence correction. The shapes in these styles are also called the shape of the shape and shape, and have their well-defined shape characteristics. That is, the position, scale, boundary, color, and any other 14 200818114 defined parameters are known. Several exemplary correction patterns are shown in panels (4) to (4) of Figure 4. They are used The guidelines (center position, radius, etc.) showing these characteristics are not part of the test pattern. The color and shape of these test patterns are also used to exchange black and white, replaced by color. Black and white, different colors related to different shapes in the pattern, different shapes in a pattern, and changes in grayscale and chromaticity. These versions using the style of the primary color are used to correct Lateral chromatic aberration. An example The color style is displayed in the panel (the magic, in which the 10 horizontal lines, vertical lines, and their intersections, all of the different colors. Each style presents some specific characteristics, the most notable of which The center position of the shapes and their boundaries, which are mathematically considered to be points and lines, respectively. The sensing device 11 can record the calibration test seen on the viewing surface 16 To correct the geometric distortion, the sensing device 11 may be a camera. The resolution and shooting format of the camera can be selected according to the accuracy required in the above correction. When the unevenness of the chromaticity and brightness is corrected The sensing device 11 may be a color analyzer (for example, a photometer or a spectrometer). 
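The calibration test patterns described above are regular arrays of shapes (circles, squares, bars) whose centers, sizes, and colors are known in advance, so that their distorted positions can later be measured. The sketch below generates one such pattern, a grid of filled circles of the kind shown in Figure 4, together with the list of known centers; the dimensions, counts, and light-on-dark polarity are example values only.

```python
import numpy as np

def circle_grid_pattern(width, height, rows, cols, radius):
    """Render a calibration test pattern: a regular grid of filled circles on a
    dark background, returning the image and the known circle centers.

    The centers are ordered left-to-right, top-to-bottom, matching the row-wise
    ordering of shape centers used in the description.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    image = np.zeros((height, width), dtype=np.uint8)
    centers = []
    for r in range(rows):
        for c in range(cols):
            cx = (c + 0.5) * width / cols
            cy = (r + 0.5) * height / rows
            centers.append((cx, cy))
            image[(xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2] = 255
    return image, np.array(centers)

# A coarse 8 x 12 pattern at an assumed 1920 x 1080 resolution; one such image
# per primary color can be shown when calibrating lateral color.
pattern, known_centers = circle_grid_pattern(1920, 1080, rows=8, cols=12, radius=20)
```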
In this exemplary embodiment, to correct geometric errors, the sensing device 20 can be placed relative to Any position of the above display device. Such a degree of freedom in the positioning sensing device 11 may be based on the fact that the camera image can be accommodated by the sensing device 11 to allow for the inclusion of distortion components. Unless the sensing device 11 directly views the viewing surface 16 (i.e., directly above), there will be a trapezoidal component due to the sensing device. This 15 200818114 variant may occur in up to three axes, which are considered to be multi-axis trapezoidal distortion components. Furthermore, since the optical device 'such as a camera of the sensing device u has its own distortion', there is also an optical distortion component that is taken into consideration. Other types of sensing devices 11 have other inherent distortions. The combined distortion introduced by the above camera or sensing device 11 will be referred to as camera distortion. This age machine distortion is compensated by mosquitoes when generating the amount of clothing. To account for the camera distortion described above, in at least one exemplary embodiment, physical reference marks are used, and their undistorted orientation/shape 10 is known. These markers are captured by the camera and the camera distortion described above can be determined by comparing their orientation/shape in the captured image to their undistorted orientation/shape. A natural mark is the border (boundary) itself, which is known to be a fixed orientation and shape 15 (U is a distortion-free rectangle in the real world). The frame is also a reference for the school to operate 70%. In other words, the corrected image should be linear with respect to the frame. Therefore, when correcting geometric distortion, the image captured by the above machine should include such viewing glory borders (ie, frame 18) 〇20 in another exemplary embodiment in which the boundary is undetectable, j The sensor in the machine is used to sense the signal from the above-mentioned transmission = ' to determine (4) the camera distortion of the viewing surface 16. This measurement will yield p maps of the viewing surface 16 as seen by the camera. When Tian Haozheng Maotian and chromatic aberration, the camera will capture κ group images, where 16 200818114 κ is the number of color components, for example, the three primary colors are deleted. At least some of the test patterns in Figure 4 will be repeated for each color component. The brightness and color (brightness and chrominance) corrections are complete, regardless of the geometric correction correlation. In some projection systems, these brightness and color corrections are done after the correction of geometric distortion. In a flat panel display device in which geometric distortion is not present, brightness and color correction are directly accomplished. In an exemplary embodiment, a sensory device, such as a color analyzer, is placed directly at or near the viewing surface 16 for color information. In this case, the above-mentioned correction related to the location of the shock is not necessary. The sensing device 11 may capture the entire image or the assets at a particular point. In the latter case, data from the grid lines above the screen needs to be taken. If the sensing device 11 is in a trapezoidal position relative to the viewing surface 16, it is similar to the camera above, and its correction due to positioning needs to be completed. 
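For the brightness and color non-uniformity measurements just described, a color analyzer samples the viewing surface at a grid of points. The sketch below shows one illustrative way to turn such luminance samples into a per-pixel gain map that pre-compensates brightness non-uniformity; the bilinear up-sampling and the choice of the dimmest measurement as the target are assumptions of this example, not a procedure prescribed by the text.

```python
import numpy as np

def flatten_brightness(measured: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Build a per-pixel gain map from luminance measured on a coarse grid
    laid over the viewing surface (shape: rows x cols).

    Multiplying an input channel by the returned (out_h x out_w) map
    pre-compensates the display so the result looks uniform.  Targeting the
    dimmest measurement avoids asking any region for more light than the
    display can produce.
    """
    rows, cols = measured.shape
    gain_coarse = measured.min() / measured

    # Bilinear interpolation of the coarse gain grid up to display resolution.
    gy = np.linspace(0, rows - 1, out_h)
    gx = np.linspace(0, cols - 1, out_w)
    y0 = np.clip(np.floor(gy).astype(int), 0, rows - 2)
    x0 = np.clip(np.floor(gx).astype(int), 0, cols - 2)
    fy = (gy - y0)[:, None]
    fx = (gx - x0)[None, :]
    g00 = gain_coarse[y0][:, x0]
    g01 = gain_coarse[y0][:, x0 + 1]
    g10 = gain_coarse[y0 + 1][:, x0]
    g11 = gain_coarse[y0 + 1][:, x0 + 1]
    return (g00 * (1 - fy) * (1 - fx) + g01 * (1 - fy) * fx
            + g10 * fy * (1 - fx) + g11 * fy * fx)
```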
15 For some display devices with geometric distortion, brightness and color correction should be done after the geometric correction has been completed. This means that the display device is the first to include the geometric distortion of the color dependent person. The geometrically corrected color-related correction allows for any additional color distortion introduced by the above geometric correction to be taken into account, and to ensure that the area containing the last image (i.e., no background) is corrected. In this exemplary embodiment, the calibration data generator 12 can analyze the images, and can retrieve the data in the format used by the warpage generator 13, which can then provide warpage data. For the digital warping operation, the digital warping operation can usually be described, so that the mathematical transformation between the coordinates of the wheel-compensated image is performed according to equation (1). Don't enter image coordinates and output shadow (1) In equation (1), cover the space coordinates of the input pixels, and give the above-mentioned mapping to the round. Space J: the color of the pixel 'mark' and e; given the space dome of the corresponding pixel wheel excellent / pixel, ί is only an RGB value. The equation is gone. In the form of a three-primary color system, it is in the form of a grid line. The format of the above correction is difficult. The correction is necessary to use a grid line such as the 60 Hz frame rate for video. Therefore, the ▲ method is applied to convert equation (1) into a more hardware-efficient 2-cursor generator, and the generator 12 is composed of three sub-generators: "β海 calibration data condition, lateral color, And color non-uniformity. Finely calibrating the geometry separately. In the following, the above calibration materials will be discussed first. In the examples listed below, the calibration of the are, /, pieces is a type of line style, such as The preliminary sample of the knife, the style of the display. When the styles in the panels (4) to (4) of the 4th time of the line/training can also be =, . - a grid line, they are similar to the test of the grid type. The image provides a set of shapes centered on the known locations in the input space described above. These centers can be designated as (χζ), where i covers the range of the Nightmare T-oil-shaped shape. There is a total here

數為Afx#之形狀,自左上自R 角開始,並沿著上述測試樣式之 18 20 200818114 列前進’以及%’為測試樣式之解析度。該測試樣式解 析度,並不需要與該顯示器裝置之本有解析度相匹配。當 被顯示時,上述測試樣式中之形狀的中心,將會因幾何失 真而被變換成某些其他由κ心所指明之值。該等形狀亦 5會失真,亦即,-個圓圈將會失真成—個額,等等。該 等坐標係界定在相對於上述觀看表面16之邊框18的左上角 處之原點的顯示器空間。令心心指明一個任意之測量單 元中的顯示器裝置(在邊框18内)之解析度,以及該等坐桿 (“)亦在此等相同之測量單元内。該顯示器空間係等同 10於現實世界或觀察器空間。換言之,上述校正之影像,在 該顯示器空間中勢必要呈現不失真。 15 該相機可拍攝上述失真之格線樣式的影像,以及將呈 傳送給該校準資料產生器12。該相機之解析度係指明為 %叫。在本說明書所列舉之實施例中,該相機解析度並 不必要與該顯示器者相匹配,以及除此之外,該相機可被 置於任-處。該相機空間中心之坐標是(<〇, 被界定為上述拍攝之影像的左上角。 /、牙〜£你 隹該等拍攝之影像,係來自上述相機之觀察點’而該校 準運作勢必要使在上述現實世界之觀察料,亦即,來自 糾,_程輕必㈣去上述相機 之觀察點,亦被稱作該相機失真。誠如上一 2例性實施例中,此在完成上係使用該觀看=邊框18 標記。因此’該相機影像亦應拍攝該觀看表面邊 框18。在現實世界中,該觀看表面邊㈣在界定上係藉由 20 200818114 坐標: 左上角:(〇,〇) 右上角:(%,〇) (2)The number is the shape of Afx#, starting from the upper left corner from the R angle, and proceeding along the 18-20 200818114 column of the above test pattern 'and %' as the resolution of the test pattern. The test pattern resolution does not need to match the native resolution of the display device. When displayed, the center of the shape in the above test pattern will be transformed into some other value indicated by the κ heart due to geometric distortion. These shapes are also distorted, that is, - a circle will be distorted into a number, and so on. The coordinate system defines the display space at the origin relative to the upper left corner of the bezel 18 of the viewing surface 16 described above. Let the heart indicate the resolution of the display device (in the bezel 18) in an arbitrary measurement unit, and the seat bars (") are also in the same measurement unit. The display space is equivalent to 10 in the real world or The viewer space. In other words, the corrected image is necessarily undistorted in the display space. 15 The camera can capture the above-described distorted grid pattern image and transmit the image to the calibration data generator 12. The camera The resolution is indicated as %. In the embodiments recited in this specification, the camera resolution does not have to match the display, and in addition, the camera can be placed anywhere. The coordinates of the center of the camera space are (<〇, defined as the upper left corner of the above-mentioned captured image. /, tooth ~ £ you are shooting the image, which is the observation point from the above camera' and the calibration operation is necessary In the above-mentioned real world observation material, that is, from the correction, _ Cheng light must (4) to the observation point of the above camera, also known as the camera distortion. As in the above two example embodiments, this The completion of the use of the view = border 18 mark. Therefore, the camera image should also capture the viewing surface frame 18. In the real world, the viewing surface edge (four) is defined by 20 200818114 coordinates: upper left corner: (〇 ,〇) Upper right corner: (%, 〇) (2)

左下角:(0,//D) U 右下角·· (WD,HD) 在該相機影像中,該等坐標變為: 左上角:(4x,>4) 右上角:(¾¾,义77?) .. 左下 _ : οάιΆι) 右下角··(Xc8R,ycBJi) 第5圖係例示各種空間和坐標系統。雖然該等影像係顯 示為一些在白色背景上面之黑色圓圈,但所有測試樣式係 5 可被加色,以及使用其他之形狀或形貌(例如,見第4圖)。 該等顯示器和相機空間中所顯示的三個情況係對應於:情 況(a)當該影像上溢而完全涵蓋該觀看表面邊框18時,情況 (b)當該影像完全適配進該觀看表面邊框18内或下溢時,以 及情況(c)為一種中間情況或失配,其中,該影像不完全位 10 於該觀看表面邊框18内。此等情況係被稱作投影幾何類 別。理應注意的是,當該等輸入和相機空間係以像素來界 定時,該顯示器空間便可能為像素、毫米、或某種其他單 位。 上述以Λ指明之顯示失真,可以函數方式被敘述為方 15 程式(4)所產生之映射圖。 fD:(x:,yn-^(x0di,y°di) (4) 此係意指該校正(/ί)為方程式4中所產生之反函數,其 20 200818114 係明列在方程式5中。 :ix〇di^y°di)(xi ^y°i) (5) 該數位翹曲單元15,將會對輸入影像應用上述之校正 #,以使其在顯示之前被翹曲(預先失真)。 以上兩者映射圖係順向被界定:該函數域為輸入影 5 像,以及該範圍為輸出影像。誠如所習見,一個電子校正 電路更有效率及更正確的,是使用一個反函數架構來產生 影像。在一個反魅曲架構中,該電路之輸出影像在產生上, 係藉由經由該校正映射圖,使輸出中之像素映射至輸入, 以及接著在該輸入空間中進行濾色(亦即,分配色值)。此亦 10 意謂的是,該校正圖係表示成反函數之形式,其將被標記 為/r。由於反函數形式中之校正,係該顯示失真圖本身 (九^(/#疒=九)。一個反函數架構校正單元所需要之映射圖 或翹曲資料,僅僅是該顯示失真圖。所以,上述要由校準 資料產生器12產生之格線資料,係界定在方程式(6)中。 (6) 15 理應注意的是,該等術語『格線』和映射圖經常係可 交換使用。此資訊係需要自上述相機拍攝之影像擷取出, 彼等係位於該相機空間内。該等拍得之影像係對應於方程 式(7)中所界定之映射圖。 Λ :(<,>0 — («) (7) 此將被稱作完整影像圖之映射圖,可被視為該等顯示 20 失真圖/d和相機失真圖/c的一個組合,其之減除可產生方 21 200818114 程式(8)中所界定之必需者/r。 fc · (xdi ? y°di) (x°ci 9 y°ci) fF:fcfD=fcfw3fwH (8) /c自/D之減除,僅僅是上兩映射圖之鏈結(或函數複 合)。此外,當該顯示器坐標系統標度和原點可能不、商用 時,該等坐標(4,%)便需要使達至上述正確之像素f产和 5 原點。此將在下文有更加詳細之討論。 上述校準資料產生器12的一個範例性實施例,係顯示 在第6圖中。一個測試樣式之%\圮相機影像,係首先被分 析,藉以擷取該等形狀中心(<,,·^);此將會產生A。上述 相機空間中之形狀中心,係輸入空間中之形狀中心在被該 10專顯示器和相機失真映射後的對應位置。就一些上溢該觀 看表面16之影像區域而言,該等形狀將屬不可用。此等在 外之形狀,在背投影電視中,或就一個前投影系統而言, 通常將屬不可見,因為彼等將位於一個可能之不同平面上 的背景中。所以,僅有上述觀看表面16内被界定為EFGH(見 15第5圖)之形狀會被分析。 該等形狀中心可使用各種影像處理演算法來找出。有 一種方法係涉及使用一個臨界值機構使拍攝影像變換成一 個二進位(黑白)影像。該二進位影像_之形狀,可使彼等像 素被識別及被標記。每組被分類之像素的形心,接著將會 20近似化該等形狀中心。該臨界值可藉由分析上述影像之柱 狀圖自動地被決定^該柱狀圖可能是上述拍攝之影像的亮 度或特定色調。 22 200818114 該等拍攝H亦齡 和邊界。此步驟可能使用不同之/取該觀看表面之坐標 凡,係需要該等邊框坐標。〜像。要決定該相機失真 相機失真將為_個被標目機並無光學失真,則該 a ’僅有該等四個角、落之方:失真,以及要決定 邊框邊界-必要。該 線方程式參數化 <4可被其邊緣之 10 15 20 個角落亦可被絲決定該等四 古▲疋何者形狀位於該觀看表面16内。一個具 cc cc “"^之實體矩形格線,比方說顯示器空間中之 (½,〜),亦可伟11/4上π丄、 、ϋ至或投射至該觀看表面16,藉以提供 額外之標記,彼笪+ » ’、 攸寻在该相機空間中,將成像為^、/^)。此 格線可被視為該相機校準(CC)格線 。該等邊框坐標和邊界 之決定:亦被稱作顯示器特性化。 <:、、破置之觀點’該相機透鏡和一個曲面螢幕内 ^ 一的識况係屬不可區分。在兩者情況中,該等標 σ己矛邊框係成像為呈彎曲狀。所以,-個曲面螢幕,亦可 在 4相機失真和一個相結合之CC格線的架構内被定 址。校正該相機、生 琴失真,亦將確保最後之影像,能與該彎曲 狀邊框相1°就曲面榮幕校正而言,該CC格線可藉由以 規則之距離(依據螢幕上所測量)使標記附加至該邊框18而 構成’彼等接著可⑽至該邊框I8之㈣。彼等標記亦可 附加至該邊框】s 、 S之内部。理應注意的是,該螢幕雖呈彎曲 狀,卻是一個_ & ^ —、、隹表面,因此可容許經由上述之二維CC格 23 200818114 線來校準。 Λ專邊緣(邊框18或附加之CC格線)或標記,可使用舉 例而言類似邊緣偵測等標準影像處理方法來加以備測。知 道或等邊緣之位置,一個線方程式便可配合至該邊緣,以 5及該等線之交點可提供四個角落和CC格線坐標。該等邊緣 和CC格線坐標,可如方程式(9)中所顯 示地加以界定,其 中’Nec為該相機校準格線中之點的數目。 (9) (‘⑺’认))〜上緣 dk⑺w右緣 I⑻下綾 左 5 —相機校準格線 就某些顯示器裝置(諸如具有曲面螢幕者)而言,一個來 自貫體標記之CC格線,可能無法立即可得。在此種情況 10中’該等邊緣方程式,可被用來以數學方式建立該cc格 線。其中存在的自由度,是有關該等點如何沿該等邊緣而 佈置’和如何内插至該邊框18之内部。無論所選之方法為 何’该最後影像將會與該邊框18相匹配,倘若該等域坐標 (見有關排序之討論)被適當選定。一個佈置方法是沿該等邊 15緣等距離佈置該等點,彼等接著可使線性内插至其内部。 若製造商提供了該相機被標記為之光學失真方面的 規格’則此等規格便可與上述之透視失真相結合,以備用 末取代或產生该相機校準格線,其係載明在方程式(1〇)中。 (10) 該相機失真之光學組件,可在該顯示器校準之前被決 24 200818114 ^因為其係與相機位置和方位無關。方程式⑺和⑼中之 貝’、、,將集體被稱作相機校準資料。 -旦該等坐標已被娜,彼等便需要以上述之正確順 配=!。在數學上,排序將會對每個範圍坐標以)分 等域為建立上述完整之影像圖人,該 有""被心。上述之#1取程序,並不提供任何 述二 =:資訊。該等中心將非必然要依-個與上 ^樣式中之形狀排序相匹配的順序被決定。 10 15 可被用=㈣4圖之面板⑷和⑷中所顯示之測試樣式, Γ=Γ等點。—些自此等測試樣式拍攝到之影 心亦可被佈置在此分等形狀中 條碼,舉輪㈣將之水平和垂直 中,I係衫在方程式⑴)中。出述之域坐標(“°),其 {r-l)N + s ^ (11) ,面==,重要的是決定何者條碼和形狀係在該觀看 i不該背景區域(該觀看表面邊框18之外部) =:個具有高對比之影像,則—個適當之臨界值(在 義取城坐齡射),單㈣確健有 Γ==測量。若該等外部之形狀二 I内:广=比較,可決定出何者形狀和條碼 數目’勢必要考慮到任何漏失之條碼 (该邊框18之外部者)。—個^之數序的細,可一欠閃現 -個,藉料㈣等是料科框之㈣ 25 20 200818114 之條碼,亦可被用來隱含地編號彼等。 該相機校準資料亦需要被排序,其中,該等域坐標係 在該顯示器空間内。然而,在此,該程序係較為簡單,因 為所有之形貌(藉由定義)均位於該邊框18内。在大多數之情 5 況中,坐標比較便足以決定上述之排序。就該CC格線而 言,該排序將會分配上述之坐標網,彼等為上述稱 作域CC格線之CC格線有關的域坐標(在顯示器空間内)。該 域CC格線之值,將取決於該格線是否對應於實體標記,或 者其是否以數學方式來建立。就前者而言,該等標記之已 10 
知坐標,可產生該域CC格線。就後者而言,選擇該域CC 格線方面,係具有某種自由度。若該最後影像與上述之邊 框18(亦即,幾何類別(a))相匹配,則該等邊緣上面之CC格 線點,勢必要映射至上述長方形EFGH上面之對應邊緣。此 係意謂該等邊緣需要映射如下: 15 上緣θ通過{(〇,〇),(%,〇)}之直線 右緣《通過{(%,0),(%,凡)}之直線 下緣 <=> 通過{(〇,/^),(%,/^)}之直線 左緣《通過{(0,0),(0,仏)}之直線 除該等限制條件外,該等域CC格線點,可以任何合理 20 之方式來選擇。該擷取和排序完成時,該映射圖/『便可使 用方程式(8)來找出。 該相機校準資料,係被用來首先建立該反函數相機失 真映射圖/c_1。就一個純透視相機失真的最常見實況(亦 即,/c =//)而言,需要的僅有四個角落點。 26 (12) 200818114 — (ο,0) (xdcTR^ydcTR) (WD^) (x^L,y^R) — (XtBR,一D,HD) 該(反)透視變換係由方程式13來產生。 ^ =fcX'\Xc^c)^ cixc ^byc -Kc gxc+hyc-^\ yd =fcy~\Xc^c)^ dxc-heyc +£ 队吻c+l (13) 在此,(\,h)為上述顯示器空間内之坐標,以及(\,尺)為 上述相機空間内之坐標。使用方程式(12),會得到八個線性 方程式’此可求出該等用以界定上述透視變換之係數 5 {a,b,c,d,e,f,g,h}的解。 當該相機失真包括一個光學失真成分/(?時,或者將會 就一個彎曲狀邊框被校正時,該等邊緣方程式或cc格線, 10 訊 係被用來決定該反函數相機失真映射圖凡'一個方法是要 使用該CC格線,因為其可提供内部點處之失直 訊,而不僅僅是該邊緣方面的資訊。該cc袼線在= 程式⑽中。該格線可或配合(以最小平方之 “在方 即定之基底函數組來内插。一種選擇是要:、)或由-個 (spline)基底,來得到對-個如方程式(14)二—個樣條 樣條配合或插值。 心之格線的 27 (14) 200818114 /广:(4e,e)—(4e,#),對格線之配合或插值 ^ =/cx l(Xc^c) yd ^fcy \xc^yc) dO=(/r,/T) 由上述擷取相機校準資料步驟期間計得之/c-1和該等 坐標(<·,·^),該映射圖/r係藉由鏈結而得到如下 Λ : («)>«,%)其中,《·,%)係由方程式G5)來產生。 - fc'ffixUyt) = fc'ix^yli) 相機具有透視失真& =/ϋ:,β) (15) /di^fcyV{X〇ci,y〇ci) 相機具有透視失真+光學失真, 該鏈結可就其域使用完整影像映射範圍,來評估該相 機反失真映射圖。 10 上述得到之格線,係對應於第5圖中之 中間簡圖,以及會產生用以校正該顯示器失真所需之映射 圖(以反函數之形式)。誠如前文所提及,該格線僅包含彼等 位於上述觀看表面邊框18内之點。就上溢之失真(情 ::)而言’上述域空間(亦即,自顯示失真之觀點的輪入 )的許多像素(對應於該等形狀中心),在該格線所界定 ^不器空_,並不具有彼等之坐標。上述在此範例性 錢列中為數位翹曲單元15之電子 有的域办Η榇I 玟正早兀,將會處理所 的::間像素;一個反函數架構校正單元有關之域空 資钭2上是產生出之輸出影像。所以,上述漏失之格線 貝抖係需純計算’錢藉由_奸重練樣步驟來完 28 15 200818114 如同在上述相機失真之計算中, 該格線凡可或配合(以Lower left corner: (0, / / D) U Lower right corner · (WD, HD) In this camera image, the coordinates become: Upper left corner: (4x, > 4) Upper right corner: (3⁄43⁄4, meaning 77 ?) .. Bottom left _ : οάιΆι) Bottom right corner · (Xc8R, ycBJi) Figure 5 illustrates various spaces and coordinate systems. Although these images are displayed as black circles on a white background, all test styles 5 can be colored and other shapes or topography used (see, for example, Figure 4). The three conditions displayed in the display and camera space correspond to: case (a) when the image overflows to completely cover the viewing surface bezel 18, condition (b) when the image is fully fit into the viewing surface When the bezel 18 is under or underflow, and condition (c) is an intermediate condition or mismatch, wherein the image is not fully positioned within the viewing surface bezel 18. These conditions are called projection geometry categories. It should be noted that when the input and camera space are bounded by pixels, the display space may be pixels, millimeters, or some other unit. The above-mentioned display distortion indicated by Λ can be described as a map generated by the program (4) in a functional manner. fD: (x:, yn-^(x0di, y°di) (4) This means that the correction (/ί) is the inverse function generated in Equation 4, and its 20 200818114 is listed in Equation 5. :ix〇di^y°di)(xi ^y°i) (5) The digit warping unit 15 will apply the above-mentioned correction # to the input image so that it is warped before display (pre-distortion) . The above two maps are defined in the forward direction: the function domain is the input shadow image, and the range is the output image. As you can see, an electronic correction circuit is more efficient and correct, using an inverse function architecture to generate the image. In an inverse sacred architecture, the output image of the circuit is generated by mapping the pixels in the output to the input via the correction map, and then performing color filtering (ie, assigning) in the input space. 
Color value). This also means that the correction map is expressed in the form of an inverse function, which will be marked as /r. Due to the correction in the inverse function form, the distortion map itself is displayed (9^(/#疒=9). The map or warpage data required by an inverse function architecture correction unit is only the display distortion map. Therefore, The above-mentioned ruled line data to be generated by the calibration data generator 12 is defined in equation (6). (6) 15 It should be noted that the terms "grid line" and map are often used interchangeably. The images taken from the camera are taken out, and they are located in the camera space. The captured images correspond to the map defined in equation (7). Λ :(<,>0 — («) (7) This will be referred to as the map of the complete image map, which can be regarded as a combination of the display 20 distortion map /d and the camera distortion map /c, which can be decremented to generate the program 21 200818114 Required in (8) /r. fc · (xdi ? y°di) (x°ci 9 y°ci) fF:fcfD=fcfw3fwH (8) /c subtracted from /D, just on The link of the two maps (or function compound). In addition, when the display coordinate system scale and origin may not be commercial, The coordinates (4, %) need to be such that the correct pixel f and the 5 origin are achieved. This will be discussed in more detail below. An exemplary embodiment of the above calibration data generator 12 is shown in the sixth In the figure, the %\圮 camera image of a test pattern is first analyzed to capture the center of the shape (<,,·^); this will produce A. The shape center in the above camera space is input. The shape center in space is at a corresponding position after being mapped by the 10 dedicated display and camera distortion. For some image areas that overflow the viewing surface 16, the shapes will be unavailable. These are external shapes, on the back. In projection televisions, or in the case of a front projection system, they are usually invisible because they will lie in a background on a possibly different plane. Therefore, only the viewing surface 16 described above is defined as EFGH (see 15). The shape of Figure 5) will be analyzed. The center of these shapes can be found using various image processing algorithms. One method involves using a threshold mechanism to transform the captured image into a binary (black) The image, the shape of the binary image, allows the pixels to be identified and marked. The centroid of each group of classified pixels will then approximate the center of the shape by 20. The threshold can be analyzed by analysis. The histogram of the above image is automatically determined. The histogram may be the brightness or specific hue of the image taken. 22 200818114 These shots are also age and border. This step may use different/take the viewing surface. Coordinates, the coordinates of the frame are required. ~ Image. To determine the camera distortion, the camera distortion will be _ a target machine without optical distortion, then the a 'only these four corners, the falling side: distortion, And to decide the border of the border - necessary. The line equation parameterization <4 can be determined by the edges of the 10 15 20 corners of the line or by the wires. The shape of the elements is located within the viewing surface 16. 
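One of the center-extraction methods described above thresholds the captured frame into a binary image, labels connected groups of pixels, and takes the centroid of each group as a shape center in camera coordinates. A minimal sketch of that step follows, using SciPy's labeling routines; the fixed midpoint threshold stands in for the histogram-based threshold selection the text mentions.

```python
import numpy as np
from scipy import ndimage

def extract_shape_centers(captured: np.ndarray, dark_shapes: bool = True) -> np.ndarray:
    """Locate the calibration shapes in a captured camera frame.

    Thresholds the frame into a binary image, labels each connected group of
    pixels, and returns the centroid of every group as that shape's center
    (x, y) in camera coordinates.
    """
    grey = captured.astype(float)
    threshold = 0.5 * (grey.min() + grey.max())   # stand-in for histogram analysis
    binary = grey < threshold if dark_shapes else grey > threshold

    labels, count = ndimage.label(binary)
    centroids = ndimage.center_of_mass(binary, labels, range(1, count + 1))
    # center_of_mass returns (row, col); convert to (x, y) = (col, row).
    return np.array([(c, r) for r, c in centroids])
```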
A solid rectangular grid line with cc cc ""^, such as (1⁄2, ~) in the display space, can also be 1/4 上, ϋ, or projected onto the viewing surface 16 to provide additional The mark, 笪+ » ', 攸 in the camera space, will be imaged as ^, / ^). This grid can be regarded as the camera calibration (CC) grid. The coordinates of the border and the decision of the border : Also known as display characterization. <:,, breakpoint view' The camera lens and the surface of a curved screen are indistinguishable. In both cases, the standard σ has a spear border The image is curved. Therefore, a curved screen can also be addressed within the framework of 4 camera distortion and a combined CC grid. Correcting the camera and the piano distortion will also ensure the final image. In contrast to the curved frame 1° in terms of curved surface correction, the CC ruled line can be formed by attaching a mark to the frame 18 at a regular distance (as measured on the screen) to enable them to (10) (4) of the frame I8. Their marks may also be attached to the frame] s, S It should be noted that although the screen is curved, it is a _ & ^ -, 隹 surface, so it can be calibrated via the above-mentioned two-dimensional CC grid 23 200818114 line. Λ special edge (frame 18 or additional The CC grid line or mark can be prepared by using a standard image processing method such as edge detection, for example. Knowing or waiting for the position of the edge, a line equation can be matched to the edge to 5 and the line The intersections provide four corner and CC grid coordinates. These edges and CC grid coordinates can be defined as shown in equation (9), where 'Nec is the number of points in the calibration grid of the camera. (9) ('(7)' recognized)) ~ upper edge dk (7) w right edge I (8) 绫 left 5 - camera calibration grid For some display devices (such as those with curved screens), a CC line from the body mark , may not be immediately available. In this case 10, the edge equations can be used to mathematically establish the cc grid. The degrees of freedom that exist are related to how the points are arranged along the edges. 'and how to interpolate to The interior of the bezel 18. Regardless of the method selected, 'the last image will match the bezel 18, provided that the domain coordinates (see discussion of sorting) are properly selected. One arrangement is along the equilateral 15 The edges are equidistantly arranged such that they can then be linearly interpolated into the interior. If the manufacturer provides the specifications for the optical distortion that the camera is labeled, then these specifications can be combined with the above-described perspective distortion. Replace or generate the camera calibration grid with the spare end, which is shown in equation (1〇). (10) The optical component of the camera distortion can be resolved before the monitor is calibrated 24 200818114 ^Because it is The camera position and orientation are irrelevant. The shells ', ' in equations (7) and (9) are collectively referred to as camera calibration data. - Once the coordinates have been taken, they will need to match correctly with the above =! Mathematically, the sorting will be based on each range coordinate to create the above-mentioned complete image map, which has "" The above #1 procedure does not provide any description of the second =: information. These centers will not necessarily be determined in the order in which they match the shape ordering in the upper ^ style. 
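The edge-based characterization described above fits a line equation to each detected bezel or CC-grid edge and intersects the fitted lines to obtain the corner and grid coordinates of equation (9). The sketch below shows such a fit-and-intersect step; the total-least-squares fit is one reasonable choice, as the text does not prescribe a particular fitting method, and the sample edge points are invented for illustration.

```python
import numpy as np

def fit_line(points: np.ndarray):
    """Total-least-squares line fit.  Returns (unit normal n, offset d) so the
    line is n . p = d, which handles edges of any orientation."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]                               # dominant direction of the edge
    normal = np.array([-direction[1], direction[0]])
    return normal, float(normal @ centroid)

def intersect(line_a, line_b) -> np.ndarray:
    """Intersection point of two lines given as (normal, offset) pairs."""
    (na, da), (nb, db) = line_a, line_b
    return np.linalg.solve(np.stack([na, nb]), np.array([da, db]))

# Edge pixels from any standard edge detector would be fed in here; the
# coordinates below are made-up samples from a top and a left bezel edge.
top = fit_line(np.array([[0.0, 10.2], [50.0, 11.0], [100.0, 11.8]]))
left = fit_line(np.array([[9.8, 0.0], [10.4, 40.0], [11.0, 80.0]]))
corner_tl = intersect(top, left)   # approximate top-left corner of the bezel
```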
10 15 Can be used = (4) Test patterns shown in panels (4) and (4) of Figure 4, Γ = Γ etc. - Some of the shadows captured from these test styles can also be placed in this graded shape. The bar code is lifted (4) in horizontal and vertical, and the I-series is in equation (1)). The domain coordinate ("°), its {rl)N + s ^ (11) , face ==, it is important to decide which bar code and shape are in the background area of the viewing i (the viewing surface frame 18 External) =: An image with high contrast, then an appropriate threshold (in the city of Yicheng City), single (four) is indeed Γ == measurement. If the external shape is two I: wide = Comparing, you can decide which shape and number of barcodes. It is necessary to consider any missing barcode (the outside of the border 18). - The order of the number of ^ can be flashed - one, borrowed (four), etc. The bar codes of (4) 25 20 200818114 may also be used to implicitly number them. The camera calibration data also needs to be sorted, wherein the domain coordinates are within the display space. However, here, The program is simpler because all the topography (by definition) are located within the border 18. In most cases, the coordinate comparison is sufficient to determine the ordering. In the case of the CC grid, the ordering The above-mentioned coordinate network will be assigned, which are the domains related to the CC grid line referred to above as the domain CC grid line. Mark (in the display space). The value of the CC grid line in the field will depend on whether the grid corresponds to the entity marker, or whether it is mathematically established. In the former case, the marker has 10 coordinates. , the CC grid line of the domain can be generated. For the latter, the CC grid line is selected to have some degree of freedom. If the last image matches the above-mentioned frame 18 (ie, geometric category (a)) Then, the CC grid points above the edges are necessarily mapped to the corresponding edges above the rectangular EFGH. This means that the edges need to be mapped as follows: 15 The upper edge θ passes {(〇,〇), (%, 〇)} The right edge of the line "Through the lower edge of the line of {(%,0),(%,凡)}<=> by the line of {(〇, /^), (%, /^)} left In addition to the constraints of the line {(0,0), (0,仏)}, the CC grid points of the fields can be selected in any reasonable way. When the capture and sorting are completed, The map/" can be found using equation (8). The camera calibration data is used to first create the inverse function camera distortion map /c_1. In the most common case of pure perspective camera distortion (ie, /c =//), only four corner points are needed. 26 (12) 200818114 — (ο,0) (xdcTR^ydcTR) (WD^ ) (x^L, y^R) — (XtBR, D, HD) This (reverse) perspective transformation is generated by Equation 13. ^ =fcX'\Xc^c)^ cixc ^byc -Kc gxc+hyc -^\ yd =fcy~\Xc^c)^ dxc-heyc +£ Team kiss c+l (13) Here, (\,h) is the coordinates in the above display space, and (\,foot) is above The coordinates within the camera space. Using equation (12), eight linear equations are obtained. This can be used to find solutions for defining the coefficients 5 {a, b, c, d, e, f, g, h} of the above-described perspective transformation. 
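The eight-equation solve just described determines the coefficients {a, b, c, d, e, f, g, h} of the inverse perspective transform of equation (13) from the four bezel-corner correspondences of equation (12). A sketch of that computation follows; the corner values in the usage example are made up for illustration.

```python
import numpy as np

def fit_perspective(camera_pts: np.ndarray, display_pts: np.ndarray) -> np.ndarray:
    """Solve the eight linear equations implied by equations (12)-(13) for the
    inverse camera transform coefficients {a, b, c, d, e, f, g, h}.

    camera_pts  : the four bezel corners as seen by the camera, shape (4, 2).
    display_pts : the same corners in display coordinates, e.g.
                  (0, 0), (W_D, 0), (0, H_D), (W_D, H_D).
    """
    rows, rhs = [], []
    for (xc, yc), (xd, yd) in zip(camera_pts, display_pts):
        rows.append([xc, yc, 1, 0, 0, 0, -xd * xc, -xd * yc]); rhs.append(xd)
        rows.append([0, 0, 0, xc, yc, 1, -yd * xc, -yd * yc]); rhs.append(yd)
    return np.linalg.solve(np.array(rows, float), np.array(rhs, float))

def apply_perspective(coeffs: np.ndarray, xc, yc):
    """Map camera coordinates to display coordinates with the fitted transform."""
    a, b, c, d, e, f, g, h = coeffs
    w = g * xc + h * yc + 1.0
    return (a * xc + b * yc + c) / w, (d * xc + e * yc + f) / w

# Example with invented corner measurements and an assumed 1920 x 1080 bezel.
cam = np.array([[102.0, 80.0], [1570.0, 95.0], [90.0, 930.0], [1588.0, 955.0]])
dsp = np.array([[0.0, 0.0], [1920.0, 0.0], [0.0, 1080.0], [1920.0, 1080.0]])
fc_inv = fit_perspective(cam, dsp)   # estimate of the inverse camera map for a
                                     # purely perspective camera distortion
```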
When the camera distortion includes an optical distortion component / (?, or will be corrected for a curved frame, the edge equation or cc grid, 10 signal system is used to determine the inverse function camera distortion map 'One method is to use the CC grid because it provides the loss of direct information at the internal point, not just the information on the edge. The cc line is in the = program (10). The grid can be matched or Interpolate with the least squares of the "basis function group defined in the square. One choice is:," or by a spline base to obtain a pair of equations (14) two - spline spline fit Or interpolation. 27 (14) 200818114 / wide: (4e, e) - (4e, #), the fit or interpolation of the grid line ^ = / cx l (Xc ^ c) yd ^ fcy \xc ^yc) dO=(/r,/T) The /c-1 and the coordinates (<·,·^) calculated during the step of capturing the camera calibration data above, the map/r is by the chain The result is as follows: («)>«,%) where "·,%) is generated by equation G5) - fc'ffixUyt) = fc'ix^yli) The camera has perspective distortion & =/ ϋ:,β) (15) /di ^fcyV{X〇ci,y〇ci) The camera has perspective distortion + optical distortion, and the link can use the complete image mapping range for its domain to evaluate the camera anti-aliasing map. The middle diagram in Figure 5, and the map (in the form of an inverse function) needed to correct the distortion of the display. As mentioned earlier, the grid only contains those on the viewing surface. a point within the border 18. In terms of the distortion of the overflow (in case::), a plurality of pixels (corresponding to the center of the shape) of the above-mentioned domain space (that is, from the viewpoint of display distortion), in the grid The lines are defined as ^ not empty _, and do not have their coordinates. In the example money column, the digital warping unit 15 of the electronic warfare unit is 兀I 玟Immediately, it will be processed. :: inter-pixel; an inverse function architecture correction unit related to the domain of the empty asset 钭 2 is the output image produced. Therefore, the above-mentioned missing grid line shake system needs to be purely calculated 'money by _ rape and practice steps End 28 15 200818114 As in the above calculation of camera distortion, Where ruled line or may be fit (in

正袼線更加稠密。 以及藉由評估上述輸入 該校正圖如今可被採納為夂, 空間上面之任何點陣列處的函數所得到之校正格線,係包 10含彼等漏失之點。為維持最初之格線(«)—«,%),兄有 關之插值形式會被使用,藉以界定該輸入空間上面如方程 式16中所顯示之新規律間隔之格線陣列。 = ,其係包含該陣列{(«)} (16) 該陣列可使較稠密,而具有反〉从排和及〉#行。評估 此陣列上面之Λ,可依據方程式17,產生上述校訂之校正 15格線(心,*^),其係包括彼等漏失之點,以及可能會較稠密。 (½,〜) = «,%‘),若兄·)=顯示器邊框内之 («)和《,少1) 彼等配合和插值之結合,亦可就兄加以使用,以便可 月b提供漏失資料插補有關之配合和内部資料有關之插值。 上述校準資料產生中之最後階段,是固定該標度和原 點。該校正袼線係在該顯示器空間内,以及在界定上係相 20對於上述觀看表面邊框18之右上角。上述顯示器空間之單 29 200818114 位(標度)係屬任意性,以及可能係不同於該輸入空間中所用 者。在此資料可被該翹曲產生器13使用之前,該等原點和 標度係需要使與該輸入空間者相一致。此可被視為該原點 和標度之最佳化。 5 考慮第5圖之中間簡圖,當應用該校正時,該最後校正 之影像’相對於該觀看表面邊框18應呈矩形。參照第7圖, 此種包含被校正之影像的矩形,將被稱作活動性矩形 A’BfCT)f。此活動性矩形,勢必要位於上述影像(ABCD)之 光包跡内’並且在該觀看表面邊框(EFGH)内。該原點和標 1〇度需要加以選擇,而使上述活動性矩形之左上角,對應於 (〇,〇)’以及此矩形之寬度乘以高度是%,其係上述輸 入影像之像素解析度(見第7圖)。 理應注意的是,上述校正有關之輸入空間,實際上係 一個反函數架構中之電子校正有關的輸出影像,以及一旦 °亥軚疋和移位已被完成,上述校正有關之輸入影像,實際 上係與該顯示器空間等效(亦即,校正有關之輸出空間)。 若該活動性矩形之左上角和尺度,在該顯示器坐標空 ΒΒ 日,分別係由(乂,給定,則所有的格線坐標, 便而要如方程式(18)中所示地加以標定及移位。The positive line is more dense. And by evaluating the above inputs, the correction map can now be adopted as a correction grid for the function at any point array above the space, and the package 10 contains the points at which they are missing. In order to maintain the initial grid («)—«, %), the parent-related interpolation form is used to define a grid array of new regular intervals above the input space as shown in Equation 16. = , which contains the array {(«)} (16) The array can be made denser, with the opposite and the > rows. Evaluating the top of the array can be based on Equation 17, which produces the calibration corrected 15 grid lines (heart, *^), which include their missing points and may be denser. (1⁄2,~) = «,%'), if the brother ·) = («) and ", less 1" in the display frame, the combination of their cooperation and interpolation, can also be used by the brother, so that it can be provided Missing data interpolation related to the interpolation related to internal data. The final stage in the generation of the above calibration data is to fix the scale and origin. The correction line is within the display space and defines the upper phase of the upper phase 20 with respect to the viewing surface bezel 18. The above-mentioned display space list 29 200818114 (scale) is arbitrary and may be different from the one used in the input space. Before the data can be used by the warpage generator 13, the origins and scales need to be consistent with the input space. This can be considered as an optimization of the origin and scale. 5 Consider the middle diagram of Figure 5, which should be rectangular with respect to the viewing surface bezel 18 when the correction is applied. Referring to Fig. 7, such a rectangle containing the corrected image will be referred to as an active rectangle A'BfCT)f. This active rectangle is necessarily located within the optical envelope of the above image (ABCD) and within the viewing surface border (EFGH). The origin and the target 1 degree need to be selected, and the upper left corner of the active rectangle corresponds to (〇, 〇)' and the width of the rectangle multiplied by the height is %, which is the pixel resolution of the input image. (See Figure 7). It should be noted that the input space related to the above correction is actually an output image related to the electronic correction in an inverse function architecture, and the input image related to the above correction once the shift and the shift have been completed, actually It is spatially equivalent to the display (ie, the output space associated with the correction). 
If the upper left corner and the scale of the active rectangle are on the display coordinate space, respectively, (乂, given, all the grid coordinates, then it is calibrated as shown in equation (18) and Shift.

wd nd v J 、上述將可決定該矩形坐標值之%有關的值,可被 $擇^何之整數值,只要彼㈣維持上述觀看表面邊框 見回比。應用方程式(18),可將該顯示器空間尺度(底 30 200818114 部圖),變換成第7圖中之校正(頂部圖)所需之輪入影像尺 度。 在該活動性矩形之決定方面,係存在有自由度;然而, 加進某一定之自然限制條件,可使該選擇動作簡化。為極 5大化上述校正之影像的像素解析度,該矩形應加以選擇使 盡可能地大。若上述校正之影像,要使具有與上述輸入影 像者相同之寬高比,上述矩形之寬高比,便應與該 輸入影像者相匹配。 各種限制條件C1至C4係列舉如下。 10 C1)該活動性矩形,係包含在光包跡ABCD内。 C2)該活動性矩形,係包含在觀看表面邊框EFGI1内。 C3)該活動性矩形區域,係使最大化。 C4)該活動性矩形寬高比,係設定使等於輸入影像者 {^dlhd=WTIHT) 〇 15 解決上述活動性矩形(亦即,決定有關 之此等限制條件,變為數值最佳化中的一項問題。所有以 上之限制條件,可使置於數學之形式中,其可容許使用各 種最佳化方法來解決該頂問題。 一個可能之方法是,使用受到限制之極小化。此係涉 20及改寫等式或不等式形式中之限制條件,以及界定一個要 被極小化(或最大化)之函數。對邊框邊緣(見方程式(9))和最 外之格線點(見方程式(17))的線方程式,可被用來將彼等限 制條件C1和C2,公式化成不等式之形式,亦即,四個矩形 角落位於(<=)該等線内。限制條件C4係已成一種等式之形 31 200818114 式,而限制條件C3則變成上述最大化之函數,亦即,最大 化上述活動性矩形之區域。 就第5圖之實況(a)而言,其中,該影像係上溢而填滿該 觀看表面16,該觀看表面邊框18,提供了一個可自動滿足 限制條件C1至C3之自然矩形。藉由使該顯示器之標度,固 定至該測試影像者,彼等參數便可依據方程式(19)而被設 定。Wd nd v J , the above value which can determine the % of the rectangular coordinate value, can be selected as an integer value, as long as the (four) maintains the above-mentioned viewing surface frame to see the ratio. Using equation (18), the display spatial scale (bottom 30 200818114 map) can be transformed into the rounded image size required for the correction (top graph) in Figure 7. There is a degree of freedom in the determination of the active rectangle; however, adding a certain natural constraint can simplify the selection. In order to maximize the pixel resolution of the above corrected image, the rectangle should be selected to be as large as possible. If the corrected image is to have the same aspect ratio as the input image, the aspect ratio of the rectangle should match the input image. The various restrictions C1 to C4 series are as follows. 10 C1) The active rectangle is contained in the optical envelope ABCD. C2) The active rectangle is included in the viewing surface border EFGI1. C3) The active rectangular area is maximized. C4) The active rectangle aspect ratio is set so as to equal the input image {^dlhd=WTIHT) 〇15 to solve the above-mentioned active rectangle (that is, to determine the relevant constraints, and to become numerically optimized) One problem. All of the above restrictions can be placed in a mathematical form that allows for the use of various optimization methods to solve the top problem. One possible approach is to minimize the use of restrictions. 20 and rewrite the constraints in the form of the equation or inequality, and define a function to be minimized (or maximized). For the edge of the border (see equation (9)) and the outermost grid point (see equation (17) The line equations of )) can be used to formulate their constraints C1 and C2 into the form of inequalities, that is, four rectangular corners are located in (<=) the lines. The constraint C4 has become a kind Equation 31 200818114, and constraint C3 becomes a function of the above maximization, that is, maximizing the region of the active rectangle. In the case of the reality (a) of Figure 5, where the image is Overfill the watch 16. The viewing surface bezel 18 provides a natural rectangle that automatically satisfies the constraints C1 to C3. By adjusting the scale of the display to the test image, the parameters can be based on equation (19). set as.

wd^WD= WT 上述校正之影像將會與該觀看表面邊框18完全匹配, 此係其中使用整個觀看表面邊框18之理想情況。因此,就 10此-情況而言,第6圖中之最佳化步驟,僅僅意謂使用方程 式(19),亦即,該等點並不需要被標定或移位。 該最佳化步驟,亦可被用來藉由使限制條件4如方程式 (20)所示地被修飾,而達成寬高比中之改變。 Ί—α (20) 繼續使用方程式⑽,上述校正之影像的寬高比將變成 15 α。此種寬高比之選擇中的自由度,可容許影像包括在一 個顯示器裝置中具有不同寬高比之影像屏幕(㈣“㈣ 或pillar-boxed)内。藉由調整該標度和移位,該影像亦可在 該觀看表面16上面輕易被過掃描(亦即,影像上溢}和欠㈣ (亦即’影像欠掃描)。因此,使用表面函數,係有利於輕易 20 實現過掃描和欠掃描之情況。 32 200818114 上述校準資料產生器12所產生之最後校準資料,係方 程式(21)所給定之格線資料/V。 / w: (xt di) (21) 以上之討論係著重於所有原色有關之校正均相同之情 況中的失真。在此等情況中,相同之格線資料,說明所有 5色彩有關之校正,此情況可被稱作單一色彩校正。然而, 就橫向色彩失真而言,該格線資料,係就所有原色而有不 同,以及係需要多重之色彩校正,此情況可被稱作多色彩 校正。所有原色共有之任何幾何失真,可使包含在該橫向 校正中;因此,上述校準資料產生器12之先前實現體,可 10被視為下文所說明之多重色彩校正的一個特殊情況。 上述杈準資料產生器12有關橫向色彩校正之範例性實 現體,係顯示在第8圖中。誠如可見的是,此係與上述單一 色彩校正情況(見前節)重複K次有關之實現體相類似,其 中,K為原色之數目。該等原色係被標記為〇丨尤。就最 15常見之三原色RGB而言,(/l,/2,/3) =(足。 取 校準每個原色有關之步驟和細節係與上文有關單一 彩才父正情況之說明相同,而具有以下數項修飾。 該等測試樣式如今係依據上述正被校準之原色來力 色。舉例而言,當校準紅色色彩時,所有之測試樣= 2〇 4圖,面板⑷至⑴),將具有彼等之形貌(圓圈、條線、弟 該等形貌特性(關數、料),可就上面色彩樣“有不 所有之影像處理步驟,諸如擷取彼等中心和邊、 使用色彩影像。該臨界值可被調整來處理上述正被和準將 33 200818114 色彩 旦彳于到一個二進位影像,則該影像處理便與該色 彩無關。 通系,由於該等相機透鏡本身内之橫向色彩失真所 致:該相機校準資料,係就不同之原色而有不同,以及需 要就所有之原色分別加以計算。本系統可被配置來校正該 相機本身内之橫向色彩失真。來自不同原色之測試影像樣 式係與权準該顯示器裝置者相類似,可被用來產生該相 機枚準資料。上述相機之(多重色彩)校準資料,可獨立於該 顯示器校準而被完成,以及僅需要被完成一次。在產生該 1〇相機校準資料中,應被使關是—個具有零或極小(亦即, 甚小於該相機)之橫向彩色失真的顯示器裝置。若此種顯示 器裝置不可得,則可使用一些加色之標記,來提供一個具 有已知坐標之實體格線。上述多重色彩相機校準有關之最 後結果,係一個取決於上述如方程式(22)所界定之原色的反 15 函數相機失真。 fck · (¾ , y^k) (χ^, yc^k), A: = l.. .ΛΓ, 對格線之配合和插值 怠(2 在任何漏失資料已被計算過之後,該等κ個得到之格線 (類似於方程式(17)),係界定在方程式(23)中。 fm: (xi 5^ (χώ5ykdi)Wd^WD= WT The above corrected image will exactly match the viewing surface bezel 18, which is ideal for using the entire viewing surface bezel 18. Thus, in the case of the case - the optimization step in Fig. 6 simply means that equation (19) is used, i.e., the points do not need to be scaled or shifted. This optimization step can also be used to achieve a change in the aspect ratio by modifying the constraint 4 as shown in equation (20). Ί—α (20) Continuing with equation (10), the aspect ratio of the above corrected image will become 15 α. The degree of freedom in the choice of such aspect ratio allows the image to be included in an image screen having different aspect ratios in a display device ((4) "(4) or pillar-boxed. By adjusting the scale and shift, The image can also be easily scanned over the viewing surface 16 (i.e., image overflow) and under (four) (i.e., 'image underscan). Therefore, using the surface function facilitates easy over-scanning and owuating Scanning situation 32 200818114 The last calibration data generated by the above calibration data generator 12 is the grid data given by equation (21) /V. / w: (xt di) (21) The above discussion focuses on all Distortion in the case where the corrections for the primary colors are the same. In these cases, the same ruled line data indicates all 5 color-related corrections, which may be referred to as single color correction. However, in terms of lateral color distortion The grid data is different for all primary colors, and the system requires multiple color corrections, which can be referred to as multi-color correction. Any geometric distortion common to all primary colors can be included in the horizontal In the correction; therefore, the previous implementation of the calibration data generator 12 described above can be considered as a special case of the multiple color correction described below. The exemplary data generator 12 described above is an exemplary implementation of lateral color correction, It is shown in Figure 8. 
As can be seen, this is similar to the implementation of the above-mentioned single color correction (see the previous section) in which K times are performed, where K is the number of primary colors. For Chiyou. For the most common three primary colors RGB, (/l, /2, /3) = (foot. Take the steps and details related to the calibration of each primary color and the above is related to the single color. The descriptions are the same, but with the following modifications: These test styles are now based on the primary color being calibrated as above. For example, when calibrating red color, all test samples = 2〇4, panel (4) To (1)), they will have their appearance (circles, lines, brothers, etc. (closed, material), and the above color samples "have not all image processing steps, such as capturing their centers And side, use color Color image. The threshold value can be adjusted to process the above-mentioned positive and the normal color. The image processing is independent of the color. The system is due to the lateral direction of the camera lens itself. Caused by color distortion: The camera calibration data is different for different primary colors, and needs to be calculated separately for all primary colors. The system can be configured to correct lateral color distortion within the camera itself. Testing from different primary colors The image pattern is similar to that of the display device and can be used to generate the camera registration data. The (multi-color) calibration data of the camera can be completed independently of the display calibration, and only needs to be completed once. In generating the 1 〇 camera calibration data, it should be turned off to be a display device having zero or very small (i.e., much less than the camera) lateral color distortion. If such a display device is not available, some additive markings can be used to provide a solid grid with known coordinates. The final result associated with the multi-color camera calibration described above is an inverse 15 function camera distortion that depends on the primary colors defined by equation (22) above. Fck · (3⁄4 , y^k) (χ^, yc^k), A: = l.. .ΛΓ, the fit and interpolation of the grid (2) After any missing data has been calculated, the κ The obtained ruled line (similar to equation (17)) is defined in equation (23). fm: (xi 5^ (χώ5ykdi)

k = l...K i = l...MkxNk (23) 在此,每個格線有關之點數,可能依據所用之測· 式和任何所做之重新取樣而有所不同。 34 200818114 該等原色有關之測試樣式,可能隸屬於不同之投影幾 何類別(見第5圖)。該等原色有關之某些測試樣式,可能如 第5圖之面板(a)中地完全上溢於該觀看表面邊框丨8,而其他 的可能如第5圖之面板(b)中地完全位於該邊框内。當該最佳 5化被執行時,上述之活動性矩形,勢必要位於該觀看表面 邊框16内,以及在每個色彩有關之影像包跡ABCDk内丨該 等影像包跡之空間交點會被使用。此意謂的是,被完成的 是一個單一最佳化,而使限制條件丨考慮到所有原色之包跡 ABCDk。該最佳化可決定所有原色共有之活動性矩形有關 ίο的坐標。此等坐標接著被用來依據方程式(18),標定及移位 該等格線。 該最佳化步驟之輸出為格線,彼等可如方程式(24)中所 指明,產生所有原色有關之校準資料。 /W,/) ~> (χ’Κ) k = 1..,Κ xNk (24) 此等資料組係被該翹曲產生器13使用。 15 找範例性實施例中,該色彩和亮度、或僅僅是色彩 之不均勻性校準資料產生,係在該等幾何失真(類型心 已被校正之後方被執行。色彩不均勾性可能是由數種來原 所引起,諸如因投影幾何(梯形角度)所致至觀看表面16之: 徑長度的改變、該微顯示器平板中之瑕疵、等等。 20 京尤一個幾何上已校正之顯示器裝置而言,該測試樣式 影像,會在該邊框18内呈現為—個可能在尺度上與其相^ 配之矩形(亦即,活祕_)。該原點係採用上述活動性= 35 200818114 形之左上角,而非上述觀看表面邊框18之的左上角。該等 被使用之測試樣式,僅僅是上文就單一色彩幾何所使用者 之加色版本;亦即,就校正原色κ而言,該等形貌(圓圈、 條線)將會是有色的。此係與校正橫向色彩所用者相同。就 5壳度而言,可使用的是灰度值(最大白色,半最大)。該術語 色彩』通常是用來識別任何正被校正之色彩分量;其可 能是亮度、RGB或YCbCr的一個分量、或一個在任何其他可 被該感測裝置11測量之色彩空間中的分量。 該感測裝置11,可能是一個相機或一個色彩分析儀(亦 10即,分光计、光度計、等等)。就較大之準確度而言,應被 使用的疋一個光度计或分光計。此等色彩分析儀,可能拍 攝整個影像(亦即,多重之點)或單一點處之資料。該感測裝 置11在佈置上,應使盡可能接近該觀看表面16。彼等單點 色彩分析儀,實際上將被置於該螢幕上面之已知坐標(亦 15即,該專形狀之中心)處,而得到該坐標有關之資料。雖然 多點色彩分析儀和相機,可被置於一個任意之位置處,提 昇之準確度,係藉由佈置彼等使接近該觀看表面16以及盡 玎能接近中心而得到。第9圖係例示一個包含有觀看表面 、單點色彩分析儀92、和多點色彩分析儀93之範例性裝 20置。上述色彩不均勻性有關之校準資料產生器,係與校正 幾何失真者相類似。第10圖係例示色彩非均勻性有關之校 準資料產生器12’的一個範例性實施例。 上述單點色彩分析儀92所拍攝之資料,係包含有所有 被測量之點處的原色值C’I和對應之空間坐標。在 36 200818114 此,Α = 1··Χ可識別正被分析之色彩。上述以C〗指明之原有 色值亦屬已知,因為其測試樣式係被明確界定。此可產生 方程式(25),其係上述用以說明色彩不均勻性失真之格線資 料,而被稱作色彩失真映射圖。 /dc · (xi ,C^) -> (xf ,C°ki) (25) 理應,主思的疋,该4空間坐標,並不會被該色彩不均 勻性失真變更。該原有之色彩值,就一個即定之測試樣 式而言,通常將為固定之值;此意謂的是,所有非 月厅、之像素係屬相同之色彩。有多於一組之測量s = 可能 被元成,其中,每一組係對應於一個具有不同固定色值(諸 1〇如不同位準之飽和度或灰度)之測試樣式。為簡化其記號, 該單一指標亦將涵蓋橫跨如方程式(26)中所示不同之測量 組。 該等空間坐標,就每一組而言係相同。下文之討論係 適用於每一組(亦即,測試樣式)。 15 就上述可為一個相機之多點色彩分析儀93而言,上述 拍攝之資料,係對應於該整個影像。在此情況中,某些影 :處理係需要在得到該格線之前被完成。該等形狀:H (〇破等之域坐標㈨,幻會被計算出。完成 與夕幾何校正期間所使用之擷取和排序步驟相同。^該等中 心外’上述形狀中心處之色值亦會被計算出。校正該色值 可藉由依據方程式(27)平均化錢波上述拍攝之_ 被識別的中心近鄰中之像素色值而得到。 37 20 200818114 c,Hraf、 A =濾波器係數 (27) Γ = 之鄰域 其中,為上述中心之近鄰中的拍攝影像中之色值。 就平均化最接近的四個點而言,該等濾波係數是 = 1/4" = 1 …4 〇 此最終結果是方程式(25)中所界定之格線資料。理應注 5 意的是:⑴被需要的僅有該等域坐標,因為該色彩失真並 不會改變空間坐標;(ii)其中並無漏失資料,因為該影像並 無幾何上之失真,以及係在該觀看表面16内;以及(iii)其中 並不需要計算該感測裝置失真及執行鏈結,因為其中並無 被完成之幾何校正。 10 依據所用之感測裝置的類型和拍攝貨料的格式而定’ 可能需要有一個色彩空間變換,將該色彩資料引領至該顯 示器之色彩空間。舉例而言,一個分光計可能產生依彩度 值而定之資料,而該顯示器裝置和該電子校正單元(其係一 個處理器),係需要RGB值。一個色彩變換可能係藉由一個 15 矩陣乘法或透過一個更複雜之非線性方程式來執行。就一 個色彩空間轉換而言,所有原色有關之格線資料均會被使 用。通常,此種變換係採用方程式(28)中所顯示之形式。 (28) 若並無色彩失真出現,則就一個固定之色彩測試樣式 而言,所有坐標(χΓ,〇處之色值,便應測量為一個常數G。 20 該測量之常數,可能不會等於原有之固定像素值。就大 38 200818114 多數之顯示器而言,該等測量值和原有值係成比例,其中, 該比例常數λ在無色彩失真存在時係呈固定,以及在有色 彩失真存在時會有空間上之變化。所以,該顯示器之色彩 失真映射圖,在表示上可如方程式(29)中所示。k = l...K i = l...MkxNk (23) Here, the number of points per grid may vary depending on the test used and any resampling done. 34 200818114 The test patterns associated with these primary colors may be subject to different projection geometry categories (see Figure 5). Some of the test patterns associated with the primary colors may be completely overlaid on the viewing surface frame 丨8 as in panel (a) of Figure 5, while others may be completely located in panel (b) of Figure 5 Inside the border. When the optimal 5 is performed, the above-mentioned active rectangle is necessarily located in the viewing surface frame 16, and the spatial intersection of the image envelopes in each color-related image envelope ABCDk will be used. . What this means is that a single optimization is done, and the constraints are taken into account the envelope of all primary colors ABCDk. This optimization determines the coordinates of the active rectangle shared by all primary colors. 
These coordinates are then used to calibrate and shift the grid lines according to equation (18). The output of the optimization step is a grid line, which can be used to generate calibration data relating to all primary colors as indicated in equation (24). /W, /) ~> (χ'Κ) k = 1.., Κ xNk (24) These data sets are used by the warpage generator 13. 15 In the exemplary embodiment, the color and brightness, or simply the color non-uniformity calibration data is generated, and the geometric distortion (the type of heart has been corrected) is performed. The color unevenness may be caused by Caused by several origins, such as due to projection geometry (trapezoidal angle) to the viewing surface 16: changes in path length, flaws in the microdisplay panel, etc. 20 Jingyou a geometrically corrected display device In this case, the test pattern image is presented in the frame 18 as a rectangle (ie, a live _) that may be matched to the scale. The origin uses the above activity = 35 200818114 The upper left corner, rather than the upper left corner of the viewing surface bezel 18. The test patterns used are only the additive versions of the user of the single color geometry described above; that is, in terms of correcting the primary color κ, The topography (circles, lines) will be colored. This is the same as the one used to correct the horizontal color. For the 5 shell, the gray value (maximum white, half maximum) can be used. "usually To identify any color component being corrected; it may be a component of luminance, RGB or YCbCr, or a component in any other color space that can be measured by the sensing device 11. The sensing device 11, possibly A camera or a color analyzer (also known as a spectrometer, photometer, etc.). For greater accuracy, a photometer or spectrometer should be used. These color analyzers may Capture the entire image (ie, multiple points) or data at a single point. The sensing device 11 is arranged so as to be as close as possible to the viewing surface 16. Their single point color analyzer will actually be placed The coordinates of the coordinates (also 15 at the center of the shape) are obtained on the screen, and the information about the coordinates is obtained. Although the multi-point color analyzer and the camera can be placed at an arbitrary position, the lifting is performed. Accuracy is obtained by arranging them close to the viewing surface 16 and as close as possible to the center. Figure 9 illustrates a viewing surface, a single point color analyzer 92, and multi-point color analysis. The exemplary configuration of the above-mentioned color unevenness is similar to that of correcting geometric distortion. Fig. 10 is an exemplary example of a calibration data generator 12' related to color non-uniformity. The data captured by the single-point color analyzer 92 includes the primary color value C'I and the corresponding spatial coordinates at all the points to be measured. At 36 200818114, Α = 1··Χ can identify positive The color to be analyzed. The original color value specified by C is also known because its test pattern is clearly defined. This produces equation (25), which is the above-mentioned ruled line for explaining color unevenness distortion. The data is called the color distortion map. /dc · (xi ,C^) -> (xf ,C°ki) (25) It should be the main thinking, the 4 space coordinates, and will not be Color unevenness distortion changes. 
The original color value, for a given test pattern, will usually be a fixed value; this means that all non-moon rooms and pixels are of the same color. More than one set of measurements s = may be formed, where each set corresponds to a test pattern having a different fixed color value (such as saturation or grayscale at different levels). To simplify its notation, this single indicator will also cover different measurement sets across the equations shown in equation (26). These spatial coordinates are the same for each group. The discussion below applies to each group (i.e., test style). 15 In the case of the multi-point color analyzer 93 which can be a camera as described above, the above-mentioned photographed data corresponds to the entire image. In this case, some shadows: the processing system needs to be completed before the grid is obtained. These shapes: H (the smashed domain coordinates (9), the illusion will be calculated. The completion of the extraction and sorting steps used during the eve geometry correction. ^The center of the above-mentioned shape is also the color value at the center of the shape It will be calculated. Correcting the color value can be obtained by averaging the pixel color values in the identified nearest neighbors of the above-mentioned shot according to equation (27). 37 20 200818114 c,Hraf, A = filter coefficient (27) The neighborhood of Γ = is the color value in the captured image in the neighborhood of the above center. For the four closest averaging points, the filter coefficients are = 1/4" = 1 ... 4 The final result is the ruled line data defined in equation (25). It should be noted that: (1) only the domain coordinates are needed because the color distortion does not change the spatial coordinates; (ii) There is no missing data because the image is not geometrically distorted and is within the viewing surface 16; and (iii) there is no need to calculate the distortion of the sensing device and perform the chaining because it is not completed. Geometric correction. 10 Depending on the sensing used Depending on the type of shot and the format of the shot, it may be necessary to have a color space transformation that leads the color data to the color space of the display. For example, a spectrometer may produce data based on chroma values, and The display device and the electronic correction unit (which is a processor) require RGB values. A color transformation may be performed by a 15 matrix multiplication or by a more complex nonlinear equation. In other words, the grid data for all primary colors will be used. Usually, this transformation is in the form shown in equation (28). (28) If no color distortion occurs, then a fixed color test pattern is used. In other words, all the coordinates (χΓ, 〇, the color value, should be measured as a constant G. 20 The constant of this measurement, may not be equal to the original fixed pixel value. On the big 38 200818114 most of the display, these The measured value is proportional to the original value, wherein the proportional constant λ is fixed in the absence of color distortion and will be present in the presence of color distortion There is a spatial variation. Therefore, the color distortion map of the display can be represented as shown in equation (29).

Cli = A(x%y;)C°ki^X(x%y:) = ^ (29) 5 通常,該輸入和測量之色值,將藉由某種以3給定之 已知顯示器色彩函數Λ使相關聯,其係一個如方程式(30) 中所顯示之參數向量。 c0ki = MX,c〇ki) (30) 若有色彩失真存在,則3會有空間上之變化。一個即 定之坐標(<,兄°)處的參數,可藉由分析如方程式(31)中所示 10 之不同組s = U有關的資料而被決定,其中,s指標係明白 顯示出。 s = (31) 每個坐標處需要的,是一個足夠數目之值。該分析可 能藉由對該資料做一配合,而使Λ近似化。同理,其反函 數//1可藉由分析如方程式(32)中所示反方向中之同一資料 15 而被計算出。 (<,火,C’L) —(<,兄°,〇 => Q (32) 該反函數亦取決於某些稱作色彩校正參數之參數又’, 其可由若屬已知之Λ的明白形式來加以決定,或者可使用 一個類似多項函數等特定基底函數,自一個對該反函數資 39 200818114 歡畎射圖,係採用方程式(33)中 g ’該反函 料之配合計算出。就一 數映射圖,係按用太鞋 個線性最小平方配合而 所顯示之形式。Cli = A(x%y;)C°ki^X(x%y:) = ^ (29) 5 Normally, the input and measured color values will be determined by some kind of known display color function given by 3. The association is associated with a parameter vector as shown in equation (30). C0ki = MX,c〇ki) (30) If there is color distortion, there will be a spatial change in 3. The parameter at a given coordinate (<, brother °) can be determined by analyzing the data for a different set of s = U as shown in equation (31), where the s indicator is clearly shown. s = (31) What is needed at each coordinate is a sufficient number of values. The analysis may approximate the Λ by doing a match with the data. Similarly, the inverse of the function / / 1 can be calculated by analyzing the same data 15 in the opposite direction as shown in equation (32). (<, fire, C'L)—(<, brother°, 〇=> Q (32) The inverse function also depends on some parameters called color correction parameters, which can be known from明白 明白 明白 明白 明白 明白 明白 , , , , , , 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 明白 特定 特定 特定 特定 特定 39 The one-number map is in the form shown by the linear least squares fit of the shoes.

失:般性之下,可假定係屬多項式之形式。上述之表料 亦容許調整該最後固定之色度,因為在某些情況中,、可= 必需的或希望的,是降低該輪出處之原有Q值。在此,該 以調整,來增加或降 4參數可藉由一個簡單之標度因素加 低上述反轉之值。 一旦該反函數(在每個中心坐標處)係屬已知,上述可校 正該色彩不均勻性失真之校正色彩映射圖,係由方程式(34) 給定。 (34) 15 該色彩失真之空間變動和校正,係分別由該參數J和 其反函數1,完全加以說明。所以,上述被標記為/^而供校 正用之(基底)校準資料,可依據方程式(35),完全說明了上 述與該等色彩校正參數相關聯之格線資料。 fwck:(35) 就方程式(29)最常見之情況而言,該等參數係依據方程 20 式(36)而產生。 40 200818114 r^\〇 ik (36) 上述之格線可能藉由以-個適當之配合或内插函 新取樣而使較稠密。上述使用與幾何校準者相類似之記。 的新格線,係由方程式(37)給定。 歲 fwck * (Xi i ) ^ik = ikr } (37) k = \...K / = 1.. .Mck x Nck r = 1 此係上述校準資料產生器12,之資料輸出。 上述包括所有子產生器(亦即,第1〇圖中之每一行)之杧 準貝料產生裔12’的完整資料輸出,係由方程式(38)給定。 f’ m ·· (xi,火、—(x’kdi,ykdi) / ·· (Ά,乂、= μ’*)Loss: Under the generality, it can be assumed to be in the form of a polynomial. The above reference also allows adjustment of the last fixed chromaticity, since in some cases it may be necessary or desirable to reduce the original Q value of the turn. Here, the adjustment to increase or decrease the 4 parameter can increase the value of the above inversion by a simple scale factor. Once the inverse function (at each center coordinate) is known, the above corrected color map that corrects the color non-uniformity distortion is given by equation (34). (34) 15 The spatial variation and correction of the color distortion are fully explained by the parameter J and its inverse function 1, respectively. Therefore, the above (base) calibration data, which is labeled as /^, can be used to fully illustrate the above-described grid data associated with the color correction parameters in accordance with equation (35). Fwck: (35) For the most common case of equation (29), these parameters are generated according to equation 20 (36). 40 200818114 r^\〇 ik (36) The above-mentioned ruled line may be made dense by a new sampling with a suitable fit or interpolation. The above uses a similar note to the geometric calibrator. The new grid is given by equation (37). Year old fwck * (Xi i ) ^ik = ikr } (37) k = \...K / = 1.. .Mck x Nck r = 1 This is the data output of the above calibration data generator 12. The complete data output of the above-mentioned sub-generators (i.e., each of the rows in the first graph) is determined by equation (38). f’ m ·· (xi, fire, —(x’kdi,ykdi) / ·· (Ά,乂,= μ’*)

k = l".K i = \...MkxNk (38) j = \^MckxNck r = 1.../? 若並無橫向色彩存在,則該等K個格線係屬相同, 亦即,僅有一個幾何校正格線被計算及輸出。該校準資料 係使輸入至該翹曲產生器13。 1〇 〇誠如所文所提及,該格線資料並非直接被該電子校正 早凡使用。雖然一個袼線表示式,係最常見之格式,就一 個,體貝現體而言係屬效率不彰,主要是因為其需要儲存 大里之貝料(每個像素有關之坐標), 以及無法輕易地被操縱 1 (諸如就;^度方面之改變而言)。某些使用一個查對表之先存 技藝式系統’出於同樣原因並非屬最佳。該翹曲產生器13, 200818114 可將(38)中所界定之格線表示式,變換成該翹曲資料,其係 上述校正的一種他型表示式,其形式就硬體中之應用而言 係屬有效率。若該電子校正單元可直接使用格線資料,則 上述就所有像素重新取樣之格線而言係可被使用,以及不 5再需要以該翹曲產生器13來產生翹曲資料。 口亥魅曲資料係依據上述電子校正單元之資料需求而產 生。一個電子校正單元,可應用一些使用各種架構之幾何 和色彩變換。大多數單元係使用幾何校正有關之反函數映 射圖,以及上述之格線係為一個反函數架構而設計。一個 10有效率之電子校正架構,諸如頒佈之美國專利申請案第us 2006-0050074 A1 號標題為 “System and meth〇d f〇r epresenting a general two dimensional transformation”(表示 一般性二維變換之系統與方法)中所說明者,係基於上述格 線貝料之線性函數表示式。該翹曲產生器13,可將該格線 15資料,轉換成一個函數表示式。第11圖係例示上述翹曲產 生菇13的一個範例性實施例。 一個二維格線(心兄之一般性函數表示式,可寫成 如方程式(39)所示。 u = PiB^y) (39) 方桎式(39)界定了一個橫跨域(^力之二維表面函數。其 2〇係上述基底函數你,咖的一個線性組合,此組合之係 數,稱作表面係數,係由%給定。該等係數為常數,以及 並不會板跨該域而有變化。該等基底函數不必定要呈線 42 200818114 唯有彼等之組合要呈線性。在至少某些 基底函數可能是呈極度之非線性;因此,方程式4等 不之形式’係^夠-般性地代表該校正袼線。所顯 數和彼等之數目,係由該電子校正單元來。、土底函 係在硬體巾實現及被評估。魏曲產01’因為彼等 必需之係數 在 數 函數和對應之表面 Bij(X,y) = X1 yj 口口 可決定該等 個範例性實施例中,該等使用 標,-二 ,可能被寫成如方程式(4〇)所示。一 (’少)ξΧχ>ν = 0·.·χ”7 = α ' (40) 10 由於該等基底函數係屬已知, #咅靖 7m ,丄 表面表示式, 二個如絲式⑷)中赫,自格線值至表面係數之變 ut α. (41) 15 20 上迷表示式之效益,健生自 線值需要就每個像素加以針# 貫.在—個格 可容,料二存之情況中’該等表面係數, 群組,計算該等格線值,·因而,需要 被儲存的表面係數,在數目上係相當地小。 要 地被Γ等係數Γ數目,可決定該等原有格線值可如何精確 =。错由增加係數之數目,亦即 基底函數,將可得到加增之準確度。或者,若該域被區別 43 200818114 為一些小片,便可使用數目較少之基底函數,而使每個小 片使用一個不同之表面函數。該小片結構係依據每個小片 内之顯示失真的嚴格性來建立。此種解決方案,在該結合 表面之複雜性與該失真之匹配方面,可容有較大之彈性。 5 舉例而言,一個失真愈複雜,所使用之小片便愈多。小片 /7=1.. 有關之係數,係被標記為 <。在接下來者之中,不 失一般性之下,將使用一個多項式形式之記號,其可輕易 被調適至另一個基底。該整個表面則會採用方程式(42)中所 指明之形式。k = l".K i = \...MkxNk (38) j = \^MckxNck r = 1.../? If no lateral colors exist, then the K grid lines are the same, ie Only one geometric correction grid is calculated and output. This calibration data is input to the warpage generator 13. 1〇 As mentioned in the article, the grid data is not directly used by the electronic calibration. Although a squall line representation is the most common format, it is inefficient in terms of body and body, mainly because it needs to store the outer material (the coordinates related to each pixel), and it is not easy. The ground is manipulated 1 (such as; in terms of changes in degree). Some pre-existing technical systems that use a checklist are not optimal for the same reason. The warpage generator 13, 200818114 can convert the ruled line expression defined in (38) into the warpage data, which is a type of expression of the above correction, in the form of hardware application. The system is efficient. If the electronic correction unit can directly use the ruled line data, the above-described ruled line for re-sampling all pixels can be used, and it is no longer necessary to use the warp generator 13 to generate warpage data. The commemorative data of the mouth is produced according to the data requirements of the above electronic correction unit. An electronic correction unit that applies geometry and color transformations using a variety of architectures. Most cell systems use geometric correction related inverse function maps, and the above-described grid lines are designed for an inverse function architecture. A 10 efficient electronic correction architecture, such as the issued US Patent Application No. 
2006-0050074 A1 entitled "System and meth〇df〇r epresenting a general two dimensional transformation" (representing a system of general two-dimensional transformations) The method described in the method is based on the linear function expression of the above-mentioned grid-line material. The warp generator 13 converts the ruled line 15 data into a function representation. Fig. 11 is a view showing an exemplary embodiment of the above-described warp-producing mushroom 13. A two-dimensional grid line (the general function expression of the heart brother, can be written as shown in equation (39). u = PiB^y) (39) The square equation (39) defines a cross-domain (^ force A two-dimensional surface function, which is a linear combination of the above-mentioned basis functions, a combination of the coefficients of the combination, called the surface coefficient, given by %. The coefficients are constants, and the plate does not cross the domain. There is a change. These basis functions do not have to be in line 42 200818114. Only the combination of them should be linear. At least some of the basis functions may be extremely nonlinear; therefore, Equation 4 is not in the form 'system^ It is sufficient to represent the correction line. The number of the number and their number are obtained by the electronic correction unit. The earth bottom letter is realized and evaluated in the hard towel. Wei Qu produced 01' because they are The necessary coefficients in the number function and the corresponding surface Bij(X,y) = X1 yj can determine the exemplary embodiments, the use of the standard, -2, may be written as the equation (4〇)示.一('少)ξΧχ>ν = 0···χ"7 = α ' (40) 10 due to these basis functions Is known, #咅靖7m, 丄 surface expression, two as silk (4)) medium ah, from grid line value to surface coefficient change ut α. (41) 15 20 on the expression of the benefits, health from The line value needs to be pinned for each pixel. In the case of a cell, the surface coefficients, the group, calculate the grid values, and thus the surface coefficients that need to be stored. , in terms of the number is quite small. To determine the number of such factors, the number of the original grid lines can be accurate =. The number of the increase factor, that is, the basis function, will be increased. Accuracy. Or, if the field is distinguished by 43 200818114 as a small piece, a smaller number of basis functions can be used, and each piece uses a different surface function. The piece structure is based on the display distortion in each piece. The rigor of the solution is established. This solution allows for greater flexibility in matching the complexity of the combined surface to the distortion. 5 For example, the more complex a distortion, the more small pieces are used. Small piece /7=1.. related coefficient The system is marked as <. In the following, without loss of generality, a polynomial form of the token will be used, which can be easily adapted to another substrate. The entire surface will be in equation (42). The form specified.

UJ i成·丄x,j = l』y (42)UJ i成·丄x,j = l』y (42)

/7 = 1...P (尤,>〇€小片;7 10 一個單一表面係對應於一個單一小片,其係等於上述 之整個輸出影像(域)。一個範例性小片區分,係顯示在第12 圖中。 該小片區分可被初始化至某一起始結構,諸如成4x4 對稱排列的16個小片。該等小片之排列(亦即,小片之數目 15 和每個小片之邊界),係被稱作小片幾何條件D,其係採用 方程式(43)中所指明之形式。 (43) 小片尸= ((4/)14 給定一個小片幾何條件,該等係數可使用上述依據方 程式(38)之資料的線性最小平方配合來加以計算。該配合應 使受到限制,藉以確保在小片邊界處,其表面係呈連續性。 44 200818114 一旦該表面被決定,便會有一項誤差分析被完成,而如方 程式(44)所顯示,使該等坐標網值與該等計算之值相比較。 Error{ =| u{ (44) 該等誤差(ΕιτοιΟ值,係使與一個容許度位準相比 較。若該最大誤差小於或等於該容許度位準,亦即, 5 nipc(£m^)<£max,該等表面係數便會被保留,以及係自該翹 曲產生器13輸出,而作為上述之翹曲資料。若該最大誤差 係較大,該小片幾何條件,便會進一步被細分而加以精提, 以及該等係數被重新計算並重新做誤差分析。 方程式(38)中之表面表示式,可被寫成如方程式(45)所 10 示。/7 = 1...P (especially, > small piece; 7 10 A single surface system corresponds to a single small piece, which is equal to the entire output image (domain) described above. An exemplary small piece is distinguished and displayed In Fig. 12. The slice distinguishing can be initialized to a starting structure, such as 16 small pieces arranged symmetrically in 4x4. The arrangement of the small pieces (i.e., the number of small pieces 15 and the boundary of each small piece), The system is called the patch geometry D, which is in the form specified in equation (43). (43) Small piece of corpse = ((4/)14 Given a piece of geometrical condition, the coefficients can be used according to the above equation ( 38) The linear least squares fit of the data is calculated. The fit should be limited to ensure continuity at the boundary of the die. 44 200818114 Once the surface is determined, an error analysis is completed. And, as shown in equation (44), compare the coordinate network values with the calculated values. Error { =| u{ (44) The errors (ΕιτοιΟ values are compared with an allowable level Compare if the maximum error is less than Or equal to the tolerance level, that is, 5 nipc (£m^) < £max, the surface coefficients are retained and output from the warpage generator 13 as the warpage data described above. If the maximum error is large, the small piece geometry will be further subdivided and refined, and the coefficients will be recalculated and the error analysis repeated. The surface representation in equation (38) can be written as As shown in Equation (45).

uk(x^y) = Y,ai/PxlyJ i,j ^k(^y) =Uk(x^y) = Y,ai/PxlyJ i,j ^k(^y) =

iJ (u,v) = YjCy PUiVj (45)iJ (u,v) = YjCy PUiVj (45)

iJiJ

k = l...K P = L"Pk i = Q…Lkx,j = 0…Lky 理應注意的是,上述格線表示式中之仏指標不再需 要,因為該函數形式係就整個空間而不僅僅是在一個分立 之坐標組處加以界定。該等指標(〖,·/)如今可給定該等指數, 或更一般性地識別上述之基底函數。該指標k可識別該等原 15 色,以及該指標P可識別上述之小片。該表面係就該域坐標 所在之小片加以評估。該小片排列和基底函數之數目,可 就該等原色而有不同。上述格式之額外變動,舉例而言, 45 200818114 可藉由改變每個小片之基底函數而得到。上述幾何校正有 關之域空間’已被標記為,以及其係對應於該輸出影 像二間(在一個反函數架構中),以及該範圍空間已被重新標 為(’)’以及其係對應於該輸入影像空間。 5 就該色彩校正而言,該域空間已被重新標記為(w,v)。 /色校正係針對一個在幾何上屬正確之影像而運作。此 思明的是,該色彩校正勢必要在上述具有坐標空間(w,v)之 輸入影像已就幾何做校正而被翹曲之前對其施加。若該電 子枝正單元,在該輸入影像已就幾何做校正而被翹曲之 10别’應用該色彩校正,則上述之係數,便需要就此種應用 所校正之新排序而做調整,亦即,需要一個重新排序之步 驟。在此情況中,該等色彩參數係在該(X,)空間中被界定。 首先’可得到一個新格線力A,其如方程式(46)中所示,係 在來自上述表面之空間(x,j〇中被界定。 7 (46) (xtk,y,)—又\ 15 該格線接著可如上文所提及地加以配合,以及該等係 數會被計算,該域空間如今即為上述之輸出影像空間。該 等色彩校正表面係數,係使用相同之記號。該誤差分析如 今將使用上述被重新排序之格線。 該翹曲產生器13之最後輸出,係方程式(47)中之係數組 20 (若有必要就排序加以調整),其係集體形成上述之翹曲資 料。 46 (47) 200818114k = l...KP = L"Pk i = Q...Lkx,j = 0...Lky It should be noted that the above-mentioned rule of the grid expression is no longer needed because the function form is the entire space and not only It is only defined at a separate coordinate group. These indicators (〖, / /) can now be given the indices, or more generally the above-mentioned basis functions. The indicator k identifies the original 15 colors, and the indicator P identifies the above-mentioned small pieces. The surface is evaluated for the small piece of the domain coordinates. The number of patch arrangements and basis functions can vary with respect to the primary colors. Additional variations to the above format, for example, 45 200818114 can be obtained by changing the basis function of each tile. The above-mentioned geometric correction related domain space 'has been marked as, and its system corresponds to the output image two (in an inverse function architecture), and the range space has been relabeled as (')' and its system corresponds to This input image space. 5 For this color correction, the domain space has been re-marked as (w, v). The color correction is performed for a geometrically correct image. It is believed that the color correction potential must be applied before the input image having the coordinate space (w, v) has been warped before being corrected for geometry. If the electronic branch is positive, the color correction is applied when the input image has been corrected for the geometry and the warp is applied, then the above coefficients need to be adjusted for the new order corrected by the application, that is, Need a reordering step. In this case, the color parameters are defined in the (X,) space. First, a new grid force A can be obtained, which is defined in the space (x, j〇 from the above surface) as shown in equation (46). 7 (46) (xtk, y,) - again\ 15 The ruled line can then be matched as mentioned above, and the coefficients are calculated, which is now the output image space described above. The color correction surface coefficients use the same mark. The analysis will now use the above-mentioned reordered grid lines. The final output of the warpage generator 13 is the coefficient group 20 (adjusted if necessary) in equation (47), which collectively forms the warp described above. Information. 46 (47) 200818114

k = L..K p = x …Pk i = Q“.Lkx,j = Q.JLky 項目:係包含界定該原色k有關之小片幾何條件的所 有貧訊。該(β,6)資料择卜扯、 、 貝卄係上述可校正類塑1至4之幾何翹曲資 料或變換,以及?係上述可土七 這了抆正類型5之失真的色彩翹曲或 變換。 5 錄她曲單元15係、-種處理器,以及係作用為該系 統的-個電子校正單元。術語『電子校正單元』,本說明書 係與術語『數位赵曲單元』交換使用。在實際使用中,該 數位魅曲單兀15,可施加麵曲資料給該數位輸入影像(視 ι〇 Λ) ’藉以預失真或翹曲該等輸人影像。該等輸人影像,係 在二間性空間和色彩空間兩者中被翹曲。該空間性輕曲在 凡成上,係依據該幾何翹曲,以及該色彩翹曲在完成上, ,依據該色彩㈣。該預失真可被建立,藉以消除該顯示 °。之失真,而產生一個顯示在該觀看表面16上面之無失真 影像。 一 15 上述可校正幾何和色彩不均勻性兩者之數位輕曲單元 的個範例性實施例,係顯示在第13圖中(此圖中省略了 卞)5亥數位勉曲單元15,包括兩個主要區塊:一個應用 成何輕曲之第一區塊(亦即,在幾何上鍾曲該等輸入影 # 個可僅就色彩不均勻性做权正而勉曲色彩空間中 之輪入影像的第二翹曲區塊。在此,該色彩校正係發生在 4幾何校正之後,然而,此討論可對反向順序輕易被採用。 47 200818114 當-㈣定之校正不需要時,兩者區塊是可被繞過。每個 區塊I、有兩個成分:_個表面評估成分,其可評估每個 像素(w,有關每個原色(此指標係被省略)在方程式⑽ 中斤―丨疋之表面夕項式,藉以產生該等必需之坐標 k’v〜} ’和_個像素產生成分,其實際上係使用該等必需 之坐標:來計算該像素色值ς。就該幾何校正而言,該像 素產生$個渡波步驟,其中,一個具有指明為〜X··『之 預先心的係數之濾波器,係應用至上述正被處理之當前 像素^),的像素之某些近鄰。 10 15 十 某二隋况中,該等濾波係數係在該系統之外部 被片及會被狀魏她曲單元15内。就該色彩不 句句 而"&像素產生將會取用來自上述幾何麵曲 之影像的像素值,以及應用方程式(33),來決定其新的色 值。該像素產生步驟係總結在方程式(48)中 之鄰域 ,/e「 〜 C’i = Z又WQ),r = l;j( (48) r 該等步驟係就每個原色被執行。該G係表示幾何校正 後之中間色值。 該等渡波和色彩桉正古# 4、^ 仅止方私式之細節,係取決於上述硬 體之架構。-個簡單之缝器,可能僅僅是平均化四個最 近之近鄰點,在該情況中m -個 可能使用-個橢圓近鄰,彼等之形狀係取決於上述表面之 1〇⑷acobian(圖像雅可比),以及該等滤波器係數,可使用 骑之遽波器產生演算法而得到。在此情況中,該等近鄰 48 20 200818114 坐標(、r,vyer), 、 J月匕有需要被用來評估Jacobian(雅可比)。 同理,—^固释口口 間早之色彩校正,係涉及僅使用一個如方程式 (49)中所界定之線性校正。k = L..K p = x ... Pk i = Q".Lkx,j = Q.JLky Item: contains all the poor information defining the geometry of the patch associated with the primary color k. The (β,6) data selection Twist, Bessie, the above-mentioned geometric warping data or transformation of the calibratable plastic 1 to 4, and the color warping or transformation of the above-mentioned turbidity type 7 distortion of the correct type 5. 5 recording her music unit 15 System, a kind of processor, and an electronic correction unit functioning as the system. The term "electronic correction unit", this specification is used interchangeably with the term "digital Zhaoqu unit". In actual use, the digital enchantment Single 兀 15, can apply facial music data to the digital input image (as ι〇Λ) 'by pre-distortion or warping the input images. These input images are in the two-dimensional space and color space. The warp is warped. The spatial softness is based on the geometric warpage, and the color warping is completed, according to the color (4). The predistortion can be established to eliminate the display °. Distortion to produce an undistorted image displayed on the viewing surface 16 An exemplary embodiment of the digital light flexing unit of the above-mentioned correctable geometric and color non-uniformity is shown in FIG. 13 (omitted in this figure) 5 Hai digit distortion unit 15, including Two main blocks: one that is applied to the first block of the light (that is, the geometric input of the input shadows can only be used to correct the color unevenness and distort the wheel in the color space Entering the second warped block of the image. Here, the color correction occurs after 4 geometric corrections, however, this discussion can be easily applied to the reverse order. 47 200818114 When -(4) calibration is not required, both Blocks can be bypassed. Each block I has two components: _ a surface evaluation component that evaluates each pixel (w, for each primary color (this indicator is omitted) in equation (10) The surface of the 丨疋, by which the necessary coordinates k'v~} ' and _ pixels are generated, which actually use the necessary coordinates: to calculate the pixel color value 就. 
In terms of geometric correction, the pixel generates $ wave steps, one of which Some of the neighbors of a pixel having a coefficient of ~X·· "the pre-centered coefficient applied to the current pixel being processed ^). 10 15 In the case of the first and second, the filtering The coefficients are placed outside the system and will be in the shape of the unit. In terms of the color, the "pixel generation will take the pixel values from the image of the above geometric surface, and the application Equation (33), to determine its new color value. The pixel generation step is summarized in the neighborhood of equation (48), /e "~ C'i = Z and WQ), r = l; j (48 r These steps are performed for each primary color. The G system represents the geometrically corrected intermediate color value. The wave and color of the 桉正古# 4, ^ only the details of the private party depends on the structure of the above hardware. - A simple seamer, perhaps just averaging the four nearest neighbors, in which case m - may use - elliptical neighbors, the shape of which depends on the surface of the above 1 (4) acobian (image ya Comparable), and the filter coefficients can be obtained using a rider chopper generating algorithm. In this case, the neighbors 48 20 200818114 coordinates (, r, vyer), and J 匕 need to be used to evaluate Jacobian. For the same reason, the early color correction between the mouth and the mouth of the solidification is related to using only one linear correction as defined in equation (49).

Ci = ^i2 1 (49) 一 一個複雜之色彩校正,係可能被使用,其係使 用個如方程式GO}中所界定之立方多項式。 ^ = ^(ζ)3+Α·3(ς)2+Λ,2(^) + Λ, (5〇) 該等色彩參數内和表面係數會被計算,崎知該數位 翹曲單元15之架構細節。 該數位趣曲單元15之最後結果,係上述使用-個被用 來指明所有原色分量之向量記號重寫在下文之方程式⑻ 10中的方程式(1)以數學方式說明之校正。 輸入影像〇輸出影像 (W,,V,,A) ,乃,e'·) (51) 該龜曲或預補償之輸出影像,係輸入至該顯示器裝置 (未示出),其中,其係投射至該觀看表面16上而在視覺上 無失真,因而完成上述自動化之校準和校正。一旦該校準 和校正程序完成,正常之(無測試樣式)影像和視訊,便^被 15 傳送給該顯示器裝置。 該多色彩幾何校準和校正,業已配合橫向彩色校正加 以討論。然而,其係可被用來校準及校正原色分量在其中 有幾何失真之任何失真。其他應用係包括:欠=所致之 失真;和光學組件之由於彼此相對佈置或—個背投影顯示 20器裝置中相對底盤或外殼而佈置加上就該等色彩分=而 49 2〇〇8l8li4 不同放大率之多重微顯示器裝置所致的欠收歛。 〜在投影系統中,該色彩校準和校正在完成上,係針對 個在幾何上枚正之影像。此意謂的是,該色彩校正亦考 $慮到上述幾何翹曲本身所導入之任何不均勻性。一個在幾 何上趣曲之影像,將具有一些内含因標定和渡波程序所致 ^同色彩或亮度内容的不同區域。事實上,—個區域被 禚定愈多,亮度和色彩中之變化便愈大。此係藉由幾何翹 曲後所做之色彩校正而自動被補償。所以,該系統可自動 補谓上述幾何翹曲程序所致之色彩不均勻性。 10 在另一個適配體中,該系統可使整合在一個單一電路 内’而传到-個數位校準和翹曲單元。該等校準資料和趣 曲產生|§12和13,係-些可在任何處理器上面被執行之組 件。該測試影像產生器14,亦可由上述處理器所輸出的一 組儲存影像來取代。使用上述硬體内的一個内嵌式處理 15器,可對上述之整個校準和校正程序,給予一個單一電路 解決方案。此外,該硬體可連同該相機,使整合在一個顯 示器裝置内,藉以得到一個自我校準式顯示器裝置。在此 適配體中,僅需要有一個處理器,來接收來自至少一個影 像感測裝置之感測資訊,以及計算該等顯示器失真,而產 20生一些預補償映射圖,亦即,翹曲映射圖和色彩映射圖(亦 被稱作幾何翹曲和色彩翹曲),以及將該等預補償映射圖, 應用至彼等輸入影像資料,而使上述觀看表面上所成之顯 示影像,大體上將無失真。然而,在其他之情況中,使用 一個以上之處理器,可能更加有效率。因此,實現本說明 50 200818114 書所說明之實施例,係需要至少一個處理器。 各種類型之感測器,可使整合進該顯示器裝置(而非或 連同該等相機)内,藉以作用為該感測裝置11。在一個顯示 在第14圖中之範例性實施例中,一個感測器143,係一個距 5 離感測裝置,其係獨立地被使用,或與一個相機142—起使 用,藉以測量該觀看表面141上的某一定數目之點的距離。 該平面並不需要呈平坦狀。由該等測量之距離和該等感測 距離彼此相對之角度,該等相機142與觀看表面141之相對 角度會被計算出。此外,上述若不呈平坦之螢幕的形狀, 10 亦可使用此種方法來計算。在第14圖中所顯示之範例中, 該螢幕之右側上面的較密集線條,將指明該感測器143較接 近上述螢幕之一般觀測(normal view),而左側上面的較不 密集之樣式,係指明離該左側上面的一般觀測較遠。各種 類型之感測器143,可被使用而包括紅外線感測器、等等。 15 在此範例性實施例中,要描繪該顯示器螢幕(亦即,觀看表 面141),並不需要一個實體結構,以及該相機142係可任意 被佈置。 另一個範例性實施例,係構成一個具有動態校準和校 正之自我校準式顯示器裝置,因而該校準和校準程序,不 20 需要外部資源,便可隨時被執行來校正失真。此可容許校 正長期可能改變之失真,諸如一個投影器有關之梯形失 真,或一些類似RPTV等背投影顯示器裝置之場校準。該校 準系統,係佈置在該RPTV之外殼或底盤内,藉以在此情況 中提供自我校準。其他長期改變之重要失真,為光學組件 51 200818114 内因實體運動、去 b、和^度所致之變動。舉例而言,在 -個背投影顯示器裝置中,一片面…玄 在 重量或π声所站 片面鏡之曲率,可能會因其 正。者::❿略有變動’此將需要動態之校準和校 5 10 15 20 到時置被啟通’或者該失真中之改變被偵測 Λ又;和才义正系統便會被執行。 顯示器裝置有關之領域中,諸如電視系統, 重要^、、、_裝£可用’動態式校準和校正便變得特別 & ’在該起始之校準和校正過後,未來之失直係 二,組件中長期之小量變動所致。在一個受控之條件背景 期=如製造功,該數_曲單元,可被用來模 長期在現場中被預期之失真,i=1...N。此等失真接著可被 準及校正,以便使用前文所提及之範例性實施例中载 :的系統;然而’兩個電子校正單元可能被使用,= 果擬失真,以及另一個用來測試該等自動產生之校正資 枓。該等N個測試情況有關之校正有關的翹曲資料,可使儲 存在該顯示器裝置内。在現場中及長期以來’隨著小量失 真之發展,由該等N個翹曲校正,—個最能校正該失直者便 ,被選定。因此,該整個系統並無必要,僅有該數位趣曲 早凡’需要被建立在該顯示器裝置中,因為校正係在製造 期間被完成,以及該觀組校正㈣,係儲存在該顯示器農 置中。為自動化上述適當之校正資料的選擇,該顯示器屏 框内之感測器,可被用來债測彼等特殊化之測試樣式。上 述達成失真之最佳偵測有關的影像測試樣式因而會被栽 入。此程序可在該顯示器裳置被啟通而得到動態之校正和 52 200818114 校準時被執行。 誠如第15和16圖中所示,在一個範例性實施例中,該 校準系統係被適配來找出一個觀看表面上之最佳投影器聚 焦。此在元成上係藉由在该觀看表面上顯示一種測試樣 5式,諸如-組特定數目之平行線條。該影像接著會被拍攝, 以及會被該電子校正單兀掃描,藉以找出該等測試樣式中 之暗區與焭區間的對比。該投影器聚焦接著係使移位,以 及該對比會被重新測量。此將會繼續直至最大之對比被找 到為止。該最大之對比係對應於最佳之聚焦。此係以較差 之來焦顯不在該觀看表面151上,並且以較佳之聚焦顯示在 邊觀看表面161上。此同一技術可被用來聚焦該感測裝置。 一些具有銳緣之實體標記,諸如該顯示器銀幕之屏框(亦 即’觀看表面)會被拍攝,以及就最大之對比做分析。若有 必要,一個適當加色之測試樣式,可被顯示來提高該等標 15記與背景間之對比。該感測裝置聚焦接著係使移位,以及 5亥對比會被重新測量。該最大對比之設定,可提供該感測 裝置有關之最佳聚焦。該感測裝置係在聚焦該顯示器裝置 之前使聚焦。 在另一個範例性實施例中,有部份顯示在第17和18圖 20中’該校準系統係被使用在一個分別具有曲面螢幕171和 W1和多重投影器1至3之顯示器裝置。該等投影器係跨越上 述曲面螢幕171和181之整個區域,以及彼等係受到同一電 子組件之控制。該幾何校準係就每個投影器1至3加以完 成,而使映射至該等螢幕171和181之對應區域。此外,該 53 200818114 幾何校準可轉動及平移每個投影器影像,而使其與一個田比 連之投影器影像相綴合。特言之,在該等交疊之區域中, β亥專對應之像素’係覆盖在彼此之頂部上面。理應注竟的 是,該等螢幕171和181上面來自不同投影器丨至3之映射 5圖,係具有不同之入射角,以及係依該等螢幕171和181之 曲線而改變。上述具有或得到該等曲面螢幕171和181如翹 曲資料所表示之映射圖的電子組件,可校正該等橫跨螢幕 171和181之角變動。 除幾何校準外,每個投影器1至3之色彩校準完成上, 1〇係為確保在所有之投影器區域内,有視覺上相同之色彩特 性。该電子組件亦被適配來區分投影器1至3中或之間的像 素色彩和亮度,以使橫跨該等曲面螢幕171和181,達成一 個均勻之冗度和色彩映射。理應注意的是,任何數目之個 別投景均可被使用,以及該等交疊之區域,可在應用該 15等相同的校準技術之際,在許多投影器中被共用。 就一個曲面螢幕上之投影而言,該聚焦問題總是至關 重要。此源自之事實是,一個投影器係具有平坦之聚焦平 面,而該螢幕係呈彎曲狀,以及就此而論,該螢幕之不同 部分,係具有來自任一聚焦平面之不同距離。注視該螢幕 20的某一部分,該影像可能看來要比該螢幕之另一部分更聚 
焦。在以單一投影器投射之際,為克服此種問題,有一種 技術可被用來使失焦極小化,其一個範例性實施例,係顯 示在第19圖中。在此一情況中,該校準系統在佈置上述投 影聚焦平面之方式上,可使自曲面螢幕191至聚焦平面193 54 200818114 之一系列法線的距離平方之和值為最小。若該螢幕希望其 中心比側部更聚焦,該等使該螢幕之中央部分至該聚焦平 面相連接之節段,便給以更大之權量。 在此一情況中,該最佳之聚焦平面,可基於上述螢幕 5 之已知形狀預先加以計算。上述最佳聚焦平面與該螢幕之 交點,可產生該螢幕上面有最佳聚焦之影像的點,以及接 著可得到一個最大之對比。在計算出且已知之最佳平面和 最大對比點下,一個與第16圖中所用相類似之影像測試樣 式,會投射至該螢幕上面,以及該影像接著會被拍攝,以 10 及就對比做分析。若該拍攝影像之最大對比位置,與先前 決定之最大對比點相一致,而在某容許度内,則該投射之 影像便係位於該最佳之聚焦平面上。若最大對比點不與先 前決定之最大對比點相一致,則該投影器聚焦便會被調 整,以及該程序會一再重複,直至有一個匹配得到為止。 15 理應注意的是,此種技術係可應用至一些呈一維彎曲(例 如,圓筒形、零空間曲率、等等)或二維彎曲(例如,球形、 非零空間曲率、等等)之銀幕。 在另一個部份顯示在第20圖中之範例性實施例中,除 早已解釋過之校準外,該聚焦問題係藉由多重投影器以不 20 同角度投射影像來應付。誠如此圖所顯示,藉由該等在特 定角度下之投影器照耀曲面螢幕201的特定區域上面,該失 焦問題大體上可被消除。該等角度係使每條投射軸線大體 上為其投射至之對應螢幕部分的法線,以及每片聚焦平 面,幾乎與曲面螢幕201之覆蓋部分的中心正切。為最佳化 55 200818114 苐19圖中所示相同之技術。或 可使保持正切於該螢幕。在此 每個節段之聚焦,係可使用 者,每個聚焦節段之中心, 範例性實施例中,該校準系統可匹配多重投影器相重疊之 區域的聚焦加上像錢何條件、亮度、和色彩,藉以在該 螢幕201上面,產生_個平滑且無縫而聚焦之影像。此技術 之結果所致’輪曲料著該等《、平面錢幕切線間之 角度的減少,而變得其嚴重性小很多。 -個可就多重色彩幾何校準―個感測裝置之系統已做 T討論° ^理’該系統可被用來校準上述感測裝置中之色 10 彩(非幾何)失真。使用一 個校準過及校正過之顯示器裝置, 一些固定之色彩樣式,係顯示在該榮幕上面,以及會被該 感測裝置記錄;校準該顯示器色彩失真所用之同-樣式係 可被使S。知道該財之I彩值,便可得到射程式(25) 相類似之相機色彩映射圖。由該色彩映射圖,便可決定出 !5上述相機有關之色彩校正參數,其在有色彩失真存在時, 將會有空間上之變化。該校正有關之模型,舉例而言,可 以是一個線性最少平方來配合。該等校正參數可完全特性 化該相機之色彩失真有關的校準資料。 該色彩校正已依據原色和亮度呈現出。該系統可被適 20配來處理一個任意色彩之校正和調整。彼等測試樣式或各 種色彩(不僅是原色或灰度),可被用來在一個類似方程式 (31)之方式中,得到上述顯示器之色彩映射圖,其係顯示在 方程式(52)中。 (52) 56 200818114 —在此’每個e可產生—個具有所有分量而不僅是一個 特疋之原色刀里的色彩向量。該組被使用之色彩,可被選 擇作為上述整個色彩空間中之向量的某種重新取樣。該反 函數映射圖,如今係以方程式(53)來表示。 ^=7;>(λ-,〇Τ) = Σα\^(〇7),, = 1...λ λΊ x’uj (53) 在此,每個色彩參數係一個長度尤之向量(原色之數 目)。依據先前之記號··Ci = ^i2 1 (49) A complex color correction, which may be used, uses a cubic polynomial as defined in the equation GO}. ^ = ^(ζ)3+Α·3(ς)2+Λ,2(^) + Λ, (5〇) These color parameters and surface coefficients are calculated, and the digital warping unit 15 is known. Architectural details. The final result of the digital content unit 15 is that the vector symbol used to indicate all the primary color components is overwritten by the equation (1) in Equation (8) 10 below to mathematically correct the correction. Input image 〇 output image (W,, V,, A), ie, e'·) (51) The turtle or pre-compensated output image is input to the display device (not shown), wherein Projected onto the viewing surface 16 without visual distortion, thus completing the automated calibration and correction described above. Once the calibration and calibration procedures are completed, normal (no test style) images and video are transmitted to the display device. This multi-color geometry calibration and correction has been discussed in conjunction with lateral color correction. However, it can be used to calibrate and correct any distortion in which the primary color components have geometric distortion. Other applications include: distortion caused by under-representation; and optical components are arranged relative to one another or a rear projection display relative to the chassis or casing in the device, plus the color points = 49 2 〇〇 8l8li4 Under-convergence due to multiple microdisplay devices of different magnifications. ~ In the projection system, this color calibration and correction is done on a geometrically aligned image. This means that the color correction also takes into account any non-uniformity introduced by the geometric warping itself. An image of the interesting music on the geometry will have some different areas containing the same color or brightness content due to the calibration and the wave program. In fact, the more the area is set, the greater the change in brightness and color. 
This is automatically compensated for by color correction made after geometric warping. Therefore, the system can automatically complement the color unevenness caused by the above geometric warping program. 10 In another aptamer, the system can be integrated into a single circuit and passed to a digital calibration and warping unit. These calibration data and interesting songs are produced|§12 and 13, which are components that can be executed on any processor. The test image generator 14 can also be replaced by a set of stored images output by the processor. Using a built-in processor in the above-described hardware, a single circuit solution can be given for the entire calibration and calibration procedure described above. In addition, the hardware can be integrated with a camera to be integrated into a display device to provide a self-aligning display device. In this aptamer, only one processor is needed to receive the sensing information from the at least one image sensing device and calculate the distortion of the display, and generate some pre-compensation maps, that is, warpage. Maps and color maps (also known as geometric warpage and color warping), and applying the pre-compensation maps to their input image data to make the displayed image on the viewing surface generally There will be no distortion in the upper. However, in other cases, using more than one processor may be more efficient. Thus, to implement the embodiments described in the teachings of the present specification 50 200818114, at least one processor is required. Various types of sensors can be integrated into the display device (rather than or in conjunction with the cameras) to act as the sensing device 11. In an exemplary embodiment shown in Fig. 14, a sensor 143 is a 5-way sensing device that is used independently or in conjunction with a camera 142 to measure the viewing. The distance of a certain number of points on surface 141. This plane does not need to be flat. The relative angles of the cameras 142 to the viewing surface 141 are calculated from the measured distances and the angles at which the sensing distances are opposite each other. Further, if the shape of the screen is not flat, 10 can be calculated by such a method. In the example shown in Figure 14, the denser lines on the right side of the screen will indicate that the sensor 143 is closer to the normal view of the screen, and the less dense pattern on the left side, Indicates that it is far from the general observation above the left side. Various types of sensors 143 can be used to include infrared sensors, and the like. In this exemplary embodiment, the display screen (i.e., viewing surface 141) is depicted, and does not require a physical structure, and the camera 142 can be arbitrarily arranged. Another exemplary embodiment constitutes a self-calibrating display device with dynamic calibration and calibration so that the calibration and calibration procedures can be performed at any time to correct for distortion without requiring external resources. This allows for correction of distortions that may change over time, such as a projector-related trapezoidal distortion, or some field calibration of a rear projection display device such as an RPTV. The calibration system is disposed within the outer casing or chassis of the RPTV to provide self-calibration in this situation. Other important distortions of long-term changes are the optical components 51 200818114 Internal changes in physical motion, deb, and ^ degrees. 
For example, in a rear projection display device, the curvature of a one-sided mirror may be due to the weight or the π sound. :: ❿ slightly changed 'This will require dynamic calibration and calibration 5 10 15 20 when the time is turned on' or the change in the distortion is detected Λ again; and the righteous system will be executed. In the field of display devices, such as television systems, important ^, , , _ can be used for 'dynamic calibration and calibration becomes special & 'after the initial calibration and correction, the future is not straight, A small amount of change in the component in the long term. In a controlled conditional background period = such as manufacturing work, the number-curve unit can be used to model the expected distortion in the field for a long time, i = 1...N. These distortions can then be corrected for use in order to use the system described in the exemplary embodiments mentioned above; however, 'two electronic correction units may be used, = pseudo-distortion, and the other is used to test the Automatically generated calibration assets. The correction related warpage data relating to the N test conditions can be stored in the display device. In the field and for a long time, with the development of a small amount of distortion, the correction of the N warps was made, and the one that best corrected the loss was selected. Therefore, the entire system is not necessary, only the digital music is required to be built in the display device, because the calibration system is completed during the manufacturing process, and the viewing correction (4) is stored in the display farm. in. To automate the selection of appropriate calibration data as described above, the sensors within the display screen can be used to test their specific test patterns. The image test pattern associated with the best detection of the distortion described above is thus implanted. This procedure can be performed when the display is turned on for dynamic correction and 52 200818114 calibration. As shown in Figures 15 and 16, in an exemplary embodiment, the calibration system is adapted to find the best projector focus on a viewing surface. This is done by displaying a test pattern on the viewing surface, such as a set of specific numbers of parallel lines. The image is then captured and scanned by the electronic calibration unit to find a comparison of dark areas and 焭 intervals in the test patterns. The projector is focused and then shifted, and the contrast is re-measured. This will continue until the largest comparison is found. This maximum contrast corresponds to the best focus. This is inferior to the viewing surface 151 and is preferably displayed on the side viewing surface 161 with better focus. This same technique can be used to focus the sensing device. Some physical markers with sharp edges, such as the screen frame of the display screen (i.e., the 'viewing surface'), are taken and analyzed for maximum contrast. If necessary, a properly colored test pattern can be displayed to improve the contrast between the 15 marks and the background. The sensing device is focused and then shifted, and the 5H contrast is re-measured. This maximum contrast setting provides the best focus associated with the sensing device. The sensing device focuses the focus prior to focusing the display device. In another exemplary embodiment, some of them are shown in Figures 17 and 18 of Figure 20. 
In another exemplary embodiment, partially shown in Figures 17, 18, and 20, the calibration system is used in a display device having curved screens 171 and 181 and multiple projectors 1 to 3. The projectors together span the entire area of the curved screens 171 and 181 and are controlled by the same electronics. Geometric calibration is performed for each of the projectors 1 to 3, which are mapped to corresponding regions of the screens 171 and 181. The geometric calibration also rotates and translates each projector image to align it with the neighboring projector images; in particular, in the overlapping regions the corresponding pixels are laid on top of one another. Note that the projections from the different projectors 1 to 3 onto the screens 171 and 181 have different angles of incidence, which vary with the curvature of the screens 171 and 181. The electronics, holding the maps (i.e., the warp data) associated with the curved screens 171 and 181, correct for these angular variations across the screens. In addition to the geometric calibration, color calibration is performed for each of the projectors 1 to 3 to ensure visually identical color characteristics across all of the projector regions. The electronics are also adapted to adjust pixel color and brightness within and between the projectors 1 to 3 to achieve uniform brightness and color mapping across the curved screens 171 and 181. Any number of individual projectors can be used, and the overlapping regions can be shared among many projectors while the same calibration techniques are applied.

For projection onto a curved screen, focus is always a critical issue. This stems from the fact that a projector has a flat focus plane while the screen is curved, so different portions of the screen lie at different distances from any chosen focal plane; one portion of the screen may therefore appear more focused than another. To address this when projecting with a single projector, a technique can be used to minimize the defocus; an exemplary embodiment is shown in Figure 19. In this case, the calibration system places the projector focus plane so as to minimize the sum of the squared distances, measured along a series of normals, from the curved screen 191 to the focus plane 193. If it is desired that the center of the screen be in better focus than the sides, the segments connecting the central portion of the screen to the focus plane are given greater weight. The optimal focus plane can then be computed in advance from the known shape of the screen. The intersection of the best focus plane with the screen yields the points on the screen with the best-focused image and, in turn, the maximum contrast. With the optimal plane and maximum-contrast points computed and known, an image test pattern similar to that used in Figure 16 is projected onto the screen, and the image is then captured and analyzed for contrast. If the maximum-contrast position in the captured image coincides with the previously determined maximum-contrast point, to within a given tolerance, the projected image lies on the optimal focus plane. If it does not, the projector focus is adjusted and the procedure is repeated until a match is obtained.
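The weighted minimization of squared screen-to-plane distances described above amounts to a weighted least-squares plane fit. The sketch below is illustrative only; the sample points, the Gaussian weighting toward the screen center, and the eigen-decomposition approach are assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def best_focus_plane(points, weights):
    """Weighted least-squares plane through sampled screen points.

    points  : (N, 3) samples of the curved screen surface.
    weights : (N,) emphasis per sample (e.g. larger near the screen centre).
    Returns (normal, d) for the plane normal . x = d that minimizes
    sum_i w_i * dist(points_i, plane)^2.
    """
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centred = points - centroid
    cov = (w[:, None] * centred).T @ centred     # weighted covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]                       # direction of smallest weighted spread
    return normal, float(normal @ centroid)

# Example: a cylindrical screen section, with the centre weighted more heavily.
theta = np.linspace(-0.6, 0.6, 41)
screen = np.stack([np.sin(theta), np.zeros_like(theta), 2.0 - np.cos(theta)], axis=1)
wts = np.exp(-(theta / 0.3) ** 2)
print(best_focus_plane(screen, wts))
```

Increasing the weights near the center pulls the fitted plane toward the central portion of the screen, which is exactly the behavior described above for favoring center focus.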
It should be noted that this technique can be applied to screens curved in one dimension (e.g., cylindrical, with zero spatial curvature) or in two dimensions (e.g., spherical, with non-zero spatial curvature). In another exemplary embodiment, shown in Figure 20, the focus problem is addressed, in addition to the calibration already described, by having the multiple projectors project their images at different angles. As shown in the figure, the defocus problem can be substantially eliminated by having each projector illuminate a particular area of the curved screen 201 at a particular angle. The angles are chosen so that each projection axis is substantially normal to its corresponding portion of the screen, and each focus plane is nearly tangent to the center of its portion of the curved screen 201. To optimize the focus, the same technique shown in Figures 18 and 19 can be used; alternatively, each focus plane can be kept tangent to the screen at the center of its focus segment. In an exemplary embodiment, the calibration system matches the focus, geometry, brightness, and color of the overlapping regions of the multiple projections, thereby producing a smooth, seamless, and focused image on the screen 201. As a result of this technique, both the defocus and the angular deviation between the tangent planes become much less severe.

A system for multi-color geometric calibration using a sensing device has been discussed. The system can also be used to calibrate the color (non-geometric) distortion of the sensing device itself. Using a display device that has already been calibrated and corrected, fixed color patterns are displayed on the screen and recorded by the sensing device; the same patterns used to calibrate the color distortion of the display can be used. Knowing the displayed color values, a camera color map analogous to that of equation (25) can be obtained. From this color map, the color correction parameters for the camera can be determined; these will vary spatially if spatially varying color distortion is present. The model for this correction can be, for example, a linear least-squares fit. These parameters fully characterize the calibration data for the color distortion of the camera.

The color calibration so far has been presented in terms of primary colors and brightness. The system can also be adapted to handle correction and adjustment of arbitrary colors. Test patterns of various colors (not only primaries or gray levels) can be used to obtain a color map of the display in a manner similar to equation (31); this is shown in equation (52). Here each measurement gives a color vector with all of its components, not just a single primary, and the colors used can be selected as a resampling of the vectors in the full color space. The inverse map is then represented by equation (53), in which each color parameter is a vector of length K (the number of primaries). Following the previous notation, the color parameters can be collected into a single expression, as shown in equation (54). However, this is not merely a rewriting of the color parameters as a single equation, because the basis functions are now defined over the entire color space rather than over a one-dimensional color space (i.e., a single primary). In polynomial form, the basis functions are defined in equation (55) as products of powers of the color components. These parameters can be generalized further by introducing a tile structure of patches in the color space, each patch bounded by an interval in every color component, as shown in equation (56); another index can then be added to the color parameters, as shown in equation (57). This produces a general transformation of the color space at every spatial grid point, defined over the centers of the patches. The calibration color data is now defined by equation (58). In the absence of any distortion, the grid at every coordinate is the identity transformation. The warp generator can convert this data into a surface function of the form specified in equation (59), and the digital warping unit then evaluates this polynomial to compute the correction.

At every spatial coordinate there is thus a general color map that can correct any color at any coordinate. This includes performing any color correction, such as white-point adjustment, contrast adjustment, and hue adjustment, independently for different regions of the display. All such adjustments are specific functions on the color space and can be approximated by a function of the general form specified in equation (53). With the additional feature of patch division of the color space, the correction can be restricted to specific colors, leaving the others unchanged, by forcing the identity grid to be corrected only within certain color patches.
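The exact basis, tiling, and coefficient layout of equations (53) through (59) are not reproduced here. The sketch below only illustrates the general idea of a patch-wise multivariate polynomial acting on the full color vector; the tile layout, polynomial degree, coefficient ordering, and identity-map example are all assumptions made for this illustration.

```python
import numpy as np

def correct_color(rgb, coeffs, tiles_per_channel=2, degree=1):
    """Apply a tile-wise multivariate polynomial color correction to one pixel.

    rgb    : length-3 input color, components in [0, 1).
    coeffs : array of shape (T, T, T, 3, B): one coefficient set per color tile
             and per output primary, where B is the number of basis terms
             (here the 8 terms 1, b, g, gb, r, rb, rg, rgb for degree=1).
    """
    r, g, b = rgb
    # Which tile of the partitioned color space does this color fall in?
    tile = tuple(min(int(c * tiles_per_channel), tiles_per_channel - 1) for c in rgb)
    # Tensor-product polynomial basis over the full color vector.
    basis = np.array([r**i * g**j * b**k
                      for i in range(degree + 1)
                      for j in range(degree + 1)
                      for k in range(degree + 1)])
    out = coeffs[tile] @ basis                  # (3, B) @ (B,) -> corrected color
    return np.clip(out, 0.0, 1.0)

# Identity map as a quick check: only the linear term of each primary is non-zero.
B = 8
ident = np.zeros((2, 2, 2, 3, B))
ident[..., 0, 4] = 1.0   # r term: exponents (1,0,0) -> basis index 4
ident[..., 1, 2] = 1.0   # g term: exponents (0,1,0) -> basis index 2
ident[..., 2, 1] = 1.0   # b term: exponents (0,0,1) -> basis index 1
print(correct_color((0.25, 0.5, 0.75), ident))   # ~[0.25, 0.5, 0.75]
```

Restricting the correction to certain colors, as described above, corresponds to leaving the coefficients of all other tiles at their identity values.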
Patch-restricted correction of this kind includes selective hue calibration, in which particular hues are corrected while other hues are left untouched. Using the general color calibration and correction of the system, high color accuracy can be achieved in the display device. The system can also be used for custom color adjustments by providing custom color parameters, which can be computed externally to the system and input to the warp generator 13. Similarly, custom geometric effects (special effects) can be achieved by providing custom geometric grids (xi, yi) to the warp generator 13.

In another exemplary embodiment, partially shown in Figure 21, two cameras Cm1 and Cm2 are mounted on a projector 213. An input image is provided to the projector 213, which produces a corresponding projected image pattern on a viewing surface 211. The two cameras Cm1 and Cm2 are used to capture the projected image pattern on the viewing surface 211. The system further includes a processor (not shown, but described previously). The relative positions of the two cameras Cm1 and Cm2 are known to the processor, and the cameras can be offset relative to the projector 213 horizontally, vertically, or both. Based on a comparison of the two images captured by the cameras Cm1 and Cm2, the processor determines the distortion parameters, including the angle of the projector 213 relative to the viewing surface 211. The electronic correction unit (not shown, but described previously) then applies a warp transformation to the input image to correct these distortions, so that the resulting projected image is substantially free of distortion. This system and method can be used, for example, in a rear-projection television (RPTV), in which one or more cameras are mounted to the RPTV in fixed positions and orientations, as seen in the exemplary embodiment shown in Figure 22; the cameras can also be mounted in other ways. These cameras capture patterns projected onto the RPTV screen. The view of the RPTV screen from the camera's viewpoint may have some keystone distortion associated with it; however, with the calibration system forming part of the display device, the display can be made self-calibrating as discussed above.

In another exemplary embodiment, partially shown in Figure 23, several projectors P1 to P3 are used to project an image onto a curved screen 231, and several cameras Cm1 to Cm3 are used to capture the images projected by each of the projectors P1 to P3. The numbers of cameras Cm1 to Cm3 and of projectors P1 to P3 are arbitrary in this embodiment. In one case, each of the cameras Cm1 to Cm3 can be used to capture the images from all of the projectors P1 to P3. The cameras Cm1 to Cm3 can be offset horizontally and vertically relative to one another. Each projector P1 to P3 is adapted to project a known pattern or test image onto the curved screen 231 for calibration. Based on the images captured by the cameras Cm1 to Cm3, a processor (not shown, but described previously) can compute the distortion parameters, including the shape and relative orientation of the curved screen 231. These parameters can then be used by the processor to generate a warp transformation.
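The patent does not specify how the two-camera comparison of Figure 21 is carried out. One common way such a comparison could yield the screen orientation is stereo triangulation followed by a plane fit; the sketch below assumes rectified cameras with a known baseline and focal length, which is an assumption for illustration rather than the disclosed method.

```python
import numpy as np

def screen_orientation_from_stereo(pts_left, pts_right, baseline_m, focal_px):
    """Estimate the viewing-surface normal from two cameras' views of a pattern.

    pts_left, pts_right : (N, 2) pixel coordinates of matched pattern features
                          in the two (assumed rectified) cameras.
    baseline_m          : horizontal separation of the cameras in metres.
    focal_px            : focal length in pixels (assumed equal for both cameras).
    Assumes non-zero disparity for every feature.
    """
    disparity = pts_left[:, 0] - pts_right[:, 0]
    depth = focal_px * baseline_m / disparity            # Z per matched feature
    x = pts_left[:, 0] * depth / focal_px
    y = pts_left[:, 1] * depth / focal_px
    pts3d = np.stack([x, y, depth], axis=1)
    centroid = pts3d.mean(axis=0)
    _, _, vt = np.linalg.svd(pts3d - centroid)           # plane fit to the 3-D points
    normal = vt[-1]
    return normal if normal[2] > 0 else -normal
```

The angle between this normal and the projector axis gives the keystone angle that the warp transformation must pre-compensate.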
During normal use, the warp transformation generated for each projector is applied to the input images provided to that projector (P1 to P3); it pre-compensates for the display distortion experienced by that particular projector. In addition, the brightness of each of the projectors P1 to P3 can be analyzed so that the total brightness of the projected image on the viewing surface 231 is consistent. The processor can also align the pixels in the overlapping regions and distribute the intensities of the overlapping pixels among the different projectors so that the image quality is seamless. In an alternative embodiment of the system of Figure 23, brightness and color data are also captured by the cameras Cm1 to Cm3. These data are then used by the processor to match and blend the edges of adjacent images by adjusting the intensity of each pixel. The total brightness and color of all of the projectors P1 to P3 can also be normalized by the processor.

In another exemplary embodiment, partially shown in Figure 24, the sensing device (in this case a camera) is used to capture a projected image, projected with or without a pattern. The camera is also used to detect the shape, size, relative orientation, and boundaries of a viewing surface 241. The boundary edges may be the edges of a pull-down viewing surface (i.e., a retractable projector screen), the corners of a room, and so on. A processor (not shown, but described previously) can then analyze the directions of the edges in the captured image and the pattern of the test image to compute the characteristics of the viewing surface, such as its shape, size, boundaries, and relative orientation. From this computation the display distortions can be determined, based on the projected and subsequently captured images. Depending on the complexity of the pattern, the electronic correction unit (i.e., the processor) determines the corresponding distortion parameters. For a simple pattern, the electronic correction unit determines the projection angle relative to the viewing surface; with a more complex pattern, it can determine the shape of the viewing surface, such as a curved or irregular viewing surface. The electronic correction unit can also determine distortion parameters related to lens imperfections, such as pincushion or barrel distortion. Once the distortion parameters have been collected, the appropriate pre-compensating warp map is applied to the input image data to correct the distortions, so that the displayed image is visually free of distortion.

In an alternative embodiment, the system of Figure 24 is adapted to correct projection onto a flat surface without the presence of any physical markers or edges. The distortions arising from such a projection include both keystone and lens distortion. In this system, a camera is attached to the projector in a fixed orientation, and the calibration and correction are performed in a two-step procedure. In the first step, a complete calibration procedure using test image patterns is used to store images of the patterns as captured by the camera at known keystone angles and lens-distortion parameters, including zoom levels. Any additional data necessary for the correction can also be stored. This step can be performed in the factory where the projector is assembled and can be regarded as a factory calibration. The second step takes place in the field where the projector is used. The projector projects the same patterns used in the first step, and they are captured by the camera. The field-captured patterns are compared with the factory-captured patterns and the stored distortion parameters obtained at the factory, in order to determine the distortion parameters of the projector in the field. Once the distortion parameters in the field are known, the calibration warp can be retrieved if it has already been stored, or built in real time, to correct the projector's keystone and lens distortion. Because the comparison is made against previously stored data, no real markers (such as a screen frame) are needed. The data stored at the factory need not be complete images; they may be grid data, or other parameters that characterize the patterns at the different distortion levels.
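The field step described above reduces to picking, from the stored factory captures, the one that most resembles what the camera now sees, and reusing its stored parameters. The sketch below is illustrative only: the normalized cross-correlation score and the data layout (lists of images and parameter records) are assumptions, not the disclosed matching procedure.

```python
import numpy as np

def select_stored_correction(field_capture, factory_captures, factory_params):
    """Pick the stored distortion parameters whose factory capture best matches
    the pattern captured in the field.

    field_capture    : 2-D grayscale image captured in the field.
    factory_captures : list of 2-D images captured at the factory under known
                       keystone angles / lens settings (same resolution).
    factory_params   : list of the corresponding stored parameter records.
    """
    def normalise(img):
        img = img.astype(np.float64)
        return (img - img.mean()) / (img.std() + 1e-9)

    f = normalise(field_capture)
    # Normalised cross-correlation against every stored capture.
    scores = [float(np.mean(f * normalise(ref))) for ref in factory_captures]
    best = int(np.argmax(scores))
    return factory_params[best], scores[best]
```

The returned parameter record would then be used to retrieve, or build in real time, the warp that corrects the keystone and lens distortion.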
In another exemplary embodiment, the camera is used with a simple grid-type image pattern of only four points to correct keystone distortion alone. In this case the test pattern consists of a 2x2 grid; assuming there is no lens distortion, these four points are sufficient to determine the distortion. The four points can be placed anywhere; knowing their positions before and after projection is enough to determine the correction. This method can also accommodate any lens-shift adjustment, which amounts to a simple translation. For a projector that may have lens distortion, with the corresponding correction warps already stored, the four-point capture is first performed without keystone correction; the stored correction warp (for the appropriate zoom level and lens distortion) is then applied, and the calibration can be repeated using the four points for the keystone correction alone. The keystone correction can be chained with, or functionally combined with, the lens correction to obtain a final map that corrects all of the projector distortions. The lens correction needs to be computed and stored only once, during a factory calibration procedure; the keystone correction is then performed in the field using the camera and composed with the lens correction.

Another exemplary embodiment, partially shown in Figure 25, relates to projection onto a curved screen 251. To determine a map of the shape of, and distances to, the curved screen 251, a two-dimensional image pattern, for example a checkered pattern, is projected onto the viewing surface, and a camera is used to capture the projected image. The electronic correction unit (i.e., the processor, not shown but described previously) is then adapted to compute the contrast produced by each line of the checkered pattern. By repeatedly changing the focal distance, the best focus for each region of the screen is found as a function of distance; in this way a surface map of the curved screen is determined. The accuracy of the map depends on the complexity of the projected pattern and on the number of focal distances tried. The same technique also yields the angle of the camera, and hence of the projector, relative to the normal of the viewing surface at each point. The electronic correction unit computes the distortion parameters associated with the shape, distances, and angles of the viewing surface at each point, and then computes a warp transformation, or retrieves an appropriate one that has already been stored; when applied to the input image data, this produces an image that matches the viewing surface with no visible distortion.

Another exemplary embodiment, partially shown in Figure 26, relates to a wavy screen 261. The techniques described in connection with the preceding embodiments can also be used to determine the shape and relative orientation of the wavy viewing surface at each point. This example shows that any irregular viewing surface can be used with the display device.
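The focus-sweep mapping of Figures 25 and 26 can be summarized as a depth-from-focus procedure: for each region of the screen, record which focal distance produced the highest contrast. The sketch below is illustrative only; the region grid and the standard-deviation contrast measure are assumptions, not the disclosed implementation.

```python
import numpy as np

def depth_from_focus(captures, focal_distances, grid=(8, 8)):
    """Coarse surface map of a curved or wavy screen from a focus sweep.

    captures        : list of 2-D grayscale images of the projected pattern,
                      one per tried focal distance.
    focal_distances : matching list of focal distances (same length).
    grid            : number of regions (rows, cols) mapped independently.
    Returns a (rows, cols) array giving, for each region, the focal distance
    at which local contrast peaked: an estimate of that region's distance.
    """
    rows, cols = grid
    h, w = captures[0].shape
    stack = np.stack([c.astype(np.float64) for c in captures])   # (F, h, w)
    depth = np.zeros(grid)
    for r in range(rows):
        for c in range(cols):
            block = stack[:, r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            contrast = block.std(axis=(1, 2))     # simple per-region contrast
            depth[r, c] = focal_distances[int(np.argmax(contrast))]
    return depth
```

A finer grid or a more complex test pattern raises the accuracy of the map, at the cost of more captures, which matches the accuracy trade-off noted above.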
With a map of the viewing surface prepared in this way, the electronic correction unit (not shown, but described previously) can use the map to configure the warp transformation applied to the input image. Once this warp transformation is applied to the input image, the projected image has no visible distortion and matches the characteristics of the viewing surface.

While the above description provides various exemplary embodiments, it should be understood that certain features and/or functions of the described embodiments are open to modification without departing from the spirit and principles of operation of the described embodiments. Therefore, what has been described above is intended to be illustrative and not restrictive, and it will be understood by those skilled in the art that other variations and modifications can be made without departing from the scope of the embodiments as defined in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a diagram of an exemplary embodiment of an automated calibration and correction system;
Figures 2a and 2b are illustrations of curved-screen geometries;
Figure 3 is an illustration of examples of overflow, underflow, and mismatch in geometric distortion;
Figure 4 is an illustration of an example of a calibration image test pattern;
Figure 5 is an illustration of the correction geometry and the various coordinate spaces involved;
Figure 6 is an illustration of an exemplary embodiment of a calibration data generator;
Figure 7 is an illustration of the optimization of scale and origin;
Figure 8 is an illustration of an exemplary embodiment of a multiple color calibration data generator;
Figure 9 is an illustration of a setup for color non-uniformity calibration;
Figure 10 is an illustration of an exemplary embodiment of a calibration data generator for color non-uniformity correction;
Figure 11 is an illustration of an exemplary embodiment of a warp data generator;
Figure 12 is an illustration of patch division for display correction;
Figure 13 is an illustration of an exemplary embodiment of a digital warping unit;
Figure 14 is a schematic diagram of a setup for determining the shape and relative orientation of a viewing surface;
Figure 15 is an illustration of an out-of-focus test pattern;
Figure 16 is an illustration of an in-focus test pattern;
Figure 17 is a partial illustration of an exemplary embodiment of a calibration system consisting of multiple projectors and a curved screen;
Figure 18 is a partial illustration of the calibration system of Figure 17, consisting of multiple projectors and a curved screen, showing the focus planes of the different projectors;
Figure 19 is an illustration of an example of a focusing technique that minimizes a distance function;
Figure 20 is a partial illustration of another exemplary embodiment of a calibration system consisting of multiple projectors and a curved screen, in which the projector positions are adjusted to optimize image focus;
Figure 21 is a partial illustration of an exemplary embodiment of a calibration system using multiple cameras;
Figure 22 is a partial illustration of an exemplary embodiment of a rear-projection television (RPTV) with an integrated calibration system that makes the display self-calibrating and allows dynamic distortion correction;
Figure 23 is a partial illustration of an exemplary embodiment of a calibration system with multiple projectors and multiple sensing devices;
Figure 24 is a partial illustration of an exemplary embodiment of a calibration system that uses the physical edges and boundaries of the viewing surface;
Figure 25 is a partial illustration of an exemplary embodiment of a calibration system that uses a focusing technique to determine the shape of a curved display screen; and
Figure 26 is a partial illustration of an exemplary embodiment of a calibration system that uses a focusing technique to determine the shape of a wavy display screen.

DESCRIPTION OF THE MAIN REFERENCE NUMERALS

1-3 ... projectors; 11 ... sensing device; 12, 12' ... calibration data generators; 13 ... warp generator; 14 ... test image generator; 15 ... digital warping unit; 16 ... viewing surface; 91 ... viewing surface; 92 ... single-point color analyzer; 93 ... multi-point color analyzer; 141 ... viewing surface; 142 ... camera; 143 ... sensor; 151 ... viewing surface; 161 ... viewing surface; 171, 181 ... curved screens; 191 ... curved screen; 193 ... focus plane; 211 ... viewing surface; 213 ... projector; 231 ... curved screen; 241 ... viewing surface; 251 ... curved screen; 261 ... wavy screen

Claims (1)

1. A display calibration system for use with a display device having a viewing surface, the display calibration system comprising: at least one sensing device adapted to sense information relating to at least one of the shape, size, boundary, and orientation of the viewing surface; and at least one processor, coupled to the at least one sensing device, adapted to compute characteristics of the display device from the information sensed by the at least one sensing device.

2. The display calibration system of claim 1, wherein the at least one sensing device is further adapted to sense a test image displayed on the viewing surface, and wherein the at least one processor is further adapted to compute display distortions based on the sensed test image and the display device characteristics.

3. The display calibration system of claim 2, wherein the at least one processor is further adapted to generate pre-compensating maps based on the display distortions, such that when the pre-compensating maps are applied to input image data prior to display, the displayed image formed on the viewing surface is substantially free of distortion.

4. The display calibration system of claim 3, wherein the display distortions vary over time, and wherein the display calibration system is adapted to calibrate the display device dynamically so as to pre-compensate for the varying distortions.

5. The display calibration system of claim 3, wherein the at least one processor is adapted to correct at least one of: an overflow condition, in which a displayed image is larger than the viewing surface; an underflow condition, in which a displayed image is smaller than the viewing surface; and a mismatch condition, in which parts of the displayed image overflow the viewing surface and other parts underflow the viewing surface.

6. The display calibration system of claim 3, wherein the display device is a rear-projection display device having a housing, and wherein the display calibration system is disposed within the housing.

7. The display calibration system of claim 3, wherein the at least one sensing device is further adapted to sense at least one of brightness information and color information, and wherein the at least one processor is further adapted to pre-compensate for at least one of brightness non-uniformity and color non-uniformity, respectively.

8. The display calibration system of claim 3, wherein the display system further comprises optical components having additional distortions, and wherein the at least one processor is further adapted to chain the additional distortions with the display distortions so as to pre-compensate for both the additional distortions and the display distortions.

9. The display calibration system of claim 2, wherein the display distortions include at least one of geometric distortion, optical distortion, misconvergence, misalignment, and lateral chromatic aberration.

10. The display calibration system of claim 1, wherein the at least one sensing device is adapted to sense distances to a plurality of points on the viewing surface, and wherein the at least one processor is adapted to compute the relative position and relative orientation of the viewing surface based on the distances.

11. The display calibration system of claim 2, wherein the at least one sensing device is adapted to sense different portions of a test image on the viewing surface at various focal distances, and wherein the at least one processor is adapted to determine the highest contrast in the different portions of the test image and, based on the determined highest contrasts, to compute the distances to the different portions of the viewing surface, thereby computing the shape and relative orientation of the viewing surface.

12. The display calibration system of claim 2, wherein the at least one sensing device has sensor distortions, and wherein the at least one processor is further adapted to compute the sensor distortions and to take the sensor distortions into account when computing the display distortions.

13. The display calibration system of claim 12, wherein the sensor distortions are caused by at least one sensing device having an axis that is not parallel to the normal direction of the viewing surface.

14. The display calibration system of claim 2, wherein the at least one sensing device comprises a plurality of image sensing devices arranged at different positions known to the at least one processor, and wherein the at least one processor is adapted to compare the different sensed test images from the different sensing devices and to compute the display distortions based on the different sensed images and the positions of the different sensing devices.

15. The display calibration system of claim 2, wherein the at least one image sensing device is adapted to sense information about a test image having four markers on the viewing surface, and wherein the at least one processor is adapted to compute keystone distortion based on the sensed information.

16. The display calibration system of claim 3, wherein the at least one sensing device is further adapted to sense at least one of brightness information and color information, and wherein the at least one processor is further adapted to correct at least one of brightness non-uniformity and color non-uniformity caused by the pre-compensating maps.

17. A display calibration system for use with a display device having a viewing surface, the display calibration system comprising: at least one sensing device adapted to sense information from a test image displayed on the viewing surface; and at least one processor, coupled to the at least one sensing device, adapted to compute display distortions based on the sensed information and to generate pre-compensating maps to compensate for the display distortions, the pre-compensating maps being implemented as surface functions, such that when the pre-compensating maps are applied to input image data prior to display, a displayed image formed on the viewing surface is substantially free of distortion.

18. The display calibration system of claim 17, wherein the at least one processor is further adapted to chain various distortions and to generate surface functions that pre-compensate for the chained distortions.

19. The display calibration system of claim 17, wherein the surface functions are polynomials.

20. The display calibration system of claim 17, wherein the at least one processor is further adapted to adjust the surface functions to further compensate for at least one of an overscan condition and an underscan condition.

21. A display calibration system for use with a display device having a viewing surface, the display calibration system comprising: at least one image sensing device adapted to sense information from a test image displayed on the viewing surface; and at least one processor, coupled to the at least one image sensing device, adapted to compute display distortions based on the sensed information, to divide the viewing surface into patches according to the severity of the display distortion within each patch, and to generate pre-compensating maps for the display distortion within each patch, such that when the pre-compensating maps are applied to input image data prior to display, a displayed image formed on the viewing surface is substantially free of distortion.

22. A display calibration system for use with a display device having a viewing surface, the display calibration system comprising: at least one image sensing device adapted to independently sense color information relating to at least one color component of a test image displayed on the viewing surface; and at least one processor, coupled to the at least one image sensing device, adapted to compute color non-uniformity based on the sensed information and to generate at least one color correction map for the at least one color component, such that when the at least one color correction map is applied to input image data prior to display, a displayed image formed on the viewing surface is substantially free of the at least one color non-uniformity.

23. A display calibration system for use with a display device having a viewing surface, the display calibration system comprising: at least one image sensing device adapted to sense information from individual color component test images displayed on the viewing surface; and at least one processor, coupled to the at least one image sensing device and to the display device, adapted to independently compute geometric display distortions based on the sensed information and to independently generate at least one pre-compensating map for at least one color component, such that when the at least one color map is applied to input image data prior to display, a displayed image formed on the viewing surface is substantially free of at least one color-dependent geometric distortion.

24. A display calibration method for use in a projection system having a curved viewing surface, the method comprising the steps of: using multiple projectors to project different portions of an image onto corresponding portions of the curved viewing surface; and substantially focusing each portion of the image on the corresponding portion of the curved viewing surface, so that the image as a whole is formed on the curved viewing surface with optimized focus.

25. The method of claim 24, further comprising the step of independently positioning and orienting each of the multiple projectors so that a projection axis of each projector is substantially perpendicular to the corresponding portion of the curved viewing surface, thereby optimizing focus and minimizing geometric distortion.

26. A display calibration method for use in a projection system having a curved viewing surface, the method comprising the steps of: measuring a plurality of distances from the curved viewing surface to the focal plane of the projected image; and offsetting the focal plane until a function of the plurality of distances is minimized, thereby obtaining optimized focus.
TW96129642A 2006-08-11 2007-08-10 System and method for automated calibration and correction of display geometry and color TWI411967B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US83694006P 2006-08-11 2006-08-11
US91752507P 2007-05-11 2007-05-11

Publications (2)

Publication Number Publication Date
TW200818114A true TW200818114A (en) 2008-04-16
TWI411967B TWI411967B (en) 2013-10-11

Family

ID=39341859

Family Applications (2)

Application Number Title Priority Date Filing Date
TW102131831A TWI511122B (en) 2006-08-11 2007-08-10 Calibration method and system to correct for image distortion of a camera
TW96129642A TWI411967B (en) 2006-08-11 2007-08-10 System and method for automated calibration and correction of display geometry and color

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW102131831A TWI511122B (en) 2006-08-11 2007-08-10 Calibration method and system to correct for image distortion of a camera

Country Status (4)

Country Link
JP (2) JP5535431B2 (en)
KR (1) KR20080014712A (en)
CN (1) CN101136192B (en)
TW (2) TWI511122B (en)


Also Published As

Publication number Publication date
CN101136192A (en) 2008-03-05
TW201351391A (en) 2013-12-16
KR20080014712A (en) 2008-02-14
JP2008113416A (en) 2008-05-15
TWI511122B (en) 2015-12-01
CN101136192B (en) 2013-06-05
JP2014171234A (en) 2014-09-18
TWI411967B (en) 2013-10-11
JP5535431B2 (en) 2014-07-02
