TW200841702A - Adaptive image acquisition system and method - Google Patents

Adaptive image acquisition system and method

Info

Publication number
TW200841702A
TW200841702A
Authority
TW
Taiwan
Prior art keywords
output
pixel
content
image
output pixel
Prior art date
Application number
TW96112439A
Other languages
Chinese (zh)
Inventor
Charles Chia-Ming Chuang
Qing Guo
John Dick Gilbert
Original Assignee
Lighten Technologies N
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lighten Technologies N filed Critical Lighten Technologies N
Priority to TW96112439A priority Critical patent/TW200841702A/en
Publication of TW200841702A publication Critical patent/TW200841702A/en

Landscapes

  • Image Processing (AREA)

Abstract

A system and method for correcting optical distortion in an image acquisition system by scanning and mapping the image acquisition system and adjusting the content of the output pixels. The optical distortion correction can be performed either at the camera end or at the receiving display end.

Description

200841702 IX. Description of the Invention

[Technical Field]

The present invention relates to an image capture system, and particularly, but not exclusively, to a system and method for adapting an output image for a high-resolution still camera or a video camera.

[Prior Art]

Owing to the rapid development of high-resolution sensors based on Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) technology, digital still cameras and video recorders have become widely popular and affordable to the general public.

Sensor technology has advanced rapidly on the long-standing semiconductor trend of rising density and falling cost. The cost of digital cameras and video recorders, however, has not fallen correspondingly, mainly because the optical systems used in image capture systems have hit a bottleneck in both performance and cost. A typical variable-focus, variable-zoom optical system contains many lenses. As image resolution grows from the 656 horizontal lines of Closed-Circuit Television (CCTV) to the 2,500 horizontal lines and beyond of 10-megapixel digital cameras, and as pixel depth moves from 8 bits to 10 bits and on to 12 bits, the precision of the optical elements and of the optical system's assembly must improve and their optical distortion must be reduced. Optical technology has not developed as quickly as semiconductor technology: precision optical parts with tight tolerances, especially aspheric lenses, are very expensive to manufacture. Optical surfaces are currently specified to about 10 micrometers or tighter, and when multiple optical parts are assembled into an optical system, the tolerances accumulate. Even with great care during assembly, it is difficult under such tight tolerances to maintain focus, spherical aberration, centering, chromatic aberration, astigmatism, distortion, and color convergence. Thus, even as sensor cost falls, the cost of the optical system in image capture products continues to rise; clearly, a traditional, purely optical approach cannot solve this problem.

A lens with a very wide angle is also necessary: when someone wants to take a self-portrait with a mobile phone, he or she can stretch an arm only so far. High-resolution CCD or CMOS sensors are available and cost-effective. A high-resolution sensor combined with a very wide-angle lens system can cover the same surveillance target area as several ordinary low-resolution cameras. Replacing many low-resolution cameras with a few high-resolution ones is therefore more cost-effective in installation, operation, and maintenance. For a traditional, purely optical approach, however, designing and manufacturing wide-angle lenses is very difficult.

It is well known that as the field of view expands, the geometric distortion of the lens increases. As a general rule, the growth in geometric distortion goes as the seventh power of the field-of-view angle. This is why most digital cameras do not carry a wide-angle lens: available wide-angle lenses are either very expensive or heavily distorted, and the fish-eye lens is the best-known type of wide-angle lens.

As is well known in the art, a general formula approximating the geometric distortion of an optical system can serve as the basis for correction. Whether through warp-table generation or fixed real-time algorithms, lens distortion can be corrected to some degree; a general formula, however, cannot deliver consistent quality because of lens manufacturing tolerances. Nor can a general formula capture the unique optical distortion characteristics of each individual image capture system. Such formulas (parametric classes of warping functions, polynomial functions, or scaling functions) involve elaborate computation and require expensive hardware for real-time correction. An effective and economical new system and method is therefore needed to correct optical distortion in image capture systems.

SUMMARY OF THE INVENTION

The primary object of the present invention is to provide an image capture system with an adaptive device for correcting optical distortion, including real-time correction of geometry, brightness, and contrast variations.

Another object of the present invention is to provide an image capture system with an adaptive method for correcting optical distortion in real time.

A further object of the present invention is to provide a method of video content authentication based on the geometry, brightness, and contrast correction information secured during the adaptation process.

Embodiments of the present invention provide a system and method that modify video content at affordable cost to correct optical distortion in real time. The embodiments require no image frame buffer and introduce no frame delay between image frames. They run at the pixel clock rate, and for this reason can be described as pipelined: for every pixel in, one pixel comes out.

Embodiments of the present invention handle up-sampling and down-sampling equally well. They do not assume a uniform spatial distribution of the output pixels, and they use only one significant mathematical operation, division, avoiding the complex and expensive floating-point calculations used in conventional image adjustment systems.

In one embodiment of the present invention, the method comprises: placing a test target in front of the camera; capturing output pixel centroids for a plurality of output pixels; determining, from the plurality of output pixels, an adjacent output pixel of a first output pixel; determining, from the captured output pixel centroids and the adjacent output pixel, the overlap of the first output pixel over virtual pixels, the virtual pixels corresponding to an input video frame; determining the content of the first output pixel from the content of the overlapped virtual pixels; and outputting the determined content to a display device.

In one embodiment of the present invention, the system comprises an output pixel centroid engine; an adjacent output pixel engine communicatively coupled to the output pixel centroid engine; an output pixel overlap engine; and an output pixel content engine communicatively coupled to the output pixel overlap engine. The adjacent output pixel engine determines, from a plurality of output pixels, an adjacent output pixel of a first output pixel. The output pixel overlap engine determines, from the captured output pixel centroids and the adjacent output pixel, the overlap of the first output pixel over virtual pixels, the virtual pixels corresponding to the input video. The output pixel content engine determines the content of the first output pixel from the content of the overlapped virtual pixels and outputs the determined content to a video display device.

In another embodiment of the present invention, the method captures output pixel centroids for a plurality of output pixels, embeds the output pixel centroid data and the brightness and contrast uniformity data in the video stream, and transmits them to a video display device, which performs the pixel correction procedure at the display end. In a variation of the present invention, for a video display device that uses a similar adaptive method, the camera's pixel centroid data and brightness uniformity data can be combined with the pixel centroid data and brightness uniformity data of the display output device, so that only one set of hardware is needed to perform the operation.

[Embodiments]

The following description is provided to enable a person of ordinary skill in the art to practice the invention, in the context of a particular application and its requirements. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention; the invention is therefore not limited to the embodiments described. All equivalent changes and modifications of the shape, structure, features, and spirit described in the scope of the claims of this invention, whose resulting functions and effects do not exceed the spirit covered by the specification and drawings, shall fall within the patented scope of this invention. To give the examiner a further understanding of the structure and features of the invention and the effects it achieves, a detailed description with reference to the drawings and preferred embodiments follows.
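The per-pixel method summarized above (capture output pixel centroids, find a pixel's neighbors, compute its overlap over the virtual pixels, and derive its content with division as the only non-trivial operation) can be sketched in software. The sketch below is illustrative only: the patent describes a hardware pipeline, and the axis-aligned footprint model, the fixed-point `scale`, and the function name are assumptions rather than the patent's literal design.

```python
def output_pixel_content(corners, frame, scale=256):
    """Content of one output pixel as the overlap-weighted average of the
    virtual (input) pixels it covers.

    `corners` are the four corner positions of the output pixel in
    virtual-pixel units, given as fixed-point integers (value * scale) so
    that, as in the summary above, a single integer division is the only
    costly operation.  `frame[y][x]` is the input video frame intensity.
    """
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    # Axis-aligned bounding box of the output pixel (an assumption; the
    # real footprint may be a general quadrilateral).
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    total = 0
    weight = 0
    for vy in range(y0 // scale, (y1 + scale - 1) // scale):
        for vx in range(x0 // scale, (x1 + scale - 1) // scale):
            # Overlap of the output pixel with virtual pixel (vx, vy),
            # in fixed-point area units.
            ox = min(x1, (vx + 1) * scale) - max(x0, vx * scale)
            oy = min(y1, (vy + 1) * scale) - max(y0, vy * scale)
            if ox > 0 and oy > 0:
                total += ox * oy * frame[vy][vx]
                weight += ox * oy
    return total // weight if weight else 0
```

With `scale = 256` the corner coordinates are 8.8 fixed-point values, so all weighting stays in integer arithmetic and the closing integer division is the only expensive step, consistent with the summary's claim that no floating-point hardware is needed. The same loop handles up-sampling (one virtual pixel per output pixel or less) and down-sampling (several virtual pixels per output pixel) without change.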

Please refer to Figure 1, which is a block diagram of a conventional camera. Please refer to Figure 2A, a block diagram illustrating an adaptive image capture system 100 according to an embodiment of the present invention. Every image capture system has a sensor 130 for capturing images. Typical sensors are two-dimensional sensor arrays, such as the CCD or CMOS arrays common in digital cameras. Line scan cameras and image scanners use one-dimensional sensor arrays with optical lenses and likewise suffer optical distortion. Other image sensors, such as infrared, ultraviolet, or X-ray sensors, may capture what is invisible to the naked eye, but they too have their own optical lens systems and their own optical distortion; embodiments of the present invention are therefore helpful for these image sensors as well. In front of the sensor is an optical lens system 170, which collects the light rays emitted or reflected from the scene and focuses them correctly onto the sensor array 130. There is usually a camera control circuit 150 that changes the shutter speed or the iris opening to optimize the image capture. The output of the image sensor typically needs white balance correction, gamma correction, color processing, and various other operations to yield a clear rendition of the captured image. Image processing 140 is usually implemented as an Application-Specific Integrated Circuit (ASIC), but can also be performed by a microprocessor or a microcontroller with image processing capability. According to an embodiment of the present invention, an adaptive image processor 110 can additionally apply optical distortion, brightness, and contrast correction to the image before it is sent out. Because this image adaptation is fast enough for real-time, continuous image or video processing, image processing and video processing are used interchangeably in this application, as are image output and video output. A memory block 120 is communicatively coupled to the adaptive image processor 110 and stores the adaptation parameters for geometry and brightness correction. To reduce the amount of memory required, these parameters can be compressed first and decompressed by the adaptive image processor before use. Before being sent to the outside world, the processed image is packaged by an output formatter 160 into one of various output formats. For NTSC, a typical analog transmission standard, the processed image is first encoded into the appropriate analog format. For Ethernet, the processed image is compressed by MPEG-2, MPEG-4, JPEG-2000, or other commercial compression algorithms before being formatted into Ethernet packets, which are then further packetized according to the transmission protocol, for example wireless 802.11a, 802.11b, or 802.11g, or 100M wired Ethernet. The processed image can also be packetized for transmission over USB, Bluetooth, IEEE 1394, IrDA, HomePNA, HDMI, or other commercial video transmission standards. The video output from the image capture system is fed to a typical display device 190; before the image appears on the screen, the display device 190 formats it further for the specific display output device, such as a CRT, an LCD, or a projector.

[Camera Calibration]

A typical image capture system exhibits barrel distortion, as shown in Figure 10. By imaging a checkerboard pattern, the centroids of the crossings between black and white squares can be computed over the entire image space, and the brightness of each square can be measured. The resulting geometric and brightness/contrast distortion map is essentially the "fingerprint" of that particular image capture system. The fingerprint includes distortions arising from lens defects, assembly tolerances, differences in coatings on the substrate, differences in the passivation on the sensor, and other manufacturing and assembly errors. Because the degree of distortion through an optical system depends on the light wavelength, the distortion centroids can be collected three times, once for red, once for green, and once for blue, enabling correct adjustment of lateral color distortion.

A checkerboard-pattern test target 25 inches wide, as shown in Figure 4A, can be manufactured to high precision by photolithography; commercially, an accuracy of 10 micro-inches over a 25-inch total width is achievable, giving a dimensional accuracy of 0.00004%. For the 2,500-pixel linear dimension of a 10-megapixel camera, the checkerboard accuracy corresponds to 0.1% of a pixel. As shown in Figure 4C, the checkerboard test pattern need not be placed exactly perpendicular to the camera: the offset angles can be computed with high accuracy directly from the a/b sides and removed from the calibration error. This calibration procedure requires neither precise mechanical alignment nor movement of the target (the calibration plate). With typical cross-shaped or isolated-square fiducial patterns, camera calibration accuracy of about 1/4 to 0.1 pixel can be achieved.

In the checkerboard pattern, the crossings between black and white squares can be exploited for still higher accuracy. Please refer to Figure 9, which shows a severely defocused image, captured through a camera, of the crossing between two diagonally placed black squares 905 and 906, whose intersection is determined by calibration and a graphical method; the sensor array 900 is superimposed on the collected image. Line 901 lies on the right edge of square 905 and is determined either by computing the inflection point of the white-to-black transition or by linear extrapolation to the midpoint of the white-to-dark transition. Line 902 lies on the left edge of square 906. In a sharply focused optical system, lines 901 and 902 would coincide. The key feature of the checkerboard pattern is this: even with a flawed optical system, imperfect iris or focus optimization, or an optical axis imperfectly perpendicular to the calibration plate, the vertical transition line can still be computed precisely, as the line equidistant from and parallel to lines 901 and 902. By the same token, line 903 lies on the lower edge of square 905 and line 904 on the upper edge of square 906. When the square centroids are formed from lines 901, 902, 903, and 904, the edges of the two black squares 905 and 906 can be estimated very precisely. Camera calibration accuracy can thus reach 0.025 pixel or better. This level of accuracy is needed to characterize the optical distortion of the entire image capture system. Because the optical distortion is a smoothly varying function, a checkerboard with 40 to 100 squares per linear dimension is sufficient to characterize the distortion of a 10-megapixel camera with 2,500 pixels per linear dimension. Test patterns similar in shape to a checkerboard give similar results; a diamond-shaped checkerboard, for example, can also be used.

The checkerboard test target can be made on a Mylar film with black and transparent squares, using the same process as for printed circuit boards.
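The edge-finding step used with Figure 9, locating where the white-to-black intensity profile crosses the midpoint between the white and black levels and then placing the true transition midway between the two opposite edge estimates, can be illustrated with a short sketch. The 50%-threshold crossing by linear interpolation is an assumption standing in for the patent's inflection-point or linear-extrapolation estimator, and the function names are invented for illustration.

```python
def subpixel_edge(samples, white, black):
    """Sub-pixel position where a white-to-black intensity profile along a
    scan line crosses the midpoint between the white and black levels.
    `samples[i]` is the intensity at integer pixel position i."""
    mid = (white + black) / 2.0
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if a >= mid > b:  # this falling edge brackets the midpoint
            # Linear interpolation between the two bracketing samples.
            return i + (a - mid) / (a - b)
    raise ValueError("no white-to-black transition found")

def corner_from_edges(edge_a, edge_b):
    """Transition position from two opposite square edges, as with lines
    901/902 of Figure 9: even in a defocused image, the true transition
    lies midway between the two estimated edges."""
    return (edge_a + edge_b) / 2.0
```

Because the estimate interpolates between samples rather than snapping to a pixel, a smooth (even defocused) transition profile yields the fractional-pixel edge positions the calibration accuracy figures above depend on.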

ϋ 經权準之照明光源(illumination source )的前方,如第4B圖 所示’對於亮度及對比的校準,在棋盤測試圖案中每一個黑方 格及白方格之比色法(c〇1〇rmetry )係可利用精密的儀器而測 1出來的,這種儀器的實例如曰本柯尼卡美能達公司 Minolta Corporati〇n)所製造的 cs_1〇〇A 色度計(c〇i〇rime如), 典型的商用儀器測得的亮度公差可低至〇·2〇/。,而一經操取的 典型影像可顯示亮度的梯度,如第5 Β圖所示,當將從儀器所 得之亮度讀數(luminancereadings)加以比較,經過感測器之 亮度與對比失真圖㈣可被記錄下來,此為某—特定影像操取 系統在幾何之外之“指紋,,Uingerprint)或特徵。 【在視串流叙入特徵】 本發明之一較佳實施例係在視訊串流中嵌入特徵資料 (signature information)’並在顯示器末端執行調適性影像校 正,請參閱第2 Β ®,其係說明此較佳實施例之方塊圖。在此 實施例中,在該影像擷取系統中之可調適性影像處理器⑴會 在視訊串流中嵌人特徵,而在__顯示裝置191之—可調適性影 像處理器181則會執行光學失真校正。請參閱第,其係顯 示4像素視訊資料圖像之示意圖,其係具有紅、綠及藍之内 15 200841702 各且每個具有8 bits。而針對幾何及/或亮度而嵌入光學失 〃特徵之車乂佳實施例則如第7圖所示,兩種特徵都可以透過與 其相姊者之失真差異而描繪出來,此方法可縮小儲存的需求 量。由於嵌入光學失真特徵為亮度資料於末端之2—,目標 』不裝置就算無法執行光學失真校正,也會把該資料翻譯成非 常弱之視訊資料,而經嵌人的特徵在顯示裝置上係看不出來 的而對於可執行光學失真校正的顯示裝置而言,其會將視訊 (]轉換回去,就好像在幾何及亮度兩者方面實際上都沒有失真一 樣對於防濩應用(security application)上,這也是很重要 的,因為如果所有視訊影像都沒有失真的情況,對於執行目標 的辨識可更精準及快ϋ ;而%果㈣資料沒有經過校正而傳 达,就會很難再去竄改(tamper with ),因為幾何及亮度兩者 都會在顯示之前才改變,而對於預先經校正的資料之任何修正 會造成無法符合原本影像擷取裝置之特徵,並且會突顯出來。 ◎對於一 Sti11照相機而言,其整個光學特徵(optical signature) 必須在之前就被嵌入到每個圖片,或是預先被一次傳送作為該 特定照相機的特徵;而對於連續的視訊而言,其整體之光學特 徵則不必要一次被全部傳送,有很多種方式可以分解該特徵, 並經由多個視訊框來傳送,而且有很多方式可以去編碼 (encode )該光學特徵,以使得他們更難以被還原(代”以以)。 【在傳送之前的視訊壓縮】 16 200841702 習用的標準壓縮演算法可在偯 异友j隹得达之前使用,對於劣質的壓 縮(lossy compression ),則必須耍拄則t 貝罟特別小心以確保該光學特徵 在壓縮的過程中不會走樣。 【光學失真校正】 在幾何及亮度兩方面使用弁學胜外 、 文用尤子特破,該視訊輸出可利用下 列方法來加以校正。 Γ 〇 該影像處理器110之進一步說明如下。具體而言,該影像 處理器m係藉由使瑩幕上之輸出像素與相對應於原輸入視 訊框像素之虛擬像素相吻合,以映射(maps) 一原輸入視訊框 至一輸出視訊框。該影像處理器110係利用記憶體12〇以儲存 像素矩心資料及/或任何需要暫時儲存的運算。該影像處理器 110可以軟體或是電路系統來執行,例如:特殊應用積體電路 (Application Specific Integrated Circuit,ASIC ) 〇 器110之進-步詳細情形將詳述於後。該記憶體12〇係可包括 快閃記憶體(Flash memory)或其他記憶體格式。在本發明之 一實施例中,該系統HH)料包括複數個影像處理器ιι〇,每 一個影像處理器11〇用於-種顏色(紅、緣、藍)及/或其他 内谷(例如:亮度)’以同步運作來調適影像以輸出。 請參閱第8圖,其係說明第2 A圖中所述之影像處理器 ⑽之方塊圖。該影像處理器110包含有一輪出像素矩心引擎 幻〇、一相鄰輸出像素引擎220、一輸出像素重疊5丨擎23〇及 17 200841702 一輸出像素内Μ擎240°該輸出像素矩心引擎21〇讀取(read 〇m)该矩心位置至先進先出緩衝器模組(心“η⑽—, 阳❻)記憶體(例如:影像處理器内部或其他位置)相對應於 (Levant Iines)o#^„>€(lines);&〇^ 個額外的矩〜而要同時被儲存,因此,可額外減少所需要的記 憶體。 該相鄰輸出像素引擎22〇藉由在_s記憶體中對角相鄰 輸出像素的位置,就可決定哪個輸出像素係對角相鄰於該所要 
的輸出像素。該輸出像素重叠引擎23〇之進一步詳細情形將詳 述於後。該輸出像素重㈣擎23()決定哪個虛擬像素係被輸出 像素所重疊。該輸出像素内容引擎240之進-步詳細情形將詳 述於後。該輸出像素内容引擎⑽會根據該經重疊虛擬像素之 内容來決定輸出像素之内容(例如:顏色、亮度等等)。 Ο i’第1 0圖’其係說明—經校正之顯示範圍7⑽及幾 ㈣正之前之照相機的視訊顯示區310。在幾何校正前,利用 二廣角透鏡之照相機輸出通常會出現桶狀失真( distortions ),比起麵妒τ /么 、、、&正後,所佔用的顯示範圍較少。該經校 正後之觀看區73G (_吨⑽a)(此處也就是虛擬像素座標 方才口 virtual pixel咖)包含與一輸入視訊框(i叩加*⑶ frame )相符之一虑德 显擬像素乂乘7陣列(array)(例如:每一條 線具有X個虛擬像素及每一個框有y條線),該經校正後之觀 看區〇之虛擬像素係與輪出視訊框正好相符,在本發明之一 200841702 實施例中,該觀看區可具一 1 6 ·· g方位比對於1 2 8 〇 x 7 20像素,或4 : 3之方位比對於640x480像素。 在螢幕之光學失真顯示區310中,實際的輸出像素數量與 輸出解析度是相吻合的,而在觀看區73〇中,虛擬像素的數量 /、輸入解析度則是相吻合的;換言之,即吻合該輸入視訊框之 解析度,也就是說,虛擬像素與輸入視訊框之像素比相當於 1 · 1,然而,虛擬像素與輸出像素比則未必相當於1 : 1。 比方說,在該觀看區730之角位,每個輸出像素可能有數個虛 擬像素,並且在該觀看區730之中央,虛擬像素與輸出像素比 則可能會係相當於i ·· i (或更少)。此外,輸出像素之空間 位置(spatial location)及大小與虛擬像素係呈一非直線方式 U non_linear fashi〇n)之分別。在本發明之實施例中,藉由 映射實際輸出像素至虛擬像素,以使得該虛擬像素看起來就像 該輸入視訊一樣。接著,利用此映射方式來重新取樣 (resample)該輸入視訊,以致該輸出像素之顯示使得該虛擬 Q 像素看起來與該輸入視訊像素完全相同;換言之,使得該輸出 視訊框與輸入視訊框相符,以便能觀看相同的影像。 請參閱第1 1圖,其係說明映射輸出像素至影像區31〇之 一虛擬像素座標方格730 ( virtual pixel grid )之示意圖。在本 發明之實施例,輸出像素内容能用來製造出(create)該被觀 看之虛擬像素,該輸出像素之映射係以虛擬像素方式(或單位) 來表示,為達到此目的,該虛擬像素陣列73〇可被視為一概念 上的座標方格(conceptual grid),在此座標方格73〇中任何輸 出像素之位置可以水平及垂直座標的方式來表示。 19 200841702 出像辛Γ/的是’藉由找出在虛擬像素座標方格7财-輸 映==該映射之描述與相關大小之差異無關,而且該 ㈣。大V: 至任何準確的數量。例如:-第-輸出像 ' 第一輸出像素420的四倍大,該第一輸+ 410之映射描述可以係χ + 2·5 /輸出像素 描對應,同樣地,該輪出像素之映射 指遴則係為x + 1 2 · 5,y.2 · 5。 =該輸出像素矩心引擎21峻其他引擎相互通訊所需要 、斤有貝料’該資料可被儲存於該記憶體m中之查尋目錄(in _up-table f〇rm )格式或其他格式(例如:連妹 = 以作其他的處理。而調整影像所需要的所有其他 貝;·'、了從3亥視訊内容推知或取得,此將進—步詳細述於後。 ,眼看去’要找出在該虛擬座標方格中之輸出像素所需的 '料數量顯然頗大,例如,若該虛擬解析度為i 2 8 〇 x7 2 Ο 〇 ’大約需要2 4位元(bits)以全面追蹤每個輸出像素矩心。 然而,該機制(scheme )本身可輕易地提供重大的壓縮 _p:Ctlon)(例如:一種方法係可以在每個輸出線中全面 找出及第像素’接著利用遞增的改變來找出其他像素)。 在本發明之一實施例中,藉由成像裝置來執行決定像素矩 心的操作’係可為每個像素顏色提供-個別的準則(separate guide) ’此舉使得可在調整影像同時做橫向的顏色校正。 月多閱第12圖’其係說明從校準程序之輸入矩心示意 圖矩〜的操取係即時執行的—每一個矩心以一預先計算的格 式從外部儲存II中被操取m記M12Q。 20 200841702 就概念上而言,當藉由輸出像素矩心引擎21〇擷取矩心, 該引擎210即儲存該矩心於一組線緩衝暫存器(line buffer), 這些線緩衝暫存器也代表一連續的FIF〇(其在邊界情況下有特 別加插),每一個進來的矩心輸入第一 FIF〇的開端,並串流自 每個FIFO的未端重複執行至隨後一個的開端。 
該作為線緩衝暫存器的矩心FIF〇S2用途係使相鄰輸出像 素引擎220能以簡化相鄰矩心的位置來以決定角位(corner)的 ) 位置,由於該線被操作之前及之後的線緩衝器取消了額外“角 (位保留器,,(cornerholder) &件,角位矩心總係在相對於該被 操作矩心的同樣FIFO位置找到。 請參閱第1 3圖’其係說明一輸出像素角位計算法(c〇rner前方 The front of the illumination source is as shown in Figure 4B. 'For the calibration of brightness and contrast, the colorimetric method of each black square and white square in the checkerboard test pattern (c〇1) 〇rmetry ) can be measured using sophisticated instruments such as the cs_1〇〇A colorimeter manufactured by Sakamoto Konica Minolta Corporati〇n (c〇i〇rime) For example, typical commercial instruments measure brightness tolerances as low as 〇·2〇/. The typical image that can be manipulated can display the gradient of brightness. As shown in Figure 5, when the brightness readings (luminance readings) obtained from the instrument are compared, the brightness and contrast distortion map (4) of the sensor can be recorded. Down, this is a "fingerprint, Uingerprint" or feature outside the geometry of a particular image manipulation system. [In the Streaming Streaming Feature] A preferred embodiment of the present invention embeds features in a video stream. Signature information' and performing adaptive image correction at the end of the display, see section 2, which illustrates a block diagram of the preferred embodiment. In this embodiment, in the image capture system The adaptive image processor (1) embeds a feature in the video stream, and the adaptive image processor 181 performs an optical distortion correction on the __ display device 191. Please refer to the section, which displays 4 pixel video data. A schematic diagram of an image having red, green, and blue within 15 200841702 and each having 8 bits. A preferred embodiment of the embossed feature for embedding optical defects in geometry and/or brightness is As shown in Fig. 7, both features can be drawn by the distortion difference between the two, which can reduce the storage requirement. 
Since the embedded optical distortion feature is the brightness data at the end 2, the target is not installed. Even if optical distortion correction cannot be performed, the data will be translated into very weak video data, and the embedded features are not visible on the display device, but for display devices that can perform optical distortion correction, Converting the video (] back is as if there is virtually no distortion in both geometry and brightness. This is also important for security applications, because if all video images are not distorted, The identification of the execution target can be more precise and quicker; while the %(4) data is not corrected and it will be difficult to tamper with, because both geometry and brightness will change before the display, but for Any correction of the pre-corrected data will result in failure to conform to the characteristics of the original image capture device and will be highlighted. In a Sti11 camera, the entire optical signature must be embedded in each picture before, or transmitted in advance as a feature of that particular camera; for continuous video, the whole is Optical features do not have to be transmitted all at once, there are many ways to decompose the feature and transmit it through multiple video frames, and there are many ways to encode the optical features to make them more difficult to restore ( [Representation of video compression] [Video compression before transmission] 16 200841702 The standard compression algorithm can be used before the 友 友 隹 , , , , , , , , , , , , , , , , 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣 劣罟 Be especially careful to ensure that the optical features are not distorted during compression. [Optical Distortion Correction] In terms of geometry and brightness, the use of 弁 胜 胜 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,进一步 〇 Further description of the image processor 110 is as follows. 
Specifically, the image processor 110 maps an original input video frame to an output video frame by matching the output pixels on the screen with virtual pixels that correspond to the pixels of the original input video frame. The image processor 110 uses the memory 120 to store pixel centroid data and/or any intermediate results that need to be held temporarily. The image processor 110 can be implemented in software or in circuitry, for example as an application-specific integrated circuit (ASIC); details of the image processor 110 are described below. The memory 120 can comprise flash memory or another memory format. In one embodiment of the invention, the system 100 includes a plurality of image processors, each used for one color (red, green, blue) and/or other content (e.g., brightness), which adjust the image for synchronized output.

Refer to Figure 8, which is a block diagram of the image processor 110 described in Figure 2A. The image processor 110 comprises an output pixel centroid engine 210, an adjacent output pixel engine 220, an output pixel overlap engine 230, and an output pixel content engine 240. The output pixel centroid engine 210 reads the centroid positions from memory (e.g., inside the image processor or elsewhere) into FIFO buffers corresponding to the relevant lines; at most three lines of centroids are stored at a time, which further reduces the memory required. The adjacent output pixel engine 220 determines which output pixels are diagonally adjacent to the desired output pixel from the diagonally adjacent output pixel positions in the memory. The output pixel overlap engine 230 determines which virtual pixels are overlapped by the output pixel; further details are given below. The output pixel content engine 240 determines the content of the output pixel (e.g., color, brightness) from the content of the overlapped virtual pixels; further details are given below.

Figure 10 illustrates the distorted imaging area 310 and the corrected display area 730. Before geometric correction, the output of a camera using a wide-angle lens usually exhibits barrel distortion and occupies less display area than the corrected image. The corrected viewing zone 730 (here, in virtual pixel coordinates) contains one virtual pixel for each pixel of the input video frame, as an x-by-y array (for example, x virtual pixels per line and y lines per frame), so that the virtual pixels of the corrected viewing zone coincide exactly with the input video frame. In one embodiment of the invention, the viewing zone may be 1280x720 pixels with a 16:9 aspect ratio, or 640x480 pixels with a 4:3 aspect ratio. In the distorted imaging area 310 of the screen, the actual number of output pixels matches the output resolution, while in the viewing zone 730 the number of virtual pixels matches the input resolution. In other words, the ratio of virtual pixels to input video frame pixels is 1:1, but the ratio of virtual pixels to output pixels is not necessarily 1:1. For example, at a corner of the viewing zone 730 there may be several virtual pixels per output pixel, while at the center of the viewing zone 730 the virtual-pixel-to-output-pixel ratio may be 1:1 (or less). In addition, the spatial position and size of the output pixels differ from the virtual pixels in a non-linear fashion.

In one embodiment of the invention, the actual output pixels are mapped onto the virtual pixels, and the input video is then resampled through this mapping so that the displayed output makes the virtual pixels appear exactly the same as the input video pixels; in other words, the output video frame is matched to the input video frame so that the same image is seen. Refer to Figure 11, which is a schematic view of the mapping of the output pixels of the imaging area 310 onto the virtual pixel grid 730. The mapping of each output pixel is expressed in virtual pixel units; for this purpose, the virtual pixel array 730 can be viewed as a conceptual grid in which the position of any output pixel within the coordinate grid 730 can be expressed in horizontal and vertical coordinates. The mapping is described by finding the positional offset and the relative size of each output pixel in the virtual pixel coordinate grid, to any desired precision. For example, a first output pixel 410, four times larger than a second output pixel 420, may have the mapping description x+2.5, y+2.5, and likewise the output pixel 420 may have the mapping description x+12.5, y+2.5. The data of the output pixel centroid engine 210 that is required by the other engines can be stored in the memory 120 in look-up-table form or other formats for further processing; all other data needed to adjust the image is inferred or obtained from the content of the video, as described in detail below.

The amount of data required to describe the output pixels in the virtual coordinate grid is obviously considerable: for example, at a virtual resolution of 1280x720, approximately 24 bits are needed to fully track each output pixel centroid. However, the mechanism itself readily provides significant compression (for example, one method stores the full pixel position at the start of each output line and then describes the other pixels by incremental changes). In one embodiment of the invention, the operation of determining the pixel centroids can provide a separate map for each pixel color; this allows lateral color correction to be performed while the image is being adjusted.

Refer to Figure 12, which illustrates the input of the centroid map from the calibration procedure. Each centroid is read, in a pre-computed format, from external storage into the memory 120. Conceptually, as the centroid engine 210 fetches the centroids, it stores them in a set of line buffers; these line buffers are operated as a continuous FIFO (with special handling at the boundaries), with each incoming centroid entering the head of the first FIFO and the stream flowing from the tail of each FIFO to the head of the next.

Using the centroid FIFOs as line buffers enables the adjacent output pixel engine 220 to locate adjacent centroids easily in order to determine the corner positions: because line buffers exist for the lines before and after the line being operated on, no extra "corner holder" elements are needed, and the corner centroids are always found at the same FIFO positions relative to the centroid being operated on. Refer to Figure 13, which illustrates the output pixel corner calculation.
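The incremental-change compression of the centroid table mentioned above can be sketched as follows. This is a minimal illustration only; the patent does not fix an encoding, and the function names are assumptions. The first centroid of each output line is stored in full, and every later centroid as a small delta from its left neighbour:

```python
def delta_encode_row(centroids):
    """Delta-encode one row of (x, y) centroid positions.

    The first centroid is kept absolute; each later centroid is stored
    as the difference from its left neighbour, which needs far fewer
    bits than a ~24-bit absolute position."""
    if not centroids:
        return []
    encoded = [centroids[0]]
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        encoded.append((x1 - x0, y1 - y0))
    return encoded


def delta_decode_row(encoded):
    """Rebuild the absolute centroid positions from the delta stream."""
    if not encoded:
        return []
    decoded = [encoded[0]]
    for dx, dy in encoded[1:]:
        x, y = decoded[-1]
        decoded.append((x + dx, y + dy))
    return decoded
```

Because the assumptions below guarantee that neighbouring centroids differ only slightly, the deltas stay small and compress well.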

The adaptive image system and method of this embodiment rely on several assumptions:

• Between adjacent pixels, the size and shape of the output pixels change only slightly.
• Between adjacent pixels, the offset of an output pixel in the "x" or "y" direction is small.
• The size and content coverage of an output pixel can be adequately approximated by a quadrilateral.
• The estimated output quadrilaterals can abut one another.

These assumptions are usually accurate for a rear-projection television.

If the above assumptions hold, then as the content of each output pixel is prepared, the corner positions of the quadrilateral approximation of any output pixel (in virtual pixel coordinates) can be computed on the fly by the adjacent output pixel engine 220. This is done by placing a halfway point 610 at the center between the pixel's centroid and each of its diagonally adjacent output pixels (e.g., the output pixel 620). Once the corners are established, the output pixel overlap engine 230 establishes the overlap with the virtual pixels, creating a direct (identical) overlay of the input video.
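A minimal sketch of the corner calculation just described, assuming the centroids are held in a 2-D grid indexed by output row and column (the grid representation and function name are illustrative assumptions, not the patent's hardware layout):

```python
def quad_corners(centroids, r, c):
    """Approximate the four corners of output pixel (r, c).

    `centroids` is a 2-D grid of (x, y) output-pixel centroids in
    virtual-pixel coordinates. Each corner is taken as the halfway
    point between this pixel's centroid and the centroid of the
    corresponding diagonally adjacent output pixel."""
    x, y = centroids[r][c]
    corners = []
    for dr, dc in ((-1, -1), (-1, 1), (1, 1), (1, -1)):
        nx, ny = centroids[r + dr][c + dc]
        corners.append(((x + nx) / 2.0, (y + ny) / 2.0))
    return corners
```

In the hardware described above, the same diagonal neighbours are simply read from fixed positions in the centroid FIFOs rather than from a full grid.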
Note that in the above example the quadrilateral approximation of the output pixel covers several virtual pixels, but it may also be so small that it fits entirely within a single virtual pixel; for example, the output pixel 420 (see Figure 11) may lie entirely within one virtual pixel.

Note also that, to allow pipelined operation, the corner approximations for each upcoming output pixel can be computed in advance by the adjacent output pixel engine 220, one or more pixel clocks ahead.

Once the spatial relationship between the output pixels and the virtual pixels has been established, the content can be determined by the output pixel content engine 240 using well-known resampling techniques. Because the output pixel size/density varies across different positions of the viewing area 310, some regions are up-sampled while other regions are down-sampled.

This requires additional filtering (e.g., smoothing), and the amount of filtering required depends on the degree of optical distortion. The optical distortion also presents a unique opportunity to improve the resampling: for example, in some regions of the screen 730 the output pixels are sparser than the virtual pixels, while in other locations the relationship is reversed, so different resampling algorithms can be applied in different regions.

Because data is available describing, within each virtual pixel, the portion covered by an output pixel (the corner positions are known), the resampling algorithms used can include weighting by the fraction of each "virtual" pixel's area that is covered; this is described in detail below.

Refer to Figure 14, which illustrates the pixel sub-division overlap approximation. As mentioned above, one algorithm for determining content estimates the area that an output pixel covers on the relevant virtual pixels, and computes the output pixel's content value from the weight of each overlapped virtual pixel. However, computing the overlap percentages exactly in hardware demands substantial speed and processing power, which is incompatible with the low-cost hardware required in a projection television. To simplify the hardware, the output pixel overlap engine 230 instead determines the overlap through a finite sub-division of the virtual pixel coordinate grid.
For example, each virtual pixel may be divided into a 4-by-4 grid of sub-regions (or any other sub-division), and the area covered by an output pixel is estimated from the number of overlapped sub-regions.

The overlap computation of the output pixel overlap engine 230 can be simplified using several properties of the sub-division sampling:

• Within the largest rectangle bounded by the output pixel's quadrilateral approximation, all sub-division samples lie inside the overlap region.
• Outside the smallest rectangle bounding the output pixel's quadrilateral approximation, all sub-division samples lie outside the overlap region.
• For the samples between these two rectangles, a percentage of the total sample count is a valid approximation of the number inside the overlap region.

Next, the output pixel content engine 240 determines the content of the output pixel by multiplying the content of each virtual pixel by the number of its overlapped sub-regions, summing the results, and then dividing by the total number of overlapped sub-regions. The output pixel content engine 240 then outputs the determined content to the light engine, which displays it.

Refer to Figure 15, which illustrates a method 800 of adaptation for optical distortion. In one embodiment of the invention, the method 800 is executed by the image processor 110; in one embodiment, the image processor 110, or a plurality of image processors 110, executes multiple instances of the method 800 (e.g., one for each of the colors red, green, and blue).
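The count / multiply / sum / divide scheme above can be sketched in software as follows. This is an illustrative approximation only: a hardware version would use the rectangle shortcuts above rather than testing every sample, and the helper names are assumptions.

```python
def _inside_convex_quad(px, py, quad):
    """True if (px, py) lies inside the convex quadrilateral `quad`,
    whose corners are given in a consistent winding order."""
    sign = 0
    for i in range(4):
        x0, y0 = quad[i]
        x1, y1 = quad[(i + 1) % 4]
        cross = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True


def output_pixel_content(quad, virtual, n=4):
    """Weighted average of the virtual pixels covered by one output pixel.

    `virtual` maps (ix, iy) -> pixel value. Each candidate virtual pixel
    is split into an n-by-n grid of sample points; the number of samples
    falling inside `quad` weights that virtual pixel's contribution."""
    xs = [c[0] for c in quad]
    ys = [c[1] for c in quad]
    total = 0
    weighted = 0.0
    for ix in range(int(min(xs)), int(max(xs)) + 1):
        for iy in range(int(min(ys)), int(max(ys)) + 1):
            hits = 0
            for sx in range(n):
                for sy in range(n):
                    px = ix + (sx + 0.5) / n
                    py = iy + (sy + 0.5) / n
                    if _inside_convex_quad(px, py, quad):
                        hits += 1
            if hits:
                total += hits
                weighted += hits * virtual.get((ix, iy), 0)
    return weighted / total if total else 0.0
```

Note that the final step is exactly a sum of products followed by one division, which matches the hardware constraint of using only addition, multiplication, and division.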
First, the output pixel centroids are acquired (step 810) by reading them from memory into the FIFOs (e.g., at most three lines at a time). After the acquisition (step 810), the diagonally adjacent output pixels of the desired output pixel are determined (step 820) from the positions of the diagonally adjacent pixels in the FIFO memory. Next, the halfway points between the diagonally adjacent pixels and the desired pixel are determined (step 830). The overlap of the output pixel onto the virtual pixels is then determined (step 840), and from the overlap the output pixel content is determined (step 850). The determined output pixel content is then output to a light engine for projection onto a display. The method 800 repeats these steps for additional output pixels until the content of all output pixels has been determined. Note that this pixel remapping process is a single-pass process, and that it requires no data about the position of the optical axis.

[Concatenated Adaptive Algorithms for Projection Displays]

For flat panel displays using LCD or plasma technologies, the display itself introduces no geometric image distortion; for projection displays this is not the case. The projection optics magnify the image from the digital light modulator 50-100 times onto a typical 50" or 60" projection display, and they introduce errors in focus, spherical aberration, chromatic aberration, astigmatism, distortion, and color convergence, just as the optics of an image capture device do. Although the physical distortions differ, the centroid principle can still be applied. It is therefore feasible to concatenate the centroid maps so that image capture distortion and display distortion are corrected adaptively at the same time, in a single pass.
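A hedged sketch of concatenating two per-pixel correction maps, assuming both stages express their centroid offsets in a common virtual-pixel frame so that concatenation reduces to adding the offsets (the function name and list-of-lists layout are assumptions):

```python
def concatenate_offsets(display_map, capture_map):
    """Combine per-pixel (dx, dy) correction offsets from the display
    stage and the capture stage into a single precomputed map.

    Because both maps are expressed in the same virtual-pixel
    coordinate frame, the concatenated offset for each pixel is
    simply the sum of the two stage offsets."""
    return [
        [(dx1 + dx2, dy1 + dy2)
         for (dx1, dy1), (dx2, dy2) in zip(row1, row2)]
        for row1, row2 in zip(display_map, capture_map)
    ]
```

The combined map can be computed once, ahead of time, so the run-time remapping hardware is unchanged.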
Taking point 420 of Figure 16 as an example, the display geometry correction of [X+3.5, Y+1.5] and the image capture geometry correction of [X+2.5, Y+1.5] can be combined and concatenated into [X+6, Y+3], with the final centroid at point 430. The concatenated centroid map can be computed in advance; using the same notation, a concatenated brightness and contrast distortion correction map can also be computed.

The above descriptions of the embodiments of the present invention are provided by way of example only; in light of the above teachings, other variations and modifications of the foregoing embodiments and methods are possible. The components of the present invention may be implemented using a programmed general-purpose digital computer, application-specific integrated circuits, or a network of interconnected conventional components and circuits, where the connections may be wired, wireless, modem, and so forth. The embodiments described herein are not intended to be exhaustive or limiting; the present invention is limited only by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.

Figure 1 is a block diagram of a conventional video image capture system.
Figure 2A is a block diagram illustrating an adaptive image capture system according to an embodiment of the present invention.
Figure 2B is a block diagram illustrating an adaptive image capture system according to another embodiment of the present invention.

Figure 3A is a schematic view of an image obtained from a conventional image capture system.
Figure 3B is a schematic view of an image obtained from an adaptive image capture system with a wide-angle lens.
Figure 4A shows a checkerboard pattern in front of a light box, used for geometry and brightness calibration.
Figure 4B shows the relative position of the camera in the calibration procedure.
Figure 4C shows that, in a typical calibration setup, the checkerboard pattern is not perfectly perpendicular to the camera.
Figure 5A shows the barrel effect exhibited by a typical camera/lens system.
Figure 5B shows the brightness fall-off exhibited by a typical camera/lens system.
Figure 6 is a schematic view of a 4-pixel video data image with red, green, and blue content at 8 bits each.
Figure 7 is a schematic view of a 4-pixel video data image with red, green, and blue content at 8 bits each, plus an additional 2-bit plane for storing brightness and contrast correction data and geometric correction data.
Figure 8 is a block diagram illustrating an image processor.
Figure 9 shows a severely out-of-focus image of the checkerboard pattern, with a graphical method for determining the intersection of two diagonally placed black squares.
Figure 10 illustrates the distorted imaging area and the corrected, distortion-free display area.
Figure 11 is a schematic view of the mapping of output pixels onto the virtual pixel coordinate grid of an image.
Figure 12 is a schematic view of the input of centroids from the calibration procedure.
Figure 13 is a schematic view of the output pixel corner calculation.
Figure 14 is a schematic view of the pixel sub-division overlap approximation.
Figure 15 illustrates a method of adaptation for optical distortion.
Figure 16 illustrates the mapping of output pixels onto the virtual pixel coordinate grid of a display, and then onto the virtual pixel coordinate grid of the image capture device.

[Main Component Symbol Description]

100  Adaptive image acquisition system
110  Adaptive image processor
111  Adaptive image processor
120  Memory
121  Memory
130  Sensor
131  Sensor
140  Image processing
141  Image processing
150  Camera control circuit
151  Camera control circuit
160  Output formatter
161  Output formatter
170  Optical lens system
171  Optical lens system
181  Adaptive image processor
190  Display device
191  Display device
210  Output pixel centroid engine
220  Adjacent output pixel engine
230  Output pixel overlap engine
240  Output pixel content engine
310  Distorted imaging area
410  First output pixel
420  Second output pixel
610  Halfway point
620  Output pixel
730  Corrected display area
800  Method
810  Acquire output pixel centroids
820  Determine diagonally adjacent output pixels
830  Determine halfway point between centroid and adjacent output pixel
840  Determine overlay
850  Determine content
900  Sensor array
901  Line
902  Line
903  Line

904  Black square
905  Black square

Claims (1)

X. Claims:

1. A method of image capture, comprising: acquiring output pixel centroids for a plurality of output pixels; determining, from the plurality of output pixels, the output pixels adjacent to a first output pixel; determining, from the acquired output pixel centroids and the adjacent output pixels, an overlap of the first output pixel onto virtual pixels, wherein the virtual pixels correspond to an input image; determining the content of the first output pixel from the content of the overlapped virtual pixels; and outputting the determined content.

2. The method of claim 1, wherein the acquiring reads three rows of output pixel centroid data into a memory.

3. The method of claim 2, wherein determining the adjacent output pixels comprises determining diagonally adjacent output pixels.

4. The method of claim 3, wherein the diagonally adjacent output pixels are read from diagonally adjacent memory locations in the memory.

5. The method of claim 1, wherein determining the overlap comprises subdividing each virtual pixel into at least 2-by-2 sub-regions and determining, for each virtual pixel, the number of sub-regions overlapped by the output pixel.

6. The method of claim 1, wherein the content determination is performed for a single color.

7. The method of claim 6, wherein the content determination and the outputting are repeated for additional colors.

8. The method of claim 1, wherein the content determination uses addition and division.

9. The method of claim 1, wherein the method operates as a pipeline.

10. The method of claim 1, further comprising embedding the overlap in the determined content as brightness data.

11. The method of claim 1, wherein the outputting further comprises embedding a geometric or brightness optical distortion signature in the output.

12. A system for image capture, comprising: an output pixel centroid engine that acquires output pixel centroids for a plurality of output pixels; an adjacent output pixel engine, communicatively coupled to the output pixel centroid engine, that determines, from the plurality of output pixels, the output pixels adjacent to a first output pixel; an output pixel overlap engine, communicatively coupled to the adjacent output pixel engine, that determines, from the acquired output pixel centroids and the adjacent output pixels, an overlap of the first output pixel onto virtual pixels, wherein the virtual pixels correspond to an input video; and an output pixel content engine, communicatively coupled to the output pixel overlap engine, that determines the content of the first output pixel from the content of the overlapped virtual pixels and outputs the determined content.

13. The system of claim 12, wherein the output pixel centroid engine reads three rows of output pixel centroid data into a memory.

14. The system of claim 13, wherein the adjacent output pixel engine determines the adjacent output pixels by determining diagonally adjacent output pixels.

15. The system of claim 14, wherein the adjacent output pixel engine determines the diagonally adjacent output pixels by reading diagonally adjacent memory locations in the memory.

16. The system of claim 12, wherein the output pixel overlap engine determines the overlap by subdividing each virtual pixel into at least 2-by-2 sub-regions and determining, for each virtual pixel, the number of sub-regions overlapped.

17. The system of claim 12, wherein the output pixel content engine determines content for a single color.

18. The system of claim 17, wherein the output pixel content engine determines and outputs content for additional colors.

19. The system of claim 12, wherein the output pixel content engine uses only addition and division.

20. The system of claim 12, wherein the system is a pipeline system.

21. The system of claim 12, wherein the output pixel content engine embeds the overlap in the determined content as brightness data.

22. The system of claim 12, further comprising an adaptive image processor for embedding a geometric or brightness optical distortion signature in the output.

23. An image capture system, comprising: an acquiring means for acquiring output pixel centroids for a plurality of output pixels; a means for determining adjacent output pixels, which determines, from the plurality of output pixels, the output pixels adjacent to a first output pixel; a means for determining an overlap, which determines, from the acquired output pixel centroids and the adjacent output pixels, an overlap of the first output pixel onto virtual pixels, wherein the virtual pixels correspond to an input image; a means for determining content, which determines the content of the first output pixel from the content of the overlapped virtual pixels; and an output means for outputting the determined content.
TW96112439A 2007-04-10 2007-04-10 Adaptive image acquisition system and method TW200841702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW96112439A TW200841702A (en) 2007-04-10 2007-04-10 Adaptive image acquisition system and method

Publications (1)

Publication Number Publication Date
TW200841702A true TW200841702A (en) 2008-10-16

Family

ID=44821619

Country Status (1)

Country Link
TW (1) TW200841702A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI418211B (en) * 2010-07-02 2013-12-01 Ability Entpr Co Ltd Method of determining shift between two images
TWI424373B (en) * 2009-04-17 2014-01-21 Univ Nat Changhua Education Image processing device for determining object characteristics and method thereof
TWI464673B (en) * 2012-05-10 2014-12-11 Silicon Motion Inc Electronic apparatus and method for sending data from an electronic apparatus to a display device
TWI810950B (en) * 2022-05-25 2023-08-01 國立高雄科技大學 Correction method for 2d vision measurement with large fov
